
[DOCS] Add Downsampling docs (#88571)

This adds documentation for downsampling of time series indices.
David Kilfoyle · 3 years ago · commit cad87c4d5a

+ 7 - 0
docs/reference/data-streams/data-stream-apis.asciidoc

@@ -12,6 +12,11 @@ The following APIs are available for managing <<data-streams,data streams>>:
 * <<promote-data-stream-api>>
 * <<modify-data-streams-api>>
 
+The following API is available for <<tsds,time series data streams>>:
+
+* <<indices-downsample-data-stream>>
+
+
 For concepts and tutorials, see <<data-streams>>.
 
 include::{es-repo-dir}/indices/create-data-stream.asciidoc[]
@@ -27,3 +32,5 @@ include::{es-repo-dir}/indices/data-stream-stats.asciidoc[]
 include::{es-repo-dir}/data-streams/promote-data-stream-api.asciidoc[]
 
 include::{es-repo-dir}/data-streams/modify-data-streams-api.asciidoc[]
+
+include::{es-repo-dir}/indices/downsample-data-stream.asciidoc[]

+ 535 - 0
docs/reference/data-streams/downsampling-ilm.asciidoc

@@ -0,0 +1,535 @@
+[[downsampling-ilm]]
+=== Run downsampling with ILM
+++++
+<titleabbrev>Run downsampling with ILM</titleabbrev>
+++++
+
+preview::[]
+
+This simplified example shows how <<downsampling,downsampling>> works as part
+of an ILM policy to reduce the storage size of a sampled set of metrics. The
+example uses typical Kubernetes cluster monitoring data. To test out
+downsampling with ILM, follow these steps:
+
+. Check the <<downsampling-ilm-prereqs,prerequisites>>.
+. <<downsampling-ilm-policy>>.
+. <<downsampling-ilm-create-index-template>>.
+. <<downsampling-ilm-ingest-data>>.
+. <<downsampling-ilm-view-results>>.
+
+[discrete]
+[[downsampling-ilm-prereqs]]
+==== Prerequisites
+
+Refer to <<tsds-prereqs,time series data stream prerequisites>>.
+
+Before running this example you may want to try the
+<<downsampling-manual,Run downsampling manually>> example.
+
+[discrete]
+[[downsampling-ilm-policy]]
+==== Create an index lifecycle policy
+
+Create an ILM policy for your time series data. While not required, an ILM
+policy is recommended to automate the management of your time series data stream
+indices.
+
+To enable downsampling, add a <<ilm-downsample,Downsample action>> and set
+<<ilm-downsample-options,`fixed_interval`>> to the downsampling interval at
+which you want to aggregate the original time series data.
+
+In this example, an ILM policy is configured for the `hot` phase. The
+downsample action runs after the initial index rollover, which for
+demonstration purposes is set to occur after five minutes.
+
+[source,console]
+----
+PUT _ilm/policy/datastream_policy
+{
+  "policy": {
+    "phases": {
+      "hot": {
+        "actions": {
+          "rollover" : {
+            "max_age": "5m"
+          },
+          "downsample": {
+            "fixed_interval": "1h"
+          }
+        }
+      }
+    }
+  }
+}
+----
+
+[discrete]
+[[downsampling-ilm-create-index-template]]
+==== Create an index template
+
+This creates an index template for a basic data stream. The available parameters
+for an index template are described in detail in <<set-up-a-data-stream,Set up a
+time series data stream>>.
+
+For simplicity, in the time series mapping all `time_series_metric` parameters
+are set to type `gauge`, but the `counter` metric type may also be used. The
+`time_series_metric` values determine the kind of statistical representations
+that are used during downsampling.
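The statistics kept for a `gauge` metric can be sketched in a few lines of Python (an illustrative model only, not the downsample implementation): samples are grouped into `fixed_interval` buckets, and each bucket stores just the `min`, `max`, `sum`, and `value_count` of its samples.

```python
from collections import defaultdict

def downsample_gauge(samples, fixed_interval_s=3600):
    """Sketch of the aggregates stored for a `gauge` time_series_metric.

    samples: iterable of (epoch_seconds, value) pairs.
    Each fixed_interval bucket keeps only min, max, sum, and value_count.
    """
    buckets = defaultdict(list)
    for ts, value in samples:
        # Align the timestamp down to the start of its interval bucket.
        buckets[ts - ts % fixed_interval_s].append(value)
    return {
        start: {"min": min(v), "max": max(v),
                "sum": sum(v), "value_count": len(v)}
        for start, v in buckets.items()
    }
```

With a one-hour interval, two samples in the first hour collapse into a single aggregate object, which is why a downsampled index holds far fewer documents than the original.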
+
+The index template includes a set of static <<time-series-dimension,time series
+dimensions>>: `host`, `namespace`, `node`, and `pod`. The time series dimensions
+are not changed by the downsampling process.
+
+[source,console]
+----
+PUT _index_template/datastream_template
+{
+    "index_patterns": [
+        "datastream*"
+    ],
+    "data_stream": {},
+    "template": {
+        "settings": {
+            "index": {
+                "mode": "time_series",
+                "number_of_replicas": 0,
+                "number_of_shards": 2
+            },
+            "index.lifecycle.name": "datastream_policy"
+        },
+        "mappings": {
+            "properties": {
+                "@timestamp": {
+                    "type": "date"
+                },
+                "kubernetes": {
+                    "properties": {
+                        "container": {
+                            "properties": {
+                                "cpu": {
+                                    "properties": {
+                                        "usage": {
+                                            "properties": {
+                                                "core": {
+                                                    "properties": {
+                                                        "ns": {
+                                                            "type": "long"
+                                                        }
+                                                    }
+                                                },
+                                                "limit": {
+                                                    "properties": {
+                                                        "pct": {
+                                                            "type": "float"
+                                                        }
+                                                    }
+                                                },
+                                                "nanocores": {
+                                                    "type": "long",
+                                                    "time_series_metric": "gauge"
+                                                },
+                                                "node": {
+                                                    "properties": {
+                                                        "pct": {
+                                                            "type": "float"
+                                                        }
+                                                    }
+                                                }
+                                            }
+                                        }
+                                    }
+                                },
+                                "memory": {
+                                    "properties": {
+                                        "available": {
+                                            "properties": {
+                                                "bytes": {
+                                                    "type": "long",
+                                                    "time_series_metric": "gauge"
+                                                }
+                                            }
+                                        },
+                                        "majorpagefaults": {
+                                            "type": "long"
+                                        },
+                                        "pagefaults": {
+                                            "type": "long",
+                                            "time_series_metric": "gauge"
+                                        },
+                                        "rss": {
+                                            "properties": {
+                                                "bytes": {
+                                                    "type": "long",
+                                                    "time_series_metric": "gauge"
+                                                }
+                                            }
+                                        },
+                                        "usage": {
+                                            "properties": {
+                                                "bytes": {
+                                                    "type": "long",
+                                                    "time_series_metric": "gauge"
+                                                },
+                                                "limit": {
+                                                    "properties": {
+                                                        "pct": {
+                                                            "type": "float"
+                                                        }
+                                                    }
+                                                },
+                                                "node": {
+                                                    "properties": {
+                                                        "pct": {
+                                                            "type": "float"
+                                                        }
+                                                    }
+                                                }
+                                            }
+                                        },
+                                        "workingset": {
+                                            "properties": {
+                                                "bytes": {
+                                                    "type": "long",
+                                                    "time_series_metric": "gauge"
+                                                }
+                                            }
+                                        }
+                                    }
+                                },
+                                "name": {
+                                    "type": "keyword"
+                                },
+                                "start_time": {
+                                    "type": "date"
+                                }
+                            }
+                        },
+                        "host": {
+                            "type": "keyword",
+                            "time_series_dimension": true
+                        },
+                        "namespace": {
+                            "type": "keyword",
+                            "time_series_dimension": true
+                        },
+                        "node": {
+                            "type": "keyword",
+                            "time_series_dimension": true
+                        },
+                        "pod": {
+                            "type": "keyword",
+                            "time_series_dimension": true
+                        }
+                    }
+                }
+            }
+        }
+    }
+}
+----
+// TEST[continued]
+
+////
+[source,console]
+----
+DELETE _index_template/*
+----
+// TEST[continued]
+////
+
+[discrete]
+[[downsampling-ilm-ingest-data]]
+==== Ingest time series data
+
+Use a bulk API request to automatically create your TSDS and index a set of ten
+documents.
+
+**Important:** Before running this bulk request, update the timestamps so that
+they fall within three to five hours after your current time: search for
+`2022-06-21T15` and replace it with your current date, setting the hour to
+your current time plus three hours.
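If you'd rather not edit the timestamps by hand, a short Python helper (hypothetical, not part of this example) can rewrite the sample date-hour prefix to the current UTC time plus three hours while preserving minutes and seconds:

```python
from datetime import datetime, timedelta, timezone

def shift_prefix(doc_line, old_prefix="2022-06-21T15"):
    """Replace the sample date-hour prefix in a bulk request line with
    the current UTC date and hour plus three hours."""
    target = datetime.now(timezone.utc) + timedelta(hours=3)
    return doc_line.replace(old_prefix, target.strftime("%Y-%m-%dT%H"))
```

Apply it to each document line of the bulk body before sending the request.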
+
+[source,console]
+----
+PUT /datastream/_bulk?refresh
+{"create": {}}
+{"@timestamp":"2022-06-21T15:49:00Z","kubernetes":{"host":"gke-apps-0","node":"gke-apps-0-0","pod":"gke-apps-0-0-0","container":{"cpu":{"usage":{"nanocores":91153,"core":{"ns":12828317850},"node":{"pct":2.77905e-05},"limit":{"pct":2.77905e-05}}},"memory":{"available":{"bytes":463314616},"usage":{"bytes":307007078,"node":{"pct":0.01770037710617187},"limit":{"pct":9.923134671484496e-05}},"workingset":{"bytes":585236},"rss":{"bytes":102728},"pagefaults":120901,"majorpagefaults":0},"start_time":"2021-03-30T07:59:06Z","name":"container-name-44"},"namespace":"namespace26"}}
+{"create": {}}
+{"@timestamp":"2022-06-21T15:45:50Z","kubernetes":{"host":"gke-apps-0","node":"gke-apps-0-0","pod":"gke-apps-0-0-0","container":{"cpu":{"usage":{"nanocores":124501,"core":{"ns":12828317850},"node":{"pct":2.77905e-05},"limit":{"pct":2.77905e-05}}},"memory":{"available":{"bytes":982546514},"usage":{"bytes":360035574,"node":{"pct":0.01770037710617187},"limit":{"pct":9.923134671484496e-05}},"workingset":{"bytes":1339884},"rss":{"bytes":381174},"pagefaults":178473,"majorpagefaults":0},"start_time":"2021-03-30T07:59:06Z","name":"container-name-44"},"namespace":"namespace26"}}
+{"create": {}}
+{"@timestamp":"2022-06-21T15:44:50Z","kubernetes":{"host":"gke-apps-0","node":"gke-apps-0-0","pod":"gke-apps-0-0-0","container":{"cpu":{"usage":{"nanocores":38907,"core":{"ns":12828317850},"node":{"pct":2.77905e-05},"limit":{"pct":2.77905e-05}}},"memory":{"available":{"bytes":862723768},"usage":{"bytes":379572388,"node":{"pct":0.01770037710617187},"limit":{"pct":9.923134671484496e-05}},"workingset":{"bytes":431227},"rss":{"bytes":386580},"pagefaults":233166,"majorpagefaults":0},"start_time":"2021-03-30T07:59:06Z","name":"container-name-44"},"namespace":"namespace26"}}
+{"create": {}}
+{"@timestamp":"2022-06-21T15:44:40Z","kubernetes":{"host":"gke-apps-0","node":"gke-apps-0-0","pod":"gke-apps-0-0-0","container":{"cpu":{"usage":{"nanocores":86706,"core":{"ns":12828317850},"node":{"pct":2.77905e-05},"limit":{"pct":2.77905e-05}}},"memory":{"available":{"bytes":567160996},"usage":{"bytes":103266017,"node":{"pct":0.01770037710617187},"limit":{"pct":9.923134671484496e-05}},"workingset":{"bytes":1724908},"rss":{"bytes":105431},"pagefaults":233166,"majorpagefaults":0},"start_time":"2021-03-30T07:59:06Z","name":"container-name-44"},"namespace":"namespace26"}}
+{"create": {}}
+{"@timestamp":"2022-06-21T15:44:00Z","kubernetes":{"host":"gke-apps-0","node":"gke-apps-0-0","pod":"gke-apps-0-0-0","container":{"cpu":{"usage":{"nanocores":150069,"core":{"ns":12828317850},"node":{"pct":2.77905e-05},"limit":{"pct":2.77905e-05}}},"memory":{"available":{"bytes":639054643},"usage":{"bytes":265142477,"node":{"pct":0.01770037710617187},"limit":{"pct":9.923134671484496e-05}},"workingset":{"bytes":1786511},"rss":{"bytes":189235},"pagefaults":138172,"majorpagefaults":0},"start_time":"2021-03-30T07:59:06Z","name":"container-name-44"},"namespace":"namespace26"}}
+{"create": {}}
+{"@timestamp":"2022-06-21T15:42:40Z","kubernetes":{"host":"gke-apps-0","node":"gke-apps-0-0","pod":"gke-apps-0-0-0","container":{"cpu":{"usage":{"nanocores":82260,"core":{"ns":12828317850},"node":{"pct":2.77905e-05},"limit":{"pct":2.77905e-05}}},"memory":{"available":{"bytes":854735585},"usage":{"bytes":309798052,"node":{"pct":0.01770037710617187},"limit":{"pct":9.923134671484496e-05}},"workingset":{"bytes":924058},"rss":{"bytes":110838},"pagefaults":259073,"majorpagefaults":0},"start_time":"2021-03-30T07:59:06Z","name":"container-name-44"},"namespace":"namespace26"}}
+{"create": {}}
+{"@timestamp":"2022-06-21T15:42:10Z","kubernetes":{"host":"gke-apps-0","node":"gke-apps-0-0","pod":"gke-apps-0-0-0","container":{"cpu":{"usage":{"nanocores":153404,"core":{"ns":12828317850},"node":{"pct":2.77905e-05},"limit":{"pct":2.77905e-05}}},"memory":{"available":{"bytes":279586406},"usage":{"bytes":214904955,"node":{"pct":0.01770037710617187},"limit":{"pct":9.923134671484496e-05}},"workingset":{"bytes":1047265},"rss":{"bytes":91914},"pagefaults":302252,"majorpagefaults":0},"start_time":"2021-03-30T07:59:06Z","name":"container-name-44"},"namespace":"namespace26"}}
+{"create": {}}
+{"@timestamp":"2022-06-21T15:40:20Z","kubernetes":{"host":"gke-apps-0","node":"gke-apps-0-0","pod":"gke-apps-0-0-0","container":{"cpu":{"usage":{"nanocores":125613,"core":{"ns":12828317850},"node":{"pct":2.77905e-05},"limit":{"pct":2.77905e-05}}},"memory":{"available":{"bytes":822782853},"usage":{"bytes":100475044,"node":{"pct":0.01770037710617187},"limit":{"pct":9.923134671484496e-05}},"workingset":{"bytes":2109932},"rss":{"bytes":278446},"pagefaults":74843,"majorpagefaults":0},"start_time":"2021-03-30T07:59:06Z","name":"container-name-44"},"namespace":"namespace26"}}
+{"create": {}}
+{"@timestamp":"2022-06-21T15:40:10Z","kubernetes":{"host":"gke-apps-0","node":"gke-apps-0-0","pod":"gke-apps-0-0-0","container":{"cpu":{"usage":{"nanocores":100046,"core":{"ns":12828317850},"node":{"pct":2.77905e-05},"limit":{"pct":2.77905e-05}}},"memory":{"available":{"bytes":567160996},"usage":{"bytes":362826547,"node":{"pct":0.01770037710617187},"limit":{"pct":9.923134671484496e-05}},"workingset":{"bytes":1986724},"rss":{"bytes":402801},"pagefaults":296495,"majorpagefaults":0},"start_time":"2021-03-30T07:59:06Z","name":"container-name-44"},"namespace":"namespace26"}}
+{"create": {}}
+{"@timestamp":"2022-06-21T15:38:30Z","kubernetes":{"host":"gke-apps-0","node":"gke-apps-0-0","pod":"gke-apps-0-0-0","container":{"cpu":{"usage":{"nanocores":40018,"core":{"ns":12828317850},"node":{"pct":2.77905e-05},"limit":{"pct":2.77905e-05}}},"memory":{"available":{"bytes":1062428344},"usage":{"bytes":265142477,"node":{"pct":0.01770037710617187},"limit":{"pct":9.923134671484496e-05}},"workingset":{"bytes":2294743},"rss":{"bytes":340623},"pagefaults":224530,"majorpagefaults":0},"start_time":"2021-03-30T07:59:06Z","name":"container-name-44"},"namespace":"namespace26"}}
+
+----
+// TEST[skip: The @timestamp value won't match an accepted range in the TSDS]
+
+[discrete]
+[[downsampling-ilm-view-results]]
+==== View the results
+
+Now that you've created and added documents to the data stream, check to confirm
+the current state of the new index.
+
+[source,console]
+----
+GET _data_stream
+----
+// TEST[skip: The @timestamp value won't match an accepted range in the TSDS]
+
+If the ILM policy has not yet been applied, your results will look like the
+following. Note the original `index_name`: `.ds-datastream-<timestamp>-000001`.
+
+```
+{
+  "data_streams": [
+    {
+      "name": "datastream",
+      "timestamp_field": {
+        "name": "@timestamp"
+      },
+      "indices": [
+        {
+          "index_name": ".ds-datastream-2022.08.26-000001",
+          "index_uuid": "5g-3HrfETga-5EFKBM6R-w"
+        },
+        {
+          "index_name": ".ds-datastream-2022.08.26-000002",
+          "index_uuid": "o0yRTdhWSo2pY8XMvfwy7Q"
+        }
+      ],
+      "generation": 2,
+      "status": "GREEN",
+      "template": "datastream_template",
+      "ilm_policy": "datastream_policy",
+      "hidden": false,
+      "system": false,
+      "allow_custom_routing": false,
+      "replicated": false,
+      "time_series": {
+        "temporal_ranges": [
+          {
+            "start": "2022-08-26T13:29:07.000Z",
+            "end": "2022-08-26T19:29:07.000Z"
+          }
+        ]
+      }
+    }
+  ]
+}
+```
+
+Next, run a search query:
+
+[source,console]
+----
+GET datastream/_search
+----
+// TEST[skip: The @timestamp value won't match an accepted range in the TSDS]
+
+The query returns your ten newly added documents.
+
+```
+{
+  "took": 17,
+  "timed_out": false,
+  "_shards": {
+    "total": 4,
+    "successful": 4,
+    "skipped": 0,
+    "failed": 0
+  },
+  "hits": {
+    "total": {
+      "value": 10,
+      "relation": "eq"
+    },
+...
+```
+
+By default, index lifecycle management checks every ten minutes for indices
+that meet policy criteria. Wait about ten minutes (maybe brew a quick cup of
+coffee or tea &#9749;) and then re-run the `GET _data_stream` request.
+
+[source,console]
+----
+GET _data_stream
+----
+// TEST[skip: The @timestamp value won't match an accepted range in the TSDS]
+
+After the ILM policy has taken effect, the original
+`.ds-datastream-2022.08.26-000001` index is replaced with a new, downsampled
+index, in this case `downsample-6tkn-.ds-datastream-2022.08.26-000001`.
+
+```
+{
+  "data_streams": [
+    {
+      "name": "datastream",
+      "timestamp_field": {
+        "name": "@timestamp"
+      },
+      "indices": [
+        {
+          "index_name": "downsample-6tkn-.ds-datastream-2022.08.26-000001",
+          "index_uuid": "qRane1fQQDCNgKQhXmTIvg"
+        },
+        {
+          "index_name": ".ds-datastream-2022.08.26-000002",
+          "index_uuid": "o0yRTdhWSo2pY8XMvfwy7Q"
+        }
+      ],
+...
+```
+
+Run a search query on the data stream:
+
+[source,console]
+----
+GET datastream/_search
+----
+// TEST[skip: The @timestamp value won't match an accepted range in the TSDS]
+
+The new downsampled index contains just one document, which includes the
+`min`, `max`, `sum`, and `value_count` statistics based on the original
+sampled metrics.
+
+```
+{
+  "took": 6,
+  "timed_out": false,
+  "_shards": {
+    "total": 4,
+    "successful": 4,
+    "skipped": 0,
+    "failed": 0
+  },
+  "hits": {
+    "total": {
+      "value": 1,
+      "relation": "eq"
+    },
+    "max_score": 1,
+    "hits": [
+      {
+        "_index": "downsample-6tkn-.ds-datastream-2022.08.26-000001",
+        "_id": "0eL0wC_4-45SnTNFAAABgtpz0wA",
+        "_score": 1,
+        "_source": {
+          "@timestamp": "2022-08-26T14:00:00.000Z",
+          "_doc_count": 10,
+          "kubernetes.host": "gke-apps-0",
+          "kubernetes.namespace": "namespace26",
+          "kubernetes.node": "gke-apps-0-0",
+          "kubernetes.pod": "gke-apps-0-0-0",
+          "kubernetes.container.cpu.usage.nanocores": {
+            "min": 38907,
+            "max": 153404,
+            "sum": 992677,
+            "value_count": 10
+          },
+          "kubernetes.container.memory.available.bytes": {
+            "min": 279586406,
+            "max": 1062428344,
+            "sum": 7101494721,
+            "value_count": 10
+          },
+          "kubernetes.container.memory.pagefaults": {
+            "min": 74843,
+            "max": 302252,
+            "sum": 2061071,
+            "value_count": 10
+          },
+          "kubernetes.container.memory.rss.bytes": {
+            "min": 91914,
+            "max": 402801,
+            "sum": 2389770,
+            "value_count": 10
+          },
+          "kubernetes.container.memory.usage.bytes": {
+            "min": 100475044,
+            "max": 379572388,
+            "sum": 2668170609,
+            "value_count": 10
+          },
+          "kubernetes.container.memory.workingset.bytes": {
+            "min": 431227,
+            "max": 2294743,
+            "sum": 14230488,
+            "value_count": 10
+          },
+          "kubernetes.container.cpu.usage.core.ns": 12828317850,
+          "kubernetes.container.cpu.usage.limit.pct": 0.000027790500098490156,
+          "kubernetes.container.cpu.usage.node.pct": 0.000027790500098490156,
+          "kubernetes.container.memory.majorpagefaults": 0,
+          "kubernetes.container.memory.usage.limit.pct": 0.00009923134348355234,
+          "kubernetes.container.memory.usage.node.pct": 0.017700377851724625,
+          "kubernetes.container.name": "container-name-44",
+          "kubernetes.container.start_time": "2021-03-30T07:59:06.000Z"
+        }
+      }
+    ]
+  }
+}
+```
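You can cross-check these aggregates against the ten `nanocores` values in the bulk request above; a quick sanity check in Python (not part of the example) reproduces the `kubernetes.container.cpu.usage.nanocores` object exactly:

```python
# The ten nanocores values from the ingested documents above.
nanocores = [91153, 124501, 38907, 86706, 150069,
             82260, 153404, 125613, 100046, 40018]

stats = {"min": min(nanocores), "max": max(nanocores),
         "sum": sum(nanocores), "value_count": len(nanocores)}
print(stats)
# {'min': 38907, 'max': 153404, 'sum': 992677, 'value_count': 10}
```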
+
+Use the <<data-stream-stats-api,data stream stats API>> to get statistics for
+the data stream, including the storage size.
+
+[source,console]
+----
+GET /_data_stream/datastream/_stats?human=true
+----
+// TEST[skip: The @timestamp value won't match an accepted range in the TSDS]
+
+```
+{
+  "_shards": {
+    "total": 4,
+    "successful": 4,
+    "failed": 0
+  },
+  "data_stream_count": 1,
+  "backing_indices": 2,
+  "total_store_size": "16.6kb",
+  "total_store_size_bytes": 17059,
+  "data_streams": [
+    {
+      "data_stream": "datastream",
+      "backing_indices": 2,
+      "store_size": "16.6kb",
+      "store_size_bytes": 17059,
+      "maximum_timestamp": 1661522400000
+    }
+  ]
+}
+```
+
+This example demonstrates how downsampling works as part of an ILM policy to
+reduce the storage size of metrics data as it becomes less current and less
+frequently queried.
+
+You can also try our <<downsampling-manual,Run downsampling manually>>
+example to learn how downsampling can work outside of an ILM policy.
+
+////
+[source,console]
+----
+DELETE _data_stream/*
+DELETE _index_template/*
+DELETE _ilm/policy/datastream_policy
+----
+// TEST[continued]
+////

+ 467 - 0
docs/reference/data-streams/downsampling-manual.asciidoc

@@ -0,0 +1,467 @@
+[[downsampling-manual]]
+=== Run downsampling manually
+++++
+<titleabbrev>Run downsampling manually</titleabbrev>
+++++
+
+preview::[]
+
+This simplified example shows how <<downsampling,downsampling>> works to
+reduce the storage size of a time series index. The example uses typical
+Kubernetes cluster monitoring data. To test out downsampling, follow these
+steps:
+
+. Check the <<downsampling-manual-prereqs,prerequisites>>.
+. <<downsampling-manual-create-index>>.
+. <<downsampling-manual-ingest-data>>.
+. <<downsampling-manual-run>>.
+. <<downsampling-manual-view-results>>.
+
+[discrete]
+[[downsampling-manual-prereqs]]
+==== Prerequisites
+
+Refer to <<tsds-prereqs,time series data stream prerequisites>>.
+
+For this example you need a sample data file. Download the
+link:https://static-www.elastic.co/v3/assets/bltefdd0b53724fa2ce/bltf2fe7a300c3c59f7/631b4bc5cc56115de2f58e8c/sample-k8s-metrics.json[sample data file]
+and save it in the local directory where you're running {es}.
+
+[discrete]
+[[downsampling-manual-create-index]]
+==== Create a time series index
+
+This creates a basic time series index. The available index parameters are
+described in detail in <<set-up-a-data-stream,Set up a time series data
+stream>>.
+
+The time series boundaries are set so that sampling data for the index begins at
+`2022-06-10T00:00:00Z` and ends at `2022-06-30T23:59:59Z`.
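Documents whose `@timestamp` falls outside these boundaries are rejected at index time. The boundary check can be sketched in Python (illustrative only, assuming the `start_time` and `end_time` values set below):

```python
from datetime import datetime, timezone

# The index.time_series boundaries used in this example.
START = datetime(2022, 6, 10, 0, 0, 0, tzinfo=timezone.utc)
END = datetime(2022, 6, 30, 23, 59, 59, tzinfo=timezone.utc)

def in_bounds(timestamp_iso):
    """True if a document @timestamp falls inside the index's
    time_series start_time/end_time boundaries."""
    ts = datetime.fromisoformat(timestamp_iso.replace("Z", "+00:00"))
    return START <= ts <= END
```

All timestamps in the sample data file fall inside this window, so every document in the bulk load is accepted.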
+
+For simplicity, in the time series mapping all `time_series_metric` parameters
+are set to type `gauge`, but <<time-series-metric,other values>> such as
+`counter` and `histogram` may also be used. The `time_series_metric` values
+determine the kind of statistical representations that are used during
+downsampling.
+
+The index mapping includes a set of static <<time-series-dimension,time series
+dimensions>>: `host`, `namespace`, `node`, and `pod`. The time series
+dimensions are not changed by the downsampling process.
+
+[source,console]
+----
+PUT /sample-01
+{
+    "settings": {
+        "index": {
+            "mode": "time_series",
+            "time_series": {
+                "start_time": "2022-06-10T00:00:00Z",
+                "end_time": "2022-06-30T23:59:59Z"
+            },
+            "routing_path": [
+                "kubernetes.namespace",
+                "kubernetes.host",
+                "kubernetes.node",
+                "kubernetes.pod"
+            ],
+            "number_of_replicas": 0,
+            "number_of_shards": 2
+        }
+    },
+    "mappings": {
+        "properties": {
+            "@timestamp": {
+                "type": "date"
+            },
+            "kubernetes": {
+                "properties": {
+                    "container": {
+                        "properties": {
+                            "cpu": {
+                                "properties": {
+                                    "usage": {
+                                        "properties": {
+                                            "core": {
+                                                "properties": {
+                                                    "ns": {
+                                                        "type": "long"
+                                                    }
+                                                }
+                                            },
+                                            "limit": {
+                                                "properties": {
+                                                    "pct": {
+                                                        "type": "float"
+                                                    }
+                                                }
+                                            },
+                                            "nanocores": {
+                                                "type": "long",
+                                                "time_series_metric": "gauge"
+                                            },
+                                            "node": {
+                                                "properties": {
+                                                    "pct": {
+                                                        "type": "float"
+                                                    }
+                                                }
+                                            }
+                                        }
+                                    }
+                                }
+                            },
+                            "memory": {
+                                "properties": {
+                                    "available": {
+                                        "properties": {
+                                            "bytes": {
+                                                "type": "long",
+                                                "time_series_metric": "gauge"
+                                            }
+                                        }
+                                    },
+                                    "majorpagefaults": {
+                                        "type": "long"
+                                    },
+                                    "pagefaults": {
+                                        "type": "long",
+                                        "time_series_metric": "gauge"
+                                    },
+                                    "rss": {
+                                        "properties": {
+                                            "bytes": {
+                                                "type": "long",
+                                                "time_series_metric": "gauge"
+                                            }
+                                        }
+                                    },
+                                    "usage": {
+                                        "properties": {
+                                            "bytes": {
+                                                "type": "long",
+                                                "time_series_metric": "gauge"
+                                            },
+                                            "limit": {
+                                                "properties": {
+                                                    "pct": {
+                                                        "type": "float"
+                                                    }
+                                                }
+                                            },
+                                            "node": {
+                                                "properties": {
+                                                    "pct": {
+                                                        "type": "float"
+                                                    }
+                                                }
+                                            }
+                                        }
+                                    },
+                                    "workingset": {
+                                        "properties": {
+                                            "bytes": {
+                                                "type": "long",
+                                                "time_series_metric": "gauge"
+                                            }
+                                        }
+                                    }
+                                }
+                            },
+                            "name": {
+                                "type": "keyword"
+                            },
+                            "start_time": {
+                                "type": "date"
+                            }
+                        }
+                    },
+                    "host": {
+                        "type": "keyword",
+                        "time_series_dimension": true
+                    },
+                    "namespace": {
+                        "type": "keyword",
+                        "time_series_dimension": true
+                    },
+                    "node": {
+                        "type": "keyword",
+                        "time_series_dimension": true
+                    },
+                    "pod": {
+                        "type": "keyword",
+                        "time_series_dimension": true
+                    }
+                }
+            }
+        }
+    }
+}
+
+----
+// TEST
+
+[discrete]
+[[downsampling-manual-ingest-data]]
+==== Ingest time series data
+
+In a terminal window with {es} running, run the following curl command to load
+the documents from the downloaded sample data file:
+
+[source,sh]
+----
+curl -s -H "Content-Type: application/json" \
+   -XPOST http://<elasticsearch-node>/sample-01/_bulk?pretty \
+   --data-binary @sample-k8s-metrics.json
+----
+// NOTCONSOLE
+
+Approximately 18,000 documents are added. Check the search results for the newly
+ingested data:
+
+[source,console]
+----
+GET /sample-01*/_search
+----
+// TEST[continued]
+
+The query should return the first 10,000 hits. In each document you can see the
+time series dimensions (`host`, `node`, `pod` and `container`) as well as the
+various CPU and memory time series metrics.
+
+```
+  "hits": {
+    "total": {
+      "value": 10000,
+      "relation": "gte"
+    },
+    "max_score": 1,
+    "hits": [
+      {
+        "_index": "sample-01",
+        "_id": "WyHN6N6AwdaJByQWAAABgYOOweA",
+        "_score": 1,
+        "_source": {
+          "@timestamp": "2022-06-20T23:59:40Z",
+          "kubernetes": {
+            "host": "gke-apps-0",
+            "node": "gke-apps-0-1",
+            "pod": "gke-apps-0-1-0",
+            "container": {
+              "cpu": {
+                "usage": {
+                  "nanocores": 80037,
+                  "core": {
+                    "ns": 12828317850
+                  },
+                  "node": {
+                    "pct": 0.0000277905
+                  },
+                  "limit": {
+                    "pct": 0.0000277905
+                  }
+                }
+              },
+              "memory": {
+                "available": {
+                  "bytes": 790830121
+                },
+                "usage": {
+                  "bytes": 139548672,
+                  "node": {
+                    "pct": 0.01770037710617187
+                  },
+                  "limit": {
+                    "pct": 0.00009923134671484496
+                  }
+                },
+                "workingset": {
+                  "bytes": 2248540
+                },
+                "rss": {
+                  "bytes": 289260
+                },
+                "pagefaults": 74843,
+                "majorpagefaults": 0
+              },
+              "start_time": "2021-03-30T07:59:06Z",
+              "name": "container-name-44"
+            },
+            "namespace": "namespace26"
+          }
+        }
+      }
+...
+```
+
+Next, run a terms aggregation on the set of time series dimensions (`_tsid`),
+with a date histogram sub-aggregation on a fixed interval of one day.
+
+[source,console]
+----
+GET /sample-01*/_search
+{
+    "size": 0,
+    "aggs": {
+        "tsid": {
+            "terms": {
+                "field": "_tsid"
+            },
+            "aggs": {
+                "over_time": {
+                    "date_histogram": {
+                        "field": "@timestamp",
+                        "fixed_interval": "1d"
+                    },
+                    "aggs": {
+                        "min": {
+                            "min": {
+                                "field": "kubernetes.container.memory.usage.bytes"
+                            }
+                        },
+                        "max": {
+                            "max": {
+                                "field": "kubernetes.container.memory.usage.bytes"
+                            }
+                        },
+                        "avg": {
+                            "avg": {
+                                "field": "kubernetes.container.memory.usage.bytes"
+                            }
+                        }
+                    }
+                }
+            }
+        }
+    }
+}
+----
+// TEST[continued]
+
+Re-run your search query to view the aggregated time series data.
+
+[source,console]
+----
+GET /sample-01*/_search
+----
+// TEST[continued]
+
+[discrete]
+[[downsampling-manual-run]]
+==== Run downsampling for the index
+
+Before running downsampling, the index must be set to read-only mode:
+
+[source,console]
+----
+PUT /sample-01/_block/write
+----
+// TEST[continued]
+
+Now you can use the <<indices-downsample-data-stream,downsample API>> to
+downsample the index, setting the time series interval to one hour:
+
+[source,console]
+----
+POST /sample-01/_downsample/sample-01-downsample
+{
+  "fixed_interval": "1h"
+}
+----
+// TEST[continued]
+
+Finally, delete the original index:
+
+[source,console]
+----
+DELETE /sample-01
+----
+// TEST[continued]
+
+[discrete]
+[[downsampling-manual-view-results]]
+==== View the results
+
+
+Now, re-run your search query:
+
+[source,console]
+----
+GET /sample-01*/_search
+----
+// TEST[continued]
+
+In the query results, notice that the number of hits has been reduced to only 288
+documents. In addition, for each time series metric, statistical representations
+have been calculated: `min`, `max`, `sum`, and `value_count`.
+
+```
+  "hits": {
+    "total": {
+      "value": 288,
+      "relation": "eq"
+    },
+    "max_score": 1,
+    "hits": [
+      {
+        "_index": "sample-01-downsample",
+        "_id": "WyHN6N6AwdaJByQWAAABgYNYIYA",
+        "_score": 1,
+        "_source": {
+          "@timestamp": "2022-06-20T23:00:00.000Z",
+          "_doc_count": 81,
+          "kubernetes.host": "gke-apps-0",
+          "kubernetes.namespace": "namespace26",
+          "kubernetes.node": "gke-apps-0-1",
+          "kubernetes.pod": "gke-apps-0-1-0",
+          "kubernetes.container.cpu.usage.nanocores": {
+            "min": 23344,
+            "max": 163408,
+            "sum": 7488985,
+            "value_count": 81
+          },
+          "kubernetes.container.memory.available.bytes": {
+            "min": 167751844,
+            "max": 1182251090,
+            "sum": 58169948901,
+            "value_count": 81
+          },
+          "kubernetes.container.memory.rss.bytes": {
+            "min": 54067,
+            "max": 391987,
+            "sum": 17550215,
+            "value_count": 81
+          },
+          "kubernetes.container.memory.pagefaults": {
+            "min": 69086,
+            "max": 428910,
+            "sum": 20239365,
+            "value_count": 81
+          },
+          "kubernetes.container.memory.workingset.bytes": {
+            "min": 323420,
+            "max": 2279342,
+            "sum": 104233700,
+            "value_count": 81
+          },
+          "kubernetes.container.memory.usage.bytes": {
+            "min": 61401416,
+            "max": 413064069,
+            "sum": 18557182404,
+            "value_count": 81
+          }
+        }
+      },
+...
+```
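The per-metric summaries shown above can be reproduced with a few lines of Python. This is a sketch of the aggregation logic only, not the actual {es} implementation, and the sample values are illustrative rather than taken from the dataset:

```python
# Sketch: the statistical summary stored for a gauge metric
# in one downsampled bucket (min, max, sum, value_count).
def summarize(values):
    """Return the summary stored for one metric within one bucket."""
    return {
        "min": min(values),
        "max": max(values),
        "sum": sum(values),
        "value_count": len(values),
    }

# e.g. memory usage samples collected during one hourly bucket
samples = [61401416, 139548672, 413064069]
print(summarize(samples))
```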
+
+This example demonstrates how downsampling can dramatically reduce the number of
+records stored for time series data, within whatever time boundaries you choose.
+It's also possible to perform downsampling on already downsampled data, to
+further reduce storage and associated costs, as the time series data ages and
+the data resolution becomes less critical.
+
+Downsampling is very easily integrated within an ILM policy. To learn more, try
+the <<downsampling-ilm,Run downsampling with ILM>> example.

+ 179 - 0
docs/reference/data-streams/downsampling.asciidoc

@@ -0,0 +1,179 @@
+ifeval::["{release-state}"=="unreleased"]
+[[downsampling]]
+=== Downsampling a time series data stream
+
+preview::[]
+
+Downsampling provides a method to reduce the footprint of your <<tsds,time
+series data>> by storing it at reduced granularity.
+
+Metrics solutions collect large amounts of time series data that grow over time.
+As that data ages, it becomes less relevant to the current state of the system.
+The downsampling process rolls up documents within a fixed time interval into a
+single summary document. Each summary document includes statistical
+representations of the original data: the `min`, `max`, `sum`, `value_count`,
+and `average` for each metric. Data stream <<time-series-dimension,time series
+dimensions>> are stored unchanged.
+
+Downsampling, in effect, lets you trade data resolution and precision for
+storage size. You can include it in an <<index-lifecycle-management,{ilm}
+({ilm-init})>> policy to automatically manage the volume and associated cost of
+your metrics data as it ages.
+
+Check the following sections to learn more:
+
+* <<how-downsampling-works>>
+* <<running-downsampling>>
+* <<querying-downsampled-indices>>
+* <<downsampling-restrictions>>
+* <<try-out-downsampling>>
+
+[discrete]
+[[how-downsampling-works]]
+=== How it works
+
+A <<time-series,time series>> is a sequence of observations taken over time for
+a specific entity. The observed samples can be represented as a continuous
+function, where the time series dimensions remain constant and the time series
+metrics change over time.
+
+//.Sampling a continuous function
+image::images/data-streams/time-series-function.png[align="center"]
+
+In an {es} index, a single document is created for each timestamp,
+containing the immutable time series dimensions, together with the metrics names
+and the changing metrics values. For a single timestamp, several time series
+dimensions and metrics may be stored.
+
+//.Metric anatomy
+image::images/data-streams/time-series-metric-anatomy.png[align="center"]
+
+For your most current and relevant data, the metrics series typically has a low
+sampling time interval, so it's optimized for queries that require a high data
+resolution.
+
+.Original metrics series
+image::images/data-streams/time-series-original.png[align="center"]
+
+Downsampling works on older, less frequently accessed data by replacing the
+original time series with a series at a coarser sampling interval, together with
+statistical representations of that data. Where the original metrics samples may
+have been taken, for example, every ten seconds, as the data ages you may choose
+to reduce the sample granularity to hourly or daily. You may choose to reduce
+the granularity of `cold` archival data to monthly or less.
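To get a feel for the reduction, here is a back-of-the-envelope calculation of document counts per time series at different sampling intervals. Real storage savings also depend on compression and document size, so treat this as a rough sketch:

```python
# Documents per day for a single time series at a given sampling interval.
SECONDS_PER_DAY = 24 * 60 * 60

def docs_per_day(interval_seconds):
    return SECONDS_PER_DAY // interval_seconds

raw = docs_per_day(10)        # 10-second samples
hourly = docs_per_day(3600)   # downsampled to one-hour buckets
print(raw, hourly, raw // hourly)  # 8640 docs/day vs 24 docs/day, a 360x reduction
```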
+
+.Downsampled metrics series
+image::images/data-streams/time-series-downsampled.png[align="center"]
+
+[discrete]
+[[running-downsampling]]
+=== Running downsampling on time series data
+
+To downsample a time series index, use the
+<<indices-downsample-data-stream,Downsample API>> and set `fixed_interval` to
+the level of granularity that you'd like:
+
+[source,console]
+----
+POST /<source_index>/_downsample/<new_index>
+{
+    "fixed_interval": "1d"
+}
+----
+// TEST[skip:uses placeholder index names]
+
+To downsample time series data as part of ILM, include a
+<<ilm-downsample,Downsample action>> in your ILM policy and set `fixed_interval`
+to the level of granularity that you'd like:
+
+[source,console]
+----
+PUT _ilm/policy/my_policy
+{
+  "policy": {
+    "phases": {
+      "warm": {
+        "actions": {
+          "downsample" : {
+            "fixed_interval": "1h"
+          }
+        }
+      }
+    }
+  }
+}
+----
+
+[discrete]
+[[querying-downsampled-indices]]
+=== Querying downsampled indices
+
+You can use the <<search-search,`_search`>> and <<async-search,`_async_search`>>
+endpoints to query a downsampled index. Multiple raw data and downsampled
+indices can be queried in a single request, and a single request can include
+downsampled indices at different granularities (different bucket timespan). That
+is, you can query data streams that contain downsampled indices with multiple
+downsampling intervals (for example, `15m`, `1h`, `1d`).
+
+A time-based histogram aggregation returns results in a uniform bucket size,
+regardless of the downsampling interval of the queried indices. For example, if
+you run a `date_histogram` aggregation with `"fixed_interval": "1m"` on a
+downsampled index that has been downsampled at an hourly resolution
+(`"fixed_interval": "1h"`), the query returns one bucket with all of the data at
+minute 0, then 59 empty buckets, and then a bucket with data again for the next
+hour.
+
+There are a few things to note when querying downsampled indices:
+
+* When you run queries in {kib} and through Elastic solutions, a normal
+response is returned without notification that some of the queried indices are
+downsampled.
+* For 
+<<search-aggregations-bucket-datehistogram-aggregation,date histogram aggregations>>, 
+only `fixed_intervals` (and not calendar-aware intervals) are supported.
+* Only Coordinated Universal Time (UTC) date-times are supported.
+
+[discrete]
+[[downsampling-restrictions]]
+=== Restrictions and limitations
+
+The following restrictions and limitations apply for downsampling:
+
+* Only indices in a <<tsds,time series data stream>> are supported. 
+
+* Data is downsampled based on the time dimension only. All other dimensions are
+copied to the new index without any modification.
+
+* Within a data stream, a downsampled index replaces the original index and the
+original index is deleted. Only one index can exist for a given time period. 
+
+* A source index must be in read-only mode for the downsampling process to
+succeed. Check the <<downsampling-manual,Run downsampling manually>> example for
+details.
+
+* Downsampling data for the same period many times (downsampling of a
+downsampled index) is supported. The downsampling interval must be a multiple of
+the interval of the downsampled index.
+
+* Downsampling is provided as an ILM action. See <<ilm-downsample,Downsample>>.
+
+* The new, downsampled index is created on the data tier of the original index
+and it inherits its settings (for example, the number of shards and replicas).
+
+* The numeric `gauge` and `counter` <<mapping-field-meta,metric types>> are
+supported.
+
+* The downsampling configuration is extracted from the time series data stream
+<<tsds-create-mappings-component-template,index mapping>>. The only additional
+required setting is the downsampling `fixed_interval`.
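The "multiple of the interval" rule for re-downsampling can be checked with a quick sketch. Interval parsing here is deliberately simplified to single-unit strings and is not the {es} validation code:

```python
# Check that a new downsampling interval is a multiple of the existing one,
# as required when downsampling an already downsampled index.
UNITS = {"s": 1, "m": 60, "h": 3600, "d": 86400}

def to_seconds(interval):
    """Parse a simple fixed interval like '15m' or '1h' into seconds."""
    return int(interval[:-1]) * UNITS[interval[-1]]

def can_redownsample(existing, requested):
    return to_seconds(requested) % to_seconds(existing) == 0

print(can_redownsample("1h", "1d"))   # True: 1d is a multiple of 1h
print(can_redownsample("1h", "90m"))  # False: 90m is not
```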
+
+[discrete]
+[[try-out-downsampling]]
+=== Try it out
+
+To take downsampling for a test run, try our example of
+<<downsampling-manual,running downsampling manually>>.
+
+Downsampling can easily be added to your ILM policy. To learn how, try our
+<<downsampling-ilm,Run downsampling with ILM>> example.
+
+endif::[]

+ 3 - 0
docs/reference/data-streams/tsds.asciidoc

@@ -290,3 +290,6 @@ Now that you know the basics, you're ready to <<set-up-tsds,create a TSDS>> or
 
 include::set-up-tsds.asciidoc[]
 include::tsds-index-settings.asciidoc[]
+include::downsampling.asciidoc[]
+include::downsampling-manual.asciidoc[]
+include::downsampling-ilm.asciidoc[]

+ 57 - 0
docs/reference/ilm/actions/ilm-downsample.asciidoc

@@ -0,0 +1,57 @@
+[role="xpack"]
+[[ilm-downsample]]
+=== Downsample
+
+preview::[]
+
+Phases allowed: hot, warm, cold.
+
+Aggregates a time series (TSDS) index and stores
+pre-computed statistical summaries (`min`, `max`, `sum`, `value_count` and
+`avg`) for each metric field, grouped by a configured time interval. For example,
+a TSDS index that contains metrics sampled every 10 seconds can be downsampled
+to an hourly index. All documents within an hour interval are summarized and
+stored as a single document in the downsample index.
+
+This action corresponds to the <<indices-downsample-data-stream,downsample API>>.
+
+The name of the resulting downsample index is
+`downsample-<original-index-name>-<random-uuid>`. If {ilm-init} performs the
+`downsample` action on a backing index for a data stream, the downsample index
+becomes a backing index for the same stream and the source index is deleted.
+
+To use the `downsample` action in the `hot` phase, the `rollover` action *must*
+be present. If no rollover action is configured, {ilm-init} will reject the
+policy.
+
+[role="child_attributes"]
+[[ilm-downsample-options]]
+==== Options
+
+`fixed_interval`:: (Required, string) The
+<<rollup-understanding-group-intervals,fixed time interval>> into which the data
+will be downsampled.
+
+[[ilm-downsample-ex]]
+==== Example 
+
+[source,console]
+----
+PUT _ilm/policy/datastream_policy
+{
+  "policy": {
+    "phases": {
+      "hot": {
+        "actions": {
+          "rollover": {
+            "max_docs": 1
+          },
+          "downsample": {
+            "fixed_interval": "1h"
+          }
+        }
+      }
+    }
+  }
+}
+----

+ 5 - 0
docs/reference/ilm/ilm-actions.asciidoc

@@ -24,6 +24,10 @@ Block write operations to the index.
 Remove the index as the write index for the rollover alias and
 start indexing to a new index.
 
+<<ilm-downsample,Downsample>>::
+Aggregates an index's time series data and stores the results in a new read-only
+index. For example, you can downsample hourly data into daily or weekly summaries.
+
 <<ilm-searchable-snapshot, Searchable snapshot>>::
 Take a snapshot of the managed index in the configured repository
 and mount it as a searchable snapshot.
@@ -48,6 +52,7 @@ include::actions/ilm-forcemerge.asciidoc[]
 include::actions/ilm-migrate.asciidoc[]
 include::actions/ilm-readonly.asciidoc[]
 include::actions/ilm-rollover.asciidoc[]
+include::actions/ilm-downsample.asciidoc[]
 include::actions/ilm-searchable-snapshot.asciidoc[]
 include::actions/ilm-set-priority.asciidoc[]
 include::actions/ilm-shrink.asciidoc[]

+ 3 - 0
docs/reference/ilm/ilm-index-lifecycle.asciidoc

@@ -90,6 +90,7 @@ actions in the order listed.
   - <<ilm-shrink,Shrink>>
   - <<ilm-forcemerge,Force Merge>>
   - <<ilm-searchable-snapshot, Searchable Snapshot>>
+  - <<ilm-downsample,Downsample>>  
 * Warm
   - <<ilm-set-priority,Set Priority>>
   - <<ilm-unfollow,Unfollow>>
@@ -98,6 +99,7 @@ actions in the order listed.
   - <<ilm-migrate,Migrate>>
   - <<ilm-shrink,Shrink>>
   - <<ilm-forcemerge,Force Merge>>
+  - <<ilm-downsample,Downsample>>  
 * Cold
   - <<ilm-set-priority,Set Priority>>
   - <<ilm-unfollow,Unfollow>>
@@ -105,6 +107,7 @@ actions in the order listed.
   - <<ilm-searchable-snapshot, Searchable Snapshot>>
   - <<ilm-allocate,Allocate>>
   - <<ilm-migrate,Migrate>>
+  - <<ilm-downsample,Downsample>>
 * Frozen
   - <<ilm-unfollow,Unfollow>>
   - <<ilm-searchable-snapshot, Searchable Snapshot>>

Binary
docs/reference/images/data-streams/time-series-downsampled.png


Binary
docs/reference/images/data-streams/time-series-function.png


Binary
docs/reference/images/data-streams/time-series-metric-anatomy.png


Binary
docs/reference/images/data-streams/time-series-original.png


+ 1 - 1
docs/reference/indices.asciidoc

@@ -20,7 +20,7 @@ index settings, aliases, mappings, and index templates.
 * <<indices-rollover-index>>
 * <<unfreeze-index-api>>
 * <<indices-resolve-index-api>>
-
+* <<indices-downsample-data-stream>>
 
 [discrete]
 [[mapping-management]]

+ 158 - 0
docs/reference/indices/downsample-data-stream.asciidoc

@@ -0,0 +1,158 @@
+[role="xpack"]
+[[indices-downsample-data-stream]]
+=== Downsample index API
+++++
+<titleabbrev>Downsample</titleabbrev>
+++++
+
+preview::[]
+
+Aggregates a time series (TSDS) index and stores
+pre-computed statistical summaries (`min`, `max`, `sum`, `value_count` and
+`avg`) for each metric field grouped by a configured time interval. For example,
+a TSDS index that contains metrics sampled every 10 seconds can be downsampled
+to an hourly index. All documents within an hour interval are summarized and
+stored as a single document in the downsample index.
+
+////
+[source,console]
+----
+PUT /my-time-series-index
+{
+    "settings": {
+        "index": {
+            "mode": "time_series",
+            "time_series": {
+                "start_time": "2022-06-10T00:00:00Z",
+                "end_time": "2022-06-30T23:59:59Z"
+            },
+            "routing_path": [
+                "test.namespace"
+            ],
+            "number_of_replicas": 0,
+            "number_of_shards": 2
+        }
+    },
+    "mappings": {
+        "properties": {
+            "@timestamp": {
+                "type": "date"
+            },
+            "metric": {
+                "type": "long",
+                "time_series_metric": "gauge"
+            },
+            "dimension": {
+                "type": "keyword",
+                "time_series_dimension": true
+            }
+        }
+    }
+}
+
+PUT /my-time-series-index/_block/write
+
+----
+// TEST
+////
+
+[source,console]
+----
+POST /my-time-series-index/_downsample/my-downsampled-time-series-index
+{
+    "fixed_interval": "1d"
+}
+----
+// TEST[continued]
+
+////
+[source,console]
+----
+DELETE /my-time-series-index*
+DELETE _data_stream/*
+DELETE _index_template/*
+----
+// TEST[continued]
+////
+
+[[downsample-api-request]]
+==== {api-request-title}
+
+`POST /<source-index>/_downsample/<output-downsampled-index>`
+
+[[downsample-api-prereqs]]
+==== {api-prereq-title}
+
+* Only indices in a <<tsds,time series data stream>> are supported.
+
+* If the {es} {security-features} are enabled, you must have the `all`
+or `manage` <<privileges-list-indices,index privilege>> for the data stream.
+
+* Neither <<field-and-document-access-control,field nor document level security>> can be defined on the source index.
+
+* The source index must be read only (`index.blocks.write: true`).
+
+[[downsample-api-path-params]]
+==== {api-path-parms-title}
+
+`<source-index>`::
+(Required, string) Name of the time series index to downsample.
+
+`<output-downsampled-index>`::
++
+--
+(Required, string) Name of the index to create. 
+
+include::{es-repo-dir}/indices/create-index.asciidoc[tag=index-name-reqs]
+--
+
+[role="child_attributes"]
+[[downsample-api-query-parms]]
+==== {api-query-parms-title}
+
+`fixed_interval`:: (Required, <<time-units,time units>>) The interval at which
+to aggregate the original time series index. For example, `60m` produces a
+document for each 60 minute (hourly) interval. This follows standard time
+formatting syntax as used elsewhere in {es}.
++
+NOTE: Smaller, more granular intervals take up proportionally more space.
+
+[[downsample-api-process]]
+==== The downsampling process
+
+The downsampling operation traverses the source TSDS index and performs the
+following steps:
+
+. Creates a new document for each value of the `_tsid` field and each
+`@timestamp` value, rounded to the `fixed_interval` defined in the downsample
+configuration.
+. For each new document, copies all <<time-series-dimension,time
+series dimensions>> from the source index to the target index. Dimensions in a
+TSDS are constant, so this is done only once per bucket.
+. For each <<time-series-metric,time series metric>> field, computes aggregations
+for all documents in the bucket. Depending on the metric type of each metric
+field a different set of pre-aggregated results is stored:
+
+** `gauge`: The `min`, `max`, `sum`, and `value_count` are stored; `value_count`
+is stored as type `aggregate_metric_double`.
+** `counter`: The `last_value` is stored.
+. For all other fields, the most recent value is copied to the target index.
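The grouping in the steps above can be sketched as follows. This illustrates the timestamp rounding and per-bucket aggregation under simplifying assumptions (epoch-second timestamps, a single gauge metric) and is not the actual implementation:

```python
from collections import defaultdict

# Group raw docs by (_tsid, timestamp rounded down to fixed_interval),
# then summarize the gauge metric within each bucket.
def downsample(docs, fixed_interval_seconds):
    buckets = defaultdict(list)
    for doc in docs:
        key = (doc["tsid"], doc["ts"] - doc["ts"] % fixed_interval_seconds)
        buckets[key].append(doc["value"])
    return {
        key: {"min": min(v), "max": max(v), "sum": sum(v), "value_count": len(v)}
        for key, v in buckets.items()
    }

docs = [
    {"tsid": "pod-a", "ts": 10, "value": 5},
    {"tsid": "pod-a", "ts": 50, "value": 7},
    {"tsid": "pod-a", "ts": 70, "value": 9},  # falls in the next 60s bucket
]
result = downsample(docs, 60)
print(result[("pod-a", 0)])   # {'min': 5, 'max': 7, 'sum': 12, 'value_count': 2}
```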
+
+[[downsample-api-mappings]]
+==== Source and target index field mappings
+
+Fields in the target, downsampled index are created based on fields in the
+original source index, as follows:
+
+. All fields mapped with the `time_series_dimension` parameter are created in
+the target downsample index with the same mapping as in the source index.
+. All fields mapped with the `time_series_metric` parameter are created
+in the target downsample index with the same mapping as in the source
+index. An exception is that for fields mapped as `time_series_metric: gauge`
+the field type is changed to `aggregate_metric_double`.
+. All other fields that are neither dimensions nor metrics (that is, label
+fields), are created in the target downsample index with the same mapping
+that they had in the source index.
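A sketch of this mapping transformation follows. The field names and the simplified mapping shapes are illustrative, and the `aggregate_metric_double` sub-parameters shown (`metrics`, `default_metric`) are assumptions about the generated mapping rather than output copied from {es}:

```python
# Derive a downsample-index field mapping from a source mapping (sketch).
def target_mapping(source_properties):
    target = {}
    for name, props in source_properties.items():
        props = dict(props)  # copy; dimensions and labels pass through unchanged
        if props.get("time_series_metric") == "gauge":
            # Gauge metrics become pre-aggregated summaries.
            props["type"] = "aggregate_metric_double"
            props["metrics"] = ["min", "max", "sum", "value_count"]
            props["default_metric"] = "max"
        target[name] = props
    return target

source = {
    "host": {"type": "keyword", "time_series_dimension": True},
    "memory.usage.bytes": {"type": "long", "time_series_metric": "gauge"},
}
print(target_mapping(source)["memory.usage.bytes"]["type"])
```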
+
+Check the <<downsampling,Downsampling>> documentation for an overview and
+examples of running downsampling manually and as part of an ILM policy.