[[downsampling]]

/////
[source,console]
--------------------------------------------------
DELETE _ilm/policy/my_policy
--------------------------------------------------
// TEST
// TEARDOWN
/////

=== Downsampling a time series data stream
Downsampling provides a method to reduce the footprint of your <<tsds,time
series data>> by storing it at reduced granularity.

Metrics solutions collect large amounts of time series data that grow over time.
As that data ages, it becomes less relevant to the current state of the system.
The downsampling process rolls up documents within a fixed time interval into a
single summary document. Each summary document includes statistical
representations of the original data: the `min`, `max`, `sum`, and `value_count`
for each metric. Data stream <<time-series-dimension,time series dimensions>>
are stored unchanged.

Downsampling, in effect, lets you trade data resolution and precision for
storage size. You can include it in an <<index-lifecycle-management,{ilm}
({ilm-init})>> policy to automatically manage the volume and associated cost of
your metrics data as it ages.

Check the following sections to learn more:
* <<how-downsampling-works>>
* <<running-downsampling>>
* <<querying-downsampled-indices>>
* <<downsampling-restrictions>>
* <<try-out-downsampling>>

[discrete]
[[how-downsampling-works]]
=== How it works
A <<time-series,time series>> is a sequence of observations taken over time for
a specific entity. The observed samples can be represented as a continuous
function, where the time series dimensions remain constant and the time series
metrics change over time.

//.Sampling a continuous function
image::images/data-streams/time-series-function.png[align="center"]

In an Elasticsearch index, a single document is created for each timestamp,
containing the immutable time series dimensions, together with the metric names
and the changing metric values. For a single timestamp, several time series
dimensions and metrics may be stored.

//.Metric anatomy
image::images/data-streams/time-series-metric-anatomy.png[align="center"]

For your most current and relevant data, the metrics series typically has a low
sampling time interval, so it's optimized for queries that require a high data
resolution.
.Original metrics series
image::images/data-streams/time-series-original.png[align="center"]

Downsampling works on older, less frequently accessed data by replacing the
original time series with both a data stream of a higher sampling interval and
statistical representations of that data. Where the original metric samples may
have been taken, for example, every ten seconds, as the data ages you may choose
to reduce the sample granularity to hourly or daily. You may choose to reduce
the granularity of `cold` archival data to monthly or less.

.Downsampled metrics series
image::images/data-streams/time-series-downsampled.png[align="center"]
[discrete]
[[downsample-api-process]]
==== The downsampling process

The downsampling operation traverses the source TSDS index and performs the
following steps:

. Creates a new document for each value of the `_tsid` field and each
`@timestamp` value, rounded to the `fixed_interval` defined in the downsample
configuration.
. For each new document, copies all <<time-series-dimension,time
series dimensions>> from the source index to the target index. Dimensions in a
TSDS are constant, so this is done only once per bucket.
. For each <<time-series-metric,time series metric>> field, computes aggregations
for all documents in the bucket. Depending on the metric type of each metric
field, a different set of pre-aggregated results is stored:
** `gauge`: The `min`, `max`, `sum`, and `value_count` are stored; `value_count`
is stored as type `aggregate_metric_double`.
** `counter`: The `last_value` is stored.
. For all other fields, the most recent value is copied to the target index.
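
As an illustration, with an hourly `fixed_interval` the raw documents for one
time series within a given hour might be rolled up into a single summary
document along these lines. This is a hypothetical sketch: the `host.name`,
`memory.usage`, and `network.bytes` fields stand in for a dimension, a `gauge`
metric, and a `counter` metric.

[source,js]
----
{
  "@timestamp": "2023-03-07T10:00:00.000Z", <1>
  "host.name": "host-1",                    <2>
  "memory.usage": {                         <3>
    "min": 0.21,
    "max": 0.55,
    "sum": 6.45,
    "value_count": 12
  },
  "network.bytes": 1234567                  <4>
}
----
// NOTCONSOLE
<1> The `@timestamp`, rounded down to the start of the hourly bucket.
<2> A time series dimension, copied unchanged.
<3> A `gauge` metric, stored as pre-aggregated `min`, `max`, `sum`, and `value_count`.
<4> A `counter` metric, stored as its `last_value` in the bucket.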
[discrete]
[[downsample-api-mappings]]
==== Source and target index field mappings

Fields in the target, downsampled index are created based on fields in the
original source index, as follows:

. All fields mapped with the `time_series_dimension` parameter are created in
the target downsample index with the same mapping as in the source index.
. All fields mapped with the `time_series_metric` parameter are created
in the target downsample index with the same mapping as in the source
index. An exception is that for fields mapped as `time_series_metric: gauge`,
the field type is changed to `aggregate_metric_double`.
. All other fields that are neither dimensions nor metrics (that is, label
fields) are created in the target downsample index with the same mapping
that they had in the source index.
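
For example, a hypothetical `memory.usage` field mapped as a `gauge` in the
source index would appear in the downsampled index as an
`aggregate_metric_double` field, roughly as follows (the exact `default_metric`
chosen may vary):

[source,js]
----
// Source index mapping (hypothetical field)
"memory.usage": {
  "type": "double",
  "time_series_metric": "gauge"
}

// Corresponding mapping in the downsampled index
"memory.usage": {
  "type": "aggregate_metric_double",
  "metrics": [ "min", "max", "sum", "value_count" ],
  "default_metric": "max",
  "time_series_metric": "gauge"
}
----
// NOTCONSOLE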
[discrete]
[[running-downsampling]]
=== Running downsampling on time series data

To downsample a time series index, use the
<<indices-downsample-data-stream,Downsample API>> and set `fixed_interval` to
the level of granularity that you'd like:

include::../indices/downsample-data-stream.asciidoc[tag=downsample-example]

To downsample time series data as part of ILM, include a
<<ilm-downsample,Downsample action>> in your ILM policy and set `fixed_interval`
to the level of granularity that you'd like:
[source,console]
----
PUT _ilm/policy/my_policy
{
  "policy": {
    "phases": {
      "warm": {
        "actions": {
          "downsample" : {
            "fixed_interval": "1h"
          }
        }
      }
    }
  }
}
----
[discrete]
[[querying-downsampled-indices]]
=== Querying downsampled indices

You can use the <<search-search,`_search`>> and <<async-search,`_async_search`>>
endpoints to query a downsampled index. Multiple raw data and downsampled
indices can be queried in a single request, and a single request can include
downsampled indices at different granularities (different bucket timespans).
That is, you can query data streams that contain downsampled indices with
multiple downsampling intervals (for example, `15m`, `1h`, `1d`).

The result of a time-based histogram aggregation is in a uniform bucket size,
and each downsampled index returns data ignoring the downsampling time interval.
For example, if you run a `date_histogram` aggregation with
`"fixed_interval": "1m"` on a downsampled index that has been downsampled at an
hourly resolution (`"fixed_interval": "1h"`), the query returns one bucket with
all of the data at minute 0, then 59 empty buckets, and then a bucket with data
again for the next hour.
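
For instance, a per-minute `date_histogram` query like the following could be
run against a data stream that contains hourly downsampled indices (the data
stream name is hypothetical):

[source,console]
----
GET /my-metrics-data-stream/_search
{
  "size": 0,
  "aggs": {
    "per_minute": {
      "date_histogram": {
        "field": "@timestamp",
        "fixed_interval": "1m"
      }
    }
  }
}
----
// TEST[skip: hypothetical data stream]

Against the hourly downsampled data, this returns one populated bucket followed
by 59 empty ones for each hour, as described above.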
[discrete]
[[querying-downsampled-indices-notes]]
==== Notes on downsample queries

There are a few things to note about querying downsampled indices:

* When you run queries in {kib} and through Elastic solutions, a normal
response is returned without notification that some of the queried indices are
downsampled.
* For
<<search-aggregations-bucket-datehistogram-aggregation,date histogram aggregations>>,
only `fixed_intervals` (and not calendar-aware intervals) are supported.
* Timezone support comes with caveats:
** Date histograms at intervals that are multiples of an hour are based on
values generated at UTC. This works well for timezones that are on the hour,
e.g. +5:00 or -3:00, but requires offsetting the reported time buckets, e.g.
`2020-03-07T10:30:00.000` instead of `2020-03-07T10:00:00.000` for
timezone +5:30 (India), if downsampling aggregates values per hour. In this
case, the results include the field `downsampled_results_offset: true`, to
indicate that the time buckets are shifted. This can be avoided if a
downsampling interval of 15 minutes is used, as it allows properly calculating
hourly values for the shifted buckets.
** Date histograms at intervals that are multiples of a day are similarly
affected, in case downsampling aggregates values per day. In this case, the
beginning of each day is always calculated at UTC when generating the
downsampled values, so the time buckets need to be shifted, e.g. reported as
`2020-03-07T19:00:00.000` instead of `2020-03-07T00:00:00.000` for timezone
`America/New_York`. The field `downsampled_results_offset: true` is added in
this case too.
** Daylight saving time and similar timezone peculiarities affect
reported results, as <<datehistogram-aggregation-time-zone,documented>>
for date histogram aggregation. In addition, downsampling at a daily interval
hinders tracking any information related to daylight saving time changes.
[discrete]
[[downsampling-restrictions]]
=== Restrictions and limitations

The following restrictions and limitations apply to downsampling:

* Only indices in a <<tsds,time series data stream>> are supported.
* Data is downsampled based on the time dimension only. All other dimensions are
copied to the new index without any modification.
* Within a data stream, a downsampled index replaces the original index, and the
original index is deleted. Only one index can exist for a given time period.
* A source index must be in read-only mode for the downsampling process to
succeed. Check the <<downsampling-manual,Run downsampling manually>> example for
details.
* Downsampling data for the same period many times (downsampling of a
downsampled index) is supported. The downsampling interval must be a multiple of
the interval of the downsampled index.
* Downsampling is provided as an ILM action. See <<ilm-downsample,Downsample>>.
* The new, downsampled index is created on the data tier of the original index
and it inherits its settings (for example, the number of shards and replicas).
* Only the numeric `gauge` and `counter` <<mapping-field-meta,metric types>> are
supported.
* The downsampling configuration is extracted from the time series data stream
<<create-tsds-index-template,index mapping>>. The only additional
required setting is the downsampling `fixed_interval`.
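
As noted above, a source index must be read-only before it can be downsampled.
When running downsampling manually, you can add a write block to the backing
index first (the backing index name here is hypothetical):

[source,console]
----
PUT /.ds-my-metrics-data-stream-2023.03.07-000001/_block/write
----
// TEST[skip: hypothetical backing index]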
[discrete]
[[try-out-downsampling]]
=== Try it out

To take downsampling for a test run, try our example of
<<downsampling-manual,running downsampling manually>>.

Downsampling can easily be added to your ILM policy. To learn how, try our
<<downsampling-ilm,Run downsampling with ILM>> example.