
[DOCS] Add downsampling reference to rollup docs (#91295)

David Kilfoyle 2 years ago
parent
commit
7722daa197

+ 3 - 0
docs/reference/rollup/api-quickref.asciidoc

@@ -7,6 +7,9 @@
 
 experimental[]
 
+NOTE: For version 8.5 and above, we recommend <<downsampling,downsampling>> over
+rollups as a way to reduce your storage costs for time series data.
+
 Most rollup endpoints have the following base:
 
 [source,js]

+ 3 - 0
docs/reference/rollup/apis/delete-job.asciidoc

@@ -10,6 +10,9 @@ Deletes an existing {rollup-job}.
 
 experimental[]
 
+NOTE: For version 8.5 and above, we recommend <<downsampling,downsampling>> over
+rollups as a way to reduce your storage costs for time series data.
+
 [[rollup-delete-job-request]]
 ==== {api-request-title}
 

+ 3 - 0
docs/reference/rollup/apis/get-job.asciidoc

@@ -9,6 +9,9 @@ Retrieves the configuration, stats, and status of {rollup-jobs}.
 
 experimental[]
 
+NOTE: For version 8.5 and above, we recommend <<downsampling,downsampling>> over
+rollups as a way to reduce your storage costs for time series data.
+
 [[rollup-get-job-request]]
 ==== {api-request-title}
 

+ 3 - 0
docs/reference/rollup/apis/put-job.asciidoc

@@ -10,6 +10,9 @@ Creates a {rollup-job}.
 
 experimental[]
 
+NOTE: For version 8.5 and above, we recommend <<downsampling,downsampling>> over
+rollups as a way to reduce your storage costs for time series data.
+
 [[rollup-put-job-api-request]]
 ==== {api-request-title}
 

+ 3 - 0
docs/reference/rollup/apis/rollup-caps.asciidoc

@@ -10,6 +10,9 @@ specific index or index pattern.
 
 experimental[]
 
+NOTE: For version 8.5 and above, we recommend <<downsampling,downsampling>> over
+rollups as a way to reduce your storage costs for time series data.
+
 [[rollup-get-rollup-caps-request]]
 ==== {api-request-title}
 

+ 3 - 0
docs/reference/rollup/apis/rollup-index-caps.asciidoc

@@ -10,6 +10,9 @@ index where rollup data is stored).
 
 experimental[]
 
+NOTE: For version 8.5 and above, we recommend <<downsampling,downsampling>> over
+rollups as a way to reduce your storage costs for time series data.
+
 [[rollup-get-rollup-index-caps-request]]
 ==== {api-request-title}
 

+ 3 - 0
docs/reference/rollup/apis/rollup-search.asciidoc

@@ -9,6 +9,9 @@ Enables searching rolled-up data using the standard Query DSL.
 
 experimental[]
 
+NOTE: For version 8.5 and above, we recommend <<downsampling,downsampling>> over
+rollups as a way to reduce your storage costs for time series data.
+
 [[rollup-search-request]]
 ==== {api-request-title}
 

+ 3 - 0
docs/reference/rollup/apis/start-job.asciidoc

@@ -10,6 +10,9 @@ Starts an existing, stopped {rollup-job}.
 
 experimental[]
 
+NOTE: For version 8.5 and above, we recommend <<downsampling,downsampling>> over
+rollups as a way to reduce your storage costs for time series data.
+
 [[rollup-start-job-request]]
 ==== {api-request-title}
 

+ 3 - 0
docs/reference/rollup/apis/stop-job.asciidoc

@@ -10,6 +10,9 @@ Stops an existing, started {rollup-job}.
 
 experimental[]
 
+NOTE: For version 8.5 and above, we recommend <<downsampling,downsampling>> over
+rollups as a way to reduce your storage costs for time series data.
+
 [[rollup-stop-job-request]]
 ==== {api-request-title}
 

+ 3 - 0
docs/reference/rollup/index.asciidoc

@@ -4,6 +4,9 @@
 
 experimental[]
 
+NOTE: For version 8.5 and above, we recommend <<downsampling,downsampling>> over
+rollups as a way to reduce your storage costs for time series data.
+
 Keeping historical data around for analysis is extremely useful but often avoided due to the financial cost of
 archiving massive amounts of data. Retention periods are thus driven by financial realities rather than by the
 usefulness of extensive historical data.
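Since each added note points readers to downsampling, a minimal sketch of the downsample request introduced in 8.5 may help orient them. This is an illustration, not part of the patch: the index names are hypothetical, and downsampling applies to time series data stream backing indices, which must be made read-only before the operation.

[source,js]
----
POST /my-tsds-index/_downsample/my-tsds-index-downsampled
{
  "fixed_interval": "1h"
}
----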

+ 3 - 0
docs/reference/rollup/overview.asciidoc

@@ -7,6 +7,9 @@
 
 experimental[]
 
+NOTE: For version 8.5 and above, we recommend <<downsampling,downsampling>> over
+rollups as a way to reduce your storage costs for time series data.
+
 Time-based data (documents that are predominantly identified by their timestamp) often have associated retention policies
 to manage data growth. For example, your system may be generating 500 documents every second. That will generate
 43 million documents per day, and nearly 16 billion documents a year.
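The growth figures quoted in this hunk can be sanity-checked with a quick calculation, using the 500 documents-per-second rate from the text:

```python
# Sanity-check the document-growth figures quoted in the overview.
docs_per_second = 500
docs_per_day = docs_per_second * 60 * 60 * 24  # 43,200,000 ~ "43 million per day"
docs_per_year = docs_per_day * 365             # 15,768,000,000 ~ "nearly 16 billion a year"

print(f"{docs_per_day:,} docs/day")    # 43,200,000 docs/day
print(f"{docs_per_year:,} docs/year")  # 15,768,000,000 docs/year
```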

+ 3 - 0
docs/reference/rollup/rollup-agg-limitations.asciidoc

@@ -4,6 +4,9 @@
 
 experimental[]
 
+NOTE: For version 8.5 and above, we recommend <<downsampling,downsampling>> over
+rollups as a way to reduce your storage costs for time series data.
+
 There are some limitations to how fields can be rolled up / aggregated. This page highlights the major limitations so that
 you are aware of them.
 

+ 3 - 0
docs/reference/rollup/rollup-getting-started.asciidoc

@@ -7,6 +7,9 @@
 
 experimental[]
 
+NOTE: For version 8.5 and above, we recommend <<downsampling,downsampling>> over
+rollups as a way to reduce your storage costs for time series data.
+
 To use the Rollup feature, you need to create one or more "Rollup Jobs". These jobs run continuously in the background
 and rollup the index or indices that you specify, placing the rolled documents in a secondary index (also of your choosing).
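As a sketch of the flow this paragraph describes, the following creates and then starts a rollup job via the create and start rollup job APIs covered earlier in this section. The job name, field names, and intervals here are hypothetical examples, not part of the patch:

[source,js]
----
PUT _rollup/job/sensor
{
  "index_pattern": "sensor-*",
  "rollup_index": "sensor_rollup",
  "cron": "*/30 * * * * ?",
  "page_size": 1000,
  "groups": {
    "date_histogram": {
      "field": "timestamp",
      "fixed_interval": "60m"
    }
  },
  "metrics": [
    { "field": "temperature", "metrics": [ "min", "max", "avg" ] }
  ]
}
----

Newly created jobs are inactive until explicitly started:

[source,js]
----
POST _rollup/job/sensor/_start
----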
 

+ 3 - 0
docs/reference/rollup/rollup-search-limitations.asciidoc

@@ -4,6 +4,9 @@
 
 experimental[]
 
+NOTE: For version 8.5 and above, we recommend <<downsampling,downsampling>> over
+rollups as a way to reduce your storage costs for time series data.
+
 While we feel the Rollup function is extremely flexible, the nature of summarizing data means there will be some limitations. Once
 live data is thrown away, you will always lose some flexibility.
 

+ 3 - 0
docs/reference/rollup/understanding-groups.asciidoc

@@ -4,6 +4,9 @@
 
 experimental[]
 
+NOTE: For version 8.5 and above, we recommend <<downsampling,downsampling>> over
+rollups as a way to reduce your storage costs for time series data.
+
 To preserve flexibility, Rollup Jobs are defined based on how future queries may need to use the data. Traditionally, systems force
 the admin to make decisions about what metrics to rollup and on what interval. E.g. The average of `cpu_time` on an hourly basis. This
 is limiting; if, in the future, the admin wishes to see the average of `cpu_time` on an hourly basis _and_ partitioned by `host_name`,