@@ -376,7 +376,7 @@ DELETE /sample-01

==== View the results

-Now, re-run your search query:
+Re-run your search query:

[source,console]
----
@@ -450,6 +450,51 @@ been calculated: `min`, `max`, `sum`, and `value_count`.
----
// TEST[skip:todo]

+You can now re-run the earlier aggregation. Even though it runs against the
+downsampled data stream, which contains only 288 documents, it returns the
+same results as it did on the original data stream: the pre-aggregated
+`min`, `max`, `sum`, and `value_count` values stored in each downsampled
+document preserve enough information to answer the query.
+
+[source,console]
+----
+GET /sample-01*/_search
+{
+ "size": 0,
+ "aggs": {
+ "tsid": {
+ "terms": {
+ "field": "_tsid"
+ },
+ "aggs": {
+ "over_time": {
+ "date_histogram": {
+ "field": "@timestamp",
+ "fixed_interval": "1d"
+ },
+ "aggs": {
+ "min": {
+ "min": {
+ "field": "kubernetes.container.memory.usage.bytes"
+ }
+ },
+ "max": {
+ "max": {
+ "field": "kubernetes.container.memory.usage.bytes"
+ }
+ },
+ "avg": {
+ "avg": {
+ "field": "kubernetes.container.memory.usage.bytes"
+ }
+ }
+ }
+ }
+ }
+ }
+ }
+}
+----
+// TEST[continued]
+
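+To verify the size of the downsampled data stream, you can also check its
+document count. This is a minimal check that reuses the `sample-01*` pattern
+from the query above; because the original data stream was deleted earlier,
+the pattern matches only the downsampled data:
+
+[source,console]
+----
+GET /sample-01*/_count
+----
+// TEST[continued]
+
+The `count` value in the response should match the 288 documents mentioned
+above.
+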
This example demonstrates how downsampling can dramatically reduce the number of
records stored for time series data, within whatever time boundaries you choose.
It's also possible to perform downsampling on already downsampled data, to