[DOCS] Downsampling code snippet formatting (#92981)

Abdon Pijpelink 2 years ago
parent commit 64ce4d1189

+ 27 - 12
docs/reference/data-streams/downsampling-ilm.asciidoc

@@ -292,7 +292,8 @@ GET _data_stream
 If the ILM policy has not yet been applied, your results will be like the
 following. Note the original `index_name`: `.ds-datastream-<timestamp>-000001`.
 
-```
+[source,console-result]
+----
 {
   "data_streams": [
     {
@@ -329,7 +330,9 @@ following. Note the original `index_name`: `.ds-datastream-<timestamp>-000001`.
     }
   ]
 }
-```
+----
+// TEST[skip:todo]
+// TEST[continued]
 
 Next, run a search query:
 
@@ -341,7 +344,8 @@ GET datastream/_search
 
 The query returns your ten newly added documents.
 
-```
+[source,console-result]
+----
 {
   "took": 17,
   "timed_out": false,
@@ -357,7 +361,9 @@ The query returns your ten newly added documents.
       "relation": "eq"
     },
 ...
-```
+----
+// TEST[skip:todo]
+// TEST[continued]
 
 By default, index lifecycle management checks every ten minutes for indices that
 meet policy criteria. Wait for about ten minutes (maybe brew up a quick coffee
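
A note on the ten-minute wait above: it reflects ILM's default poll interval, which can be shortened for local testing through the `indices.lifecycle.poll_interval` cluster setting. A minimal sketch, not part of the commit (the `1m` value is illustrative only):

[source,console]
----
PUT _cluster/settings
{
  "persistent": {
    "indices.lifecycle.poll_interval": "1m"
  }
}
----
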
@@ -373,7 +379,8 @@ After the ILM policy has taken effect, the original
 `.ds-datastream-2022.08.26-000001` index is replaced with a new, downsampled
 index, in this case `downsample-6tkn-.ds-datastream-2022.08.26-000001`.
 
-```
+[source,console-result]
+----
 {
   "data_streams": [
     {
@@ -392,7 +399,9 @@ index, in this case `downsample-6tkn-.ds-datastream-2022.08.26-000001`.
         }
       ],
 ...
-```
+----
+// TEST[skip:todo]
+// TEST[continued]
 
 Run a search query on the datastream.
 
@@ -400,13 +409,14 @@ Run a search query on the datastream.
 ----
 GET datastream/_search
 ----
-// TEST[skip: The @timestamp value won't match an accepted range in the TSDS]
+// TEST[continued]
 
 The new downsampled index contains just one document that includes the `min`,
 `max`, `sum`, and `value_count` statistics based off of the original sampled
 metrics.
 
-```
+[source,console-result]
+----
 {
   "took": 6,
   "timed_out": false,
@@ -483,7 +493,9 @@ metrics.
     ]
   }
 }
-```
+----
+// TEST[skip:todo]
+// TEST[continued]
 
 Use the <<data-stream-stats-api,data stream stats API>> to get statistics for
 the data stream, including the storage size.
@@ -492,9 +504,10 @@ the data stream, including the storage size.
 ----
 GET /_data_stream/datastream/_stats?human=true
 ----
-// TEST[skip: The @timestamp value won't match an accepted range in the TSDS]
+// TEST[continued]
 
-```
+[source,console-result]
+----
 {
   "_shards": {
     "total": 4,
@@ -515,7 +528,9 @@ GET /_data_stream/datastream/_stats?human=true
     }
   ]
 }
-```
+----
+// TEST[skip:todo]
+// TEST[continued]
 
 This example demonstrates how downsampling works as part of an ILM policy to
 reduce the storage size of metrics data as it becomes less current and less

+ 17 - 14
docs/reference/data-streams/downsampling-manual.asciidoc

@@ -200,7 +200,6 @@ PUT /sample-01
 }
 
 ----
-// TEST
 
 [discrete]
 [[downsampling-manual-ingest-data]]
@@ -209,14 +208,13 @@ PUT /sample-01
 In a terminal window with {es} running, run the following curl command to load
 the documents from the downloaded sample data file:
 
-//[source,console]
-//----
-```
+[source,sh]
+----
 curl -s -H "Content-Type: application/json" \
    -XPOST http://<elasticsearch-node>/sample-01/_bulk?pretty \
    --data-binary @sample-k8s-metrics.json
-```
-//----
+----
+// NOTCONSOLE
 
 Approximately 18,000 documents are added. Check the search results for the newly
 ingested data:
@@ -227,11 +225,12 @@ GET /sample-01*/_search
 ----
 // TEST[continued]
 
-The query should return the first 10,000 hits. In each document you can see the
-time series dimensions (`host`, `node`, `pod` and `container`) as well as the
-various CPU and memory time series metrics.
+The query has at least 10,000 hits and returns the first 10. In each document
+you can see the time series dimensions (`host`, `node`, `pod` and `container`)
+as well as the various CPU and memory time series metrics.
 
-```
+[source,console-result]
+----
   "hits": {
     "total": {
       "value": 10000,
@@ -294,7 +293,9 @@ various CPU and memory time series metrics.
         }
       }
 ...
-```
+----
+// TEST[skip:todo]
+// TEST[continued]
 
 Next, run a terms aggregation on the set of time series dimensions (`_tsid`) to
 create a date histogram on a fixed interval of one day.
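
The aggregation described above can be sketched as a `terms` aggregation on the `_tsid` metadata field with a `date_histogram` sub-aggregation. This is an illustrative request, not part of the commit; the index pattern and bucket names are assumed from the surrounding example:

[source,console]
----
GET /sample-01*/_search?size=0
{
  "aggs": {
    "time_series": {
      "terms": {
        "field": "_tsid"
      },
      "aggs": {
        "per_day": {
          "date_histogram": {
            "field": "@timestamp",
            "fixed_interval": "1d"
          }
        }
      }
    }
  }
}
----
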
@@ -393,11 +394,12 @@ GET /sample-01*/_search
 ----
 // TEST[continued]
 
-In the query results, notice that the numer of hits has been reduced to only 288
+In the query results, notice that the number of hits has been reduced to only 288
 documents. As well, for each time series metric statistical representations have
 been calculated: `min`, `max`, `sum`, and `value_count`.
 
-```
+[source,console-result]
+----
   "hits": {
     "total": {
       "value": 288,
@@ -455,7 +457,8 @@ been calculated: `min`, `max`, `sum`, and `value_count`.
         }
       },
 ...
-```
+----
+// TEST[skip:todo]
 
 This example demonstrates how downsampling can dramatically reduce the number of
 records stored for time series data, within whatever time boundaries you choose.
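
The reduction from roughly 18,000 documents to 288 comes from a manual downsample step of the same shape as the Downsample API call shown elsewhere in this commit; a sketch, with the source and target index names assumed for illustration:

[source,console]
----
POST /sample-01/_downsample/sample-01-downsampled
{
  "fixed_interval": "1h"
}
----
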

+ 4 - 8
docs/reference/data-streams/downsampling.asciidoc

@@ -72,18 +72,14 @@ To downsample a time series index, use the
 <<indices-downsample-data-stream,Downsample API>> and set `fixed_interval` to
 the level of granularity that you'd like:
 
-```
-POST /<source_index>/_downsample/<new_index>
-{
-    "fixed_interval": "1d"
-}
-```
+include::../indices/downsample-data-stream.asciidoc[tag=downsample-example]
 
 To downsample time series data as part of ILM, include a
 <<ilm-downsample,Downsample action>> in your ILM policy and set `fixed_interval`
 to the level of granularity that you'd like:
 
-```
+[source,console]
+----
 PUT _ilm/policy/my_policy
 {
   "policy": {
@@ -98,7 +94,7 @@ PUT _ilm/policy/my_policy
     }
   }
 }
-```
+----
 
 [discrete]
 [[querying-downsampled-indices]]

+ 2 - 0
docs/reference/indices/downsample-data-stream.asciidoc

@@ -14,6 +14,7 @@ a TSDS index that contains metrics sampled every 10 seconds can be downsampled
 to an hourly index. All documents within an hour interval are summarized and
 stored as a single document in the downsample index.
 
+// tag::downsample-example[]
 ////
 [source,console]
 ----
@@ -74,6 +75,7 @@ DELETE _index_template/*
 ----
 // TEST[continued]
 ////
+// end::downsample-example[]
 
 [[downsample-api-request]]
 ==== {api-request-title}