Browse Source

[DOCS] Replace "// TESTRESPONSE" magic comments with "[source,console-result] (#46295)

James Rodewig 6 years ago
parent
commit
466c59a4a7
100 changed files with 192 additions and 328 deletions
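Every hunk in this commit applies the same mechanical pattern: the snippet's language attribute changes from `js` (or `sh`) to `console-result`, and the trailing `// TESTRESPONSE` magic comment is removed. A representative before/after (the `acknowledged` body is taken from the `delete-auto-follow-pattern` hunk below; any other response body works the same way):

```asciidoc
// Before: response block typed as JavaScript, marked for testing via a magic comment
[source,js]
--------------------------------------------------
{
  "acknowledged" : true
}
--------------------------------------------------
// TESTRESPONSE

// After: the console-result language itself marks the block as a tested response
[source,console-result]
--------------------------------------------------
{
  "acknowledged" : true
}
--------------------------------------------------
```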
  1. docs/plugins/analysis-icu.asciidoc (+2 -2)
  2. docs/plugins/analysis-kuromoji.asciidoc (+9 -10)
  3. docs/plugins/analysis-nori.asciidoc (+6 -8)
  4. docs/plugins/analysis-smartcn.asciidoc (+1 -2)
  5. docs/plugins/analysis-stempel.asciidoc (+1 -2)
  6. docs/reference/aggregations/bucket/range-aggregation.asciidoc (+1 -2)
  7. docs/reference/analysis/analyzers/configuring.asciidoc (+1 -2)
  8. docs/reference/analysis/analyzers/custom-analyzer.asciidoc (+2 -4)
  9. docs/reference/analysis/analyzers/fingerprint-analyzer.asciidoc (+2 -4)
  10. docs/reference/analysis/analyzers/keyword-analyzer.asciidoc (+1 -2)
  11. docs/reference/analysis/analyzers/pattern-analyzer.asciidoc (+3 -6)
  12. docs/reference/analysis/analyzers/simple-analyzer.asciidoc (+1 -2)
  13. docs/reference/analysis/analyzers/standard-analyzer.asciidoc (+2 -4)
  14. docs/reference/analysis/analyzers/stop-analyzer.asciidoc (+2 -4)
  15. docs/reference/analysis/analyzers/whitespace-analyzer.asciidoc (+1 -2)
  16. docs/reference/analysis/charfilters/htmlstrip-charfilter.asciidoc (+2 -4)
  17. docs/reference/analysis/charfilters/mapping-charfilter.asciidoc (+3 -4)
  18. docs/reference/analysis/charfilters/pattern-replace-charfilter.asciidoc (+1 -2)
  19. docs/reference/analysis/tokenfilters/common-grams-tokenfilter.asciidoc (+1 -2)
  20. docs/reference/analysis/tokenfilters/condition-tokenfilter.asciidoc (+2 -2)
  21. docs/reference/analysis/tokenfilters/keep-types-tokenfilter.asciidoc (+2 -4)
  22. docs/reference/analysis/tokenfilters/keyword-marker-tokenfilter.asciidoc (+2 -4)
  23. docs/reference/analysis/tokenfilters/keyword-repeat-tokenfilter.asciidoc (+1 -2)
  24. docs/reference/analysis/tokenfilters/multiplexer-tokenfilter.asciidoc (+1 -2)
  25. docs/reference/analysis/tokenfilters/predicate-tokenfilter.asciidoc (+1 -2)
  26. docs/reference/analysis/tokenizers/chargroup-tokenizer.asciidoc (+1 -3)
  27. docs/reference/analysis/tokenizers/classic-tokenizer.asciidoc (+2 -4)
  28. docs/reference/analysis/tokenizers/edgengram-tokenizer.asciidoc (+2 -4)
  29. docs/reference/analysis/tokenizers/keyword-tokenizer.asciidoc (+1 -2)
  30. docs/reference/analysis/tokenizers/letter-tokenizer.asciidoc (+1 -2)
  31. docs/reference/analysis/tokenizers/lowercase-tokenizer.asciidoc (+1 -2)
  32. docs/reference/analysis/tokenizers/ngram-tokenizer.asciidoc (+2 -4)
  33. docs/reference/analysis/tokenizers/pathhierarchy-tokenizer.asciidoc (+2 -4)
  34. docs/reference/analysis/tokenizers/pattern-tokenizer.asciidoc (+3 -6)
  35. docs/reference/analysis/tokenizers/simplepattern-tokenizer.asciidoc (+1 -2)
  36. docs/reference/analysis/tokenizers/simplepatternsplit-tokenizer.asciidoc (+1 -2)
  37. docs/reference/analysis/tokenizers/standard-tokenizer.asciidoc (+2 -4)
  38. docs/reference/analysis/tokenizers/thai-tokenizer.asciidoc (+1 -2)
  39. docs/reference/analysis/tokenizers/uaxurlemail-tokenizer.asciidoc (+2 -4)
  40. docs/reference/analysis/tokenizers/whitespace-tokenizer.asciidoc (+1 -2)
  41. docs/reference/api-conventions.asciidoc (+6 -12)
  42. docs/reference/ccr/apis/auto-follow/delete-auto-follow-pattern.asciidoc (+1 -2)
  43. docs/reference/ccr/apis/auto-follow/get-auto-follow-pattern.asciidoc (+1 -2)
  44. docs/reference/ccr/apis/auto-follow/put-auto-follow-pattern.asciidoc (+1 -2)
  45. docs/reference/ccr/apis/follow-request-body.asciidoc (+1 -2)
  46. docs/reference/ccr/apis/follow/get-follow-info.asciidoc (+1 -2)
  47. docs/reference/ccr/apis/follow/post-pause-follow.asciidoc (+1 -2)
  48. docs/reference/ccr/apis/follow/post-resume-follow.asciidoc (+1 -2)
  49. docs/reference/ccr/apis/follow/post-unfollow.asciidoc (+1 -2)
  50. docs/reference/ccr/apis/follow/put-follow.asciidoc (+1 -2)
  51. docs/reference/ccr/getting-started.asciidoc (+4 -7)
  52. docs/reference/cluster/tasks.asciidoc (+2 -4)
  53. docs/reference/data-frames/apis/delete-transform.asciidoc (+2 -2)
  54. docs/reference/data-frames/apis/get-transform-stats.asciidoc (+2 -2)
  55. docs/reference/data-frames/apis/get-transform.asciidoc (+2 -2)
  56. docs/reference/data-frames/apis/put-transform.asciidoc (+2 -2)
  57. docs/reference/data-frames/apis/start-transform.asciidoc (+2 -2)
  58. docs/reference/data-frames/apis/stop-transform.asciidoc (+2 -2)
  59. docs/reference/docs/delete-by-query.asciidoc (+4 -6)
  60. docs/reference/docs/reindex.asciidoc (+4 -6)
  61. docs/reference/docs/termvectors.asciidoc (+1 -2)
  62. docs/reference/docs/update-by-query.asciidoc (+5 -10)
  63. docs/reference/docs/update.asciidoc (+1 -2)
  64. docs/reference/ilm/apis/delete-lifecycle.asciidoc (+1 -3)
  65. docs/reference/ilm/apis/get-status.asciidoc (+1 -3)
  66. docs/reference/ilm/apis/move-to-step.asciidoc (+1 -3)
  67. docs/reference/ilm/apis/put-lifecycle.asciidoc (+2 -3)
  68. docs/reference/ilm/apis/remove-policy-from-index.asciidoc (+1 -3)
  69. docs/reference/ilm/apis/start.asciidoc (+1 -3)
  70. docs/reference/ilm/apis/stop.asciidoc (+1 -3)
  71. docs/reference/ilm/start-stop-ilm.asciidoc (+4 -12)
  72. docs/reference/indices/analyze.asciidoc (+2 -2)
  73. docs/reference/indices/close.asciidoc (+1 -2)
  74. docs/reference/indices/create-index.asciidoc (+1 -2)
  75. docs/reference/indices/get-alias.asciidoc (+4 -6)
  76. docs/reference/indices/get-field-mapping.asciidoc (+3 -6)
  77. docs/reference/indices/open-close.asciidoc (+1 -2)
  78. docs/reference/indices/recovery.asciidoc (+1 -2)
  79. docs/reference/indices/rollover-index.asciidoc (+5 -8)
  80. docs/reference/ingest/apis/delete-pipeline.asciidoc (+2 -4)
  81. docs/reference/ingest/apis/get-pipeline.asciidoc (+3 -6)
  82. docs/reference/ingest/apis/put-pipeline.asciidoc (+1 -2)
  83. docs/reference/ingest/ingest-node.asciidoc (+1 -2)
  84. docs/reference/mapping.asciidoc (+2 -4)
  85. docs/reference/mapping/removal_of_types.asciidoc (+4 -6)
  86. docs/reference/mapping/types/percolator.asciidoc (+1 -2)
  87. docs/reference/migration/migrate_8_0/snapshots.asciidoc (+2 -3)
  88. docs/reference/ml/anomaly-detection/apis/close-job.asciidoc (+2 -2)
  89. docs/reference/ml/anomaly-detection/apis/delete-calendar-job.asciidoc (+1 -2)
  90. docs/reference/ml/anomaly-detection/apis/delete-calendar.asciidoc (+2 -2)
  91. docs/reference/ml/anomaly-detection/apis/delete-datafeed.asciidoc (+2 -2)
  92. docs/reference/ml/anomaly-detection/apis/delete-expired-data.asciidoc (+2 -2)
  93. docs/reference/ml/anomaly-detection/apis/delete-filter.asciidoc (+2 -2)
  94. docs/reference/ml/anomaly-detection/apis/delete-job.asciidoc (+2 -2)
  95. docs/reference/ml/anomaly-detection/apis/delete-snapshot.asciidoc (+2 -2)
  96. docs/reference/ml/anomaly-detection/apis/flush-job.asciidoc (+2 -2)
  97. docs/reference/ml/anomaly-detection/apis/get-calendar.asciidoc (+2 -2)
  98. docs/reference/ml/anomaly-detection/apis/get-filter.asciidoc (+2 -2)
  99. docs/reference/ml/anomaly-detection/apis/open-job.asciidoc (+2 -2)
  100. docs/reference/ml/anomaly-detection/apis/post-calendar-event.asciidoc (+1 -2)

+ 2 - 2
docs/plugins/analysis-icu.asciidoc

@@ -185,7 +185,7 @@ GET icu_sample/_analyze
 
 The above `analyze` request returns the following:
 
-[source,js]
+[source,console-result]
 --------------------------------------------------
 {
    "tokens": [
@@ -199,7 +199,7 @@ The above `analyze` request returns the following:
    ]
 }
 --------------------------------------------------
-// TESTRESPONSE
+
 
 [[analysis-icu-normalization]]
 ==== ICU Normalization Token Filter

+ 9 - 10
docs/plugins/analysis-kuromoji.asciidoc

@@ -191,7 +191,7 @@ GET kuromoji_sample/_analyze
 
 The above `analyze` request returns the following:
 
-[source,js]
+[source,console-result]
 --------------------------------------------------
 {
   "tokens" : [ {
@@ -209,7 +209,7 @@ The above `analyze` request returns the following:
   } ]
 }
 --------------------------------------------------
-// TESTRESPONSE
+
 
 [[analysis-kuromoji-baseform]]
 ==== `kuromoji_baseform` token filter
@@ -247,7 +247,7 @@ GET kuromoji_sample/_analyze
 
 which responds with:
 
-[source,js]
+[source,console-result]
 --------------------------------------------------
 {
   "tokens" : [ {
@@ -259,7 +259,7 @@ which responds with:
   } ]
 }
 --------------------------------------------------
-// TESTRESPONSE
+
 
 [[analysis-kuromoji-speech]]
 ==== `kuromoji_part_of_speech` token filter
@@ -313,7 +313,7 @@ GET kuromoji_sample/_analyze
 
 Which responds with:
 
-[source,js]
+[source,console-result]
 --------------------------------------------------
 {
   "tokens" : [ {
@@ -331,7 +331,7 @@ Which responds with:
   } ]
 }
 --------------------------------------------------
-// TESTRESPONSE
+
 
 [[analysis-kuromoji-readingform]]
 ==== `kuromoji_readingform` token filter
@@ -504,7 +504,7 @@ GET kuromoji_sample/_analyze
 
 The above request returns:
 
-[source,js]
+[source,console-result]
 --------------------------------------------------
 {
   "tokens" : [ {
@@ -516,7 +516,7 @@ The above request returns:
   } ]
 }
 --------------------------------------------------
-// TESTRESPONSE
+
 
 [[analysis-kuromoji-number]]
 ==== `kuromoji_number` token filter
@@ -554,7 +554,7 @@ GET kuromoji_sample/_analyze
 
 Which results in:
 
-[source,js]
+[source,console-result]
 --------------------------------------------------
 {
   "tokens" : [ {
@@ -566,4 +566,3 @@ Which results in:
   } ]
 }
 --------------------------------------------------
-// TESTRESPONSE

+ 6 - 8
docs/plugins/analysis-nori.asciidoc

@@ -125,7 +125,7 @@ GET nori_sample/_analyze
 
 The above `analyze` request returns the following:
 
-[source,js]
+[source,console-result]
 --------------------------------------------------
 {
   "tokens" : [ {
@@ -150,7 +150,6 @@ The above `analyze` request returns the following:
    }]
 }
 --------------------------------------------------
-// TESTRESPONSE
 
 <1> This is a compound token that spans two positions (`mixed` mode).
 --
@@ -210,7 +209,7 @@ GET _analyze
 
 Which responds with:
 
-[source,js]
+[source,console-result]
 --------------------------------------------------
 {
     "detail": {
@@ -297,7 +296,7 @@ Which responds with:
     }
 }
 --------------------------------------------------
-// TESTRESPONSE
+
 
 [[analysis-nori-speech]]
 ==== `nori_part_of_speech` token filter
@@ -371,7 +370,7 @@ GET nori_sample/_analyze
 
 Which responds with:
 
-[source,js]
+[source,console-result]
 --------------------------------------------------
 {
   "tokens" : [ {
@@ -389,7 +388,7 @@ Which responds with:
   } ]
 }
 --------------------------------------------------
-// TESTRESPONSE
+
 
 [[analysis-nori-readingform]]
 ==== `nori_readingform` token filter
@@ -426,7 +425,7 @@ GET nori_sample/_analyze
 
 Which responds with:
 
-[source,js]
+[source,console-result]
 --------------------------------------------------
 {
   "tokens" : [ {
@@ -438,6 +437,5 @@ Which responds with:
   }]
 }
 --------------------------------------------------
-// TESTRESPONSE
 
 <1> The Hanja form is replaced by the Hangul translation.

+ 1 - 2
docs/plugins/analysis-smartcn.asciidoc

@@ -99,7 +99,7 @@ GET smartcn_example/_analyze
 
 The above request returns:
 
-[source,js]
+[source,console-result]
 --------------------------------------------------
 {
     "tokens": [
@@ -428,4 +428,3 @@ The above request returns:
     ]
 }
 --------------------------------------------------
-// TESTRESPONSE

+ 1 - 2
docs/plugins/analysis-stempel.asciidoc

@@ -94,7 +94,7 @@ GET polish_stop_example/_analyze
 
 The above request returns:
 
-[source,js]
+[source,console-result]
 --------------------------------------------------
 {
   "tokens" : [
@@ -115,4 +115,3 @@ The above request returns:
   ]
 }
 --------------------------------------------------
-// TESTRESPONSE

+ 1 - 2
docs/reference/aggregations/bucket/range-aggregation.asciidoc

@@ -242,7 +242,7 @@ GET /_search
 
 //////////////////////////
 
-[source,js]
+[source,console-result]
 --------------------------------------------------
 {
     "aggregations": {
@@ -264,7 +264,6 @@ GET /_search
     }
 }
 --------------------------------------------------
-// TESTRESPONSE
 
 //////////////////////////
 

+ 1 - 2
docs/reference/analysis/analyzers/configuring.asciidoc

@@ -63,7 +63,7 @@ POST my_index/_analyze
 
 /////////////////////
 
-[source,js]
+[source,console-result]
 ----------------------------
 {
   "tokens": [
@@ -91,6 +91,5 @@ POST my_index/_analyze
   ]
 }
 ----------------------------
-// TESTRESPONSE
 
 /////////////////////

+ 2 - 4
docs/reference/analysis/analyzers/custom-analyzer.asciidoc

@@ -89,7 +89,7 @@ POST my_index/_analyze
 
 /////////////////////
 
-[source,js]
+[source,console-result]
 ----------------------------
 {
   "tokens": [
@@ -124,7 +124,6 @@ POST my_index/_analyze
   ]
 }
 ----------------------------
-// TESTRESPONSE
 
 /////////////////////
 
@@ -216,7 +215,7 @@ are defined later in the request.
 
 /////////////////////
 
-[source,js]
+[source,console-result]
 ----------------------------
 {
   "tokens": [
@@ -251,7 +250,6 @@ are defined later in the request.
   ]
 }
 ----------------------------
-// TESTRESPONSE
 
 /////////////////////
 

+ 2 - 4
docs/reference/analysis/analyzers/fingerprint-analyzer.asciidoc

@@ -24,7 +24,7 @@ POST _analyze
 
 /////////////////////
 
-[source,js]
+[source,console-result]
 ----------------------------
 {
   "tokens": [
@@ -38,7 +38,6 @@ POST _analyze
   ]
 }
 ----------------------------
-// TESTRESPONSE
 
 /////////////////////
 
@@ -110,7 +109,7 @@ POST my_index/_analyze
 
 /////////////////////
 
-[source,js]
+[source,console-result]
 ----------------------------
 {
   "tokens": [
@@ -124,7 +123,6 @@ POST my_index/_analyze
   ]
 }
 ----------------------------
-// TESTRESPONSE
 
 /////////////////////
 

+ 1 - 2
docs/reference/analysis/analyzers/keyword-analyzer.asciidoc

@@ -19,7 +19,7 @@ POST _analyze
 
 /////////////////////
 
-[source,js]
+[source,console-result]
 ----------------------------
 {
   "tokens": [
@@ -33,7 +33,6 @@ POST _analyze
   ]
 }
 ----------------------------
-// TESTRESPONSE
 
 /////////////////////
 

+ 3 - 6
docs/reference/analysis/analyzers/pattern-analyzer.asciidoc

@@ -34,7 +34,7 @@ POST _analyze
 
 /////////////////////
 
-[source,js]
+[source,console-result]
 ----------------------------
 {
   "tokens": [
@@ -125,7 +125,6 @@ POST _analyze
   ]
 }
 ----------------------------
-// TESTRESPONSE
 
 /////////////////////
 
@@ -205,7 +204,7 @@ POST my_index/_analyze
 
 /////////////////////
 
-[source,js]
+[source,console-result]
 ----------------------------
 {
   "tokens": [
@@ -247,7 +246,6 @@ POST my_index/_analyze
   ]
 }
 ----------------------------
-// TESTRESPONSE
 
 /////////////////////
 
@@ -290,7 +288,7 @@ GET my_index/_analyze
 
 /////////////////////
 
-[source,js]
+[source,console-result]
 ----------------------------
 {
   "tokens": [
@@ -339,7 +337,6 @@ GET my_index/_analyze
   ]
 }
 ----------------------------
-// TESTRESPONSE
 
 /////////////////////
 

+ 1 - 2
docs/reference/analysis/analyzers/simple-analyzer.asciidoc

@@ -19,7 +19,7 @@ POST _analyze
 
 /////////////////////
 
-[source,js]
+[source,console-result]
 ----------------------------
 {
   "tokens": [
@@ -103,7 +103,6 @@ POST _analyze
   ]
 }
 ----------------------------
-// TESTRESPONSE
 
 /////////////////////
 

+ 2 - 4
docs/reference/analysis/analyzers/standard-analyzer.asciidoc

@@ -22,7 +22,7 @@ POST _analyze
 
 /////////////////////
 
-[source,js]
+[source,console-result]
 ----------------------------
 {
   "tokens": [
@@ -106,7 +106,6 @@ POST _analyze
   ]
 }
 ----------------------------
-// TESTRESPONSE
 
 /////////////////////
 
@@ -176,7 +175,7 @@ POST my_index/_analyze
 
 /////////////////////
 
-[source,js]
+[source,console-result]
 ----------------------------
 {
   "tokens": [
@@ -253,7 +252,6 @@ POST my_index/_analyze
   ]
 }
 ----------------------------
-// TESTRESPONSE
 
 /////////////////////
 

+ 2 - 4
docs/reference/analysis/analyzers/stop-analyzer.asciidoc

@@ -20,7 +20,7 @@ POST _analyze
 
 /////////////////////
 
-[source,js]
+[source,console-result]
 ----------------------------
 {
   "tokens": [
@@ -90,7 +90,6 @@ POST _analyze
   ]
 }
 ----------------------------
-// TESTRESPONSE
 
 /////////////////////
 
@@ -154,7 +153,7 @@ POST my_index/_analyze
 
 /////////////////////
 
-[source,js]
+[source,console-result]
 ----------------------------
 {
   "tokens": [
@@ -217,7 +216,6 @@ POST my_index/_analyze
   ]
 }
 ----------------------------
-// TESTRESPONSE
 
 /////////////////////
 

+ 1 - 2
docs/reference/analysis/analyzers/whitespace-analyzer.asciidoc

@@ -19,7 +19,7 @@ POST _analyze
 
 /////////////////////
 
-[source,js]
+[source,console-result]
 ----------------------------
 {
   "tokens": [
@@ -96,7 +96,6 @@ POST _analyze
   ]
 }
 ----------------------------
-// TESTRESPONSE
 
 /////////////////////
 

+ 2 - 4
docs/reference/analysis/charfilters/htmlstrip-charfilter.asciidoc

@@ -22,7 +22,7 @@ POST _analyze
 
 /////////////////////
 
-[source,js]
+[source,console-result]
 ----------------------------
 {
   "tokens": [
@@ -36,7 +36,6 @@ POST _analyze
   ]
 }
 ----------------------------
-// TESTRESPONSE
 
 /////////////////////
 
@@ -103,7 +102,7 @@ POST my_index/_analyze
 
 /////////////////////
 
-[source,js]
+[source,console-result]
 ----------------------------
 {
   "tokens": [
@@ -117,7 +116,6 @@ POST my_index/_analyze
   ]
 }
 ----------------------------
-// TESTRESPONSE
 
 /////////////////////
 

+ 3 - 4
docs/reference/analysis/charfilters/mapping-charfilter.asciidoc

@@ -76,7 +76,7 @@ POST my_index/_analyze
 
 /////////////////////
 
-[source,js]
+[source,console-result]
 ----------------------------
 {
   "tokens": [
@@ -90,7 +90,6 @@ POST my_index/_analyze
   ]
 }
 ----------------------------
-// TESTRESPONSE
 
 /////////////////////
 
@@ -143,7 +142,7 @@ POST my_index/_analyze
 
 /////////////////////
 
-[source,js]
+[source,console-result]
 ----------------------------
 {
   "tokens": [
@@ -185,7 +184,7 @@ POST my_index/_analyze
   ]
 }
 ----------------------------
-// TESTRESPONSE
+
 
 /////////////////////
 

+ 1 - 2
docs/reference/analysis/charfilters/pattern-replace-charfilter.asciidoc

@@ -144,7 +144,7 @@ POST my_index/_analyze
 
 /////////////////////
 
-[source,js]
+[source,console-result]
 ----------------------------
 {
   "tokens": [
@@ -186,7 +186,6 @@ POST my_index/_analyze
   ]
 }
 ----------------------------
-// TESTRESPONSE
 
 /////////////////////
 

+ 1 - 2
docs/reference/analysis/tokenfilters/common-grams-tokenfilter.asciidoc

@@ -87,7 +87,7 @@ POST /common_grams_example/_analyze
 
 And the response will be:
 
-[source,js]
+[source,console-result]
 --------------------------------------------------
 {
   "tokens" : [
@@ -168,4 +168,3 @@ And the response will be:
   ]
 }
 --------------------------------------------------
-// TESTRESPONSE

+ 2 - 2
docs/reference/analysis/tokenfilters/condition-tokenfilter.asciidoc

@@ -63,7 +63,7 @@ POST /condition_example/_analyze
 
 And it'd respond:
 
-[source,js]
+[source,console-result]
 --------------------------------------------------
 {
   "tokens": [
@@ -84,7 +84,7 @@ And it'd respond:
   ]
 }
 --------------------------------------------------
-// TESTRESPONSE
+
 <1> The term `What` has been lowercased, because it is only 4 characters long
 <2> The term `Flapdoodle` has been left in its original case, because it doesn't pass
     the predicate

+ 2 - 4
docs/reference/analysis/tokenfilters/keep-types-tokenfilter.asciidoc

@@ -56,7 +56,7 @@ POST /keep_types_example/_analyze
 
 The response will be:
 
-[source,js]
+[source,console-result]
 --------------------------------------------------
 {
   "tokens": [
@@ -70,7 +70,6 @@ The response will be:
   ]
 }
 --------------------------------------------------
-// TESTRESPONSE
 
 Note how only the `<NUM>` token is in the output.
 
@@ -118,7 +117,7 @@ POST /keep_types_exclude_example/_analyze
 
 The response will be:
 
-[source,js]
+[source,console-result]
 --------------------------------------------------
 {
   "tokens": [
@@ -139,4 +138,3 @@ The response will be:
   ]
 }
 --------------------------------------------------
-// TESTRESPONSE

+ 2 - 4
docs/reference/analysis/tokenfilters/keyword-marker-tokenfilter.asciidoc

@@ -66,7 +66,7 @@ POST /keyword_marker_example/_analyze
 
 And it'd respond:
 
-[source,js]
+[source,console-result]
 --------------------------------------------------
 {
   "tokens": [
@@ -94,7 +94,6 @@ And it'd respond:
   ]
 }
 --------------------------------------------------
-// TESTRESPONSE
 
 As compared to the `normal` analyzer which has `cats` stemmed to `cat`:
 
@@ -111,7 +110,7 @@ POST /keyword_marker_example/_analyze
 
 Response:
 
-[source,js]
+[source,console-result]
 --------------------------------------------------
 {
   "tokens": [
@@ -139,4 +138,3 @@ Response:
   ]
 }
 --------------------------------------------------
-// TESTRESPONSE

+ 1 - 2
docs/reference/analysis/tokenfilters/keyword-repeat-tokenfilter.asciidoc

@@ -52,7 +52,7 @@ POST /keyword_repeat_example/_analyze
 
 And it'd respond:
 
-[source,js]
+[source,console-result]
 --------------------------------------------------
 {
   "tokens": [
@@ -87,7 +87,6 @@ And it'd respond:
   ]
 }
 --------------------------------------------------
-// TESTRESPONSE
 
 Which preserves both the `cat` and `cats` tokens. Compare this to the example
 on the <<analysis-keyword-marker-tokenfilter>>.

+ 1 - 2
docs/reference/analysis/tokenfilters/multiplexer-tokenfilter.asciidoc

@@ -68,7 +68,7 @@ POST /multiplexer_example/_analyze
 
 And it'd respond:
 
-[source,js]
+[source,console-result]
 --------------------------------------------------
 {
   "tokens": [
@@ -110,7 +110,6 @@ And it'd respond:
   ]
 }
 --------------------------------------------------
-// TESTRESPONSE
 
 <1> The stemmer has also emitted a token `home` at position 1, but because it is a
 duplicate of this token it has been removed from the token stream

+ 1 - 2
docs/reference/analysis/tokenfilters/predicate-tokenfilter.asciidoc

@@ -58,7 +58,7 @@ POST /condition_example/_analyze
 
 And it'd respond:
 
-[source,js]
+[source,console-result]
 --------------------------------------------------
 {
   "tokens": [
@@ -72,7 +72,6 @@ And it'd respond:
   ]
 }
 --------------------------------------------------
-// TESTRESPONSE
 
 <1> The token 'What' has been removed from the tokenstream because it does not
 match the predicate.

+ 1 - 3
docs/reference/analysis/tokenizers/chargroup-tokenizer.asciidoc

@@ -41,7 +41,7 @@ POST _analyze
 
 returns
 
-[source,js]
+[source,console-result]
 ---------------------------
 {
   "tokens": [
@@ -76,5 +76,3 @@ returns
   ]
 }
 ---------------------------
-// TESTRESPONSE
-

+ 2 - 4
docs/reference/analysis/tokenizers/classic-tokenizer.asciidoc

@@ -30,7 +30,7 @@ POST _analyze
 
 /////////////////////
 
-[source,js]
+[source,console-result]
 ----------------------------
 {
   "tokens": [
@@ -114,7 +114,6 @@ POST _analyze
   ]
 }
 ----------------------------
-// TESTRESPONSE
 
 /////////////////////
 
@@ -174,7 +173,7 @@ POST my_index/_analyze
 
 /////////////////////
 
-[source,js]
+[source,console-result]
 ----------------------------
 {
   "tokens": [
@@ -251,7 +250,6 @@ POST my_index/_analyze
   ]
 }
 ----------------------------
-// TESTRESPONSE
 
 /////////////////////
 

+ 2 - 4
docs/reference/analysis/tokenizers/edgengram-tokenizer.asciidoc

@@ -33,7 +33,7 @@ POST _analyze
 
 /////////////////////
 
-[source,js]
+[source,console-result]
 ----------------------------
 {
   "tokens": [
@@ -54,7 +54,6 @@ POST _analyze
   ]
 }
 ----------------------------
-// TESTRESPONSE
 
 /////////////////////
 
@@ -138,7 +137,7 @@ POST my_index/_analyze
 
 /////////////////////
 
-[source,js]
+[source,console-result]
 ----------------------------
 {
   "tokens": [
@@ -201,7 +200,6 @@ POST my_index/_analyze
   ]
 }
 ----------------------------
-// TESTRESPONSE
 
 /////////////////////
 

+ 1 - 2
docs/reference/analysis/tokenizers/keyword-tokenizer.asciidoc

@@ -20,7 +20,7 @@ POST _analyze
 
 /////////////////////
 
-[source,js]
+[source,console-result]
 ----------------------------
 {
   "tokens": [
@@ -34,7 +34,6 @@ POST _analyze
   ]
 }
 ----------------------------
-// TESTRESPONSE
 
 /////////////////////
 

+ 1 - 2
docs/reference/analysis/tokenizers/letter-tokenizer.asciidoc

@@ -21,7 +21,7 @@ POST _analyze
 
 /////////////////////
 
-[source,js]
+[source,console-result]
 ----------------------------
 {
   "tokens": [
@@ -105,7 +105,6 @@ POST _analyze
   ]
 }
 ----------------------------
-// TESTRESPONSE
 
 /////////////////////
 

+ 1 - 2
docs/reference/analysis/tokenizers/lowercase-tokenizer.asciidoc

@@ -26,7 +26,7 @@ POST _analyze
 
 /////////////////////
 
-[source,js]
+[source,console-result]
 ----------------------------
 {
   "tokens": [
@@ -110,7 +110,6 @@ POST _analyze
   ]
 }
 ----------------------------
-// TESTRESPONSE
 
 /////////////////////
 

+ 2 - 4
docs/reference/analysis/tokenizers/ngram-tokenizer.asciidoc

@@ -29,7 +29,7 @@ POST _analyze
 
 /////////////////////
 
-[source,js]
+[source,console-result]
 ----------------------------
 {
   "tokens": [
@@ -155,7 +155,6 @@ POST _analyze
   ]
 }
 ----------------------------
-// TESTRESPONSE
 
 /////////////////////
 
@@ -243,7 +242,7 @@ POST my_index/_analyze
 
 /////////////////////
 
-[source,js]
+[source,console-result]
 ----------------------------
 {
   "tokens": [
@@ -292,7 +291,6 @@ POST my_index/_analyze
   ]
 }
 ----------------------------
-// TESTRESPONSE
 
 /////////////////////
 

+ 2 - 4
docs/reference/analysis/tokenizers/pathhierarchy-tokenizer.asciidoc

@@ -20,7 +20,7 @@ POST _analyze
 
 /////////////////////
 
-[source,js]
+[source,console-result]
 ----------------------------
 {
   "tokens": [
@@ -48,7 +48,6 @@ POST _analyze
   ]
 }
 ----------------------------
-// TESTRESPONSE
 
 /////////////////////
 
@@ -124,7 +123,7 @@ POST my_index/_analyze
 
 /////////////////////
 
-[source,js]
+[source,console-result]
 ----------------------------
 {
   "tokens": [
@@ -152,7 +151,6 @@ POST my_index/_analyze
   ]
 }
 ----------------------------
-// TESTRESPONSE
 
 /////////////////////
 

+ 3 - 6
docs/reference/analysis/tokenizers/pattern-tokenizer.asciidoc

@@ -37,7 +37,7 @@ POST _analyze
 
 /////////////////////
 
-[source,js]
+[source,console-result]
 ----------------------------
 {
   "tokens": [
@@ -86,7 +86,6 @@ POST _analyze
   ]
 }
 ----------------------------
-// TESTRESPONSE
 
 /////////////////////
 
@@ -154,7 +153,7 @@ POST my_index/_analyze
 
 /////////////////////
 
-[source,js]
+[source,console-result]
 ----------------------------
 {
   "tokens": [
@@ -182,7 +181,6 @@ POST my_index/_analyze
   ]
 }
 ----------------------------
-// TESTRESPONSE
 
 /////////////////////
 
@@ -245,7 +243,7 @@ POST my_index/_analyze
 
 /////////////////////
 
-[source,js]
+[source,console-result]
 ----------------------------
 {
   "tokens": [
@@ -266,7 +264,6 @@ POST my_index/_analyze
   ]
 }
 ----------------------------
-// TESTRESPONSE
 
 /////////////////////
 

+ 1 - 2
docs/reference/analysis/tokenizers/simplepattern-tokenizer.asciidoc

@@ -65,7 +65,7 @@ POST my_index/_analyze
 
 /////////////////////
 
-[source,js]
+[source,console-result]
 ----------------------------
 {
   "tokens" : [
@@ -93,7 +93,6 @@ POST my_index/_analyze
   ]
 }
 ----------------------------
-// TESTRESPONSE
 
 /////////////////////
 

+ 1 - 2
docs/reference/analysis/tokenizers/simplepatternsplit-tokenizer.asciidoc

@@ -66,7 +66,7 @@ POST my_index/_analyze
 
 /////////////////////
 
-[source,js]
+[source,console-result]
 ----------------------------
 {
   "tokens" : [
@@ -94,7 +94,6 @@ POST my_index/_analyze
   ]
 }
 ----------------------------
-// TESTRESPONSE
 
 /////////////////////
 

+ 2 - 4
docs/reference/analysis/tokenizers/standard-tokenizer.asciidoc

@@ -21,7 +21,7 @@ POST _analyze
 
 /////////////////////
 
-[source,js]
+[source,console-result]
 ----------------------------
 {
   "tokens": [
@@ -105,7 +105,6 @@ POST _analyze
   ]
 }
 ----------------------------
-// TESTRESPONSE
 
 /////////////////////
 
@@ -165,7 +164,7 @@ POST my_index/_analyze
 
 /////////////////////
 
-[source,js]
+[source,console-result]
 ----------------------------
 {
   "tokens": [
@@ -256,7 +255,6 @@ POST my_index/_analyze
   ]
 }
 ----------------------------
-// TESTRESPONSE
 
 /////////////////////
 

+ 1 - 2
docs/reference/analysis/tokenizers/thai-tokenizer.asciidoc

@@ -25,7 +25,7 @@ POST _analyze
 
 /////////////////////
 
-[source,js]
+[source,console-result]
 ----------------------------
 {
   "tokens": [
@@ -88,7 +88,6 @@ POST _analyze
   ]
 }
 ----------------------------
-// TESTRESPONSE
 
 /////////////////////
 

+ 2 - 4
docs/reference/analysis/tokenizers/uaxurlemail-tokenizer.asciidoc

@@ -19,7 +19,7 @@ POST _analyze
 
 /////////////////////
 
-[source,js]
+[source,console-result]
 ----------------------------
 {
   "tokens": [
@@ -54,7 +54,6 @@ POST _analyze
   ]
 }
 ----------------------------
-// TESTRESPONSE
 
 /////////////////////
 
@@ -121,7 +120,7 @@ POST my_index/_analyze
 
 /////////////////////
 
-[source,js]
+[source,console-result]
 ----------------------------
 {
   "tokens": [
@@ -184,7 +183,6 @@ POST my_index/_analyze
   ]
 }
 ----------------------------
-// TESTRESPONSE
 
 /////////////////////
 

+ 1 - 2
docs/reference/analysis/tokenizers/whitespace-tokenizer.asciidoc

@@ -19,7 +19,7 @@ POST _analyze
 
 /////////////////////
 
-[source,js]
+[source,console-result]
 ----------------------------
 {
   "tokens": [
@@ -96,7 +96,6 @@ POST _analyze
   ]
 }
 ----------------------------
-// TESTRESPONSE
 
 /////////////////////
 

+ 6 - 12
docs/reference/api-conventions.asciidoc

@@ -251,7 +251,7 @@ GET /_cluster/state?filter_path=metadata.indices.*.stat*
 
 Responds:
 
-[source,sh]
+[source,console-result]
 --------------------------------------------------
 {
   "metadata" : {
@@ -261,7 +261,6 @@ Responds:
   }
 }
 --------------------------------------------------
-// TESTRESPONSE
 
 And the `**` wildcard can be used to include fields without knowing the
 exact path of the field. For example, we can return the Lucene version
@@ -276,7 +275,7 @@ GET /_cluster/state?filter_path=routing_table.indices.**.state
 
 Responds:
 
-[source,js]
+[source,console-result]
 --------------------------------------------------
 {
   "routing_table": {
@@ -290,7 +289,6 @@ Responds:
   }
 }
 --------------------------------------------------
-// TESTRESPONSE
 
 It is also possible to exclude one or more fields by prefixing the filter with the char `-`:
 
@@ -303,13 +301,12 @@ GET /_count?filter_path=-_shards
 
 Responds:
 
-[source,js]
+[source,console-result]
 --------------------------------------------------
 {
   "count" : 5
 }
 --------------------------------------------------
-// TESTRESPONSE
 
 And for more control, both inclusive and exclusive filters can be combined in the same expression. In
 this case, the exclusive filters will be applied first and the result will be filtered again using the
@@ -324,7 +321,7 @@ GET /_cluster/state?filter_path=metadata.indices.*.state,-metadata.indices.logst
 
 Responds:
 
-[source,js]
+[source,console-result]
 --------------------------------------------------
 {
   "metadata" : {
@@ -336,7 +333,6 @@ Responds:
   }
 }
 --------------------------------------------------
-// TESTRESPONSE
 
 Note that Elasticsearch sometimes returns directly the raw value of a field,
 like the `_source` field. If you want to filter `_source` fields, you should
@@ -356,7 +352,7 @@ GET /_search?filter_path=hits.hits._source&_source=title&sort=rating:desc
 --------------------------------------------------
 // CONSOLE
 
-[source,js]
+[source,console-result]
 --------------------------------------------------
 {
   "hits" : {
@@ -370,7 +366,6 @@ GET /_search?filter_path=hits.hits._source&_source=title&sort=rating:desc
   }
 }
 --------------------------------------------------
-// TESTRESPONSE
 
 
 [float]
@@ -585,7 +580,7 @@ POST /twitter/_search?size=surprise_me
 
 The response looks like:
 
-[source,js]
+[source,console-result]
 ----------------------------------------------------------------------
 {
   "error" : {
@@ -605,7 +600,6 @@ The response looks like:
   "status" : 400
 }
 ----------------------------------------------------------------------
-// TESTRESPONSE
 
 But if you set `error_trace=true`:
 

+ 1 - 2
docs/reference/ccr/apis/auto-follow/delete-auto-follow-pattern.asciidoc

@@ -73,10 +73,9 @@ DELETE /_ccr/auto_follow/my_auto_follow_pattern
 
 The API returns the following result:
 
-[source,js]
+[source,console-result]
 --------------------------------------------------
 {
   "acknowledged" : true
 }
 --------------------------------------------------
-// TESTRESPONSE
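The same mechanical pattern repeats across every file in this commit: a response block tagged `[source,js]` (or `[source,sh]`) followed by a bare `// TESTRESPONSE` comment becomes a single `[source,console-result]` block. A hypothetical sketch of that rewrite as a script (the actual change may well have been made by hand or with other tooling; the function name and regex are illustrative assumptions):

```python
import re

def migrate_testresponse(text: str) -> str:
    """Retag bare-TESTRESPONSE blocks as [source,console-result].

    Matches a [source,js] or [source,sh] header, the delimited listing
    block that follows it, and a trailing bare `// TESTRESPONSE` line,
    then replaces the header and drops the comment.
    """
    delim = r"-{4,}"  # asciidoc listing delimiter: four or more hyphens
    pattern = re.compile(
        r"\[source,(?:js|sh)\]\n"
        r"(?P<block>" + delim + r"\n.*?\n" + delim + r")\n"
        r"// TESTRESPONSE\n",
        re.DOTALL,
    )
    # Keep the block body verbatim; only the header and comment change.
    return pattern.sub(
        lambda m: "[source,console-result]\n" + m.group("block") + "\n", text
    )
```

Note the sketch deliberately matches only the bare comment: `// TESTRESPONSE[...]` variants that carry arguments (for example the substitution merged in `getting-started.asciidoc` above) are left for manual handling.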

+ 1 - 2
docs/reference/ccr/apis/auto-follow/get-auto-follow-pattern.asciidoc

@@ -88,7 +88,7 @@ GET /_ccr/auto_follow/my_auto_follow_pattern
 
 The API returns the following result:
 
-[source,js]
+[source,console-result]
 --------------------------------------------------
 {
   "patterns": [
@@ -106,4 +106,3 @@ The API returns the following result:
   ]
 }
 --------------------------------------------------
-// TESTRESPONSE

+ 1 - 2
docs/reference/ccr/apis/auto-follow/put-auto-follow-pattern.asciidoc

@@ -114,13 +114,12 @@ PUT /_ccr/auto_follow/my_auto_follow_pattern
 
 The API returns the following result:
 
-[source,js]
+[source,console-result]
 --------------------------------------------------
 {
   "acknowledged" : true
 }
 --------------------------------------------------
-// TESTRESPONSE
 
 //////////////////////////
 

+ 1 - 2
docs/reference/ccr/apis/follow-request-body.asciidoc

@@ -79,7 +79,7 @@ GET /follower_index/_ccr/info?filter_path=follower_indices.parameters
 The following output from the follow info api describes all the default
 values for the above described index follow request parameters:
 
-[source,js]
+[source,console-result]
 --------------------------------------------------
 {
   "follower_indices" : [
@@ -101,4 +101,3 @@ values for the above described index follow request parameters:
 }
 
 --------------------------------------------------
-// TESTRESPONSE

+ 1 - 2
docs/reference/ccr/apis/follow/get-follow-info.asciidoc

@@ -146,7 +146,7 @@ GET /follower_index/_ccr/info
 
 The API returns the following results:
 
-[source,js]
+[source,console-result]
 --------------------------------------------------
 {
     "follower_indices" : [
@@ -171,4 +171,3 @@ The API returns the following results:
     ]
 }
 --------------------------------------------------
-// TESTRESPONSE

+ 1 - 2
docs/reference/ccr/apis/follow/post-pause-follow.asciidoc

@@ -70,10 +70,9 @@ POST /follower_index/_ccr/pause_follow
 
 The API returns the following result:
 
-[source,js]
+[source,console-result]
 --------------------------------------------------
 {
   "acknowledged" : true
 }
 --------------------------------------------------
-// TESTRESPONSE

+ 1 - 2
docs/reference/ccr/apis/follow/post-resume-follow.asciidoc

@@ -99,10 +99,9 @@ POST /follower_index/_ccr/resume_follow
 
 The API returns the following result:
 
-[source,js]
+[source,console-result]
 --------------------------------------------------
 {
   "acknowledged" : true
 }
 --------------------------------------------------
-// TESTRESPONSE

+ 1 - 2
docs/reference/ccr/apis/follow/post-unfollow.asciidoc

@@ -77,10 +77,9 @@ POST /follower_index/_ccr/unfollow
 
 The API returns the following result:
 
-[source,js]
+[source,console-result]
 --------------------------------------------------
 {
   "acknowledged" : true
 }
 --------------------------------------------------
-// TESTRESPONSE

+ 1 - 2
docs/reference/ccr/apis/follow/put-follow.asciidoc

@@ -110,7 +110,7 @@ PUT /follower_index/_ccr/follow?wait_for_active_shards=1
 
 The API returns the following result:
 
-[source,js]
+[source,console-result]
 --------------------------------------------------
 {
   "follow_index_created" : true,
@@ -118,4 +118,3 @@ The API returns the following result:
   "index_following_started" : true
 }
 --------------------------------------------------
-// TESTRESPONSE

+ 4 - 7
docs/reference/ccr/getting-started.asciidoc

@@ -126,7 +126,7 @@ GET /_remote/info
 The API will respond by showing that the local cluster is connected to the
 remote cluster.
 
-[source,js]
+[source,console-result]
 --------------------------------------------------
 {
   "leader" : {
@@ -141,8 +141,7 @@ remote cluster.
   }
 }
 --------------------------------------------------
-// TESTRESPONSE
-// TEST[s/127.0.0.1:9300/$body.leader.seeds.0/]
+// TESTRESPONSE[s/127.0.0.1:9300/$body.leader.seeds.0/]
 // TEST[s/"connected" : true/"connected" : $body.leader.connected/]
 // TEST[s/"num_nodes_connected" : 1/"num_nodes_connected" : $body.leader.num_nodes_connected/]
 <1> This shows the local cluster is connected to the remote cluster with cluster
@@ -226,7 +225,7 @@ PUT /server-metrics-copy/_ccr/follow?wait_for_active_shards=1
 
 //////////////////////////
 
-[source,js]
+[source,console-result]
 --------------------------------------------------
 {
   "follow_index_created" : true,
@@ -234,7 +233,6 @@ PUT /server-metrics-copy/_ccr/follow?wait_for_active_shards=1
   "index_following_started" : true
 }
 --------------------------------------------------
-// TESTRESPONSE
 
 //////////////////////////
 
@@ -300,13 +298,12 @@ PUT /_ccr/auto_follow/beats
 
 //////////////////////////
 
-[source,js]
+[source,console-result]
 --------------------------------------------------
 {
   "acknowledged" : true
 }
 --------------------------------------------------
-// TESTRESPONSE
 
 //////////////////////////
 

+ 2 - 4
docs/reference/cluster/tasks.asciidoc

@@ -64,7 +64,7 @@ GET _tasks?nodes=nodeId1,nodeId2&actions=cluster:* <3>
 
 The API returns the following result:
 
-[source,js]
+[source,console-result]
 --------------------------------------------------
 {
   "nodes" : {
@@ -98,7 +98,6 @@ The API returns the following result:
   }
 }
 --------------------------------------------------
-// TESTRESPONSE
 
 ===== Retrieve information from a particular task
 
@@ -141,7 +140,7 @@ GET _tasks?actions=*search&detailed
 
 The API returns the following result:
 
-[source,js]
+[source,console-result]
 --------------------------------------------------
 {
   "nodes" : {
@@ -166,7 +165,6 @@ The API returns the following result:
   }
 }
 --------------------------------------------------
-// TESTRESPONSE
 
 The new `description` field contains human readable text that identifies the
 particular request that the task is performing such as identifying the search

+ 2 - 2
docs/reference/data-frames/apis/delete-transform.asciidoc

@@ -53,10 +53,10 @@ DELETE _data_frame/transforms/ecommerce_transform
 // TEST[skip:setup kibana sample data]
 
 When the {dataframe-transform} is deleted, you receive the following results:
-[source,js]
+
+[source,console-result]
 ----
 {
   "acknowledged" : true
 }
 ----
-// TESTRESPONSE

+ 2 - 2
docs/reference/data-frames/apis/get-transform-stats.asciidoc

@@ -119,7 +119,8 @@ GET _data_frame/transforms/ecommerce_transform/_stats
 // TEST[skip:todo]
 
 The API returns the following results:
-[source,js]
+
+[source,console-result]
 ----
 {
   "count" : 1,
@@ -172,4 +173,3 @@ The API returns the following results:
   ]
 }
 ----
-// TESTRESPONSE

+ 2 - 2
docs/reference/data-frames/apis/get-transform.asciidoc

@@ -113,7 +113,8 @@ GET _data_frame/transforms/ecommerce_transform
 // TEST[skip:setup kibana sample data]
 
 The API returns the following results:
-[source,js]
+
+[source,console-result]
 ----
 {
   "count" : 1,
@@ -158,4 +159,3 @@ The API returns the following results:
 }
 
 ----
-// TESTRESPONSE

+ 2 - 2
docs/reference/data-frames/apis/put-transform.asciidoc

@@ -187,10 +187,10 @@ PUT _data_frame/transforms/ecommerce_transform
 // TEST[setup:kibana_sample_data_ecommerce]
 
 When the transform is created, you receive the following results:
-[source,js]
+
+[source,console-result]
 ----
 {
   "acknowledged" : true
 }
 ----
-// TESTRESPONSE

+ 2 - 2
docs/reference/data-frames/apis/start-transform.asciidoc

@@ -69,10 +69,10 @@ POST _data_frame/transforms/ecommerce_transform/_start
 // TEST[skip:set up kibana samples]
 
 When the {dataframe-transform} starts, you receive the following results:
-[source,js]
+
+[source,console-result]
 ----
 {
   "acknowledged" : true
 }
 ----
-// TESTRESPONSE

+ 2 - 2
docs/reference/data-frames/apis/stop-transform.asciidoc

@@ -99,10 +99,10 @@ POST _data_frame/transforms/ecommerce_transform/_stop
 // TEST[skip:set up kibana samples]
 
 When the {dataframe-transform} stops, you receive the following results:
-[source,js]
+
+[source,console-result]
 ----
 {
   "acknowledged" : true
 }
 ----
-// TESTRESPONSE

+ 4 - 6
docs/reference/docs/delete-by-query.asciidoc

@@ -472,7 +472,7 @@ POST twitter/_search?size=0&filter_path=hits.total
 
 Which results in a sensible `total` like this one:
 
-[source,js]
+[source,console-result]
 ----------------------------------------------------------------
 {
   "hits": {
@@ -483,7 +483,6 @@ Which results in a sensible `total` like this one:
   }
 }
 ----------------------------------------------------------------
-// TESTRESPONSE
 
 [float]
 [[docs-delete-by-query-automatic-slice]]
@@ -529,7 +528,7 @@ POST twitter/_search?size=0&filter_path=hits.total
 
 Which results in a sensible `total` like this one:
 
-[source,js]
+[source,console-result]
 ----------------------------------------------------------------
 {
   "hits": {
@@ -540,7 +539,6 @@ Which results in a sensible `total` like this one:
   }
 }
 ----------------------------------------------------------------
-// TESTRESPONSE
 
 Setting `slices` to `auto` will let {es} choose the number of slices
 to use. This setting will use one slice per shard, up to a certain limit. If
@@ -605,7 +603,7 @@ GET _tasks?detailed=true&actions=*/delete/byquery
 
 The response looks like:
 
-[source,js]
+[source,console-result]
 --------------------------------------------------
 {
   "nodes" : {
@@ -642,7 +640,7 @@ The response looks like:
   }
 }
 --------------------------------------------------
-// TESTRESPONSE
+
 <1> This object contains the actual status. It is just like the response JSON
 with the important addition of the `total` field. `total` is the total number
 of operations that the reindex expects to perform. You can estimate the

+ 4 - 6
docs/reference/docs/reindex.asciidoc

@@ -795,7 +795,7 @@ GET _tasks?detailed=true&actions=*reindex
 
 The response looks like:
 
-[source,js]
+[source,console-result]
 --------------------------------------------------
 {
   "nodes" : {
@@ -841,7 +841,7 @@ The response looks like:
   }
 }
 --------------------------------------------------
-// TESTRESPONSE
+
 <1> This object contains the actual status. It is identical to the response JSON
 except for the important addition of the `total` field. `total` is the total number
 of operations that the `_reindex` expects to perform. You can estimate the
@@ -1036,7 +1036,7 @@ POST new_twitter/_search?size=0&filter_path=hits.total
 
 which results in a sensible `total` like this one:
 
-[source,js]
+[source,console-result]
 ----------------------------------------------------------------
 {
   "hits": {
@@ -1047,7 +1047,6 @@ which results in a sensible `total` like this one:
   }
 }
 ----------------------------------------------------------------
-// TESTRESPONSE
 
 [float]
 [[docs-reindex-automatic-slice]]
@@ -1082,7 +1081,7 @@ POST new_twitter/_search?size=0&filter_path=hits.total
 
 which results in a sensible `total` like this one:
 
-[source,js]
+[source,console-result]
 ----------------------------------------------------------------
 {
   "hits": {
@@ -1093,7 +1092,6 @@ which results in a sensible `total` like this one:
   }
 }
 ----------------------------------------------------------------
-// TESTRESPONSE
 
 Setting `slices` to `auto` will let Elasticsearch choose the number of slices
 to use. This setting will use one slice per shard, up to a certain limit. If

+ 1 - 2
docs/reference/docs/termvectors.asciidoc

@@ -409,7 +409,7 @@ GET /imdb/_termvectors
 
 Response:
 
-[source,js]
+[source,console-result]
 --------------------------------------------------
 {
    "_index": "imdb",
@@ -446,4 +446,3 @@ Response:
    }
 }
 --------------------------------------------------
-// TESTRESPONSE

+ 5 - 10
docs/reference/docs/update-by-query.asciidoc

@@ -365,7 +365,7 @@ GET _tasks?detailed=true&actions=*byquery
 
 The response looks like:
 
-[source,js]
+[source,console-result]
 --------------------------------------------------
 {
   "nodes" : {
@@ -405,7 +405,6 @@ The responses looks like:
   }
 }
 --------------------------------------------------
-// TESTRESPONSE
 
 <1> This object contains the actual status. It is just like the response JSON
 with the important addition of the `total` field. `total` is the total number
@@ -524,7 +523,7 @@ POST twitter/_search?size=0&q=extra:test&filter_path=hits.total
 
 Which results in a sensible `total` like this one:
 
-[source,js]
+[source,console-result]
 ----------------------------------------------------------------
 {
   "hits": {
@@ -535,7 +534,6 @@ Which results in a sensible `total` like this one:
   }
 }
 ----------------------------------------------------------------
-// TESTRESPONSE
 
 [float]
 [[docs-update-by-query-automatic-slice]]
@@ -568,7 +566,7 @@ POST twitter/_search?size=0&q=extra:test&filter_path=hits.total
 
 Which results in a sensible `total` like this one:
 
-[source,js]
+[source,console-result]
 ----------------------------------------------------------------
 {
   "hits": {
@@ -579,7 +577,6 @@ Which results in a sensible `total` like this one:
   }
 }
 ----------------------------------------------------------------
-// TESTRESPONSE
 
 Setting `slices` to `auto` will let Elasticsearch choose the number of slices
 to use. This setting will use one slice per shard, up to a certain limit. If
@@ -690,7 +687,7 @@ POST test/_search?filter_path=hits.total
 // CONSOLE
 // TEST[continued]
 
-[source,js]
+[source,console-result]
 --------------------------------------------------
 {
   "hits" : {
@@ -701,7 +698,6 @@ POST test/_search?filter_path=hits.total
   }
 }
 --------------------------------------------------
-// TESTRESPONSE
 
 But you can issue an `_update_by_query` request to pick up the new mapping:
 
@@ -720,7 +716,7 @@ POST test/_search?filter_path=hits.total
 // CONSOLE
 // TEST[continued]
 
-[source,js]
+[source,console-result]
 --------------------------------------------------
 {
   "hits" : {
@@ -731,6 +727,5 @@ POST test/_search?filter_path=hits.total
   }
 }
 --------------------------------------------------
-// TESTRESPONSE
 
 You can do the exact same thing when adding a field to a multifield.

+ 1 - 2
docs/reference/docs/update.asciidoc

@@ -237,7 +237,7 @@ POST test/_update/1
 If the value of `name` is already `new_name`, the update
 request is ignored and the `result` element in the response returns `noop`:
 
-[source,js]
+[source,console-result]
 --------------------------------------------------
 {
    "_shards": {
@@ -254,7 +254,6 @@ request is ignored and the `result` element in the response returns `noop`:
    "result": "noop"
 }
 --------------------------------------------------
-// TESTRESPONSE
 
 You can disable this behavior by setting `"detect_noop": false`:
 

+ 1 - 3
docs/reference/ilm/apis/delete-lifecycle.asciidoc

@@ -76,11 +76,9 @@ DELETE _ilm/policy/my_policy
 
 When the policy is successfully deleted, you receive the following result:
 
-[source,js]
+[source,console-result]
 --------------------------------------------------
 {
   "acknowledged": true
 }
 --------------------------------------------------
-// CONSOLE
-// TESTRESPONSE

+ 1 - 3
docs/reference/ilm/apis/get-status.asciidoc

@@ -42,11 +42,9 @@ GET _ilm/status
 
 If the request succeeds, the body of the response shows the operation mode:
 
-[source,js]
+[source,console-result]
 --------------------------------------------------
 {
   "operation_mode": "RUNNING"
 }
 --------------------------------------------------
-// CONSOLE
-// TESTRESPONSE

+ 1 - 3
docs/reference/ilm/apis/move-to-step.asciidoc

@@ -107,14 +107,12 @@ POST _ilm/move/my_index
 
 If the request succeeds, you receive the following result:
 
-[source,js]
+[source,console-result]
 --------------------------------------------------
 {
   "acknowledged": true
 }
 --------------------------------------------------
-// CONSOLE
-// TESTRESPONSE
 
 The request will fail if the index is not in the `new` phase as specified
 by the `current_step`.

+ 2 - 3
docs/reference/ilm/apis/put-lifecycle.asciidoc

@@ -70,11 +70,10 @@ PUT _ilm/policy/my_policy
 // TEST
 
 If the request succeeds, you receive the following result:
-[source,js]
+
+[source,console-result]
 ----
 {
   "acknowledged": true
 }
 ----
-// CONSOLE
-// TESTRESPONSE

+ 1 - 3
docs/reference/ilm/apis/remove-policy-from-index.asciidoc

@@ -83,12 +83,10 @@ POST my_index/_ilm/remove
 
 If the request succeeds, you receive the following result:
 
-[source,js]
+[source,console-result]
 --------------------------------------------------
 {
   "has_failures" : false,
   "failed_indexes" : []
 }
 --------------------------------------------------
-// CONSOLE
-// TESTRESPONSE

+ 1 - 3
docs/reference/ilm/apis/start.asciidoc

@@ -77,11 +77,9 @@ POST _ilm/start
 
 If the request succeeds, you receive the following result:
 
-[source,js]
+[source,console-result]
 --------------------------------------------------
 {
   "acknowledged": true
 }
 --------------------------------------------------
-// CONSOLE
-// TESTRESPONSE

+ 1 - 3
docs/reference/ilm/apis/stop.asciidoc

@@ -80,14 +80,12 @@ POST _ilm/stop
 
 If the request does not encounter errors, you receive the following result:
 
-[source,js]
+[source,console-result]
 --------------------------------------------------
 {
   "acknowledged": true
 }
 --------------------------------------------------
-// CONSOLE
-// TESTRESPONSE
 
 //////////////////////////
 

+ 4 - 12
docs/reference/ilm/start-stop-ilm.asciidoc

@@ -61,14 +61,12 @@ GET _ilm/status
 
 If the request does not encounter errors, you receive the following result:
 
-[source,js]
+[source,console-result]
 --------------------------------------------------
 {
   "operation_mode": "RUNNING"
 }
 --------------------------------------------------
-// CONSOLE
-// TESTRESPONSE
 
 The operating modes of ILM:
 
@@ -107,14 +105,12 @@ GET _ilm/status
 // TEST[continued]
 ////
 
-[source,js]
+[source,console-result]
 --------------------------------------------------
 {
   "operation_mode": "STOPPING"
 }
 --------------------------------------------------
-// CONSOLE
-// TESTRESPONSE
 
 The ILM service will then, asynchronously, run all policies to a point
 where it is safe to stop. After ILM verifies that it is safe, it will
@@ -131,14 +127,12 @@ GET _ilm/status
 // TEST[continued]
 ////
 
-[source,js]
+[source,console-result]
 --------------------------------------------------
 {
   "operation_mode": "STOPPED"
 }
 --------------------------------------------------
-// CONSOLE
-// TESTRESPONSE
 
 [float]
 === Starting ILM
@@ -165,11 +159,9 @@ GET _ilm/status
 The Start API will send a request to the ILM service to immediately begin
 normal operations.
 
-[source,js]
+[source,console-result]
 --------------------------------------------------
 {
   "operation_mode": "RUNNING"
 }
 --------------------------------------------------
-// CONSOLE
-// TESTRESPONSE

+ 2 - 2
docs/reference/indices/analyze.asciidoc

@@ -308,7 +308,7 @@ GET /_analyze
 
 The request returns the following result:
 
-[source,js]
+[source,console-result]
 --------------------------------------------------
 {
   "detail" : {
@@ -351,7 +351,7 @@ The request returns the following result:
   }
 }
 --------------------------------------------------
-// TESTRESPONSE
+
 <1> Outputs only the "keyword" attribute, since "attributes" was specified in the request.
 
 [[tokens-limit-settings]]

+ 1 - 2
docs/reference/indices/close.asciidoc

@@ -70,7 +70,7 @@ POST /my_index/_close
 
 The API returns the following response:
 
-[source,js]
+[source,console-result]
 --------------------------------------------------
 {
     "acknowledged" : true,
@@ -82,4 +82,3 @@ The API returns following response:
     }
 }
 --------------------------------------------------
-// TESTRESPONSE

+ 1 - 2
docs/reference/indices/create-index.asciidoc

@@ -169,7 +169,7 @@ By default, index creation will only return a response to the client when the pr
 each shard have been started, or the request times out. The index creation response will indicate
 what happened:
 
-[source,js]
+[source,console-result]
 --------------------------------------------------
 {
     "acknowledged": true,
@@ -177,7 +177,6 @@ what happened:
     "index": "test"
 }
 --------------------------------------------------
-// TESTRESPONSE
 
 `acknowledged` indicates whether the index was successfully created in the cluster, while
 `shards_acknowledged` indicates whether the requisite number of shard copies were started for

+ 4 - 6
docs/reference/indices/get-alias.asciidoc

@@ -99,7 +99,7 @@ GET /logs_20302801/_alias/*
 
 The API returns the following response:
 
-[source,js]
+[source,console-result]
 --------------------------------------------------
 {
  "logs_20302801" : {
@@ -117,7 +117,6 @@ The API returns the following response:
  }
 }
 --------------------------------------------------
-// TESTRESPONSE
 
 
 [[get-alias-api-named-ex]]
@@ -134,7 +133,7 @@ GET /_alias/2030
 
 The API returns the following response:
 
-[source,js]
+[source,console-result]
 --------------------------------------------------
 {
   "logs_20302801" : {
@@ -150,7 +149,7 @@ The API returns the following response:
   }
 }
 --------------------------------------------------
-// TESTRESPONSE
+
 
 [[get-alias-api-wildcard-ex]]
 ===== Get aliases based on a wildcard
@@ -166,7 +165,7 @@ GET /_alias/20*
 
 The API returns the following response:
 
-[source,js]
+[source,console-result]
 --------------------------------------------------
 {
   "logs_20302801" : {
@@ -182,4 +181,3 @@ The API returns the following response:
   }
 }
 --------------------------------------------------
-// TESTRESPONSE

+ 3 - 6
docs/reference/indices/get-field-mapping.asciidoc

@@ -94,7 +94,7 @@ GET publications/_mapping/field/title
 
 The API returns the following response:
 
-[source,js]
+[source,console-result]
 --------------------------------------------------
 {
    "publications": {
@@ -111,7 +111,6 @@ The API returns the following response:
    }
 }
 --------------------------------------------------
-// TESTRESPONSE
 
 [[get-field-mapping-api-specific-fields-ex]]
 ===== Specifying fields
@@ -129,7 +128,7 @@ GET publications/_mapping/field/author.id,abstract,name
 
 returns:
 
-[source,js]
+[source,console-result]
 --------------------------------------------------
 {
    "publications": {
@@ -154,7 +153,6 @@ returns:
    }
 }
 --------------------------------------------------
-// TESTRESPONSE
 
 The get field mapping API also supports wildcard notation.
 
@@ -167,7 +165,7 @@ GET publications/_mapping/field/a*
 
 returns:
 
-[source,js]
+[source,console-result]
 --------------------------------------------------
 {
    "publications": {
@@ -200,7 +198,6 @@ returns:
    }
 }
 --------------------------------------------------
-// TESTRESPONSE
 
 [[get-field-mapping-api-multi-index-ex]]
 ===== Multiple indices and fields

+ 1 - 2
docs/reference/indices/open-close.asciidoc

@@ -107,11 +107,10 @@ POST /my_index/_open
 
 The API returns the following response:
 
-[source,js]
+[source,console-result]
 --------------------------------------------------
 {
     "acknowledged" : true,
     "shards_acknowledged" : true
 }
 --------------------------------------------------
-// TESTRESPONSE

+ 1 - 2
docs/reference/indices/recovery.asciidoc

@@ -43,7 +43,7 @@ POST /_snapshot/my_repository/snap_1/_restore?wait_for_completion=true
 --------------------------------------------------
 // CONSOLE
 
-[source,js]
+[source,console-result]
 --------------------------------------------------
 {
   "snapshot": {
@@ -59,7 +59,6 @@ POST /_snapshot/my_repository/snap_1/_restore?wait_for_completion=true
   }
 }
 --------------------------------------------------
-// TESTRESPONSE
 
 //////////////////////////
 

+ 5 - 8
docs/reference/indices/rollover-index.asciidoc

@@ -66,7 +66,7 @@ POST /logs_write/_rollover <2>
 
 The above request might return the following response:
 
-[source,js]
+[source,console-result]
 --------------------------------------------------
 {
   "acknowledged": true,
@@ -82,7 +82,7 @@ The above request might return the following response:
   }
 }
 --------------------------------------------------
-// TESTRESPONSE
+
 <1> Whether the index was rolled over.
 <2> Whether the rollover was dry run.
 <3> The result of each condition.
@@ -162,7 +162,7 @@ GET _alias
 // CONSOLE
 // TEST[continued]
 
-[source,js]
+[source,console-result]
 --------------------------------------------------
 {
   "logs-2016.10.31-000002": {
@@ -175,7 +175,6 @@ GET _alias
   }
 }
 --------------------------------------------------
-// TESTRESPONSE
 
 //////////////////////////
 
@@ -305,7 +304,7 @@ PUT logs/_doc/2 <2>
 <1> configures `my_logs_index` as the write index for the `logs` alias
 <2> newly indexed documents against the `logs` alias will write to the new index
 
-[source,js]
+[source,console-result]
 --------------------------------------------------
 {
   "_index" : "my_logs_index-000002",
@@ -322,7 +321,6 @@ PUT logs/_doc/2 <2>
   "_primary_term" : 1
 }
 --------------------------------------------------
-// TESTRESPONSE
 
 //////////////////////////
 [source,js]
@@ -336,7 +334,7 @@ GET _alias
 After the rollover, the alias metadata for the two indices will have the `is_write_index` setting
 reflect each index's role, with the newly created index as the write index.
 
-[source,js]
+[source,console-result]
 --------------------------------------------------
 {
   "my_logs_index-000002": {
@@ -351,4 +349,3 @@ reflect each index's role, with the newly created index as the write index.
   }
 }
 --------------------------------------------------
-// TESTRESPONSE

+ 2 - 4
docs/reference/ingest/apis/delete-pipeline.asciidoc

@@ -34,13 +34,12 @@ DELETE _ingest/pipeline/my-pipeline-id
 
 //////////////////////////
 
-[source,js]
+[source,console-result]
 --------------------------------------------------
 {
 "acknowledged": true
 }
 --------------------------------------------------
-// TESTRESPONSE
 
 [source,js]
 --------------------------------------------------
@@ -68,12 +67,11 @@ DELETE _ingest/pipeline/*
 
 //////////////////////////
 
-[source,js]
+[source,console-result]
 --------------------------------------------------
 {
 "acknowledged": true
 }
 --------------------------------------------------
-// TESTRESPONSE
 
 //////////////////////////

+ 3 - 6
docs/reference/ingest/apis/get-pipeline.asciidoc

@@ -33,7 +33,7 @@ GET _ingest/pipeline/my-pipeline-id
 
 Example response:
 
-[source,js]
+[source,console-result]
 --------------------------------------------------
 {
   "my-pipeline-id" : {
@@ -49,7 +49,6 @@ Example response:
   }
 }
 --------------------------------------------------
-// TESTRESPONSE
 
 For each returned pipeline, the source and the version are returned.
 The version is useful for knowing which version of the pipeline the node has.
@@ -96,7 +95,7 @@ GET /_ingest/pipeline/my-pipeline-id?filter_path=*.version
 
 This should give a small response that makes it both easy and inexpensive to parse:
 
-[source,js]
+[source,console-result]
 --------------------------------------------------
 {
   "my-pipeline-id" : {
@@ -104,7 +103,6 @@ This should give a small response that makes it both easy and inexpensive to par
   }
 }
 --------------------------------------------------
-// TESTRESPONSE
 
 //////////////////////////
 
@@ -115,12 +113,11 @@ DELETE /_ingest/pipeline/my-pipeline-id
 // CONSOLE
 // TEST[continued]
 
-[source,js]
+[source,console-result]
 --------------------------------------------------
 {
 "acknowledged": true
 }
 --------------------------------------------------
-// TESTRESPONSE
 
 //////////////////////////

+ 1 - 2
docs/reference/ingest/apis/put-pipeline.asciidoc

@@ -29,13 +29,12 @@ DELETE /_ingest/pipeline/my-pipeline-id
 // CONSOLE
 // TEST[continued]
 
-[source,js]
+[source,console-result]
 --------------------------------------------------
 {
 "acknowledged": true
 }
 --------------------------------------------------
-// TESTRESPONSE
 
 //////////////////////////
 

+ 1 - 2
docs/reference/ingest/ingest-node.asciidoc

@@ -196,7 +196,7 @@ POST test/_doc/1?pipeline=drop_guests_network
 
 Results in nothing indexed since the conditional evaluated to `true`.
 
-[source,js]
+[source,console-result]
 --------------------------------------------------
 {
   "_index": "test",
@@ -211,7 +211,6 @@ Results in nothing indexed since the conditional evaluated to `true`.
   }
 }
 --------------------------------------------------
-// TESTRESPONSE
 
 
 [[ingest-conditional-nullcheck]]

+ 2 - 4
docs/reference/mapping.asciidoc

@@ -196,7 +196,7 @@ GET /my-index/_mapping
 
 The API returns the following response:
 
-[source,js]
+[source,console-result]
 ----
 {
   "my-index" : {
@@ -220,7 +220,6 @@ The API returns the following response:
   }
 }
 ----
-// TESTRESPONSE
 
 
 [float]
@@ -244,7 +243,7 @@ GET /my-index/_mapping/field/employee-id
 
 The API returns the following response:
 
-[source,js]
+[source,console-result]
 ----
 {
   "my-index" : {
@@ -263,7 +262,6 @@ The API returns the following response:
 }
 
 ----
-// TESTRESPONSE
 
 --
 

+ 4 - 6
docs/reference/mapping/removal_of_types.asciidoc

@@ -494,7 +494,7 @@ GET index/_mappings?include_type_name=false
 
 The above call returns
 
-[source,js]
+[source,console-result]
 --------------------------------------------------
 {
   "index": {
@@ -511,7 +511,7 @@ The above call returns
   }
 }
 --------------------------------------------------
-// TESTRESPONSE
+
 <1> Mappings are included directly under the `mappings` key, without a type name.
 
 [float]
@@ -529,7 +529,7 @@ PUT index/_doc/1
 --------------------------------------------------
 // CONSOLE
 
-[source,js]
+[source,console-result]
 --------------------------------------------------
 {
   "_index": "index",
@@ -546,7 +546,6 @@ PUT index/_doc/1
   "_primary_term": 1
 }
 --------------------------------------------------
-// TESTRESPONSE
 
 Similarly, the `get` and `delete` APIs use the path `{index}/_doc/{id}`:
 
@@ -624,7 +623,7 @@ GET index/_doc/1
 --------------------------------------------------
 // CONSOLE
 
-[source,js]
+[source,console-result]
 --------------------------------------------------
 {
     "_index" : "index",
@@ -639,7 +638,6 @@ GET index/_doc/1
     }
 }
 --------------------------------------------------
-// TESTRESPONSE
 
 [float]
 ==== Index templates

+ 1 - 2
docs/reference/mapping/types/percolator.asciidoc

@@ -311,7 +311,7 @@ POST /test_index/_analyze
 
 This results in the following response:
 
-[source,js]
+[source,console-result]
 --------------------------------------------------
 {
   "tokens": [
@@ -332,7 +332,6 @@ This results the following response:
   ]
 }
 --------------------------------------------------
-// TESTRESPONSE
 
 All the tokens in the returned order need to replace the query text in the percolator query:
 

+ 2 - 3
docs/reference/migration/migrate_8_0/snapshots.asciidoc

@@ -25,7 +25,7 @@ GET _snapshot/repo1/snap1
 
 produces the following response
 
-[source,js]
+[source,console-result]
 -----------------------------------
 {
     "responses": [
@@ -57,8 +57,7 @@ produces the following response
     ]
 }
 -----------------------------------
-// TESTRESPONSE
-// TEST[skip:no repo and snapshots are created]
+// TESTRESPONSE[skip:no repo and snapshots are created]
 
 See <<modules-snapshots>> for more information.
 

+ 2 - 2
docs/reference/ml/anomaly-detection/apis/close-job.asciidoc

@@ -106,10 +106,10 @@ POST _ml/anomaly_detectors/total-requests/_close
 // TEST[skip:setup:server_metrics_openjob]
 
 When the job is closed, you receive the following results:
-[source,js]
+
+[source,console-result]
 ----
 {
   "closed": true
 }
 ----
-// TESTRESPONSE

+ 1 - 2
docs/reference/ml/anomaly-detection/apis/delete-calendar-job.asciidoc

@@ -46,11 +46,10 @@ DELETE _ml/calendars/planned-outages/jobs/total-requests
 When the job is removed from the calendar, you receive the following
 results:
 
-[source,js]
+[source,console-result]
 ----
 {
    "calendar_id": "planned-outages",
    "job_ids": []
 }
 ----
-// TESTRESPONSE

+ 2 - 2
docs/reference/ml/anomaly-detection/apis/delete-calendar.asciidoc

@@ -45,10 +45,10 @@ DELETE _ml/calendars/planned-outages
 // TEST[skip:setup:calendar_outages]
 
 When the calendar is deleted, you receive the following results:
-[source,js]
+
+[source,console-result]
 ----
 {
   "acknowledged": true
 }
 ----
-// TESTRESPONSE

+ 2 - 2
docs/reference/ml/anomaly-detection/apis/delete-datafeed.asciidoc

@@ -50,10 +50,10 @@ DELETE _ml/datafeeds/datafeed-total-requests
 // TEST[skip:setup:server_metrics_datafeed]
 
 When the {dfeed} is deleted, you receive the following results:
-[source,js]
+
+[source,console-result]
 ----
 {
   "acknowledged": true
 }
 ----
-// TESTRESPONSE

+ 2 - 2
docs/reference/ml/anomaly-detection/apis/delete-expired-data.asciidoc

@@ -40,10 +40,10 @@ DELETE _ml/_delete_expired_data
 // TEST
 
 When the expired data is deleted, you receive the following response:
-[source,js]
+
+[source,console-result]
 ----
 {
   "deleted": true
 }
 ----
-// TESTRESPONSE

+ 2 - 2
docs/reference/ml/anomaly-detection/apis/delete-filter.asciidoc

@@ -46,10 +46,10 @@ DELETE _ml/filters/safe_domains
 // TEST[skip:setup:ml_filter_safe_domains]
 
 When the filter is deleted, you receive the following results:
-[source,js]
+
+[source,console-result]
 ----
 {
   "acknowledged": true
 }
 ----
-// TESTRESPONSE

+ 2 - 2
docs/reference/ml/anomaly-detection/apis/delete-job.asciidoc

@@ -67,13 +67,13 @@ DELETE _ml/anomaly_detectors/total-requests
 // TEST[skip:setup:server_metrics_job]
 
 When the job is deleted, you receive the following results:
-[source,js]
+
+[source,console-result]
 ----
 {
   "acknowledged": true
 }
 ----
-// TESTRESPONSE
 
 In the next example we delete the `total-requests` job asynchronously:
 

+ 2 - 2
docs/reference/ml/anomaly-detection/apis/delete-snapshot.asciidoc

@@ -49,10 +49,10 @@ DELETE _ml/anomaly_detectors/farequote/model_snapshots/1491948163
 // TEST[skip:todo]
 
 When the snapshot is deleted, you receive the following results:
-[source,js]
+
+[source,console-result]
 ----
 {
   "acknowledged": true
 }
 ----
-// TESTRESPONSE

+ 2 - 2
docs/reference/ml/anomaly-detection/apis/flush-job.asciidoc

@@ -106,11 +106,11 @@ POST _ml/anomaly_detectors/total-requests/_flush
 // TEST[skip:setup:server_metrics_openjob]
 
 When the operation succeeds, you receive the following results:
-[source,js]
+
+[source,console-result]
 ----
 {
   "flushed": true,
   "last_finalized_bucket_end": 1514804400000
 }
 ----
-// TESTRESPONSE

+ 2 - 2
docs/reference/ml/anomaly-detection/apis/get-calendar.asciidoc

@@ -76,7 +76,8 @@ GET _ml/calendars/planned-outages
 // TEST[skip:setup:calendar_outages_addjob]
 
 The API returns the following results:
-[source,js]
+
+[source,console-result]
 ----
 {
   "count": 1,
@@ -90,4 +91,3 @@ The API returns the following results:
   ]
 }
 ----
-// TESTRESPONSE

+ 2 - 2
docs/reference/ml/anomaly-detection/apis/get-filter.asciidoc

@@ -74,7 +74,8 @@ GET _ml/filters/safe_domains
 // TEST[skip:setup:ml_filter_safe_domains]
 
 The API returns the following results:
-[source,js]
+
+[source,console-result]
 ----
 {
   "count": 1,
@@ -90,4 +91,3 @@ The API returns the following results:
   ]
 }
 ----
-// TESTRESPONSE

+ 2 - 2
docs/reference/ml/anomaly-detection/apis/open-job.asciidoc

@@ -63,10 +63,10 @@ POST _ml/anomaly_detectors/total-requests/_open
 // TEST[skip:setup:server_metrics_job]
 
 When the job opens, you receive the following results:
-[source,js]
+
+[source,console-result]
 ----
 {
   "opened": true
 }
 ----
-// TESTRESPONSE

+ 1 - 2
docs/reference/ml/anomaly-detection/apis/post-calendar-event.asciidoc

@@ -76,7 +76,7 @@ POST _ml/calendars/planned-outages/events
 
 The API returns the following results:
 
-[source,js]
+[source,console-result]
 ----
 {
   "events": [
@@ -101,4 +101,3 @@ The API returns the following results:
   ]
 }
 ----
-// TESTRESPONSE
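The change above is mechanical: every response snippet marked `[source,js]` and followed by a bare `// TESTRESPONSE` comment becomes a `[source,console-result]` block with the comment dropped (parameterized variants like `// TESTRESPONSE[skip:...]` are kept, as in the snapshots file). A minimal sketch of how such a bulk rewrite could be scripted — this is illustrative only, not the tooling actually used for this PR:

```python
import re

def convert(text):
    """Rewrite AsciiDoc response snippets: when a [source,js] block is
    immediately followed by a bare // TESTRESPONSE comment, switch the
    block header to [source,console-result] and drop the comment."""
    pattern = re.compile(
        r"\[source,js\]\n"      # old block header
        r"(-{4,})\n"            # opening delimiter (---- or longer)
        r"(.*?)\n"              # block body (non-greedy)
        r"\1\n"                 # matching closing delimiter
        r"// TESTRESPONSE\n",   # bare magic comment to remove
        re.DOTALL,
    )
    # Re-emit the block with the new header; \1 preserves delimiter length.
    return pattern.sub(r"[source,console-result]\n\1\n\2\n\1\n", text)
```

Because the pattern requires the bare comment to follow directly, blocks using `// TESTRESPONSE[skip:...]` or request blocks marked `// CONSOLE` are left untouched, matching the selective edits seen in the hunks above.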

Some files were not shown because too many files changed in this diff