
[DOCS] Change // CONSOLE comments to [source,console] (#46440)

James Rodewig, 6 years ago
Commit 5c78f606c2
100 changed files with 343 additions and 649 deletions
  1. +1 -2  docs/reference/ml/anomaly-detection/apis/stop-datafeed.asciidoc
  2. +1 -2  docs/reference/ml/anomaly-detection/apis/update-datafeed.asciidoc
  3. +1 -2  docs/reference/ml/anomaly-detection/apis/update-filter.asciidoc
  4. +1 -2  docs/reference/ml/anomaly-detection/apis/update-job.asciidoc
  5. +1 -2  docs/reference/ml/anomaly-detection/apis/update-snapshot.asciidoc
  6. +1 -2  docs/reference/ml/anomaly-detection/apis/validate-detector.asciidoc
  7. +1 -2  docs/reference/ml/anomaly-detection/apis/validate-job.asciidoc
  8. +6 -12  docs/reference/ml/anomaly-detection/detector-custom-rules.asciidoc
  9. +7 -14  docs/reference/ml/anomaly-detection/functions/count.asciidoc
  10. +1 -2  docs/reference/ml/anomaly-detection/functions/geo.asciidoc
  11. +4 -8  docs/reference/ml/anomaly-detection/stopping-ml.asciidoc
  12. +19 -22  docs/reference/ml/anomaly-detection/transforms.asciidoc
  13. +1 -2  docs/reference/ml/df-analytics/apis/delete-dfanalytics.asciidoc
  14. +1 -2  docs/reference/ml/df-analytics/apis/dfanalyticsresources.asciidoc
  15. +1 -2  docs/reference/ml/df-analytics/apis/estimate-memory-usage-dfanalytics.asciidoc
  16. +1 -2  docs/reference/ml/df-analytics/apis/evaluate-dfanalytics.asciidoc
  17. +1 -2  docs/reference/ml/df-analytics/apis/get-dfanalytics-stats.asciidoc
  18. +1 -2  docs/reference/ml/df-analytics/apis/get-dfanalytics.asciidoc
  19. +1 -2  docs/reference/ml/df-analytics/apis/put-dfanalytics.asciidoc
  20. +1 -2  docs/reference/ml/df-analytics/apis/start-dfanalytics.asciidoc
  21. +1 -2  docs/reference/ml/df-analytics/apis/stop-dfanalytics.asciidoc
  22. +2 -4  docs/reference/modules/cluster/allocation_filtering.asciidoc
  23. +2 -4  docs/reference/modules/cluster/disk_allocator.asciidoc
  24. +2 -4  docs/reference/modules/cluster/misc.asciidoc
  25. +4 -8  docs/reference/modules/cross-cluster-search.asciidoc
  26. +3 -6  docs/reference/modules/discovery/adding-removing-nodes.asciidoc
  27. +1 -2  docs/reference/modules/discovery/voting.asciidoc
  28. +6 -12  docs/reference/modules/indices/request_cache.asciidoc
  29. +4 -6  docs/reference/modules/remote-clusters.asciidoc
  30. +33 -62  docs/reference/modules/snapshots.asciidoc
  31. +2 -6  docs/reference/modules/transport.asciidoc
  32. +1 -2  docs/reference/monitoring/collecting-monitoring-data.asciidoc
  33. +3 -5  docs/reference/monitoring/configuring-metricbeat.asciidoc
  34. +2 -4  docs/reference/monitoring/indices.asciidoc
  35. +4 -8  docs/reference/query-dsl/bool-query.asciidoc
  36. +1 -2  docs/reference/query-dsl/boosting-query.asciidoc
  37. +1 -2  docs/reference/query-dsl/constant-score-query.asciidoc
  38. +1 -2  docs/reference/query-dsl/dis-max-query.asciidoc
  39. +4 -8  docs/reference/query-dsl/distance-feature-query.asciidoc
  40. +2 -4  docs/reference/query-dsl/exists-query.asciidoc
  41. +9 -16  docs/reference/query-dsl/function-score-query.asciidoc
  42. +2 -4  docs/reference/query-dsl/fuzzy-query.asciidoc
  43. +10 -20  docs/reference/query-dsl/geo-bounding-box-query.asciidoc
  44. +6 -12  docs/reference/query-dsl/geo-distance-query.asciidoc
  45. +4 -8  docs/reference/query-dsl/geo-polygon-query.asciidoc
  46. +3 -6  docs/reference/query-dsl/geo-shape-query.asciidoc
  47. +3 -6  docs/reference/query-dsl/has-child-query.asciidoc
  48. +3 -6  docs/reference/query-dsl/has-parent-query.asciidoc
  49. +1 -2  docs/reference/query-dsl/ids-query.asciidoc
  50. +6 -12  docs/reference/query-dsl/intervals-query.asciidoc
  51. +3 -6  docs/reference/query-dsl/match-all-query.asciidoc
  52. +3 -6  docs/reference/query-dsl/match-bool-prefix-query.asciidoc
  53. +1 -2  docs/reference/query-dsl/match-phrase-prefix-query.asciidoc
  54. +2 -4  docs/reference/query-dsl/match-phrase-query.asciidoc
  55. +6 -12  docs/reference/query-dsl/match-query.asciidoc
  56. +4 -8  docs/reference/query-dsl/mlt-query.asciidoc
  57. +18 -30  docs/reference/query-dsl/multi-match-query.asciidoc
  58. +2 -4  docs/reference/query-dsl/nested-query.asciidoc
  59. +4 -8  docs/reference/query-dsl/parent-id-query.asciidoc
  60. +13 -26  docs/reference/query-dsl/percolate-query.asciidoc
  61. +1 -2  docs/reference/query-dsl/pinned-query.asciidoc
  62. +2 -4  docs/reference/query-dsl/prefix-query.asciidoc
  63. +12 -24  docs/reference/query-dsl/query-string-query.asciidoc
  64. +2 -2  docs/reference/query-dsl/query_filter_context.asciidoc
  65. +4 -6  docs/reference/query-dsl/range-query.asciidoc
  66. +7 -14  docs/reference/query-dsl/rank-feature-query.asciidoc
  67. +1 -2  docs/reference/query-dsl/regexp-query.asciidoc
  68. +2 -4  docs/reference/query-dsl/script-query.asciidoc
  69. +1 -2  docs/reference/query-dsl/script-score-query.asciidoc
  70. +3 -6  docs/reference/query-dsl/shape-query.asciidoc
  71. +7 -12  docs/reference/query-dsl/simple-query-string-query.asciidoc
  72. +1 -2  docs/reference/query-dsl/span-field-masking-query.asciidoc
  73. +1 -2  docs/reference/query-dsl/span-first-query.asciidoc
  74. +2 -4  docs/reference/query-dsl/span-multi-term-query.asciidoc
  75. +1 -2  docs/reference/query-dsl/span-near-query.asciidoc
  76. +1 -2  docs/reference/query-dsl/span-not-query.asciidoc
  77. +1 -2  docs/reference/query-dsl/span-or-query.asciidoc
  78. +3 -6  docs/reference/query-dsl/span-term-query.asciidoc
  79. +1 -2  docs/reference/query-dsl/span-within-query.asciidoc
  80. +6 -12  docs/reference/query-dsl/term-query.asciidoc
  81. +6 -12  docs/reference/query-dsl/terms-query.asciidoc
  82. +5 -10  docs/reference/query-dsl/terms-set-query.asciidoc
  83. +1 -2  docs/reference/query-dsl/wildcard-query.asciidoc
  84. +1 -2  docs/reference/query-dsl/wrapper-query.asciidoc
  85. +1 -2  docs/reference/redirects.asciidoc
  86. +3 -6  docs/reference/rest-api/info.asciidoc
  87. +1 -2  docs/reference/rollup/apis/delete-job.asciidoc
  88. +2 -4  docs/reference/rollup/apis/get-job.asciidoc
  89. +1 -2  docs/reference/rollup/apis/put-job.asciidoc
  90. +4 -8  docs/reference/rollup/apis/rollup-caps.asciidoc
  91. +3 -6  docs/reference/rollup/apis/rollup-index-caps.asciidoc
  92. +1 -2  docs/reference/rollup/apis/rollup-job-config.asciidoc
  93. +4 -8  docs/reference/rollup/apis/rollup-search.asciidoc
  94. +1 -2  docs/reference/rollup/apis/start-job.asciidoc
  95. +1 -2  docs/reference/rollup/apis/stop-job.asciidoc
  96. +4 -8  docs/reference/rollup/rollup-getting-started.asciidoc
  97. +1 -2  docs/reference/rollup/rollup-search-limitations.asciidoc
  98. +1 -2  docs/reference/scripting/engine.asciidoc
  99. +4 -6  docs/reference/scripting/fields.asciidoc
  100. +5 -10  docs/reference/scripting/using.asciidoc
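
Every hunk below applies the same mechanical change: the `[source,js]` (or `[source,sh]`) block attribute becomes `[source,console]`, the trailing `// CONSOLE` comment is removed, and any `// TEST[...]` directives are left in place. A minimal before/after sketch of the pattern (the `GET /_cluster/health` request is a placeholder, not taken from any file in this commit):

Before:

[source,js]
----
GET /_cluster/health
----
// CONSOLE
// TEST[continued]

After:

[source,console]
----
GET /_cluster/health
----
// TEST[continued]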

+ 1 - 2
docs/reference/ml/anomaly-detection/apis/stop-datafeed.asciidoc

@@ -82,14 +82,13 @@ are no matches or only partial matches.
 
 The following example stops the `datafeed-total-requests` {dfeed}:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST _ml/datafeeds/datafeed-total-requests/_stop
 {
   "timeout": "30s"
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[skip:setup:server_metrics_startdf]
 
 When the {dfeed} stops, you receive the following results:

+ 1 - 2
docs/reference/ml/anomaly-detection/apis/update-datafeed.asciidoc

@@ -102,7 +102,7 @@ see <<ml-datafeed-resource>>.
 The following example updates the query for the `datafeed-total-requests`
 {dfeed} so that only log entries of error level are analyzed:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST _ml/datafeeds/datafeed-total-requests/_update
 {
@@ -113,7 +113,6 @@ POST _ml/datafeeds/datafeed-total-requests/_update
   }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[skip:setup:server_metrics_datafeed]
 
 When the {dfeed} is updated, you receive the full {dfeed} configuration with

+ 1 - 2
docs/reference/ml/anomaly-detection/apis/update-filter.asciidoc

@@ -44,7 +44,7 @@ Updates the description of a filter, adds items, or removes items.
 You can change the description, add and remove items to the `safe_domains`
 filter as follows:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST _ml/filters/safe_domains/_update
 {
@@ -53,7 +53,6 @@ POST _ml/filters/safe_domains/_update
   "remove_items": ["wikipedia.org"]
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[skip:setup:ml_filter_safe_domains]
 
 The API returns the following results:

+ 1 - 2
docs/reference/ml/anomaly-detection/apis/update-job.asciidoc

@@ -101,7 +101,7 @@ No other detector property can be updated.
 
 The following example updates the `total-requests` job:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST _ml/anomaly_detectors/total-requests/_update
 {
@@ -125,7 +125,6 @@ POST _ml/anomaly_detectors/total-requests/_update
   }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[skip:setup:server_metrics_job]
 
 When the {anomaly-job} is updated, you receive a summary of the job

+ 1 - 2
docs/reference/ml/anomaly-detection/apis/update-snapshot.asciidoc

@@ -50,7 +50,7 @@ The following properties can be updated after the model snapshot is created:
 
 The following example updates the snapshot identified as `1491852978`:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST
 _ml/anomaly_detectors/it_ops_new_logs/model_snapshots/1491852978/_update
@@ -59,7 +59,6 @@ _ml/anomaly_detectors/it_ops_new_logs/model_snapshots/1491852978/_update
   "retain": true
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[skip:todo]
 
 When the snapshot is updated, you receive the following results:

+ 1 - 2
docs/reference/ml/anomaly-detection/apis/validate-detector.asciidoc

@@ -37,7 +37,7 @@ see <<ml-detectorconfig,detector configuration objects>>.
 
 The following example validates detector configuration information:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST _ml/anomaly_detectors/_validate/detector
 {
@@ -46,7 +46,6 @@ POST _ml/anomaly_detectors/_validate/detector
   "by_field_name": "airline"
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[skip:needs-licence]
 
 When the validation completes, you receive the following results:

+ 1 - 2
docs/reference/ml/anomaly-detection/apis/validate-job.asciidoc

@@ -37,7 +37,7 @@ see <<ml-job-resource>>.
 
 The following example validates job configuration information:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST _ml/anomaly_detectors/_validate
 {
@@ -57,7 +57,6 @@ POST _ml/anomaly_detectors/_validate
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[skip:needs-licence]
 
 When the validation is complete, you receive the following results:

+ 6 - 12
docs/reference/ml/anomaly-detection/detector-custom-rules.asciidoc

@@ -31,7 +31,7 @@ _filters_ in {ml}. Filters can be shared across {anomaly-jobs}.
 
 We create our filter using the {ref}/ml-put-filter.html[put filter API]:
 
-[source,js]
+[source,console]
 ----------------------------------
 PUT _ml/filters/safe_domains
 {
@@ -39,13 +39,12 @@ PUT _ml/filters/safe_domains
   "items": ["safe.com", "trusted.com"]
 }
 ----------------------------------
-// CONSOLE
 // TEST[skip:needs-licence]
 
 Now, we can create our {anomaly-job} specifying a scope that uses the
 `safe_domains`  filter for the `highest_registered_domain` field:
 
-[source,js]
+[source,console]
 ----------------------------------
 PUT _ml/anomaly_detectors/dns_exfiltration_with_rule
 {
@@ -71,21 +70,19 @@ PUT _ml/anomaly_detectors/dns_exfiltration_with_rule
   }
 }
 ----------------------------------
-// CONSOLE
 // TEST[skip:needs-licence]
 
 As time advances and we see more data and more results, we might encounter new 
 domains that we want to add in the filter. We can do that by using the 
 {ref}/ml-update-filter.html[update filter API]:
 
-[source,js]
+[source,console]
 ----------------------------------
 POST _ml/filters/safe_domains/_update
 {
   "add_items": ["another-safe.com"]
 }
 ----------------------------------
-// CONSOLE
 // TEST[skip:setup:ml_filter_safe_domains]
 
 Note that we can use any of the `partition_field_name`, `over_field_name`, or 
@@ -93,7 +90,7 @@ Note that we can use any of the `partition_field_name`, `over_field_name`, or
 
 In the following example we scope multiple fields:
 
-[source,js]
+[source,console]
 ----------------------------------
 PUT _ml/anomaly_detectors/scoping_multiple_fields
 {
@@ -125,7 +122,6 @@ PUT _ml/anomaly_detectors/scoping_multiple_fields
   }
 }
 ----------------------------------
-// CONSOLE
 // TEST[skip:needs-licence]
 
 Such a detector will skip results when the values of all 3 scoped fields
@@ -143,7 +139,7 @@ investigation.
 Let us now configure an {anomaly-job} with a rule that will skip results where
 CPU utilization is less than 0.20.
 
-[source,js]
+[source,console]
 ----------------------------------
 PUT _ml/anomaly_detectors/cpu_with_rule
 {
@@ -169,7 +165,6 @@ PUT _ml/anomaly_detectors/cpu_with_rule
   }
 }
 ----------------------------------
-// CONSOLE
 // TEST[skip:needs-licence]
 
 When there are multiple conditions they are combined with a logical `and`.
@@ -179,7 +174,7 @@ a rule with two conditions, one for each end of the desired range.
 Here is an example where a count detector will skip results when the count
 is greater than 30 and less than 50:
 
-[source,js]
+[source,console]
 ----------------------------------
 PUT _ml/anomaly_detectors/rule_with_range
 {
@@ -209,7 +204,6 @@ PUT _ml/anomaly_detectors/rule_with_range
   }
 }
 ----------------------------------
-// CONSOLE
 // TEST[skip:needs-licence]
 
 ==== Custom rules in the life-cycle of a job

+ 7 - 14
docs/reference/ml/anomaly-detection/functions/count.asciidoc

@@ -43,7 +43,7 @@ For more information about those properties,
 see {ref}/ml-job-resource.html#ml-detectorconfig[Detector configuration objects].
 
 .Example 1: Analyzing events with the count function
-[source,js]
+[source,console]
 --------------------------------------------------
 PUT _ml/anomaly_detectors/example1
 {
@@ -58,7 +58,6 @@ PUT _ml/anomaly_detectors/example1
   }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[skip:needs-licence]
 
 This example is probably the simplest possible analysis. It identifies
@@ -70,7 +69,7 @@ event rate and detects when the event rate is unusual compared to its past
 behavior.
 
 .Example 2: Analyzing errors with the high_count function
-[source,js]
+[source,console]
 --------------------------------------------------
 PUT _ml/anomaly_detectors/example2
 {
@@ -87,7 +86,6 @@ PUT _ml/anomaly_detectors/example2
   }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[skip:needs-licence]
 
 If you use this `high_count` function in a detector in your {anomaly-job}, it
@@ -96,7 +94,7 @@ unusually high count of error codes compared to other users.
 
 
 .Example 3: Analyzing status codes with the low_count function
-[source,js]
+[source,console]
 --------------------------------------------------
 PUT _ml/anomaly_detectors/example3
 {
@@ -112,7 +110,6 @@ PUT _ml/anomaly_detectors/example3
   }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[skip:needs-licence]
 
 In this example, the function detects when the count of events for a
@@ -123,7 +120,7 @@ event rate for each status code and detects when a status code has an unusually
 low count compared to its past behavior.
 
 .Example 4: Analyzing aggregated data with the count function
-[source,js]
+[source,console]
 --------------------------------------------------
 PUT _ml/anomaly_detectors/example4
 {
@@ -139,7 +136,6 @@ PUT _ml/anomaly_detectors/example4
   }
 }  
 --------------------------------------------------
-// CONSOLE
 // TEST[skip:needs-licence]
 
 If you are analyzing an aggregated `events_per_min` field, do not use a sum
@@ -188,7 +184,7 @@ The `non_zero_count` function models only the following data:
 ========================================
 
 .Example 5: Analyzing signatures with the high_non_zero_count function
-[source,js]
+[source,console]
 --------------------------------------------------
 PUT _ml/anomaly_detectors/example5
 {
@@ -204,7 +200,6 @@ PUT _ml/anomaly_detectors/example5
   }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[skip:needs-licence]
 
 If you use this `high_non_zero_count` function in a detector in your
@@ -242,7 +237,7 @@ For more information about those properties,
 see {ref}/ml-job-resource.html#ml-detectorconfig[Detector configuration objects].
 
 .Example 6: Analyzing users with the distinct_count function
-[source,js]
+[source,console]
 --------------------------------------------------
 PUT _ml/anomaly_detectors/example6
 {
@@ -258,7 +253,6 @@ PUT _ml/anomaly_detectors/example6
   }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[skip:needs-licence]
 
 This `distinct_count` function detects when a system has an unusual number
@@ -267,7 +261,7 @@ of logged in users. When you use this function in a detector in your
 distinct number of users is unusual compared to the past.
 
 .Example 7: Analyzing ports with the high_distinct_count function
-[source,js]
+[source,console]
 --------------------------------------------------
 PUT _ml/anomaly_detectors/example7
 {
@@ -284,7 +278,6 @@ PUT _ml/anomaly_detectors/example7
   }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[skip:needs-licence]
 
 This example detects instances of port scanning. When you use this function in a

+ 1 - 2
docs/reference/ml/anomaly-detection/functions/geo.asciidoc

@@ -29,7 +29,7 @@ For more information about those properties,
 see {ref}/ml-job-resource.html#ml-detectorconfig[Detector configuration objects].
 
 .Example 1: Analyzing transactions with the lat_long function
-[source,js]
+[source,console]
 --------------------------------------------------
 PUT _ml/anomaly_detectors/example1
 {
@@ -46,7 +46,6 @@ PUT _ml/anomaly_detectors/example1
   }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[skip:needs-licence]
 
 If you use this `lat_long` function in a detector in your {anomaly-job}, it

+ 4 - 8
docs/reference/ml/anomaly-detection/stopping-ml.asciidoc

@@ -23,11 +23,10 @@ When you stop a {dfeed}, it ceases to retrieve data from {es}. You can stop a
 {ref}/ml-stop-datafeed.html[stop {dfeeds} API]. For example, the following
 request stops the `feed1` {dfeed}:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST _ml/datafeeds/feed1/_stop
 --------------------------------------------------
-// CONSOLE
 // TEST[skip:setup:server_metrics_startdf]
 
 NOTE: You must have `manage_ml`, or `manage` cluster privileges to stop {dfeeds}.
@@ -44,11 +43,10 @@ A {dfeed} can be started and stopped multiple times throughout its lifecycle.
 If you are upgrading your cluster, you can use the following request to stop all
 {dfeeds}:
 
-[source,js]
+[source,console]
 ----------------------------------
 POST _ml/datafeeds/_all/_stop
 ----------------------------------
-// CONSOLE
 // TEST[skip:needs-licence]
 
 [float]
@@ -64,11 +62,10 @@ You can close a job by using the
 {ref}/ml-close-job.html[close {anomaly-job} API]. For 
 example, the following request closes the `job1` job:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST _ml/anomaly_detectors/job1/_close
 --------------------------------------------------
-// CONSOLE
 // TEST[skip:setup:server_metrics_openjob]
 
 NOTE: You must have `manage_ml`, or `manage` cluster privileges to stop {dfeeds}.
@@ -84,9 +81,8 @@ lifecycle.
 If you are upgrading your cluster, you can use the following request to close
 all open {anomaly-jobs} on the cluster:
 
-[source,js]
+[source,console]
 ----------------------------------
 POST _ml/anomaly_detectors/_all/_close
 ----------------------------------
-// CONSOLE
 // TEST[skip:needs-licence]

+ 19 - 22
docs/reference/ml/anomaly-detection/transforms.asciidoc

@@ -24,7 +24,7 @@ functions in one or more detectors.
 The following index APIs create and add content to an index that is used in
 subsequent examples:
 
-[source,js]
+[source,console]
 ----------------------------------
 PUT /my_index
 {
@@ -92,8 +92,8 @@ PUT /my_index/_doc/1
   }
 }
 ----------------------------------
-// CONSOLE
 // TEST[skip:SETUP]
+
 <1> In this example, string fields are mapped as `keyword` fields to support
 aggregation. If you want both a full text (`text`) and a keyword (`keyword`)
 version of the same field, use multi-fields. For more information, see
@@ -101,7 +101,7 @@ version of the same field, use multi-fields. For more information, see
 
 [[ml-configuring-transform1]]
 .Example 1: Adding two numerical fields
-[source,js]
+[source,console]
 ----------------------------------
 PUT _ml/anomaly_detectors/test1
 {
@@ -140,8 +140,8 @@ PUT _ml/datafeeds/datafeed-test1
   }
 }
 ----------------------------------
-// CONSOLE
 // TEST[skip:needs-licence]
+
 <1> A script field named `total_error_count` is referenced in the detector
 within the job.
 <2> The script field is defined in the {dfeed}.
@@ -157,11 +157,10 @@ For more information, see
 
 You can preview the contents of the {dfeed} by using the following API:
 
-[source,js]
+[source,console]
 ----------------------------------
 GET _ml/datafeeds/datafeed-test1/_preview
 ----------------------------------
-// CONSOLE
 // TEST[skip:continued]
 
 In this example, the API returns the following results, which contain a sum of
@@ -210,7 +209,7 @@ that convert your strings to upper or lowercase letters.
 
 [[ml-configuring-transform2]]
 .Example 2: Concatenating strings
-[source,js]
+[source,console]
 --------------------------------------------------
 PUT _ml/anomaly_detectors/test2
 {
@@ -251,8 +250,8 @@ PUT _ml/datafeeds/datafeed-test2
 
 GET _ml/datafeeds/datafeed-test2/_preview
 --------------------------------------------------
-// CONSOLE
 // TEST[skip:needs-licence]
+
 <1> The script field has a rather generic name in this case, since it will
 be used for various tests in the subsequent examples.
 <2> The script field uses the plus (+) operator to concatenate strings.
@@ -272,7 +271,7 @@ and "SMITH  " have been concatenated and an underscore was added:
 
 [[ml-configuring-transform3]]
 .Example 3: Trimming strings
-[source,js]
+[source,console]
 --------------------------------------------------
 POST _ml/datafeeds/datafeed-test2/_update
 {
@@ -288,8 +287,8 @@ POST _ml/datafeeds/datafeed-test2/_update
 
 GET _ml/datafeeds/datafeed-test2/_preview
 --------------------------------------------------
-// CONSOLE
 // TEST[skip:continued]
+
 <1> This script field uses the `trim()` function to trim extra white space from a
 string.
 
@@ -308,7 +307,7 @@ has been trimmed to "SMITH":
 
 [[ml-configuring-transform4]]
 .Example 4: Converting strings to lowercase
-[source,js]
+[source,console]
 --------------------------------------------------
 POST _ml/datafeeds/datafeed-test2/_update
 {
@@ -324,8 +323,8 @@ POST _ml/datafeeds/datafeed-test2/_update
 
 GET _ml/datafeeds/datafeed-test2/_preview
 --------------------------------------------------
-// CONSOLE
 // TEST[skip:continued]
+
 <1> This script field uses the `toLowerCase` function to convert a string to all
 lowercase letters. Likewise, you can use the `toUpperCase{}` function to convert
 a string to uppercase letters.
@@ -345,7 +344,7 @@ has been converted to "joe":
 
 [[ml-configuring-transform5]]
 .Example 5: Converting strings to mixed case formats
-[source,js]
+[source,console]
 --------------------------------------------------
 POST _ml/datafeeds/datafeed-test2/_update
 {
@@ -361,8 +360,8 @@ POST _ml/datafeeds/datafeed-test2/_update
 
 GET _ml/datafeeds/datafeed-test2/_preview
 --------------------------------------------------
-// CONSOLE
 // TEST[skip:continued]
+
 <1> This script field is a more complicated example of case manipulation. It uses
 the `subString()` function to capitalize the first letter of a string and
 converts the remaining characters to lowercase.
@@ -382,7 +381,7 @@ has been converted to "Joe":
 
 [[ml-configuring-transform6]]
 .Example 6: Replacing tokens
-[source,js]
+[source,console]
 --------------------------------------------------
 POST _ml/datafeeds/datafeed-test2/_update
 {
@@ -398,8 +397,8 @@ POST _ml/datafeeds/datafeed-test2/_update
 
 GET _ml/datafeeds/datafeed-test2/_preview
 --------------------------------------------------
-// CONSOLE
 // TEST[skip:continued]
+
 <1> This script field uses regular expressions to replace white
 space with underscores.
 
@@ -418,7 +417,7 @@ The preview {dfeed} API returns the following results, which show that
 
 [[ml-configuring-transform7]]
 .Example 7: Regular expression matching and concatenation
-[source,js]
+[source,console]
 --------------------------------------------------
 POST _ml/datafeeds/datafeed-test2/_update
 {
@@ -434,8 +433,8 @@ POST _ml/datafeeds/datafeed-test2/_update
 
 GET _ml/datafeeds/datafeed-test2/_preview
 --------------------------------------------------
-// CONSOLE
 // TEST[skip:continued]
+
 <1> This script field looks for a specific regular expression pattern and emits the
 matched groups as a concatenated string. If no match is found, it emits an empty
 string.
@@ -455,7 +454,7 @@ The preview {dfeed} API returns the following results, which show that
 
 [[ml-configuring-transform8]]
 .Example 8: Splitting strings by domain name
-[source,js]
+[source,console]
 --------------------------------------------------
 PUT _ml/anomaly_detectors/test3
 {
@@ -499,7 +498,6 @@ PUT _ml/datafeeds/datafeed-test3
 
 GET _ml/datafeeds/datafeed-test3/_preview
 --------------------------------------------------
-// CONSOLE
 // TEST[skip:needs-licence]
 
 If you have a single field that contains a well-formed DNS domain name, you can
@@ -527,7 +525,7 @@ The preview {dfeed} API returns the following results, which show that
 
 [[ml-configuring-transform9]]
 .Example 9: Transforming geo_point data
-[source,js]
+[source,console]
 --------------------------------------------------
 PUT _ml/anomaly_detectors/test4
 {
@@ -567,7 +565,6 @@ PUT _ml/datafeeds/datafeed-test4
 
 GET _ml/datafeeds/datafeed-test4/_preview
 --------------------------------------------------
-// CONSOLE
 // TEST[skip:needs-licence]
 
 In {es}, location data can be stored in `geo_point` fields but this data type is

+ 1 - 2
docs/reference/ml/df-analytics/apis/delete-dfanalytics.asciidoc

@@ -34,11 +34,10 @@ information, see {stack-ov}/security-privileges.html[Security privileges] and
 
 The following example deletes the `loganalytics` {dfanalytics-job}:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 DELETE _ml/data_frame/analytics/loganalytics
 --------------------------------------------------
-// CONSOLE
 // TEST[skip:TBD]
 
 The API returns the following result:

+ 1 - 2
docs/reference/ml/df-analytics/apis/dfanalyticsresources.asciidoc

@@ -28,7 +28,7 @@
     from the analysis.
   
 
-[source,js]
+[source,console]
 --------------------------------------------------
 PUT _ml/data_frame/analytics/loganalytics
 {
@@ -48,7 +48,6 @@ PUT _ml/data_frame/analytics/loganalytics
   }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:setup_logdata]
 
 `description`::

+ 1 - 2
docs/reference/ml/df-analytics/apis/estimate-memory-usage-dfanalytics.asciidoc

@@ -54,7 +54,7 @@ Serves as an advice on how to set `model_memory_limit` when creating {dfanalytic
 [[ml-estimate-memory-usage-dfanalytics-example]]
 ==== {api-examples-title}
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST _ml/data_frame/analytics/_estimate_memory_usage
 {
@@ -68,7 +68,6 @@ POST _ml/data_frame/analytics/_estimate_memory_usage
   }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[skip:TBD]
 
 The API returns the following results:

+ 1 - 2
docs/reference/ml/df-analytics/apis/evaluate-dfanalytics.asciidoc

@@ -74,7 +74,7 @@ packages together commonly used metrics for various analyses.
 [[ml-evaluate-dfanalytics-example]]
 ==== {api-examples-title}
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST _ml/data_frame/_evaluate
 {
@@ -87,7 +87,6 @@ POST _ml/data_frame/_evaluate
   }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[skip:TBD]
 
 The API returns the following results:

+ 1 - 2
docs/reference/ml/df-analytics/apis/get-dfanalytics-stats.asciidoc

@@ -106,11 +106,10 @@ The API returns the following information:
 [[ml-get-dfanalytics-stats-example]]
 ==== {api-examples-title}
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET _ml/data_frame/analytics/loganalytics/_stats
 --------------------------------------------------
-// CONSOLE
 // TEST[skip:TBD]
 
 

+ 1 - 2
docs/reference/ml/df-analytics/apis/get-dfanalytics.asciidoc

@@ -90,11 +90,10 @@ when there are no matches or only partial matches.
 The following example gets configuration information for the `loganalytics` 
 {dfanalytics-job}:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET _ml/data_frame/analytics/loganalytics
 --------------------------------------------------
-// CONSOLE
 // TEST[skip:TBD]
 
 The API returns the following results:

+ 1 - 2
docs/reference/ml/df-analytics/apis/put-dfanalytics.asciidoc

@@ -124,7 +124,7 @@ and mappings.
 The following example creates the `loganalytics` {dfanalytics-job}, the analysis 
 type is `outlier_detection`:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 PUT _ml/data_frame/analytics/loganalytics
 {
@@ -141,7 +141,6 @@ PUT _ml/data_frame/analytics/loganalytics
   }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:setup_logdata]
 
 

+ 1 - 2
docs/reference/ml/df-analytics/apis/start-dfanalytics.asciidoc

@@ -46,11 +46,10 @@ and {stack-ov}/built-in-roles.html[Built-in roles].
 
 The following example starts the `loganalytics` {dfanalytics-job}:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST _ml/data_frame/analytics/loganalytics/_start
 --------------------------------------------------
-// CONSOLE
 // TEST[skip:setup:logdata_job]
 
 When the {dfanalytics-job} starts, you receive the following results:

+ 1 - 2
docs/reference/ml/df-analytics/apis/stop-dfanalytics.asciidoc

@@ -68,11 +68,10 @@ stop all {dfanalytics-job} by using _all or by specifying * as the
 
 The following example stops the `loganalytics` {dfanalytics-job}:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST _ml/data_frame/analytics/loganalytics/_stop
 --------------------------------------------------
-// CONSOLE
 // TEST[skip:TBD]
 
 When the {dfanalytics-job} stops, you receive the following results:

+ 2 - 4
docs/reference/modules/cluster/allocation_filtering.asciidoc

@@ -18,7 +18,7 @@ The most common use case for cluster-level shard allocation filtering is when
 you want to decommission a node. To move shards off of a node prior to shutting
 it down, you could create a filter that excludes the node by its IP address:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 PUT _cluster/settings
 {
@@ -27,7 +27,6 @@ PUT _cluster/settings
   }
 }
 --------------------------------------------------
-// CONSOLE
 
 [float]
 [[cluster-routing-settings]]
@@ -57,7 +56,7 @@ The cluster allocation settings support the following built-in attributes:
 
 You can use wildcards when specifying attribute values, for example:
 
-[source,js]
+[source,console]
 ------------------------
 PUT _cluster/settings
 {
@@ -66,4 +65,3 @@ PUT _cluster/settings
   }
 }
 ------------------------
-// CONSOLE

+ 2 - 4
docs/reference/modules/cluster/disk_allocator.asciidoc

@@ -52,14 +52,13 @@ threshold).
 
 An example of resetting the read-only index block on the `twitter` index:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 PUT /twitter/_settings
 {
   "index.blocks.read_only_allow_delete": null
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:twitter]
 --
 
@@ -88,7 +87,7 @@ An example of updating the low watermark to at least 100 gigabytes free, a high
 watermark of at least 50 gigabytes free, and a flood stage watermark of 10
 gigabytes free, and updating the information about the cluster every minute:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 PUT _cluster/settings
 {
@@ -100,4 +99,3 @@ PUT _cluster/settings
   }
 }
 --------------------------------------------------
-// CONSOLE

+ 2 - 4
docs/reference/modules/cluster/misc.asciidoc

@@ -75,7 +75,7 @@ any key prefixed with `cluster.metadata.`.  For example, to store the email
 address of the administrator of a cluster under the key `cluster.metadata.administrator`,
 issue this request:
 
-[source,js]
+[source,console]
 -------------------------------
 PUT /_cluster/settings
 {
@@ -84,7 +84,6 @@ PUT /_cluster/settings
   }
 }
 -------------------------------
-// CONSOLE
 
 IMPORTANT: User-defined cluster metadata is not intended to store sensitive or
 confidential information. Any information stored in user-defined cluster
@@ -116,7 +115,7 @@ The settings which control logging can be updated dynamically with the
 `logger.` prefix.  For instance, to increase the logging level of the
 `indices.recovery` module to `DEBUG`, issue this request:
 
-[source,js]
+[source,console]
 -------------------------------
 PUT /_cluster/settings
 {
@@ -125,7 +124,6 @@ PUT /_cluster/settings
   }
 }
 -------------------------------
-// CONSOLE
 
 
 [[persistent-tasks-allocation]]

+ 4 - 8
docs/reference/modules/cross-cluster-search.asciidoc

@@ -21,7 +21,7 @@ To perform a {ccs}, you must have at least one remote cluster configured.
 The following <<cluster-update-settings,cluster update settings>> API request
 adds three remote clusters:`cluster_one`, `cluster_two`, and `cluster_three`.
 
-[source,js]
+[source,console]
 --------------------------------
 PUT _cluster/settings
 {
@@ -48,7 +48,6 @@ PUT _cluster/settings
   }
 }
 --------------------------------
-// CONSOLE
 // TEST[setup:host]
 // TEST[s/127.0.0.1:930\d+/\${transport_host}/]
 
@@ -59,7 +58,7 @@ PUT _cluster/settings
 The following <<search,search>> API request searches the
 `twitter` index on a single remote cluster, `cluster_one`.
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /cluster_one:twitter/_search
 {
@@ -70,7 +69,6 @@ GET /cluster_one:twitter/_search
   }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[continued]
 // TEST[setup:twitter]
 
@@ -132,7 +130,7 @@ three clusters:
 * Your local cluster
 * Two remote clusters, `cluster_one` and `cluster_two`
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /twitter,cluster_one:twitter,cluster_two:twitter/_search
 {
@@ -143,7 +141,6 @@ GET /twitter,cluster_one:twitter,cluster_two:twitter/_search
   }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[continued]
 
 The API returns the following response:
@@ -235,7 +232,7 @@ To skip an unavailable cluster during a {ccs}, set the
 The following <<cluster-update-settings,cluster update settings>> API request
 changes `cluster_two`'s `skip_unavailable` setting to `true`.
 
-[source,js]
+[source,console]
 --------------------------------
 PUT _cluster/settings
 {
@@ -244,7 +241,6 @@ PUT _cluster/settings
   }
 }
 --------------------------------
-// CONSOLE
 // TEST[continued]
 
 If `cluster_two` is disconnected or unavailable during a {ccs}, {es} won't

+ 3 - 6
docs/reference/modules/discovery/adding-removing-nodes.asciidoc

@@ -60,7 +60,7 @@ without affecting the cluster's master-level availability. A node can be added
 to the voting configuration exclusion list using the
 <<voting-config-exclusions>> API. For example:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 # Add node to voting configuration exclusions list and wait for the system
 # to auto-reconfigure the node out of the voting configuration up to the
@@ -71,7 +71,6 @@ POST /_cluster/voting_config_exclusions/node_name
 # auto-reconfiguration up to one minute
 POST /_cluster/voting_config_exclusions/node_name?timeout=1m
 --------------------------------------------------
-// CONSOLE
 // TEST[skip:this would break the test cluster if executed]
 
 The node that should be added to the exclusions list is specified using
@@ -104,11 +103,10 @@ reconfigure the voting configuration to remove that node and prevents it from
 returning to the voting configuration once it has removed. The current list of
 exclusions is stored in the cluster state and can be inspected as follows:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_cluster/state?filter_path=metadata.cluster_coordination.voting_config_exclusions
 --------------------------------------------------
-// CONSOLE
 
 This list is limited in size by the `cluster.max_voting_config_exclusions` 
 setting, which defaults to `10`. See <<modules-discovery-settings>>. Since
@@ -123,7 +121,7 @@ down permanently, its exclusion can be removed after it is shut down and removed
 from the cluster. Exclusions can also be cleared if they were created in error
 or were only required temporarily:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 # Wait for all the nodes with voting configuration exclusions to be removed from
 # the cluster and then remove all the exclusions, allowing any node to return to
@@ -134,4 +132,3 @@ DELETE /_cluster/voting_config_exclusions
 # to return to the voting configuration in the future.
 DELETE /_cluster/voting_config_exclusions?wait_for_removal=false
 --------------------------------------------------
-// CONSOLE

+ 1 - 2
docs/reference/modules/discovery/voting.asciidoc

@@ -27,11 +27,10 @@ see <<modules-discovery-adding-removing-nodes>>.
 The current voting configuration is stored in the cluster state so you can
 inspect its current contents as follows:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_cluster/state?filter_path=metadata.cluster_coordination.last_committed_config
 --------------------------------------------------
-// CONSOLE
 
 NOTE: The current voting configuration is not necessarily the same as the set of
 all available master-eligible nodes in the cluster. Altering the voting

+ 6 - 12
docs/reference/modules/indices/request_cache.asciidoc

@@ -40,11 +40,10 @@ evicted.
 
 The cache can be expired manually with the <<indices-clearcache,`clear-cache` API>>:
 
-[source,js]
+[source,console]
 ------------------------
 POST /kimchy,elasticsearch/_cache/clear?request=true
 ------------------------
-// CONSOLE
 // TEST[s/^/PUT kimchy\nPUT elasticsearch\n/]
 
 [float]
@@ -53,7 +52,7 @@ POST /kimchy,elasticsearch/_cache/clear?request=true
 The cache is enabled by default, but can be disabled when creating a new
 index as follows:
 
-[source,js]
+[source,console]
 -----------------------------
 PUT /my_index
 {
@@ -62,17 +61,15 @@ PUT /my_index
   }
 }
 -----------------------------
-// CONSOLE
 
 It can also be enabled or disabled dynamically on an existing index with the
 <<indices-update-settings,`update-settings`>> API:
 
-[source,js]
+[source,console]
 -----------------------------
 PUT /my_index/_settings
 { "index.requests.cache.enable": true }
 -----------------------------
-// CONSOLE
 // TEST[continued]
 
 
@@ -82,7 +79,7 @@ PUT /my_index/_settings
 The `request_cache` query-string parameter can be used to enable or disable
 caching on a *per-request* basis.  If set, it overrides the index-level setting:
 
-[source,js]
+[source,console]
 -----------------------------
 GET /my_index/_search?request_cache=true
 {
@@ -96,7 +93,6 @@ GET /my_index/_search?request_cache=true
   }
 }
 -----------------------------
-// CONSOLE
 // TEST[continued]
 
 IMPORTANT: If your query uses a script whose result is not deterministic (e.g.
@@ -140,16 +136,14 @@ setting is provided for completeness' sake only.
 The size of the cache (in bytes) and the number of evictions can be viewed
 by index, with the <<indices-stats,`indices-stats`>> API:
 
-[source,js]
+[source,console]
 ------------------------
 GET /_stats/request_cache?human
 ------------------------
-// CONSOLE
 
 or by node with the <<cluster-nodes-stats,`nodes-stats`>> API:
 
-[source,js]
+[source,console]
 ------------------------
 GET /_nodes/stats/indices/request_cache?human
 ------------------------
-// CONSOLE

+ 4 - 6
docs/reference/modules/remote-clusters.asciidoc

@@ -99,7 +99,7 @@ For more information about the optional transport settings, see
 If you use <<cluster-update-settings,cluster settings>>, the remote clusters
 are available on every node in the cluster. For example:
 
-[source,js]
+[source,console]
 --------------------------------
 PUT _cluster/settings
 {
@@ -129,14 +129,13 @@ PUT _cluster/settings
   }
 }
 --------------------------------
-// CONSOLE
 // TEST[setup:host]
 // TEST[s/127.0.0.1:9300/\${transport_host}/]
 
 You can dynamically update the compression and ping schedule settings. However,
 you must re-include seeds in the settings update request. For example:
 
-[source,js]
+[source,console]
 --------------------------------
 PUT _cluster/settings
 {
@@ -160,7 +159,6 @@ PUT _cluster/settings
   }
 }
 --------------------------------
-// CONSOLE
 // TEST[continued]
 
 NOTE: When the compression or ping schedule settings change, all the existing
@@ -169,7 +167,7 @@ fail.
 
 A remote cluster can be deleted from the cluster settings by setting its seeds and optional settings to `null` :
 
-[source,js]
+[source,console]
 --------------------------------
 PUT _cluster/settings
 {
@@ -188,8 +186,8 @@ PUT _cluster/settings
   }
 }
 --------------------------------
-// CONSOLE
 // TEST[continued]
+
 <1> `cluster_two` would be removed from the cluster settings, leaving
 `cluster_one` and `cluster_three` intact.
 

+ 33 - 62
docs/reference/modules/snapshots.asciidoc

@@ -91,7 +91,7 @@ be corrupted. While setting the repository to `readonly` on all but one of the
 clusters should work with multiple clusters differing by one major version, it
 is not a supported configuration.
 
-[source,js]
+[source,console]
 -----------------------------------
 PUT /_snapshot/my_backup
 {
@@ -101,16 +101,14 @@ PUT /_snapshot/my_backup
   }
 }
 -----------------------------------
-// CONSOLE
 // TESTSETUP
 
 To retrieve information about a registered repository, use a GET request:
 
-[source,js]
+[source,console]
 -----------------------------------
 GET /_snapshot/my_backup
 -----------------------------------
-// CONSOLE
 
 which returns:
 
@@ -132,28 +130,25 @@ specifying repository names. For example, the following request retrieves
 information about all of the snapshot repositories that start with `repo` or
 contain `backup`:
 
-[source,js]
+[source,console]
 -----------------------------------
 GET /_snapshot/repo*,*backup*
 -----------------------------------
-// CONSOLE
 
 To retrieve information about all registered snapshot repositories, omit the
 repository name or specify `_all`:
 
-[source,js]
+[source,console]
 -----------------------------------
 GET /_snapshot
 -----------------------------------
-// CONSOLE
 
 or
 
-[source,js]
+[source,console]
 -----------------------------------
 GET /_snapshot/_all
 -----------------------------------
-// CONSOLE
 
 [float]
 ===== Shared File System Repository
@@ -182,7 +177,7 @@ path.repo: ["\\\\MY_SERVER\\Snapshots"]
 After all nodes are restarted, the following command can be used to register the shared file system repository with
 the name `my_fs_backup`:
 
-[source,js]
+[source,console]
 -----------------------------------
 PUT /_snapshot/my_fs_backup
 {
@@ -193,13 +188,12 @@ PUT /_snapshot/my_fs_backup
     }
 }
 -----------------------------------
-// CONSOLE
 // TEST[skip:no access to absolute path]
 
 If the repository location is specified as a relative path this path will be resolved against the first path specified
 in `path.repo`:
 
-[source,js]
+[source,console]
 -----------------------------------
 PUT /_snapshot/my_fs_backup
 {
@@ -210,7 +204,6 @@ PUT /_snapshot/my_fs_backup
     }
 }
 -----------------------------------
-// CONSOLE
 // TEST[continued]
 
 The following settings are supported:
@@ -277,7 +270,7 @@ When you restore a source only snapshot:
 When you create a source repository, you must specify the type and name of the delegate repository
 where the snapshots will be stored:
 
-[source,js]
+[source,console]
 -----------------------------------
 PUT _snapshot/my_src_only_repository
 {
@@ -288,7 +281,6 @@ PUT _snapshot/my_src_only_repository
   }
 }
 -----------------------------------
-// CONSOLE
 // TEST[continued]
 
 [float]
@@ -307,7 +299,7 @@ When a repository is registered, it's immediately verified on all master and dat
 on all nodes currently present in the cluster. The `verify` parameter can be used to explicitly disable the repository
 verification when registering or updating a repository:
 
-[source,js]
+[source,console]
 -----------------------------------
 PUT /_snapshot/my_unverified_backup?verify=false
 {
@@ -317,16 +309,14 @@ PUT /_snapshot/my_unverified_backup?verify=false
   }
 }
 -----------------------------------
-// CONSOLE
 // TEST[continued]
 
 The verification process can also be executed manually by running the following command:
 
-[source,js]
+[source,console]
 -----------------------------------
 POST /_snapshot/my_unverified_backup/_verify
 -----------------------------------
-// CONSOLE
 // TEST[continued]
 
 It returns a list of nodes where repository was successfully verified or an error message if verification process failed.
@@ -339,11 +329,10 @@ process. This unreferenced data does in no way negatively impact the performance
 than necessary storage use. In order to clean up this unreferenced data, users can call the cleanup endpoint for a repository which will
 trigger a complete accounting of the repositories contents and subsequent deletion of all unreferenced data that was found.
 
-[source,js]
+[source,console]
 -----------------------------------
 POST /_snapshot/my_repository/_cleanup
 -----------------------------------
-// CONSOLE
 // TEST[continued]
 
 The response to a cleanup request looks as follows:
@@ -374,11 +363,10 @@ A repository can contain multiple snapshots of the same cluster. Snapshots are i
 cluster. A snapshot with the name `snapshot_1` in the repository `my_backup` can be created by executing the following
 command:
 
-[source,js]
+[source,console]
 -----------------------------------
 PUT /_snapshot/my_backup/snapshot_1?wait_for_completion=true
 -----------------------------------
-// CONSOLE
 // TEST[continued]
 
 The `wait_for_completion` parameter specifies whether or not the request should return immediately after snapshot
@@ -389,7 +377,7 @@ even minutes) for this command to return even if the `wait_for_completion` param
 By default a snapshot of all open and started indices in the cluster is created. This behavior can be changed by
 specifying the list of indices in the body of the snapshot request.
 
-[source,js]
+[source,console]
 -----------------------------------
 PUT /_snapshot/my_backup/snapshot_2?wait_for_completion=true
 {
@@ -402,7 +390,6 @@ PUT /_snapshot/my_backup/snapshot_2?wait_for_completion=true
   }
 }
 -----------------------------------
-// CONSOLE
 // TEST[continued]
 
 The list of indices that should be included into the snapshot can be specified using the `indices` parameter that
@@ -421,12 +408,12 @@ new indices. Note that special characters need to be URI encoded.
 
 For example, creating a snapshot with the current day in the name, like `snapshot-2018.05.11`, can be achieved with
 the following command:
-[source,js]
+
+[source,console]
 -----------------------------------
 # PUT /_snapshot/my_backup/<snapshot-{now/d}>
 PUT /_snapshot/my_backup/%3Csnapshot-%7Bnow%2Fd%7D%3E
 -----------------------------------
-// CONSOLE
 // TEST[continued]
 
 
@@ -452,11 +439,10 @@ filtering settings and rebalancing algorithm) once the snapshot is finished.
 
 Once a snapshot is created information about this snapshot can be obtained using the following command:
 
-[source,sh]
+[source,console]
 -----------------------------------
 GET /_snapshot/my_backup/snapshot_1
 -----------------------------------
-// CONSOLE
 // TEST[continued]
 
 This command returns basic information about the snapshot including start and end time, version of
@@ -490,20 +476,18 @@ snapshot and the list of failures that occurred during the snapshot. The snapsho
 
 Similar as for repositories, information about multiple snapshots can be queried in one go, supporting wildcards as well:
 
-[source,sh]
+[source,console]
 -----------------------------------
 GET /_snapshot/my_backup/snapshot_*,some_other_snapshot
 -----------------------------------
-// CONSOLE
 // TEST[continued]
 
 All snapshots currently stored in the repository can be listed using the following command:
 
-[source,sh]
+[source,console]
 -----------------------------------
 GET /_snapshot/my_backup/_all
 -----------------------------------
-// CONSOLE
 // TEST[continued]
 
 The command fails if some of the snapshots are unavailable. The boolean parameter `ignore_unavailable` can be used to
@@ -519,31 +503,29 @@ such as status information, the number of snapshotted shards, etc.  The default
 value of the `verbose` parameter is `true`.
 
 It is also possible to retrieve snapshots from multiple repositories in one go, for example:
-[source,sh]
+
+[source,console]
 -----------------------------------
 GET /_snapshot/_all
 GET /_snapshot/my_backup,my_fs_backup
 GET /_snapshot/my*/snap*
 -----------------------------------
-// CONSOLE
 // TEST[skip:no my_fs_backup]
 
 A currently running snapshot can be retrieved using the following command:
 
-[source,sh]
+[source,console]
 -----------------------------------
 GET /_snapshot/my_backup/_current
 -----------------------------------
-// CONSOLE
 // TEST[continued]
 
 A snapshot can be deleted from the repository using the following command:
 
-[source,sh]
+[source,console]
 -----------------------------------
 DELETE /_snapshot/my_backup/snapshot_2
 -----------------------------------
-// CONSOLE
 // TEST[continued]
 
 When a snapshot is deleted from a repository, Elasticsearch deletes all files that are associated with the deleted
@@ -554,11 +536,10 @@ started by mistake.
 
 A repository can be unregistered using the following command:
 
-[source,sh]
+[source,console]
 -----------------------------------
 DELETE /_snapshot/my_backup
 -----------------------------------
-// CONSOLE
 // TEST[continued]
 
 When a repository is unregistered, Elasticsearch only removes the reference to the location where the repository is storing
@@ -570,11 +551,10 @@ the snapshots. The snapshots themselves are left untouched and in place.
 
 A snapshot can be restored using the following command:
 
-[source,sh]
+[source,console]
 -----------------------------------
 POST /_snapshot/my_backup/snapshot_1/_restore
 -----------------------------------
-// CONSOLE
 // TEST[continued]
 
 By default, all indices in the snapshot are restored, and the cluster state is
@@ -589,7 +569,7 @@ http://docs.oracle.com/javase/6/docs/api/java/util/regex/Matcher.html#appendRepl
 Set `include_aliases` to `false` to prevent aliases from being restored together
 with associated indices
 
-[source,js]
+[source,console]
 -----------------------------------
 POST /_snapshot/my_backup/snapshot_1/_restore
 {
@@ -600,7 +580,6 @@ POST /_snapshot/my_backup/snapshot_1/_restore
   "rename_replacement": "restored_index_$1"
 }
 -----------------------------------
-// CONSOLE
 // TEST[continued]
 
 The restore operation can be performed on a functioning cluster. However, an
@@ -628,7 +607,7 @@ restored in this case and all missing shards will be recreated empty.
 Most of index settings can be overridden during the restore process. For example, the following command will restore
 the index `index_1` without creating any replicas while switching back to default refresh interval:
 
-[source,js]
+[source,console]
 -----------------------------------
 POST /_snapshot/my_backup/snapshot_1/_restore
 {
@@ -641,7 +620,6 @@ POST /_snapshot/my_backup/snapshot_1/_restore
   ]
 }
 -----------------------------------
-// CONSOLE
 // TEST[continued]
 
 Please note, that some settings such as `index.number_of_shards` cannot be changed during restore operation.
@@ -673,31 +651,28 @@ the global cluster state.
 
 A list of currently running snapshots with their detailed status information can be obtained using the following command:
 
-[source,sh]
+[source,console]
 -----------------------------------
 GET /_snapshot/_status
 -----------------------------------
-// CONSOLE
 // TEST[continued]
 
 In this format, the command will return information about all currently running snapshots. By specifying a repository name, it's possible
 to limit the results to a particular repository:
 
-[source,sh]
+[source,console]
 -----------------------------------
 GET /_snapshot/my_backup/_status
 -----------------------------------
-// CONSOLE
 // TEST[continued]
 
 If both repository name and snapshot id are specified, this command will return detailed status information for the given snapshot even
 if it's not currently running:
 
-[source,sh]
+[source,console]
 -----------------------------------
 GET /_snapshot/my_backup/snapshot_1/_status
 -----------------------------------
-// CONSOLE
 // TEST[continued]
 
 The output looks similar to the following:
@@ -749,11 +724,10 @@ in progress, there's also a `processed` section that contains information about
 
 Multiple ids are also supported:
 
-[source,sh]
+[source,console]
 -----------------------------------
 GET /_snapshot/my_backup/snapshot_1,snapshot_2/_status
 -----------------------------------
-// CONSOLE
 // TEST[continued]
 
 [float]
@@ -766,11 +740,10 @@ the simplest method that can be used to get notified about operation completion.
 
 The snapshot operation can be also monitored by periodic calls to the snapshot info:
 
-[source,sh]
+[source,console]
 -----------------------------------
 GET /_snapshot/my_backup/snapshot_1
 -----------------------------------
-// CONSOLE
 // TEST[continued]
 
 Please note that snapshot info operation uses the same resources and thread pool as the snapshot operation. So,
@@ -779,11 +752,10 @@ for available resources before returning the result. On very large shards the wa
 
 To get more immediate and complete information about snapshots the snapshot status command can be used instead:
 
-[source,sh]
+[source,console]
 -----------------------------------
 GET /_snapshot/my_backup/snapshot_1/_status
 -----------------------------------
-// CONSOLE
 // TEST[continued]
 
 While snapshot info method returns only basic information about the snapshot in progress, the snapshot status returns
@@ -809,11 +781,10 @@ running snapshot was executed by mistake, or takes unusually long, it can be ter
 The snapshot delete operation checks if the deleted snapshot is currently running and if it does, the delete operation stops
 that snapshot before deleting the snapshot data from the repository.
 
-[source,sh]
+[source,console]
 -----------------------------------
 DELETE /_snapshot/my_backup/snapshot_1
 -----------------------------------
-// CONSOLE
 // TEST[continued]
 
 The restore operation uses the standard shard recovery mechanism. Therefore, any currently running restore operation can

+ 2 - 6
docs/reference/modules/transport.asciidoc

@@ -153,7 +153,7 @@ request was uncompressed--even when compression is enabled.
 The transport module has a dedicated tracer logger which, when activated, logs incoming and out going requests. The log can be dynamically activated
 by settings the level of the `org.elasticsearch.transport.TransportService.tracer` logger to `TRACE`:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 PUT _cluster/settings
 {
@@ -162,12 +162,11 @@ PUT _cluster/settings
    }
 }
 --------------------------------------------------
-// CONSOLE
 
 You can also control which actions will be traced, using a set of include and exclude wildcard patterns. By default, every request will be traced
 except for fault detection pings:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 PUT _cluster/settings
 {
@@ -177,6 +176,3 @@ PUT _cluster/settings
    }
 }
 --------------------------------------------------
-// CONSOLE
-
-
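For reference, a minimal sketch of the pattern-based trace filtering mentioned above. The `transport.tracer.include` and `transport.tracer.exclude` setting names are assumed here (they are not visible in the hunk), so treat this as illustrative rather than the exact contents of the file:

[source,console]
--------------------------------------------------
PUT _cluster/settings
{
   "transient" : {
      "transport.tracer.include" : "*",
      "transport.tracer.exclude" : "internal:coordination/fault_detection/*"
   }
}
--------------------------------------------------

Adjust both patterns to whatever actions you actually want traced; the exclude pattern above is only a plausible example for filtering out fault detection pings.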

+ 1 - 2
docs/reference/monitoring/collecting-monitoring-data.asciidoc

@@ -53,7 +53,7 @@ view the cluster settings and `manage` cluster privileges to change them.
 
 For example, use the following APIs to review and change this setting:
 
-[source,js]
+[source,console]
 ----------------------------------
 GET _cluster/settings
 
@@ -64,7 +64,6 @@ PUT _cluster/settings
   }
 }
 ----------------------------------
-// CONSOLE
 
 Alternatively, you can enable this setting in {kib}. In the side navigation, 
 click *Monitoring*. If data collection is disabled, you are prompted to turn it 

+ 3 - 5
docs/reference/monitoring/configuring-metricbeat.asciidoc

@@ -28,7 +28,7 @@ production cluster. By default, it is disabled (`false`).
 
 You can use the following APIs to review and change this setting:
 
-[source,js]
+[source,console]
 ----------------------------------
 GET _cluster/settings
 
@@ -38,8 +38,7 @@ PUT _cluster/settings
     "xpack.monitoring.collection.enabled": true
   }
 }
-----------------------------------
-// CONSOLE 
+---------------------------------- 
 
 If {es} {security-features} are enabled, you must have `monitor` cluster privileges to 
 view the cluster settings and `manage` cluster privileges to change them.
@@ -194,7 +193,7 @@ production cluster.
 
 You can use the following API to change this setting:
 
-[source,js]
+[source,console]
 ----------------------------------
 PUT _cluster/settings
 {
@@ -203,7 +202,6 @@ PUT _cluster/settings
   }
 }
 ----------------------------------
-// CONSOLE
 
 If {es} {security-features} are enabled, you must have `monitor` cluster
 privileges to  view the cluster settings and `manage` cluster privileges

+ 2 - 4
docs/reference/monitoring/indices.asciidoc

@@ -8,11 +8,10 @@ that store the monitoring data collected from a cluster.
 
 You can retrieve the templates through the `_template` API:
 
-[source,sh]
+[source,console]
 ----------------------------------
 GET /_template/.monitoring-*
 ----------------------------------
-// CONSOLE
 
 By default, the template configures one shard and one replica for the
 monitoring indices. To override the default settings, add your own template:
@@ -26,7 +25,7 @@ section.
 For example, the following template increases the number of shards to five
 and the number of replicas to two.
 
-[source,js]
+[source,console]
 ----------------------------------
 PUT /_template/custom_monitoring
 {
@@ -38,7 +37,6 @@ PUT /_template/custom_monitoring
     }
 }
 ----------------------------------
-// CONSOLE
 
 IMPORTANT: Only set the `number_of_shards` and `number_of_replicas` in the
 settings section. Overriding other monitoring template settings could cause

+ 4 - 8
docs/reference/query-dsl/bool-query.asciidoc

@@ -32,7 +32,7 @@ The `bool` query takes a _more-matches-is-better_ approach, so the score from
 each matching `must` or `should` clause will be added together to provide the
 final `_score` for each document.
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST _search
 {
@@ -59,7 +59,6 @@ POST _search
   }
 }
 --------------------------------------------------
-// CONSOLE
 
 [[score-bool-filter]]
 ==== Scoring with `bool.filter`
@@ -72,7 +71,7 @@ all documents where the `status` field contains the term `active`.
 This first query assigns a score of `0` to all documents, as no scoring
 query has been specified:
 
-[source,js]
+[source,console]
 ---------------------------------
 GET _search
 {
@@ -87,12 +86,11 @@ GET _search
   }
 }
 ---------------------------------
-// CONSOLE
 
 This `bool` query has a `match_all` query, which assigns a score of `1.0` to
 all documents.
 
-[source,js]
+[source,console]
 ---------------------------------
 GET _search
 {
@@ -110,13 +108,12 @@ GET _search
   }
 }
 ---------------------------------
-// CONSOLE
 
 This `constant_score` query behaves in exactly the same way as the second example above.
 The `constant_score` query assigns a score of `1.0` to all documents matched
 by the filter.
 
-[source,js]
+[source,console]
 ---------------------------------
 GET _search
 {
@@ -131,7 +128,6 @@ GET _search
   }
 }
 ---------------------------------
-// CONSOLE
 
 ==== Using named queries to see which clauses matched
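A minimal, hypothetical sketch of the named-queries feature introduced by this heading (field names are invented for illustration): tagging each clause with `_name` causes the names of the matching clauses to be returned in each hit's `matched_queries`:

[source,console]
---------------------------------
GET /_search
{
  "query": {
    "bool": {
      "should": [
        { "match": { "name.first": { "query": "shay",  "_name": "first" } } },
        { "match": { "name.last":  { "query": "banon", "_name": "last"  } } }
      ]
    }
  }
}
---------------------------------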
 

+ 1 - 2
docs/reference/query-dsl/boosting-query.asciidoc

@@ -14,7 +14,7 @@ excluding them from the search results.
 [[boosting-query-ex-request]]
 ==== Example request
 
-[source,js]
+[source,console]
 ----
 GET /_search
 {
@@ -35,7 +35,6 @@ GET /_search
     }
 }
 ----
-// CONSOLE
 
 [[boosting-top-level-params]]
 ==== Top-level parameters for `boosting`

+ 1 - 2
docs/reference/query-dsl/constant-score-query.asciidoc

@@ -8,7 +8,7 @@ Wraps a <<query-dsl-bool-query, filter query>> and returns every matching
 document with a <<relevance-scores,relevance score>> equal to the `boost`
 parameter value.
 
-[source,js]
+[source,console]
 ----
 GET /_search
 {
@@ -22,7 +22,6 @@ GET /_search
     }
 }
 ----
-// CONSOLE
 
 [[constant-score-top-level-params]]
 ==== Top-level parameters for `constant_score`

+ 1 - 2
docs/reference/query-dsl/dis-max-query.asciidoc

@@ -17,7 +17,7 @@ You can use the `dis_max` to search for a term in fields mapped with different
 [[query-dsl-dis-max-query-ex-request]]
 ==== Example request
 
-[source,js]
+[source,console]
 ----
 GET /_search
 {
@@ -32,7 +32,6 @@ GET /_search
     }
 }    
 ----
-// CONSOLE
 
 [[query-dsl-dis-max-query-top-level-params]]
 ==== Top-level parameters for `dis_max`

+ 4 - 8
docs/reference/query-dsl/distance-feature-query.asciidoc

@@ -33,7 +33,7 @@ following example.
 * `production_date`, a <<date, `date`>> field
 * `location`, a <<geo-point,`geo_point`>> field
 
-[source,js]
+[source,console]
 ----
 PUT /items
 {
@@ -52,14 +52,13 @@ PUT /items
   }
 }
 ----
-// CONSOLE
 // TESTSETUP
 --
 
 . Index several documents to this index.
 +
 --
-[source,js]
+[source,console]
 ----
 PUT /items/_doc/1?refresh
 {
@@ -83,7 +82,6 @@ PUT /items/_doc/3?refresh
   "location": [-71.3, 41.12]
 }
 ----
-// CONSOLE
 --
 
 
@@ -96,7 +94,7 @@ The following `bool` search returns documents with a `name` value of
 `chocolate`. The search also uses the `distance_feature` query to increase the
 relevance score of documents with a `production_date` value closer to `now`.
 
-[source,js]
+[source,console]
 ----
 GET /items/_search
 {
@@ -118,7 +116,6 @@ GET /items/_search
   }
 }
 ----
-// CONSOLE
 
 [[distance-feature-query-distance-ex]]
 ====== Boost documents based on location
@@ -126,7 +123,7 @@ The following `bool` search returns documents with a `name` value of
 `chocolate`. The search also uses the `distance_feature` query to increase the
 relevance score of documents with a `location` value closer to `[-71.3, 41.15]`.
 
-[source,js]
+[source,console]
 ----
 GET /items/_search
 {
@@ -148,7 +145,6 @@ GET /items/_search
   }
 }
 ----
-// CONSOLE
 
 
 [[distance-feature-top-level-params]]

+ 2 - 4
docs/reference/query-dsl/exists-query.asciidoc

@@ -16,7 +16,7 @@ An indexed value may not exist for a document's field due to a variety of reason
 [[exists-query-ex-request]]
 ==== Example request
 
-[source,js]
+[source,console]
 ----
 GET /_search
 {
@@ -27,7 +27,6 @@ GET /_search
     }
 }
 ----
-// CONSOLE
 
 [[exists-query-top-level-params]]
 ==== Top-level parameters for `exists`
@@ -53,7 +52,7 @@ query.
 The following search returns documents that are missing an indexed value for
 the `user` field.
 
-[source,js]
+[source,console]
 ----
 GET /_search
 {
@@ -68,4 +67,3 @@ GET /_search
     }
 }
 ----
-// CONSOLE

+ 9 - 16
docs/reference/query-dsl/function-score-query.asciidoc

@@ -15,7 +15,7 @@ by the query.
 
 `function_score` can be used with only one function like this:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -29,7 +29,6 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:twitter]
 
 <1> See <<score-functions>> for a list of supported functions.
@@ -38,7 +37,7 @@ Furthermore, several functions can be combined. In this case one can
 optionally choose to apply the function only if a document matches a
 given filtering query
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -65,7 +64,6 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:twitter]
 
 <1> Boost for the whole query.
@@ -135,7 +133,7 @@ the scoring of it optionally with a computation derived from other numeric
 field values in the doc using a script expression. Here is a
 simple sample:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -153,7 +151,6 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:twitter]
 
 NOTE: Scores produced by the `script_score` function must be non-negative,
 Script compilation is cached for faster execution. If the script has
 parameters that it needs to take into account, it is preferable to reuse the
 same script, and provide parameters to it:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -189,7 +186,6 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:twitter]
 
 Note that unlike the `custom_score` query, the
@@ -234,7 +230,7 @@ NOTE: It was possible to set a seed without setting a field, but this has been
 deprecated as this requires loading fielddata on the `_id` field which consumes
 a lot of memory.
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -248,7 +244,6 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:twitter]
 
 [[function-field-value-factor]]
@@ -263,7 +258,7 @@ As an example, imagine you have a document indexed with a numeric `likes`
 field and wish to influence the score of a document with this field. An example
 of doing so would look like:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -279,7 +274,6 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:twitter]
 
 This will translate into the following formula for scoring:
@@ -375,7 +369,7 @@ this case. If your field is a date field, you can set `scale` and `offset` as
 days, weeks, and so on. Example:
 
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -393,8 +387,8 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:twitter]
+
 <1> The date format of the origin depends on the <<mapping-date-format,`format`>> defined in
     your mapping. If you do not define the origin, the current time is used.
 <2> The `offset` and `decay` parameters are optional.
@@ -573,7 +567,7 @@ and for `location`:
 Suppose you want to multiply these two functions with the original score;
 the request would look like this:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -607,7 +601,6 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 Next, we show what the computed score looks like for each of the three
 possible decay functions.

+ 2 - 4
docs/reference/query-dsl/fuzzy-query.asciidoc

@@ -25,7 +25,7 @@ The query then returns exact matches for each expansion.
 [[fuzzy-query-ex-simple]]
 ===== Simple example
 
-[source,js]
+[source,console]
 ----
 GET /_search
 {
@@ -38,12 +38,11 @@ GET /_search
     }
 }
 ----
-// CONSOLE
 
 [[fuzzy-query-ex-advanced]]
 ===== Example using advanced parameters
 
-[source,js]
+[source,console]
 ----
 GET /_search
 {
@@ -61,7 +60,6 @@ GET /_search
     }
 }
 ----
-// CONSOLE
 
 [[fuzzy-query-top-level-params]]
 ==== Top-level parameters for `fuzzy`

+ 10 - 20
docs/reference/query-dsl/geo-bounding-box-query.asciidoc

@@ -7,7 +7,7 @@
 A query that filters hits based on a point location, using a
 bounding box. Assuming the following indexed document:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 PUT /my_locations
 {
@@ -34,13 +34,12 @@ PUT /my_locations/_doc/1
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TESTSETUP
 
 Then the following simple query can be executed with a
 `geo_bounding_box` filter:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET my_locations/_search
 {
@@ -67,7 +66,6 @@ GET my_locations/_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 [float]
 ==== Query Options
@@ -95,7 +93,7 @@ representations of the geo point, the filter can accept it as well:
 [float]
 ===== Lat Lon As Properties
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET my_locations/_search
 {
@@ -122,7 +120,6 @@ GET my_locations/_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 [float]
 ===== Lat Lon As Array
@@ -130,7 +127,7 @@ GET my_locations/_search
 Format in `[lon, lat]`. Note the order of lon/lat here, in order to
 conform with http://geojson.org/[GeoJSON].
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET my_locations/_search
 {
@@ -151,14 +148,13 @@ GET my_locations/_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 [float]
 ===== Lat Lon As String
 
 Format in `lat,lon`.
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET my_locations/_search
 {
@@ -179,12 +175,11 @@ GET my_locations/_search
 }
 }
 --------------------------------------------------
-// CONSOLE
 
 [float]
 ===== Bounding Box as Well-Known Text (WKT)
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET my_locations/_search
 {
@@ -204,12 +199,11 @@ GET my_locations/_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 [float]
 ===== Geohash
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET my_locations/_search
 {
@@ -230,7 +224,6 @@ GET my_locations/_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 
 When geohashes are used to specify the edges of the
@@ -244,7 +237,7 @@ In order to specify a bounding box that would match entire area of a
 geohash the geohash can be specified in both `top_left` and
 `bottom_right` parameters:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET my_locations/_search
 {
@@ -258,7 +251,6 @@ GET my_locations/_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 In this example, the geohash `dr` will produce the bounding box
 query with the top left corner at `45.0,-78.75` and the bottom right
@@ -274,7 +266,7 @@ are supported. Instead of setting the values pairwise, one can use
 the simple names `top`, `left`, `bottom` and `right` to set the
 values separately.
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET my_locations/_search
 {
@@ -297,7 +289,6 @@ GET my_locations/_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 
 [float]
@@ -324,7 +315,7 @@ that the `geo_point` type must have lat and lon indexed in this case).
 Note that when using the indexed option, multiple locations per document field
 are not supported. Here is an example:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET my_locations/_search
 {
@@ -352,7 +343,6 @@ GET my_locations/_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 [float]
 ==== Ignore Unmapped
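A hedged sketch of the `ignore_unmapped` option named in the heading above (the `pin.location` field is assumed for illustration). When set to `true`, an unmapped field simply matches no documents instead of raising an error:

[source,console]
--------------------------------------------------
GET my_locations/_search
{
    "query": {
        "geo_bounding_box" : {
            "ignore_unmapped" : true,
            "pin.location" : {
                "top_left" : { "lat" : 40.73, "lon" : -74.1 },
                "bottom_right" : { "lat" : 40.01, "lon" : -71.12 }
            }
        }
    }
}
--------------------------------------------------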

+ 6 - 12
docs/reference/query-dsl/geo-distance-query.asciidoc

 Filters documents to include only hits that exist within a specific
 distance from a geo point. Assuming the following mapping and indexed
 document:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 PUT /my_locations
 {
@@ -35,14 +35,13 @@ PUT /my_locations/_doc/1
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TESTSETUP
 
 
 Then the following simple query can be executed with a `geo_distance`
 filter:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /my_locations/_search
 {
@@ -64,7 +63,6 @@ GET /my_locations/_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 [float]
 ==== Accepted Formats
@@ -75,7 +73,7 @@ representations of the geo point, the filter can accept it as well:
 [float]
 ===== Lat Lon As Properties
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /my_locations/_search
 {
@@ -97,7 +95,6 @@ GET /my_locations/_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 [float]
 ===== Lat Lon As Array
@@ -105,7 +102,7 @@ GET /my_locations/_search
 Format in `[lon, lat]`. Note the order of lon/lat here, in order to
 conform with http://geojson.org/[GeoJSON].
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /my_locations/_search
 {
@@ -124,7 +121,6 @@ GET /my_locations/_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 
 [float]
@@ -132,7 +128,7 @@ GET /my_locations/_search
 
 Format in `lat,lon`.
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /my_locations/_search
 {
@@ -151,12 +147,11 @@ GET /my_locations/_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 [float]
 ===== Geohash
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /my_locations/_search
 {
@@ -175,7 +170,6 @@ GET /my_locations/_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 [float]
 ==== Options

+ 4 - 8
docs/reference/query-dsl/geo-polygon-query.asciidoc

@@ -7,7 +7,7 @@
 A query returning hits that only fall within a polygon of
 points. Here is an example:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -31,7 +31,6 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 [float]
 ==== Query Options
@@ -57,7 +56,7 @@ Format as `[lon, lat]`
 Note: the order of lon/lat here must
 conform with http://geojson.org/[GeoJSON].
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -81,14 +80,13 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 [float]
 ===== Lat Lon as String
 
 Format in `lat,lon`.
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -112,12 +110,11 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 [float]
 ===== Geohash
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -141,7 +138,6 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 [float]
 ==== geo_point Type

+ 3 - 6
docs/reference/query-dsl/geo-shape-query.asciidoc

@@ -25,7 +25,7 @@ http://www.geojson.org[GeoJSON] to represent shapes.
 
 Given the following index:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 PUT /example
 {
@@ -47,13 +47,12 @@ POST /example/_doc?refresh
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TESTSETUP
 
 The following query will find the point using Elasticsearch's
 `envelope` GeoJSON extension:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /example/_search
 {
@@ -77,7 +76,6 @@ GET /example/_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 ==== Pre-Indexed Shape
 
@@ -98,7 +96,7 @@ Defaults to 'shape'.
 The following is an example of using the filter with a pre-indexed
 shape:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 PUT /shapes
 {
@@ -138,7 +136,6 @@ GET /example/_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 ==== Spatial Relations
 

+ 3 - 6
docs/reference/query-dsl/has-child-query.asciidoc

@@ -27,7 +27,7 @@ the `has_child` query, use it as rarely as possible.
 To use the `has_child` query, your index must include a <<parent-join,join>>
 field mapping. For example:
 
-[source,js]
+[source,console]
 ----
 PUT /my_index
 {
@@ -44,13 +44,12 @@ PUT /my_index
 }
 
 ----
-// CONSOLE
 // TESTSETUP
 
 [[has-child-query-ex-query]]
 ===== Example query
 
-[source,js]
+[source,console]
 ----
 GET /_search
 {
@@ -67,7 +66,6 @@ GET /_search
     }
 }
 ----
-// CONSOLE
 
 [[has-child-top-level-params]]
 ==== Top-level parameters for `has_child`
@@ -139,7 +137,7 @@ If you need to sort returned documents by a field in their child documents, use
 a `function_score` query and sort by `_score`. For example, the following query
 sorts returned documents by the `click_count` field of their child documents.
 
-[source,js]
+[source,console]
 ----
 GET /_search
 {
@@ -158,4 +156,3 @@ GET /_search
     }
 }
 ----
-// CONSOLE

+ 3 - 6
docs/reference/query-dsl/has-parent-query.asciidoc

@@ -23,7 +23,7 @@ Each `has_parent` query in a search can increase query time significantly.
 To use the `has_parent` query, your index must include a <<parent-join,join>>
 field mapping. For example:
 
-[source,js]
+[source,console]
 ----
 PUT /my-index
 {
@@ -43,13 +43,12 @@ PUT /my-index
 }
 
 ----
-// CONSOLE
 // TESTSETUP
 
 [[has-parent-query-ex-query]]
 ===== Example query
 
-[source,js]
+[source,console]
 ----
 GET /my-index/_search
 {
@@ -67,7 +66,6 @@ GET /my-index/_search
     }
 }
 ----
-// CONSOLE
 
 [[has-parent-top-level-params]]
 ==== Top-level parameters for `has_parent`
@@ -120,7 +118,7 @@ If you need to sort returned documents by a field in their parent documents, use
 a `function_score` query and sort by `_score`. For example, the following query
 sorts returned documents by the `view_count` field of their parent documents.
 
-[source,js]
+[source,console]
 ----
 GET /_search
 {
@@ -139,4 +137,3 @@ GET /_search
     }
 }
 ----
-// CONSOLE

+ 1 - 2
docs/reference/query-dsl/ids-query.asciidoc

@@ -9,7 +9,7 @@ the <<mapping-id-field,`_id`>> field.
 
 ==== Example request
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -20,7 +20,6 @@ GET /_search
     }
 }    
 --------------------------------------------------
-// CONSOLE
 
 [[ids-query-top-level-parameters]]
 ==== Top-level parameters for `ids`

+ 6 - 12
docs/reference/query-dsl/intervals-query.asciidoc

@@ -24,7 +24,7 @@ favorite food` immediately followed by `hot water` or `cold porridge` in the
 This search would match a `my_text` value of `my favorite food is cold
 porridge` but not `when it's cold my favorite food is porridge`.
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST _search
 {
@@ -56,7 +56,6 @@ POST _search
   }
 }
 --------------------------------------------------
-// CONSOLE
 
 [[intervals-top-level-params]]
 ==== Top-level parameters for `intervals`
@@ -273,7 +272,7 @@ The following search includes a `filter` rule. It returns documents that have
 the words `hot` and `porridge` within 10 positions of each other, without the
 word `salty` in between:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST _search
 {
@@ -296,7 +295,6 @@ POST _search
   }
 }
 --------------------------------------------------
-// CONSOLE
 
 [[interval-script-filter]]
 ===== Script filters
@@ -305,7 +303,7 @@ You can use a script to filter intervals based on their start position, end
 position, and internal gap count. The following `filter` script uses the
 `interval` variable with the `start`, `end`, and `gaps` methods:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST _search
 {
@@ -325,7 +323,6 @@ POST _search
   }
 }
 --------------------------------------------------
-// CONSOLE
 
 
 [[interval-minimization]]
@@ -337,7 +334,7 @@ when using `max_gaps` restrictions or filters. For example, take the
 following query, searching for `salty` contained within the phrase `hot
 porridge`:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST _search
 {
@@ -359,7 +356,6 @@ POST _search
   }
 }
 --------------------------------------------------
-// CONSOLE
 
 This query does *not* match a document containing the phrase `hot porridge is
 salty porridge`, because the intervals returned by the match query for `hot
@@ -373,7 +369,7 @@ cause surprises when used in combination with `max_gaps`. Consider the
 following query, searching for `the` immediately followed by `big` or `big bad`,
 immediately followed by `wolf`:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST _search
 {
@@ -398,7 +394,6 @@ POST _search
   }
 }
 --------------------------------------------------
-// CONSOLE
 
 Counter-intuitively, this query does *not* match the document `the big bad
 wolf`, because the `any_of` rule in the middle only produces intervals
@@ -407,7 +402,7 @@ starting at the same position, and so being minimized away. In these cases,
 it's better to rewrite the query so that all of the options are explicitly
 laid out at the top level:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST _search
 {
@@ -431,4 +426,3 @@ POST _search
   }
 }
 --------------------------------------------------
-// CONSOLE

+ 3 - 6
docs/reference/query-dsl/match-all-query.asciidoc

@@ -7,7 +7,7 @@
 The simplest query, which matches all documents, giving them all a `_score`
 of `1.0`.
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 { 
@@ -16,11 +16,10 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 The `_score` can be changed with the `boost` parameter:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -29,7 +28,6 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 [[query-dsl-match-none-query]]
 [float]
@@ -37,7 +35,7 @@ GET /_search
 
 This is the inverse of the `match_all` query, which matches no documents.
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -46,4 +44,3 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE

+ 3 - 6
docs/reference/query-dsl/match-bool-prefix-query.asciidoc

@@ -9,7 +9,7 @@ A `match_bool_prefix` query analyzes its input and constructs a
 is used in a `term` query. The last term is used in a `prefix` query. A
 `match_bool_prefix` query such as
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -20,12 +20,11 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 where analysis produces the terms `quick`, `brown`, and `f` is similar to the
 following `bool` query
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -40,7 +39,6 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 An important difference between the `match_bool_prefix` query and
 <<query-dsl-match-query-phrase-prefix,`match_phrase_prefix`>> is that the
@@ -57,7 +55,7 @@ By default, `match_bool_prefix` queries' input text will be analyzed using the
 analyzer from the queried field's mapping. A different search analyzer can be
 configured with the `analyzer` parameter
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -71,7 +69,6 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 `match_bool_prefix` queries support the
 <<query-dsl-minimum-should-match,`minimum_should_match`>> and `operator`

+ 1 - 2
docs/reference/query-dsl/match-phrase-prefix-query.asciidoc

@@ -18,7 +18,7 @@ The following search returns documents that contain phrases beginning with
 This search would match a `message` value of `quick brown fox` or `two quick
 brown ferrets` but not `the fox is quick and brown`.
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -31,7 +31,6 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 
 [[match-phrase-prefix-top-level-params]]

+ 2 - 4
docs/reference/query-dsl/match-phrase-query.asciidoc

@@ -7,7 +7,7 @@
 The `match_phrase` query analyzes the text and creates a `phrase` query
 out of the analyzed text. For example:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -18,7 +18,6 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 A phrase query matches terms up to a configurable `slop`
 (which defaults to 0) in any order. Transposed terms have a slop of 2.
@@ -27,7 +26,7 @@ The `analyzer` can be set to control which analyzer will perform the
 analysis process on the text. It defaults to the field's explicit mapping
 definition, or the default search analyzer, for example:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -41,6 +40,5 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 This query also accepts `zero_terms_query`, as explained in <<query-dsl-match-query, `match` query>>.
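As a brief, hedged illustration of the `slop` parameter described above (field name and query text are hypothetical), the object form of `match_phrase` accepts `slop` alongside `query`:

[source,console]
--------------------------------------------------
GET /_search
{
    "query": {
        "match_phrase" : {
            "message" : {
                "query" : "this is a test",
                "slop" : 2
            }
        }
    }
}
--------------------------------------------------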

+ 6 - 12
docs/reference/query-dsl/match-query.asciidoc

@@ -14,7 +14,7 @@ including options for fuzzy matching.
 [[match-query-ex-request]]
 ==== Example request
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -27,7 +27,6 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 
 [[match-top-level-params]]
@@ -147,7 +146,7 @@ See <<query-dsl-match-query-zero>> for an example.
 You can simplify the match query syntax by combining the `<field>` and `query`
 parameters. For example:
 
-[source,js]
+[source,console]
 ----
 GET /_search
 {
@@ -158,7 +157,6 @@ GET /_search
     }
 }
 ----
-// CONSOLE
 
 [[query-dsl-match-query-boolean]]
 ===== How the match query works
@@ -173,7 +171,7 @@ parameter.
 
 Here is an example with the `operator` parameter:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -187,7 +185,6 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 The `analyzer` can be set to control which analyzer will perform the
 analysis process on the text. It defaults to the field's explicit mapping
@@ -218,7 +215,7 @@ analysis process produces multiple tokens at the same position. Under the hood
 these terms are expanded to a special synonym query that blends term frequencies,
 which does not support fuzzy expansion.
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -232,7 +229,6 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 [[query-dsl-match-query-zero]]
 ===== Zero terms query
@@ -241,7 +237,7 @@ does, the default behavior is to match no documents at all. In order to
 change that, the `zero_terms_query` option can be used, which accepts
 `none` (default) and `all` which corresponds to a `match_all` query.
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -256,7 +252,6 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 [[query-dsl-match-query-synonyms]]
 ===== Synonyms
@@ -269,7 +264,7 @@ For example, the following synonym: `"ny, new york" would produce:`
 
 It is also possible to match multi-term synonyms with conjunctions instead:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -283,7 +278,6 @@ GET /_search
    }
 }
 --------------------------------------------------
-// CONSOLE
 
 The example above creates a boolean query:
 

+ 4 - 8
docs/reference/query-dsl/mlt-query.asciidoc

@@ -15,7 +15,7 @@ provided piece of text. Here, we are asking for all movies that have some text
 similar to "Once upon a time" in their "title" and in their "description"
 fields, limiting the number of selected terms to 12.
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -29,13 +29,12 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 A more complicated use case consists of mixing texts with documents already
 existing in the index. In this case, the syntax to specify a document is
 similar to the one used in the <<docs-multi-get,Multi GET API>>.
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -59,13 +58,12 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 Finally, users can mix some texts with a chosen set of documents, but also provide
 documents not necessarily present in the index. To provide documents not
 present in the index, the syntax is similar to <<docs-termvectors-artificial-doc,artificial documents>>.
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -94,7 +92,6 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 ==== How it Works
 
@@ -120,7 +117,7 @@ we can explicitly store their `term_vector` at index time. We can still
 perform MLT on the "description" and "tags" fields, as `_source` is enabled by
 default, but there will be no speed up on analysis for these fields.
 
-[source,js]
+[source,console]
 --------------------------------------------------
 PUT /imdb
 {
@@ -147,7 +144,6 @@ PUT /imdb
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 ==== Parameters
 

+ 18 - 30
docs/reference/query-dsl/multi-match-query.asciidoc

@@ -7,7 +7,7 @@
 The `multi_match` query builds on the <<query-dsl-match-query,`match` query>>
 to allow multi-field queries:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -19,7 +19,7 @@ GET /_search
   }
 }
 --------------------------------------------------
-// CONSOLE
+
 <1> The query string.
 <2> The fields to be queried.
 
@@ -29,7 +29,7 @@ GET /_search
 
 Fields can be specified with wildcards, e.g.:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -41,12 +41,12 @@ GET /_search
   }
 }
 --------------------------------------------------
-// CONSOLE
+
 <1> Query the `title`, `first_name` and `last_name` fields.
 
 Individual fields can be boosted with the caret (`^`) notation:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -58,7 +58,6 @@ GET /_search
   }
 }
 --------------------------------------------------
-// CONSOLE
 
 <1> The `subject` field is three times as important as the `message` field.
 
@@ -110,7 +109,7 @@ The `best_fields` type generates a <<query-dsl-match-query,`match` query>> for
 each field and wraps them in a <<query-dsl-dis-max-query,`dis_max`>> query, to
 find the single best matching field.  For instance, this query:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -124,11 +123,10 @@ GET /_search
   }
 }
 --------------------------------------------------
-// CONSOLE
 
 would be executed as:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -143,7 +141,6 @@ GET /_search
   }
 }
 --------------------------------------------------
-// CONSOLE
 
 Normally the `best_fields` type uses the score of the *single* best matching
 field, but if `tie_breaker` is specified, then it calculates the score as
@@ -169,7 +166,7 @@ which is probably not what you want.
 
 Take this query for example:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -183,7 +180,6 @@ GET /_search
   }
 }
 --------------------------------------------------
-// CONSOLE
 
 <1> All terms must be present.
 
@@ -212,7 +208,7 @@ to push the most similar results to the top of the list.
 
 This query:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -225,11 +221,10 @@ GET /_search
   }
 }
 --------------------------------------------------
-// CONSOLE
 
 would be executed as:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -244,7 +239,6 @@ GET /_search
   }
 }
 --------------------------------------------------
-// CONSOLE
 
 The score from each `match` clause is added together, then divided by the
 number of `match` clauses.
@@ -260,7 +254,8 @@ but they use a `match_phrase` or `match_phrase_prefix` query instead of a
 `match` query.
 
 This query:
-[source,js]
+
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -273,11 +268,10 @@ GET /_search
   }
 }
 --------------------------------------------------
-// CONSOLE
 
 would be executed as:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -291,7 +285,6 @@ GET /_search
   }
 }
 --------------------------------------------------
-// CONSOLE
 
 Also accepts `analyzer`, <<mapping-boost,`boost`>>, `lenient` and `zero_terms_query` as explained
 in <<query-dsl-match-query>>, as well as `slop` which is explained in <<query-dsl-match-query-phrase>>.
@@ -344,7 +337,7 @@ big field.
 
 A query like:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -358,7 +351,6 @@ GET /_search
   }
 }
 --------------------------------------------------
-// CONSOLE
 
 is executed as:
 
@@ -404,7 +396,7 @@ For instance, if we have a `first` and `last` field which have
 the same analyzer, plus a `first.edge` and `last.edge` which
 both use an `edge_ngram` analyzer, this query:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -420,7 +412,6 @@ GET /_search
   }
 }
 --------------------------------------------------
-// CONSOLE
 
 would be executed as:
 
@@ -443,7 +434,7 @@ You can easily rewrite this query yourself as two separate `cross_fields`
 queries combined with a `bool` query, and apply the `minimum_should_match`
 parameter to just one of them:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -470,7 +461,6 @@ GET /_search
   }
 }
 --------------------------------------------------
-// CONSOLE 
 
 <1> Either `will` or `smith` must be present in either of the `first`
     or `last` fields
@@ -478,7 +468,7 @@ GET /_search
 You can force all fields into the same group by specifying the `analyzer`
 parameter in the query.
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -492,7 +482,6 @@ GET /_search
   }
 }
 --------------------------------------------------
-// CONSOLE
 
 <1> Use the `standard` analyzer for all fields.
 
@@ -531,7 +520,7 @@ The `bool_prefix` type's scoring behaves like <<type-most-fields>>, but using a
 <<query-dsl-match-bool-prefix-query,`match_bool_prefix` query>> instead of a
 `match` query.
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -544,7 +533,6 @@ GET /_search
   }
 }
 --------------------------------------------------
-// CONSOLE
 
 The `analyzer`, `boost`, `operator`, `minimum_should_match`, `lenient`,
 `zero_terms_query`, and `auto_generate_synonyms_phrase_query` parameters as

+ 2 - 4
docs/reference/query-dsl/nested-query.asciidoc

@@ -19,7 +19,7 @@ the root parent document.
 To use the `nested` query, your index must include a <<nested,nested>> field
 mapping. For example:
 
-[source,js]
+[source,console]
 ----
 PUT /my_index
 {
@@ -33,13 +33,12 @@ PUT /my_index
 }
 
 ----
-// CONSOLE
 // TESTSETUP
 
 [[nested-query-ex-query]]
 ===== Example query
 
-[source,js]
+[source,console]
 ----
 GET /my_index/_search
 {
@@ -59,7 +58,6 @@ GET /my_index/_search
     }
 }
 ----
-// CONSOLE
 
 [[nested-top-level-params]]
 ==== Top-level parameters for `nested`

+ 4 - 8
docs/reference/query-dsl/parent-id-query.asciidoc

@@ -20,7 +20,7 @@ the following example.
 . Create an index with a <<parent-join,join>> field mapping.
 +
 --
-[source,js]
+[source,console]
 ----
 PUT /my-index
 {
@@ -37,14 +37,13 @@ PUT /my-index
 }
 
 ----
-// CONSOLE
 // TESTSETUP
 --
 
 . Index a parent document with an ID of `1`.
 +
 --
-[source,js]
+[source,console]
 ----
 PUT /my-index/_doc/1?refresh
 {
@@ -52,13 +51,12 @@ PUT /my-index/_doc/1?refresh
   "my-join-field": "my-parent"
 }
 ----
-// CONSOLE
 --
 
 . Index a child document of the parent document.
 +
 --
-[source,js]
+[source,console]
 ----
 PUT /my-index/_doc/2?routing=1&refresh
 {
@@ -69,7 +67,6 @@ PUT /my-index/_doc/2?routing=1&refresh
   }
 }
 ----
-// CONSOLE
 --
 
 [[parent-id-query-ex-query]]
@@ -78,7 +75,7 @@ PUT /my-index/_doc/2?routing=1&refresh
 The following search returns child documents for a parent document with an ID of
 `1`.
 
-[source,js]
+[source,console]
 ----
 GET /my-index/_search
 {
@@ -90,7 +87,6 @@ GET /my-index/_search
   }
 }
 ----
-// CONSOLE
 
 [[parent-id-top-level-params]]
 ==== Top-level parameters for `parent_id`

+ 13 - 26
docs/reference/query-dsl/percolate-query.asciidoc

@@ -14,7 +14,7 @@ to match with the stored queries.
 
 Create an index with two fields:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 PUT /my-index
 {
@@ -30,7 +30,6 @@ PUT /my-index
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 The `message` field is the field used to preprocess the document defined in
 the `percolator` query before it gets indexed into a temporary index.
@@ -43,7 +42,7 @@ used later on to match documents defined on the `percolate` query.
 
 Register a query in the percolator:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 PUT /my-index/_doc/1?refresh
 {
@@ -54,12 +53,11 @@ PUT /my-index/_doc/1?refresh
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[continued]
 
 Match a document to the registered percolator queries:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /my-index/_search
 {
@@ -73,7 +71,6 @@ GET /my-index/_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[continued]
 
 The above request will yield the following response:
@@ -158,7 +155,7 @@ In that case the `document` parameter can be substituted with the following para
 If you are not interested in the score, better performance can be expected by wrapping
 the percolator query in a `bool` query's filter clause or in a `constant_score` query:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /my-index/_search
 {
@@ -176,7 +173,6 @@ GET /my-index/_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[continued]
 
 At index time terms are extracted from the percolator query and the percolator
@@ -199,7 +195,7 @@ The `_percolator_document_slot` field that is being returned with each matched p
 multiple documents simultaneously. It indicates which documents matched with a particular percolator query. The numbers
 correlate with the slot in the `documents` array specified in the `percolate` query.
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /my-index/_search
 {
@@ -224,7 +220,6 @@ GET /my-index/_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[continued]
 
 <1> The documents array contains 4 documents that are going to be percolated at the same time.
@@ -286,14 +281,13 @@ Based on the previous example.
 
 Index the document we want to percolate:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 PUT /my-index/_doc/2
 {
   "message" : "A new bonsai tree in the office"
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[continued]
 Index response:
 
@@ -317,7 +311,7 @@ Index response:
 
 Percolating an existing document, using the index response as the basis to build a new search request:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /my-index/_search
 {
@@ -331,7 +325,6 @@ GET /my-index/_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[continued]
 
 <1> The version is optional, but useful in certain cases. We can ensure that we are trying to percolate
@@ -354,7 +347,7 @@ This example is based on the mapping of the first example.
 
 Save a query:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 PUT /my-index/_doc/3?refresh
 {
@@ -365,12 +358,11 @@ PUT /my-index/_doc/3?refresh
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[continued]
 
 Save another query:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 PUT /my-index/_doc/4?refresh
 {
@@ -381,12 +373,11 @@ PUT /my-index/_doc/4?refresh
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[continued]
 
 Execute a search request with the `percolate` query and highlighting enabled:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /my-index/_search
 {
@@ -405,7 +396,6 @@ GET /my-index/_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[continued]
 
 This will yield the following response.
@@ -483,7 +473,7 @@ the document defined in the `percolate` query.
 
 When percolating multiple documents at the same time, like the request below, the highlight response is different:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /my-index/_search
 {
@@ -513,7 +503,6 @@ GET /my-index/_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[continued]
 
 The slightly different response:
@@ -577,7 +566,7 @@ The slightly different response:
 
 It is possible to specify multiple `percolate` queries in a single search request:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /my-index/_search
 {
@@ -607,7 +596,6 @@ GET /my-index/_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[continued]
 
 <1> The `name` parameter will be used to identify which percolator document slots belong to which `percolate` query.
@@ -683,7 +671,7 @@ or the unsupported query is the only query in the percolator document).  These q
 can be found by running the following search:
 
 
-[source,js]
+[source,console]
 ---------------------------------------------------
 GET /_search
 {
@@ -694,7 +682,6 @@ GET /_search
   }
 }
 ---------------------------------------------------
-// CONSOLE
 
 NOTE: The above example assumes that there is a `query` field of type
 `percolator` in the mappings.

+ 1 - 2
docs/reference/query-dsl/pinned-query.asciidoc

@@ -10,7 +10,7 @@ the <<mapping-id-field,`_id`>> field.
 
 ==== Example request
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -26,7 +26,6 @@ GET /_search
     }
 }    
 --------------------------------------------------
-// CONSOLE
 
 [[pinned-query-top-level-parameters]]
 ==== Top-level parameters for `pinned`

+ 2 - 4
docs/reference/query-dsl/prefix-query.asciidoc

@@ -12,7 +12,7 @@ Returns documents that contain a specific prefix in a provided field.
 The following search returns documents where the `user` field contains a term
 that begins with `ki`.
 
-[source,js]
+[source,console]
 ----
 GET /_search
 {
@@ -25,7 +25,6 @@ GET /_search
     }
 }
 ----
-// CONSOLE
 
 [[prefix-query-top-level-params]]
 ==== Top-level parameters for `prefix`
@@ -50,7 +49,7 @@ information, see the <<query-dsl-multi-term-rewrite, `rewrite` parameter>>.
 You can simplify the `prefix` query syntax by combining the `<field>` and
 `value` parameters. For example:
 
-[source,js]
+[source,console]
 ----
 GET /_search
 {
@@ -59,7 +58,6 @@ GET /_search
     }
 }
 ----
-// CONSOLE
 
 [[prefix-query-index-prefixes]]
 ===== Speed up prefix queries

+ 12 - 24
docs/reference/query-dsl/query-string-query.asciidoc

@@ -38,7 +38,7 @@ city) OR (big apple)` into two parts: `new york city` and `big apple`. The
 before returning matching documents. Because the query syntax does not use
 whitespace as an operator, `new york city` is passed as-is to the analyzer.
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -50,7 +50,6 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 [[query-string-top-level-params]]
 ==== Top-level parameters for `query_string`
@@ -252,7 +251,7 @@ field1:query_term OR field2:query_term | ...
 
 For example, the following query
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -264,12 +263,11 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 matches the same words as
 
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -280,13 +278,12 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 Since several queries are generated from the individual search terms,
 combining them is automatically done using a `dis_max` query with a `tie_breaker`.
 For example (the `name` is boosted by 5 using `^5` notation):
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -299,14 +296,13 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 Simple wildcards can also be used to search "within" specific inner
 elements of the document. For example, if we have a `city` object with
 several fields (or inner object with fields) in it, we can automatically
 search on all "city" fields:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -318,13 +314,12 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 Another option is to provide the wildcard fields search in the query
 string itself (properly escaping the `*` sign), for example:
 `city.\*:something`:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -335,7 +330,6 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 NOTE: Since `\` (backslash) is a special character in json strings, it needs to
 be escaped, hence the two backslashes in the above `query_string`.
 The fields parameter can also include pattern-based field names,
 allowing them to automatically expand to the relevant fields (dynamically
 introduced fields included). For example:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -356,7 +350,6 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 [[query-string-multi-field-parms]]
 ====== Additional parameters for multiple field searches
@@ -411,7 +404,7 @@ For example, the following synonym: `ny, new york` would produce:
 
 It is also possible to match multi-term synonyms with conjunctions instead:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -424,7 +417,6 @@ GET /_search
    }
 }
 --------------------------------------------------
-// CONSOLE
 
 The example above creates a boolean query:
 
@@ -440,7 +432,7 @@ The `query_string` splits the query around each operator to create a boolean
 query for the entire input. You can use `minimum_should_match` to control how
 many "should" clauses in the resulting query should match.
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -455,7 +447,6 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 The example above creates a boolean query:
 
@@ -467,7 +458,7 @@ in the single field `title`.
 [[query-string-min-should-match-multi]]
 ===== How `minimum_should_match` works for multiple fields
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -483,7 +474,6 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 The example above creates a boolean query:
 
@@ -492,7 +482,7 @@ The example above creates a boolean query:
 that matches documents with the disjunction max over the fields `title` and
 `content`. Here the `minimum_should_match` parameter can't be applied.
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -508,7 +498,6 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 Adding explicit operators forces each term to be considered as a separate clause.
 
@@ -525,7 +514,7 @@ them made of the disjunction max over the fields for each term.
 A `cross_fields` value in the `type` field indicates fields with the same
 analyzer are grouped together when the input is analyzed.
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -542,7 +531,6 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 The example above creates a boolean query:
 

+ 2 - 2
docs/reference/query-dsl/query_filter_context.asciidoc

@@ -58,7 +58,7 @@ conditions are met:
 * The `status` field contains the exact word `published`.
 * The `publish_date` field contains a date from 1 Jan 2015 onwards.
 
-[source,js]
+[source,console]
 ------------------------------------
 GET /_search
 {
@@ -76,7 +76,7 @@ GET /_search
   }
 }
 ------------------------------------
-// CONSOLE
+
 <1> The `query` parameter indicates query context.
 <2> The `bool` and two `match` clauses are used in query context,
     which means that they are used to score how well each document

+ 4 - 6
docs/reference/query-dsl/range-query.asciidoc

@@ -12,7 +12,7 @@ Returns documents that contain terms within a provided range.
 The following search returns documents where the `age` field contains a term
 between `10` and `20`.
 
-[source,js]
+[source,console]
 ----
 GET _search
 {
@@ -27,7 +27,6 @@ GET _search
     }
 }
 ----
-// CONSOLE 
 
 [[range-query-top-level-params]]
 ==== Top-level parameters for `range`
@@ -149,7 +148,7 @@ When the `<field>` parameter is a <<date,`date`>> field datatype, you can use
 For example, the following search returns documents where the `timestamp` field
 contains a date between today and yesterday.
 
-[source,js]
+[source,console]
 ----
 GET _search
 {
@@ -163,7 +162,6 @@ GET _search
     }
 }
 ----
-// CONSOLE
 
 
 [[range-query-date-math-rounding]]
@@ -212,7 +210,7 @@ the entire month.
 You can use the `time_zone` parameter to convert `date` values to UTC using a
 UTC offset. For example:
 
-[source,js]
+[source,console]
 ----
 GET _search
 {
@@ -227,7 +225,7 @@ GET _search
     }
 }
 ----
-// CONSOLE
+
 <1> Indicates that `date` values use a UTC offset of `+01:00`.
 <2> With a UTC offset of `+01:00`, {es} converts this date to
 `2014-12-31T23:00:00 UTC`.

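As a point of comparison (a sketch only, not part of this change; the `timestamp` field and dates are illustrative), the same offset can be written directly into an ISO 8601 date value instead of supplying `time_zone`:

[source,console]
----
GET _search
{
    "query": {
        "range": {
            "timestamp": {
                "gte": "2015-01-01T00:00:00+01:00",
                "lte": "now"
            }
        }
    }
}
----

With the offset embedded in the value, the lower bound is still interpreted as `2014-12-31T23:00:00 UTC`.
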
+ 7 - 14
docs/reference/query-dsl/rank-feature-query.asciidoc

@@ -53,7 +53,7 @@ to relevance, indicated by a `positive_score_impact` value of `false`.
 - `topics`, a <<rank-features,`rank_features`>> field which contains a list of
 topics and a measure of how well each document is connected to this topic
 
-[source,js]
+[source,console]
 ----
 PUT /test
 {
@@ -73,13 +73,12 @@ PUT /test
   }
 }
 ----
-// CONSOLE
 // TESTSETUP
 
 
 Index several documents to the `test` index.
 
-[source,js]
+[source,console]
 ----
 PUT /test/_doc/1?refresh
 {
@@ -118,7 +117,6 @@ PUT /test/_doc/3?refresh
   }
 }
 ----
-// CONSOLE
 
 [[rank-feature-query-ex-query]]
 ===== Example query
@@ -126,7 +124,7 @@ PUT /test/_doc/3?refresh
 The following query searches for `2016` and boosts relevance scores based on
 `pagerank`, `url_length`, and the `sports` topic.
 
-[source,js]
+[source,console]
 ----
 GET /test/_search 
 {
@@ -162,7 +160,6 @@ GET /test/_search
   }
 }
 ----
-// CONSOLE
 
 
 [[rank-feature-top-level-params]]
@@ -232,7 +229,7 @@ than `0.5` otherwise. Scores are always `(0,1)`.
 If the rank feature has a negative score impact then the function will be
 computed as `pivot / (S + pivot)`, which decreases when `S` increases.
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /test/_search
 {
@@ -246,14 +243,13 @@ GET /test/_search
   }
 }
 --------------------------------------------------
-// CONSOLE
 
 If a `pivot` value is not provided, {es} computes a default value equal to the
 approximate geometric mean of all rank feature values in the index. We recommend
 using this default value if you haven't had the opportunity to train a good
 pivot value.
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /test/_search
 {
@@ -265,7 +261,6 @@ GET /test/_search
   }
 }
 --------------------------------------------------
-// CONSOLE
 
 [[rank-feature-query-logarithm]]
 ===== Logarithm
@@ -275,7 +270,7 @@ scaling factor. Scores are unbounded.
 
 This function only supports rank features that have a positive score impact.
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /test/_search
 {
@@ -289,7 +284,6 @@ GET /test/_search
   }
 }
 --------------------------------------------------
-// CONSOLE
 
 [[rank-feature-query-sigmoid]]
 ===== Sigmoid
@@ -302,7 +296,7 @@ The `exponent` must be positive and is typically in `[0.5, 1]`. A
 good value should be computed via training. If you don't have the opportunity to
 do so, we recommend you use the `saturation` function instead.
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /test/_search
 {
@@ -317,4 +311,3 @@ GET /test/_search
   }
 }
 --------------------------------------------------
-// CONSOLE

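The request body of the sigmoid example is cut off in the hunk above. A minimal sketch against the same `test` index (the `pivot` and `exponent` values are illustrative, not trained):

[source,console]
--------------------------------------------------
GET /test/_search
{
  "query": {
    "rank_feature": {
      "field": "pagerank",
      "sigmoid": {
        "pivot": 7,
        "exponent": 0.6
      }
    }
  }
}
--------------------------------------------------
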
+ 1 - 2
docs/reference/query-dsl/regexp-query.asciidoc

@@ -19,7 +19,7 @@ that begins with `k` and ends with `y`. The `.*` operators match any
 characters of any length, including no characters. Matching
 terms can include `ky`, `kay`, and `kimchy`.
 
-[source,js]
+[source,console]
 ----
 GET /_search
 {
@@ -35,7 +35,6 @@ GET /_search
     }
 }
 ----
-// CONSOLE
 
 
 [[regexp-top-level-params]]

+ 2 - 4
docs/reference/query-dsl/script-query.asciidoc

@@ -11,7 +11,7 @@ Filters documents based on a provided <<modules-scripting-using,script>>. The
 [[script-query-ex-request]]
 ==== Example request
 
-[source,js]
+[source,console]
 ----
 GET /_search
 {
@@ -29,7 +29,6 @@ GET /_search
     }
 }
 ----
-// CONSOLE
 
 
 [[script-top-level-params]]
@@ -49,7 +48,7 @@ Like <<query-filter-context,filters>>, scripts are cached for faster execution.
 If you frequently change the arguments of a script, we recommend you store them
 in the script's `params` parameter. For example:
 
-[source,js]
+[source,console]
 ----
 GET /_search
 {
@@ -70,4 +69,3 @@ GET /_search
     }
 }
 ----
-// CONSOLE

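For instance, a sketch of a script query that keeps its changing value in `params` (the `num` field and `threshold` parameter are hypothetical):

[source,console]
----
GET /_search
{
    "query": {
        "bool": {
            "filter": {
                "script": {
                    "script": {
                        "source": "doc['num'].value > params.threshold",
                        "params": {
                            "threshold": 5
                        }
                    }
                }
            }
        }
    }
}
----

Because only the `params` block changes between requests, the compiled script can be reused from the cache.
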
+ 1 - 2
docs/reference/query-dsl/script-score-query.asciidoc

@@ -14,7 +14,7 @@ The `script_score` query is useful if, for example, a scoring function is expens
 ==== Example request
 The following `script_score` query assigns each returned document a score equal to the `likes` field value divided by `10`.
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -30,7 +30,6 @@ GET /_search
      }
 }
 --------------------------------------------------
-// CONSOLE
 
 
 [[script-score-top-level-params]]

+ 3 - 6
docs/reference/query-dsl/shape-query.asciidoc

@@ -24,7 +24,7 @@ https://en.wikipedia.org/wiki/Well-known_text_representation_of_geometry[Well Kn
 
 Given the following index:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 PUT /example
 {
@@ -46,13 +46,12 @@ POST /example/_doc?refresh
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TESTSETUP
 
 The following query will find the point using Elasticsearch's
 `envelope` GeoJSON extension:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /example/_search
 {
@@ -69,7 +68,6 @@ GET /example/_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 ==== Pre-Indexed Shape
 
@@ -90,7 +88,7 @@ Defaults to 'shape'.
 The following is an example of using the Filter with a pre-indexed
 shape:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 PUT /shapes
 {
@@ -126,7 +124,6 @@ GET /example/_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 ==== Spatial Relations
 

+ 7 - 12
docs/reference/query-dsl/simple-query-string-query.asciidoc

@@ -20,7 +20,7 @@ parts of the query string.
 [[simple-query-string-query-ex-request]]
 ==== Example request
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -33,7 +33,6 @@ GET /_search
   }
 }
 --------------------------------------------------
-// CONSOLE
 
 
 [[simple-query-string-top-level-params]]
@@ -153,7 +152,7 @@ To use one of these characters literally, escape it with a preceding backslash
 The behavior of these operators may differ depending on the `default_operator`
 value. For example:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -165,7 +164,6 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 This search is intended to only return documents containing `foo` or `bar` that
 also do **not** contain `baz`. However, because of a `default_operator` of `OR`,
@@ -182,7 +180,7 @@ To explicitly enable only specific operators, use a `|` separator. For example,
 a `flags` value of `OR|AND|PREFIX` disables all operators except `OR`, `AND`,
 and `PREFIX`.
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -194,7 +192,6 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 [[supported-flags-values]]
 ====== Valid values
@@ -247,7 +244,7 @@ Enables whitespace as split characters.
 
 Fields can be specified with wildcards, e.g.:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -259,12 +256,12 @@ GET /_search
   }
 }
 --------------------------------------------------
-// CONSOLE
+
 <1> Query the `title`, `first_name` and `last_name` fields.
 
 Individual fields can be boosted with the caret (`^`) notation:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -276,7 +273,6 @@ GET /_search
   }
 }
 --------------------------------------------------
-// CONSOLE
 
 <1> The `subject` field is three times as important as the `message` field.
 
@@ -291,7 +287,7 @@ For example, the following synonym: `"ny, new york"` would produce:
 
 It is also possible to match multi terms synonyms with conjunctions instead:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -303,7 +299,6 @@ GET /_search
    }
 }
 --------------------------------------------------
-// CONSOLE
 
 The example above creates a boolean query:
 

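Tying this back to the `default_operator` example earlier in this file: one possible rewrite (a sketch, not the wording used in the underlying docs) that returns documents containing `foo` or `bar` but not `baz` is to group the optional terms explicitly and switch the default operator to `and`:

[source,console]
--------------------------------------------------
GET /_search
{
  "query": {
    "simple_query_string": {
      "query": "(foo | bar) -baz",
      "default_operator": "and"
    }
  }
}
--------------------------------------------------
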
+ 1 - 2
docs/reference/query-dsl/span-field-masking-query.asciidoc

@@ -12,7 +12,7 @@ Span field masking query is invaluable in conjunction with *multi-fields* when s
 
 Example:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -41,6 +41,5 @@ GET /_search
   }
 }
 --------------------------------------------------
-// CONSOLE
 
 Note: as span field masking query returns the masked field, scoring will be done using the norms of the field name supplied. This may lead to unexpected scoring behaviour.

+ 1 - 2
docs/reference/query-dsl/span-first-query.asciidoc

@@ -7,7 +7,7 @@
 Matches spans near the beginning of a field. The span first query maps
 to Lucene `SpanFirstQuery`. Here is an example:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -21,7 +21,6 @@ GET /_search
     }
 }    
 --------------------------------------------------
-// CONSOLE
 
 The `match` clause can be any other span type query. The `end` controls
 the maximum end position permitted in a match.

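A sketch showing the `match` and `end` parameters described above in one piece (the `user` field and value are illustrative):

[source,console]
--------------------------------------------------
GET /_search
{
    "query": {
        "span_first": {
            "match": {
                "span_term": { "user": "kimchy" }
            },
            "end": 3
        }
    }
}
--------------------------------------------------

Here the term must end within the first three positions of the `user` field.
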
+ 2 - 4
docs/reference/query-dsl/span-multi-term-query.asciidoc

@@ -8,7 +8,7 @@ The `span_multi` query allows you to wrap a `multi term query` (one of wildcard,
 fuzzy, prefix, range or regexp query) as a `span query`, so
 it can be nested. Example:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -21,11 +21,10 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 A boost can also be associated with the query:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -38,7 +37,6 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 WARNING: `span_multi` queries will hit a too many clauses failure if the number of terms that match the query exceeds the
 boolean query limit (defaults to 1024). To avoid an unbounded expansion you can set the <<query-dsl-multi-term-rewrite,

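One way to bound that expansion (a sketch; the field, prefix, and rewrite value are assumptions) is to set a `top_terms_*` rewrite on the wrapped multi-term query:

[source,console]
--------------------------------------------------
GET /_search
{
    "query": {
        "span_multi": {
            "match": {
                "prefix": {
                    "user": {
                        "value": "ki",
                        "rewrite": "top_terms_boost_1000"
                    }
                }
            }
        }
    }
}
--------------------------------------------------

The rewrite keeps only the highest-scoring matching terms (here at most 1000), so the expanded query stays under the clause limit.
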
+ 1 - 2
docs/reference/query-dsl/span-near-query.asciidoc

@@ -9,7 +9,7 @@ maximum number of intervening unmatched positions, as well as whether
 matches are required to be in-order. The span near query maps to Lucene
 `SpanNearQuery`. Here is an example:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -26,7 +26,6 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 The `clauses` element is a list of one or more other span type queries
 and the `slop` controls the maximum number of intervening unmatched

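For reference, a sketch of the full request shape (field and values illustrative), showing the `clauses`, `slop`, and `in_order` parameters together:

[source,console]
--------------------------------------------------
GET /_search
{
    "query": {
        "span_near": {
            "clauses": [
                { "span_term": { "field": "value1" } },
                { "span_term": { "field": "value2" } },
                { "span_term": { "field": "value3" } }
            ],
            "slop": 12,
            "in_order": false
        }
    }
}
--------------------------------------------------
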
+ 1 - 2
docs/reference/query-dsl/span-not-query.asciidoc

@@ -9,7 +9,7 @@ within x tokens before (controlled by the parameter `pre`) or y tokens
 after (controlled by the parameter `post`) another SpanQuery. The span not
 query maps to Lucene `SpanNotQuery`. Here is an example:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -32,7 +32,6 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 The `include` and `exclude` clauses can be any span type query. The
 `include` clause is the span query whose matches are filtered, and the

+ 1 - 2
docs/reference/query-dsl/span-or-query.asciidoc

@@ -7,7 +7,7 @@
 Matches the union of its span clauses. The span or query maps to Lucene
 `SpanOrQuery`. Here is an example:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -22,6 +22,5 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 The `clauses` element is a list of one or more other span type queries.

+ 3 - 6
docs/reference/query-dsl/span-term-query.asciidoc

@@ -7,7 +7,7 @@
 Matches spans containing a term. The span term query maps to Lucene
 `SpanTermQuery`. Here is an example:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -16,11 +16,10 @@ GET /_search
     }
 }    
 --------------------------------------------------
-// CONSOLE
 
 A boost can also be associated with the query:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -29,11 +28,10 @@ GET /_search
     }
 }    
 --------------------------------------------------
-// CONSOLE
 
 Or:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -42,4 +40,3 @@ GET /_search
     }
 }    
 --------------------------------------------------
-// CONSOLE

+ 1 - 2
docs/reference/query-dsl/span-within-query.asciidoc

@@ -7,7 +7,7 @@
 Returns matches which are enclosed inside another span query. The span within
 query maps to Lucene `SpanWithinQuery`. Here is an example:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -30,7 +30,6 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 The `big` and `little` clauses can be any span type query. Matching
 spans from `little` that are enclosed within `big` are returned.

+ 6 - 12
docs/reference/query-dsl/term-query.asciidoc

@@ -24,7 +24,7 @@ instead.
 [[term-query-ex-request]]
 ==== Example request
 
-[source,js]
+[source,console]
 ----
 GET /_search
 {
@@ -38,7 +38,6 @@ GET /_search
     }
 }
 ----
-// CONSOLE
 
 [[term-top-level-params]]
 ==== Top-level parameters for `term`
@@ -91,7 +90,7 @@ To see the difference in search results, try the following example.
 +
 --
 
-[source,js]
+[source,console]
 ----
 PUT my_index
 {
@@ -102,7 +101,6 @@ PUT my_index
     }
 }
 ----
-// CONSOLE
 
 --
 
@@ -111,14 +109,13 @@ field.
 +
 --
 
-[source,js]
+[source,console]
 ----
 PUT my_index/_doc/1
 {
   "full_text":   "Quick Brown Foxes!"
 }
 ----
-// CONSOLE
 // TEST[continued]
 
 Because `full_text` is a `text` field, {es} changes `Quick Brown Foxes!` to
@@ -131,7 +128,7 @@ field. Include the `pretty` parameter so the response is more readable.
 +
 --
 
-[source,js]
+[source,console]
 ----
 GET my_index/_search?pretty
 {
@@ -142,7 +139,6 @@ GET my_index/_search?pretty
   }
 }
 ----
-// CONSOLE
 // TEST[continued]
 
 Because the `full_text` field no longer contains the *exact* term `Quick Brown
@@ -157,16 +153,15 @@ field.
 
 ////
 
-[source,js]
+[source,console]
 ----
 POST my_index/_refresh
 ----
-// CONSOLE
 // TEST[continued]
 
 ////
 
-[source,js]
+[source,console]
 ----
 GET my_index/_search?pretty
 {
@@ -177,7 +172,6 @@ GET my_index/_search?pretty
   }
 }
 ----
-// CONSOLE
 // TEST[continued]
 
 Unlike the `term` query, the `match` query analyzes your provided search term,

+ 6 - 12
docs/reference/query-dsl/terms-query.asciidoc

@@ -15,7 +15,7 @@ except you can search for multiple values.
 The following search returns documents where the `user` field contains `kimchy`
 or `elasticsearch`.
 
-[source,js]
+[source,console]
 ----
 GET /_search
 {
@@ -27,7 +27,6 @@ GET /_search
     }
 }
 ----
-// CONSOLE
 
 [[terms-top-level-params]]
 ==== Top-level parameters for `terms`
@@ -124,7 +123,7 @@ To see how terms lookup works, try the following example.
 +
 --
 
-[source,js]
+[source,console]
 ----
 PUT my_index
 {
@@ -135,7 +134,6 @@ PUT my_index
     }
 }
 ----
-// CONSOLE
 --
 
 . Index a document with an ID of 1 and values of `["blue", "green"]` in the
@@ -143,14 +141,13 @@ PUT my_index
 +
 --
 
-[source,js]
+[source,console]
 ----
 PUT my_index/_doc/1
 {
   "color":   ["blue", "green"]
 }
 ----
-// CONSOLE
 // TEST[continued]
 --
 
@@ -159,14 +156,13 @@ field.
 +
 --
 
-[source,js]
+[source,console]
 ----
 PUT my_index/_doc/2
 {
   "color":   "blue"
 }
 ----
-// CONSOLE
 // TEST[continued]
 --
 
@@ -178,16 +174,15 @@ parameter so the response is more readable.
 
 ////
 
-[source,js]
+[source,console]
 ----
 POST my_index/_refresh
 ----
-// CONSOLE
 // TEST[continued]
 
 ////
 
-[source,js]
+[source,console]
 ----
 GET my_index/_search?pretty
 {
@@ -202,7 +197,6 @@ GET my_index/_search?pretty
   }
 }
 ----
-// CONSOLE
 // TEST[continued]
 
 Because document 2 and document 1 both contain `blue` as a value in the `color`

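The lookup request itself is truncated in the hunk above; a sketch of how the terms lookup described in these steps might look (document `2` supplies the `color` values):

[source,console]
----
GET my_index/_search?pretty
{
  "query": {
    "terms": {
        "color": {
            "index": "my_index",
            "id": "2",
            "path": "color"
        }
    }
  }
}
----

Elasticsearch fetches the `color` values from document `2` and uses them as the search terms, which is why both documents match on `blue`.
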
+ 5 - 10
docs/reference/query-dsl/terms-set-query.asciidoc

@@ -45,7 +45,7 @@ programming languages known by the job candidate.
 * `required_matches`, a <<number, numeric>> `long` field. This field contains
 the number of matching terms required to return a document.
 
-[source,js]
+[source,console]
 ----
 PUT /job-candidates
 {
@@ -64,7 +64,6 @@ PUT /job-candidates
     }
 }
 ----
-// CONSOLE
 // TESTSETUP
 
 --
@@ -82,7 +81,7 @@ PUT /job-candidates
 Include the `?refresh` parameter so the document is immediately available for
 search.
 
-[source,js]
+[source,console]
 ----
 PUT /job-candidates/_doc/1?refresh
 {
@@ -91,7 +90,6 @@ PUT /job-candidates/_doc/1?refresh
     "required_matches": 2
 }
 ----
-// CONSOLE
 
 --
 
@@ -105,7 +103,7 @@ PUT /job-candidates/_doc/1?refresh
 
 * `2` in the `required_matches` field.
 
-[source,js]
+[source,console]
 ----
 PUT /job-candidates/_doc/2?refresh
 {
@@ -114,7 +112,6 @@ PUT /job-candidates/_doc/2?refresh
     "required_matches": 2
 }
 ----
-// CONSOLE
 
 --
 
@@ -135,7 +132,7 @@ The `minimum_should_match_field` is `required_matches`. This means the
 number of matching terms required is `2`, the value of the `required_matches`
 field.
 
-[source,js]
+[source,console]
 ----
 GET /job-candidates/_search
 {
@@ -149,7 +146,6 @@ GET /job-candidates/_search
     }
 }
 ----
-// CONSOLE
 
 [[terms-set-top-level-params]]
 ==== Top-level parameters for `terms_set`
@@ -214,7 +210,7 @@ number of terms provided in the `terms` field.
 * The required number of terms to match is `2`, the value of the
 `required_matches` field.
 
-[source,js]
+[source,console]
 ----
 GET /job-candidates/_search
 {
@@ -231,4 +227,3 @@ GET /job-candidates/_search
     }
 }
 ----
-// CONSOLE

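The scripted variant whose body is cut off above can also be sketched out. In the script, `params.num_terms` holds the number of terms supplied in the query; the field names follow the mapping set up earlier in this file, and the listed terms are illustrative:

[source,console]
----
GET /job-candidates/_search
{
    "query": {
        "terms_set": {
            "programming_languages": {
                "terms": [ "c++", "java", "php" ],
                "minimum_should_match_script": {
                    "source": "Math.min(params.num_terms, doc['required_matches'].value)"
                }
            }
        }
    }
}
----
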
+ 1 - 2
docs/reference/query-dsl/wildcard-query.asciidoc

@@ -17,7 +17,7 @@ The following search returns documents where the `user` field contains a term
 that begins with `ki` and ends with `y`. These matching terms can include `kiy`,
 `kity`, or `kimchy`.
 
-[source,js]
+[source,console]
 ----
 GET /_search
 {
@@ -32,7 +32,6 @@ GET /_search
     }
 }
 ----
-// CONSOLE
 
 [[wildcard-top-level-params]]
 ==== Top-level parameters for `wildcard`

+ 1 - 2
docs/reference/query-dsl/wrapper-query.asciidoc

@@ -6,7 +6,7 @@
 
 A query that accepts any other query as a base64 encoded string.
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -17,7 +17,6 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 <1> Base64 encoded string:  `{"term" : { "user" : "Kimchy" }}`
 

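Putting the callout's base64 string back into a complete request gives a sketch like the following (the string decodes to `{"term" : { "user" : "Kimchy" }}`, as noted above):

[source,console]
--------------------------------------------------
GET /_search
{
    "query": {
        "wrapper": {
            "query": "eyJ0ZXJtIiA6IHsgInVzZXIiIDogIktpbWNoeSIgfX0="
        }
    }
}
--------------------------------------------------
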
+ 1 - 2
docs/reference/redirects.asciidoc

@@ -404,7 +404,7 @@ GET _search
 move the query and filter to the `must` and `filter` parameters in the `bool`
 query:
 
-[source,js]
+[source,console]
 -------------------------
 GET _search
 {
@@ -424,7 +424,6 @@ GET _search
   }
 }
 -------------------------
-// CONSOLE
 
 [role="exclude",id="query-dsl-or-query"]
 === Or query

+ 3 - 6
docs/reference/rest-api/info.asciidoc

@@ -41,11 +41,10 @@ The information provided by this API includes:
 
 The following example queries the info API:
 
-[source,js]
+[source,console]
 ------------------------------------------------------------
 GET /_xpack
 ------------------------------------------------------------
-// CONSOLE
 
 Example response:
 
@@ -146,16 +145,14 @@ Example response:
 
 The following example only returns the build and features information:
 
-[source,js]
+[source,console]
 ------------------------------------------------------------
 GET /_xpack?categories=build,features
 ------------------------------------------------------------
-// CONSOLE
 
 The following example removes the descriptions from the response:
 
-[source,js]
+[source,console]
 ------------------------------------------------------------
 GET /_xpack?human=false
 ------------------------------------------------------------
-// CONSOLE

+ 1 - 2
docs/reference/rollup/apis/delete-job.asciidoc

@@ -76,11 +76,10 @@ POST my_rollup_index/_delete_by_query
 
 If we have a rollup job named `sensor`, it can be deleted with:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 DELETE _rollup/job/sensor
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sensor_rollup_job]
 
 Which will return the response:

+ 2 - 4
docs/reference/rollup/apis/get-job.asciidoc

@@ -79,11 +79,10 @@ state is set, the job will remove itself from the cluster.
 If we have already created a rollup job named `sensor`, the details about the
 job can be retrieved with:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET _rollup/job/sensor
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sensor_rollup_job]
 
 The API yields the following response:
@@ -153,7 +152,7 @@ The API yields the following response:
 The `jobs` array contains a single job (`id: sensor`) since we requested a single job in the endpoint's URL. 
 If we add another job, we can see how multi-job responses are handled:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 PUT _rollup/job/sensor2 <1>
 {
@@ -185,7 +184,6 @@ PUT _rollup/job/sensor2 <1>
 
 GET _rollup/job/_all <2>
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sensor_rollup_job]
 <1> We create a second job with name `sensor2`
 <2> Then request all jobs by using `_all` in the GetJobs API

+ 1 - 2
docs/reference/rollup/apis/put-job.asciidoc

@@ -70,7 +70,7 @@ For more details about the job configuration, see <<rollup-job-config>>.
 The following example creates a {rollup-job} named "sensor", targeting the
 "sensor-*" index pattern:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 PUT _rollup/job/sensor
 {
@@ -100,7 +100,6 @@ PUT _rollup/job/sensor
     ]
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sensor_index]
 
 When the job is created, you receive the following results:

+ 4 - 8
docs/reference/rollup/apis/rollup-caps.asciidoc

@@ -51,7 +51,7 @@ Imagine we have an index named `sensor-1` full of raw data.  We know that the data will grow over time, so there
 will be a `sensor-2`, `sensor-3`, etc.  Let's create a Rollup job that targets the index pattern `sensor-*` to accommodate
 this future scaling:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 PUT _rollup/job/sensor
 {
@@ -81,16 +81,14 @@ PUT _rollup/job/sensor
     ]
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sensor_index]
 
 We can then retrieve the rollup capabilities of that index pattern (`sensor-*`) via the following command:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET _rollup/data/sensor-*
 --------------------------------------------------
-// CONSOLE
 // TEST[continued]
 
 Which will yield the following response:
@@ -155,20 +153,18 @@ configurations available.
 
 We could also retrieve the same information with a request to `_all`:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET _rollup/data/_all
 --------------------------------------------------
-// CONSOLE
 // TEST[continued]
 
 But note that if we use the concrete index name (`sensor-1`), we'll retrieve no rollup capabilities:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET _rollup/data/sensor-1
 --------------------------------------------------
-// CONSOLE
 // TEST[continued]
 
 [source,console-result]

+ 3 - 6
docs/reference/rollup/apis/rollup-index-caps.asciidoc

@@ -42,7 +42,7 @@ For more information, see
 Imagine we have an index named `sensor-1` full of raw data.  We know that the data will grow over time, so there
 will be a `sensor-2`, `sensor-3`, etc.  Let's create a Rollup job, which stores its data in `sensor_rollup`:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 PUT _rollup/job/sensor
 {
@@ -72,17 +72,15 @@ PUT _rollup/job/sensor
     ]
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sensor_index]
 
 If at a later date, we'd like to determine what jobs and capabilities were stored in the `sensor_rollup` index, we can use the Get Rollup
 Index API:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /sensor_rollup/_rollup/data
 --------------------------------------------------
-// CONSOLE
 // TEST[continued]
 
 Note how we are requesting the concrete rollup index name (`sensor_rollup`) as the first part of the URL.
@@ -150,10 +148,9 @@ configurations available.
 
 Like other APIs that interact with indices, you can specify index patterns instead of explicit indices:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /*_rollup/_rollup/data
 --------------------------------------------------
-// CONSOLE
 // TEST[continued]
 

+ 1 - 2
docs/reference/rollup/apis/rollup-job-config.asciidoc

@@ -13,7 +13,7 @@ should be grouped on, and what metrics to collect for each group.
 
 A full job configuration might look like this:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 PUT _rollup/job/sensor
 {
@@ -47,7 +47,6 @@ PUT _rollup/job/sensor
     ]
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sensor_index]
 
 ==== Logistical Details

+ 4 - 8
docs/reference/rollup/apis/rollup-search.asciidoc

@@ -51,7 +51,7 @@ omitted entirely.
 
 Imagine we have an index named `sensor-1` full of raw data, and we have created a rollup job with the following configuration:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 PUT _rollup/job/sensor
 {
@@ -81,14 +81,13 @@ PUT _rollup/job/sensor
     ]
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sensor_index]
 
 This rolls up the `sensor-*` pattern and stores the results in `sensor_rollup`.  To search this rolled-up data, we
 need to use the `_rollup_search` endpoint.  However, you'll notice that we can use regular query DSL to search the
 rolled-up data:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /sensor_rollup/_rollup_search
 {
@@ -102,7 +101,6 @@ GET /sensor_rollup/_rollup_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sensor_prefab_data]
 // TEST[s/_rollup_search/_rollup_search?filter_path=took,timed_out,terminated_early,_shards,hits,aggregations/]
 
@@ -141,7 +139,7 @@ Rollup searches are limited to functionality that was configured in the rollup j
 the average temperature because `avg` was not one of the configured metrics for the `temperature` field.  If we try
 to execute that search:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET sensor_rollup/_rollup_search
 {
@@ -155,7 +153,6 @@ GET sensor_rollup/_rollup_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[continued]
 // TEST[catch:/illegal_argument_exception/]
 
@@ -185,7 +182,7 @@ The Rollup Search API has the capability to search across both "live", non-rollu
 data.  This is done by simply adding the live indices to the URI:
 
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET sensor-1,sensor_rollup/_rollup_search <1>
 {
@@ -199,7 +196,6 @@ GET sensor-1,sensor_rollup/_rollup_search <1>
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[continued]
 // TEST[s/_rollup_search/_rollup_search?filter_path=took,timed_out,terminated_early,_shards,hits,aggregations/]
 <1> Note the URI now searches `sensor-1` and `sensor_rollup` at the same time

+ 1 - 2
docs/reference/rollup/apis/start-job.asciidoc

@@ -47,11 +47,10 @@ to start a job that is already started, nothing happens.
 
 If we have already created a {rollup-job} named `sensor`, it can be started with:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST _rollup/job/sensor/_start
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sensor_rollup_job]
 
 Which will return the response:

+ 1 - 2
docs/reference/rollup/apis/stop-job.asciidoc

@@ -72,11 +72,10 @@ the indexer has fully stopped. This is accomplished with the
 `wait_for_completion` query parameter, and optionally a `timeout`:
 
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST _rollup/job/sensor/_stop?wait_for_completion=true&timeout=10s
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sensor_started_rollup_job]
 
 The parameter blocks the API call from returning until either the job has moved

+ 4 - 8
docs/reference/rollup/rollup-getting-started.asciidoc

@@ -28,7 +28,7 @@ look like this:
 We'd like to roll up these documents into hourly summaries, which will allow us to generate reports and dashboards with any time interval
 one hour or greater.  A rollup job might look like this:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 PUT _rollup/job/sensor
 {
@@ -57,7 +57,6 @@ PUT _rollup/job/sensor
     ]
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sensor_index]
 
 We give the job the ID of "sensor" (in the URL: `PUT _rollup/job/sensor`), and tell it to roll up the index pattern `"sensor-*"`.
@@ -111,11 +110,10 @@ you to stop them later as a way to temporarily pause, without deleting the confi
 
 To start the job, execute this command:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST _rollup/job/sensor/_start
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sensor_rollup_job]
 
 [float]
@@ -126,7 +124,7 @@ so that you can use the same Query DSL syntax that you are accustomed to... it j
 
 For example, take this query:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /sensor_rollup/_rollup_search
 {
@@ -140,7 +138,6 @@ GET /sensor_rollup/_rollup_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sensor_prefab_data]
 
 It's a simple aggregation that calculates the maximum of the `temperature` field.  But you'll notice that it is being sent to the `sensor_rollup`
@@ -184,7 +181,7 @@ is nearly identical to normal DSL, making it easy to integrate into dashboards a
 
 Finally, we can use those grouping fields we defined to construct a more complicated query:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /sensor_rollup/_rollup_search
 {
@@ -218,7 +215,6 @@ GET /sensor_rollup/_rollup_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sensor_prefab_data]
 
 Which returns a corresponding response:

+ 1 - 2
docs/reference/rollup/rollup-search-limitations.asciidoc

@@ -41,7 +41,7 @@ rollup job to store metrics about the `price` field, you won't be able to use th
 For example, the `temperature` field in the following query has been stored in a rollup job... but not with an `avg` metric, which means
 the usage of `avg` here is not allowed:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET sensor_rollup/_rollup_search
 {
@@ -55,7 +55,6 @@ GET sensor_rollup/_rollup_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sensor_prefab_data]
 // TEST[catch:/illegal_argument_exception/]
 

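By contrast, a metric that was configured on the job, such as the `max` on `temperature` used in the getting-started example above, is accepted. A minimal sketch:

[source,console]
--------------------------------------------------
GET sensor_rollup/_rollup_search
{
    "size": 0,
    "aggregations": {
        "max_temperature": {
            "max": {
                "field": "temperature"
            }
        }
    }
}
--------------------------------------------------
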
+ 1 - 2
docs/reference/scripting/engine.asciidoc

@@ -24,7 +24,7 @@ You can execute the script by specifying its `lang` as `expert_scripts`, and the
 of the script as the script source:
 
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /_search
 {
@@ -53,5 +53,4 @@ POST /_search
   }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[skip:we don't have an expert script plugin installed to test this]

+ 4 - 6
docs/reference/scripting/fields.asciidoc

@@ -42,7 +42,7 @@ Here's an example of using a script in a
 <<query-dsl-function-score-query,`function_score` query>> to alter the
 relevance `_score` of each document:
 
-[source,js]
+[source,console]
 -------------------------------------
 PUT my_index/_doc/1?refresh
 {
@@ -75,7 +75,6 @@ GET my_index/_search
   }
 }
 -------------------------------------
-// CONSOLE
 
 
 [float]
@@ -87,7 +86,7 @@ script is to use the `doc['field_name']` syntax, which retrieves the field
 value from <<doc-values,doc values>>. Doc values are a columnar field value
 store, enabled by default on all fields except for <<text,analyzed `text` fields>>.
 
-[source,js]
+[source,console]
 -------------------------------
 PUT my_index/_doc/1?refresh
 {
@@ -109,7 +108,6 @@ GET my_index/_search
   }
 }
 -------------------------------
-// CONSOLE
 
 Doc-values can only return "simple" field values like numbers, dates, geo-
 points, terms, etc., or arrays of these values if the field is multi-valued.
@@ -170,7 +168,7 @@ doc values.
 
 For instance:
 
-[source,js]
+[source,console]
 -------------------------------
 PUT my_index
 {
@@ -216,7 +214,7 @@ GET my_index/_search
   }
 }
 -------------------------------
-// CONSOLE
+
 <1> The `title` field is not stored and so cannot be used with the `_fields[]` syntax.
 <2> The `title` field can still be accessed from the `_source`.
 

+ 5 - 10
docs/reference/scripting/using.asciidoc

@@ -20,7 +20,7 @@ the same pattern:
 For example, the following script is used in a search request to return a
 <<request-body-search-script-fields, scripted field>>:
 
-[source,js]
+[source,console]
 -------------------------------------
 PUT my_index/_doc/1
 {
@@ -42,7 +42,6 @@ GET my_index/_search
   }
 }
 -------------------------------------
-// CONSOLE
 
 [float]
 === Script parameters
@@ -144,7 +143,7 @@ The following are examples of using a stored script that lives at
 
 First, create the script called `calculate-score` in the cluster state:
 
-[source,js]
+[source,console]
 -----------------------------------
 POST _scripts/calculate-score
 {
@@ -154,20 +153,18 @@ POST _scripts/calculate-score
   }
 }
 -----------------------------------
-// CONSOLE
 
 This same script can be retrieved with:
 
-[source,js]
+[source,console]
 -----------------------------------
 GET _scripts/calculate-score
 -----------------------------------
-// CONSOLE
 // TEST[continued]
 
 Stored scripts can be used by specifying the `id` parameters as follows:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET _search
 {
@@ -183,16 +180,14 @@ GET _search
   }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[continued]
 
 And deleted with:
 
-[source,js]
+[source,console]
 -----------------------------------
 DELETE _scripts/calculate-score
 -----------------------------------
-// CONSOLE
 // TEST[continued]
 
 [float]

Some files were not shown because too many files changed in this diff