[DOCS] Replace "// CONSOLE" comments with [source,console] (#46159)

James Rodewig, 6 years ago
Commit f5827ba0ae
80 files changed, 318 insertions and 595 deletions
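
Every hunk in this commit applies the same pattern: an AsciiDoc listing that was tagged `[source,js]` and marked as a runnable console example by a trailing `// CONSOLE` comment is retagged as `[source,console]`, which makes the comment redundant, so it is deleted. As a composite illustration (assembled from snippets that appear in the hunks below, not copied from any one file), the change looks like this:

    -[source,js]
    +[source,console]
     --------------------------------------------------
     GET /_search
     --------------------------------------------------
    -// CONSOLE
     // TEST[setup:sales]

Any `// TEST[...]` directives that follow a listing are kept as-is; a handful of hunks below also replace the removed comment with a blank line to preserve spacing.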
  1. docs/reference/administering/backup-and-restore-security-config.asciidoc (+5 -10)
  2. docs/reference/aggregations/bucket/adjacency-matrix-aggregation.asciidoc (+1 -2)
  3. docs/reference/aggregations/bucket/autodatehistogram-aggregation.asciidoc (+6 -12)
  4. docs/reference/aggregations/bucket/children-aggregation.asciidoc (+4 -8)
  5. docs/reference/aggregations/bucket/composite-aggregation.asciidoc (+13 -26)
  6. docs/reference/aggregations/bucket/datehistogram-aggregation.asciidoc (+11 -22)
  7. docs/reference/aggregations/bucket/daterange-aggregation.asciidoc (+5 -10)
  8. docs/reference/aggregations/bucket/diversified-sampler-aggregation.asciidoc (+2 -4)
  9. docs/reference/aggregations/bucket/filter-aggregation.asciidoc (+1 -2)
  10. docs/reference/aggregations/bucket/filters-aggregation.asciidoc (+3 -6)
  11. docs/reference/aggregations/bucket/geodistance-aggregation.asciidoc (+5 -10)
  12. docs/reference/aggregations/bucket/geohashgrid-aggregation.asciidoc (+3 -6)
  13. docs/reference/aggregations/bucket/geotilegrid-aggregation.asciidoc (+2 -4)
  14. docs/reference/aggregations/bucket/global-aggregation.asciidoc (+1 -2)
  15. docs/reference/aggregations/bucket/histogram-aggregation.asciidoc (+5 -10)
  16. docs/reference/aggregations/bucket/iprange-aggregation.asciidoc (+4 -8)
  17. docs/reference/aggregations/bucket/missing-aggregation.asciidoc (+1 -2)
  18. docs/reference/aggregations/bucket/nested-aggregation.asciidoc (+2 -4)
  19. docs/reference/aggregations/bucket/parent-aggregation.asciidoc (+4 -8)
  20. docs/reference/aggregations/bucket/range-aggregation.asciidoc (+9 -18)
  21. docs/reference/aggregations/bucket/rare-terms-aggregation.asciidoc (+5 -10)
  22. docs/reference/aggregations/bucket/reverse-nested-aggregation.asciidoc (+4 -6)
  23. docs/reference/aggregations/bucket/sampler-aggregation.asciidoc (+2 -4)
  24. docs/reference/aggregations/bucket/significantterms-aggregation.asciidoc (+8 -13)
  25. docs/reference/aggregations/bucket/significanttext-aggregation.asciidoc (+6 -10)
  26. docs/reference/aggregations/bucket/terms-aggregation.asciidoc (+22 -42)
  27. docs/reference/aggregations/matrix/stats-aggregation.asciidoc (+2 -4)
  28. docs/reference/aggregations/metrics/avg-aggregation.asciidoc (+5 -10)
  29. docs/reference/aggregations/metrics/cardinality-aggregation.asciidoc (+5 -10)
  30. docs/reference/aggregations/metrics/extendedstats-aggregation.asciidoc (+6 -12)
  31. docs/reference/aggregations/metrics/geobounds-aggregation.asciidoc (+1 -2)
  32. docs/reference/aggregations/metrics/geocentroid-aggregation.asciidoc (+2 -4)
  33. docs/reference/aggregations/metrics/max-aggregation.asciidoc (+5 -10)
  34. docs/reference/aggregations/metrics/median-absolute-deviation-aggregation.asciidoc (+5 -10)
  35. docs/reference/aggregations/metrics/min-aggregation.asciidoc (+5 -10)
  36. docs/reference/aggregations/metrics/percentile-aggregation.asciidoc (+8 -16)
  37. docs/reference/aggregations/metrics/percentile-rank-aggregation.asciidoc (+10 -12)
  38. docs/reference/aggregations/metrics/scripted-metric-aggregation.asciidoc (+3 -6)
  39. docs/reference/aggregations/metrics/stats-aggregation.asciidoc (+5 -10)
  40. docs/reference/aggregations/metrics/sum-aggregation.asciidoc (+5 -10)
  41. docs/reference/aggregations/metrics/tophits-aggregation.asciidoc (+6 -10)
  42. docs/reference/aggregations/metrics/valuecount-aggregation.asciidoc (+3 -6)
  43. docs/reference/aggregations/metrics/weighted-avg-aggregation.asciidoc (+4 -8)
  44. docs/reference/aggregations/misc.asciidoc (+3 -6)
  45. docs/reference/aggregations/pipeline.asciidoc (+10 -10)
  46. docs/reference/aggregations/pipeline/avg-bucket-aggregation.asciidoc (+2 -2)
  47. docs/reference/aggregations/pipeline/bucket-script-aggregation.asciidoc (+1 -2)
  48. docs/reference/aggregations/pipeline/bucket-selector-aggregation.asciidoc (+1 -2)
  49. docs/reference/aggregations/pipeline/bucket-sort-aggregation.asciidoc (+3 -4)
  50. docs/reference/aggregations/pipeline/cumulative-cardinality-aggregation.asciidoc (+2 -4)
  51. docs/reference/aggregations/pipeline/cumulative-sum-aggregation.asciidoc (+1 -2)
  52. docs/reference/aggregations/pipeline/derivative-aggregation.asciidoc (+3 -6)
  53. docs/reference/aggregations/pipeline/extended-stats-bucket-aggregation.asciidoc (+1 -2)
  54. docs/reference/aggregations/pipeline/max-bucket-aggregation.asciidoc (+1 -2)
  55. docs/reference/aggregations/pipeline/min-bucket-aggregation.asciidoc (+1 -2)
  56. docs/reference/aggregations/pipeline/movfn-aggregation.asciidoc (+11 -22)
  57. docs/reference/aggregations/pipeline/percentiles-bucket-aggregation.asciidoc (+1 -2)
  58. docs/reference/aggregations/pipeline/serial-diff-aggregation.asciidoc (+1 -2)
  59. docs/reference/aggregations/pipeline/stats-bucket-aggregation.asciidoc (+1 -2)
  60. docs/reference/aggregations/pipeline/sum-bucket-aggregation.asciidoc (+1 -2)
  61. docs/reference/search/suggesters/misc.asciidoc (+1 -2)
  62. docs/reference/search/suggesters/phrase-suggest.asciidoc (+6 -10)
  63. docs/reference/search/uri-request.asciidoc (+2 -4)
  64. docs/reference/search/validate.asciidoc (+7 -14)
  65. docs/reference/setup/install/check-running.asciidoc (+1 -2)
  66. docs/reference/setup/logging-config.asciidoc (+1 -2)
  67. docs/reference/setup/secure-settings.asciidoc (+3 -2)
  68. docs/reference/setup/sysconfig/file-descriptors.asciidoc (+1 -2)
  69. docs/reference/setup/sysconfig/swap.asciidoc (+1 -2)
  70. docs/reference/sql/endpoints/rest.asciidoc (+11 -22)
  71. docs/reference/sql/endpoints/translate.asciidoc (+1 -2)
  72. docs/reference/sql/getting-started.asciidoc (+2 -4)
  73. docs/reference/upgrade/close-ml.asciidoc (+2 -4)
  74. docs/reference/upgrade/cluster_restart.asciidoc (+3 -6)
  75. docs/reference/upgrade/disable-shard-alloc.asciidoc (+1 -2)
  76. docs/reference/upgrade/open-ml.asciidoc (+1 -2)
  77. docs/reference/upgrade/reindex_upgrade.asciidoc (+1 -2)
  78. docs/reference/upgrade/rolling_upgrade.asciidoc (+4 -8)
  79. docs/reference/upgrade/synced-flush.asciidoc (+1 -2)
  80. docs/reference/vectors/vector-functions.asciidoc (+10 -18)
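
Because the edit is almost entirely mechanical, a change of this shape can be scripted. The sketch below is hypothetical: it is not taken from the PR, and it is not necessarily how these files were converted. It simply shows the transformation each file undergoes, namely retagging any `[source,js]` listing whose closing delimiter is immediately followed by a `// CONSOLE` comment and deleting that comment.

    # Hypothetical sketch only; assumes find, xargs, and Perl are available.
    # Retags [source,js] listings that carry a trailing "// CONSOLE" marker
    # as [source,console] and drops the marker; "// TEST[...]" lines and the
    # listing body are left untouched.
    find docs -name '*.asciidoc' -print0 |
      xargs -0 perl -0777 -pi -e \
        's{\[source,js\]\n(-{4,}\n(?:(?!-{4}).*\n)*?-{4,}\n)// CONSOLE\n}{[source,console]\n$1}g'

A pass like this would cover the bulk of the hunks below; the few places where the removed comment is replaced by a blank line would still need manual review.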

+ 5 - 10
docs/reference/administering/backup-and-restore-security-config.asciidoc

@@ -75,7 +75,7 @@ It is preferable to have a <<backup-security-repos, dedicated repository>> for
 this special index. If you wish, you can also snapshot the system indices for other {stack} components to this repository. 
 +
 --
-[source,js]
+[source,console]
 -----------------------------------
 PUT /_snapshot/my_backup
 {
@@ -85,7 +85,6 @@ PUT /_snapshot/my_backup
   }
 }
 -----------------------------------
-// CONSOLE
 
 The user calling this API must have the elevated `manage` cluster privilege to
 prevent non-administrators exfiltrating data.
@@ -99,7 +98,7 @@ The following example creates a new user `snapshot_user` in the
 {stack-ov}/native-realm.html[native realm], but it is not important which
 realm the user is a member of:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /_security/user/snapshot_user
 {
@@ -107,7 +106,6 @@ POST /_security/user/snapshot_user
   "roles" : [ "snapshot_user" ]
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[skip:security is not enabled in this fixture]
 
 --
@@ -118,7 +116,7 @@ POST /_security/user/snapshot_user
 The following example shows how to use the create snapshot API to backup
 the `.security` index to the `my_backup` repository:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 PUT /_snapshot/my_backup/snapshot_1
 {
@@ -126,7 +124,6 @@ PUT /_snapshot/my_backup/snapshot_1
   "include_global_state": true <1>
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[continued]
 
 <1> This parameter value captures all the persistent settings stored in the
@@ -189,18 +186,16 @@ the {security-features}.
 To restore your security configuration from a backup, first make sure that the
 repository holding `.security` snapshots is installed:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_snapshot/my_backup
 --------------------------------------------------
-// CONSOLE
 // TEST[continued]
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_snapshot/my_backup/snapshot_1
 --------------------------------------------------
-// CONSOLE
 // TEST[continued]
 
 Then log into one of the node hosts, navigate to {es} installation directory,

+ 1 - 2
docs/reference/aggregations/bucket/adjacency-matrix-aggregation.asciidoc

@@ -28,7 +28,7 @@ other than the default of the ampersand.
 
 Example:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 PUT /emails/_bulk?refresh
 { "index" : { "_id" : 1 } }
@@ -54,7 +54,6 @@ GET emails/_search
   }
 }
 --------------------------------------------------
-// CONSOLE
 
 In the above example, we analyse email messages to see which groups of individuals 
 have exchanged messages.

+ 6 - 12
docs/reference/aggregations/bucket/autodatehistogram-aggregation.asciidoc

@@ -10,7 +10,7 @@ The buckets field is optional, and will default to 10 buckets if not specified.
 
 Requesting a target of 10 buckets.
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /sales/_search?size=0
 {
@@ -24,7 +24,6 @@ POST /sales/_search?size=0
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sales]
 
 ==== Keys
@@ -37,7 +36,7 @@ date string using the format specified with the `format` parameter:
 TIP: If no `format` is specified, then it will use the first date
 <<mapping-date-format,format>> specified in the field mapping.
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /sales/_search?size=0
 {
@@ -52,7 +51,6 @@ POST /sales/_search?size=0
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sales]
 
 <1> Supports expressive date <<date-format-pattern,format pattern>>
@@ -119,7 +117,7 @@ Time zones may either be specified as an ISO 8601 UTC offset (e.g. `+01:00` or
 
 Consider the following example:
 
-[source,js]
+[source,console]
 ---------------------------------
 PUT my_index/log/1?refresh
 {
@@ -148,7 +146,6 @@ GET my_index/_search?size=0
   }
 }
 ---------------------------------
-// CONSOLE
 
 UTC is used if no time zone is specified, three 1-hour buckets are returned 
 starting at midnight UTC on 1 October 2015:
@@ -186,7 +183,7 @@ starting at midnight UTC on 1 October 2015:
 If a `time_zone` of `-01:00` is specified, then midnight starts at one hour before
 midnight UTC:
 
-[source,js]
+[source,console]
 ---------------------------------
 GET my_index/_search?size=0
 {
@@ -201,7 +198,6 @@ GET my_index/_search?size=0
   }
 }
 ---------------------------------
-// CONSOLE
 // TEST[continued]
 
 
@@ -273,7 +269,7 @@ The accepted units for `minimum_interval` are:
 * minute
 * second
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /sales/_search?size=0
 {
@@ -288,7 +284,6 @@ POST /sales/_search?size=0
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sales]
 
 ==== Missing value
@@ -297,7 +292,7 @@ The `missing` parameter defines how documents that are missing a value should be
 By default they will be ignored but it is also possible to treat them as if they
 had a value.
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /sales/_search?size=0
 {
@@ -312,7 +307,6 @@ POST /sales/_search?size=0
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sales]
 
 <1> Documents without a value in the `publish_date` field will fall into the same bucket as documents that have the value `2000-01-01`.

+ 4 - 8
docs/reference/aggregations/bucket/children-aggregation.asciidoc

@@ -9,7 +9,7 @@ This aggregation has a single option:
 
 For example, let's say we have an index of questions and answers. The answer type has the following `join` field in the mapping:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 PUT child_example
 {
@@ -25,7 +25,6 @@ PUT child_example
   }
 }
 --------------------------------------------------
-// CONSOLE
 
 The `question` document contain a tag field and the `answer` documents contain an owner field. With the `children`
 aggregation the tag buckets can be mapped to the owner buckets in a single request even though the two fields exist in
@@ -33,7 +32,7 @@ two different kinds of documents.
 
 An example of a question document:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 PUT child_example/_doc/1
 {
@@ -49,12 +48,11 @@ PUT child_example/_doc/1
   ]
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[continued]
 
 Examples of `answer` documents:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 PUT child_example/_doc/2?routing=1
 {
@@ -86,12 +84,11 @@ PUT child_example/_doc/3?routing=1&refresh
   "creation_date": "2009-05-05T13:45:37.030"
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[continued]
 
 The following request can be built that connects the two together:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST child_example/_search?size=0
 {
@@ -120,7 +117,6 @@ POST child_example/_search?size=0
   }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[continued]
 
 <1> The `type` points to type / mapping with the name `answer`.

+ 13 - 26
docs/reference/aggregations/bucket/composite-aggregation.asciidoc

@@ -112,7 +112,7 @@ The values are extracted from a field or a script exactly like the `terms` aggre
 
 Example:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -127,11 +127,10 @@ GET /_search
      }
 }
 --------------------------------------------------
-// CONSOLE
 
 Like the `terms` aggregation it is also possible to use a script to create the values for the composite buckets:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -155,7 +154,6 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 ===== Histogram
 
@@ -166,7 +164,7 @@ a value of `101` would be translated to `100` which is the key for the interval
 
 Example:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -181,11 +179,10 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 The values are built from a numeric field or a script that return numerical values:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -210,7 +207,6 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 
 ===== Date Histogram
@@ -218,7 +214,7 @@ GET /_search
 The `date_histogram` is similar to the `histogram` value source except that the interval
 is specified by date/time expression:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -233,7 +229,6 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 The example above creates an interval per day and translates all `timestamp` values to the start of its closest intervals.
 Available expressions for interval: `year`, `quarter`, `month`, `week`, `day`, `hour`, `minute`, `second`
@@ -248,7 +243,7 @@ Internally, a date is represented as a 64 bit number representing a timestamp in
 These timestamps are returned as the bucket keys. It is possible to return a formatted date string instead using
 the format specified with the format parameter:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -271,7 +266,6 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 <1> Supports expressive date <<date-format-pattern,format pattern>>
 
@@ -291,7 +285,7 @@ The `sources` parameter accepts an array of values source.
 It is possible to mix different values source to create composite buckets.
 For example:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -307,14 +301,13 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 This will create composite buckets from the values created by two values source, a `date_histogram` and a `terms`.
 Each bucket is composed of two values, one for each value source defined in the aggregation.
 Any type of combinations is allowed and the order in the array is preserved
 in the composite buckets.
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -331,7 +324,6 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 ==== Order
 
@@ -344,7 +336,7 @@ It is possible to define the direction of the sort for each value source by sett
 or `desc` (descending order) directly in the value source definition.
 For example:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -360,7 +352,6 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 \... will sort the composite bucket in descending order when comparing values from the `date_histogram` source
 and in ascending order when comparing values from the `terms` source.
@@ -371,7 +362,7 @@ By default documents without a value for a given source are ignored.
 It is possible to include them in the response by setting `missing_bucket` to
 `true` (defaults to `false`):
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -386,7 +377,6 @@ GET /_search
      }
 }
 --------------------------------------------------
-// CONSOLE
 
 In the example above the source `product_name` will emit an explicit `null` value
 for documents without a value for the field `product`.
@@ -411,7 +401,7 @@ If all composite buckets should be retrieved it is preferable to use a small siz
 and then use the `after` parameter to retrieve the next results.
 For example:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -428,7 +418,6 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[s/_search/_search\?filter_path=aggregations/]
 
 \... returns:
@@ -477,7 +466,7 @@ the last composite buckets returned in a previous round.
 For the example below the last bucket can be found in `after_key` and the next
 round of result can be retrieved with:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -495,7 +484,6 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 <1> Should restrict the aggregation to buckets that sort **after** the provided values.
 
@@ -507,7 +495,7 @@ parent aggregation.
 For instance the following example computes the average value of a field
 per composite bucket:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -528,7 +516,6 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[s/_search/_search\?filter_path=aggregations/]
 
 \... returns:

+ 11 - 22
docs/reference/aggregations/bucket/datehistogram-aggregation.asciidoc

@@ -103,7 +103,7 @@ specified timezone, so that the date and time are the same at the start and end.
 ===== Calendar Interval Examples
 As an example, here is an aggregation requesting bucket intervals of a month in calendar time:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /sales/_search?size=0
 {
@@ -117,13 +117,12 @@ POST /sales/_search?size=0
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sales]
 
 If you attempt to use multiples of calendar units, the aggregation will fail because only
 singular calendar units are supported:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /sales/_search?size=0
 {
@@ -137,7 +136,6 @@ POST /sales/_search?size=0
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sales]
 // TEST[catch:bad_request]
 
@@ -199,7 +197,7 @@ Defined as 24 hours (86,400,000 milliseconds)
 If we try to recreate the "month" `calendar_interval` from earlier, we can approximate that with
 30 fixed days:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /sales/_search?size=0
 {
@@ -213,12 +211,11 @@ POST /sales/_search?size=0
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sales]
 
 But if we try to use a calendar unit that is not supported, such as weeks, we'll get an exception:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /sales/_search?size=0
 {
@@ -232,7 +229,6 @@ POST /sales/_search?size=0
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sales]
 // TEST[catch:bad_request]
 
@@ -290,7 +286,7 @@ date string using the `format` parameter specification:
 TIP: If you don't specify `format`, the first date
 <<mapping-date-format,format>> specified in the field mapping is used.
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /sales/_search?size=0
 {
@@ -305,7 +301,6 @@ POST /sales/_search?size=0
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sales]
 
 <1> Supports expressive date <<date-format-pattern,format pattern>>
@@ -353,7 +348,7 @@ such as`America/Los_Angeles`.
 
 Consider the following example:
 
-[source,js]
+[source,console]
 ---------------------------------
 PUT my_index/_doc/1?refresh
 {
@@ -377,7 +372,6 @@ GET my_index/_search?size=0
   }
 }
 ---------------------------------
-// CONSOLE
 
 If you don't specify a timezone, UTC is used. This would result in both of these
 documents being placed into the same day bucket, which starts at midnight UTC
@@ -405,7 +399,7 @@ on 1 October 2015:
 If you specify a `time_zone` of `-01:00`, midnight in that timezone is one hour
 before midnight UTC:
 
-[source,js]
+[source,console]
 ---------------------------------
 GET my_index/_search?size=0
 {
@@ -420,7 +414,6 @@ GET my_index/_search?size=0
   }
 }
 ---------------------------------
-// CONSOLE
 // TEST[continued]
 
 Now the first document falls into the bucket for 30 September 2015, while the
@@ -474,7 +467,7 @@ For example, when using an interval of `day`, each bucket runs from midnight
 to midnight.  Setting the `offset` parameter to `+6h` changes each bucket
 to run from 6am to 6am:
 
-[source,js]
+[source,console]
 -----------------------------
 PUT my_index/_doc/1?refresh
 {
@@ -499,7 +492,6 @@ GET my_index/_search?size=0
   }
 }
 -----------------------------
-// CONSOLE
 
 Instead of a single bucket starting at midnight, the above request groups the
 documents into buckets starting at 6am:
@@ -536,7 +528,7 @@ adjustments have been made.
 Setting the `keyed` flag to `true` associates a unique string key with each
 bucket and returns the ranges as a hash rather than an array:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /sales/_search?size=0
 {
@@ -552,7 +544,6 @@ POST /sales/_search?size=0
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sales]
 
 Response:
@@ -606,7 +597,7 @@ The `missing` parameter defines how to treat documents that are missing a value.
 By default, they are ignored, but it is also possible to treat them as if they
 have a value.
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /sales/_search?size=0
 {
@@ -621,7 +612,6 @@ POST /sales/_search?size=0
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sales]
 
 <1> Documents without a value in the `publish_date` field will fall into the
@@ -640,7 +630,7 @@ When you need to aggregate the results by day of the week, use a script that
 returns the day of the week:
 
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /sales/_search?size=0
 {
@@ -656,7 +646,6 @@ POST /sales/_search?size=0
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sales]
 
 Response:

+ 5 - 10
docs/reference/aggregations/bucket/daterange-aggregation.asciidoc

@@ -12,7 +12,7 @@ for each range.
 
 Example:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /sales/_search?size=0
 {
@@ -30,7 +30,6 @@ POST /sales/_search?size=0
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sales s/now-10M\/M/10-2015/]
 
 <1> < now minus 10 months, rounded down to the start of the month.
@@ -75,7 +74,7 @@ be treated. By default they will be ignored but it is also possible to treat
 them as if they had a value. This is done by adding a set of fieldname :
 value mappings to specify default values per field.
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /sales/_search?size=0
 {
@@ -100,7 +99,6 @@ POST /sales/_search?size=0
    }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sales]
 
 <1> Documents without a value in the `date` field will be added to the "Older"
@@ -267,7 +265,7 @@ The `time_zone` parameter is also applied to rounding in date math expressions.
 As an example, to round to the beginning of the day in the CET time zone, you
 can do the following:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /sales/_search?size=0
 {
@@ -286,7 +284,6 @@ POST /sales/_search?size=0
    }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sales]
 
 <1> This date will be converted to `2016-02-01T00:00:00.000+01:00`.
@@ -297,7 +294,7 @@ POST /sales/_search?size=0
 Setting the `keyed` flag to `true` will associate a unique string key with each
 bucket and return the ranges as a hash rather than an array:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /sales/_search?size=0
 {
@@ -316,7 +313,6 @@ POST /sales/_search?size=0
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sales s/now-10M\/M/10-2015/]
 
 Response:
@@ -347,7 +343,7 @@ Response:
 
 It is also possible to customize the key for each range:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /sales/_search?size=0
 {
@@ -366,7 +362,6 @@ POST /sales/_search?size=0
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sales]
 
 Response:

+ 2 - 4
docs/reference/aggregations/bucket/diversified-sampler-aggregation.asciidoc

@@ -26,7 +26,7 @@ Example:
 We might want to see which tags are strongly associated with `#elasticsearch` on StackOverflow
 forum posts but ignoring the effects of some prolific users with a tendency to misspell #Kibana as #Cabana.
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /stackoverflow/_search?size=0
 {
@@ -53,7 +53,6 @@ POST /stackoverflow/_search?size=0
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:stackoverflow]
 
 Response:
@@ -92,7 +91,7 @@ Response:
 In this scenario we might want to diversify on a combination of field values. We can use a `script` to produce a hash of the
 multiple values in a tags field to ensure we don't have a sample that consists of the same repeated combinations of tags.
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /stackoverflow/_search?size=0
 {
@@ -123,7 +122,6 @@ POST /stackoverflow/_search?size=0
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:stackoverflow]
 
 Response:

+ 1 - 2
docs/reference/aggregations/bucket/filter-aggregation.asciidoc

@@ -5,7 +5,7 @@ Defines a single bucket of all the documents in the current document set context
 
 Example:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /sales/_search?size=0
 {
@@ -19,7 +19,6 @@ POST /sales/_search?size=0
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sales]
 
 In the above example, we calculate the average price of all the products that are of type t-shirt.

+ 3 - 6
docs/reference/aggregations/bucket/filters-aggregation.asciidoc

@@ -7,7 +7,7 @@ filter.
 
 Example:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 PUT /logs/_bulk?refresh
 { "index" : { "_id" : 1 } }
@@ -32,7 +32,6 @@ GET logs/_search
   }
 }
 --------------------------------------------------
-// CONSOLE
 
 In the above example, we analyze log messages. The aggregation will build two
 collection (buckets) of log messages - one for all those containing an error,
@@ -70,7 +69,7 @@ Response:
 The filters field can also be provided as an array of filters, as in the
 following request:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET logs/_search
 {
@@ -87,7 +86,6 @@ GET logs/_search
   }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[continued]
 
 The filtered buckets are returned in the same order as provided in the
@@ -133,7 +131,7 @@ this parameter will implicitly set the `other_bucket` parameter to `true`.
 
 The following snippet shows a response where the `other` bucket is requested to be named `other_messages`.
 
-[source,js]
+[source,console]
 --------------------------------------------------
 PUT logs/_doc/4?refresh
 {
@@ -156,7 +154,6 @@ GET logs/_search
   }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[continued]
 
 The response would be something like the following:

+ 5 - 10
docs/reference/aggregations/bucket/geodistance-aggregation.asciidoc

@@ -3,7 +3,7 @@
 
 A multi-bucket aggregation that works on `geo_point` fields and conceptually works very similar to the <<search-aggregations-bucket-range-aggregation,range>> aggregation. The user can define a point of origin and a set of distance range buckets. The aggregation evaluate the distance of each document value from the origin point and determines the buckets it belongs to based on the ranges (a document belongs to a bucket if the distance between the document and the origin falls within the distance range of the bucket).
 
-[source,js]
+[source,console]
 --------------------------------------------------
 PUT /museums
 {
@@ -47,7 +47,6 @@ POST /museums/_search?size=0
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 Response:
 
@@ -90,7 +89,7 @@ The specified field must be of type `geo_point` (which can only be set explicitl
 
 By default, the distance unit is `m` (meters) but it can also accept: `mi` (miles), `in` (inches), `yd` (yards), `km` (kilometers), `cm` (centimeters), `mm` (millimeters).
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /museums/_search?size=0
 {
@@ -110,14 +109,13 @@ POST /museums/_search?size=0
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[continued]
 
 <1> The distances will be computed in kilometers
 
 There are two distance calculation modes: `arc` (the default), and `plane`. The `arc` calculation is the most accurate. The `plane` is the fastest but least accurate. Consider using `plane` when your search context is "narrow", and spans smaller geographical areas (~5km). `plane` will return higher error margins for searches across very large areas (e.g. cross continent search). The distance calculation type can be set using the `distance_type` parameter:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /museums/_search?size=0
 {
@@ -138,14 +136,13 @@ POST /museums/_search?size=0
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[continued]
 
 ==== Keyed Response
 
 Setting the `keyed` flag to `true` will associate a unique string key with each bucket and return the ranges as a hash rather than an array:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /museums/_search?size=0
 {
@@ -165,7 +162,6 @@ POST /museums/_search?size=0
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[continued]
 
 Response:
@@ -200,7 +196,7 @@ Response:
 
 It is also possible to customize the key for each range:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /museums/_search?size=0
 {
@@ -220,7 +216,6 @@ POST /museums/_search?size=0
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[continued]
 
 Response:

+ 3 - 6
docs/reference/aggregations/bucket/geohashgrid-aggregation.asciidoc

@@ -17,7 +17,7 @@ The specified field must be of type `geo_point` (which can only be set explicitl
 
 ==== Simple low-precision request
 
-[source,js]
+[source,console]
 --------------------------------------------------
 PUT /museums
 {
@@ -56,7 +56,6 @@ POST /museums/_search?size=0
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 Response:
 
@@ -90,7 +89,7 @@ Response:
 
 When requesting detailed buckets (typically for displaying a "zoomed in" map) a filter like <<query-dsl-geo-bounding-box-query,geo_bounding_box>> should be applied to narrow the subject area otherwise potentially millions of buckets will be created and returned.
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /museums/_search?size=0
 {
@@ -116,13 +115,12 @@ POST /museums/_search?size=0
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[continued]
 
 The geohashes returned by the `geohash_grid` aggregation can be also used for zooming in. To zoom into the
 first geohash `u17` returned in the previous example, it should be specified as both `top_left` and `bottom_right` corner:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /museums/_search?size=0
 {
@@ -148,7 +146,6 @@ POST /museums/_search?size=0
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[continued]
 
 [source,js]

+ 2 - 4
docs/reference/aggregations/bucket/geotilegrid-aggregation.asciidoc

@@ -30,7 +30,7 @@ fields, in which case all points will be taken into account during aggregation.
 
 ==== Simple low-precision request
 
-[source,js]
+[source,console]
 --------------------------------------------------
 PUT /museums
 {
@@ -69,7 +69,6 @@ POST /museums/_search?size=0
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 Response:
 
@@ -106,7 +105,7 @@ a filter like <<query-dsl-geo-bounding-box-query,geo_bounding_box>> should be
 applied to narrow the subject area otherwise potentially millions of buckets
 will be created and returned.
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /museums/_search?size=0
 {
@@ -132,7 +131,6 @@ POST /museums/_search?size=0
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[continued]
 
 [source,js]

+ 1 - 2
docs/reference/aggregations/bucket/global-aggregation.asciidoc

@@ -11,7 +11,7 @@ NOTE:   Global aggregators can only be placed as top level aggregators because
 
 Example:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /sales/_search?size=0
 {
@@ -29,7 +29,6 @@ POST /sales/_search?size=0
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sales]
 
 <1> The `global` aggregation has an empty body

+ 5 - 10
docs/reference/aggregations/bucket/histogram-aggregation.asciidoc

@@ -19,7 +19,7 @@ The `interval` must be a positive decimal, while the `offset` must be a decimal
 
 The following snippet "buckets" the products based on their `price` by interval of `50`:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /sales/_search?size=0
 {
@@ -33,7 +33,6 @@ POST /sales/_search?size=0
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sales]
 
 And the following may be the response:
@@ -78,7 +77,7 @@ The response above show that no documents has a price that falls within the rang
 response will fill gaps in the histogram with empty buckets. It is possible change that and request buckets with
 a higher minimum count thanks to the `min_doc_count` setting:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /sales/_search?size=0
 {
@@ -93,7 +92,6 @@ POST /sales/_search?size=0
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sales]
 
 Response:
@@ -154,7 +152,7 @@ under a range `filter` aggregation with the appropriate `from`/`to` settings.
 
 Example:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /sales/_search?size=0
 {
@@ -175,7 +173,6 @@ POST /sales/_search?size=0
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sales]
 
 ==== Order
@@ -199,7 +196,7 @@ documents.
 By default, the buckets are returned as an ordered array. It is also possible to request the response as a hash
 instead keyed by the buckets keys:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /sales/_search?size=0
 {
@@ -214,7 +211,6 @@ POST /sales/_search?size=0
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sales]
 
 Response:
@@ -259,7 +255,7 @@ The `missing` parameter defines how documents that are missing a value should be
 By default they will be ignored but it is also possible to treat them as if they
 had a value.
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /sales/_search?size=0
 {
@@ -274,7 +270,6 @@ POST /sales/_search?size=0
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sales]
 
 <1> Documents without a value in the `quantity` field will fall into the same bucket as documents that have the value `0`.

+ 4 - 8
docs/reference/aggregations/bucket/iprange-aggregation.asciidoc

@@ -5,7 +5,7 @@ Just like the dedicated <<search-aggregations-bucket-daterange-aggregation,date>
 
 Example:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /ip_addresses/_search
 {
@@ -23,7 +23,6 @@ GET /ip_addresses/_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:iprange]
 
 Response:
@@ -55,7 +54,7 @@ Response:
 
 IP ranges can also be defined as CIDR masks:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /ip_addresses/_search
 {
@@ -73,7 +72,6 @@ GET /ip_addresses/_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:iprange]
 
 Response:
@@ -109,7 +107,7 @@ Response:
 
 Setting the `keyed` flag to `true` will associate a unique string key with each bucket and return the ranges as a hash rather than an array:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /ip_addresses/_search
 {
@@ -128,7 +126,6 @@ GET /ip_addresses/_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:iprange]
 
 Response:
@@ -158,7 +155,7 @@ Response:
 
 It is also possible to customize the key for each range:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /ip_addresses/_search
 {
@@ -177,7 +174,6 @@ GET /ip_addresses/_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:iprange]
 
 Response:

+ 1 - 2
docs/reference/aggregations/bucket/missing-aggregation.asciidoc

@@ -5,7 +5,7 @@ A field data based single bucket aggregation, that creates a bucket of all docum
 
 Example:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /sales/_search?size=0
 {
@@ -16,7 +16,6 @@ POST /sales/_search?size=0
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sales]
 
 In the above example, we get the total number of products that do not have a price.

+ 2 - 4
docs/reference/aggregations/bucket/nested-aggregation.asciidoc

@@ -6,7 +6,7 @@ A special single bucket aggregation that enables aggregating nested documents.
 For example, lets say we have an index of products, and each product holds the list of resellers - each having its own
 price for the product. The mapping could look like:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 PUT /index
 {
@@ -23,13 +23,12 @@ PUT /index
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TESTSETUP
 <1> The `resellers` is an array that holds nested documents under the `product` object.
 
 The following aggregations will return the minimum price products can be purchased in:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -48,7 +47,6 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[s/GET \/_search/GET \/_search\?filter_path=aggregations/]
 // TEST[s/^/PUT index\/_doc\/0\?refresh\n{"name":"led", "resellers": [{"name": "foo", "price": 350.00}, {"name": "bar", "price": 500.00}]}\n/]
 

+ 4 - 8
docs/reference/aggregations/bucket/parent-aggregation.asciidoc

@@ -9,7 +9,7 @@ This aggregation has a single option:
 
 For example, let's say we have an index of questions and answers. The answer type has the following `join` field in the mapping:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 PUT parent_example
 {
@@ -25,7 +25,6 @@ PUT parent_example
   }
 }
 --------------------------------------------------
-// CONSOLE
 
 The `question` document contain a tag field and the `answer` documents contain an owner field. With the `parent`
 aggregation the owner buckets can be mapped to the tag buckets in a single request even though the two fields exist in
@@ -33,7 +32,7 @@ two different kinds of documents.
 
 An example of a question document:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 PUT parent_example/_doc/1
 {
@@ -49,12 +48,11 @@ PUT parent_example/_doc/1
   ]
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[continued]
 
 Examples of `answer` documents:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 PUT parent_example/_doc/2?routing=1
 {
@@ -86,12 +84,11 @@ PUT parent_example/_doc/3?routing=1&refresh
   "creation_date": "2009-05-05T13:45:37.030"
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[continued]
 
 The following request can be built that connects the two together:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST parent_example/_search?size=0
 {
@@ -120,7 +117,6 @@ POST parent_example/_search?size=0
   }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[continued]
 
 <1> The `type` points to type / mapping with the name `answer`.

+ 9 - 18
docs/reference/aggregations/bucket/range-aggregation.asciidoc

@@ -6,7 +6,7 @@ Note that this aggregation includes the `from` value and excludes the `to` value
 
 Example:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -24,7 +24,6 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sales]
 // TEST[s/GET \/_search/GET \/_search\?filter_path=aggregations/]
 
@@ -64,7 +63,7 @@ Response:
 
 Setting the `keyed` flag to `true` will associate a unique string key with each bucket and return the ranges as a hash rather than an array:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -83,7 +82,6 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sales]
 // TEST[s/GET \/_search/GET \/_search\?filter_path=aggregations/]
 
@@ -118,7 +116,7 @@ Response:
 
 It is also possible to customize the key for each range:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -137,7 +135,6 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sales]
 // TEST[s/GET \/_search/GET \/_search\?filter_path=aggregations/]
 
@@ -177,7 +174,7 @@ will be executed during aggregation execution.
 
 The following example shows how to use an `inline` script with the `painless` script language and no script parameters:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -198,11 +195,10 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 It is also possible to use stored scripts. Here is a simple stored script:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /_scripts/convert_currency
 {
@@ -212,12 +208,11 @@ POST /_scripts/convert_currency
   }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sales]
 
 And this new stored script can be used in the range aggregation like this:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -240,7 +235,6 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[s/GET \/_search/GET \/_search\?filter_path=aggregations/]
 // TEST[continued]
 <1> Id of the stored script
@@ -278,7 +272,7 @@ GET /_search
 
 Lets say the product prices are in USD but we would like to get the price ranges in EURO. We can use value script to convert the prices prior the aggregation (assuming conversion rate of 0.8)
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /sales/_search
 {
@@ -302,14 +296,13 @@ GET /sales/_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sales]
 
 ==== Sub Aggregations
 
 The following example, not only "bucket" the documents to the different buckets but also computes statistics over the prices in each price range
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -332,7 +325,6 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sales]
 // TEST[s/GET \/_search/GET \/_search\?filter_path=aggregations/]
 
@@ -391,7 +383,7 @@ Response:
 
 If a sub aggregation is also based on the same value source as the range aggregation (like the `stats` aggregation in the example above) it is possible to leave out the value source definition for it. The following will return the same response as above:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -414,5 +406,4 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 <1> We don't need to specify the `price` as we "inherit" it by default from the parent `range` aggregation

+ 5 - 10
docs/reference/aggregations/bucket/rare-terms-aggregation.asciidoc

@@ -85,7 +85,7 @@ better approximation, but higher memory usage. Cannot be smaller than `0.00001`
 
 Example:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -98,7 +98,6 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[s/_search/_search\?filter_path=aggregations/]
 
 Response:
@@ -124,7 +123,7 @@ Response:
 In this example, the only bucket that we see is the "swing" bucket, because it is the only term that appears in
 one document.  If we increase the `max_doc_count` to `2`, we'll see some more buckets:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -138,7 +137,6 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[s/_search/_search\?filter_path=aggregations/]
 
 This now shows the "jazz" term which has a `doc_count` of 2":
@@ -275,7 +273,7 @@ It is possible to filter the values for which buckets will be created. This can
 
 ===== Filtering Values with regular expressions
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -290,7 +288,6 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 In the above example, buckets will be created for all the tags that starts with `swi`, except those starting
 with `electro` (so the tag `swing` will be aggregated but not `electro_swing`). The `include` regular expression will determine what
@@ -304,7 +301,7 @@ The syntax is the same as <<regexp-syntax,regexp queries>>.
 For matching based on exact values the `include` and `exclude` parameters can simply take an array of
 strings that represent the terms as they are found in the index:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -319,7 +316,6 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 
 ==== Missing value
@@ -328,7 +324,7 @@ The `missing` parameter defines how documents that are missing a value should be
 By default they will be ignored but it is also possible to treat them as if they
 had a value.
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -342,7 +338,6 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 <1> Documents without a value in the `tags` field will fall into the same bucket as documents that have the value `N/A`.
 

+ 4 - 6
docs/reference/aggregations/bucket/reverse-nested-aggregation.asciidoc

@@ -15,7 +15,7 @@ a nested object field that falls outside the `nested` aggregation's nested struc
 For example, lets say we have an index for a ticket system with issues and comments. The comments are inlined into
 the issue documents as nested documents. The mapping could look like:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 PUT /issues
 {
@@ -33,7 +33,7 @@ PUT /issues
     }
 }
 --------------------------------------------------
-// CONSOLE
+
 <1> The `comments` is an array that holds nested documents under the `issue` object.
 
 The following aggregations will return the top commenters' username that have commented and per top commenter the top
@@ -41,17 +41,16 @@ tags of the issues the user has commented on:
 
 //////////////////////////
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /issues/_doc/0?refresh
 {"tags": ["tag_1"], "comments": [{"username": "username_1"}]}
 --------------------------------------------------
-// CONSOLE
 // TEST[continued]
 
 //////////////////////////
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /issues/_search
 {
@@ -86,7 +85,6 @@ GET /issues/_search
   }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[continued]
 // TEST[s/_search/_search\?filter_path=aggregations/]
 

+ 2 - 4
docs/reference/aggregations/bucket/sampler-aggregation.asciidoc

@@ -15,7 +15,7 @@ A query on StackOverflow data for the popular term `javascript` OR the rarer ter
 the `significant_terms` aggregation on top-scoring documents that are more likely to match 
 the most interesting parts of our query we use a sample.
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /stackoverflow/_search?size=0
 {
@@ -41,7 +41,6 @@ POST /stackoverflow/_search?size=0
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:stackoverflow]
 
 Response:
@@ -85,7 +84,7 @@ Without the `sampler` aggregation the request query considers the full "long tai
 less significant terms such as `jquery` and `angular` rather than focusing on the more insightful Kibana-related terms.
 
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /stackoverflow/_search?size=0
 {
@@ -105,7 +104,6 @@ POST /stackoverflow/_search?size=0
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:stackoverflow]
 
 Response:

+ 8 - 13
docs/reference/aggregations/bucket/significantterms-aggregation.asciidoc

@@ -64,7 +64,7 @@ set used for statistical comparisons is the index or indices from which the resu
 
 Example:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -78,12 +78,11 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[s/_search/_search\?filter_path=aggregations/]
 
 Response:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 {
     ...
@@ -125,7 +124,7 @@ A simpler way to perform analysis across multiple categories is to use a parent-
 
 Example using a parent aggregation for segmentation:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -141,7 +140,6 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[s/_search/_search\?filter_path=aggregations/]
 
 Response:
@@ -203,7 +201,7 @@ Now we have anomaly detection for each of the police forces using a single reque
 We can use other forms of top-level aggregations to segment our data, for example segmenting by geographic
 area to identify unusual hot-spots of a particular crime type:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -222,7 +220,6 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 This example uses the `geohash_grid` aggregation to create result buckets that represent geographic areas, and inside each
 bucket we can identify anomalous levels of a crime type in these tightly-focused areas e.g.
@@ -464,7 +461,7 @@ NOTE:   `shard_size` cannot be smaller than `size` (as it doesn't make much sens
 
 It is possible to only return terms that match more than a configured number of hits using the `min_doc_count` option:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -478,7 +475,7 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
+
 The above aggregation would only return tags which have been found in 10 hits or more. Default value is `3`.
 
 
@@ -507,7 +504,7 @@ The default source of statistical information for background term frequencies is
 scope can be narrowed through the use of a `background_filter` to focus in on significant terms within a narrower
 context:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -528,7 +525,6 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 The above filter would help focus in on terms that were peculiar to the city of Madrid rather than revealing
 terms like "Spanish" that are unusual in the full index's worldwide context but commonplace in the subset of documents containing the
@@ -566,7 +562,7 @@ is significantly faster. By default, `map` is only used when running an aggregat
 ordinals.
 
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -580,7 +576,6 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 <1> the possible values are `map`, `global_ordinals`
 

+ 6 - 10
docs/reference/aggregations/bucket/significanttext-aggregation.asciidoc

@@ -32,7 +32,7 @@ and the _background_set used for statistical comparisons is the index or indices
 
 Example:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET news/_search
 {
@@ -53,7 +53,6 @@ GET news/_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:news]
 
 
@@ -147,7 +146,7 @@ The uncleansed documents have thrown up some odd-looking terms that are, on the
 correlated with appearances of our search term "elasticsearch" e.g. "pozmantier".
 We can drill down into examples of these documents to see why pozmantier is connected using this query:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET news/_search
 {
@@ -167,8 +166,8 @@ GET news/_search
   }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:news]
+
 The results show a series of very similar news articles about a judging panel for a number of tech projects:
 
 [source,js]
@@ -215,7 +214,7 @@ Fortunately similar documents tend to rank similarly so as part of examining the
 aggregation can apply a filter to remove sequences of any 6 or more tokens that have already been seen. Let's try this same query now but
 with the `filter_duplicate_text` setting turned on:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET news/_search
 {
@@ -241,7 +240,6 @@ GET news/_search
   }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:news]
 
 The results from analysing our deduplicated text are obviously of higher quality to anyone familiar with the elastic stack:
@@ -418,7 +416,7 @@ The default source of statistical information for background term frequencies is
 scope can be narrowed through the use of a `background_filter` to focus in on significant terms within a narrower
 context:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET news/_search
 {
@@ -439,7 +437,6 @@ GET news/_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:news]
 
 The above filter would help focus in on terms that were peculiar to the city of Madrid rather than revealing
@@ -457,7 +454,7 @@ JSON field(s) and the indexed field being aggregated can differ.
 In these cases it is possible to list the JSON _source fields from which text
 will be analyzed using the `source_fields` parameter:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET news/_search
 {
@@ -476,7 +473,6 @@ GET news/_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:news]
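As a hedged sketch (the `custom_all`, `content`, and `title` field names are assumptions for illustration), such a request could look like:

[source,console]
--------------------------------------------------
GET news/_search
{
    "query": {
        "match": { "content": "elasticsearch" }
    },
    "aggs": {
        "my_sample": {
            "sampler": { "shard_size": 100 },
            "aggs": {
                "keywords": {
                    "significant_text": {
                        "field": "custom_all",
                        "source_fields": ["content", "title"]
                    }
                }
            }
        }
    }
}
--------------------------------------------------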
 
 

+ 22 - 42
docs/reference/aggregations/bucket/terms-aggregation.asciidoc

@@ -53,7 +53,7 @@ POST /products/_bulk?refresh
 
 Example:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -64,8 +64,8 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[s/_search/_search\?filter_path=aggregations/]
+
 <1> The `terms` aggregation should be run on a field of type `keyword` or any other data type suitable for bucket aggregations. To use it with `text` fields you will need to enable
 <<fielddata, fielddata>>.
 
@@ -130,7 +130,7 @@ combined to give a final view. Consider the following scenario:
 A request is made to obtain the top 5 terms in the field product, ordered by descending document count from an index with
 3 shards. In this case each shard is asked to give its top 5 terms.
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -144,7 +144,6 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[s/_search/_search\?filter_path=aggregations/]
 
 The terms for each of the three shards are shown below with their
@@ -260,7 +259,7 @@ could have the 4th highest document count.
 
 The second error value can be enabled by setting the `show_term_doc_count_error` parameter to true:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -275,7 +274,6 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[s/_search/_search\?filter_path=aggregations/]
 
 
@@ -338,7 +336,7 @@ but at least the top buckets will be correctly picked.
 
 Ordering the buckets by their doc `_count` in an ascending manner:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -352,11 +350,10 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 Ordering the buckets alphabetically by their terms in an ascending manner:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -370,13 +367,12 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 deprecated[6.0.0, Use `_key` instead of `_term` to order buckets by their term]
 
 Ordering the buckets by single value metrics sub-aggregation (identified by the aggregation name):
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -393,11 +389,10 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 Ordering the buckets by multi value metrics sub-aggregation (identified by the aggregation name):
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -414,7 +409,6 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 [NOTE]
 .Pipeline aggs cannot be used for sorting
@@ -444,7 +438,7 @@ METRIC              =  <the name of the metric (in case of multi-value metrics a
 PATH                =  <AGG_NAME> [ <AGG_SEPARATOR>, <AGG_NAME> ]* [ <METRIC_SEPARATOR>, <METRIC> ] ;
 --------------------------------------------------
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -466,13 +460,12 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 The above will sort the artist's countries buckets based on the average play count among the rock songs.
 
 Multiple criteria can be used to order the buckets by providing an array of order criteria such as the following:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -494,7 +487,6 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 The above will sort the artist's countries buckets based on the average play count among the rock songs and then by
 their `doc_count` in descending order.
@@ -506,7 +498,7 @@ tie-breaker in ascending alphabetical order to prevent non-deterministic orderin
 
 It is possible to only return terms that match more than a configured number of hits using the `min_doc_count` option:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -520,7 +512,6 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 The above aggregation only returns tags that have been found in 10 or more hits. The default value is `1`.
 
@@ -548,7 +539,7 @@ WARNING: When NOT sorting on `doc_count` descending, high values of `min_doc_cou
 
 Generating the terms using a script:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -564,13 +555,12 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 This will interpret the `script` parameter as an `inline` script with the default script language and no script parameters. To use a stored script use the following syntax:
 
 //////////////////////////
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /_scripts/my_script
 {
@@ -580,11 +570,10 @@ POST /_scripts/my_script
   }
 }
 --------------------------------------------------
-// CONSOLE
 
 //////////////////////////
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -602,12 +591,11 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[continued]
 
 ==== Value Script
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -624,7 +612,6 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 ==== Filtering Values
 
@@ -634,7 +621,7 @@ It is possible to filter the values for which buckets will be created. This can
 
 ===== Filtering Values with regular expressions
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -649,7 +636,6 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 In the above example, buckets will be created for all the tags that have the word `sport` in them, except those starting
 with `water_` (so the tag `water_sports` will not be aggregated). The `include` regular expression will determine what
@@ -663,7 +649,7 @@ The syntax is the same as <<regexp-syntax,regexp queries>>.
 For matching based on exact values the `include` and `exclude` parameters can simply take an array of
 strings that represent the terms as they are found in the index:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -683,7 +669,6 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 ===== Filtering Values with partitions
 
@@ -693,7 +678,7 @@ This can be achieved by grouping the field's values into a number of partitions
 only one partition in each request.
 Consider this request which is looking for accounts that have not logged any access recently:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -722,7 +707,6 @@ GET /_search
    }
 }
 --------------------------------------------------
-// CONSOLE
 
 This request finds the last logged access date for a subset of customer accounts because we
 might want to expire accounts that haven't been seen for a long while.
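A condensed sketch of such a partitioned request is shown below; the `account_id` and `access_date` field names and the partition counts are placeholders:

[source,console]
--------------------------------------------------
GET /_search
{
   "size": 0,
   "aggs": {
      "expired_sessions": {
         "terms": {
            "field": "account_id",
            "include": {
               "partition": 0,
               "num_partitions": 20
            },
            "size": 10000,
            "order": { "last_access": "asc" }
         },
         "aggs": {
            "last_access": {
               "max": { "field": "access_date" }
            }
         }
      }
   }
}
--------------------------------------------------

Subsequent requests would increment `partition` (1, 2, ... 19) until every partition, and therefore every account, has been examined.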
@@ -786,7 +770,7 @@ are expanded in one depth-first pass and only then any pruning occurs.
 In some scenarios this can be very wasteful and can hit memory constraints.
 An example problem scenario is querying a movie database for the 10 most popular actors and their 5 most common co-stars:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -808,7 +792,6 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 Even though the number of actors may be comparatively small and we want only 50 result buckets, there is a combinatorial explosion of buckets
 during calculation - a single actor can produce n² buckets where n is the number of actors. The sane option would be to first determine
@@ -818,7 +801,7 @@ mode as opposed to the `depth_first` mode.
 NOTE: The `breadth_first` is the default mode for fields with a cardinality bigger than the requested size or when the cardinality is unknown (numeric fields or scripts for instance).
 It is possible to override the default heuristic and to provide a collect mode directly in the request:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -841,7 +824,6 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 <1> the possible values are `breadth_first` and `depth_first`
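As a minimal sketch of the actors/co-stars case with the collect mode set explicitly (the `actors` field name is a placeholder):

[source,console]
--------------------------------------------------
GET /_search
{
    "aggs": {
        "actors": {
            "terms": {
                "field": "actors",
                "size": 10,
                "collect_mode": "breadth_first"
            },
            "aggs": {
                "costars": {
                    "terms": {
                        "field": "actors",
                        "size": 5
                    }
                }
            }
        }
    }
}
--------------------------------------------------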
 
@@ -870,7 +852,7 @@ so memory usage is linear to the number of values of the documents that are part
 is significantly faster. By default, `map` is only used when running an aggregation on scripts, since they don't have
 ordinals.
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -884,7 +866,6 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 <1> The possible values are `map`, `global_ordinals`
 
@@ -896,7 +877,7 @@ The `missing` parameter defines how documents that are missing a value should be
 By default they will be ignored but it is also possible to treat them as if they
 had a value.
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -910,7 +891,6 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 <1> Documents without a value in the `tags` field will fall into the same bucket as documents that have the value `N/A`.
 

+ 2 - 4
docs/reference/aggregations/matrix/stats-aggregation.asciidoc

@@ -35,7 +35,7 @@ POST /_refresh
 
 The following example demonstrates the use of matrix stats to describe the relationship between income and poverty.
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -48,7 +48,6 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[s/_search/_search\?filter_path=aggregations/]
 
 The aggregation type is `matrix_stats` and the `fields` setting defines the set of fields (as an array) for computing
@@ -119,7 +118,7 @@ The `missing` parameter defines how documents that are missing a value should be
 By default they will be ignored but it is also possible to treat them as if they had a value.
 This is done by adding a set of fieldname : value mappings to specify default values per field.
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /_search
 {
@@ -133,7 +132,6 @@ GET /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 <1> Documents without a value in the `income` field will have the default value `50000`.
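For illustration, a sketch of the full request with a per-field default could look like this (field names follow the income/poverty example above):

[source,console]
--------------------------------------------------
GET /_search
{
    "aggs": {
        "matrixstats": {
            "matrix_stats": {
                "fields": ["poverty", "income"],
                "missing": { "income": 50000 }
            }
        }
    }
}
--------------------------------------------------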
 

+ 5 - 10
docs/reference/aggregations/metrics/avg-aggregation.asciidoc

@@ -6,7 +6,7 @@ A `single-value` metrics aggregation that computes the average of numeric values
 Assuming the data consists of documents representing exams grades (between 0
 and 100) of students we can average their scores with:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /exams/_search?size=0
 {
@@ -15,7 +15,6 @@ POST /exams/_search?size=0
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:exams]
 
 The above aggregation computes the average grade over all documents. The aggregation type is `avg` and the `field` setting defines the numeric field of the documents the average will be computed on. The above will return the following:
@@ -39,7 +38,7 @@ The name of the aggregation (`avg_grade` above) also serves as the key by which
 
 Computing the average grade based on a script:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /exams/_search?size=0
 {
@@ -54,12 +53,11 @@ POST /exams/_search?size=0
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:exams]
 
 This will interpret the `script` parameter as an `inline` script with the `painless` script language and no script parameters. To use a stored script use the following syntax:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /exams/_search?size=0
 {
@@ -77,14 +75,13 @@ POST /exams/_search?size=0
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:exams,stored_example_script]
 
 ===== Value Script
 
 It turned out that the exam was way above the level of the students and a grade correction needs to be applied. We can use a value script to get the new average:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /exams/_search?size=0
 {
@@ -104,7 +101,6 @@ POST /exams/_search?size=0
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:exams]
 
 ==== Missing value
@@ -113,7 +109,7 @@ The `missing` parameter defines how documents that are missing a value should be
 By default they will be ignored but it is also possible to treat them as if they
 had a value.
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /exams/_search?size=0
 {
@@ -127,7 +123,6 @@ POST /exams/_search?size=0
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:exams]
 
 <1> Documents without a value in the `grade` field will fall into the same bucket as documents that have the value `10`.

+ 5 - 10
docs/reference/aggregations/metrics/cardinality-aggregation.asciidoc

@@ -7,7 +7,7 @@ document or generated by a script.
 
 Assume you are indexing store sales and would like to count the unique number of sold products that match a query:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /sales/_search?size=0
 {
@@ -20,7 +20,6 @@ POST /sales/_search?size=0
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sales]
 
 Response:
@@ -42,7 +41,7 @@ Response:
 
 This aggregation also supports the `precision_threshold` option:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /sales/_search?size=0
 {
@@ -56,7 +55,6 @@ POST /sales/_search?size=0
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sales]
 
 <1> The `precision_threshold` option allows you to trade memory for accuracy, and
@@ -183,7 +181,7 @@ make sure that hashes are computed at most once per unique value per segment.
 The `cardinality` metric supports scripting. However, there is a noticeable performance hit
 since hashes need to be computed on the fly.
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /sales/_search?size=0
 {
@@ -199,12 +197,11 @@ POST /sales/_search?size=0
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sales]
 
 This will interpret the `script` parameter as an `inline` script with the `painless` script language and no script parameters. To use a stored script use the following syntax:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /sales/_search?size=0
 {
@@ -223,7 +220,6 @@ POST /sales/_search?size=0
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[skip:no script]
 
 ==== Missing value
@@ -232,7 +228,7 @@ The `missing` parameter defines how documents that are missing a value should be
 By default they will be ignored but it is also possible to treat them as if they
 had a value.
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /sales/_search?size=0
 {
@@ -246,6 +242,5 @@ POST /sales/_search?size=0
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sales]
 <1> Documents without a value in the `tag` field will fall into the same bucket as documents that have the value `N/A`.

+ 6 - 12
docs/reference/aggregations/metrics/extendedstats-aggregation.asciidoc

@@ -7,7 +7,7 @@ The `extended_stats` aggregations is an extended version of the <<search-aggrega
 
 Assuming the data consists of documents representing exams grades (between 0 and 100) of students
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /exams/_search
 {
@@ -17,7 +17,6 @@ GET /exams/_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:exams]
 
 The above aggregation computes the grades statistics over all documents. The aggregation type is `extended_stats` and the `field` setting defines the numeric field of the documents the stats will be computed on. The above will return the following:
@@ -55,7 +54,7 @@ By default, the `extended_stats` metric will return an object called `std_deviat
 deviations from the mean.  This can be a useful way to visualize variance of your data.  If you want a different boundary, for example
 three standard deviations, you can set `sigma` in the request:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /exams/_search
 {
@@ -70,7 +69,6 @@ GET /exams/_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:exams]
 <1> `sigma` controls how many standard deviations +/- from the mean should be displayed
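For illustration, a sketch of the request with three standard deviations might look like:

[source,console]
--------------------------------------------------
GET /exams/_search
{
    "size": 0,
    "aggs": {
        "grades_stats": {
            "extended_stats": {
                "field": "grade",
                "sigma": 3
            }
        }
    }
}
--------------------------------------------------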
 
@@ -89,7 +87,7 @@ if your data is skewed heavily left or right, the value returned will be mislead
 
 Computing the grades stats based on a script:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /exams/_search
 {
@@ -106,12 +104,11 @@ GET /exams/_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:exams]
 
 This will interpret the `script` parameter as an `inline` script with the `painless` script language and no script parameters. To use a stored script use the following syntax:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /exams/_search
 {
@@ -130,14 +127,13 @@ GET /exams/_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:exams,stored_example_script]
 
 ===== Value Script
 
 It turned out that the exam was way above the level of the students and a grade correction needs to be applied. We can use a value script to get the new stats:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /exams/_search
 {
@@ -158,7 +154,6 @@ GET /exams/_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:exams]
 
 ==== Missing value
@@ -167,7 +162,7 @@ The `missing` parameter defines how documents that are missing a value should be
 By default they will be ignored but it is also possible to treat them as if they
 had a value.
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /exams/_search
 {
@@ -182,7 +177,6 @@ GET /exams/_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:exams]
 
 <1> Documents without a value in the `grade` field will fall into the same bucket as documents that have the value `0`.

+ 1 - 2
docs/reference/aggregations/metrics/geobounds-aggregation.asciidoc

@@ -6,7 +6,7 @@ A metric aggregation that computes the bounding box containing all geo_point val
 
 Example:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 PUT /museums
 {
@@ -48,7 +48,6 @@ POST /museums/_search?size=0
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 <1> The `geo_bounds` aggregation specifies the field to use to obtain the bounds
 <2> `wrap_longitude` is an optional parameter which specifies whether the bounding box should be allowed to overlap the international date line. The default value is `true`
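As an illustrative sketch (the `location` field name is an assumption), the search part of such a request could look like:

[source,console]
--------------------------------------------------
POST /museums/_search?size=0
{
    "aggs": {
        "viewport": {
            "geo_bounds": {
                "field": "location",
                "wrap_longitude": true
            }
        }
    }
}
--------------------------------------------------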

+ 2 - 4
docs/reference/aggregations/metrics/geocentroid-aggregation.asciidoc

@@ -5,7 +5,7 @@ A metric aggregation that computes the weighted https://en.wikipedia.org/wiki/Ce
 
 Example:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 PUT /museums
 {
@@ -43,7 +43,6 @@ POST /museums/_search?size=0
     }
 }
 --------------------------------------------------
-// CONSOLE
 
 <1> The `geo_centroid` aggregation specifies the field to use for computing the centroid. (NOTE: field must be a <<geo-point>> type)
 
@@ -72,7 +71,7 @@ The `geo_centroid` aggregation is more interesting when combined as a sub-aggreg
 
 Example:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /museums/_search?size=0
 {
@@ -88,7 +87,6 @@ POST /museums/_search?size=0
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[continued]
 
 The above example uses `geo_centroid` as a sub-aggregation to a

+ 5 - 10
docs/reference/aggregations/metrics/max-aggregation.asciidoc

@@ -12,7 +12,7 @@ whose absolute value is greater than +2^53+.
 
 Computing the max price value across all documents
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /sales/_search?size=0
 {
@@ -21,7 +21,6 @@ POST /sales/_search?size=0
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sales]
 
 Response:
@@ -48,7 +47,7 @@ response.
 The `max` aggregation can also calculate the maximum of a script. The example
 below computes the maximum price:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /sales/_search
 {
@@ -63,13 +62,12 @@ POST /sales/_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sales]
 
 This will use the <<modules-scripting-painless, Painless>> scripting language
 and no script parameters. To use a stored script use the following syntax:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /sales/_search
 {
@@ -87,7 +85,6 @@ POST /sales/_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sales,stored_example_script]
 
 ==== Value Script
@@ -97,7 +94,7 @@ would like to compute the max in EURO (and for the sake of this example, let's
 say the conversion rate is 1.2). We can use a value script to apply the
 conversion rate to every value before it is aggregated:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /sales/_search
 {
@@ -116,7 +113,6 @@ POST /sales/_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sales]
 
 ==== Missing value
@@ -125,7 +121,7 @@ The `missing` parameter defines how documents that are missing a value should
 be treated. By default they will be ignored but it is also possible to treat
 them as if they had a value.
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /sales/_search
 {
@@ -139,7 +135,6 @@ POST /sales/_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sales]
 
 <1> Documents without a value in the `grade` field will fall into the same

+ 5 - 10
docs/reference/aggregations/metrics/median-absolute-deviation-aggregation.asciidoc

@@ -24,7 +24,7 @@ In this example we have a product which has an average rating of
 3 stars. Let's look at its ratings' median absolute deviation to determine
 how much they vary
 
-[source,js]
+[source,console]
 ---------------------------------------------------------
 GET reviews/_search
 {
@@ -43,7 +43,6 @@ GET reviews/_search
   }
 }
 ---------------------------------------------------------
-// CONSOLE
 // TEST[setup:reviews]
 <1> `rating` must be a numeric field
 
@@ -84,7 +83,7 @@ cost of higher memory usage. For more about the characteristics of the TDigest
 `compression` parameter see
 <<search-aggregations-metrics-percentile-aggregation-compression>>.
 
-[source,js]
+[source,console]
 ---------------------------------------------------------
 GET reviews/_search
 {
@@ -99,7 +98,6 @@ GET reviews/_search
   }
 }
 ---------------------------------------------------------
-// CONSOLE
 // TEST[setup:reviews]
 
 The default `compression` value for this aggregation is `1000`. At this
@@ -114,7 +112,7 @@ of one to ten, we can using scripting.
 
 To provide an inline script:
 
-[source,js]
+[source,console]
 ---------------------------------------------------------
 GET reviews/_search
 {
@@ -134,12 +132,11 @@ GET reviews/_search
   }
 }
 ---------------------------------------------------------
-// CONSOLE
 // TEST[setup:reviews]
 
 To provide a stored script:
 
-[source,js]
+[source,console]
 ---------------------------------------------------------
 GET reviews/_search
 {
@@ -158,7 +155,6 @@ GET reviews/_search
   }
 }
 ---------------------------------------------------------
-// CONSOLE
 // TEST[setup:reviews,stored_example_script]
 
 ==== Missing value
@@ -170,7 +166,7 @@ as if they had a value.
 Let's be optimistic and assume some reviewers loved the product so much that
 they forgot to give it a rating. We'll assign them five stars
 
-[source,js]
+[source,console]
 ---------------------------------------------------------
 GET reviews/_search
 {
@@ -185,5 +181,4 @@ GET reviews/_search
   }
 }
 ---------------------------------------------------------
-// CONSOLE
 // TEST[setup:reviews]

+ 5 - 10
docs/reference/aggregations/metrics/min-aggregation.asciidoc

@@ -12,7 +12,7 @@ whose absolute value is greater than +2^53+.
 
 Computing the min price value across all documents:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /sales/_search?size=0
 {
@@ -21,7 +21,6 @@ POST /sales/_search?size=0
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sales]
 
 Response:
@@ -49,7 +48,7 @@ response.
 The `min` aggregation can also calculate the minimum of a script. The example
 below computes the minimum price:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /sales/_search
 {
@@ -64,13 +63,12 @@ POST /sales/_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sales]
 
 This will use the <<modules-scripting-painless, Painless>> scripting language
 and no script parameters. To use a stored script use the following syntax:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /sales/_search
 {
@@ -88,7 +86,6 @@ POST /sales/_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sales,stored_example_script]
 
 ==== Value Script
@@ -98,7 +95,7 @@ would like to compute the min in EURO (and for the sake of this example, let's
 say the conversion rate is 1.2). We can use a value script to apply the
 conversion rate to every value before it is aggregated:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /sales/_search
 {
@@ -117,7 +114,6 @@ POST /sales/_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sales]
 
 ==== Missing value
@@ -126,7 +122,7 @@ The `missing` parameter defines how documents that are missing a value should
 be treated. By default they will be ignored but it is also possible to treat
 them as if they had a value.
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /sales/_search
 {
@@ -140,7 +136,6 @@ POST /sales/_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sales]
 
 <1> Documents without a value in the `grade` field will fall into the same

+ 8 - 16
docs/reference/aggregations/metrics/percentile-aggregation.asciidoc

@@ -24,7 +24,7 @@ but it can be easily skewed by a single slow response.
 
 Let's look at a range of percentiles representing load time:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET latency/_search
 {
@@ -38,7 +38,6 @@ GET latency/_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:latency]
 <1> The field `load_time` must be a numeric field
 
@@ -76,7 +75,7 @@ Often, administrators are only interested in outliers -- the extreme percentiles
 We can specify just the percents we are interested in (requested percentiles
 must be a value between 0-100 inclusive):
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET latency/_search
 {
@@ -91,7 +90,6 @@ GET latency/_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:latency]
 <1> Use the `percents` parameter to specify particular percentiles to calculate
 
@@ -99,7 +97,7 @@ GET latency/_search
 
 By default the `keyed` flag is set to `true` which associates a unique string key with each bucket and returns the ranges as a hash rather than an array. Setting the `keyed` flag to `false` will disable this behavior:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET latency/_search
 {
@@ -114,7 +112,6 @@ GET latency/_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:latency]
 
 Response:
@@ -168,7 +165,7 @@ The percentile metric supports scripting.  For example, if our load times
 are in milliseconds but we want percentiles calculated in seconds, we could use
 a script to convert them on-the-fly:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET latency/_search
 {
@@ -188,7 +185,6 @@ GET latency/_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:latency]
 
 <1> The `field` parameter is replaced with a `script` parameter, which uses the
@@ -197,7 +193,7 @@ script to generate values which percentiles are calculated on
 
 This will interpret the `script` parameter as an `inline` script with the `painless` script language and no script parameters. To use a stored script use the following syntax:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET latency/_search
 {
@@ -216,7 +212,6 @@ GET latency/_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:latency,stored_example_script]
 
 [[search-aggregations-metrics-percentile-aggregation-approximation]]
@@ -262,7 +257,7 @@ it. It would not be the case on more skewed distributions.
 Approximate algorithms must balance memory utilization with estimation accuracy.
 This balance can be controlled using a `compression` parameter:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET latency/_search
 {
@@ -279,7 +274,6 @@ GET latency/_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:latency]
 
 <1> Compression controls memory usage and approximation error
@@ -313,7 +307,7 @@ for values up to 1 millisecond and 3.6 seconds (or better) for the maximum track
 
 The HDR Histogram can be used by specifying the `method` parameter in the request:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET latency/_search
 {
@@ -331,7 +325,6 @@ GET latency/_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:latency]
 
 <1> `hdr` object indicates that HDR Histogram should be used to calculate the percentiles and specific settings for this algorithm can be specified inside the object
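A minimal sketch of such a request (the percentile values are chosen arbitrarily for illustration):

[source,console]
--------------------------------------------------
GET latency/_search
{
    "size": 0,
    "aggs": {
        "load_time_outlier": {
            "percentiles": {
                "field": "load_time",
                "percents": [95, 99, 99.9],
                "hdr": {
                    "number_of_significant_value_digits": 3
                }
            }
        }
    }
}
--------------------------------------------------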
@@ -346,7 +339,7 @@ The `missing` parameter defines how documents that are missing a value should be
 By default they will be ignored but it is also possible to treat them as if they
 had a value.
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET latency/_search
 {
@@ -361,7 +354,6 @@ GET latency/_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:latency]
 
 <1> Documents without a value in the `grade` field will fall into the same bucket as documents that have the value `10`.

+ 10 - 12
docs/reference/aggregations/metrics/percentile-rank-aggregation.asciidoc

@@ -22,7 +22,7 @@ Assume your data consists of website load times.  You may have a service agreeme
 
 Let's look at a range of percentiles representing load time:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET latency/_search
 {
@@ -37,8 +37,8 @@ GET latency/_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:latency]
+
 <1> The field `load_time` must be a numeric field
 
 The response will look like this:
@@ -67,7 +67,7 @@ hitting the 95% load time target
 
 By default the `keyed` flag is set to `true`, which associates a unique string key with each bucket and returns the ranges as a hash rather than an array. Setting the `keyed` flag to `false` will disable this behavior:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET latency/_search
 {
@@ -83,7 +83,6 @@ GET latency/_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:latency]
 
 Response:
@@ -118,7 +117,7 @@ The percentile rank metric supports scripting.  For example, if our load times
 are in milliseconds but we want to specify values in seconds, we could use
 a script to convert them on-the-fly:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET latency/_search
 {
@@ -139,15 +138,15 @@ GET latency/_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:latency]
+
 <1> The `field` parameter is replaced with a `script` parameter, which uses the
 script to generate values which percentile ranks are calculated on
 <2> Scripting supports parameterized input just like any other script
 
 This will interpret the `script` parameter as an `inline` script with the `painless` script language and no script parameters. To use a stored script use the following syntax:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET latency/_search
 {
@@ -167,7 +166,6 @@ GET latency/_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:latency,stored_example_script]
 
 ==== HDR Histogram
@@ -183,7 +181,7 @@ microseconds) in a histogram set to 3 significant digits, it will maintain a val
 
 The HDR Histogram can be used by specifying the `method` parameter in the request:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET latency/_search
 {
@@ -201,8 +199,8 @@ GET latency/_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:latency]
+
 <1> `hdr` object indicates that HDR Histogram should be used to calculate the percentiles and specific settings for this algorithm can be specified inside the object
 <2> `number_of_significant_value_digits` specifies the resolution of values for the histogram in number of significant digits
 
@@ -215,7 +213,7 @@ The `missing` parameter defines how documents that are missing a value should be
 By default they will be ignored but it is also possible to treat them as if they
 had a value.
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET latency/_search
 {
@@ -231,6 +229,6 @@ GET latency/_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:latency]
+
 <1> Documents without a value in the `load_time` field will fall into the same bucket as documents that have the value `10`.

+ 3 - 6
docs/reference/aggregations/metrics/scripted-metric-aggregation.asciidoc

@@ -5,7 +5,7 @@ A metric aggregation that executes using scripts to provide a metric output.
 
 Example:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST ledger/_search?size=0
 {
@@ -24,7 +24,6 @@ POST ledger/_search?size=0
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:ledger]
 
 <1> `init_script` is an optional parameter; all other scripts are required.
@@ -50,7 +49,7 @@ The response for the above aggregation:
 
 The above example can also be specified using stored scripts as follows:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST ledger/_search?size=0
 {
@@ -77,7 +76,6 @@ POST ledger/_search?size=0
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:ledger,stored_scripted_metric_script]
 
 <1> script parameters for `init`, `map` and `combine` scripts must be specified
@@ -145,7 +143,7 @@ final combined profit which will be returned in the response of the aggregation.
 
 Imagine a situation where you index the following documents into an index with 2 shards:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 PUT /transactions/_bulk?refresh
 {"index":{"_id":1}}
@@ -157,7 +155,6 @@ PUT /transactions/_bulk?refresh
 {"index":{"_id":4}}
 {"type": "sale","amount": 130}
 --------------------------------------------------
-// CONSOLE
 
 Let's say that documents 1 and 3 end up on shard A and documents 2 and 4 end up on shard B. The following is a breakdown of what the aggregation result is
 at each stage of the example above.

+ 5 - 10
docs/reference/aggregations/metrics/stats-aggregation.asciidoc

@@ -7,7 +7,7 @@ The stats that are returned consist of: `min`, `max`, `sum`, `count` and `avg`.
 
 Assuming the data consists of documents representing exams grades (between 0 and 100) of students
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /exams/_search?size=0
 {
@@ -16,7 +16,6 @@ POST /exams/_search?size=0
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:exams]
 
 The above aggregation computes the grades statistics over all documents. The aggregation type is `stats` and the `field` setting defines the numeric field of the documents the stats will be computed on. The above will return the following:
@@ -46,7 +45,7 @@ The name of the aggregation (`grades_stats` above) also serves as the key by whi
 
 Computing the grades stats based on a script:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /exams/_search?size=0
 {
@@ -62,12 +61,11 @@ POST /exams/_search?size=0
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:exams]
 
 This will interpret the `script` parameter as an `inline` script with the `painless` script language and no script parameters. To use a stored script use the following syntax:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /exams/_search?size=0
 {
@@ -85,14 +83,13 @@ POST /exams/_search?size=0
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:exams,stored_example_script]
 
 ===== Value Script
 
 It turned out that the exam was way above the level of the students and a grade correction needs to be applied. We can use a value script to get the new stats:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /exams/_search?size=0
 {
@@ -112,7 +109,6 @@ POST /exams/_search?size=0
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:exams]
 
 ==== Missing value
@@ -121,7 +117,7 @@ The `missing` parameter defines how documents that are missing a value should be
 By default they will be ignored but it is also possible to treat them as if they
 had a value.
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /exams/_search?size=0
 {
@@ -135,7 +131,6 @@ POST /exams/_search?size=0
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:exams]
 
 <1> Documents without a value in the `grade` field will fall into the same bucket as documents that have the value `0`.

+ 5 - 10
docs/reference/aggregations/metrics/sum-aggregation.asciidoc

@@ -6,7 +6,7 @@ A `single-value` metrics aggregation that sums up numeric values that are extrac
 Assuming the data consists of documents representing sales records we can sum
 the sale price of all hats with:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /sales/_search?size=0
 {
@@ -22,7 +22,6 @@ POST /sales/_search?size=0
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sales]
 
 Resulting in:
@@ -46,7 +45,7 @@ The name of the aggregation (`hat_prices` above) also serves as the key by which
 
 We could also use a script to fetch the sales price:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /sales/_search?size=0
 {
@@ -68,12 +67,11 @@ POST /sales/_search?size=0
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sales]
 
 This will interpret the `script` parameter as an `inline` script with the `painless` script language and no script parameters. To use a stored script use the following syntax:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /sales/_search?size=0
 {
@@ -98,7 +96,6 @@ POST /sales/_search?size=0
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sales,stored_example_script]
 
 ===== Value Script
@@ -106,7 +103,7 @@ POST /sales/_search?size=0
 It is also possible to access the field value from the script using `_value`.
 For example, this will sum the square of the prices for all hats:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /sales/_search?size=0
 {
@@ -129,7 +126,6 @@ POST /sales/_search?size=0
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sales]
 
 ==== Missing value
@@ -139,7 +135,7 @@ be treated. By default documents missing the value will be ignored but it is
 also possible to treat them as if they had a value. For example, this treats
 all hat sales without a price as being `100`.
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /sales/_search?size=0
 {
@@ -160,5 +156,4 @@ POST /sales/_search?size=0
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sales]

+ 6 - 10
docs/reference/aggregations/metrics/tophits-aggregation.asciidoc

@@ -32,7 +32,7 @@ The top_hits aggregation returns regular search hits, because of this many per h
 In the following example we group the sales by type and, for each type, show the last sale.
 For each sale only the date and price fields are included in the source.
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /sales/_search?size=0
 {
@@ -63,7 +63,6 @@ POST /sales/_search?size=0
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sales]
 
 Possible response:
@@ -185,7 +184,7 @@ belong to. By defining a `terms` aggregator on the `domain` field we group the r
 Also a `max` aggregator is defined which is used by the `terms` aggregator's order feature to return the buckets by
 relevancy order of the most relevant document in a bucket.
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /sales/_search
 {
@@ -218,7 +217,6 @@ POST /sales/_search
   }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sales]
 
 At the moment the `max` (or `min`) aggregator is needed to make sure the buckets from the `terms` aggregator are
@@ -239,7 +237,7 @@ and includes the array field and the offset in the array field the nested hit be
 
 Let's see how it works with a real sample. Considering the following mapping:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 PUT /sales
 {
@@ -257,12 +255,12 @@ PUT /sales
     }
 }
 --------------------------------------------------
-// CONSOLE
+
 <1> `comments` is an array that holds nested documents under the `product` object.
 
 And some documents:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 PUT /sales/_doc/1?refresh
 {
@@ -274,12 +272,11 @@ PUT /sales/_doc/1?refresh
     ]
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[continued]
 
 It's now possible to execute the following `top_hits` aggregation (wrapped in a `nested` aggregation):
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /sales/_search
 {
@@ -308,7 +305,6 @@ POST /sales/_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[continued]
 // TEST[s/_search/_search\?filter_path=aggregations.by_sale.by_user.buckets/]
 

+ 3 - 6
docs/reference/aggregations/metrics/valuecount-aggregation.asciidoc

@@ -6,7 +6,7 @@ These values can be extracted either from specific fields in the documents, or b
 this aggregator will be used in conjunction with other single-value aggregations. For example, when computing the `avg`
 one might be interested in the number of values the average is computed over.
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /sales/_search?size=0
 {
@@ -15,7 +15,6 @@ POST /sales/_search?size=0
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sales]
 
 Response:
@@ -40,7 +39,7 @@ retrieved from the returned response.
 
 Counting the values generated by a script:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /sales/_search?size=0
 {
@@ -55,12 +54,11 @@ POST /sales/_search?size=0
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sales]
 
 This will interpret the `script` parameter as an `inline` script with the `painless` script language and no script parameters. To use a stored script use the following syntax:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /sales/_search?size=0
 {
@@ -78,5 +76,4 @@ POST /sales/_search?size=0
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sales,stored_example_script]

+ 4 - 8
docs/reference/aggregations/metrics/weighted-avg-aggregation.asciidoc

@@ -51,7 +51,7 @@ The `value` and `weight` objects have per-field specific configuration:
 If our documents have a `"grade"` field that holds a 0-100 numeric score, and a `"weight"` field which holds an arbitrary numeric weight,
 we can calculate the weighted average using:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /exams/_search
 {
@@ -70,7 +70,6 @@ POST /exams/_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:exams]
 
 Which yields a response like:
@@ -98,7 +97,7 @@ This single weight will be applied independently to each value extracted from th
 
 This example shows how a single document with multiple values will be averaged with a single weight:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /exams/_doc?refresh
 {
@@ -123,7 +122,6 @@ POST /exams/_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST
 
 The three values (`1`, `2`, and `3`) will be included as independent values, all with the weight of `2`:
@@ -149,7 +147,7 @@ The aggregation returns `2.0` as the result, which matches what we would expect
 Both the value and the weight can be derived from a script, instead of a field.  As a simple example, the following
 will add one to the grade and weight in the document using a script:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /exams/_search
 {
@@ -168,7 +166,6 @@ POST /exams/_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:exams]
 
 
@@ -182,7 +179,7 @@ If the `weight` field is missing, it is assumed to have a weight of `1` (like a
 
 Both of these defaults can be overridden with the `missing` parameter:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /exams/_search
 {
@@ -203,6 +200,5 @@ POST /exams/_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:exams]
 

+ 3 - 6
docs/reference/aggregations/misc.asciidoc

@@ -15,7 +15,7 @@ See <<shard-request-cache>> for more details.
 There are many occasions when aggregations are required but search hits are not.  For these cases the hits can be ignored by
 setting `size=0`. For example:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /twitter/_search
 {
@@ -29,7 +29,6 @@ GET /twitter/_search
   }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:twitter]
 
 Setting `size` to `0` avoids executing the fetch phase of the search, making the request more efficient.
@@ -42,7 +41,7 @@ at response time.
 
 Consider this example where we want to associate the color blue with our `terms` aggregation.
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /twitter/_search
 {
@@ -59,7 +58,6 @@ GET /twitter/_search
   }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:twitter]
 
 Then that piece of metadata will be returned in place for our `titles` terms aggregation
@@ -94,7 +92,7 @@ Considering the following <<search-aggregations-bucket-datehistogram-aggregation
 `tweets_over_time` which has a sub <<search-aggregations-metrics-top-hits-aggregation, `top_hits` aggregation>> named
  `top_users`:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /twitter/_search?typed_keys
 {
@@ -115,7 +113,6 @@ GET /twitter/_search?typed_keys
   }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:twitter]
 
 In the response, the aggregation names will be changed, respectively, to `date_histogram#tweets_over_time` and

+ 10 - 10
docs/reference/aggregations/pipeline.asciidoc

@@ -50,7 +50,7 @@ Paths are relative from the position of the pipeline aggregation; they are not a
 aggregation tree. For example, this derivative is embedded inside a date_histogram and refers to a "sibling"
 metric `"the_sum"`:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /_search
 {
@@ -72,7 +72,7 @@ POST /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
+
 <1> The metric is called `"the_sum"`
 <2> The `buckets_path` refers to the metric via a relative path `"the_sum"`
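For illustration, a sketch of the full structure could look like the following; the `timestamp` and `price` field names and the `calendar_interval` parameter are assumptions:

[source,console]
--------------------------------------------------
POST /_search
{
    "size": 0,
    "aggs": {
        "my_date_histo": {
            "date_histogram": {
                "field": "timestamp",
                "calendar_interval": "day"
            },
            "aggs": {
                "the_sum": {
                    "sum": { "field": "price" }
                },
                "the_deriv": {
                    "derivative": { "buckets_path": "the_sum" }
                }
            }
        }
    }
}
--------------------------------------------------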
 
@@ -80,7 +80,7 @@ POST /_search
 instead of embedded "inside" them.  For example, the `max_bucket` aggregation uses the `buckets_path` to specify
 a metric embedded inside a sibling aggregation:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /_search
 {
@@ -106,8 +106,8 @@ POST /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sales]
+
 <1> `buckets_path` instructs this max_bucket aggregation that we want the maximum value of the `sales` aggregation in the
 `sales_per_month` date histogram.
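A hedged sketch of the whole request (the `date` and `price` field names are assumptions):

[source,console]
--------------------------------------------------
POST /_search
{
    "size": 0,
    "aggs": {
        "sales_per_month": {
            "date_histogram": {
                "field": "date",
                "calendar_interval": "month"
            },
            "aggs": {
                "sales": {
                    "sum": { "field": "price" }
                }
            }
        },
        "max_monthly_sales": {
            "max_bucket": {
                "buckets_path": "sales_per_month>sales"
            }
        }
    }
}
--------------------------------------------------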
 
@@ -115,7 +115,7 @@ If a Sibling pipeline agg references a multi-bucket aggregation, such as a `term
 select specific keys from the multi-bucket.  For example, a `bucket_script` could select two specific buckets (via
 their bucket keys) to perform the calculation:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /_search
 {
@@ -152,8 +152,8 @@ POST /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sales]
+
 <1> `buckets_path` selects the hats and bags buckets (via `['hat']`/`['bag']`) to use in the script specifically,
 instead of fetching all the buckets from the `sale_type` aggregation
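A minimal sketch of this pattern, with the `bucket_script` embedded in a monthly date histogram (the `date`, `type`, and `price` field names are assumptions):

[source,console]
--------------------------------------------------
POST /sales/_search
{
    "size": 0,
    "aggs": {
        "sales_per_month": {
            "date_histogram": {
                "field": "date",
                "calendar_interval": "month"
            },
            "aggs": {
                "sale_type": {
                    "terms": { "field": "type" },
                    "aggs": {
                        "sales": {
                            "sum": { "field": "price" }
                        }
                    }
                },
                "hat_vs_bag_ratio": {
                    "bucket_script": {
                        "buckets_path": {
                            "hats": "sale_type['hat']>sales",
                            "bags": "sale_type['bag']>sales"
                        },
                        "script": "params.hats / params.bags"
                    }
                }
            }
        }
    }
}
--------------------------------------------------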
 
@@ -164,7 +164,7 @@ Instead of pathing to a metric, `buckets_path` can use a special `"_count"` path
 the pipeline aggregation to use the document count as its input.  For example, a derivative can be calculated
 on the document count of each bucket, instead of a specific metric:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /_search
 {
@@ -183,14 +183,14 @@ POST /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
+
 <1> By using `_count` instead of a metric name, we can calculate the derivative of document counts in the histogram
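For example, a sketch along these lines (the `timestamp` field name and the interval are assumptions):

[source,console]
--------------------------------------------------
POST /_search
{
    "size": 0,
    "aggs": {
        "my_date_histo": {
            "date_histogram": {
                "field": "timestamp",
                "calendar_interval": "day"
            },
            "aggs": {
                "the_deriv": {
                    "derivative": { "buckets_path": "_count" }
                }
            }
        }
    }
}
--------------------------------------------------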
 
 The `buckets_path` can also use `"_bucket_count"` and path to a multi-bucket aggregation to use the number of buckets
 returned by that aggregation in the pipeline aggregation instead of a metric. For example, a `bucket_selector` can be
 used here to filter out buckets which contain no buckets for an inner terms aggregation:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /sales/_search
 {
@@ -222,8 +222,8 @@ POST /sales/_search
   }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sales]
+
 <1> By using `_bucket_count` instead of a metric name, we can filter out `histo` buckets that contain no buckets
 for the `categories` aggregation
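A sketch of the filtering pattern described above (the `date` and `category` field names are assumptions):

[source,console]
--------------------------------------------------
POST /sales/_search
{
    "size": 0,
    "aggs": {
        "histo": {
            "date_histogram": {
                "field": "date",
                "calendar_interval": "day"
            },
            "aggs": {
                "categories": {
                    "terms": { "field": "category" }
                },
                "min_bucket_selector": {
                    "bucket_selector": {
                        "buckets_path": {
                            "count": "categories._bucket_count"
                        },
                        "script": {
                            "source": "params.count != 0"
                        }
                    }
                }
            }
        }
    }
}
--------------------------------------------------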
 

+ 2 - 2
docs/reference/aggregations/pipeline/avg-bucket-aggregation.asciidoc

@@ -33,7 +33,7 @@ An `avg_bucket` aggregation looks like this in isolation:
 
 The following snippet calculates the average of the total monthly `sales`:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /_search
 {
@@ -61,8 +61,8 @@ POST /_search
 }
 
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sales]
+
 <1> `buckets_path` instructs this avg_bucket aggregation that we want the (mean) average value of the `sales` aggregation in the
 `sales_per_month` date histogram.
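The body of the snippet is elided in this hunk; a minimal sketch of an `avg_bucket` over a `sales_per_month` date histogram looks like the following (the `date` and `price` fields are assumptions based on the usual sales test setup):

[source,console]
--------------------------------------------------
POST /_search
{
  "size": 0,
  "aggs": {
    "sales_per_month": {
      "date_histogram": {
        "field": "date",
        "calendar_interval": "month"
      },
      "aggs": {
        "sales": {
          "sum": { "field": "price" }
        }
      }
    },
    "avg_monthly_sales": {
      "avg_bucket": {
        "buckets_path": "sales_per_month>sales"
      }
    }
  }
}
--------------------------------------------------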
 

+ 1 - 2
docs/reference/aggregations/pipeline/bucket-script-aggregation.asciidoc

@@ -41,7 +41,7 @@ for more details) |Required |
 
 The following snippet calculates the ratio percentage of t-shirt sales compared to total sales each month:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /sales/_search
 {
@@ -86,7 +86,6 @@ POST /sales/_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sales]
 
 And the following may be the response:

+ 1 - 2
docs/reference/aggregations/pipeline/bucket-selector-aggregation.asciidoc

@@ -44,7 +44,7 @@ for more details) |Required |
 
 The following snippet only retains buckets where the total sales for the month is more than 200:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /sales/_search
 {
@@ -74,7 +74,6 @@ POST /sales/_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sales]
 
 And the following may be the response:

+ 3 - 4
docs/reference/aggregations/pipeline/bucket-sort-aggregation.asciidoc

@@ -47,7 +47,7 @@ is ascending.
 
 The following snippet returns the buckets corresponding to the 3 months with the highest total sales in descending order:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /sales/_search
 {
@@ -77,8 +77,8 @@ POST /sales/_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sales]
+
 <1> `sort` is set to use the values of `total_sales` in descending order
 <2> `size` is set to `3` meaning only the top 3 months in `total_sales` will be returned
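For context, a minimal sketch of a `bucket_sort` that keeps the top three months by `total_sales` (field names are assumptions based on the sales setup):

[source,console]
--------------------------------------------------
POST /sales/_search
{
  "size": 0,
  "aggs": {
    "sales_per_month": {
      "date_histogram": {
        "field": "date",
        "calendar_interval": "month"
      },
      "aggs": {
        "total_sales": {
          "sum": { "field": "price" }
        },
        "sales_bucket_sort": {
          "bucket_sort": {
            "sort": [
              { "total_sales": { "order": "desc" } }
            ],
            "size": 3
          }
        }
      }
    }
  }
}
--------------------------------------------------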
 
@@ -135,7 +135,7 @@ without specifying `sort`.
 
 The following example simply truncates the result so that only the second bucket is returned:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /sales/_search
 {
@@ -158,7 +158,6 @@ POST /sales/_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sales]
 
 Response:

+ 2 - 4
docs/reference/aggregations/pipeline/cumulative-cardinality-aggregation.asciidoc

@@ -38,7 +38,7 @@ A `cumulative_cardinality` aggregation looks like this in isolation:
 
 The following snippet calculates the cumulative cardinality of the total daily `users`:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /user_hits/_search
 {
@@ -65,7 +65,6 @@ GET /user_hits/_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:user_hits]
 
 <1> `buckets_path` instructs this aggregation to use the output of the `distinct_users` aggregation for the cumulative cardinality
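A minimal sketch of the elided request, assuming a daily `date_histogram` with a `cardinality` sub-aggregation named `distinct_users`; the `timestamp` and `user_id` fields are placeholders:

[source,console]
--------------------------------------------------
GET /user_hits/_search
{
  "size": 0,
  "aggs": {
    "users_per_day": {
      "date_histogram": {
        "field": "timestamp",
        "calendar_interval": "day"
      },
      "aggs": {
        "distinct_users": {
          "cardinality": { "field": "user_id" }
        },
        "total_new_users": {
          "cumulative_cardinality": {
            "buckets_path": "distinct_users"
          }
        }
      }
    }
  }
}
--------------------------------------------------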
@@ -138,7 +137,7 @@ are added each day, rather than the total cumulative count.
 
 This can be accomplished by adding a `derivative` aggregation to our query:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET /user_hits/_search
 {
@@ -170,7 +169,6 @@ GET /user_hits/_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:user_hits]
 
 

+ 1 - 2
docs/reference/aggregations/pipeline/cumulative-sum-aggregation.asciidoc

@@ -31,7 +31,7 @@ A `cumulative_sum` aggregation looks like this in isolation:
 
 The following snippet calculates the cumulative sum of the total monthly `sales`:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /sales/_search
 {
@@ -58,7 +58,6 @@ POST /sales/_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sales]
 
 <1> `buckets_path` instructs this cumulative sum aggregation to use the output of the `sales` aggregation for the cumulative sum

+ 3 - 6
docs/reference/aggregations/pipeline/derivative-aggregation.asciidoc

@@ -34,7 +34,7 @@ A `derivative` aggregation looks like this in isolation:
 
 The following snippet calculates the derivative of the total monthly `sales`:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /sales/_search
 {
@@ -61,7 +61,6 @@ POST /sales/_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sales]
 
 <1> `buckets_path` instructs this derivative aggregation to use the output of the `sales` aggregation for the derivative
@@ -128,7 +127,7 @@ A second order derivative can be calculated by chaining the derivative pipeline
 pipeline aggregation as in the following example which will calculate both the first and the second order derivative of the total
 monthly sales:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /sales/_search
 {
@@ -160,7 +159,6 @@ POST /sales/_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sales]
 
 <1> `buckets_path` for the second derivative points to the name of the first derivative
@@ -228,7 +226,7 @@ The derivative aggregation allows the units of the derivative values to be speci
 `normalized_value`, which reports the derivative value in the desired x-axis units. In the example below we calculate the derivative
 of the total sales per month, but ask for the derivative of the sales in units of sales per day:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /sales/_search
 {
@@ -256,7 +254,6 @@ POST /sales/_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sales]
 <1> `unit` specifies what unit to use for the x-axis of the derivative calculation
 

+ 1 - 2
docs/reference/aggregations/pipeline/extended-stats-bucket-aggregation.asciidoc

@@ -35,7 +35,7 @@ A `extended_stats_bucket` aggregation looks like this in isolation:
 
 The following snippet calculates the extended stats for monthly `sales` bucket:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /sales/_search
 {
@@ -62,7 +62,6 @@ POST /sales/_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sales]
 
 <1> `buckets_path` instructs this `extended_stats_bucket` aggregation that we want to calculate stats for the `sales` aggregation in the

+ 1 - 2
docs/reference/aggregations/pipeline/max-bucket-aggregation.asciidoc

@@ -33,7 +33,7 @@ A `max_bucket` aggregation looks like this in isolation:
 
 The following snippet calculates the maximum of the total monthly `sales`:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /sales/_search
 {
@@ -60,7 +60,6 @@ POST /sales/_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sales]
 
 <1> `buckets_path` instructs this max_bucket aggregation that we want the maximum value of the `sales` aggregation in the

+ 1 - 2
docs/reference/aggregations/pipeline/min-bucket-aggregation.asciidoc

@@ -33,7 +33,7 @@ A `min_bucket` aggregation looks like this in isolation:
 
 The following snippet calculates the minimum of the total monthly `sales`:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /sales/_search
 {
@@ -60,7 +60,6 @@ POST /sales/_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sales]
 
 <1> `buckets_path` instructs this min_bucket aggregation that we want the minimum value of the `sales` aggregation in the

+ 11 - 22
docs/reference/aggregations/pipeline/movfn-aggregation.asciidoc

@@ -38,7 +38,7 @@ A `moving_fn` aggregation looks like this in isolation:
 `moving_fn` aggregations must be embedded inside of a `histogram` or `date_histogram` aggregation.  They can be
 embedded like any other metric aggregation:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /_search
 {
@@ -65,7 +65,6 @@ POST /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sales]
 
 <1> A `date_histogram` named "my_date_histo" is constructed on the "timestamp" field, with one-day intervals
@@ -140,7 +139,7 @@ kind of calculation and emit a single `double` as the result.  Emitting `null` i
 
 For example, this script will simply return the first value from the window, or `NaN` if no values are available:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /_search
 {
@@ -167,7 +166,6 @@ POST /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sales]
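The script body is elided in this hunk. As a sketch, a custom `moving_fn` script that returns the first value in the window, or `NaN` when the window is empty, can be embedded like this (field names are placeholders):

[source,console]
--------------------------------------------------
POST /_search
{
  "size": 0,
  "aggs": {
    "my_date_histo": {
      "date_histogram": {
        "field": "date",
        "calendar_interval": "1M"
      },
      "aggs": {
        "the_sum": {
          "sum": { "field": "price" }
        },
        "the_movavg": {
          "moving_fn": {
            "buckets_path": "the_sum",
            "window": 10,
            "script": "return values.length > 0 ? values[0] : Double.NaN"
          }
        }
      }
    }
  }
}
--------------------------------------------------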
 
 [[shift-parameter]]
@@ -211,7 +209,7 @@ is only calculated over the real values. If the window is empty, or all values a
 |`values` |The window of values to find the maximum
 |===
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /_search
 {
@@ -238,7 +236,6 @@ POST /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sales]
 
 ===== min Function
@@ -254,7 +251,7 @@ is only calculated over the real values. If the window is empty, or all values a
 |`values` |The window of values to find the minimum
 |===
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /_search
 {
@@ -281,7 +278,6 @@ POST /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sales]
 
 ===== sum Function
@@ -297,7 +293,7 @@ the sum is only calculated over the real values.  If the window is empty, or all
 |`values` |The window of values to find the sum of
 |===
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /_search
 {
@@ -324,7 +320,6 @@ POST /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sales]
 
 ===== stdDev Function
@@ -342,7 +337,7 @@ This function accepts a collection of doubles and average, then returns the stan
 |`avg` |The average of the window
 |===
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /_search
 {
@@ -369,7 +364,6 @@ POST /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sales]
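Because the request body is elided here, a minimal sketch of how `MovingFunctions.stdDev` is typically wired up, pairing it with `MovingFunctions.unweightedAvg` to supply the required `avg` argument (field names are placeholders):

[source,console]
--------------------------------------------------
POST /_search
{
  "size": 0,
  "aggs": {
    "my_date_histo": {
      "date_histogram": {
        "field": "date",
        "calendar_interval": "1M"
      },
      "aggs": {
        "the_sum": {
          "sum": { "field": "price" }
        },
        "the_moving_stddev": {
          "moving_fn": {
            "buckets_path": "the_sum",
            "window": 10,
            "script": "MovingFunctions.stdDev(values, MovingFunctions.unweightedAvg(values))"
          }
        }
      }
    }
  }
}
--------------------------------------------------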
 
 The `avg` parameter must be provided to the standard deviation function because different styles of averages can be computed on the window
@@ -394,7 +388,7 @@ values.
 |`values` |The window of values to find the sum of
 |===
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /_search
 {
@@ -421,7 +415,6 @@ POST /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sales]
 
 ==== linearWeightedAvg Function
@@ -440,7 +433,7 @@ If the window is empty, or all values are `null`/`NaN`, `NaN` is returned as the
 |`values` |The window of values to find the sum of
 |===
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /_search
 {
@@ -467,7 +460,6 @@ POST /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sales]
 
 ==== ewma Function
@@ -492,7 +484,7 @@ values.
 |`alpha` |Exponential decay
 |===
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /_search
 {
@@ -519,7 +511,6 @@ POST /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sales]
 
 
@@ -550,7 +541,7 @@ values.
 |`beta` |Trend decay value
 |===
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /_search
 {
@@ -577,7 +568,6 @@ POST /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sales]
 
 In practice, the `alpha` value behaves very similarly in `holtMovAvg` as it does in `ewmaMovAvg`: small values produce more smoothing
@@ -616,7 +606,7 @@ values.
 |`multiplicative` |True if you wish to use multiplicative holt-winters, false to use additive
 |===
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /_search
 {
@@ -643,7 +633,6 @@ POST /_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sales]
 
 [WARNING]

+ 1 - 2
docs/reference/aggregations/pipeline/percentiles-bucket-aggregation.asciidoc

@@ -34,7 +34,7 @@ A `percentiles_bucket` aggregation looks like this in isolation:
 
 The following snippet calculates the percentiles for the total monthly `sales` buckets:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /sales/_search
 {
@@ -62,7 +62,6 @@ POST /sales/_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sales]
 
 <1> `buckets_path` instructs this percentiles_bucket aggregation that we want to calculate percentiles for

+ 1 - 2
docs/reference/aggregations/pipeline/serial-diff-aggregation.asciidoc

@@ -60,7 +60,7 @@ A `serial_diff` aggregation looks like this in isolation:
 
 `serial_diff` aggregations must be embedded inside of a `histogram` or `date_histogram` aggregation:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /_search
 {
@@ -88,7 +88,6 @@ POST /_search
    }
 }
 --------------------------------------------------
-// CONSOLE
 
 <1> A `date_histogram` named "my_date_histo" is constructed on the "timestamp" field, with one-day intervals
 <2> A `sum` metric is used to calculate the sum of a field.  This could be any metric (sum, min, max, etc)

+ 1 - 2
docs/reference/aggregations/pipeline/stats-bucket-aggregation.asciidoc

@@ -32,7 +32,7 @@ A `stats_bucket` aggregation looks like this in isolation:
 
 The following snippet calculates the stats for monthly `sales`:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /sales/_search
 {
@@ -59,7 +59,6 @@ POST /sales/_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sales]
 
 <1> `buckets_path` instructs this `stats_bucket` aggregation that we want to calculate stats for the `sales` aggregation in the

+ 1 - 2
docs/reference/aggregations/pipeline/sum-bucket-aggregation.asciidoc

@@ -32,7 +32,7 @@ A `sum_bucket` aggregation looks like this in isolation:
 
 The following snippet calculates the sum of all the total monthly `sales` buckets:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /sales/_search
 {
@@ -59,7 +59,6 @@ POST /sales/_search
     }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:sales]
 
 <1> `buckets_path` instructs this sum_bucket aggregation that we want the sum of the `sales` aggregation in the

+ 1 - 2
docs/reference/search/suggesters/misc.asciidoc

@@ -6,7 +6,7 @@ Sometimes you need to know the exact type of a suggester in order to parse its r
 
 Considering the following example with two suggesters `term` and `phrase`:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST _search?typed_keys
 {
@@ -25,7 +25,6 @@ POST _search?typed_keys
   }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:twitter]
 
 In the response, the suggester names will be changed respectively to `term#my-first-suggester` and

+ 6 - 10
docs/reference/search/suggesters/phrase-suggest.asciidoc

@@ -21,7 +21,7 @@ In general the `phrase` suggester requires special mapping up front to work.
 The `phrase` suggester examples on this page need the following mapping to
 work. The `reverse` analyzer is used only in the last example.
 
-[source,js]
+[source,console]
 --------------------------------------------------
 PUT test
 {
@@ -74,13 +74,12 @@ POST test/_doc?refresh=true
 POST test/_doc?refresh=true
 {"title": "nobel prize"}
 --------------------------------------------------
-// CONSOLE
 // TESTSETUP
 
 Once you have the analyzers and mappings set up you can use the `phrase`
 suggester in the same spot you'd use the `term` suggester:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST test/_search
 {
@@ -104,7 +103,6 @@ POST test/_search
   }
 }
 --------------------------------------------------
-// CONSOLE
 
 The response contains suggestions scored by the most likely spell correction first. In this case we received the expected correction "nobel prize".
 
@@ -222,7 +220,7 @@ The response contains suggestions scored by the most likely spell correction fir
     matching documents for the phrase was found, `false` otherwise.
     The default value for `prune` is `false`.
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST test/_search
 {
@@ -253,7 +251,7 @@ POST test/_search
   }
 }
 --------------------------------------------------
-// CONSOLE
+
 <1> This query will be run once for every suggestion.
 <2> The `{{suggestion}}` variable will be replaced by the text
     of each suggestion.
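The collate request itself is elided in this hunk; a minimal sketch against the `test` index set up above might look like the following (the `params` and `prune` values are illustrative assumptions):

[source,console]
--------------------------------------------------
POST test/_search
{
  "suggest": {
    "text": "noble prize",
    "simple_phrase": {
      "phrase": {
        "field": "title.trigram",
        "size": 1,
        "collate": {
          "query": {
            "source": {
              "match": { "{{field_name}}": "{{suggestion}}" }
            }
          },
          "params": { "field_name": "title" },
          "prune": true
        }
      }
    }
  }
}
--------------------------------------------------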
@@ -291,7 +289,7 @@ properties that can be configured.
     All parameters (`trigram_lambda`, `bigram_lambda`, `unigram_lambda`)
     must be supplied.
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST test/_search
 {
@@ -311,7 +309,6 @@ POST test/_search
   }
 }
 --------------------------------------------------
-// CONSOLE
 
 ===== Candidate Generators
 
@@ -412,7 +409,7 @@ of the direct generators to require a constant prefix to provide
 high-performance suggestions. The `pre_filter` and `post_filter` options
 accept ordinary analyzer names.
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST test/_search
 {
@@ -436,7 +433,6 @@ POST test/_search
   }
 }
 --------------------------------------------------
-// CONSOLE
 
 `pre_filter` and `post_filter` can also be used to inject synonyms after
 candidates are generated. For instance, for the query `captain usq` we

+ 2 - 4
docs/reference/search/uri-request.asciidoc

@@ -3,11 +3,10 @@
 
 Specifies search criteria as query parameters in the request URI.
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET twitter/_search?q=user:kimchy
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:twitter]
 
 
@@ -110,11 +109,10 @@ include::{docdir}/rest-api/common-parms.asciidoc[tag=terminate_after]
 [[search-uri-request-api-example]]
 ==== {api-examples-title}
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET twitter/_search?q=user:kimchy
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:twitter]
 
 

+ 7 - 14
docs/reference/search/validate.asciidoc

@@ -4,7 +4,7 @@
 The validate API allows a user to validate a potentially expensive query
 without executing it. We'll use the following test data to explain _validate:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 PUT twitter/_bulk?refresh
 {"index":{"_id":1}}
@@ -12,16 +12,14 @@ PUT twitter/_bulk?refresh
 {"index":{"_id":2}}
 {"user" : "kimchi", "post_date" : "2009-11-15T14:12:13", "message" : "My username is similar to @kimchy!"}
 --------------------------------------------------
-// CONSOLE
 // TESTSETUP
 
 When sent a valid query:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET twitter/_validate/query?q=user:foo
 --------------------------------------------------
-// CONSOLE
 
 The response contains `valid:true`:
 
@@ -58,7 +56,7 @@ not. Defaults to `false`.
 
 The query may also be sent in the request body:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET twitter/_validate/query
 {
@@ -76,7 +74,6 @@ GET twitter/_validate/query
   }
 }
 --------------------------------------------------
-// CONSOLE
 
 NOTE: The query being sent in the body must be nested in a `query` key, the same way
 the <<search-search,search api>> works
@@ -85,7 +82,7 @@ If the query is invalid, `valid` will be `false`. Here the query is
 invalid because Elasticsearch knows the post_date field should be a date
 due to dynamic mapping, and 'foo' does not correctly parse into a date:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET twitter/_validate/query
 {
@@ -97,7 +94,6 @@ GET twitter/_validate/query
   }
 }
 --------------------------------------------------
-// CONSOLE
 
 [source,js]
 --------------------------------------------------
@@ -108,7 +104,7 @@ GET twitter/_validate/query
 An `explain` parameter can be specified to get more detailed information
 about why a query failed:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET twitter/_validate/query?explain=true
 {
@@ -120,7 +116,6 @@ GET twitter/_validate/query?explain=true
   }
 }
 --------------------------------------------------
-// CONSOLE
 
 responds with:
 
@@ -148,7 +143,7 @@ is more detailed showing the actual Lucene query that will be executed.
 
 For More Like This:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET twitter/_validate/query?rewrite=true
 {
@@ -162,7 +157,6 @@ GET twitter/_validate/query?rewrite=true
   }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[skip:the output is randomized depending on which shard we hit]
 
 Response:
@@ -195,7 +189,7 @@ all available shards.
 
 For Fuzzy Queries:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET twitter/_validate/query?rewrite=true&all_shards=true
 {
@@ -209,7 +203,6 @@ GET twitter/_validate/query?rewrite=true&all_shards=true
   }
 }
 --------------------------------------------------
-// CONSOLE
 
 Response:
 

+ 1 - 2
docs/reference/setup/install/check-running.asciidoc

@@ -3,11 +3,10 @@
 You can test that your Elasticsearch node is running by sending an HTTP
 request to port `9200` on `localhost`:
 
-[source,js]
+[source,console]
 --------------------------------------------
 GET /
 --------------------------------------------
-// CONSOLE
 
 which should give you a response something like this:
 

+ 1 - 2
docs/reference/setup/logging-config.asciidoc

@@ -151,7 +151,7 @@ PUT /_cluster/settings
 
 For example:
 
-[source,js]
+[source,console]
 -------------------------------
 PUT /_cluster/settings
 {
@@ -160,7 +160,6 @@ PUT /_cluster/settings
   }
 }
 -------------------------------
-// CONSOLE
 
 This is most appropriate when you need to dynamically adjust a logging
 level on an actively running cluster.
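The settings body is elided in this hunk; a minimal sketch of dynamically raising a logger's level looks like this (the `org.elasticsearch.transport` logger and the `trace` level are illustrative choices):

[source,console]
-------------------------------
PUT /_cluster/settings
{
  "transient": {
    "logger.org.elasticsearch.transport": "trace"
  }
}
-------------------------------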

+ 3 - 2
docs/reference/setup/secure-settings.asciidoc

@@ -104,11 +104,12 @@ can be re-read and applied on a running node.
 The values of all secure settings, *reloadable* or not, must be identical
 across all cluster nodes. After making the desired secure settings changes,
 using the `bin/elasticsearch-keystore add` command, call:
-[source,js]
+
+[source,console]
 ----
 POST _nodes/reload_secure_settings
 ----
-// CONSOLE
+
 This API will decrypt and re-read the entire keystore, on every cluster node,
 but only the *reloadable* secure settings will be applied. Changes to other
 settings will not go into effect until the next restart. Once the call returns,

+ 1 - 2
docs/reference/setup/sysconfig/file-descriptors.asciidoc

@@ -25,8 +25,7 @@ descriptors to 65535 and do not require further configuration.
 You can check the `max_file_descriptors` configured for each node
 using the <<cluster-nodes-stats>> API, with:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET _nodes/stats/process?filter_path=**.max_file_descriptors
 --------------------------------------------------
-// CONSOLE

+ 1 - 2
docs/reference/setup/sysconfig/swap.asciidoc

@@ -67,11 +67,10 @@ After starting Elasticsearch, you can see whether this setting was applied
 successfully by checking the value of `mlockall` in the output from this
 request:
 
-[source,js]
+[source,console]
 --------------
 GET _nodes?filter_path=**.mlockall
 --------------
-// CONSOLE
 
 If you see that `mlockall` is `false`, then it means that the `mlockall`
 request has failed.  You will also see a line with more information in the logs

+ 11 - 22
docs/reference/sql/endpoints/rest.asciidoc

@@ -17,14 +17,13 @@ The SQL REST API accepts SQL in a JSON document, executes it,
 and returns the results. 
 For example:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /_sql?format=txt
 {
     "query": "SELECT * FROM library ORDER BY page_count DESC LIMIT 5"
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:library]
 
 Which returns:
@@ -107,7 +106,7 @@ Here are some examples for the human readable formats:
 
 ==== CSV
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /_sql?format=csv
 {
@@ -115,7 +114,6 @@ POST /_sql?format=csv
     "fetch_size": 5
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:library]
 
 Which returns:
@@ -133,7 +131,7 @@ James S.A. Corey,Leviathan Wakes,561,2011-06-02T00:00:00.000Z
 
 ==== JSON
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /_sql?format=json
 {
@@ -141,7 +139,6 @@ POST /_sql?format=json
     "fetch_size": 5
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:library]
 
 Which returns:
@@ -169,7 +166,7 @@ Which returns:
 
 ==== TSV
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /_sql?format=tsv
 {
@@ -177,7 +174,6 @@ POST /_sql?format=tsv
     "fetch_size": 5
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:library]
 
 Which returns:
@@ -196,7 +192,7 @@ James S.A. Corey	Leviathan Wakes	561	2011-06-02T00:00:00.000Z
 
 ==== TXT
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /_sql?format=txt
 {
@@ -204,7 +200,6 @@ POST /_sql?format=txt
     "fetch_size": 5
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:library]
 
 Which returns:
@@ -224,7 +219,7 @@ James S.A. Corey |Leviathan Wakes     |561            |2011-06-02T00:00:00.000Z
 
 ==== YAML
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /_sql?format=yaml
 {
@@ -232,7 +227,6 @@ POST /_sql?format=yaml
     "fetch_size": 5
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:library]
 
 Which returns:
@@ -279,14 +273,13 @@ cursor: "sDXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAAEWWWdrRlVfSS1TbDYtcW9lc1FJNmlYdw==:BAFmB
 Using the example above, one can continue to the next page by sending back the `cursor` field. In
 the case of the text format, the cursor is returned as the `Cursor` HTTP header.
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /_sql?format=json
 {
     "cursor": "sDXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAAEWYUpOYklQMHhRUEtld3RsNnFtYU1hQQ==:BAFmBGRhdGUBZgVsaWtlcwFzB21lc3NhZ2UBZgR1c2Vy9f///w8="
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[continued]
 // TEST[s/sDXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAAEWYUpOYklQMHhRUEtld3RsNnFtYU1hQQ==:BAFmBGRhdGUBZgVsaWtlcwFzB21lc3NhZ2UBZgR1c2Vy9f\/\/\/w8=/$body.cursor/]
 
@@ -317,14 +310,13 @@ Elasticsearch state is cleared.
 
 To clear the state earlier, you can use the clear cursor command:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /_sql/close
 {
     "cursor": "sDXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAAEWYUpOYklQMHhRUEtld3RsNnFtYU1hQQ==:BAFmBGRhdGUBZgVsaWtlcwFzB21lc3NhZ2UBZgR1c2Vy9f///w8="
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[continued]
 // TEST[s/sDXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAAEWYUpOYklQMHhRUEtld3RsNnFtYU1hQQ==:BAFmBGRhdGUBZgVsaWtlcwFzB21lc3NhZ2UBZgR1c2Vy9f\/\/\/w8=/$body.cursor/]
 
@@ -347,7 +339,7 @@ You can filter the results that SQL will run on using a standard
 {es} query DSL by specifying the query in the filter
 parameter.
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /_sql?format=txt
 {
@@ -363,7 +355,6 @@ POST /_sql?format=txt
     "fetch_size": 5
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:library]
 
 Which returns:
@@ -386,7 +377,7 @@ in a columnar fashion: one row represents all the values of a certain column fro
 
 The following formats can be returned in columnar orientation: `json`, `yaml`, `cbor` and `smile`.
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /_sql?format=json
 {
@@ -395,7 +386,6 @@ POST /_sql?format=json
     "columnar": true
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:library]
 
 Which returns:
@@ -423,7 +413,7 @@ Which returns:
 Any subsequent calls using a `cursor` still have to contain the `columnar` parameter to preserve the orientation,
 meaning the initial query will not _remember_ the columnar option.
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /_sql?format=json
 {
@@ -431,7 +421,6 @@ POST /_sql?format=json
     "columnar": true
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[continued]
 // TEST[s/sDXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAAEWWWdrRlVfSS1TbDYtcW9lc1FJNmlYdw==:BAFmBmF1dGhvcgFmBG5hbWUBZgpwYWdlX2NvdW50AWYMcmVsZWFzZV9kYXRl\+v\/\/\/w8=/$body.cursor/]
 

+ 1 - 2
docs/reference/sql/endpoints/translate.asciidoc

@@ -6,7 +6,7 @@
 The SQL Translate API accepts SQL in a JSON document and translates it
 into native {es} queries. For example:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /_sql/translate
 {
@@ -14,7 +14,6 @@ POST /_sql/translate
     "fetch_size": 10
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:library]
 
 Which returns:

+ 2 - 4
docs/reference/sql/getting-started.asciidoc

@@ -6,7 +6,7 @@
 To start using {es-sql}, create
 an index with some data to experiment with:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 PUT /library/book/_bulk?refresh
 {"index":{"_id": "Leviathan Wakes"}}
@@ -16,18 +16,16 @@ PUT /library/book/_bulk?refresh
 {"index":{"_id": "Dune"}}
 {"name": "Dune", "author": "Frank Herbert", "release_date": "1965-06-01", "page_count": 604}
 --------------------------------------------------
-// CONSOLE
 
 And now you can execute SQL using the <<sql-rest>> right away:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST /_sql?format=txt
 {
     "query": "SELECT * FROM library WHERE release_date < '2000-01-01'"
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[continued]
 
 Which should return something along the lines of:

+ 2 - 4
docs/reference/upgrade/close-ml.asciidoc

@@ -3,11 +3,10 @@
 ////////////
 Take us out of upgrade mode after running any snippets on this page.
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST _ml/set_upgrade_mode?enabled=false
 --------------------------------------------------
-// CONSOLE
 // TEARDOWN
 ////////////
 
@@ -25,11 +24,10 @@ it puts increased load on the cluster.
 prevent new jobs from opening by using the
 <<ml-set-upgrade-mode,set upgrade mode API>>:
 +
-[source,js]
+[source,console]
 --------------------------------------------------
 POST _ml/set_upgrade_mode?enabled=true
 --------------------------------------------------
-// CONSOLE
 +
 When you disable upgrade mode, the jobs resume using the last model
 state that was automatically saved. This option avoids the overhead of managing

+ 3 - 6
docs/reference/upgrade/cluster_restart.asciidoc

@@ -79,13 +79,12 @@ cluster and elect a master. At that point, you can use
 <<cat-health,`_cat/health`>> and <<cat-nodes,`_cat/nodes`>> to monitor nodes
 joining the cluster:
 
-[source,sh]
+[source,console]
 --------------------------------------------------
 GET _cat/health
 
 GET _cat/nodes
 --------------------------------------------------
-// CONSOLE
 
 The `status` column returned by `_cat/health` shows the health of each node
 in the cluster: `red`, `yellow`, or `green`.
@@ -113,7 +112,7 @@ When all nodes have joined the cluster and recovered their primary shards,
 reenable allocation by restoring `cluster.routing.allocation.enable` to its
 default:
 
-[source,js]
+[source,console]
 ------------------------------------------------------
 PUT _cluster/settings
 {
@@ -122,7 +121,6 @@ PUT _cluster/settings
   }
 }
 ------------------------------------------------------
-// CONSOLE
 
 Once allocation is reenabled, the cluster starts allocating replica shards to
 the data nodes. At this point it is safe to resume indexing and searching,
@@ -133,13 +131,12 @@ is `green`.
 You can monitor progress with the <<cat-health,`_cat/health`>> and
 <<cat-recovery,`_cat/recovery`>> APIs:
 
-[source,sh]
+[source,console]
 --------------------------------------------------
 GET _cat/health
 
 GET _cat/recovery
 --------------------------------------------------
-// CONSOLE
 --
 
 . *Restart machine learning jobs.*

+ 1 - 2
docs/reference/upgrade/disable-shard-alloc.asciidoc

@@ -7,7 +7,7 @@ restarted, this I/O is unnecessary. You can avoid racing the clock by
 <<shards-allocation, disabling allocation>> of replicas before shutting down
 the node:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 PUT _cluster/settings
 {
@@ -16,5 +16,4 @@ PUT _cluster/settings
   }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[skip:indexes don't assign]
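The settings body is elided above; as a sketch, disabling replica allocation before shutdown typically looks like this (using `primaries` so primary shards can still be allocated):

[source,console]
--------------------------------------------------
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.enable": "primaries"
  }
}
--------------------------------------------------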

+ 1 - 2
docs/reference/upgrade/open-ml.asciidoc

@@ -3,11 +3,10 @@ If you temporarily halted the tasks associated with your {ml} jobs,
 use the <<ml-set-upgrade-mode,set upgrade mode API>> to return them to active
 states:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 POST _ml/set_upgrade_mode?enabled=false
 --------------------------------------------------
-// CONSOLE
 
 If you closed all {ml} jobs before the upgrade, open the jobs and start the
 datafeeds from {kib} or with the <<ml-open-job,open jobs>> and

+ 1 - 2
docs/reference/upgrade/reindex_upgrade.asciidoc

@@ -156,7 +156,7 @@ cluster and remove nodes from the old one.
   remote index into the new {version} index:
 +
 --
-[source,js]
+[source,console]
 --------------------------------------------------
 POST _reindex
 {
@@ -178,7 +178,6 @@ POST _reindex
   }
 }
 --------------------------------------------------
-// CONSOLE
 // TEST[setup:host]
 // TEST[s/^/PUT source\n/]
 // TEST[s/oldhost:9200",/\${host}"/]
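The reindex body is elided in this hunk; a minimal sketch of a reindex-from-remote request, with `oldhost:9200`, the credentials, and the index names all placeholders, might look like:

[source,console]
--------------------------------------------------
POST _reindex
{
  "source": {
    "remote": {
      "host": "http://oldhost:9200",
      "username": "user",
      "password": "pass"
    },
    "index": "source",
    "query": {
      "match_all": {}
    }
  },
  "dest": {
    "index": "dest"
  }
}
--------------------------------------------------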

+ 4 - 8
docs/reference/upgrade/rolling_upgrade.asciidoc

@@ -73,11 +73,10 @@ particular, the placement of the realm type changed. See
 Start the newly-upgraded node and confirm that it joins the cluster by checking
 the log file or by submitting a `_cat/nodes` request:
 
-[source,sh]
+[source,console]
 --------------------------------------------------
 GET _cat/nodes
 --------------------------------------------------
-// CONSOLE
 --
 
 . *Reenable shard allocation.*
@@ -87,7 +86,7 @@ GET _cat/nodes
 Once the node has joined the cluster, remove the `cluster.routing.allocation.enable`
 setting to enable shard allocation and start using the node:
 
-[source,js]
+[source,console]
 --------------------------------------------------
 PUT _cluster/settings
 {
@@ -96,7 +95,6 @@ PUT _cluster/settings
   }
 }
 --------------------------------------------------
-// CONSOLE
 --
 
 . *Wait for the node to recover.*
@@ -106,11 +104,10 @@ PUT _cluster/settings
 Before upgrading the next node, wait for the cluster to finish shard allocation.
 You can check progress by submitting a <<cat-health,`_cat/health`>> request:
 
-[source,sh]
+[source,console]
 --------------------------------------------------
 GET _cat/health?v
 --------------------------------------------------
-// CONSOLE
 
 Wait for the `status` column to switch from `yellow` to `green`. Once the
 node is `green`, all primary and replica shards have been allocated.
@@ -137,11 +134,10 @@ Shards that were not <<synced-flush-api,sync-flushed>> might take longer to
 recover.  You can monitor the recovery status of individual shards by
 submitting a <<cat-recovery,`_cat/recovery`>> request:
 
-[source,sh]
+[source,console]
 --------------------------------------------------
 GET _cat/recovery
 --------------------------------------------------
-// CONSOLE
 
 If you stopped indexing, it is safe to resume indexing as soon as
 recovery completes.

+ 1 - 2
docs/reference/upgrade/synced-flush.asciidoc

@@ -1,9 +1,8 @@
 
-[source,sh]
+[source,console]
 --------------------------------------------------
 POST _flush/synced
 --------------------------------------------------
-// CONSOLE
 
 When you perform a synced flush, check the response to make sure there are
 no failures. Synced flush operations that fail due to pending indexing

+ 10 - 18
docs/reference/vectors/vector-functions.asciidoc

@@ -17,7 +17,7 @@ to limit the number of matched documents with a `query` parameter.
 Let's create an index with the following mapping and index a couple
 of documents into it.
 
-[source,js]
+[source,console]
 --------------------------------------------------
 PUT my_index
 {
@@ -52,13 +52,12 @@ PUT my_index/_doc/2
 }
 
 --------------------------------------------------
-// CONSOLE
 // TESTSETUP
 
 For dense_vector fields, `cosineSimilarity` calculates the measure of
 cosine similarity between a given query vector and document vectors.
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET my_index/_search
 {
@@ -83,7 +82,7 @@ GET my_index/_search
   }
 }
 --------------------------------------------------
-// CONSOLE
+
 <1> To restrict the number of documents on which script score calculation is applied, provide a filter.
 <2> The script adds 1.0 to the cosine similarity to prevent the score from being negative.
 <3> To take advantage of the script optimizations, provide a query vector as a script parameter.
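The query body is elided above; a minimal sketch of a `script_score` query using `cosineSimilarity`, assuming a `my_dense_vector` field and the 7.x-era `doc['field']` access syntax (later versions changed this), shown here with `match_all` although in practice a filter would restrict the documents scored, as callout <1> notes:

[source,console]
--------------------------------------------------
GET my_index/_search
{
  "query": {
    "script_score": {
      "query": { "match_all": {} },
      "script": {
        "source": "cosineSimilarity(params.query_vector, doc['my_dense_vector']) + 1.0",
        "params": {
          "query_vector": [4, 3.4, -0.2]
        }
      }
    }
  }
}
--------------------------------------------------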
@@ -94,7 +93,7 @@ different from the query's vector, an error will be thrown.
 Similarly, for sparse_vector fields, `cosineSimilaritySparse` calculates cosine similarity
 between a given query vector and document vectors.
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET my_index/_search
 {
@@ -119,12 +118,11 @@ GET my_index/_search
   }
 }
 --------------------------------------------------
-// CONSOLE
 
 For dense_vector fields, `dotProduct` calculates the measure of
 dot product between a given query vector and document vectors.
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET my_index/_search
 {
@@ -152,14 +150,13 @@ GET my_index/_search
   }
 }
 --------------------------------------------------
-// CONSOLE
 
 <1> Using the standard sigmoid function prevents scores from being negative.
 
 Similarly, for sparse_vector fields, `dotProductSparse` calculates dot product
 between a given query vector and document vectors.
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET my_index/_search
 {
@@ -187,13 +184,12 @@ GET my_index/_search
   }
 }
 --------------------------------------------------
-// CONSOLE
 
 For dense_vector fields, `l1norm` calculates L^1^ distance
 (Manhattan distance) between a given query vector and
 document vectors.
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET my_index/_search
 {
@@ -218,7 +214,6 @@ GET my_index/_search
   }
 }
 --------------------------------------------------
-// CONSOLE
 
 <1> Unlike `cosineSimilarity`, which represents similarity, `l1norm` and
 `l2norm` shown below represent distances or differences. This means that
@@ -232,7 +227,7 @@ we added `1` in the denominator.
 For sparse_vector fields, `l1normSparse` calculates L^1^ distance
 between a given query vector and document vectors.
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET my_index/_search
 {
@@ -257,13 +252,12 @@ GET my_index/_search
   }
 }
 --------------------------------------------------
-// CONSOLE
 
 For dense_vector fields, `l2norm` calculates L^2^ distance
 (Euclidean distance) between a given query vector and
 document vectors.
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET my_index/_search
 {
@@ -288,12 +282,11 @@ GET my_index/_search
   }
 }
 --------------------------------------------------
-// CONSOLE
 
 Similarly, for sparse_vector fields, `l2normSparse` calculates L^2^ distance
 between a given query vector and document vectors.
 
-[source,js]
+[source,console]
 --------------------------------------------------
 GET my_index/_search
 {
@@ -318,7 +311,6 @@ GET my_index/_search
   }
 }
 --------------------------------------------------
-// CONSOLE
 
 NOTE: If a document doesn't have a value for a vector field on which
 a vector function is executed, an error will be thrown.