Docs: Replace [source,json] with [source,js]

The syntax highlighter only supports [source,js].

Also adds a check to the REST test generator that runs during
the build; it fails the build if it sees `[source,json]`.
Nik Everett 9 years ago
commit 72eb621bce

+ 4 - 0
buildSrc/src/main/groovy/org/elasticsearch/gradle/doc/RestTestsFromSnippetsTask.groovy

@@ -87,6 +87,10 @@ public class RestTestsFromSnippetsTask extends SnippetsTask {
          * calls buildTest to actually build the test.
          */
         void handleSnippet(Snippet snippet) {
+            if (snippet.language == 'json') {
+                throw new InvalidUserDataException(
+                        "$snippet: Use `js` instead of `json`.")
+            }
             if (snippet.testSetup) {
                 setup(snippet)
                 return

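For reference, a minimal standalone sketch (in Groovy) of how the new guard behaves. This is not part of the commit: the `Snippet` stub, file path, and line number are hypothetical, and a plain `IllegalArgumentException` stands in for Gradle's `InvalidUserDataException`.

// Minimal sketch, not part of this commit: the language guard above in isolation.
// In the real task the exception is Gradle's InvalidUserDataException and Snippet
// comes from SnippetsTask; both are stubbed here purely for illustration.
class Snippet {
    String language
    String path = 'docs/plugins/analysis-icu.asciidoc'   // hypothetical location
    int line = 51                                        // hypothetical line

    String toString() { "${path}[${line}]" }
}

void handleSnippet(Snippet snippet) {
    if (snippet.language == 'json') {
        // Same condition and message as the check added in the hunk above.
        throw new IllegalArgumentException("$snippet: Use `js` instead of `json`.")
    }
    // ... the real task goes on to turn the snippet into a REST test
}

handleSnippet(new Snippet(language: 'js'))    // accepted
handleSnippet(new Snippet(language: 'json'))  // throws, which is what fails the build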
+ 9 - 9
docs/plugins/analysis-icu.asciidoc

@@ -48,7 +48,7 @@ convert `nfc` to `nfd` or `nfkc` to `nfkd` respectively:
 Here are two examples, the default usage and a customised character filter:


-[source,json]
+[source,js]
 --------------------------------------------------
 PUT icu_sample
 {
@@ -96,7 +96,7 @@ but adds better support for some Asian languages by using a dictionary-based
 approach to identify words in Thai, Lao, Chinese, Japanese, and Korean, and
 using custom rules to break Myanmar and Khmer text into syllables.

-[source,json]
+[source,js]
 --------------------------------------------------
 PUT icu_sample
 {
@@ -137,7 +137,7 @@ As a demonstration of how the rule files can be used, save the following user fi

 Then create an analyzer to use this rule file as follows:

-[source,json]
+[source,js]
 --------------------------------------------------
 PUT icu_sample
 {
@@ -167,7 +167,7 @@ POST icu_sample/_analyze?analyzer=my_analyzer&text=Elasticsearch. Wow!

 The above `analyze` request returns the following:

-[source,json]
+[source,js]
 --------------------------------------------------
 # Result
 {
@@ -198,7 +198,7 @@ You should probably prefer the <<analysis-icu-normalization-charfilter,Normaliza

 Here are two examples, the default usage and a customised token filter:

-[source,json]
+[source,js]
 --------------------------------------------------
 PUT icu_sample
 {
@@ -244,7 +244,7 @@ Case folding of Unicode characters based on `UTR#30`, like the
 on steroids. It registers itself as the `icu_folding` token filter and is
 available to all indices:

-[source,json]
+[source,js]
 --------------------------------------------------
 PUT icu_sample
 {
@@ -278,7 +278,7 @@ to note that both upper and lowercase forms should be specified, and that
 these filtered character are not lowercased which is why we add the
 `lowercase` filter as well:

-[source,json]
+[source,js]
 --------------------------------------------------
 PUT icu_sample
 {
@@ -319,7 +319,7 @@ which is a best-effort attempt at language-neutral sorting.
 Below is an example of how to set up a field for sorting German names in
 ``phonebook'' order:

-[source,json]
+[source,js]
 --------------------------------------------------
 PUT /my_index
 {
@@ -452,7 +452,7 @@ rulesets are not yet supported.

 For example:

-[source,json]
+[source,js]
 --------------------------------------------------
 PUT icu_sample
 {

+ 8 - 8
docs/plugins/analysis-kuromoji.asciidoc

@@ -146,7 +146,7 @@ If both parameters are used, the largest number of both is applied.

 Then create an analyzer as follows:

-[source,json]
+[source,js]
 --------------------------------------------------
 PUT kuromoji_sample
 {
@@ -178,7 +178,7 @@ POST kuromoji_sample/_analyze?analyzer=my_analyzer&text=東京スカイツリー

 The above `analyze` request returns the following:

-[source,json]
+[source,js]
 --------------------------------------------------
 # Result
 {
@@ -204,7 +204,7 @@ The above `analyze` request returns the following:
 The `kuromoji_baseform` token filter replaces terms with their
 BaseFormAttribute. This acts as a lemmatizer for verbs and adjectives.

-[source,json]
+[source,js]
 --------------------------------------------------
 PUT kuromoji_sample
 {
@@ -253,7 +253,7 @@ part-of-speech tags. It accepts the following setting:
     An array of part-of-speech tags that should be removed. It defaults to the
     `stoptags.txt` file embedded in the `lucene-analyzer-kuromoji.jar`.

-[source,json]
+[source,js]
 --------------------------------------------------
 PUT kuromoji_sample
 {
@@ -322,7 +322,7 @@ to `true`. The default when defining a custom `kuromoji_readingform`, however,
 is `false`.  The only reason to use the custom form is if you need the
 katakana reading form:

-[source,json]
+[source,js]
 --------------------------------------------------
 PUT kuromoji_sample
 {
@@ -379,7 +379,7 @@ This token filter accepts the following setting:
     is `4`).


-[source,json]
+[source,js]
 --------------------------------------------------
 PUT kuromoji_sample
 {
@@ -425,7 +425,7 @@ the predefined `_japanese_` stopwords list.  If you want to use a different
 predefined list, then use the
 {ref}/analysis-stop-tokenfilter.html[`stop` token filter] instead.

-[source,json]
+[source,js]
 --------------------------------------------------
 PUT kuromoji_sample
 {
@@ -480,7 +480,7 @@ The above request returns:
 The `kuromoji_number` token filter normalizes Japanese numbers (kansūji)
 to regular Arabic decimal numbers in half-width characters.

-[source,json]
+[source,js]
 --------------------------------------------------
 PUT kuromoji_sample
 {

+ 1 - 1
docs/plugins/analysis-phonetic.asciidoc

@@ -50,7 +50,7 @@ The `phonetic` token filter takes the following settings:
     token. Accepts `true` (default) and `false`.  Not supported by
     `beidermorse` encoding.

-[source,json]
+[source,js]
 --------------------------------------------------
 PUT phonetic_sample
 {

+ 3 - 3
docs/plugins/lang-javascript.asciidoc

@@ -51,7 +51,7 @@ See <<lang-javascript-file>> for a safer option.
 If you have enabled {ref}/modules-scripting-security.html#enable-dynamic-scripting[inline scripts],
 you can use JavaScript as follows:

-[source,json]
+[source,js]
 ----
 DELETE test

@@ -94,7 +94,7 @@ See <<lang-javascript-file>> for a safer option.
 If you have enabled {ref}/modules-scripting-security.html#enable-dynamic-scripting[stored scripts],
 you can use JavaScript as follows:

-[source,json]
+[source,js]
 ----
 DELETE test

@@ -155,7 +155,7 @@ doc["num"].value * factor

 then use the script as follows:

-[source,json]
+[source,js]
 ----
 DELETE test


+ 3 - 3
docs/plugins/lang-python.asciidoc

@@ -50,7 +50,7 @@ See <<lang-python-file>> for a safer option.
 If you have enabled {ref}/modules-scripting-security.html#enable-dynamic-scripting[inline scripts],
 you can use Python as follows:

-[source,json]
+[source,js]
 ----
 DELETE test

@@ -93,7 +93,7 @@ See <<lang-python-file>> for a safer option.
 If you have enabled {ref}/modules-scripting-security.html#enable-dynamic-scripting[stored scripts],
 you can use Python as follows:

-[source,json]
+[source,js]
 ----
 DELETE test

@@ -154,7 +154,7 @@ doc["num"].value * factor

 then use the script as follows:

-[source,json]
+[source,js]
 ----
 DELETE test


+ 1 - 1
docs/plugins/repository-azure.asciidoc

@@ -129,7 +129,7 @@ The Azure repository supports following settings:

 Some examples, using scripts:

-[source,json]
+[source,js]
 ----
 # The simpliest one
 PUT _snapshot/my_backup1

+ 1 - 1
docs/plugins/repository-s3.asciidoc

@@ -137,7 +137,7 @@ use `S3SignerType`, which is Signature Version 2.

 The S3 repository is using S3 to store snapshots. The S3 repository can be created using the following command:

-[source,json]
+[source,js]
 ----
 PUT _snapshot/my_s3_repository
 {

+ 1 - 1
docs/plugins/store-smb.asciidoc

@@ -68,7 +68,7 @@ Note that setting will be applied for newly created indices.

 It can also be set on a per-index basis at index creation time:

-[source,json]
+[source,js]
 ----
 PUT my_index
 {

+ 2 - 2
docs/reference/index-modules/allocation/prioritization.asciidoc

@@ -13,7 +13,7 @@ This means that, by default, newer indices will be recovered before older indice
 Use the per-index dynamically updateable `index.priority` setting to customise
 the index prioritization order.  For instance:

-[source,json]
+[source,js]
 ------------------------------
 PUT index_1

@@ -45,7 +45,7 @@ In the above example:
 This setting accepts an integer, and can be updated on a live index with the
 <<indices-update-settings,update index settings API>>:

-[source,json]
+[source,js]
 ------------------------------
 PUT index_4/_settings
 {

+ 2 - 2
docs/reference/migration/migrate_5_0/mapping.asciidoc

@@ -20,7 +20,7 @@ values.  For backwards compatibility purposes, during the 5.x series:

 String mappings now have the following default mappings:

-[source,json]
+[source,js]
 ---------------
 {
   "type": "text",
@@ -135,7 +135,7 @@ will reject this option.
 Core types no longer support the object notation, which was used to provide
 per document boosts as follows:

-[source,json]
+[source,js]
 ---------------
 {
   "value": "field_value",

+ 3 - 3
docs/reference/modules/scripting/security.asciidoc

@@ -165,7 +165,7 @@ https://github.com/elastic/elasticsearch/blob/{branch}/core/src/main/java/org/el
 In a script, attempting to load a class that does not appear in the whitelist
 _may_ result in a `ClassNotFoundException`, for instance this script:

-[source,json]
+[source,js]
 ------------------------------
 GET _search
 {
@@ -179,7 +179,7 @@ GET _search

 will return the following exception:

-[source,json]
+[source,js]
 ------------------------------
 {
   "reason": {
@@ -207,7 +207,7 @@ use(groovy.time.TimeCategory); new Date(123456789).format('HH')

 Returns the following exception:

-[source,json]
+[source,js]
 ------------------------------
 {
   "reason": {