
fix typos of docs/plugins (#113348) (#113404)

Co-authored-by: YeonghyeonKo <46114393+YeonghyeonKO@users.noreply.github.com>
Liam Thompson 1 year ago
commit 2fac37dd68

+ 2 - 2
docs/plugins/analysis-icu.asciidoc

@@ -380,7 +380,7 @@ GET /my-index-000001/_search <3>

 --------------------------

-<1> The `name` field uses the `standard` analyzer, and so support full text queries.
+<1> The `name` field uses the `standard` analyzer, and so supports full text queries.
 <2> The `name.sort` field is an `icu_collation_keyword` field that will preserve the name as
     a single token doc values, and applies the German ``phonebook'' order.
 <3> An example query which searches the `name` field and sorts on the `name.sort` field.
@@ -467,7 +467,7 @@ differences.
 `case_first`::

 Possible values: `lower` or `upper`. Useful to control which case is sorted
-first when case is not ignored for strength `tertiary`. The default depends on
+first when the case is not ignored for strength `tertiary`. The default depends on
 the collation.

 `numeric`::
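For orientation outside the diff: the callouts and the `case_first` option above correspond to a mapping along the following lines. This is a minimal sketch, not taken from the commit; index and field names are illustrative, the German phonebook settings mirror the surrounding docs, and `case_first` is included purely to show where the option sits (the `analysis-icu` plugin must be installed).

[source,console]
----
PUT /my-index-000001
{
  "mappings": {
    "properties": {
      "name": {
        "type": "text",
        "fields": {
          "sort": {
            "type": "icu_collation_keyword",
            "index": false,
            "language": "de",
            "country": "DE",
            "variant": "@collation=phonebook",
            "case_first": "upper"
          }
        }
      }
    }
  }
}
----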

+ 2 - 2
docs/plugins/analysis-kuromoji.asciidoc

@@ -86,7 +86,7 @@ The `kuromoji_iteration_mark` normalizes Japanese horizontal iteration marks

 `normalize_kanji`::

-    Indicates whether kanji iteration marks should be normalize. Defaults to `true`.
+    Indicates whether kanji iteration marks should be normalized. Defaults to `true`.

 `normalize_kana`::

@@ -189,7 +189,7 @@ PUT kuromoji_sample
 +
 --
 Additional expert user parameters `nbest_cost` and `nbest_examples` can be used
-to include additional tokens that most likely according to the statistical model.
+to include additional tokens that are most likely according to the statistical model.
 If both parameters are used, the largest number of both is applied.
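As a sketch of how the two nbest parameters combine (the values here are hypothetical; the `nbest_examples` string follows the `/text-token/` packing format used elsewhere on this docs page, and the `analysis-kuromoji` plugin must be installed):

[source,console]
----
PUT kuromoji_nbest_sample
{
  "settings": {
    "analysis": {
      "tokenizer": {
        "nbest_tokenizer": {
          "type": "kuromoji_tokenizer",
          "nbest_cost": "1000",
          "nbest_examples": "/箱根山-箱根/成田空港-成田/"
        }
      },
      "analyzer": {
        "my_analyzer": {
          "type": "custom",
          "tokenizer": "nbest_tokenizer"
        }
      }
    }
  }
}
----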

 `nbest_cost`::

+ 1 - 1
docs/plugins/analysis-nori.asciidoc

@@ -447,7 +447,7 @@ Which responds with:
 The `nori_number` token filter normalizes Korean numbers
 to regular Arabic decimal numbers in half-width characters.

-Korean numbers are often written using a combination of Hangul and Arabic numbers with various kinds punctuation.
+Korean numbers are often written using a combination of Hangul and Arabic numbers with various kinds of punctuation.
 For example, 3.2천 means 3200.
 This filter does this kind of normalization and allows a search for 3200 to match 3.2천 in text,
 but can also be used to make range facets based on the normalized numbers and so on.
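A minimal sketch of that behaviour via the `_analyze` API (assumes the `analysis-nori` plugin is installed; note that, as the full docs explain, the tokenizer's `discard_punctuation` must be disabled so the decimal point in 3.2천 survives). The single token produced should be 3200:

[source,console]
----
GET _analyze
{
  "tokenizer": {
    "type": "nori_tokenizer",
    "discard_punctuation": "false"
  },
  "filter": ["nori_number"],
  "text": "3.2천"
}
----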

+ 25 - 25
docs/plugins/development/creating-stable-plugins.asciidoc

@@ -1,8 +1,8 @@
 [[creating-stable-plugins]]
 === Creating text analysis plugins with the stable plugin API

-Text analysis plugins provide {es} with custom {ref}/analysis.html[Lucene 
-analyzers, token filters, character filters, and tokenizers]. 
+Text analysis plugins provide {es} with custom {ref}/analysis.html[Lucene
+analyzers, token filters, character filters, and tokenizers].

 [discrete]
 ==== The stable plugin API
@@ -10,7 +10,7 @@ analyzers, token filters, character filters, and tokenizers].
 Text analysis plugins can be developed against the stable plugin API. This API
 consists of the following dependencies:

-* `plugin-api` - an API used by plugin developers to implement custom {es} 
+* `plugin-api` - an API used by plugin developers to implement custom {es}
 plugins.
 * `plugin-analysis-api` - an API used by plugin developers to implement analysis
 plugins and integrate them into {es}.
@@ -18,7 +18,7 @@ plugins and integrate them into {es}.
 core Lucene analysis interfaces like `Tokenizer`, `Analyzer`, and `TokenStream`.

 For new versions of {es} within the same major version, plugins built against
-this API does not need to be recompiled. Future versions of the API will be
+this API do not need to be recompiled. Future versions of the API will be
 backwards compatible and plugins are binary compatible with future versions of
 {es}. In other words, once you have a working artifact, you can re-use it when
 you upgrade {es} to a new bugfix or minor version.
@@ -48,9 +48,9 @@ require code changes.

 Stable plugins are ZIP files composed of JAR files and two metadata files:

-* `stable-plugin-descriptor.properties` - a Java properties file that describes 
+* `stable-plugin-descriptor.properties` - a Java properties file that describes
 the plugin. Refer to <<plugin-descriptor-file-{plugin-type}>>.
-* `named_components.json` - a JSON file mapping interfaces to key-value pairs 
+* `named_components.json` - a JSON file mapping interfaces to key-value pairs
 of component names and implementation classes.
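For reference, a `named_components.json` for an analysis plugin looks roughly like the following; the interface, component, and class names here are illustrative placeholders, not part of this commit:

[source,json]
----
{
  "org.elasticsearch.plugin.analysis.TokenFilterFactory": {
    "example_token_filter": "org.example.ExampleTokenFilterFactory"
  },
  "org.elasticsearch.plugin.analysis.CharFilterFactory": {
    "example_char_filter": "org.example.ExampleCharFilterFactory"
  }
}
----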

 Note that only JAR files at the root of the plugin are added to the classpath
@@ -65,7 +65,7 @@ you use this plugin. However, you don't need Gradle to create plugins.

 The {es} Github repository contains
 {es-repo}tree/main/plugins/examples/stable-analysis[an example analysis plugin].
-The example `build.gradle` build script provides a good starting point for 
+The example `build.gradle` build script provides a good starting point for
 developing your own plugin.

 [discrete]
@@ -77,29 +77,29 @@ Plugins are written in Java, so you need to install a Java Development Kit
 [discrete]
 ===== Step by step

-. Create a directory for your project. 
+. Create a directory for your project.
 . Copy the example `build.gradle` build script to your project directory.  Note
 that this build script uses the `elasticsearch.stable-esplugin` gradle plugin to
 build your plugin.
 . Edit the `build.gradle` build script:
-** Add a definition for the `pluginApiVersion` and matching `luceneVersion` 
-variables to the top of the file. You can find these versions in the 
-`build-tools-internal/version.properties` file in the {es-repo}[Elasticsearch 
+** Add a definition for the `pluginApiVersion` and matching `luceneVersion`
+variables to the top of the file. You can find these versions in the
+`build-tools-internal/version.properties` file in the {es-repo}[Elasticsearch
 Github repository].
-** Edit the `name` and `description` in the `esplugin` section of the build 
-script. This will create the plugin descriptor file. If you're not using the 
-`elasticsearch.stable-esplugin` gradle plugin, refer to 
+** Edit the `name` and `description` in the `esplugin` section of the build
+script. This will create the plugin descriptor file. If you're not using the
+`elasticsearch.stable-esplugin` gradle plugin, refer to
 <<plugin-descriptor-file-{plugin-type}>> to create the file manually.
 ** Add module information.
-** Ensure you have declared the following compile-time dependencies. These 
-dependencies are compile-time only because {es} will provide these libraries at 
+** Ensure you have declared the following compile-time dependencies. These
+dependencies are compile-time only because {es} will provide these libraries at
 runtime.
 *** `org.elasticsearch.plugin:elasticsearch-plugin-api`
 *** `org.elasticsearch.plugin:elasticsearch-plugin-analysis-api`
 *** `org.apache.lucene:lucene-analysis-common`
-** For unit testing, ensure these dependencies have also been added to the 
+** For unit testing, ensure these dependencies have also been added to the
 `build.gradle` script as `testImplementation` dependencies.
-. Implement an interface from the analysis plugin API, annotating it with 
+. Implement an interface from the analysis plugin API, annotating it with
 `NamedComponent`. Refer to <<example-text-analysis-plugin>> for an example.
 . You should now be able to assemble a plugin ZIP file by running:
 +
@@ -107,22 +107,22 @@ runtime.
 ----
 gradle bundlePlugin
 ----
-The resulting plugin ZIP file is written to the  `build/distributions` 
+The resulting plugin ZIP file is written to the  `build/distributions`
 directory.

 [discrete]
 ===== YAML REST tests

-The Gradle `elasticsearch.yaml-rest-test` plugin enables testing of your 
-plugin using the {es-repo}blob/main/rest-api-spec/src/yamlRestTest/resources/rest-api-spec/test/README.asciidoc[{es} yamlRestTest framework]. 
+The Gradle `elasticsearch.yaml-rest-test` plugin enables testing of your
+plugin using the {es-repo}blob/main/rest-api-spec/src/yamlRestTest/resources/rest-api-spec/test/README.asciidoc[{es} yamlRestTest framework].
 These tests use a YAML-formatted domain language to issue REST requests against
-an internal {es} cluster that has your plugin installed, and to check the 
-results of those requests. The structure of a YAML REST test directory is as 
+an internal {es} cluster that has your plugin installed, and to check the
+results of those requests. The structure of a YAML REST test directory is as
 follows:

-* A test suite class, defined under `src/yamlRestTest/java`. This class should 
+* A test suite class, defined under `src/yamlRestTest/java`. This class should
 extend `ESClientYamlSuiteTestCase`.
-* The YAML tests themselves should be defined under 
+* The YAML tests themselves should be defined under
 `src/yamlRestTest/resources/test/`.
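To make the directory layout concrete, a test under `src/yamlRestTest/resources/test/` might look like this sketch (the filter name is hypothetical; the linked yamlRestTest README is the authoritative reference for the syntax):

[source,yaml]
----
"Plugin token filter is installed and usable":
  - do:
      indices.analyze:
        body:
          tokenizer: standard
          filter: [example_token_filter]
          text: "hello"
  - length: { tokens: 1 }
----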

 [[plugin-descriptor-file-stable]]

+ 1 - 1
docs/plugins/discovery-azure-classic.asciidoc

@@ -148,7 +148,7 @@ Before starting, you need to have:
 --

 You should follow http://azure.microsoft.com/en-us/documentation/articles/linux-use-ssh-key/[this guide] to learn
-how to create or use existing SSH keys. If you have already did it, you can skip the following.
+how to create or use existing SSH keys. If you have already done it, you can skip the following.

 Here is a description on how to generate SSH keys using `openssl`:


+ 1 - 1
docs/plugins/discovery-gce.asciidoc

@@ -478,7 +478,7 @@ discovery:
       seed_providers: gce
 --------------------------------------------------

-Replaces `project_id` and `zone` with your settings.
+Replace `project_id` and `zone` with your settings.
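Filled in with placeholder values, the resulting `elasticsearch.yml` fragment would read as follows (the project and zone values are, of course, yours to substitute):

[source,yaml]
----
cloud:
  gce:
    project_id: your-project-id
    zone: europe-west1-a
discovery:
  seed_providers: gce
----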
 To run test:


+ 2 - 2
docs/plugins/integrations.asciidoc

@@ -91,7 +91,7 @@ Integrations are not plugins, but are external tools or modules that make it eas
   Elasticsearch Grails plugin.

 * https://hibernate.org/search/[Hibernate Search]
-  Integration with Hibernate ORM, from the Hibernate team. Automatic synchronization of write operations, yet exposes full Elasticsearch capabilities for queries. Can return either Elasticsearch native or re-map queries back into managed entities loaded within transaction from the reference database.
+  Integration with Hibernate ORM, from the Hibernate team. Automatic synchronization of write operations, yet exposes full Elasticsearch capabilities for queries. Can return either Elasticsearch native or re-map queries back into managed entities loaded within transactions from the reference database.

 * https://github.com/spring-projects/spring-data-elasticsearch[Spring Data Elasticsearch]:
   Spring Data implementation for Elasticsearch
@@ -104,7 +104,7 @@ Integrations are not plugins, but are external tools or modules that make it eas

 * https://pulsar.apache.org/docs/en/io-elasticsearch[Apache Pulsar]:
   The Elasticsearch Sink Connector is used to pull messages from Pulsar topics
-  and persist the messages to a index.
+  and persist the messages to an index.

 * https://micronaut-projects.github.io/micronaut-elasticsearch/latest/guide/index.html[Micronaut Elasticsearch Integration]:
   Integration of Micronaut with Elasticsearch

+ 1 - 1
docs/plugins/mapper-annotated-text.asciidoc

@@ -143,7 +143,7 @@ broader positional queries e.g. finding mentions of a `Guitarist` near to `strat

 WARNING: Any use of `=` signs in annotation values eg `[Prince](person=Prince)` will
 cause the document to be rejected with a parse failure. In future we hope to have a use for
-the equals signs so wil actively reject documents that contain this today.
+the equals signs so will actively reject documents that contain this today.
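For contrast with the rejected `[Prince](person=Prince)` form, here is a sketch of an accepted annotation, URL-encoding spaces with `+` and avoiding `=` entirely (index and field names are illustrative; requires the `mapper-annotated-text` plugin):

[source,console]
----
PUT my-annotated-index
{
  "mappings": {
    "properties": {
      "comment": { "type": "annotated_text" }
    }
  }
}

PUT my-annotated-index/_doc/1
{
  "comment": "[Prince](Prince+Rogers+Nelson) was a musician."
}
----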
 [[annotated-text-synthetic-source]]
 ===== Synthetic `_source`

+ 2 - 2
docs/plugins/store-smb.asciidoc

@@ -10,7 +10,7 @@ include::install_remove.asciidoc[]
 ==== Working around a bug in Windows SMB and Java on windows

 When using a shared file system based on the SMB protocol (like Azure File Service) to store indices, the way Lucene
-open index segment files is with a write only flag. This is the _correct_ way to open the files, as they will only be
+opens index segment files is with a write only flag. This is the _correct_ way to open the files, as they will only be
 used for writes and allows different FS implementations to optimize for it. Sadly, in windows with SMB, this disables
 the cache manager, causing writes to be slow. This has been described in
 https://issues.apache.org/jira/browse/LUCENE-6176[LUCENE-6176], but it affects each and every Java program out there!.
@@ -44,7 +44,7 @@ This can be configured for all indices by adding this to the `elasticsearch.yml`
 index.store.type: smb_nio_fs
 ----

-Note that setting will be applied for newly created indices.
+Note that settings will be applied for newly created indices.

 It can also be set on a per-index basis at index creation time:
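The per-index variant that the last context line introduces looks like the following sketch (the index name is illustrative, using the same `smb_nio_fs` type as the hunk above):

[source,console]
----
PUT my-smb-index
{
  "settings": {
    "index.store.type": "smb_nio_fs"
  }
}
----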