
[DOCS] Fix typos (#83895)

Tobias Stadler 3 years ago
parent
commit
e3deacf547

+ 1 - 1
docs/painless/painless-contexts/painless-watcher-context-variables.asciidoc

@@ -9,7 +9,7 @@ The following variables are available in all watcher contexts.
         The id of the watch.
 
 `ctx['id']` (`String`, read-only)::
-        The server generated unique identifer for the run watch.
+        The server generated unique identifier for the run watch.
 
 `ctx['metadata']` (`Map`, read-only)::
         Metadata can be added to the top level of the watch definition. This

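For context on the variables this hunk documents: a minimal watch sketch, with hypothetical metadata and action names, that stores top-level `metadata` and reads it (read-only) from a painless condition script; `{{ctx.watch_id}}` is the watch id described on the line above the fix.

[source,console]
----
PUT _watcher/watch/my_watch
{
  "metadata": { "team": "ops" },
  "trigger": { "schedule": { "interval": "10m" } },
  "input": { "simple": { "status": "ok" } },
  "condition": {
    "script": {
      "source": "ctx.metadata.team == 'ops'"
    }
  },
  "actions": {
    "log_hit": {
      "logging": { "text": "watch {{ctx.watch_id}} fired" }
    }
  }
}
----
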
+ 1 - 1
docs/plugins/repository.asciidoc

@@ -6,7 +6,7 @@ functionality in Elasticsearch by adding repositories backed by the cloud or
 by distributed file systems:
 
 [discrete]
-==== Offical repository plugins
+==== Official repository plugins
 
 NOTE: Support for S3, GCS and Azure repositories is now bundled in {es} by
 default.

+ 1 - 1
docs/reference/analysis/analyzers/pattern-analyzer.asciidoc

@@ -366,7 +366,7 @@ The regex above is easier to understand as:
 [discrete]
 === Definition
 
-The `pattern` anlayzer consists of:
+The `pattern` analyzer consists of:
 
 Tokenizer::
 * <<analysis-pattern-tokenizer,Pattern Tokenizer>>

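The `Definition` section this hunk touches describes the `pattern` analyzer as a pattern tokenizer plus a lowercase filter; a minimal sketch rebuilding it as a custom analyzer (index and analyzer names are illustrative):

[source,console]
----
PUT /pattern_example
{
  "settings": {
    "analysis": {
      "tokenizer": {
        "split_on_non_word": {
          "type": "pattern",
          "pattern": "\\W+"
        }
      },
      "analyzer": {
        "rebuilt_pattern": {
          "tokenizer": "split_on_non_word",
          "filter": [ "lowercase" ]
        }
      }
    }
  }
}
----
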
+ 1 - 1
docs/reference/analysis/tokenfilters/predicate-tokenfilter.asciidoc

@@ -44,7 +44,7 @@ The filter produces the following tokens.
 
 The API response contains the position and offsets of each output token. Note
 the `predicate_token_filter` filter does not change the tokens' original
-positions or offets.
+positions or offsets.
 
 .*Response*
 [%collapsible]

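To illustrate the filter this hunk discusses, a minimal analyze-request sketch, assuming a predicate script that only keeps tokens after position 1 (the script body and text are illustrative); surviving tokens keep their original positions and offsets, as the fixed sentence notes:

[source,console]
----
GET /_analyze
{
  "tokenizer": "whitespace",
  "filter": [
    {
      "type": "predicate_token_filter",
      "script": {
        "source": "token.position > 1"
      }
    }
  ],
  "text": "the fox jumps the lazy dog"
}
----
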
+ 1 - 1
docs/reference/cat/trainedmodel.asciidoc

@@ -72,7 +72,7 @@ The estimated heap size to keep the trained model in memory.
 
 `id`:::
 (Default)
-Idetifier for the trained model.
+Identifier for the trained model.
 
 `ingest.count`, `ic`, `ingestCount`:::
 The total number of documents that are processed by the model.

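A sketch of the cat call these columns belong to; `id` and `ingest.count` come straight from the hunk, while `heap_size` is an assumed column name for the estimated heap size field mentioned above:

[source,console]
----
GET _cat/ml/trained_models?v=true&h=id,heap_size,ingest.count
----
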
+ 1 - 1
docs/reference/cluster/stats.asciidoc

@@ -1096,7 +1096,7 @@ Total size of all file stores across all selected nodes.
 
 `total_in_bytes`::
 (integer)
-Total size, in bytes, of all file stores across all seleced nodes.
+Total size, in bytes, of all file stores across all selected nodes.
 
 `free`::
 (<<byte-units, byte units>>)

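The `total_in_bytes` field fixed here sits under the file-store stats of the cluster stats response; a minimal sketch that filters the response down to just that subtree:

[source,console]
----
GET _cluster/stats?human&filter_path=nodes.fs
----
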
+ 1 - 1
docs/reference/commands/keystore.asciidoc

@@ -218,7 +218,7 @@ password.
 [[show-keystore-value]]
 ==== Show settings in the keystore
 
-To display the value of a setting in the keystorem use the `show` command:
+To display the value of a setting in the keystore use the `show` command:
 
 [source,sh]
 ----------------------------------------------------------------

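The `[source,sh]` block is truncated by the diff context; for reference, a minimal sketch of the `show` command the fixed sentence refers to, using an example secure setting name:

[source,sh]
----
bin/elasticsearch-keystore show xpack.security.http.ssl.keystore.secure_password
----
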
+ 1 - 1
docs/reference/graph/explore.asciidoc

@@ -84,7 +84,7 @@ graph as vertices. For example:
 field::: Identifies a field in the documents of interest.
 include::: Identifies the terms of interest that form the starting points
 from which you want to spider out. You do not have to specify a seed query
-if you specify an include clause. The include clause implicitly querys for
+if you specify an include clause. The include clause implicitly queries for
 documents that contain any of the listed terms listed.
 In addition to specifying a simple array of strings, you can also pass
 objects with `term` and `boost` values to boost matches on particular terms.

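A minimal sketch of the include clause the fixed sentence describes, with hypothetical index and field names. There is no seed query, since the include clause implicitly queries for the listed terms, and the `term`/`boost` objects boost matches on particular terms:

[source,console]
----
POST my-index/_graph/explore
{
  "vertices": [
    {
      "field": "product.keyword",
      "include": [
        { "term": "widget-a", "boost": 2 },
        { "term": "widget-b", "boost": 1 }
      ]
    }
  ],
  "connections": {
    "vertices": [ { "field": "customer.keyword" } ]
  }
}
----
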
+ 1 - 1
docs/reference/how-to/recipes/scoring.asciidoc

@@ -192,7 +192,7 @@ While both options would return similar scores, there are trade-offs:
 <<query-dsl-script-score-query,script_score>> provides a lot of flexibility,
 enabling you to combine the text relevance score with static signals as you
 prefer. On the other hand, the <<rank-feature,`rank_feature` query>> only
-exposes a couple ways to incorporate static signails into the score. However,
+exposes a couple ways to incorporate static signals into the score. However,
 it relies on the <<rank-feature,`rank_feature`>> and
 <<rank-features,`rank_features`>> fields, which index values in a special way
 that allows the <<query-dsl-rank-feature-query,`rank_feature` query>> to skip

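A minimal sketch of the `rank_feature` option this recipe compares, assuming a hypothetical `pagerank` field mapped as `rank_feature`; the `should` clause adds the static signal to the text relevance score:

[source,console]
----
GET my-index/_search
{
  "query": {
    "bool": {
      "must": [
        { "match": { "body": "search terms" } }
      ],
      "should": [
        { "rank_feature": { "field": "pagerank" } }
      ]
    }
  }
}
----
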
+ 1 - 1
docs/reference/migration/migrate_8_0/plugin-changes.asciidoc

@@ -13,7 +13,7 @@ TIP: {ess-skip-section}
 ====
 *Details* +
 In previous versions of {es}, in order to register a snapshot repository
-backed by Amazon S3, Google Cloud Storge (GCS) or Microsoft Azure Blob
+backed by Amazon S3, Google Cloud Storage (GCS) or Microsoft Azure Blob
 Storage, you first had to install the corresponding Elasticsearch plugin,
 for example `repository-s3`. These plugins are now included in {es} by
 default.

+ 1 - 1
docs/reference/migration/migrate_8_0/sql-jdbc-changes.asciidoc

@@ -12,7 +12,7 @@
 *Details* +
 To reduce the dependency of the JDBC driver onto Elasticsearch classes, the JDBC driver returns geometry data
 as strings using the WKT (well-known text) format instead of classes from the `org.elasticsearch.geometry`.
-Users can choose the geometry library desired to convert the string represantion into a full-blown objects
+Users can choose the geometry library desired to convert the string representation into a full-blown objects
 either such as the `elasticsearch-geo` library (which returned the object `org.elasticsearch.geo` as before),
 jts or spatial4j.
 

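To show what consuming the WKT strings mentioned here looks like, a sketch using the JTS library (one of the options the text names); the class and column names are hypothetical:

[source,java]
----
import java.sql.ResultSet;
import java.sql.SQLException;
import org.locationtech.jts.geom.Geometry;
import org.locationtech.jts.io.ParseException;
import org.locationtech.jts.io.WKTReader;

class WktJdbcExample {
    // Convert a WKT string returned by the JDBC driver into a JTS geometry.
    static Geometry readGeometry(ResultSet rs, String column)
            throws SQLException, ParseException {
        String wkt = rs.getString(column); // e.g. "POINT (10.0 20.0)"
        return new WKTReader().read(wkt);
    }
}
----
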
+ 1 - 1
docs/reference/ml/anomaly-detection/ml-configuring-alerts.asciidoc

@@ -330,7 +330,7 @@ formatting is based on the {kib} settings.
 The peak number of bytes of memory ever used by the model.
 ====
 
-==== _Data delay has occured_
+==== _Data delay has occurred_
 
 `context.message`::
 A preconstructed message for the rule.

+ 1 - 1
docs/reference/ml/ml-shared.asciidoc

@@ -995,7 +995,7 @@ Tokenize with special tokens. The tokens typically included in MPNet-style token
 end::inference-config-nlp-tokenization-mpnet-with-special-tokens[]
 
 tag::inference-config-nlp-vocabulary[]
-The configuration for retreiving the vocabulary of the model. The vocabulary is
+The configuration for retrieving the vocabulary of the model. The vocabulary is
 then used at inference time. This information is usually provided automatically
 by storing vocabulary in a known, internally managed index.
 end::inference-config-nlp-vocabulary[]

+ 1 - 1
docs/reference/modules/discovery/bootstrapping.asciidoc

@@ -75,7 +75,7 @@ configuration. If each node name is a fully-qualified domain name such as
 `master-a.example.com` then you must use fully-qualified domain names in the
 `cluster.initial_master_nodes` list too; conversely if your node names are bare
 hostnames (without the `.example.com` suffix) then you must use bare hostnames
-in the `cluster.initial_master_nodes` list. If you use a mix of fully-qualifed
+in the `cluster.initial_master_nodes` list. If you use a mix of fully-qualified
 and bare hostnames, or there is some other mismatch between `node.name` and
 `cluster.initial_master_nodes`, then the cluster will not form successfully and
 you will see log messages like the following.

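A minimal `elasticsearch.yml` sketch of the matching requirement this hunk describes, using the fully-qualified name from the surrounding example (the second node name is assumed):

[source,yaml]
----
node.name: master-a.example.com
cluster.initial_master_nodes:
  - master-a.example.com
  - master-b.example.com
----
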
+ 1 - 1
docs/reference/snapshot-restore/apis/put-repo-api.asciidoc

@@ -91,7 +91,7 @@ Repository type.
 
 Other repository types are available through official plugins:
 
-`hfds`:: {plugins}/repository-hdfs.html[Hadoop Distributed File System (HDFS) repository]
+`hdfs`:: {plugins}/repository-hdfs.html[Hadoop Distributed File System (HDFS) repository]
 ====
 
 [[put-snapshot-repo-api-settings-param]]

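A sketch of registering the `hdfs` repository type whose name this hunk corrects, assuming the `repository-hdfs` plugin is installed and hypothetical connection settings:

[source,console]
----
PUT _snapshot/my_hdfs_repository
{
  "type": "hdfs",
  "settings": {
    "uri": "hdfs://namenode.example.com:8020",
    "path": "/elasticsearch/snapshots"
  }
}
----
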
+ 1 - 1
docs/reference/sql/limitations.asciidoc

@@ -4,7 +4,7 @@
 
 [discrete]
 [[large-parsing-trees]]
-=== Large queries may throw `ParsingExpection`
+=== Large queries may throw `ParsingException`
 
 Extremely large queries can consume too much memory during the parsing phase, in which case the {es-sql} engine will
 abort parsing and throw an error. In such cases, consider reducing the query to a smaller size by potentially