
[DOCS] Fix hyphenation for "time series" (#61472)

James Rodewig · 5 years ago · commit c688cb6bfd

+ 2 - 2
docs/reference/api-conventions.asciidoc

@@ -49,8 +49,8 @@ syntax.
 [[date-math-index-names]]
 === Date math support in index names
 
-Date math index name resolution enables you to search a range of time-series indices, rather
-than searching all of your time-series indices and filtering the results or maintaining aliases.
+Date math index name resolution enables you to search a range of time series indices, rather
+than searching all of your time series indices and filtering the results or maintaining aliases.
 Limiting the number of indices that are searched reduces the load on the cluster and improves
 execution performance. For example, if you are searching for errors in your
 daily logs, you can use a date math name template to restrict the search to the past

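For illustration, a minimal sketch of the resolution described above (the `my-index-{now/d}` name and the query are assumptions for the example, not taken from the changed file). The date math characters must be percent-encoded in the request path:

[source,console]
----
# Resolves to today's daily index (for example my-index-2020.08.25), so only
# that index is searched instead of every index in the series.
# Encoding: %3C = <   %7B = {   %2F = /   %7D = }   %3E = >
GET /%3Cmy-index-%7Bnow%2Fd%7D%3E/_search
{
  "query": {
    "match": { "message": "error" }
  }
}
----
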
+ 5 - 5
docs/reference/data-streams/data-streams.asciidoc

@@ -6,9 +6,9 @@
 ++++
 
 A _data stream_ is a convenient, scalable way to ingest, search, and manage
-continuously generated time-series data.
+continuously generated time series data.
 
-Time-series data, such as logs, tends to grow over time. While storing an entire
+Time series data, such as logs, tends to grow over time. While storing an entire
 time series in a single {es} index is simpler, it is often more efficient and
 cost-effective to store large volumes of data across multiple, time-based
 indices. Multiple indices let you move indices containing older, less frequently
@@ -38,10 +38,10 @@ budget, performance, resiliency, and retention needs.
 
 We recommend using data streams if you:
 
-* Use {es} to ingest, search, and manage large volumes of time-series data
+* Use {es} to ingest, search, and manage large volumes of time series data
 * Want to scale and reduce costs by using {ilm-init} to automate the management
   of your indices
-* Index large volumes of time-series data in {es} but rarely delete or update
+* Index large volumes of time series data in {es} but rarely delete or update
   individual documents
 
 
@@ -161,7 +161,7 @@ manually perform a rollover. See <<manually-roll-over-a-data-stream>>.
 [[data-streams-append-only]]
 == Append-only
 
-For most time-series use cases, existing data is rarely, if ever, updated.
+For most time series use cases, existing data is rarely, if ever, updated.
 Because of this, data streams are designed to be append-only.
 
 You can send <<add-documents-to-a-data-stream,indexing requests for new

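A minimal sketch of the append-only model described above (the `my-data-stream` name and field values are assumptions for the example): new documents are appended with create operations and must carry `@timestamp`, while individual documents are not updated or deleted through the stream itself.

[source,console]
----
# Appends a document to the data stream; it is routed to the current
# write index. Every document must include the @timestamp field.
POST /my-data-stream/_doc
{
  "@timestamp": "2020-08-25T12:00:00.000Z",
  "message": "login failed for user jsmith"
}
----
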
+ 1 - 1
docs/reference/data-streams/set-up-a-data-stream.asciidoc

@@ -21,7 +21,7 @@ and its backing indices.
 [[data-stream-prereqs]]
 === Prerequisites
 
-* {es} data streams are intended for time-series data only. Each document
+* {es} data streams are intended for time series data only. Each document
 indexed to a data stream must contain the `@timestamp` field. This field must be
 mapped as a <<date,`date`>> or <<date_nanos,`date_nanos`>> field data type.
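
A minimal sketch of an index template that satisfies this prerequisite (the template name, index pattern, and stream name are assumptions for the example): it declares matching names as data streams and maps `@timestamp` as a `date` field.

[source,console]
----
PUT /_index_template/my-data-stream-template
{
  "index_patterns": ["my-data-stream*"],
  "data_stream": {},
  "template": {
    "mappings": {
      "properties": {
        "@timestamp": { "type": "date" }
      }
    }
  }
}
----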
 

+ 1 - 1
docs/reference/eql/eql.asciidoc

@@ -9,7 +9,7 @@
 experimental::[]
 
 {eql-ref}/index.html[Event Query Language (EQL)] is a query language for
-event-based, time-series data, such as logs.
+event-based, time series data, such as logs.
 
 [discrete]
 [[eql-advantages]]

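A minimal sketch of an EQL search over such event data (the index pattern and query are assumptions for the example):

[source,console]
----
# Finds process events whose executable name matches a value of interest;
# EQL reads the event category and @timestamp fields from each document.
GET /my-logs-*/_eql/search
{
  "query": """
    process where process.name == "regsvr32.exe"
  """
}
----
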
+ 1 - 1
docs/reference/glossary.asciidoc

@@ -77,7 +77,7 @@ See {ref}/modules-cross-cluster-search.html[Search across clusters].
 +
 --
 // tag::data-stream-def[]
-A named resource used to ingest, search, and manage time-series data in {es}. A
+A named resource used to ingest, search, and manage time series data in {es}. A
 data stream's data is stored across multiple hidden, auto-generated
 <<glossary-index,indices>>. You can automate management of these indices to more
 efficiently store large data volumes.

+ 1 - 1
docs/reference/ilm/ilm-overview.asciidoc

@@ -23,7 +23,7 @@ include::../glossary.asciidoc[tag=freeze-def-short]
 * **Delete**: Permanently remove an index, including all of its data and metadata.
 
 {ilm-init} makes it easier to manage indices in hot-warm-cold architectures,
-which are common when you're working with time-series data such as logs and metrics.
+which are common when you're working with time series data such as logs and metrics.
 
 You can specify:
 

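A minimal sketch of the kind of lifecycle policy this refers to (the policy name and thresholds are assumptions for the example): roll over while the data is hot, then delete indices once they age out.

[source,console]
----
# The hot phase rolls the write index over at 50 GB or 30 days;
# the delete phase removes an index 90 days after rollover.
PUT _ilm/policy/timeseries-example-policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_size": "50gb",
            "max_age": "30d"
          }
        }
      },
      "delete": {
        "min_age": "90d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}
----
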
+ 5 - 5
docs/reference/ilm/ilm-tutorial.asciidoc

@@ -271,18 +271,18 @@ DELETE /_index_template/timeseries_template
 
 [discrete]
 [[manage-time-series-data-without-data-streams]]
-=== Manage time-series data without data streams
+=== Manage time series data without data streams
 
 Even though <<data-streams, data streams>> are a convenient way to scale
-and manage time-series data, they are designed to be append-only. We recognise there
+and manage time series data, they are designed to be append-only. We recognise there
 might be use-cases where data needs to be updated or deleted in place and the
 data streams don't support delete and update requests directly,
 so the index APIs would need to be used directly on the data stream's backing indices.
 
-In these cases, you can use an index alias to manage indices containing the time-series data
+In these cases, you can use an index alias to manage indices containing the time series data
 and periodically roll over to a new index.
 
-To automate rollover and management of time-series indices with {ilm-init} using an index
+To automate rollover and management of time series indices with {ilm-init} using an index
 alias, you:
 
 . Create a lifecycle policy that defines the appropriate phases and actions.
@@ -352,7 +352,7 @@ DELETE _index_template/timeseries_template
 
 [discrete]
 [[ilm-gs-alias-bootstrap]]
-=== Bootstrap the initial time-series index with a write index alias
+=== Bootstrap the initial time series index with a write index alias
 
 To get things started, you need to bootstrap an initial index and
 designate it as the write index for the rollover alias specified in your index template.
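
A minimal sketch of that bootstrap step (the exact index and alias names are assumptions here, chosen to match the tutorial's `timeseries` naming):

[source,console]
----
# Creates the first index in the series and marks it as the write index for
# the rollover alias; rollover later creates timeseries-000002, and so on.
PUT timeseries-000001
{
  "aliases": {
    "timeseries": {
      "is_write_index": true
    }
  }
}
----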

+ 3 - 3
docs/reference/ilm/index-rollover.asciidoc

@@ -1,7 +1,7 @@
 [[index-rollover]]
 === Rollover
 
-When indexing time-series data like logs or metrics, you can't write to a single index indefinitely. 
+When indexing time series data like logs or metrics, you can't write to a single index indefinitely. 
 To meet your indexing and search performance requirements and manage resource usage, 
 you write to an index until some threshold is met and 
 then create a new index and start writing to it instead. 
@@ -12,7 +12,7 @@ Using rolling indices enables you to:
 * Shift older, less frequently accessed data to less expensive _cold_ nodes,
 * Delete data according to your retention policies by removing entire indices.
 
-We recommend using <<indices-create-data-stream, data streams>> to manage time-series
+We recommend using <<indices-create-data-stream, data streams>> to manage time series
 data. Data streams automatically track the write index while keeping configuration to a minimum.
 
 Each data stream requires an <<indices-templates,index template>> that contains:
@@ -27,7 +27,7 @@ Each data stream requires an <<indices-templates,index template>> that contains:
 
 Data streams are designed for append-only data, where the data stream name
 can be used as the operations (read, write, rollover, shrink etc.) target.
-If your use case requires data to be updated in place, you can instead manage your time-series data using <<indices-aliases, indices aliases>>. However, there are a few more configuration steps and
+If your use case requires data to be updated in place, you can instead manage your time series data using <<indices-aliases, indices aliases>>. However, there are a few more configuration steps and
 concepts:
 
 * An _index template_ that specifies the settings for each new index in the series.

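A minimal sketch of a rollover request against such an alias (the alias name and thresholds are assumptions for the example):

[source,console]
----
# Rolls the alias over to a new index if the current write index is at least
# 7 days old or holds at least 5 million documents.
POST /timeseries/_rollover
{
  "conditions": {
    "max_age": "7d",
    "max_docs": 5000000
  }
}
----
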
+ 1 - 1
docs/reference/intro.asciidoc

@@ -163,7 +163,7 @@ embroidery_ needles.
 [[more-features]]
 ===== But wait, there’s more
 
-Want to automate the analysis of your time-series data? You can use
+Want to automate the analysis of your time series data? You can use
 {ml-docs}/ml-overview.html[machine learning] features to create accurate
 baselines of normal behavior in your data and identify anomalous patterns. With
 machine learning, you can detect:

+ 1 - 1
docs/reference/mapping/dynamic/templates.asciidoc

@@ -385,7 +385,7 @@ default rules of dynamic mappings. Of course if you do not need them because
 you don't need to perform exact search or aggregate on this field, you could
 remove it as described in the previous section.
 
-===== Time-series
+===== Time series
 
 When doing time series analysis with Elasticsearch, it is common to have many
 numeric fields that you will often aggregate on but never filter on. In such a

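One common way to handle this, sketched below with assumed index and template names, is a pair of dynamic templates that map numeric fields with indexing disabled; the fields remain aggregatable through doc values but skip the per-field index structures.

[source,console]
----
PUT my-metrics-index
{
  "mappings": {
    "dynamic_templates": [
      {
        "unindexed_longs": {
          "match_mapping_type": "long",
          "mapping": { "type": "long", "index": false }
        }
      },
      {
        "unindexed_doubles": {
          "match_mapping_type": "double",
          "mapping": { "type": "float", "index": false }
        }
      }
    ]
  }
}
----
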
+ 2 - 2
docs/reference/query-dsl/_query-template.asciidoc

@@ -98,9 +98,9 @@ Guidelines
 By default, {es} changes the values of `text` fields during analysis. For
 example, ...
 
-===== Using the `sample` query on time-series data
+===== Using the `sample` query on time series data
 
-You can use the `sample` query to perform searches on time-series data.
+You can use the `sample` query to perform searches on time series data.
 For example:
 
 [source,console]

+ 1 - 1
docs/reference/search/search-shard-routing.asciidoc

@@ -68,7 +68,7 @@ session ID. This string cannot start with a `_`.
 TIP: You can use this option to serve cached results for frequently used and
 resource-intensive searches. If the shard's data doesn't change, repeated
 searches with the same `preference` string retrieve results from the same
-<<shard-request-cache,shard request cache>>. For time-series use cases, such as
+<<shard-request-cache,shard request cache>>. For time series use cases, such as
 logging, data in older indices is rarely updated and can be served directly from
 this cache.
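
A minimal sketch of such a request (the index name, preference string, and aggregation are assumptions for the example):

[source,console]
----
# Repeating this search with the same preference string routes it to the same
# shard copies, so unchanged indices can answer from the shard request cache.
GET /my-logs-index/_search?preference=dashboard-errors-widget
{
  "size": 0,
  "query": {
    "match": { "message": "error" }
  },
  "aggs": {
    "errors_over_time": {
      "date_histogram": {
        "field": "@timestamp",
        "calendar_interval": "1d"
      }
    }
  }
}
----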
 

+ 1 - 1
x-pack/docs/en/watcher/example-watches/watching-time-series-data.asciidoc

@@ -2,7 +2,7 @@
 [[watching-time-series-data]]
 === Watching time series data
 
-If you are indexing time-series data such as logs, RSS feeds, or network traffic,
+If you are indexing time series data such as logs, RSS feeds, or network traffic,
 you can use {watcher} to send notifications when certain events occur.
 
 For example, you could index an RSS feed of posts on Stack Overflow that are
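
A minimal sketch of such a watch (the watch name, index pattern, schedule, and action are assumptions for the example): search the data on an interval and log a message when matching events appear.

[source,console]
----
PUT _watcher/watch/log-error-watch
{
  "trigger": {
    "schedule": { "interval": "10m" }
  },
  "input": {
    "search": {
      "request": {
        "indices": [ "logs-*" ],
        "body": {
          "query": { "match": { "message": "error" } }
        }
      }
    }
  },
  "condition": {
    "compare": { "ctx.payload.hits.total": { "gt": 0 } }
  },
  "actions": {
    "log_errors": {
      "logging": {
        "text": "Found {{ctx.payload.hits.total}} error events"
      }
    }
  }
}
----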