
Updated docs for 3.0.0-beta

Clinton Gormley committed 10 years ago
commit dc018cf622
25 changed files with 8 additions and 70 deletions
  1. +0 -2   docs/reference/aggregations/pipeline.asciidoc
  2. +0 -2   docs/reference/aggregations/pipeline/avg-bucket-aggregation.asciidoc
  3. +0 -2   docs/reference/aggregations/pipeline/bucket-script-aggregation.asciidoc
  4. +0 -2   docs/reference/aggregations/pipeline/bucket-selector-aggregation.asciidoc
  5. +0 -2   docs/reference/aggregations/pipeline/cumulative-sum-aggregation.asciidoc
  6. +0 -2   docs/reference/aggregations/pipeline/derivative-aggregation.asciidoc
  7. +0 -2   docs/reference/aggregations/pipeline/extended-stats-bucket-aggregation.asciidoc
  8. +0 -2   docs/reference/aggregations/pipeline/max-bucket-aggregation.asciidoc
  9. +0 -2   docs/reference/aggregations/pipeline/min-bucket-aggregation.asciidoc
  10. +0 -2  docs/reference/aggregations/pipeline/movavg-aggregation.asciidoc
  11. +0 -2  docs/reference/aggregations/pipeline/percentiles-bucket-aggregation.asciidoc
  12. +0 -2  docs/reference/aggregations/pipeline/serial-diff-aggregation.asciidoc
  13. +0 -2  docs/reference/aggregations/pipeline/stats-bucket-aggregation.asciidoc
  14. +0 -2  docs/reference/aggregations/pipeline/sum-bucket-aggregation.asciidoc
  15. +0 -4  docs/reference/docs/bulk.asciidoc
  16. +0 -4  docs/reference/docs/index_.asciidoc
  17. +0 -4  docs/reference/docs/termvectors.asciidoc
  18. +2 -2  docs/reference/index.asciidoc
  19. +0 -10 docs/reference/indices/analyze.asciidoc
  20. +0 -2  docs/reference/mapping/fields/parent-field.asciidoc
  21. +0 -2  docs/reference/mapping/fields/timestamp-field.asciidoc
  22. +0 -2  docs/reference/mapping/fields/ttl-field.asciidoc
  23. +2 -2  docs/reference/modules/snapshots.asciidoc
  24. +4 -4  docs/reference/query-dsl/mlt-query.asciidoc
  25. +0 -6  docs/reference/search/request/scroll.asciidoc

+ 0 - 2
docs/reference/aggregations/pipeline.asciidoc

@@ -2,8 +2,6 @@
 
 == Pipeline Aggregations
 
-coming[2.0.0-beta1]
-
 experimental[]
 
 Pipeline aggregations work on the outputs produced from other aggregations rather than from document sets, adding
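
As a quick illustration of the `buckets_path` wiring these pages describe, a parent pipeline aggregation such as `derivative` sits inside the aggregation it reads from, roughly like this (index and field names are illustrative, not taken from the commit):

[source,js]
--------------------------------------------------
# Illustrative sketch only; index "sales" and fields "date"/"price" are assumptions.
curl -XGET 'localhost:9200/sales/_search' -d '{
  "size": 0,
  "aggs": {
    "sales_per_month": {
      "date_histogram": { "field": "date", "interval": "month" },
      "aggs": {
        "sales": { "sum": { "field": "price" } },
        "sales_deriv": {
          "derivative": { "buckets_path": "sales" }
        }
      }
    }
  }
}'
--------------------------------------------------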

+ 0 - 2
docs/reference/aggregations/pipeline/avg-bucket-aggregation.asciidoc

@@ -1,8 +1,6 @@
 [[search-aggregations-pipeline-avg-bucket-aggregation]]
 === Avg Bucket Aggregation
 
-coming[2.0.0-beta1]
-
 experimental[]
 
 A sibling pipeline aggregation which calculates the (mean) average value of a specified metric in a sibling aggregation. 
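
A sibling pipeline sits next to the aggregation it reads from and addresses the metric with a `>`-separated `buckets_path`; a minimal sketch with assumed names:

[source,js]
--------------------------------------------------
# Illustrative sketch only; "sales_per_month" and "sales" are assumed aggregation names.
curl -XGET 'localhost:9200/sales/_search' -d '{
  "size": 0,
  "aggs": {
    "sales_per_month": {
      "date_histogram": { "field": "date", "interval": "month" },
      "aggs": { "sales": { "sum": { "field": "price" } } }
    },
    "avg_monthly_sales": {
      "avg_bucket": { "buckets_path": "sales_per_month>sales" }
    }
  }
}'
--------------------------------------------------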

+ 0 - 2
docs/reference/aggregations/pipeline/bucket-script-aggregation.asciidoc

@@ -1,8 +1,6 @@
 [[search-aggregations-pipeline-bucket-script-aggregation]]
 === Bucket Script Aggregation
 
-coming[2.0.0-beta1]
-
 experimental[]
 
 A parent pipeline aggregation which executes a script which can perform per bucket computations on specified metrics 
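
A hedged sketch of the request shape: `buckets_path` maps script variables to metrics in the same bucket, and the inline script (Groovy by default in this era) combines them. All names are illustrative:

[source,js]
--------------------------------------------------
# Illustrative sketch only; field and aggregation names are assumptions.
curl -XGET 'localhost:9200/sales/_search' -d '{
  "size": 0,
  "aggs": {
    "sales_per_month": {
      "date_histogram": { "field": "date", "interval": "month" },
      "aggs": {
        "total": { "sum": { "field": "price" } },
        "units": { "sum": { "field": "quantity" } },
        "avg_price": {
          "bucket_script": {
            "buckets_path": { "total": "total", "units": "units" },
            "script": "total / units"
          }
        }
      }
    }
  }
}'
--------------------------------------------------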

+ 0 - 2
docs/reference/aggregations/pipeline/bucket-selector-aggregation.asciidoc

@@ -1,8 +1,6 @@
 [[search-aggregations-pipeline-bucket-selector-aggregation]]
 === Bucket Selector Aggregation
 
-coming[2.0.0-beta1]
-
 experimental[]
 
 A parent pipeline aggregation which executes a script which determines whether the current bucket will be retained 
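
Roughly, the selector evaluates a boolean script per bucket and drops buckets for which it returns false; a sketch with assumed names:

[source,js]
--------------------------------------------------
# Illustrative sketch only; keeps months whose total sales exceed 1000.
curl -XGET 'localhost:9200/sales/_search' -d '{
  "size": 0,
  "aggs": {
    "sales_per_month": {
      "date_histogram": { "field": "date", "interval": "month" },
      "aggs": {
        "total_sales": { "sum": { "field": "price" } },
        "sales_filter": {
          "bucket_selector": {
            "buckets_path": { "totalSales": "total_sales" },
            "script": "totalSales > 1000"
          }
        }
      }
    }
  }
}'
--------------------------------------------------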

+ 0 - 2
docs/reference/aggregations/pipeline/cumulative-sum-aggregation.asciidoc

@@ -1,8 +1,6 @@
 [[search-aggregations-pipeline-cumulative-sum-aggregation]]
 === Cumulative Sum Aggregation
 
-coming[2.0.0-beta1]
-
 experimental[]
 
 A parent pipeline aggregation which calculates the cumulative sum of a specified metric in a parent histogram (or date_histogram) 

+ 0 - 2
docs/reference/aggregations/pipeline/derivative-aggregation.asciidoc

@@ -1,8 +1,6 @@
 [[search-aggregations-pipeline-derivative-aggregation]]
 === Derivative Aggregation
 
-coming[2.0.0-beta1]
-
 experimental[]
 
 A parent pipeline aggregation which calculates the derivative of a specified metric in a parent histogram (or date_histogram) 

+ 0 - 2
docs/reference/aggregations/pipeline/extended-stats-bucket-aggregation.asciidoc

@@ -1,8 +1,6 @@
 [[search-aggregations-pipeline-extended-stats-bucket-aggregation]]
 === Extended Stats Bucket Aggregation
 
-coming[2.1.0]
-
 experimental[]
 
 A sibling pipeline aggregation which calculates a variety of stats across all bucket of a specified metric in a sibling aggregation.

+ 0 - 2
docs/reference/aggregations/pipeline/max-bucket-aggregation.asciidoc

@@ -1,8 +1,6 @@
 [[search-aggregations-pipeline-max-bucket-aggregation]]
 === Max Bucket Aggregation
 
-coming[2.0.0-beta1]
-
 experimental[]
 
 A sibling pipeline aggregation which identifies the bucket(s) with the maximum value of a specified metric in a sibling aggregation

+ 0 - 2
docs/reference/aggregations/pipeline/min-bucket-aggregation.asciidoc

@@ -1,8 +1,6 @@
 [[search-aggregations-pipeline-min-bucket-aggregation]]
 === Min Bucket Aggregation
 
-coming[2.0.0-beta1]
-
 experimental[]
 
 A sibling pipeline aggregation which identifies the bucket(s) with the minimum value of a specified metric in a sibling aggregation 

+ 0 - 2
docs/reference/aggregations/pipeline/movavg-aggregation.asciidoc

@@ -1,8 +1,6 @@
 [[search-aggregations-pipeline-movavg-aggregation]]
 === Moving Average Aggregation
 
-coming[2.0.0-beta1]
-
 experimental[]
 
 Given an ordered series of data, the Moving Average aggregation will slide a window across the data and emit the average
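
As a sketch, the window size and smoothing model are the main knobs; everything below is illustrative:

[source,js]
--------------------------------------------------
# Illustrative sketch only; a 5-bucket simple moving average over a sum metric.
curl -XGET 'localhost:9200/sales/_search' -d '{
  "size": 0,
  "aggs": {
    "sales_per_day": {
      "date_histogram": { "field": "date", "interval": "day" },
      "aggs": {
        "the_sum": { "sum": { "field": "price" } },
        "the_movavg": {
          "moving_avg": { "buckets_path": "the_sum", "window": 5, "model": "simple" }
        }
      }
    }
  }
}'
--------------------------------------------------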

+ 0 - 2
docs/reference/aggregations/pipeline/percentiles-bucket-aggregation.asciidoc

@@ -1,8 +1,6 @@
 [[search-aggregations-pipeline-percentiles-bucket-aggregation]]
 === Percentiles Bucket Aggregation
 
-coming[2.1.0]
-
 experimental[]
 
 A sibling pipeline aggregation which calculates percentiles across all bucket of a specified metric in a sibling aggregation.
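
A minimal sketch; the optional `percents` array picks which percentiles to report (names are illustrative):

[source,js]
--------------------------------------------------
# Illustrative sketch only.
curl -XGET 'localhost:9200/sales/_search' -d '{
  "size": 0,
  "aggs": {
    "sales_per_month": {
      "date_histogram": { "field": "date", "interval": "month" },
      "aggs": { "sales": { "sum": { "field": "price" } } }
    },
    "sales_percentiles": {
      "percentiles_bucket": {
        "buckets_path": "sales_per_month>sales",
        "percents": [ 25.0, 50.0, 75.0 ]
      }
    }
  }
}'
--------------------------------------------------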

+ 0 - 2
docs/reference/aggregations/pipeline/serial-diff-aggregation.asciidoc

@@ -1,8 +1,6 @@
 [[search-aggregations-pipeline-serialdiff-aggregation]]
 === Serial Differencing Aggregation
 
-coming[2.0.0-beta1]
-
 experimental[]
 
 Serial differencing is a technique where values in a time series are subtracted from itself at

+ 0 - 2
docs/reference/aggregations/pipeline/stats-bucket-aggregation.asciidoc

@@ -1,8 +1,6 @@
 [[search-aggregations-pipeline-stats-bucket-aggregation]]
 === Stats Bucket Aggregation
 
-coming[2.1.0]
-
 experimental[]
 
 A sibling pipeline aggregation which calculates a variety of stats across all bucket of a specified metric in a sibling aggregation.

+ 0 - 2
docs/reference/aggregations/pipeline/sum-bucket-aggregation.asciidoc

@@ -1,8 +1,6 @@
 [[search-aggregations-pipeline-sum-bucket-aggregation]]
 === Sum Bucket Aggregation
 
-coming[2.0.0-beta1]
-
 experimental[]
 
 A sibling pipeline aggregation which calculates the sum across all bucket of a specified metric in a sibling aggregation. 

+ 0 - 4
docs/reference/docs/bulk.asciidoc

@@ -131,8 +131,6 @@ operation based on the `_parent` / `_routing` mapping.
 [[bulk-timestamp]]
 === Timestamp
 
-deprecated[2.0.0,The `_timestamp` field is deprecated.  Instead, use a normal <<date,`date`>> field and set its value explicitly]
-
 Each bulk item can include the timestamp value using the
 `_timestamp`/`timestamp` field. It automatically follows the behavior of
 the index operation based on the `_timestamp` mapping.
@@ -141,8 +139,6 @@ the index operation based on the `_timestamp` mapping.
 [[bulk-ttl]]
 === TTL
 
-deprecated[2.0.0,The current `_ttl` implementation is deprecated and will be replaced with a different implementation in a future version]
-
 Each bulk item can include the ttl value using the `_ttl`/`ttl` field.
 It automatically follows the behavior of the index operation based on
 the `_ttl` mapping.
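
For reference, a bulk action line carrying these fields looks roughly like this; index, type, id, and values are illustrative:

[source,js]
--------------------------------------------------
# Illustrative sketch only; index, type, id, and values are assumptions.
curl -XPOST 'localhost:9200/_bulk' -d '
{ "index" : { "_index" : "test", "_type" : "type1", "_id" : "1", "_timestamp" : "2015-11-15T14:12:12", "_ttl" : "1d" } }
{ "field1" : "value1" }
'
--------------------------------------------------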

+ 0 - 4
docs/reference/docs/index_.asciidoc

@@ -257,8 +257,6 @@ specified using the `routing` parameter.
 [[index-timestamp]]
 === Timestamp
 
-deprecated[2.0.0,The `_timestamp` field is deprecated.  Instead, use a normal <<date,`date`>> field and set its value explicitly]
-
 A document can be indexed with a `timestamp` associated with it. The
 `timestamp` value of a document can be set using the `timestamp`
 parameter. For example:
@@ -281,8 +279,6 @@ page>>.
 [[index-ttl]]
 === TTL
 
-deprecated[2.0.0,The current `_ttl` implementation is deprecated and will be replaced with a different implementation in a future version]
-
 
 A document can be indexed with a `ttl` (time to live) associated with
 it. Expired documents will be expunged automatically. The expiration
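
Roughly, both the timestamp and the ttl can be supplied as URL parameters on the index request; document ids and values below are illustrative:

[source,js]
--------------------------------------------------
# Illustrative sketch only; note the URL-encoded colons in the timestamp value.
curl -XPUT 'localhost:9200/test/type1/1?timestamp=2015-11-15T14%3A12%3A12' -d '{ "user": "kimchy" }'
curl -XPUT 'localhost:9200/test/type1/2?ttl=1d' -d '{ "user": "kimchy" }'
--------------------------------------------------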

+ 0 - 4
docs/reference/docs/termvectors.asciidoc

@@ -81,8 +81,6 @@ omit :
 [float]
 ==== Distributed frequencies
 
-coming[2.0.0-beta1]
-
 Setting `dfs` to `true` (default is `false`) will return the term statistics
 or the field statistics of the entire index, and not just at the shard. Use it
 with caution as distributed frequencies can have a serious performance impact.
@@ -90,8 +88,6 @@ with caution as distributed frequencies can have a serious performance impact.
 [float]
 ==== Terms Filtering
 
-coming[2.0.0-beta1]
-
 With the parameter `filter`, the terms returned could also be filtered based
 on their tf-idf scores. This could be useful in order find out a good
 characteristic vector of a document. This feature works in a similar manner to
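
A hedged sketch combining both options on a term vectors request (index, type, id, and field names are assumptions):

[source,js]
--------------------------------------------------
# Illustrative sketch only.
curl -XGET 'localhost:9200/test/type1/1/_termvectors' -d '{
  "fields": ["text"],
  "term_statistics": true,
  "dfs": true,
  "filter": {
    "max_num_terms": 3,
    "min_term_freq": 1,
    "min_doc_freq": 1
  }
}'
--------------------------------------------------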

+ 2 - 2
docs/reference/index.asciidoc

@@ -1,8 +1,8 @@
 [[elasticsearch-reference]]
 = Elasticsearch Reference
 
-:version:   2.0.0-beta1
-:branch:    2.0
+:version:   3.0.0-beta1
+:branch:    3.0
 :jdk:       1.8.0_25
 :defguide:  https://www.elastic.co/guide/en/elasticsearch/guide/current
 :plugins:   https://www.elastic.co/guide/en/elasticsearch/plugins/master

+ 0 - 10
docs/reference/indices/analyze.asciidoc

@@ -16,8 +16,6 @@ curl -XGET 'localhost:9200/_analyze' -d '
 }'
 --------------------------------------------------
 
-coming[2.0.0-beta1, body based parameters were added in 2.0.0]
-
 If text parameter is provided as array of strings, it is analyzed as a multi-valued field.
 
 [source,js]
@@ -29,8 +27,6 @@ curl -XGET 'localhost:9200/_analyze' -d '
 }'
 --------------------------------------------------
 
-coming[2.0.0-beta1, body based parameters were added in 2.0.0]
-
 Or by building a custom transient analyzer out of tokenizers,
 token filters and char filters. Token filters can use the shorter 'filters'
 parameter name:
@@ -53,8 +49,6 @@ curl -XGET 'localhost:9200/_analyze' -d '
 }'
 --------------------------------------------------
 
-coming[2.0.0-beta1, body based parameters were added in 2.0.0]
-
 It can also run against a specific index:
 
 [source,js]
@@ -78,8 +72,6 @@ curl -XGET 'localhost:9200/test/_analyze' -d '
 }'
 --------------------------------------------------
 
-coming[2.0.0-beta1, body based parameters were added in 2.0.0]
-
 Also, the analyzer can be derived based on a field mapping, for example:
 
 [source,js]
@@ -91,8 +83,6 @@ curl -XGET 'localhost:9200/test/_analyze' -d '
 }'
 --------------------------------------------------
 
-coming[2.0.0-beta1, body based parameters were added in 2.0.0]
-
 Will cause the analysis to happen based on the analyzer configured in the
 mapping for `obj1.field1` (and if not, the default index analyzer).
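
For example, a field-based request of that shape might look like this (index name and text are illustrative):

[source,js]
--------------------------------------------------
# Illustrative sketch only; analyzes the text with the analyzer mapped for obj1.field1.
curl -XGET 'localhost:9200/test/_analyze' -d '{
  "field": "obj1.field1",
  "text": "this is a test"
}'
--------------------------------------------------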
 

+ 0 - 2
docs/reference/mapping/fields/parent-field.asciidoc

@@ -1,8 +1,6 @@
 [[mapping-parent-field]]
 === `_parent` field
 
-added[2.0.0-beta1,The parent-child implementation has been completely rewritten. It is advisable to reindex any 1.x indices which use parent-child to take advantage of the new optimizations]
-
 A parent-child relationship can be established between documents in the same
 index by making one mapping type the parent of another:
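
A minimal sketch of such a mapping (index and type names are illustrative):

[source,js]
--------------------------------------------------
# Illustrative sketch only; my_child documents must then be indexed with a parent id.
curl -XPUT 'localhost:9200/my_index' -d '{
  "mappings": {
    "my_parent": {},
    "my_child": {
      "_parent": { "type": "my_parent" }
    }
  }
}'
--------------------------------------------------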
 

+ 0 - 2
docs/reference/mapping/fields/timestamp-field.asciidoc

@@ -1,8 +1,6 @@
 [[mapping-timestamp-field]]
 === `_timestamp` field
 
-deprecated[2.0.0,The `_timestamp` field is deprecated.  Instead, use a normal <<date,`date`>> field and set its value explicitly]
-
 The `_timestamp` field, when enabled, allows a timestamp to be indexed and
 stored with a document. The timestamp may be specified manually, generated
 automatically, or set to a default value:
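
A minimal sketch of enabling it in a mapping (index and type names are illustrative):

[source,js]
--------------------------------------------------
# Illustrative sketch only.
curl -XPUT 'localhost:9200/my_index' -d '{
  "mappings": {
    "my_type": {
      "_timestamp": { "enabled": true }
    }
  }
}'
--------------------------------------------------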

+ 0 - 2
docs/reference/mapping/fields/ttl-field.asciidoc

@@ -1,8 +1,6 @@
 [[mapping-ttl-field]]
 === `_ttl` field
 
-deprecated[2.0.0,The current `_ttl` implementation is deprecated and will be replaced with a different implementation in a future version]
-
 Some types of documents, such as session data or special offers, come with an
 expiration date. The `_ttl` field allows you to specify the minimum time a
 document should live, after which time the document is deleted automatically.
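
A minimal sketch with a default expiry (index, type, and the 5-minute default are illustrative):

[source,js]
--------------------------------------------------
# Illustrative sketch only; documents expire 5 minutes after indexing unless a ttl is given.
curl -XPUT 'localhost:9200/my_index' -d '{
  "mappings": {
    "my_type": {
      "_ttl": { "enabled": true, "default": "5m" }
    }
  }
}'
--------------------------------------------------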

+ 2 - 2
docs/reference/modules/snapshots.asciidoc

@@ -121,7 +121,7 @@ The following settings are supported:
  using size value notation, i.e. 1g, 10m, 5k. Defaults to `null` (unlimited chunk size).
 `max_restore_bytes_per_sec`:: Throttles per node restore rate. Defaults to `40mb` per second.
 `max_snapshot_bytes_per_sec`:: Throttles per node snapshot rate. Defaults to `40mb` per second.
-`readonly`:: Makes repository read-only. coming[2.1.0]  Defaults to `false`.
+`readonly`:: Makes repository read-only.  Defaults to `false`.
 
 [float]
 ===== Read-only URL Repository
@@ -259,7 +259,7 @@ GET /_snapshot/my_backup/_all
 -----------------------------------
 // AUTOSENSE
 
-coming[2.0.0-beta1] A currently running snapshot can be retrieved using the following command:
+A currently running snapshot can be retrieved using the following command:
 
 [source,sh]
 -----------------------------------
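
For reference, the `readonly` setting touched above is passed when registering the repository; the repository name and path below are illustrative:

[source,js]
--------------------------------------------------
# Illustrative sketch only; a shared filesystem repository mounted read-only.
curl -XPUT 'localhost:9200/_snapshot/my_readonly_backup' -d '{
  "type": "fs",
  "settings": {
    "location": "/mount/backups/my_backup",
    "readonly": true
  }
}'
--------------------------------------------------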

+ 4 - 4
docs/reference/query-dsl/mlt-query.asciidoc

@@ -149,7 +149,7 @@ input, the other one for term selection and for query formation.
 ==== Document Input Parameters
 
 [horizontal]
-`like`:: coming[2.0.0-beta1]
+`like`::
 The only *required* parameter of the MLT query is `like` and follows a
 versatile syntax, in which the user can specify free form text and/or a single
 or multiple documents (see examples above). The syntax to specify documents is
@@ -162,7 +162,7 @@ follows a similar syntax to the `per_field_analyzer` parameter of the
 Additionally, to provide documents not necessarily present in the index,
 <<docs-termvectors-artificial-doc,artificial documents>> are also supported.
 
-`unlike`:: coming[2.0.0-beta1] 
+`unlike`:: 
 The `unlike` parameter is used in conjunction with `like` in order not to
 select terms found in a chosen set of documents. In other words, we could ask
 for documents `like: "Apple"`, but `unlike: "cake crumble tree"`. The syntax
@@ -172,10 +172,10 @@ is the same as `like`.
 A list of fields to fetch and analyze the text from. Defaults to the `_all`
 field for free text and to all possible fields for document inputs.
 
-`like_text`:: deprecated[2.0.0-beta1,Replaced by `like`]
+`like_text`::
 The text to find documents like it.
 
-`ids` or `docs`:: deprecated[2.0.0-beta1,Replaced by `like`]
+`ids` or `docs`::
 A list of documents following the same syntax as the <<docs-multi-get,Multi GET API>>.
 
 [float]
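
Putting `like`, `unlike`, and `fields` together, a sketch of such a query (field names and text are illustrative):

[source,js]
--------------------------------------------------
# Illustrative sketch only.
curl -XGET 'localhost:9200/_search' -d '{
  "query": {
    "more_like_this": {
      "fields": ["title", "description"],
      "like": "Once upon a time",
      "unlike": "fairy tale",
      "min_term_freq": 1,
      "max_query_terms": 12
    }
  }
}'
--------------------------------------------------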

+ 0 - 6
docs/reference/search/request/scroll.asciidoc

@@ -63,8 +63,6 @@ curl -XGET <1> 'localhost:9200/_search/scroll' <2> -d'
 '
 --------------------------------------------------
 
-coming[2.0.0-beta1, body based parameters were added in 2.0.0]
-
 <1> `GET` or `POST` can be used.
 <2> The URL should not include the `index` or `type` name -- these
     are specified in the original `search` request instead.
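
A sketch of the body-based form of that request; the `scroll_id` value is a placeholder returned by a prior search:

[source,js]
--------------------------------------------------
# Illustrative sketch only; the scroll_id below is a truncated placeholder.
curl -XGET 'localhost:9200/_search/scroll' -d '{
  "scroll": "1m",
  "scroll_id": "c2Nhbjs2OzM0NDg1ODpzRlBLc0FXNlNyNm5JWUc1..."
}'
--------------------------------------------------
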
@@ -151,8 +149,6 @@ curl -XDELETE localhost:9200/_search/scroll -d '
 }'
 ---------------------------------------
 
-coming[2.0.0-beta1, Body based parameters were added in 2.0.0]
-
 Multiple scroll IDs can be passed as array:
 
 [source,js]
@@ -163,8 +159,6 @@ curl -XDELETE localhost:9200/_search/scroll -d '
 }'
 ---------------------------------------
 
-coming[2.0.0-beta1, Body based parameters were added in 2.0.0]
-
 All search contexts can be cleared with the `_all` parameter:
 
 [source,js]