
[DOCS] Add collapsible sections to 8.0 breaking changes [Part 4] (#56356)

James Rodewig 5 years ago
parent
commit
303bed81ad

+ 10 - 7
docs/reference/migration/migrate_8_0/java.asciidoc

@@ -9,9 +9,10 @@
 
 // end::notable-breaking-changes[]
 
-[float]
-==== Changes to Fuzziness
-
+.Changes to `Fuzziness`.
+[%collapsible]
+====
+*Details* +
 To create `Fuzziness` instances, use the `fromString` and `fromEdits` method
 instead of the `build` method that used to accept both Strings and numeric
 values. Several fuzziness setters on query builders (e.g.
@@ -25,11 +26,13 @@ while silently truncating them to one of the three allowed edit distances 0, 1
 or 2. This leniency is now removed and the class will throw errors when trying
 to construct an instance with another value (e.g. floats like 1.3 used to get
 accepted but truncated to 1). You should use one of the allowed values.
+====
 
-
-[float]
-==== Changes to Repository
-
+.Changes to `Repository`.
+[%collapsible]
+====
+*Details* +
 Repository has no dependency on IndexShard anymore. The contract of restoreShard
 and snapshotShard has been reduced to Store and MappingService in order to improve
 testability.
+====

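A minimal Java sketch of the `Fuzziness` construction described in the hunk above, using the `fromEdits` and `fromString` factory methods named in the updated docs. The import paths, field name, and query text are assumptions for illustration, not part of the commit:

[source,java]
----
import org.elasticsearch.common.unit.Fuzziness;
import org.elasticsearch.index.query.MatchQueryBuilder;
import org.elasticsearch.index.query.QueryBuilders;

public class FuzzinessExample {
    public static void main(String[] args) {
        // Only the edit distances 0, 1, and 2 are accepted; values such as 1.3
        // are no longer silently truncated.
        Fuzziness oneEdit = Fuzziness.fromEdits(1);

        // Parse a string value such as "AUTO" instead of calling the removed build(Object).
        Fuzziness auto = Fuzziness.fromString("AUTO");

        // Query-builder fuzziness setters now take a Fuzziness instance rather than an Object.
        // The query text is deliberately misspelled to show why fuzziness is set.
        MatchQueryBuilder query = QueryBuilders.matchQuery("title", "quick brwon fox")
            .fuzziness(oneEdit);
        System.out.println(query + " / " + auto);
    }
}
----
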
+ 13 - 5
docs/reference/migration/migrate_8_0/network.asciidoc

@@ -8,9 +8,17 @@
 
 // end::notable-breaking-changes[]
 
-[float]
-==== Removal of old network settings
-
+.The `network.tcp.connect_timeout` setting has been removed.
+[%collapsible]
+====
+*Details* +
 The `network.tcp.connect_timeout` setting was deprecated in 7.x and has been removed in 8.0. This setting
-was a fallback setting for `transport.connect_timeout`. To change the default connection timeout for client
-connections `transport.connect_timeout` should be modified.
+was a fallback setting for `transport.connect_timeout`.
+
+*Impact* +
+Use the `transport.connect_timeout` setting to change the default connection
+timeout for client connections. Discontinue use of the
+`network.tcp.connect_timeout` setting. Specifying the
+`network.tcp.connect_timeout` setting in `elasticsearch.yml` will result in an
+error on startup.
+====

+ 30 - 13
docs/reference/migration/migrate_8_0/node.asciidoc

@@ -8,26 +8,36 @@
 
 // end::notable-breaking-changes[]
 
-[float]
-==== Removal of `node.max_local_storage_nodes` setting
-
+.The `node.max_local_storage_nodes` setting has been removed.
+[%collapsible]
+====
+*Details* +
 The `node.max_local_storage_nodes` setting was deprecated in 7.x and
 has been removed in 8.0. Nodes should be run on separate data paths
 to ensure that each node is consistently assigned to the same data path.
 
-[float]
-==== Change of data folder layout
+*Impact* +
+Discontinue use of the `node.max_local_storage_nodes` setting. Specifying this
+setting in `elasticsearch.yml` will result in an error on startup.
+====
 
+.The layout of the data folder has changed.
+[%collapsible]
+====
+*Details* +
 Each node's data is now stored directly in the data directory set by the
 `path.data` setting, rather than in `${path.data}/nodes/0`, because the removal
 of the `node.max_local_storage_nodes` setting means that nodes may no longer
-share a data path. At startup, Elasticsearch will automatically migrate the data
-path to the new layout. This automatic migration will not proceed if the data
-path contains data for more than one node. You should move to a configuration in
-which each node has its own data path before upgrading.
+share a data path. 
+
+*Impact* +
+At startup, {es} will automatically migrate the data path to the new layout.
+This automatic migration will not proceed if the data path contains data for
+more than one node. You should move to a configuration in which each node has
+its own data path before upgrading.
 
 If you try to upgrade a configuration in which there is data for more than one
-node in a data path then the automatic migration will fail and Elasticsearch
+node in a data path then the automatic migration will fail and {es}
 will refuse to start. To resolve this you will need to perform the migration
 manually. The data for the extra nodes are stored in folders named
 `${path.data}/nodes/1`, `${path.data}/nodes/2` and so on, and you should move
@@ -36,11 +46,18 @@ corresponding node to use this location for its data path. If your nodes each
 have more than one data path in their `path.data` settings then you should move
 all the corresponding subfolders in parallel. Each node uses the same subfolder
 (e.g. `nodes/2`) across all its data paths.
+====
 
-[float]
-==== Rejection of ancient closed indices
-
+.Closed indices created in {es} 6.x and earlier versions are not supported.
+[%collapsible]
+====
+*Details* +
 In earlier versions a node would start up even if it had data from indices
 created in a version before the previous major version, as long as those
 indices were closed. {es} now ensures that it is compatible with every index,
 open or closed, at startup time.
+
+*Impact* +
+Reindex closed indices created in {es} 6.x or before with {es} 7.x if they need
+to be carried forward to {es} 8.x.
+====

+ 10 - 4
docs/reference/migration/migrate_8_0/packaging.asciidoc

@@ -3,9 +3,15 @@
 === Packaging changes
 
 //tag::notable-breaking-changes[]
-[float]
-==== Java 11 is required
-
-Java 11 or higher is now required to run Elasticsearch and any of its command
+.Java 11 is required.
+[%collapsible]
+====
+*Details* +
+Java 11 or higher is now required to run {es} and any of its command
 line tools.
+
+*Impact* +
+Use Java 11 or higher. Attempts to run {es} 8.0 using earlier Java versions will
+fail.
+====
 //end::notable-breaking-changes[]

+ 28 - 9
docs/reference/migration/migrate_8_0/reindex.asciidoc

@@ -8,21 +8,35 @@
 //tag::notable-breaking-changes[]
 //end::notable-breaking-changes[]
 
-Reindex from remote would previously allow URL encoded index-names and not
+.Reindex from remote now re-encodes URL-encoded index names.
+[%collapsible]
+====
+*Details* +
+Reindex from remote would previously allow URL-encoded index names and not
 re-encode them when generating the search request for the remote host. This
-leniency has been removed such that all index-names are correctly encoded when
+leniency has been removed such that all index names are correctly encoded when
 reindex generates remote search requests.
 
-Instead, please specify the index-name without any encoding.
-
-[float]
-==== Removal of types
+*Impact* +
+Specify unencoded index names for reindex from remote requests.
+====
 
+.Reindex-related REST API endpoints containing mapping types have been removed.
+[%collapsible]
+====
+*Details* +
 The `/{index}/{type}/_delete_by_query` and `/{index}/{type}/_update_by_query` REST endpoints have been removed in favour of `/{index}/_delete_by_query` and `/{index}/_update_by_query`, since indexes no longer contain types, these typed endpoints are obsolete.
 
-[float]
-==== Removal of size parameter
+*Impact* +
+Use the replacement REST API endpoints. Requests submitted to API endpoints
+that contain a mapping type will return an error.
+====
 
+
+.In the reindex, delete by query, and update by query APIs, the `size` parameter has been renamed.
+[%collapsible]
+====
+*Details* +
 Previously, a `_reindex` request had two different size specifications in the body:
 
 - Outer level, determining the maximum number of documents to process
@@ -32,4 +46,9 @@ The outer level `size` parameter has now been renamed to `max_docs` to
 avoid confusion and clarify its semantics.
 
 Similarly, the `size` parameter has been renamed to `max_docs` for
-`_delete_by_query` and `_update_by_query` to keep the 3 interfaces consistent.
+`_delete_by_query` and `_update_by_query` to keep the 3 interfaces consistent.
+
+*Impact* +
+Use the replacement parameters. Requests containing the `size` parameter will
+return an error.
+====

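To illustrate the reindex-related changes above, a minimal sketch using the low-level Java REST client: it targets the typeless `_update_by_query` endpoint and sends `max_docs` instead of the removed outer-level `size`. The host, index name, and query are placeholders, not part of the commit:

[source,java]
----
import org.apache.http.HttpHost;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

public class MaxDocsExample {
    public static void main(String[] args) throws Exception {
        try (RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build()) {
            // Typeless endpoint; the /{index}/{type}/_update_by_query form no longer exists.
            Request request = new Request("POST", "/my-index/_update_by_query");
            // "max_docs" replaces the old outer-level "size" parameter.
            request.setJsonEntity("{\"max_docs\": 1000, \"query\": {\"match_all\": {}}}");
            Response response = client.performRequest(request);
            System.out.println(response.getStatusLine());
        }
    }
}
----
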
+ 12 - 4
docs/reference/migration/migrate_8_0/rollup.asciidoc

@@ -9,12 +9,20 @@
 
 // end::notable-breaking-changes[]
 
-[float]
-==== StartRollupJob endpoint returns success if job already started
-
+.The StartRollupJob endpoint now returns a success status if a job has already started.
+[%collapsible]
+====
+*Details* +
 Previously, attempting to start an already-started rollup job would
 result in a `500 InternalServerError: Cannot start task for Rollup Job
 [job] because state was [STARTED]` exception.
 
 Now, attempting to start a job that is already started will just
-return a successful `200 OK: started` response.
+return a successful `200 OK: started` response.
+
+*Impact* +
+Update your workflow and applications to assume that a 200 status in response to
+attempting to start a rollup job means the job is in an actively started state.
+The request itself may have started the job, or it was previously running and so
+the request had no effect.
+====
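
For the rollup change above, a minimal sketch with the low-level Java REST client: starting a job that is already running now comes back as a plain `200 OK` response instead of surfacing a 500 error. The host and job name are placeholders:

[source,java]
----
import org.apache.http.HttpHost;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

public class StartRollupJobExample {
    public static void main(String[] args) throws Exception {
        try (RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build()) {
            Request start = new Request("POST", "/_rollup/job/my_rollup_job/_start");
            // A 200 status now only means the job is in the started state; it may have
            // been started by this request or have been running already.
            Response response = client.performRequest(start);
            System.out.println(response.getStatusLine().getStatusCode());
        }
    }
}
----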