
[DOCS] Remove unused upgrade doc files (#83617)

* Removes several unused asciidoc files for upgrade docs
* Fixes a redirect for "Rolling upgrade"
* Adds redirects for "Reindex before upgrading," "Reindex in place," and "Reindex from a remote cluster"

Relates to https://github.com/elastic/elasticsearch/pull/83489 and
https://github.com/elastic/docs/pull/2312
James Rodewig, 3 years ago
parent commit 2a1d666024

+ 32 - 3
docs/reference/redirects.asciidoc

@@ -1808,13 +1808,42 @@ See the <<sql-search-api-request-body,request body parameters>> for the
 
 When upgrading to {es} 8.0 and later, you must first upgrade to {prev-major-last}
 even if you opt to perform a full-cluster restart instead of a rolling upgrade.
-For more information about upgrading, see 
+For more information about upgrading, refer to 
 {stack-ref}/upgrading-elastic-stack.html[Upgrading to Elastic {version}].
 
-role="exclude",id="rolling-upgrade"]
+[role="exclude",id="rolling-upgrades"]
 === Rolling upgrade
 
 When upgrading to {es} 8.0 and later, you must first upgrade to {prev-major-last}
 whether you opt to perform a rolling upgrade (upgrade one node at a time) or a full-cluster restart upgrade.
-For more information about upgrading, see 
+For more information about upgrading, refer to 
 {stack-ref}/upgrading-elastic-stack.html[Upgrading to Elastic {version}].
+
+[role="exclude",id="reindex-upgrade"]
+=== Reindex before upgrading
+
+//tag::upgrade-reindex[]
+Before upgrading to {es} 8.0 and later, you must reindex any indices created in
+a 6.x version. We recommend using the **Upgrade Assistant** to guide you
+through this process.
+
+For more information about upgrading, refer to 
+{stack-ref}/upgrading-elastic-stack.html[Upgrading to Elastic {version}].
+//end::upgrade-reindex[]
+
+For more information about reindexing, refer to <<docs-reindex>>.
+
+[role="exclude",id="reindex-upgrade-inplace"]
+=== Reindex in place
+
+include::redirects.asciidoc[tag=upgrade-reindex]
+
+For more information about reindexing, refer to <<docs-reindex>>.
+
+[role="exclude",id="reindex-upgrade-remote"]
+=== Reindex from a remote cluster
+
+include::redirects.asciidoc[tag=upgrade-reindex]
+
+For more information about reindexing from a remote cluster, refer to
+<<reindex-from-remote>>.

+ 0 - 40
docs/reference/upgrade/close-ml.asciidoc

@@ -1,40 +0,0 @@
-
-////////////
-Take us out of upgrade mode after running any snippets on this page.
-
-[source,console]
---------------------------------------------------
-POST _ml/set_upgrade_mode?enabled=false
---------------------------------------------------
-// TEARDOWN
-////////////
-
-If your {ml} indices were created before {prev-major-version}, you must
-<<reindex-upgrade,reindex the indices>>.
-
-If your {ml} indices were created in {prev-major-version}, you can:
-
-* Leave your {ml} jobs running during the upgrade. When you shut down a
-{ml} node, its jobs automatically move to another node and restore the model
-states. This option enables your jobs to continue running during the upgrade but
-it puts increased load on the cluster.
-
-* Temporarily halt the tasks associated with your {ml} jobs and {dfeeds} and
-prevent new jobs from opening by using the
-<<ml-set-upgrade-mode,set upgrade mode API>>:
-+
-[source,console]
---------------------------------------------------
-POST _ml/set_upgrade_mode?enabled=true
---------------------------------------------------
-+
-When you disable upgrade mode, the jobs resume using the last model
-state that was automatically saved. This option avoids the overhead of managing
-active jobs during the upgrade and is faster than explicitly stopping {dfeeds}
-and closing jobs.
-
-* {ml-docs}/stopping-ml.html[Stop all {dfeeds} and close all jobs]. This option
-saves the model state at the time of closure. When you reopen the jobs after the
-upgrade, they use the exact same model. However, saving the latest model state
-takes longer than using upgrade mode, especially if you have a lot of jobs or
-jobs with large model states.
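As a rough illustration of the third option, a sketch of stopping all {dfeeds} and
closing all jobs in bulk (this assumes every {dfeed} and job can be handled together;
stop the {dfeeds} before closing the jobs):

[source,console]
--------------------------------------------------
POST _ml/datafeeds/_all/_stop

POST _ml/anomaly_detectors/_all/_close
--------------------------------------------------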

+ 0 - 154
docs/reference/upgrade/cluster_restart.asciidoc

@@ -1,154 +0,0 @@
-[[restart-upgrade]]
-== Full cluster restart upgrade
-
-To upgrade directly to {es} {version} from versions 6.0-6.6, you must shut down
-all nodes in the cluster, upgrade each node to {version}, and restart the cluster.
-
-NOTE: If you are running a version prior to 6.0,
-{stack-ref-68}/upgrading-elastic-stack.html[upgrade to 6.8]
-and reindex your old indices or bring up a new {version} cluster and
-<<reindex-upgrade-remote, reindex from remote>>.
-
-include::preparing_to_upgrade.asciidoc[]
-
-[discrete]
-=== Upgrading your cluster
-
-To perform a full cluster restart upgrade to {version}:
-
-. *Disable shard allocation.*
-+
---
-include::disable-shard-alloc.asciidoc[]
---
-
-. *Stop indexing and perform a flush.*
-+
---
-Performing a <<indices-flush, flush>> speeds up shard recovery.
-
-[source,console]
---------------------------------------------------
-POST /_flush
---------------------------------------------------
---
-
-. *Temporarily stop the tasks associated with active {ml} jobs and {dfeeds}.* (Optional)
-+
---
-include::close-ml.asciidoc[]
---
-
-. *Shut down all nodes.*
-+
---
-include::shut-down-node.asciidoc[]
---
-
-. *Upgrade all nodes.*
-+
---
-include::remove-xpack.asciidoc[]
---
-+
---
-include::upgrade-node.asciidoc[]
---
-+
---
-include::set-paths-tip.asciidoc[]
---
-
-If upgrading from a 6.x cluster, you must also
-<<modules-discovery-bootstrap-cluster,configure cluster bootstrapping>> by
-setting the <<initial_master_nodes,`cluster.initial_master_nodes` setting>> on
-the master-eligible nodes.
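For illustration only (the node names are placeholders), the bootstrap setting in
`elasticsearch.yml` on each master-eligible node might look like this:

[source,yaml]
--------------------------------------------------
cluster.initial_master_nodes:
  - master-node-1
  - master-node-2
  - master-node-3
--------------------------------------------------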
-
-. *Upgrade any plugins.*
-+
-Use the `elasticsearch-plugin` script to install the upgraded version of each
-installed {es} plugin. All plugins must be upgraded when you upgrade
-a node.
-
-. If you use {es} {security-features} to define realms, verify that your realm
-settings are up-to-date. The format of realm settings changed in version 7.0; in
-particular, the placement of the realm type changed. See
-<<realm-settings,Realm settings>>.
-
-. *Start each upgraded node.*
-+
---
-If you have dedicated master nodes, start them first and wait for them to
-form a cluster and elect a master before proceeding with your data nodes.
-You can check progress by looking at the logs.
-
-As soon as enough master-eligible nodes have discovered each other, they form a
-cluster and elect a master. At that point, you can use
-<<cat-health,`_cat/health`>> and <<cat-nodes,`_cat/nodes`>> to monitor nodes
-joining the cluster:
-
-[source,console]
---------------------------------------------------
-GET _cat/health
-
-GET _cat/nodes
---------------------------------------------------
-
-The `status` column returned by `_cat/health` shows the health of the cluster:
-`red`, `yellow`, or `green`.
---
-
-. *Wait for all nodes to join the cluster and report a status of yellow.*
-+
---
-When a node joins the cluster, it begins to recover any primary shards that
-are stored locally. The <<cat-health,`_cat/health`>> API initially reports
-a `status` of `red`, indicating that not all primary shards have been allocated.
-
-Once a node recovers its local shards, the cluster `status` switches to `yellow`,
-indicating that all primary shards have been recovered, but not all replica
-shards are allocated. This is to be expected because you have not yet
-reenabled allocation. Delaying the allocation of replicas until all nodes
-are `yellow` allows the master to allocate replicas to nodes that
-already have local shard copies.
---
-
-. *Reenable allocation.*
-+
---
-When all nodes have joined the cluster and recovered their primary shards,
-reenable allocation by restoring `cluster.routing.allocation.enable` to its
-default:
-
-[source,console]
-------------------------------------------------------
-PUT _cluster/settings
-{
-  "persistent": {
-    "cluster.routing.allocation.enable": null
-  }
-}
-------------------------------------------------------
-
-Once allocation is reenabled, the cluster starts allocating replica shards to
-the data nodes. At this point it is safe to resume indexing and searching,
-but your cluster will recover more quickly if you can wait until all primary
-and replica shards have been successfully allocated and the status of all nodes
-is `green`.
-
-You can monitor progress with the <<cat-health,`_cat/health`>> and
-<<cat-recovery,`_cat/recovery`>> APIs:
-
-[source,console]
---------------------------------------------------
-GET _cat/health
-
-GET _cat/recovery
---------------------------------------------------
---
-
-. *Restart machine learning jobs.*
-+
---
-include::open-ml.asciidoc[]
---

+ 0 - 12
docs/reference/upgrade/open-ml.asciidoc

@@ -1,12 +0,0 @@
-If you temporarily halted the tasks associated with your {ml} jobs,
-use the <<ml-set-upgrade-mode,set upgrade mode API>> to return them to active
-states:
-
-[source,console]
---------------------------------------------------
-POST _ml/set_upgrade_mode?enabled=false
---------------------------------------------------
-
-If you closed all {ml} jobs before the upgrade, open the jobs and start the
-datafeeds from {kib} or with the <<ml-open-job,open jobs>> and
-<<ml-start-datafeed,start datafeed>> APIs.
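For example, a minimal sketch using those APIs (the job and {dfeed} names are
placeholders):

[source,console]
--------------------------------------------------
POST _ml/anomaly_detectors/my-job/_open

POST _ml/datafeeds/datafeed-my-job/_start
--------------------------------------------------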

+ 0 - 28
docs/reference/upgrade/preparing_to_upgrade.asciidoc

@@ -1,28 +0,0 @@
-[discrete]
-=== Preparing to upgrade
-
-It is important to prepare carefully before starting an upgrade. Once you have
-started to upgrade your cluster to version {version} you must complete the
-upgrade. As soon as the cluster contains nodes of version {version} it may make
-changes to its internal state that cannot be reverted. If you cannot complete
-the upgrade then you should discard the partially-upgraded cluster, deploy an
-empty cluster of the version before the upgrade, and restore its contents from
-a snapshot.
-
-Before you start to upgrade your cluster to version {version} you should do the
-following.
-
-. Check the <<deprecation-logging, deprecation log>> to see if you are using any
-deprecated features and update your code accordingly.
-
-. Review the <<breaking-changes,breaking changes>> and make any necessary
-changes to your code and configuration for version {version}.
-
-. If you use any plugins, make sure there is a version of each plugin that is
-compatible with {es} version {version}.
-
-. Test the upgrade in an isolated environment before upgrading your production
-cluster.
-
-. <<modules-snapshots,Back up your data by taking a snapshot!>>
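For example, a minimal snapshot sketch, assuming a shared file system repository
(the repository name, type, and location are placeholders):

[source,console]
--------------------------------------------------
PUT _snapshot/my_repository
{
  "type": "fs",
  "settings": {
    "location": "my_backup_location"
  }
}

PUT _snapshot/my_repository/pre-upgrade-snapshot?wait_for_completion=true
--------------------------------------------------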
-

+ 0 - 205
docs/reference/upgrade/reindex_upgrade.asciidoc

@@ -1,205 +0,0 @@
-[[reindex-upgrade]]
-== Reindex before upgrading
-
-{es} can read indices created in the previous major version. If you
-have indices created in 5.x or before, you must reindex or delete them
-before upgrading to {version}. {es} nodes will fail to start if
-incompatible indices are present. Snapshots of 5.x or earlier indices cannot be
-restored to a 7.x cluster even if they were created by a 6.x cluster.
-Any index created in 6.x is compatible with 7.x and does not require a reindex.
-
-
-This restriction also applies to the internal indices that are used by
-{kib} and the {xpack} features. Therefore, before you can use {kib} and
-{xpack} features in {version}, you must ensure the internal indices have a
-compatible index structure.
-
-You have two options for reindexing old indices:
-
-* <<reindex-upgrade-inplace, Reindex in place>> on your 6.x cluster before upgrading.
-* Create a new {version} cluster and <<reindex-upgrade-remote, Reindex from remote>>.
-This enables you to reindex indices that reside on clusters running any version of {es}.
-
-.Upgrading time-based indices
-*******************************************
-
-If you use time-based indices, you likely won't need to carry
-pre-6.x indices forward to {version}. Data in time-based indices
-generally becomes less useful as time passes and is
-deleted as it ages past your retention period.
-
-Unless you have an unusually long retention period, you can just
-wait to upgrade to {version} until all of your pre-6.x indices have
-been deleted.
-
-*******************************************
-
-
-[[reindex-upgrade-inplace]]
-=== Reindex in place
-
-You can use the Upgrade Assistant in {kib} 6.8 to automatically reindex 5.x
-indices you need to carry forward to {version}.
-
-To manually reindex your old indices in place:
-
-. Create an index with 7.x compatible mappings.
-. Set the `refresh_interval` to `-1` and the `number_of_replicas` to `0` for
-  efficient reindexing.
-. Use the <<docs-reindex,`reindex` API>> to copy documents from the
-5.x index into the new index. You can use a script to perform any necessary
-modifications to the document data and metadata during reindexing.
-. Reset the `refresh_interval` and `number_of_replicas` to the values
-  used in the old index.
-. Wait for the index status to change to `green`.
-. In a single <<indices-aliases,aliases API>> request:
-.. Delete the old index.
-.. Add an alias with the old index name to the new index.
-.. Add any aliases that existed on the old index to the new index.
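A condensed console sketch of these steps (the index names are placeholders, the
new index's 7.x-compatible mappings are omitted, and the restored settings assume
the old index used the defaults):

[source,console]
--------------------------------------------------
PUT new-index
{
  "settings": {
    "index.refresh_interval": "-1",
    "index.number_of_replicas": 0
  }
}

POST _reindex
{
  "source": { "index": "old-index" },
  "dest": { "index": "new-index" }
}

PUT new-index/_settings
{
  "index.refresh_interval": "30s",
  "index.number_of_replicas": 1
}

POST _aliases
{
  "actions": [
    { "remove_index": { "index": "old-index" } },
    { "add": { "index": "new-index", "alias": "old-index" } }
  ]
}
--------------------------------------------------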
-
-ifdef::include-xpack[]
-[TIP]
-====
-If you use {ml-features} and your {ml} indices were created before
-{prev-major-version}, you must temporarily halt the tasks associated with your
-{ml} jobs and {dfeeds} and prevent new jobs from opening during the reindex. Use
-the <<ml-set-upgrade-mode,set upgrade mode API>> or
-{ml-docs}/stopping-ml.html[stop all {dfeeds} and close all {ml} jobs].
-
-If you use {es} {security-features}, before you reindex `.security*` internal
-indices it is a good idea to create a temporary superuser account in the `file`
-realm.
-
-. On a single node, add a temporary superuser account to the `file` realm. For
-example, run the <<users-command,elasticsearch-users useradd>> command:
-+
---
-[source,sh]
-----------------------------------------------------------
-bin/elasticsearch-users useradd <user_name> \
--p <password> -r superuser
-----------------------------------------------------------
---
-
-. Use these credentials when you reindex the `.security*` index. That is to say,
-use them to log in to {kib} and run the Upgrade Assistant or to call the
-reindex API. You can use your regular administration credentials to
-reindex the other internal indices.
-
-. Delete the temporary superuser account from the file realm. For
-example, run the {ref}/users-command.html[elasticsearch-users userdel] command:
-+
---
-[source,sh]
-----------------------------------------------------------
-bin/elasticsearch-users userdel <user_name>
-----------------------------------------------------------
---
-
-For more information, see <<file-realm>>.
-====
-endif::include-xpack[]
-
-[[reindex-upgrade-remote]]
-=== Reindex from a remote cluster
-
-You can use <<reindex-from-remote,reindex from remote>> to migrate indices from
-your old cluster to a new {version} cluster. This enables you to move to
-{version} from a pre-6.8 cluster without interrupting service.
-
-[WARNING]
-=============================================
-
-{es} provides backwards compatibility support that enables
-indices from the previous major version to be upgraded to the
-current major version. Skipping a major version means that you must
-resolve any backward compatibility issues yourself.
-
-{es} does not support forward compatibility across major versions.
-For example, you cannot reindex from a 7.x cluster into a 6.x cluster.
-
-ifdef::include-xpack[]
-If you use {ml-features} and you're migrating indices from a 6.5 or earlier
-cluster, the job and {dfeed} configuration information is not stored in an
-index. You must recreate your {ml} jobs in the new cluster. If you are migrating
-from a 6.6 or later cluster, it is a good idea to temporarily halt the tasks
-associated with your {ml} jobs and {dfeeds} to prevent inconsistencies between
-different {ml} indices that are reindexed at slightly different times. Use the
-<<ml-set-upgrade-mode,set upgrade mode API>> or 
-{ml-docs}/stopping-ml.html[stop all {dfeeds} and close all {ml} jobs].
-endif::include-xpack[]
-
-=============================================
-
-To migrate your indices:
-
-. Set up a new {version} cluster and add the existing cluster to the
-`reindex.remote.whitelist` in `elasticsearch.yml`.
-+
---
-[source,yaml]
---------------------------------------------------
-reindex.remote.whitelist: oldhost:9200
---------------------------------------------------
-
-[NOTE]
-=============================================
-The new cluster doesn't have to start fully-scaled out. As you migrate
-indices and shift the load to the new cluster, you can add nodes to the new
-cluster and remove nodes from the old one.
-
-=============================================
---
-
-. For each index that you need to migrate to the new cluster:
-
-.. Create an index with the appropriate mappings and settings. Set the
-  `refresh_interval` to `-1` and set `number_of_replicas` to `0` for
-  faster reindexing.
-
-.. Use the <<docs-reindex,`reindex` API>> to pull documents from the
-remote index into the new {version} index.
-+
-include::{es-ref-dir}/docs/reindex.asciidoc[tag=remote-reindex-slicing]
-+
---
-[source,console]
---------------------------------------------------
-POST _reindex
-{
-  "source": {
-    "remote": {
-      "host": "http://oldhost:9200",
-      "username": "user",
-      "password": "pass"
-    },
-    "index": "source",
-    "query": {
-      "match": {
-        "test": "data"
-      }
-    }
-  },
-  "dest": {
-    "index": "dest"
-  }
-}
---------------------------------------------------
-// TEST[setup:host]
-// TEST[s/^/PUT source\n/]
-// TEST[s/oldhost:9200",/\${host}"/]
-// TEST[s/"username": "user",//]
-// TEST[s/"password": "pass"//]
-
-If you run the reindex job in the background by setting `wait_for_completion`
-to `false`, the reindex request returns a `task_id` you can use to
-monitor progress of the reindex job with the <<tasks,task API>>:
-`GET _tasks/TASK_ID`.
---
-
-.. When the reindex job completes, set the `refresh_interval` and
-  `number_of_replicas` to the desired values (the default settings are
-  `30s` and `1`).
-
-.. Once reindexing is complete and the status of the new index is `green`,
-  you can delete the old index.
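For example, restoring the settings on the new index and waiting for it to go
`green` might look like this sketch (using the `dest` index from the snippet
above; the values shown are the defaults):

[source,console]
--------------------------------------------------
PUT dest/_settings
{
  "index.refresh_interval": "30s",
  "index.number_of_replicas": 1
}

GET _cluster/health/dest?wait_for_status=green&timeout=30s
--------------------------------------------------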

+ 0 - 6
docs/reference/upgrade/remove-xpack.asciidoc

@@ -1,6 +0,0 @@
-IMPORTANT: If you are upgrading from 6.2 or earlier and use {xpack},
-run `bin/elasticsearch-plugin remove x-pack` to remove the {xpack} plugin before
-you upgrade. The {xpack} functionality is now included in the default distribution
-and is no longer installed separately. The node won't start after upgrade if
-the {xpack} plugin is present. You will need to downgrade, remove the plugin,
-and reapply the upgrade.
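For reference, the removal step to run on each node before the upgrade is the
single command quoted above:

[source,sh]
----------------------------------------------------------
bin/elasticsearch-plugin remove x-pack
----------------------------------------------------------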

+ 0 - 248
docs/reference/upgrade/rolling_upgrade.asciidoc

@@ -1,248 +0,0 @@
-[[rolling-upgrades]]
-== Rolling upgrades
-
-A rolling upgrade allows an {es} cluster to be upgraded one node at
-a time so upgrading does not interrupt service. Running multiple versions of
-{es} in the same cluster beyond the duration of an upgrade is
-not supported, as shards cannot be replicated from upgraded nodes to nodes
-running the older version.
-
-We strongly recommend that when you upgrade you divide your cluster's nodes
-into the following two groups and upgrade the groups in this order:
-
-. Nodes that are not <<master-node,master-eligible>>. You can retrieve a list
-of these nodes with `GET /_nodes/_all,master:false/_none` or by finding all the
-nodes configured with `node.master: false`.
-
-.. If you are using data tiers, or a hot-warm-cold architecture based on node 
-attributes, you should upgrade the nodes tier-by-tier, completing the upgrade
-of each tier before starting the next. Upgrade the frozen tier first, then the
-cold tier, then the warm tier, and finally the hot tier. This ensures that ILM
-can continue to move shards between phases while maintaining version
-compatibility. You can get the list of nodes in a specific tier with
-`GET /_nodes/data_frozen:true/_none`, `GET /_nodes/data_cold:true/_none`, and so on.
-
-.. If you are not using data tiers, you may upgrade the nodes within the group 
-in any order.
-
-. Master-eligible nodes, which are the remaining nodes. You can retrieve a list
-of these nodes with `GET /_nodes/master:true`.
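For reference, the node listings mentioned above as console requests:

[source,console]
--------------------------------------------------
GET /_nodes/_all,master:false/_none

GET /_nodes/master:true
--------------------------------------------------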
-
-Upgrading the nodes in this order ensures that the master-ineligible nodes are
-always running a version at least as new as the master-eligible nodes. Newer
-nodes can always join a cluster with an older master, but older nodes cannot
-always join a cluster with a newer master. By upgrading the master-eligible
-nodes last you ensure that all the master-ineligible nodes will be able to join
-the cluster whether the master-eligible nodes have been upgraded or not. If you
-upgrade any master-eligible nodes before the master-ineligible nodes then there
-is a risk that the older nodes will leave the cluster and will not be able to
-rejoin until they have been upgraded.
-
-Rolling upgrades are supported:
-
-include::{es-repo-dir}/upgrade.asciidoc[tag=rolling-upgrade-versions]
-
-Upgrading directly to {version} from 6.6 or earlier requires a
-<<restart-upgrade, full cluster restart>>.
-
-include::preparing_to_upgrade.asciidoc[]
-
-[discrete]
-=== Upgrading your cluster
-
-To perform a rolling upgrade to {version}:
-
-. *Disable shard allocation*.
-+
---
-include::disable-shard-alloc.asciidoc[]
---
-
-. *Stop non-essential indexing and perform a flush.* (Optional)
-+
---
-While you can continue indexing during the upgrade, shard recovery
-is much faster if you temporarily stop non-essential indexing and perform a
-<<indices-flush, flush>>.
-
-[source,console]
---------------------------------------------------
-POST /_flush
---------------------------------------------------
---
-
-. *Temporarily stop the tasks associated with active {ml} jobs and {dfeeds}.* (Optional)
-+
---
-include::close-ml.asciidoc[]
---
-
-. [[upgrade-node]] *Shut down a single node*.
-+
---
-include::shut-down-node.asciidoc[]
---
-
-. *Upgrade the node you shut down.*
-+
---
-include::upgrade-node.asciidoc[]
-include::set-paths-tip.asciidoc[]
-
-[[rolling-upgrades-bootstrapping]]
-NOTE: You should leave `cluster.initial_master_nodes` unset while performing a
-rolling upgrade. Each upgraded node is joining an existing cluster so there is
-no need for <<modules-discovery-bootstrap-cluster,cluster bootstrapping>>. You
-must configure <<built-in-hosts-providers,either `discovery.seed_hosts` or
-`discovery.seed_providers`>> on every node.
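A sketch of such a configuration in `elasticsearch.yml` (the host names are
placeholders):

[source,yaml]
--------------------------------------------------
discovery.seed_hosts:
  - es-node-1.example.com
  - es-node-2.example.com
  - es-node-3.example.com
--------------------------------------------------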
---
-
-. *Upgrade any plugins.*
-+
-Use the `elasticsearch-plugin` script to install the upgraded version of each
-installed {es} plugin. All plugins must be upgraded when you upgrade
-a node.
-
-. If you use {es} {security-features} to define realms, verify that your realm
-settings are up-to-date. The format of realm settings changed in version 7.0; in
-particular, the placement of the realm type changed. See
-<<realm-settings,Realm settings>>.
-
-. *Start the upgraded node.*
-+
---
-
-Start the newly-upgraded node and confirm that it joins the cluster by checking
-the log file or by submitting a `_cat/nodes` request:
-
-[source,console]
---------------------------------------------------
-GET _cat/nodes
---------------------------------------------------
---
-
-. *Reenable shard allocation.*
-+
---
-
-For data nodes, once the node has joined the cluster, remove the
-`cluster.routing.allocation.enable` setting to enable shard allocation and start
-using the node:
-
-[source,console]
---------------------------------------------------
-PUT _cluster/settings
-{
-  "persistent": {
-    "cluster.routing.allocation.enable": null
-  }
-}
---------------------------------------------------
---
-
-. *Wait for the node to recover.*
-+
---
-
-Before upgrading the next node, wait for the cluster to finish shard allocation.
-You can check progress by submitting a <<cat-health,`_cat/health`>> request:
-
-[source,console]
---------------------------------------------------
-GET _cat/health?v=true
---------------------------------------------------
-
-Wait for the `status` column to switch to `green`. Once the node is `green`, all
-primary and replica shards have been allocated.
-
-[IMPORTANT]
-====================================================
-During a rolling upgrade, primary shards assigned to a node running the new
-version cannot have their replicas assigned to a node with the old
-version. The new version might have a different data format that is
-not understood by the old version.
-
-If it is not possible to assign the replica shards to another node
-(there is only one upgraded node in the cluster), the replica
-shards remain unassigned and status stays `yellow`.
-
-In this case, you can proceed once there are no initializing or relocating shards
-(check the `init` and `relo` columns).
-
-As soon as another node is upgraded, the replicas can be assigned and the
-status will change to `green`.
-====================================================
-
-Shards that were not <<indices-flush,flushed>> might take longer to
-recover. You can monitor the recovery status of individual shards by
-submitting a <<cat-recovery,`_cat/recovery`>> request:
-
-[source,console]
---------------------------------------------------
-GET _cat/recovery
---------------------------------------------------
-
-If you stopped indexing, it is safe to resume indexing as soon as
-recovery completes.
---
-
-. *Repeat*
-+
---
-
-When the node has recovered and the cluster is stable, repeat these steps
-for each node that needs to be upgraded. You can monitor the health of the cluster
-with a <<cat-health,`_cat/health`>> request:
-
-[source,console]
---------------------------------------------------
-GET /_cat/health?v=true
---------------------------------------------------
-
-And check which nodes have been upgraded with a <<cat-nodes,`_cat/nodes`>> request:
-
-[source,console]
---------------------------------------------------
-GET /_cat/nodes?h=ip,name,version&v=true
---------------------------------------------------
-
---
-
-. *Restart machine learning jobs.*
-+
---
-include::open-ml.asciidoc[]
---
-
-
-[IMPORTANT]
-====================================================
-
-During a rolling upgrade, the cluster continues to operate normally. However,
-any new functionality is disabled or operates in a backward compatible mode
-until all nodes in the cluster are upgraded. New functionality becomes
-operational once the upgrade is complete and all nodes are running the new
-version. Once that has happened, there's no way to return to operating in a
-backward compatible mode. Nodes running the previous version will not be
-allowed to join the fully upgraded cluster.
-
-In the unlikely case of a network malfunction during the upgrade process that
-isolates all remaining old nodes from the cluster, you must take the old nodes
-offline and upgrade them to enable them to join the cluster.
-
-If you stop half or more of the master-eligible nodes all at once during the
-upgrade then the cluster will become unavailable, meaning that the upgrade is
-no longer a _rolling_ upgrade. If this happens, you should upgrade and restart
-all of the stopped master-eligible nodes to allow the cluster to form again, as
-if performing a <<restart-upgrade,full-cluster restart upgrade>>. It may also
-be necessary to upgrade all of the remaining old nodes before they can join the
-cluster after it re-forms.
-
-Similarly, if you run a testing/development environment with only one master
-node, the master node should be upgraded last. Restarting a single master node
-forces the cluster to be reformed. The new cluster will initially only have the
-upgraded master node and will thus reject the older nodes when they re-join the
-cluster. Nodes that have already been upgraded will successfully re-join the
-upgraded master.
-
-====================================================

+ 0 - 17
docs/reference/upgrade/set-paths-tip.asciidoc

@@ -1,17 +0,0 @@
-[TIP]
-================================================
-
-When you extract the zip or tarball packages, the `elasticsearch-n.n.n`
-directory contains the {es} `config`, `data`, and `logs` directories.
-
-We recommend moving these directories out of the {es} directory
-so that there is no chance of deleting them when you upgrade {es}.
-To specify the new locations, use the `ES_PATH_CONF` environment
-variable and the `path.data` and `path.logs` settings. For more information,
-see <<important-settings,Important {es} configuration>>.
-
-The <<deb,Debian>> and <<rpm,RPM>> packages place these directories in the
-appropriate place for each operating system. In production, we recommend
-installing using the deb or rpm package.
-
-================================================

+ 0 - 28
docs/reference/upgrade/upgrade-node.asciidoc

@@ -1,28 +0,0 @@
-To upgrade using a <<deb,Debian>> or <<rpm,RPM>> package:
-
-*   Use `rpm` or `dpkg` to install the new package. All files are
-    installed in the appropriate location for the operating system
-    and {es} config files are not overwritten.
-
-To upgrade using a zip or compressed tarball:
-
-.. Extract the zip or tarball to a _new_ directory. This is critical if you
-   are not using external `config` and `data` directories.
-
-.. Set the `ES_PATH_CONF` environment variable to specify the location of
-   your external `config` directory and `jvm.options` file. If you are not
-   using an external `config` directory, copy your old configuration
-   over to the new installation.
-
-.. Set `path.data` in `config/elasticsearch.yml` to point to your external
-   data directory. If you are not using an external `data` directory, copy
-   your old data directory over to the new installation. +
-+
-IMPORTANT: If you use {monitor-features}, re-use the data directory when you upgrade
-{es}. Monitoring identifies unique {es} nodes by using the persistent UUID, which
-is stored in the data directory.
-
-
-.. Set `path.logs` in `config/elasticsearch.yml` to point to the location
-   where you want to store your logs. If you do not specify this setting,
-   logs are stored in the directory you extracted the archive to.
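As an illustration (the paths are placeholders, not recommendations), the external
configuration and path settings described above might look like this:

[source,sh]
----------------------------------------------------------
export ES_PATH_CONF=/etc/elasticsearch
----------------------------------------------------------

[source,yaml]
--------------------------------------------------
path:
  data: /var/data/elasticsearch
  logs: /var/log/elasticsearch
--------------------------------------------------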