[[rolling-upgrades]]
== Rolling upgrades

A rolling upgrade allows an Elasticsearch cluster to be upgraded one node at
a time so upgrading does not interrupt service. Running multiple versions of
Elasticsearch in the same cluster beyond the duration of an upgrade is
not supported, as shards cannot be replicated from upgraded nodes to nodes
running the older version.

Rolling upgrades can be performed between minor versions. Elasticsearch
6.x supports rolling upgrades from *Elasticsearch 5.6*.
Upgrading from earlier 5.x versions requires a <<restart-upgrade,
full cluster restart>>. You must <<reindex-upgrade,reindex to upgrade>> from
versions prior to 5.x.

To perform a rolling upgrade:

. *Disable shard allocation.*
+
--
include::disable-shard-alloc.asciidoc[]
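
In case the rendered snippet is not to hand, disabling allocation is a
persistent cluster-settings update along these lines (a sketch; `"none"`
halts all shard allocation, which a later step reverses by setting the key
back to `null`):

[source,js]
--------------------------------------------------
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.enable": "none"
  }
}
--------------------------------------------------
// CONSOLE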
--

. *Stop non-essential indexing and perform a synced flush.* (Optional)
+
--
While you can continue indexing during the upgrade, shard recovery
is much faster if you temporarily stop non-essential indexing and perform a
<<indices-synced-flush, synced-flush>>.

include::synced-flush.asciidoc[]
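
The synced flush itself is a single request against the standard indices API:

[source,sh]
--------------------------------------------------
POST _flush/synced
--------------------------------------------------
// CONSOLE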
--

. *Stop any machine learning jobs that are running.* See
{xpack-ref}/stopping-ml.html[Stopping Machine Learning].
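+
--
Stopping machine learning activity boils down to stopping datafeeds and
closing jobs. A sketch using the 6.x machine learning APIs, where the
datafeed and job IDs are hypothetical placeholders:

[source,js]
--------------------------------------------------
POST _xpack/ml/datafeeds/my_datafeed/_stop

POST _xpack/ml/anomaly_detectors/my_job/_close
--------------------------------------------------
--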

. [[upgrade-node]] *Shut down a single node.*
+
--
include::shut-down-node.asciidoc[]
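
On a systemd-managed package installation, for example, this is a single
command (shown as an illustration; the exact command depends on how the
node is run):

[source,sh]
--------------------------------------------------
sudo systemctl stop elasticsearch.service
--------------------------------------------------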
--

. *Upgrade the node you shut down.*
+
--
include::upgrade-node.asciidoc[]
include::set-paths-tip.asciidoc[]
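
On an RPM-based system, for instance, the upgrade is a single package
command (a sketch; the file name and version are illustrative):

[source,sh]
--------------------------------------------------
sudo rpm --upgrade elasticsearch-6.0.0.rpm
--------------------------------------------------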
--

. *Upgrade any plugins.*
+
Use the `elasticsearch-plugin` script to install the upgraded version of each
installed Elasticsearch plugin. All plugins must be upgraded when you upgrade
a node.
+
include::remove-xpack.asciidoc[]
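+
Upgrading a plugin is a remove-and-reinstall. For example, for the
`analysis-icu` plugin (shown purely as an illustration):
+
[source,sh]
--------------------------------------------------
sudo bin/elasticsearch-plugin remove analysis-icu
sudo bin/elasticsearch-plugin install analysis-icu
--------------------------------------------------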

. *Start the upgraded node.*
+
--
Start the newly-upgraded node and confirm that it joins the cluster by checking
the log file or by submitting a `_cat/nodes` request:

[source,sh]
--------------------------------------------------
GET _cat/nodes
--------------------------------------------------
// CONSOLE
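
To see at a glance which nodes are on which version, the output columns can
be narrowed (a sketch using the standard `h` parameter):

[source,sh]
--------------------------------------------------
GET _cat/nodes?h=name,version
--------------------------------------------------
// CONSOLE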
--

. *Reenable shard allocation.*
+
--
Once the node has joined the cluster, remove the `cluster.routing.allocation.enable`
setting to enable shard allocation and start using the node:

[source,js]
--------------------------------------------------
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.enable": null
  }
}
--------------------------------------------------
// CONSOLE
--

. *Wait for the node to recover.*
+
--
Before upgrading the next node, wait for the cluster to finish shard allocation.
You can check progress by submitting a <<cat-health,`_cat/health`>> request:

[source,sh]
--------------------------------------------------
GET _cat/health
--------------------------------------------------
// CONSOLE

Wait for the `status` column to switch from `yellow` to `green`. Once the
node is `green`, all primary and replica shards have been allocated.

[IMPORTANT]
====================================================
During a rolling upgrade, primary shards assigned to a node running the new
version cannot have their replicas assigned to a node with the old
version. The new version might have a different data format that is
not understood by the old version.

If it is not possible to assign the replica shards to another node
(there is only one upgraded node in the cluster), the replica
shards remain unassigned and status stays `yellow`.

In this case, you can proceed once there are no initializing or relocating shards
(check the `init` and `relo` columns).

As soon as another node is upgraded, the replicas can be assigned and the
status will change to `green`.
====================================================

Shards that were not <<indices-synced-flush,sync-flushed>> might take longer to
recover. You can monitor the recovery status of individual shards by
submitting a <<cat-recovery,`_cat/recovery`>> request:

[source,sh]
--------------------------------------------------
GET _cat/recovery
--------------------------------------------------
// CONSOLE
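
Alternatively, the cluster health API can block until the desired status is
reached (a sketch using the standard `wait_for_status` parameter; the request
simply times out if replicas cannot be assigned yet):

[source,sh]
--------------------------------------------------
GET _cluster/health?wait_for_status=green&timeout=30s
--------------------------------------------------
// CONSOLE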

If you stopped indexing, it is safe to resume indexing as soon as
recovery completes.
--

. *Repeat.*
+
--
When the node has recovered and the cluster is stable, repeat these steps
for each node that needs to be updated.
--

. *Restart machine learning jobs.*
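+
--
Reopening a job and restarting its datafeed mirror the stop commands from
earlier (again, the IDs are hypothetical placeholders):

[source,js]
--------------------------------------------------
POST _xpack/ml/anomaly_detectors/my_job/_open

POST _xpack/ml/datafeeds/my_datafeed/_start
--------------------------------------------------
--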

[IMPORTANT]
====================================================
During a rolling upgrade, the cluster continues to operate normally. However,
any new functionality is disabled or operates in a backward compatible mode
until all nodes in the cluster are upgraded. New functionality
becomes operational once the upgrade is complete and all nodes are running the
new version. Once that has happened, there's no way to return to operating
in a backward compatible mode. Nodes running the previous major version will
not be allowed to join the fully-updated cluster.

In the unlikely case of a network malfunction during the upgrade process that
isolates all remaining old nodes from the cluster, you must take the
old nodes offline and upgrade them to enable them to join the cluster.
====================================================
|