[[restart-upgrade]]
== Full cluster restart upgrade

A full cluster restart upgrade requires that you shut down all nodes in the
cluster, upgrade them, and restart the cluster. A full cluster restart was
required when upgrading to major versions prior to 6.x. Elasticsearch 6.x
supports <<rolling-upgrades, rolling upgrades>> from *Elasticsearch 5.6*.
Upgrading to 6.x from earlier versions requires a full cluster restart. See
the <<upgrade-paths,Upgrade paths table>> to verify the type of upgrade you
need to perform.

To perform a full cluster restart upgrade:

. *Disable shard allocation.*
+
--
include::disable-shard-alloc.asciidoc[]
--

. *Stop indexing and perform a synced flush.*
+
--
Performing a <<indices-synced-flush, synced-flush>> speeds up shard
recovery.

include::synced-flush.asciidoc[]
--

. *Stop any machine learning jobs that are running.* See
{xpack-ref}/stopping-ml.html[Stopping Machine Learning].
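+
--
As an illustrative sketch (assuming anomaly detection jobs and a version
that supports the `_all` wildcard for the close job API), you can close all
running jobs at once:

[source,js]
--------------------------------------------------
POST _xpack/ml/anomaly_detectors/_all/_close <1>
--------------------------------------------------
// CONSOLE
<1> `_all` closes every open job; you can also close jobs individually by
ID.
--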
. *Shut down all nodes.*
+
--
include::shut-down-node.asciidoc[]
--

. *Upgrade all nodes.*
+
--
include::remove-xpack.asciidoc[]
--
+
--
include::upgrade-node.asciidoc[]
include::set-paths-tip.asciidoc[]
--

. *Upgrade any plugins.*
+
Use the `elasticsearch-plugin` script to install the upgraded version of
each installed Elasticsearch plugin. All plugins must be upgraded when you
upgrade a node.
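+
A minimal sketch, using the `analysis-icu` plugin as an example; repeat the
remove/install pair for each plugin installed on the node:
+
[source,sh]
--------------------------------------------------
# Remove the old version, then install the release that matches the
# upgraded node. Substitute the name of each installed plugin.
bin/elasticsearch-plugin remove analysis-icu
bin/elasticsearch-plugin install analysis-icu
--------------------------------------------------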
. *Start each upgraded node.*
+
--
If you have dedicated master nodes, start them first and wait for them to
form a cluster and elect a master before proceeding with your data nodes.
You can check progress by looking at the logs.

As soon as the <<master-election,minimum number of master-eligible nodes>>
have discovered each other, they form a cluster and elect a master. At
that point, you can use <<cat-health,`_cat/health`>> and
<<cat-nodes,`_cat/nodes`>> to monitor nodes joining the cluster:

[source,sh]
--------------------------------------------------
GET _cat/health

GET _cat/nodes
--------------------------------------------------
// CONSOLE

The `status` column returned by `_cat/health` shows the health of the
cluster: `red`, `yellow`, or `green`.
--

. *Wait for all nodes to join the cluster and report a status of yellow.*
+
--
When a node joins the cluster, it begins to recover any primary shards that
are stored locally. The <<cat-health,`_cat/health`>> API initially reports
a `status` of `red`, indicating that not all primary shards have been
allocated.

Once a node recovers its local shards, the cluster `status` switches to
`yellow`, indicating that all primary shards have been recovered, but not
all replica shards are allocated. This is to be expected because you have
not yet reenabled allocation. Delaying the allocation of replicas until all
nodes are `yellow` allows the master to allocate replicas to nodes that
already have local shard copies.
--

. *Reenable allocation.*
+
--
When all nodes have joined the cluster and recovered their primary shards,
reenable allocation by restoring `cluster.routing.allocation.enable` to its
default:

[source,js]
------------------------------------------------------
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.enable": null
  }
}
------------------------------------------------------
// CONSOLE

Once allocation is reenabled, the cluster starts allocating replica shards
to the data nodes. At this point it is safe to resume indexing and
searching, but your cluster will recover more quickly if you can wait until
all primary and replica shards have been successfully allocated and the
cluster status is `green`.

You can monitor progress with the <<cat-health,`_cat/health`>> and
<<cat-recovery,`_cat/recovery`>> APIs:

[source,sh]
--------------------------------------------------
GET _cat/health

GET _cat/recovery
--------------------------------------------------
// CONSOLE
--

. *Restart machine learning jobs.*
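+
--
For example, a sketch that reopens a job with the open jobs API, assuming a
hypothetical job named `my-job`:

[source,js]
--------------------------------------------------
POST _xpack/ml/anomaly_detectors/my-job/_open <1>
--------------------------------------------------
// CONSOLE
<1> `my-job` is a placeholder; substitute the ID of each job you closed
before the upgrade.
--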