[[restart-upgrade]]
=== Full cluster restart upgrade

Elasticsearch requires a full cluster restart when upgrading across major
versions. Rolling upgrades are not supported across major versions. Consult
this <<setup-upgrade,table>> to verify that a full cluster restart is
required.

The process to perform an upgrade with a full cluster restart is as follows:

. *Disable shard allocation*
+
--

When you shut down a node, the allocation process will immediately try to
replicate the shards that were on that node to other nodes in the cluster,
causing a lot of wasted I/O. This can be avoided by disabling allocation
before shutting down a node:

[source,js]
--------------------------------------------------
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.enable": "none"
  }
}
--------------------------------------------------
// CONSOLE
// TEST[skip:indexes don't assign]
--

. *Perform a synced flush*
+
--

Shard recovery will be much faster if you stop indexing and issue a
<<indices-synced-flush, synced-flush>> request:

[source,sh]
--------------------------------------------------
POST _flush/synced
--------------------------------------------------
// CONSOLE

A synced flush request is a ``best effort'' operation. It will fail if there
are any pending indexing operations, but it is safe to reissue the request
multiple times if necessary.
--

. *Shutdown and upgrade all nodes*
+
--

Stop all Elasticsearch services on all nodes in the cluster. Each node can be
upgraded following the same procedure described in <<upgrade-node>>.
--

. *Upgrade any plugins*
+
--

Elasticsearch plugins must be upgraded when upgrading a node. Use the
`elasticsearch-plugin` script to install the correct version of any plugins
that you need.
--

. *Start the cluster*
+
--

If you have dedicated master nodes -- nodes with `node.master` set to `true`
(the default) and `node.data` set to `false` -- then it is a good idea to
start them first. Wait for them to form a cluster and to elect a master
before proceeding with the data nodes. You can check progress by looking at
the logs.

As soon as the <<master-election,minimum number of master-eligible nodes>>
have discovered each other, they will form a cluster and elect a master. From
that point on, the <<cat-health,`_cat/health`>> and <<cat-nodes,`_cat/nodes`>>
APIs can be used to monitor nodes joining the cluster:

[source,sh]
--------------------------------------------------
GET _cat/health

GET _cat/nodes
--------------------------------------------------
// CONSOLE

Use these APIs to check that all nodes have successfully joined the cluster.
--

. *Wait for yellow*
+
--

As soon as each node has joined the cluster, it will start to recover any
primary shards that are stored locally. Initially, the
<<cat-health,`_cat/health`>> request will report a `status` of `red`, meaning
that not all primary shards have been allocated.

Once each node has recovered its local shards, the `status` will become
`yellow`, meaning all primary shards have been recovered, but not all replica
shards are allocated. This is to be expected because allocation is still
disabled.
--
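+
--

If you would rather block than poll `_cat/health` repeatedly, the cluster
health API accepts a `wait_for_status` parameter. A minimal sketch (the
`timeout` value here is just an example; adjust it to your environment):

[source,sh]
--------------------------------------------------
GET _cluster/health?wait_for_status=yellow&timeout=60s
--------------------------------------------------
// CONSOLE

The request returns as soon as the cluster reaches at least `yellow`, or when
the timeout expires.
--
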
. *Reenable allocation*
+
--

Delaying the allocation of replicas until all nodes have joined the cluster
allows the master to allocate replicas to nodes which already have local shard
copies. At this point, with all the nodes in the cluster, it is safe to
reenable shard allocation:

[source,js]
------------------------------------------------------
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.enable": "all"
  }
}
------------------------------------------------------
// CONSOLE

The cluster will now start allocating replica shards to all data nodes. At this
point it is safe to resume indexing and searching, but your cluster will
recover more quickly if you can delay indexing and searching until all shards
have recovered.

You can monitor progress with the <<cat-health,`_cat/health`>> and
<<cat-recovery,`_cat/recovery`>> APIs:

[source,sh]
--------------------------------------------------
GET _cat/health

GET _cat/recovery
--------------------------------------------------
// CONSOLE

Once the `status` column in the `_cat/health` output has reached `green`, all
primary and replica shards have been successfully allocated.
--
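
If you want to script this final check instead of watching `_cat/health` by
hand, one possible approach is to poll the `status` column until it reports
`green`. A minimal sketch, assuming a node is reachable on `localhost:9200`
and that a ten-second polling interval is acceptable:

[source,sh]
--------------------------------------------------
# Poll the cat health API until the cluster status becomes green.
until curl -s 'localhost:9200/_cat/health?h=status' | grep -q green; do
  sleep 10
done
--------------------------------------------------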