[[modules-discovery-adding-removing-nodes]]
=== Adding and removing nodes

As nodes are added or removed, Elasticsearch maintains an optimal level of
fault tolerance by automatically updating the cluster's _voting configuration_,
which is the set of <<master-node,master-eligible nodes>> whose responses are
counted when making decisions such as electing a new master or committing a new
cluster state.

It is recommended to have a small and fixed number of master-eligible nodes in
a cluster, and to scale the cluster up and down by adding and removing
master-ineligible nodes only. However, there are situations in which it may be
desirable to add or remove some master-eligible nodes to or from a cluster.

==== Adding master-eligible nodes

If you wish to add some master-eligible nodes to your cluster, simply configure
the new nodes to find the existing cluster and start them up. Elasticsearch
will add the new nodes to the voting configuration if it is appropriate to do
so.
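For example, a new node needs only the name of the cluster it should join and
the addresses of some existing nodes through which it can discover the cluster.
The following `elasticsearch.yml` snippet is a minimal sketch: the cluster name
and host addresses are placeholders, and master eligibility is assumed to be
left at its default of enabled.

[source,yaml]
--------------------------------------------------
# Settings for the new node; the names and addresses below are examples only
cluster.name: my-cluster            # must match the existing cluster's name
discovery.seed_hosts:               # existing nodes to contact for discovery
   - existing-host-1:9300
   - existing-host-2:9300
--------------------------------------------------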
==== Removing master-eligible nodes

When removing master-eligible nodes, it is important not to remove too many all
at the same time. For instance, if there are currently seven master-eligible
nodes and you wish to reduce this to three, it is not possible simply to stop
four of the nodes at once: to do so would leave only three nodes remaining,
fewer than half of the voting configuration, and the cluster would be unable to
take any further actions.

As long as there are at least three master-eligible nodes in the cluster, as a
general rule it is best to remove nodes one at a time, allowing enough time for
the cluster to <<modules-discovery-quorums,automatically adjust>> the voting
configuration and adapt the fault tolerance level to the new set of nodes.

If there are only two master-eligible nodes remaining then neither node can be
safely removed since both are required to reliably make progress. You must
first inform Elasticsearch that one of the nodes should not be part of the
voting configuration, and that the voting power should instead be given to the
other node. You can then take the excluded node offline without preventing the
other node from making progress. A node which is added to a voting
configuration exclusion list still works normally, but Elasticsearch tries to
remove it from the voting configuration so its vote is no longer required.
Importantly, Elasticsearch will never automatically move a node on the voting
exclusions list back into the voting configuration. Once an excluded node has
been successfully auto-reconfigured out of the voting configuration, it is safe
to shut it down without affecting the cluster's master-level availability. A
node can be added to the voting configuration exclusion list using the
following API:

[source,js]
--------------------------------------------------
# Add node to voting configuration exclusions list and wait for the system to
# auto-reconfigure the node out of the voting configuration up to the default
# timeout of 30 seconds
POST /_cluster/voting_config_exclusions/node_name

# Add node to voting configuration exclusions list and wait for
# auto-reconfiguration up to one minute
POST /_cluster/voting_config_exclusions/node_name?timeout=1m
--------------------------------------------------
// CONSOLE
// TEST[skip:this would break the test cluster if executed]

The node that should be added to the exclusions list is specified using
<<cluster-nodes,node filters>> in place of `node_name` here. If a call to the
voting configuration exclusions API fails, you can safely retry it. Only a
successful response guarantees that the node has actually been removed from the
voting configuration and will not be reinstated.

Although the voting configuration exclusions API is most useful for
down-scaling from a two-node to a one-node cluster, it is also possible to use
it to remove multiple master-eligible nodes all at the same time. Adding
multiple nodes to the exclusions list causes the system to try to
auto-reconfigure all of these nodes out of the voting configuration, allowing
them to be safely shut down while keeping the cluster available. In the example
described above, where a seven-master-node cluster is shrunk down to only three
master nodes, you could add four nodes to the exclusions list, wait for
confirmation, and then shut them down simultaneously.

NOTE: Voting exclusions are only required when removing at least half of the
master-eligible nodes from a cluster in a short time period. They are not
required when removing master-ineligible nodes, nor are they required when
removing fewer than half of the master-eligible nodes.

Adding an exclusion for a node creates an entry for that node in the voting
configuration exclusions list, which causes the system to automatically try to
reconfigure the voting configuration to remove that node and prevents it from
returning to the voting configuration once it has been removed. The current
list of exclusions is stored in the cluster state and can be inspected as
follows:

[source,js]
--------------------------------------------------
GET /_cluster/state?filter_path=metadata.cluster_coordination.voting_config_exclusions
--------------------------------------------------
// CONSOLE

This list is limited in size by the following setting:

`cluster.max_voting_config_exclusions`::

    Sets a limit on the number of voting configuration exclusions at any one
    time. Defaults to `10`.
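If a single maintenance operation needs to exclude more nodes than this, the
limit can be raised. As a sketch, assuming the setting is dynamic as it is in
recent versions, it can be updated through the cluster settings API:

[source,js]
--------------------------------------------------
# Temporarily raise the exclusions limit; 20 is an arbitrary example value
PUT /_cluster/settings
{
  "persistent": {
    "cluster.max_voting_config_exclusions": 20
  }
}
--------------------------------------------------
// CONSOLE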
Since voting configuration exclusions are persistent and limited in number,
they must be cleaned up. Normally an exclusion is added when performing some
maintenance on the cluster, and the exclusions should be cleaned up when the
maintenance is complete. Clusters should have no voting configuration
exclusions in normal operation.

If a node is excluded from the voting configuration because it is to be shut
down permanently, its exclusion can be removed after it is shut down and
removed from the cluster. Exclusions can also be cleared if they were created
in error or were only required temporarily:

[source,js]
--------------------------------------------------
# Wait for all the nodes with voting configuration exclusions to be removed
# from the cluster and then remove all the exclusions, allowing any node to
# return to the voting configuration in the future.
DELETE /_cluster/voting_config_exclusions

# Immediately remove all the voting configuration exclusions, allowing any node
# to return to the voting configuration in the future.
DELETE /_cluster/voting_config_exclusions?wait_for_removal=false
--------------------------------------------------
// CONSOLE
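As a worked example, scaling the two-node cluster described above down to a
single node could follow the sequence sketched below, in which `node-2` is a
hypothetical node name:

[source,js]
--------------------------------------------------
# 1. Exclude the node to be removed (node-2 is a hypothetical name) and wait
#    for it to be auto-reconfigured out of the voting configuration
POST /_cluster/voting_config_exclusions/node-2

# 2. Shut down node-2; the remaining node keeps the cluster available

# 3. Once node-2 has left the cluster, clean up the exclusion
DELETE /_cluster/voting_config_exclusions
--------------------------------------------------
// CONSOLE
// TEST[skip:this would break the test cluster if executed]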