[[modules-discovery-adding-removing-nodes]]
=== Adding and removing nodes

As nodes are added or removed, Elasticsearch maintains an optimal level of fault
tolerance by automatically updating the cluster's _voting configuration_, which
is the set of <<master-node,master-eligible nodes>> whose responses are counted
when making decisions such as electing a new master or committing a new cluster
state.

It is recommended to have a small and fixed number of master-eligible nodes in a
cluster, and to scale the cluster up and down by adding and removing
master-ineligible nodes only. However, there are situations in which it may be
desirable to add or remove some master-eligible nodes to or from a cluster.

==== Adding master-eligible nodes

If you wish to add some master-eligible nodes to your cluster, simply configure
the new nodes to find the existing cluster and start them up. Elasticsearch will
add the new nodes to the voting configuration if it is appropriate to do so.
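
If you want to confirm that a new node has joined and see which nodes are
master-eligible, one option is the `_cat/nodes` API. The sketch below assumes
the standard `name`, `node.role` and `master` column headers:

[source,js]
--------------------------------------------------
# List each node's name and roles and mark the currently-elected master, to
# confirm that the newly-started node has joined the cluster.
GET /_cat/nodes?v&h=name,node.role,master
--------------------------------------------------
// CONSOLE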

==== Removing master-eligible nodes

When removing master-eligible nodes, it is important not to remove too many all
at the same time. For instance, if there are currently seven master-eligible
nodes and you wish to reduce this to three, it is not possible simply to stop
four of the nodes at once: doing so would leave only three nodes remaining,
which is fewer than half of the voting configuration, so the cluster would be
unable to take any further actions.

As long as there are at least three master-eligible nodes in the cluster, as a
general rule it is best to remove nodes one-at-a-time, allowing enough time for
the cluster to <<modules-discovery-quorums,automatically adjust>> the voting
configuration and adapt the fault tolerance level to the new set of nodes.
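
Between removals, you can check that the voting configuration has shrunk to
match the remaining nodes before stopping the next one. One way to do this,
sketched below, is to filter the cluster state down to the committed voting
configuration; the `metadata.cluster_coordination.last_committed_config` path
used here assumes the field names exposed by the cluster state API:

[source,js]
--------------------------------------------------
# Show the node IDs currently in the committed voting configuration, so you can
# confirm it has adjusted before taking the next master-eligible node offline.
GET /_cluster/state?filter_path=metadata.cluster_coordination.last_committed_config
--------------------------------------------------
// CONSOLE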

If there are only two master-eligible nodes remaining then neither node can be
safely removed since both are required to reliably make progress. You must first
inform Elasticsearch that one of the nodes should not be part of the voting
configuration, and that the voting power should instead be given to the other
node. You can then take the excluded node offline without preventing the other
node from making progress. A node which is added to a voting configuration
exclusion list still works normally, but Elasticsearch tries to remove it from
the voting configuration so its vote is no longer required. Importantly,
Elasticsearch will never automatically move a node on the voting exclusions list
back into the voting configuration. Once an excluded node has been successfully
auto-reconfigured out of the voting configuration, it is safe to shut it down
without affecting the cluster's master-level availability. A node can be added
to the voting configuration exclusion list using the following API:

[source,js]
--------------------------------------------------
# Add node to voting configuration exclusions list and wait for the system to
# auto-reconfigure the node out of the voting configuration up to the default
# timeout of 30 seconds
POST /_cluster/voting_config_exclusions/node_name

# Add node to voting configuration exclusions list and wait for
# auto-reconfiguration up to one minute
POST /_cluster/voting_config_exclusions/node_name?timeout=1m
--------------------------------------------------
// CONSOLE
// TEST[skip:this would break the test cluster if executed]

The node that should be added to the exclusions list is specified using
<<cluster-nodes,node filters>> in place of `node_name` here. If a call to the
voting configuration exclusions API fails, you can safely retry it. Only a
successful response guarantees that the node has actually been removed from the
voting configuration and will not be reinstated.

Although the voting configuration exclusions API is most useful for down-scaling
a two-node to a one-node cluster, it is also possible to use it to remove
multiple master-eligible nodes all at the same time. Adding multiple nodes to
the exclusions list causes the system to try to auto-reconfigure all of these
nodes out of the voting configuration, allowing them to be safely shut down
while keeping the cluster available. In the example described above, shrinking a
seven-master-node cluster down to only three master-eligible nodes, you could
add four nodes to the exclusions list, wait for confirmation, and then shut them
down simultaneously.
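
For instance, a single request along the following lines could exclude four
nodes at once. The node names `node_name_4` through `node_name_7` are
placeholders for whichever four nodes you intend to retire, and the request
assumes that a comma-separated list of node names is accepted as a node filter:

[source,js]
--------------------------------------------------
# Exclude four master-eligible nodes in one request by passing a comma-separated
# list of node names as the node filter, then wait up to the default timeout for
# them to be auto-reconfigured out of the voting configuration. The node names
# are placeholders for this example.
POST /_cluster/voting_config_exclusions/node_name_4,node_name_5,node_name_6,node_name_7
--------------------------------------------------
// CONSOLE
// TEST[skip:this would break the test cluster if executed]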

NOTE: Voting exclusions are only required when removing at least half of the
master-eligible nodes from a cluster in a short time period. They are not
required when removing master-ineligible nodes, nor are they required when
removing fewer than half of the master-eligible nodes.

Adding an exclusion for a node creates an entry for that node in the voting
configuration exclusions list, which causes the system to automatically try to
reconfigure the voting configuration to remove that node and prevents it from
returning to the voting configuration once it has been removed. The current
list of exclusions is stored in the cluster state and can be inspected as
follows:

[source,js]
--------------------------------------------------
GET /_cluster/state?filter_path=metadata.cluster_coordination.voting_config_exclusions
--------------------------------------------------
// CONSOLE

This list is limited in size by the following setting:

`cluster.max_voting_config_exclusions`::

    Sets a limit on the number of voting configuration exclusions at any one
    time. Defaults to `10`.
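
If a single maintenance operation needs to exclude more than ten nodes, the
limit can be raised beforehand. The sketch below assumes this setting is
registered as a dynamic cluster setting, and the value `20` is only an
illustration:

[source,js]
--------------------------------------------------
# Raise the exclusions limit for a large maintenance operation. This assumes the
# setting can be updated dynamically; the value 20 is only an example.
PUT /_cluster/settings
{
  "persistent": {
    "cluster.max_voting_config_exclusions": 20
  }
}
--------------------------------------------------
// CONSOLE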

Since voting configuration exclusions are persistent and limited in number, they
must be cleaned up. Normally an exclusion is added when performing some
maintenance on the cluster, and the exclusions should be cleaned up when the
maintenance is complete. Clusters should have no voting configuration exclusions
in normal operation.

If a node is excluded from the voting configuration because it is to be shut
down permanently, its exclusion can be removed after it is shut down and removed
from the cluster. Exclusions can also be cleared if they were created in error
or were only required temporarily:

[source,js]
--------------------------------------------------
# Wait for all the nodes with voting configuration exclusions to be removed from
# the cluster and then remove all the exclusions, allowing any node to return to
# the voting configuration in the future.
DELETE /_cluster/voting_config_exclusions

# Immediately remove all the voting configuration exclusions, allowing any node
# to return to the voting configuration in the future.
DELETE /_cluster/voting_config_exclusions?wait_for_removal=false
--------------------------------------------------
// CONSOLE