[[modules-discovery-bootstrap-cluster]]
=== Bootstrapping a cluster

Starting an Elasticsearch cluster for the very first time requires the initial
set of <<master-node,master-eligible nodes>> to be explicitly defined on one or
more of the master-eligible nodes in the cluster. This is known as _cluster
bootstrapping_. This is only required the very first time the cluster starts
up: nodes that have already joined a cluster store this information in their
data folder for use in a <<restart-upgrade,full cluster restart>>, and
freshly-started nodes that are joining a running cluster obtain this
information from the cluster's elected master.

The initial set of master-eligible nodes is defined in the
<<initial_master_nodes,`cluster.initial_master_nodes` setting>>. This should be
set to a list containing one of the following items for each master-eligible
node:

- The <<node.name,node name>> of the node.
- The node's hostname if `node.name` is not set, because `node.name` defaults
  to the node's hostname. You must use either the fully-qualified hostname or
  the bare hostname <<modules-discovery-bootstrap-cluster-fqdns,depending on
  your system configuration>>.
- The IP address of the node's <<modules-transport,publish address>>, if it is
  not possible to use the `node.name` of the node. This is normally the IP
  address to which <<common-network-settings,`network.host`>> resolves but
  <<advanced-network-settings,this can be overridden>>.
- The IP address and port of the node's publish address, in the form `IP:PORT`,
  if it is not possible to use the `node.name` of the node and there are
  multiple nodes sharing a single IP address.
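
For illustration only, a configuration mixing these forms might look like the
following sketch; the node name, IP addresses and port here are made up:

[source,yaml]
--------------------------------------------------
cluster.initial_master_nodes:
- master-a        # the node's `node.name`
- 10.0.1.102      # publish address of a node identified by IP
- 10.0.1.103:9301 # IP and port, for nodes sharing a single IP address
--------------------------------------------------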

When you start a master-eligible node, you can provide this setting on the
command line or in the `elasticsearch.yml` file. After the cluster has formed,
this setting is no longer required and is ignored. It need not be set on
master-ineligible nodes, nor on master-eligible nodes that are started to join
an existing cluster. Note that master-eligible nodes should use storage that
persists across restarts. If they do not, and `cluster.initial_master_nodes` is
set, and a full cluster restart occurs, then another brand-new cluster will
form and this may result in data loss.

It is technically sufficient to set `cluster.initial_master_nodes` on a single
master-eligible node in the cluster, and only to mention that single node in the
setting's value, but this provides no fault tolerance before the cluster has
fully formed. It is therefore better to bootstrap using at least three
master-eligible nodes, each with a `cluster.initial_master_nodes` setting
containing all three nodes.

WARNING: You must set `cluster.initial_master_nodes` to the same list of nodes
on each node on which it is set in order to be sure that only a single cluster
forms during bootstrapping and therefore to avoid the risk of data loss.

For a cluster with 3 master-eligible nodes (with <<node.name,node names>>
`master-a`, `master-b` and `master-c`) the configuration will look as follows:

[source,yaml]
--------------------------------------------------
cluster.initial_master_nodes:
- master-a
- master-b
- master-c
--------------------------------------------------

Like all node settings, it is also possible to specify the initial set of master
nodes on the command-line that is used to start Elasticsearch:

[source,bash]
--------------------------------------------------
$ bin/elasticsearch -Ecluster.initial_master_nodes=master-a,master-b,master-c
--------------------------------------------------

[NOTE]
==================================================
[[modules-discovery-bootstrap-cluster-fqdns]] The node names used in the
`cluster.initial_master_nodes` list must exactly match the `node.name`
properties of the nodes. By default the node name is set to the machine's
hostname which may or may not be fully-qualified depending on your system
configuration. If each node name is a fully-qualified domain name such as
`master-a.example.com` then you must use fully-qualified domain names in the
`cluster.initial_master_nodes` list too; conversely if your node names are bare
hostnames (without the `.example.com` suffix) then you must use bare hostnames
in the `cluster.initial_master_nodes` list. If you use a mix of fully-qualified
and bare hostnames, or there is some other mismatch between `node.name` and
`cluster.initial_master_nodes`, then the cluster will not form successfully and
you will see log messages like the following.

[source,text]
--------------------------------------------------
[master-a.example.com] master not discovered yet, this node has
not previously joined a bootstrapped (v7+) cluster, and this
node must discover master-eligible nodes [master-a, master-b] to
bootstrap a cluster: have discovered [{master-b.example.com}{...
--------------------------------------------------

This message shows the node names `master-a.example.com` and
`master-b.example.com` as well as the `cluster.initial_master_nodes` entries
`master-a` and `master-b`, and it is clear from this message that they do not
match exactly.
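
One way to avoid such a mismatch, sketched here with made-up fully-qualified
names, is to set `node.name` explicitly on each node and use exactly the same
strings in `cluster.initial_master_nodes`:

[source,yaml]
--------------------------------------------------
node.name: master-a.example.com
cluster.initial_master_nodes:
- master-a.example.com
- master-b.example.com
- master-c.example.com
--------------------------------------------------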

==================================================

[float]
==== Choosing a cluster name

The <<cluster.name,`cluster.name`>> setting enables you to create multiple
clusters which are separated from each other. Nodes verify that they agree on
their cluster name when they first connect to each other, and Elasticsearch
will only form a cluster from nodes that all have the same cluster name. The
default value for the cluster name is `elasticsearch`, but it is recommended to
change this to reflect the logical name of the cluster.
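
For example, a cluster used for production logging might be named as follows in
each node's `elasticsearch.yml`; the name itself is just an illustration:

[source,yaml]
--------------------------------------------------
cluster.name: logging-prod
--------------------------------------------------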

[float]
==== Auto-bootstrapping in development mode

If the cluster is running with a completely default configuration then it will
automatically bootstrap a cluster based on the nodes that could be discovered to
be running on the same host within a short time after startup. This means that
by default it is possible to start up several nodes on a single machine and have
them automatically form a cluster, which is very useful for development
environments and experimentation. However, since nodes may not always
successfully discover each other quickly enough, this automatic bootstrapping
cannot be relied upon and cannot be used in production deployments.

If any of the following settings are configured then auto-bootstrapping will not
take place, and you must configure `cluster.initial_master_nodes` as described
in the <<modules-discovery-bootstrap-cluster,section on cluster bootstrapping>>:

* `discovery.seed_providers`
* `discovery.seed_hosts`
* `cluster.initial_master_nodes`
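
For example, the following sketch of an `elasticsearch.yml` (the hostnames are
made up) configures two of these settings, which is enough to disable
auto-bootstrapping and require explicit cluster bootstrapping instead:

[source,yaml]
--------------------------------------------------
discovery.seed_hosts:
- master-a.example.com
- master-b.example.com
- master-c.example.com
cluster.initial_master_nodes:
- master-a
- master-b
- master-c
--------------------------------------------------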
  109. [NOTE]
  110. ==================================================
  111. [[modules-discovery-bootstrap-cluster-joining]] If you start an {es} node
  112. without configuring these settings then it will start up in development mode and
  113. auto-bootstrap itself into a new cluster. If you start some {es} nodes on
  114. different hosts then by default they will not discover each other and will form
  115. a different cluster on each host. {es} will not merge separate clusters together
  116. after they have formed, even if you subsequently try and configure all the nodes
  117. into a single cluster. This is because there is no way to merge these separate
  118. clusters together without a risk of data loss. You can tell that you have formed
  119. separate clusters by checking the cluster UUID reported by `GET /` on each node.
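
For example, if two hypothetical hosts `host-1` and `host-2` report different
values here, they have formed separate clusters (the UUIDs below are made up):

[source,bash]
--------------------------------------------------
$ curl -s http://host-1:9200/ | grep cluster_uuid
  "cluster_uuid" : "x9fPrG3sQbyeI_lDiNsdVw",
$ curl -s http://host-2:9200/ | grep cluster_uuid
  "cluster_uuid" : "q2NQmbiORQ-LcLXj5Kv38g",
--------------------------------------------------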

If you intended to form a single cluster then you should start again:

* Take a <<modules-snapshots,snapshot>> of each of the single-host clusters if
  you do not want to lose any data that they hold. Note that each cluster must
  use its own snapshot repository.
* Shut down all the nodes.
* Completely wipe each node by deleting the contents of their
  <<data-path,data folders>>.
* Configure `cluster.initial_master_nodes` as described above.
* Restart all the nodes and verify that they have formed a single cluster.
* <<modules-snapshots,Restore>> any snapshots as required.

==================================================