@@ -6,8 +6,9 @@ set of <<master-node,master-eligible nodes>> to be explicitly defined on one or
 more of the master-eligible nodes in the cluster. This is known as _cluster
 bootstrapping_. This is only required the very first time the cluster starts
 up: nodes that have already joined a cluster store this information in their
-data folder and freshly-started nodes that are joining an existing cluster
-obtain this information from the cluster's elected master.
+data folder for use in a <<restart-upgrade,full cluster restart>>, and
+freshly-started nodes that are joining a running cluster obtain this
+information from the cluster's elected master.
 
 The initial set of master-eligible nodes is defined in the
 <<initial_master_nodes,`cluster.initial_master_nodes` setting>>. This should be
@@ -58,19 +59,6 @@ cluster.initial_master_nodes:
   - master-c
 --------------------------------------------------
 
-If it is not possible to use the names of the nodes then you can also use IP
-addresses, or IP addresses and ports, or even a mix of IP addresses and node
-names:
-
-[source,yaml]
---------------------------------------------------
-cluster.initial_master_nodes:
-  - 10.0.10.101
-  - 10.0.10.102:9300
-  - 10.0.10.102:9301
-  - master-node-name
---------------------------------------------------
-
 Like all node settings, it is also possible to specify the initial set of master
 nodes on the command-line that is used to start Elasticsearch:
 
@@ -139,3 +127,29 @@ in the <<modules-discovery-bootstrap-cluster,section on cluster bootstrapping>>:
 * `discovery.seed_providers`
 * `discovery.seed_hosts`
 * `cluster.initial_master_nodes`
+
+[NOTE]
+==================================================
+
+[[modules-discovery-bootstrap-cluster-joining]] If you start an {es} node
+without configuring these settings then it will start up in development mode and
+auto-bootstrap itself into a new cluster. If you start some {es} nodes on
+different hosts then by default they will not discover each other and will form
+a different cluster on each host. {es} will not merge separate clusters together
+after they have formed, even if you subsequently try to configure all the nodes
+into a single cluster. This is because there is no way to merge these separate
+clusters together without a risk of data loss. You can tell that you have formed
+separate clusters by checking the cluster UUID reported by `GET /` on each node.
+If you intended to form a single cluster then you should start again:
+
+* Take a <<modules-snapshots,snapshot>> of each of the single-host clusters if
+  you do not want to lose any data that they hold. Note that each cluster must
+  use its own snapshot repository.
+* Shut down all the nodes.
+* Completely wipe each node by deleting the contents of their
+  <<data-path,data folders>>.
+* Configure `cluster.initial_master_nodes` as described above.
+* Restart all the nodes and verify that they have formed a single cluster.
+* <<modules-snapshots,Restore>> any snapshots as required.
+
+==================================================
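The cluster-UUID check described in the NOTE above can be automated. The sketch below is illustrative and not part of {es}: it assumes you have already fetched the JSON body of `GET /` from each node (for example with `curl`), and it simply groups nodes by their reported `cluster_uuid`; the node names and UUID values in the example are made up.

```python
# Sketch (not part of Elasticsearch): group `GET /` response bodies by the
# cluster UUID each node reports. More than one group means the nodes have
# auto-bootstrapped into separate clusters and need the recovery steps above.
from collections import defaultdict


def group_by_cluster_uuid(responses):
    """Map cluster_uuid -> list of node names, given `GET /` response bodies."""
    clusters = defaultdict(list)
    for body in responses:
        clusters[body["cluster_uuid"]].append(body["name"])
    return dict(clusters)


# Hypothetical example: two nodes that accidentally formed separate clusters.
responses = [
    {"name": "master-a", "cluster_uuid": "example-uuid-1"},
    {"name": "master-b", "cluster_uuid": "example-uuid-2"},
]
clusters = group_by_cluster_uuid(responses)
if len(clusters) > 1:
    print("separate clusters detected:", clusters)
```

If every node reports the same `cluster_uuid`, the dictionary has a single key and the nodes have formed one cluster as intended.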