@@ -5,7 +5,7 @@ You can use custom node attributes as _awareness attributes_ to enable {es}
to take your physical hardware configuration into account when allocating shards.
If {es} knows which nodes are on the same physical server, in the same rack, or
in the same zone, it can distribute the primary shard and its replica shards to
-minimise the risk of losing all shard copies in the event of a failure.
+minimize the risk of losing all shard copies in the event of a failure.

When shard allocation awareness is enabled with the
<<dynamic-cluster-setting,dynamic>>
@@ -19,22 +19,27 @@ allocated in each location. If the number of nodes in each location is
unbalanced and there are a lot of replicas, replica shards might be left
unassigned.

+TIP: Learn more about <<high-availability-cluster-design-large-clusters,designing resilient clusters>>.
+
[[enabling-awareness]]
===== Enabling shard allocation awareness

To enable shard allocation awareness:

-. Specify the location of each node with a custom node attribute. For example,
-if you want Elasticsearch to distribute shards across different racks, you might
-set an awareness attribute called `rack_id` in each node's `elasticsearch.yml`
-config file.
+. Specify the location of each node with a custom node attribute. For example,
+if you want Elasticsearch to distribute shards across different racks, you might
+use an awareness attribute called `rack_id`.
++
+You can set custom attributes in two ways:
+
+- By editing the `elasticsearch.yml` config file:
+
[source,yaml]
--------------------------------------------------------
node.attr.rack_id: rack_one
--------------------------------------------------------
+
-You can also set custom attributes when you start a node:
+- Using the `-E` command line argument when you start a node:
+
[source,sh]
--------------------------------------------------------
@@ -56,17 +61,33 @@ cluster.routing.allocation.awareness.attributes: rack_id <1>
+
You can also use the
<<cluster-update-settings,cluster-update-settings>> API to set or update
-a cluster's awareness attributes.
+a cluster's awareness attributes:
++
+[source,console]
+--------------------------------------------------
+PUT /_cluster/settings
+{
+  "persistent" : {
+    "cluster.routing.allocation.awareness.attributes" : "rack_id"
+  }
+}
+--------------------------------------------------
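+
+To confirm the value that is currently applied, you can retrieve it with the
+get cluster settings API, filtering the response to the awareness settings:
+
+[source,console]
+--------------------------------------------------
+GET /_cluster/settings?filter_path=persistent.cluster.routing.allocation.awareness
+--------------------------------------------------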

With this example configuration, if you start two nodes with
`node.attr.rack_id` set to `rack_one` and create an index with 5 primary
shards and 1 replica of each primary, all primaries and replicas are
-allocated across the two nodes.
+allocated across the two nodes.
+
+.All primaries and replicas allocated across two nodes in the same rack
+image::images/shard-allocation/shard-allocation-awareness-one-rack.png[All primaries and replicas are allocated across two nodes in the same rack]

If you add two nodes with `node.attr.rack_id` set to `rack_two`,
{es} moves shards to the new nodes, ensuring (if possible)
that no two copies of the same shard are in the same rack.

+.Primaries and replicas allocated across four nodes in two racks, with no two copies of the same shard in the same rack
+image::images/shard-allocation/shard-allocation-awareness-two-racks.png[Primaries and replicas are allocated across four nodes in two racks with no two copies of the same shard in the same rack]
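+
+You can inspect the resulting distribution with the cat shards API, which
+lists each shard copy and the node it is allocated to:
+
+[source,console]
+--------------------------------------------------
+GET /_cat/shards?v=true&h=index,shard,prirep,state,node
+--------------------------------------------------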
+
If `rack_two` fails and takes down both its nodes, by default {es}
allocates the lost shard copies to nodes in `rack_one`. To prevent multiple
copies of a particular shard from being allocated in the same location, you can