[[modules-cluster]]
== Cluster

[float]
[[shards-allocation]]
=== Shards Allocation

Shards allocation is the process of allocating shards to nodes. This can
happen during initial recovery, replica allocation, rebalancing, or when
nodes are added or removed.

The following settings may be used:

`cluster.routing.allocation.allow_rebalance`::
    Controls when rebalancing will happen, based on the total state of
    all the index shards in the cluster. `always`,
    `indices_primaries_active`, and `indices_all_active` are allowed,
    defaulting to `indices_all_active` to reduce chatter during
    initial recovery.

`cluster.routing.allocation.cluster_concurrent_rebalance`::
    Controls how many concurrent shard rebalances are allowed cluster
    wide. Defaults to `2`.

`cluster.routing.allocation.node_initial_primaries_recoveries`::
    Controls the number of initial recoveries of primaries that are
    allowed per node. Since most of the time the local gateway is used,
    these recoveries are fast, and more of them can be handled per node
    without creating load.

`cluster.routing.allocation.node_concurrent_recoveries`::
    How many concurrent recoveries are allowed to happen on a node.
    Defaults to `2`.

`cluster.routing.allocation.enable`::
    Controls shard allocation for all indices, by allowing specific
    kinds of shard to be allocated.
    added[1.0.0.RC1,Replaces `cluster.routing.allocation.disable*`]
    Can be set to:
    * `all` (default) - Allows shard allocation for all kinds of shards.
    * `primaries` - Allows shard allocation only for primary shards.
    * `new_primaries` - Allows shard allocation only for primary shards for new indices.
    * `none` - No shard allocations of any kind are allowed for any index.

`cluster.routing.allocation.disable_new_allocation`::
    deprecated[1.0.0.RC1,Replaced by `cluster.routing.allocation.enable`]

`cluster.routing.allocation.disable_allocation`::
    deprecated[1.0.0.RC1,Replaced by `cluster.routing.allocation.enable`]

`cluster.routing.allocation.disable_replica_allocation`::
    deprecated[1.0.0.RC1,Replaced by `cluster.routing.allocation.enable`]

`cluster.routing.allocation.same_shard.host`::
    Performs a check to prevent allocation of multiple instances of the
    same shard on a single host, based on host name and host address.
    Defaults to `false`, meaning that no check is performed by default.
    This setting only applies if multiple nodes are started on the same
    machine.

`indices.recovery.concurrent_streams`::
    The number of streams to open (on a *node* level) to recover a
    shard from a peer shard. Defaults to `3`.

[float]
[[allocation-awareness]]
=== Shard Allocation Awareness

Cluster allocation awareness allows shard and replica allocation to be
configured across generic attributes associated with nodes. Let's
explain it through an example:

Assume we have several racks. When we start a node, we can configure an
attribute called `rack_id` (any attribute name works), for example, here
is a sample config:

----------------------
node.rack_id: rack_one
----------------------

The above sets an attribute called `rack_id` for the relevant node with
a value of `rack_one`.
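As a quick sanity check (a sketch, assuming a node listening on
`localhost:9200`), attributes configured this way are reported per node
by the nodes info API:

[source,js]
--------------------------------------------------
curl -XGET 'localhost:9200/_nodes?pretty'
--------------------------------------------------

Each node in the response includes an `attributes` section where
`rack_id: rack_one` should appear for the node configured above.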
Now, we need to configure the `rack_id` attribute as one of the
awareness allocation attributes (set it in the config of *all* (master
eligible) nodes):

--------------------------------------------------------
cluster.routing.allocation.awareness.attributes: rack_id
--------------------------------------------------------

The above means that the `rack_id` attribute will be used for awareness
based allocation of a shard and its replicas. For example, let's say we
start 2 nodes with `node.rack_id` set to `rack_one`, and deploy a
single index with 5 shards and 1 replica. The index will be fully
deployed on the current nodes (5 shards and 1 replica each, a total of
10 shards).

Now, if we start two more nodes, with `node.rack_id` set to `rack_two`,
shards will relocate to even out the number of shards across the nodes,
but a shard and its replica will not be allocated to nodes with the same
`rack_id` value.

The awareness attributes can hold several values, for example:

-------------------------------------------------------------
cluster.routing.allocation.awareness.attributes: rack_id,zone
-------------------------------------------------------------

*NOTE*: When using awareness attributes, shards will not be allocated to
nodes that don't have values set for those attributes.

[float]
[[forced-awareness]]
=== Forced Awareness

Sometimes we know in advance the number of values an awareness attribute
can have, and moreover, we would like never to have more replicas than
needed allocated on a specific group of nodes with the same awareness
attribute value. For that, we can force awareness on specific
attributes.

For example, let's say we have an awareness attribute called `zone`, and
we know we are going to have two zones, `zone1` and `zone2`.
Here is how we can force awareness on a node:

[source,js]
-------------------------------------------------------------------
cluster.routing.allocation.awareness.force.zone.values: zone1,zone2
cluster.routing.allocation.awareness.attributes: zone
-------------------------------------------------------------------

Now, let's say we start 2 nodes with `node.zone` set to `zone1` and
create an index with 5 shards and 1 replica. The index will be created,
but only 5 shards will be allocated (with no replicas). Only when we
start more nodes with `node.zone` set to `zone2` will the replicas be
allocated.

[float]
==== Automatic Preference When Searching / GETing

When executing a search, or doing a get, the node receiving the request
will prefer to execute the request on shards that exist on nodes that
have the same attribute values as the executing node.

[float]
==== Realtime Settings Update

The settings can be updated using the <<cluster-update-settings,cluster update settings API>>
on a live cluster.

[float]
[[allocation-filtering]]
=== Shard Allocation Filtering

Allows control over the allocation of indices on nodes based on
include/exclude filters. The filters can be set both on the index level
and on the cluster level. Let's start with an example of setting it on
the cluster level:

Let's say we have 4 nodes, each with a specific attribute called `tag`
associated with it (the name of the attribute can be any name). Each
node has a specific value associated with `tag`. Node 1 has a setting
`node.tag: value1`, Node 2 a setting of `node.tag: value2`, and so on.

We can create an index that will only deploy on nodes that have `tag`
set to `value1` and `value2` by setting
`index.routing.allocation.include.tag` to `value1,value2`.
For example:

[source,js]
--------------------------------------------------
curl -XPUT localhost:9200/test/_settings -d '{
    "index.routing.allocation.include.tag" : "value1,value2"
}'
--------------------------------------------------

On the other hand, we can create an index that will be deployed on all
nodes except for nodes with a `tag` of value `value3` by setting
`index.routing.allocation.exclude.tag` to `value3`. For example:

[source,js]
--------------------------------------------------
curl -XPUT localhost:9200/test/_settings -d '{
    "index.routing.allocation.exclude.tag" : "value3"
}'
--------------------------------------------------

`index.routing.allocation.require.*` can be used to specify a number of
rules, all of which MUST match in order for a shard to be allocated to a
node. This is in contrast to `include`, which will include a node if ANY
rule matches.

The `include`, `exclude` and `require` values can contain simple
wildcard patterns, for example, `value1*`. A special attribute name
called `_ip` can be used to match on node IP addresses. In addition, the
`_host` attribute can be used to match on either the node's hostname or
its IP address. Similarly, the `_name` and `_id` attributes can be used
to match on the node name and node id respectively.

Obviously a node can have several attributes associated with it, and
both the attribute name and value are controlled in the setting.
For example, here is a sample of several node configurations:

[source,js]
--------------------------------------------------
node.group1: group1_value1
node.group2: group2_value4
--------------------------------------------------

In the same manner, `include`, `exclude` and `require` can work against
several attributes, for example:

[source,js]
--------------------------------------------------
curl -XPUT localhost:9200/test/_settings -d '{
    "index.routing.allocation.include.group1" : "xxx",
    "index.routing.allocation.include.group2" : "yyy",
    "index.routing.allocation.exclude.group3" : "zzz",
    "index.routing.allocation.require.group4" : "aaa"
}'
--------------------------------------------------

The provided settings can also be updated in real time using the update
settings API, allowing indices (shards) to be "moved" around in real
time.

Cluster wide filtering can also be defined, and updated in real time
using the cluster update settings API. This setting can come in handy
for things like decommissioning nodes (even if the replica count is set
to 0). Here is a sample of how to decommission a node based on its `_ip`
address:

[source,js]
--------------------------------------------------
curl -XPUT localhost:9200/_cluster/settings -d '{
    "transient" : {
        "cluster.routing.allocation.exclude._ip" : "10.0.0.1"
    }
}'
--------------------------------------------------
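The allocation settings described above can likewise be toggled at
runtime through the cluster update settings API. As one common pattern
(a sketch, assuming a node on `localhost:9200`), allocation can be
temporarily restricted with `cluster.routing.allocation.enable` before
node maintenance and restored afterwards:

[source,js]
--------------------------------------------------
curl -XPUT localhost:9200/_cluster/settings -d '{
    "transient" : {
        "cluster.routing.allocation.enable" : "none"
    }
}'

curl -XPUT localhost:9200/_cluster/settings -d '{
    "transient" : {
        "cluster.routing.allocation.enable" : "all"
    }
}'
--------------------------------------------------

Using a `transient` setting here means the restriction does not survive
a full cluster restart; use `persistent` if it should.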