
[[modules-cluster]]
== Cluster

[float]
[[shards-allocation]]
=== Shards Allocation

Shard allocation is the process of allocating shards to nodes. It can
happen during initial recovery, replica allocation, rebalancing, or when
nodes are added or removed.

The following settings may be used:
`cluster.routing.allocation.allow_rebalance`::
    Controls when rebalancing will happen based on the total state of
    all the index shards in the cluster. Can be set to `always`,
    `indices_primaries_active`, or `indices_all_active`, defaulting to
    `indices_all_active` to reduce chatter during initial recovery.
`cluster.routing.allocation.cluster_concurrent_rebalance`::
    Controls how many concurrent shard rebalances are allowed cluster
    wide. Defaults to `2`.
`cluster.routing.allocation.node_initial_primaries_recoveries`::
    Controls the number of initial recoveries of primaries that are
    allowed per node. Since the local gateway is used most of the time,
    these recoveries should be fast, and more of them can be handled per
    node without creating load.
`cluster.routing.allocation.node_concurrent_recoveries`::
    How many concurrent recoveries are allowed to happen on a node.
    Defaults to `2`.
added[1.0.0.RC1]

`cluster.routing.allocation.enable`::
    Controls shard allocation for all indices, by allowing specific
    kinds of shard to be allocated. Can be set to:

* `all` (default) - Allows shard allocation for all kinds of shards.
* `primaries` - Allows shard allocation only for primary shards.
* `new_primaries` - Allows shard allocation only for primary shards for new indices.
* `none` - No shard allocations of any kind are allowed for any index.
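Since this setting is dynamic, it is typically used to temporarily stop allocation (for example, during maintenance) and re-enable it afterwards. A sketch of such an update, assuming a node listening on `localhost:9200`:

[source,js]
--------------------------------------------------
curl -XPUT localhost:9200/_cluster/settings -d '{
    "transient" : {
        "cluster.routing.allocation.enable" : "none"
    }
}'
--------------------------------------------------

Setting the value back to `all` resumes normal allocation.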
`cluster.routing.allocation.disable_new_allocation`::
    Allows disabling new primary allocations. Note, this will prevent
    allocations for newly created indices. This setting mainly makes
    sense when updated dynamically using the cluster update settings
    API. It has been deprecated in favour of
    `cluster.routing.allocation.enable`.
`cluster.routing.allocation.disable_allocation`::
    Allows disabling either primary or replica allocation (does not
    apply to newly created primaries, see `disable_new_allocation`
    above). Note, a replica will still be promoted to primary if one
    does not exist. This setting mainly makes sense when updated
    dynamically using the cluster update settings API. It has been
    deprecated in favour of `cluster.routing.allocation.enable`.
`cluster.routing.allocation.disable_replica_allocation`::
    Allows disabling only replica allocation. Similar to the previous
    setting, it mainly makes sense when updated dynamically using the
    cluster update settings API. It has been deprecated in favour of
    `cluster.routing.allocation.enable`.
`cluster.routing.allocation.same_shard.host`::
    Prevents multiple instances of the same shard from being allocated
    on a single host. Defaults to `false`. This setting only applies if
    multiple nodes are started on the same machine.
`indices.recovery.concurrent_streams`::
    The number of streams to open (on a *node* level) to recover a
    shard from a peer shard. Defaults to `3`.
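Putting the recovery-related settings together, here is a sample `elasticsearch.yml` fragment (the values shown are illustrative, not recommendations):

--------------------------------------------------
cluster.routing.allocation.node_initial_primaries_recoveries: 4
cluster.routing.allocation.node_concurrent_recoveries: 2
indices.recovery.concurrent_streams: 3
--------------------------------------------------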
[float]
[[allocation-awareness]]
=== Shard Allocation Awareness

Cluster allocation awareness allows you to configure shard and replica
allocation across generic attributes associated with nodes. Let's
explain it through an example:
Assume we have several racks. When we start a node, we can configure an
attribute called `rack_id` (any attribute name works), for example, here
is a sample config:

----------------------
node.rack_id: rack_one
----------------------
The above sets an attribute called `rack_id` for the relevant node with
a value of `rack_one`. Now, we need to configure the `rack_id` attribute
as one of the awareness allocation attributes (set it in the config of
*all* (master eligible) nodes):

--------------------------------------------------------
cluster.routing.allocation.awareness.attributes: rack_id
--------------------------------------------------------
The above means that the `rack_id` attribute will be used for
awareness-based allocation of shards and their replicas. For example,
let's say we start 2 nodes with `node.rack_id` set to `rack_one`, and
deploy a single index with 5 shards and 1 replica. The index will be
fully deployed on the current nodes (5 shards with 1 replica each, a
total of 10 shards).

Now, if we start two more nodes with `node.rack_id` set to `rack_two`,
shards will relocate to even out the number of shards across the nodes,
but a shard and its replica will not be allocated to nodes with the same
`rack_id` value.
The awareness attributes can hold several values, for example:

-------------------------------------------------------------
cluster.routing.allocation.awareness.attributes: rack_id,zone
-------------------------------------------------------------

*NOTE*: When using awareness attributes, shards will not be allocated to
nodes that don't have values set for those attributes.
[float]
[[forced-awareness]]
=== Forced Awareness

Sometimes we know in advance the number of values an awareness attribute
can have, and moreover, we would like never to have more replicas than
needed allocated on a specific group of nodes with the same awareness
attribute value. For that, we can force awareness on specific
attributes.
For example, let's say we have an awareness attribute called `zone`, and
we know we are going to have two zones, `zone1` and `zone2`. Here is how
we can force awareness on a node:

[source,js]
-------------------------------------------------------------------
cluster.routing.allocation.awareness.force.zone.values: zone1,zone2
cluster.routing.allocation.awareness.attributes: zone
-------------------------------------------------------------------
Now, let's say we start 2 nodes with `node.zone` set to `zone1` and
create an index with 5 shards and 1 replica. The index will be created,
but only the 5 primary shards will be allocated (with no replicas). Only
when we start more nodes with `node.zone` set to `zone2` will the
replicas be allocated.
[float]
==== Automatic Preference When Searching / GETing

When executing a search, or doing a get, the node receiving the request
will prefer to execute the request on shards that exist on nodes that
have the same attribute values as the executing node.
[float]
==== Realtime Settings Update

The settings can be updated using the <<cluster-update-settings,cluster update settings API>> on a live cluster.
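For example, the awareness attributes themselves can be set on a running cluster instead of in the config file. A sketch, assuming a node listening on `localhost:9200`:

[source,js]
--------------------------------------------------
curl -XPUT localhost:9200/_cluster/settings -d '{
    "persistent" : {
        "cluster.routing.allocation.awareness.attributes" : "rack_id"
    }
}'
--------------------------------------------------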
[float]
[[allocation-filtering]]
=== Shard Allocation Filtering

Allows control of the allocation of indices on nodes based on
include/exclude filters. The filters can be set both on the index level
and on the cluster level. Let's start with an example of setting it on
the index level:
Let's say we have 4 nodes, each with a specific attribute called `tag`
associated with it (the name of the attribute can be any name). Each
node has a specific value associated with `tag`. Node 1 has a setting
`node.tag: value1`, Node 2 a setting of `node.tag: value2`, and so on.

We can create an index that will only deploy on nodes that have `tag`
set to `value1` or `value2` by setting
`index.routing.allocation.include.tag` to `value1,value2`. For example:
[source,js]
--------------------------------------------------
curl -XPUT localhost:9200/test/_settings -d '{
    "index.routing.allocation.include.tag" : "value1,value2"
}'
--------------------------------------------------
On the other hand, we can create an index that will be deployed on all
nodes except for nodes with a `tag` of value `value3` by setting
`index.routing.allocation.exclude.tag` to `value3`. For example:
[source,js]
--------------------------------------------------
curl -XPUT localhost:9200/test/_settings -d '{
    "index.routing.allocation.exclude.tag" : "value3"
}'
--------------------------------------------------
`index.routing.allocation.require.*` can be used to specify a number of
rules, all of which MUST match in order for a shard to be allocated to a
node. This is in contrast to `include`, which will include a node if ANY
rule matches.
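Continuing the `tag` example above, a shard can be restricted to nodes that have `tag` set to exactly `value1`:

[source,js]
--------------------------------------------------
curl -XPUT localhost:9200/test/_settings -d '{
    "index.routing.allocation.require.tag" : "value1"
}'
--------------------------------------------------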
The `include`, `exclude` and `require` values can contain simple
wildcard matches, for example, `value1*`. A special attribute name
called `_ip` can be used to match on node IP addresses. In addition, the
`_host` attribute can be used to match on either the node's hostname or
its IP address.
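For example (the addresses shown are illustrative), a wildcard on the `_ip` attribute can restrict an index to nodes within a subnet:

[source,js]
--------------------------------------------------
curl -XPUT localhost:9200/test/_settings -d '{
    "index.routing.allocation.include._ip" : "10.0.0.*"
}'
--------------------------------------------------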
Obviously a node can have several attributes associated with it, and
both the attribute name and value are controlled in the setting. For
example, here is a sample of several node configurations:

[source,js]
--------------------------------------------------
node.group1: group1_value1
node.group2: group2_value4
--------------------------------------------------
In the same manner, `include`, `exclude` and `require` can work against
several attributes, for example:

[source,js]
--------------------------------------------------
curl -XPUT localhost:9200/test/_settings -d '{
    "index.routing.allocation.include.group1" : "xxx",
    "index.routing.allocation.include.group2" : "yyy",
    "index.routing.allocation.exclude.group3" : "zzz",
    "index.routing.allocation.require.group4" : "aaa"
}'
--------------------------------------------------
The provided settings can also be updated in real time using the update
settings API, allowing indices (shards) to be "moved" around in
realtime.

Cluster wide filtering can also be defined, and updated in real time
using the cluster update settings API. This setting can come in handy
for things like decommissioning nodes (even if the replica count is set
to 0). Here is a sample of how to decommission a node based on its `_ip`
address:
[source,js]
--------------------------------------------------
curl -XPUT localhost:9200/_cluster/settings -d '{
    "transient" : {
        "cluster.routing.allocation.exclude._ip" : "10.0.0.1"
    }
}'
--------------------------------------------------