
[[cluster-shard-allocation-settings]]
==== Cluster-level shard allocation settings

You can use the following settings to control shard allocation and recovery:

[[cluster-routing-allocation-enable]]
`cluster.routing.allocation.enable`::
+
--
(<<dynamic-cluster-setting,Dynamic>>)
Enable or disable allocation for specific kinds of shards:

* `all` - (default) Allows shard allocation for all kinds of shards.
* `primaries` - Allows shard allocation only for primary shards.
* `new_primaries` - Allows shard allocation only for primary shards for new indices.
* `none` - No shard allocations of any kind are allowed for any indices.

This setting does not affect the recovery of local primary shards when
restarting a node. A restarted node that has a copy of an unassigned primary
shard will recover that primary immediately, assuming that its allocation id matches
one of the active allocation ids in the cluster state.
--
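For example, before a planned maintenance operation you might restrict allocation to primary shards only, using the cluster update settings API:

[source,console]
----
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.enable": "primaries"
  }
}
----

Setting the value back to `null` afterwards restores the default (`all`).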
`cluster.routing.allocation.node_concurrent_incoming_recoveries`::
(<<dynamic-cluster-setting,Dynamic>>)
How many concurrent incoming shard recoveries are allowed to happen on a node. Incoming recoveries are the recoveries
where the target shard (most likely the replica unless a shard is relocating) is allocated on the node. Defaults to `2`.

`cluster.routing.allocation.node_concurrent_outgoing_recoveries`::
(<<dynamic-cluster-setting,Dynamic>>)
How many concurrent outgoing shard recoveries are allowed to happen on a node. Outgoing recoveries are the recoveries
where the source shard (most likely the primary unless a shard is relocating) is allocated on the node. Defaults to `2`.

`cluster.routing.allocation.node_concurrent_recoveries`::
(<<dynamic-cluster-setting,Dynamic>>)
A shortcut to set both `cluster.routing.allocation.node_concurrent_incoming_recoveries` and
`cluster.routing.allocation.node_concurrent_outgoing_recoveries`. Defaults to `2`.
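For example, the following request raises both the incoming and outgoing limits at once via the shortcut setting (the value `4` here is only illustrative):

[source,console]
----
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.node_concurrent_recoveries": 4
  }
}
----

This is equivalent to setting the `node_concurrent_incoming_recoveries` and `node_concurrent_outgoing_recoveries` settings to `4` individually.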
`cluster.routing.allocation.node_initial_primaries_recoveries`::
(<<dynamic-cluster-setting,Dynamic>>)
While the recovery of replicas happens over the network, the recovery of
an unassigned primary after node restart uses data from the local disk.
These should be fast so more initial primary recoveries can happen in
parallel on the same node. Defaults to `4`.
[[cluster-routing-allocation-same-shard-host]]
`cluster.routing.allocation.same_shard.host`::
(<<dynamic-cluster-setting,Dynamic>>)
If `true`, forbids multiple copies of a shard from being allocated to
distinct nodes on the same host, i.e. nodes which have the same network
address. Defaults to `false`, meaning that copies of a shard may
sometimes be allocated to nodes on the same host. This setting is only
relevant if you run multiple nodes on each host.
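If you do run several nodes per host and want each shard copy on a different host, you might enable the check like this:

[source,console]
----
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.same_shard.host": true
  }
}
----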
[[shards-rebalancing-settings]]
==== Shard rebalancing settings

A cluster is _balanced_ when it has an equal number of shards on each node, with
all nodes needing equal resources, without having a concentration of shards from
any index on any node. {es} runs an automatic process called _rebalancing_ which
moves shards between the nodes in your cluster to improve its balance.

Rebalancing obeys all other shard allocation rules such as
<<cluster-shard-allocation-filtering,allocation filtering>> and
<<forced-awareness,forced awareness>> which may prevent it from completely
balancing the cluster. In that case, rebalancing strives to achieve the most
balanced cluster possible within the rules you have configured. If you are using
<<data-tiers,data tiers>> then {es} automatically applies allocation filtering
rules to place each shard within the appropriate tier. These rules mean that the
balancer works independently within each tier.

You can use the following settings to control the rebalancing of shards across
the cluster:
`cluster.routing.rebalance.enable`::
+
--
(<<dynamic-cluster-setting,Dynamic>>)
Enable or disable rebalancing for specific kinds of shards:

* `all` - (default) Allows shard balancing for all kinds of shards.
* `primaries` - Allows shard balancing only for primary shards.
* `replicas` - Allows shard balancing only for replica shards.
* `none` - No shard balancing of any kind is allowed for any indices.
--
`cluster.routing.allocation.allow_rebalance`::
+
--
(<<dynamic-cluster-setting,Dynamic>>)
Specify when shard rebalancing is allowed:

* `always` - Always allow rebalancing.
* `indices_primaries_active` - Only when all primaries in the cluster are allocated.
* `indices_all_active` - (default) Only when all shards (primaries and replicas) in the cluster are allocated.
--
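For example, the following request restricts rebalancing to replica shards, and only once every shard in the cluster is allocated:

[source,console]
----
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.rebalance.enable": "replicas",
    "cluster.routing.allocation.allow_rebalance": "indices_all_active"
  }
}
----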
`cluster.routing.allocation.cluster_concurrent_rebalance`::
(<<dynamic-cluster-setting,Dynamic>>)
Defines the number of concurrent shard rebalances that are allowed across the
whole cluster. Defaults to `2`. Note that this setting only controls the number
of concurrent shard relocations due to imbalances in the cluster. This setting
does not limit shard relocations due to
<<cluster-shard-allocation-filtering,allocation filtering>> or
<<forced-awareness,forced awareness>>.
`cluster.routing.allocation.type`::
+
--
Selects the algorithm used for computing the cluster balance. Defaults to
`desired_balance` which selects the _desired balance allocator_. This allocator
runs a background task which computes the desired balance of shards in the
cluster. Once this background task completes, {es} moves shards to their
desired locations.

May also be set to `balanced` to select the legacy _balanced allocator_. This
allocator was the default allocator in versions of {es} before 8.6.0. It runs
in the foreground, preventing the master from doing other work in parallel. It
works by selecting a small number of shard movements which immediately improve
the balance of the cluster, and when those shard movements complete it runs
again and selects another few shards to move. Since this allocator makes its
decisions based only on the current state of the cluster, it will sometimes
move a shard several times while balancing the cluster.
--
[[shards-rebalancing-heuristics]]
==== Shard balancing heuristics settings

Rebalancing works by computing a _weight_ for each node based on its allocation
of shards, and then moving shards between nodes to reduce the weight of the
heavier nodes and increase the weight of the lighter ones. The cluster is
balanced when there is no possible shard movement that can bring the weight of
any node closer to the weight of any other node by more than a configurable
threshold.

The weight of a node depends on the number of shards it holds and on the total
estimated resource usage of those shards expressed in terms of the size of the
shard on disk and the number of threads needed to support write traffic to the
shard. {es} estimates the resource usage of shards belonging to data streams
when they are created by a rollover. The estimated disk size of the new shard
is the mean size of the other shards in the data stream. The estimated write
load of the new shard is a weighted average of the actual write loads of recent
shards in the data stream. Shards that do not belong to the write index of a
data stream have an estimated write load of zero.

The following settings control how {es} combines these values into an overall
measure of each node's weight.
`cluster.routing.allocation.balance.shard`::
(float, <<dynamic-cluster-setting,Dynamic>>)
Defines the weight factor for the total number of shards allocated to each node.
Defaults to `0.45f`. Raising this value increases the tendency of {es} to
equalize the total number of shards across nodes ahead of the other balancing
variables.

`cluster.routing.allocation.balance.index`::
(float, <<dynamic-cluster-setting,Dynamic>>)
Defines the weight factor for the number of shards per index allocated to each
node. Defaults to `0.55f`. Raising this value increases the tendency of {es} to
equalize the number of shards of each index across nodes ahead of the other
balancing variables.

`cluster.routing.allocation.balance.disk_usage`::
(float, <<dynamic-cluster-setting,Dynamic>>)
Defines the weight factor for balancing shards according to their predicted disk
size in bytes. Defaults to `2e-11f`. Raising this value increases the tendency
of {es} to equalize the total disk usage across nodes ahead of the other
balancing variables.

`cluster.routing.allocation.balance.write_load`::
(float, <<dynamic-cluster-setting,Dynamic>>)
Defines the weight factor for the write load of each shard, in terms of the
estimated number of indexing threads needed by the shard. Defaults to `10.0f`.
Raising this value increases the tendency of {es} to equalize the total write
load across nodes ahead of the other balancing variables.

`cluster.routing.allocation.balance.threshold`::
(float, <<dynamic-cluster-setting,Dynamic>>)
The minimum improvement in weight which triggers a rebalancing shard movement.
Defaults to `1.0f`. Raising this value will cause {es} to stop rebalancing
shards sooner, leaving the cluster in a more unbalanced state.
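As a rough sketch only (the actual computation normalizes each term against cluster-wide averages rather than using raw totals), the factors above combine into a per-node weight along these lines, using the default factor values:

[source,txt]
----
weight(node) ~  0.45  * (number of shards on the node)
             +  0.55  * (number of shards of the index under consideration on the node)
             + 2e-11  * (estimated total disk usage of the node, in bytes)
             + 10.0   * (estimated total write load of the node, in indexing threads)
----

A rebalancing shard movement is only made if it improves the weight by more than
`cluster.routing.allocation.balance.threshold`.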
NOTE: Regardless of the result of the balancing algorithm, rebalancing might
not be allowed due to allocation rules such as forced awareness and allocation
filtering.