[[cluster-shard-allocation-settings]]
==== Cluster-level shard allocation settings

You can use the following settings to control shard allocation and recovery:

[[cluster-routing-allocation-enable]]
`cluster.routing.allocation.enable`::
+
--
(<<dynamic-cluster-setting,Dynamic>>)
Enable or disable allocation for specific kinds of shards:

* `all` - (default) Allows shard allocation for all kinds of shards.
* `primaries` - Allows shard allocation only for primary shards.
* `new_primaries` - Allows shard allocation only for primary shards for new indices.
* `none` - No shard allocations of any kind are allowed for any indices.

This setting does not affect the recovery of local primary shards when
restarting a node. A restarted node that has a copy of an unassigned primary
shard will recover that primary immediately, assuming that its allocation id
matches one of the active allocation ids in the cluster state.
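
For example, a common pattern during planned maintenance is to allow only
primary allocation before the work and restore the default afterwards. A
minimal sketch using the cluster update settings API:

[source,console]
----
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.enable": "primaries"
  }
}
----

Setting the value back to `null` afterwards removes the override and restores
the default of `all`.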
--

[[cluster-routing-allocation-same-shard-host]]
`cluster.routing.allocation.same_shard.host`::
(<<dynamic-cluster-setting,Dynamic>>)
If `true`, forbids multiple copies of a shard from being allocated to distinct
nodes on the same host, that is, to nodes with the same network address.
Defaults to `false`, meaning that copies of a shard may sometimes be allocated
to nodes on the same host. This setting is only relevant if you run multiple
nodes on each host.
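+
If you do run multiple nodes per host, you can enable the check with the
cluster update settings API, as in this minimal sketch:
+
[source,console]
----
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.same_shard.host": true
  }
}
----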

`cluster.routing.allocation.node_concurrent_incoming_recoveries`::
(<<dynamic-cluster-setting,Dynamic>>)
How many concurrent incoming shard recoveries are allowed to happen on a
node. Incoming recoveries are the recoveries where the target shard (most
likely the replica unless a shard is relocating) is allocated on the node.
Defaults to `2`. Increasing this setting may cause shard movements to have
a performance impact on other activity in your cluster, but may not make
shard movements complete noticeably sooner. We do not recommend adjusting
this setting from its default of `2`.

`cluster.routing.allocation.node_concurrent_outgoing_recoveries`::
(<<dynamic-cluster-setting,Dynamic>>)
How many concurrent outgoing shard recoveries are allowed to happen on a
node. Outgoing recoveries are the recoveries where the source shard (most
likely the primary unless a shard is relocating) is allocated on the node.
Defaults to `2`. Increasing this setting may cause shard movements to have
a performance impact on other activity in your cluster, but may not make
shard movements complete noticeably sooner. We do not recommend adjusting
this setting from its default of `2`.

`cluster.routing.allocation.node_concurrent_recoveries`::
(<<dynamic-cluster-setting,Dynamic>>)
A shortcut to set both
`cluster.routing.allocation.node_concurrent_incoming_recoveries` and
`cluster.routing.allocation.node_concurrent_outgoing_recoveries`. The
value of this setting takes effect only when the more specific setting is
not configured. Defaults to `2`. Increasing this setting may cause shard
movements to have a performance impact on other activity in your cluster,
but may not make shard movements complete noticeably sooner. We do not
recommend adjusting this setting from its default of `2`.
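+
If you nevertheless need to raise the limit temporarily, for example while
decommissioning a node, a minimal sketch using the cluster update settings API
(setting the value back to `null` later restores the default):
+
[source,console]
----
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.node_concurrent_recoveries": 4
  }
}
----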

`cluster.routing.allocation.node_initial_primaries_recoveries`::
(<<dynamic-cluster-setting,Dynamic>>)
While the recovery of replicas happens over the network, the recovery of
an unassigned primary after node restart uses data from the local disk.
These recoveries should be fast, so more initial primary recoveries can
happen in parallel on each node. Defaults to `4`. Increasing this setting
may cause shard recoveries to have a performance impact on other activity
in your cluster, but may not make shard recoveries complete noticeably
sooner. We do not recommend adjusting this setting from its default of `4`.
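+
You can check the values of these recovery settings that are currently in
effect, including the defaults, with the cluster get settings API:
+
[source,console]
----
GET _cluster/settings?include_defaults=true&flat_settings=true
----
+
Settings you have not overridden appear in the `defaults` section of the
response.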

[[shards-rebalancing-settings]]
==== Shard rebalancing settings

A cluster is _balanced_ when it has an equal number of shards on each node,
with no concentration of shards from any index on any node, so that all nodes
require roughly equal resources. {es} runs an automatic process called
_rebalancing_ which moves shards between the nodes in your cluster to improve
its balance.

Rebalancing obeys all other shard allocation rules such as
<<cluster-shard-allocation-filtering,allocation filtering>> and
<<forced-awareness,forced awareness>> which may prevent it from completely
balancing the cluster. In that case, rebalancing strives to achieve the most
balanced cluster possible within the rules you have configured. If you are using
<<data-tiers,data tiers>> then {es} automatically applies allocation filtering
rules to place each shard within the appropriate tier. These rules mean that the
balancer works independently within each tier.

You can use the following settings to control the rebalancing of shards across
the cluster:

`cluster.routing.allocation.allow_rebalance`::
+
--
(<<dynamic-cluster-setting,Dynamic>>)
Specify when shard rebalancing is allowed:

* `always` - Always allow rebalancing.
* `indices_primaries_active` - Only when all primaries in the cluster are allocated.
* `indices_all_active` - (default) Only when all shards (primaries and replicas) in the cluster are allocated.
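
For example, to let rebalancing start as soon as all primaries are allocated,
rather than waiting for all replicas too, you could apply the setting as in
this minimal sketch:

[source,console]
----
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.allow_rebalance": "indices_primaries_active"
  }
}
----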
--

`cluster.routing.rebalance.enable`::
+
--
(<<dynamic-cluster-setting,Dynamic>>)
Enable or disable rebalancing for specific kinds of shards:

* `all` - (default) Allows shard balancing for all kinds of shards.
* `primaries` - Allows shard balancing only for primary shards.
* `replicas` - Allows shard balancing only for replica shards.
* `none` - No shard balancing of any kind is allowed for any indices.

Rebalancing is important to ensure the cluster returns to a healthy and fully
resilient state after a disruption. If you adjust this setting, remember to set
it back to `all` as soon as possible.
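
For example, you might suspend rebalancing while investigating a problem and
restore the default once you are done, as in this minimal sketch:

[source,console]
----
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.rebalance.enable": "none"
  }
}
----

Setting the value back to `all` (or to `null` to remove the override)
re-enables rebalancing.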
--

`cluster.routing.allocation.cluster_concurrent_rebalance`::
(<<dynamic-cluster-setting,Dynamic>>)
Defines the number of concurrent shard rebalances allowed across the whole
cluster. Defaults to `2`. Note that this setting only controls the number of
concurrent shard relocations due to imbalances in the cluster. This setting
does not limit shard relocations due to
<<cluster-shard-allocation-filtering,allocation filtering>> or
<<forced-awareness,forced awareness>>. Increasing this setting may cause the
cluster to use additional resources moving shards between nodes, so we
generally do not recommend adjusting this setting from its default of `2`.

`cluster.routing.allocation.type`::
+
--
Selects the algorithm used for computing the cluster balance. Defaults to
`desired_balance` which selects the _desired balance allocator_. This allocator
runs a background task which computes the desired balance of shards in the
cluster. Once this background task completes, {es} moves shards to their
desired locations.

deprecated:[8.8,The `balanced` allocator type is deprecated and no longer recommended]
May also be set to `balanced` to select the legacy _balanced allocator_. This
allocator was the default allocator in versions of {es} before 8.6.0. It runs
in the foreground, preventing the master from doing other work in parallel. It
works by selecting a small number of shard movements which immediately improve
the balance of the cluster, and when those shard movements complete it runs
again and selects another few shards to move. Since this allocator makes its
decisions based only on the current state of the cluster, it will sometimes
move a shard several times while balancing the cluster.
--

[[shards-rebalancing-heuristics]]
==== Shard balancing heuristics settings

Rebalancing works by computing a _weight_ for each node based on its allocation
of shards, and then moving shards between nodes to reduce the weight of the
heavier nodes and increase the weight of the lighter ones. The cluster is
balanced when there is no possible shard movement that can bring the weight of
any node closer to the weight of any other node by more than a configurable
threshold.

The weight of a node depends on the number of shards it holds and on the total
estimated resource usage of those shards, expressed in terms of the size of the
shard on disk and the number of threads needed to support write traffic to the
shard. {es} estimates the resource usage of shards belonging to data streams
when they are created by a rollover. The estimated disk size of the new shard
is the mean size of the other shards in the data stream. The estimated write
load of the new shard is a weighted average of the actual write loads of recent
shards in the data stream. Shards that do not belong to the write index of a
data stream have an estimated write load of zero.

The following settings control how {es} combines these values into an overall
measure of each node's weight.
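
Conceptually, each factor below multiplies one of these per-node quantities and
the results are summed into the node's weight. The following is an illustrative
sketch of that combination, not the exact formula {es} uses internally:

[source,txt]
----
weight(node) ~ balance.shard      * number_of_shards(node)
             + balance.index      * shards_of_one_index(node)
             + balance.disk_usage * total_disk_bytes(node)
             + balance.write_load * total_write_load(node)
----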

`cluster.routing.allocation.balance.threshold`::
(float, <<dynamic-cluster-setting,Dynamic>>)
The minimum improvement in weight which triggers a rebalancing shard movement.
Defaults to `1.0f`. Raising this value will cause {es} to stop rebalancing
shards sooner, leaving the cluster in a more unbalanced state.

`cluster.routing.allocation.balance.shard`::
(float, <<dynamic-cluster-setting,Dynamic>>)
Defines the weight factor for the total number of shards allocated to each node.
Defaults to `0.45f`. Raising this value increases the tendency of {es} to
equalize the total number of shards across nodes ahead of the other balancing
variables.

`cluster.routing.allocation.balance.index`::
(float, <<dynamic-cluster-setting,Dynamic>>)
Defines the weight factor for the number of shards per index allocated to each
node. Defaults to `0.55f`. Raising this value increases the tendency of {es} to
equalize the number of shards of each index across nodes ahead of the other
balancing variables.

`cluster.routing.allocation.balance.disk_usage`::
(float, <<dynamic-cluster-setting,Dynamic>>)
Defines the weight factor for balancing shards according to their predicted disk
size in bytes. Defaults to `2e-11f`. Raising this value increases the tendency
of {es} to equalize the total disk usage across nodes ahead of the other
balancing variables.

`cluster.routing.allocation.balance.write_load`::
(float, <<dynamic-cluster-setting,Dynamic>>)
Defines the weight factor for the write load of each shard, in terms of the
estimated number of indexing threads needed by the shard. Defaults to `10.0f`.
Raising this value increases the tendency of {es} to equalize the total write
load across nodes ahead of the other balancing variables.

[NOTE]
====
* If you have a large cluster, it may be unnecessary to keep it in
a perfectly balanced state at all times. It is less resource-intensive for the
cluster to operate in a somewhat unbalanced state rather than to perform all
the shard movements needed to achieve the perfect balance. If so, increase the
value of `cluster.routing.allocation.balance.threshold` to define the
acceptable imbalance between nodes. For instance, if you have an average of 500
shards per node and can accept a difference of 5% (25 typical shards) between
nodes, set `cluster.routing.allocation.balance.threshold` to `25`, as shown in
the example after this note.

* We do not recommend adjusting the values of the heuristic weight factor
settings. The default values work well in all reasonable clusters. Although
different values may improve the current balance in some ways, it is possible
that they will create unexpected problems in the future or prevent the cluster
from gracefully handling an unexpected disruption.

* Regardless of the result of the balancing algorithm, rebalancing might
not be allowed due to allocation rules such as forced awareness and allocation
filtering. Use the <<cluster-allocation-explain>> API to explain the current
allocation of shards.
====
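
Continuing the example from the note above, this request raises the threshold
so that a difference of around 25 typical shards between nodes is tolerated:

[source,console]
----
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.balance.threshold": 25
  }
}
----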