[[modules-gateway]]
=== Local gateway settings

The local gateway stores the cluster state and shard data across full
cluster restarts.

The following _static_ settings, which must be set on every master node,
control how long a freshly elected master should wait before it tries to
recover the cluster state and the cluster's data:

`gateway.expected_nodes`::
deprecated:[7.7.0, This setting will be removed in 8.0. You should use `gateway.expected_data_nodes` instead.]
The number of (data or master) nodes that are expected to be in the cluster.
Recovery of local shards will start as soon as the expected number of
nodes have joined the cluster. Defaults to `0`.

`gateway.expected_master_nodes`::
deprecated:[7.7.0, This setting will be removed in 8.0. You should use `gateway.expected_data_nodes` instead.]
The number of master nodes that are expected to be in the cluster.
Recovery of local shards will start as soon as the expected number of
master nodes have joined the cluster. Defaults to `0`.

`gateway.expected_data_nodes`::
The number of data nodes that are expected to be in the cluster.
Recovery of local shards will start as soon as the expected number of
data nodes have joined the cluster. Defaults to `0`.

`gateway.recover_after_time`::
If the expected number of nodes is not achieved, the recovery process waits
for the configured amount of time before trying to recover regardless.
Defaults to `5m` if one of the `expected_nodes` settings is configured.
+
Once the `recover_after_time` duration has timed out, recovery will start
as long as the following conditions are met:

`gateway.recover_after_nodes`::
deprecated:[7.7.0, This setting will be removed in 8.0. You should use `gateway.recover_after_data_nodes` instead.]
Recover as long as this many data or master nodes have joined the cluster.

`gateway.recover_after_master_nodes`::
deprecated:[7.7.0, This setting will be removed in 8.0. You should use `gateway.recover_after_data_nodes` instead.]
Recover as long as this many master nodes have joined the cluster.

`gateway.recover_after_data_nodes`::
Recover as long as this many data nodes have joined the cluster.

NOTE: These settings only take effect on a full cluster restart.
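
For example, with the following `elasticsearch.yml` snippet, a freshly
elected master starts recovering the cluster as soon as three data nodes
have joined, or after ten minutes as long as at least two data nodes have
joined. The values shown are illustrative, not recommendations:

[source,yaml]
----
gateway.expected_data_nodes: 3
gateway.recover_after_time: 10m
gateway.recover_after_data_nodes: 2
----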

[[dangling-indices]]
==== Dangling indices

When a node joins the cluster, if it finds any shards stored in its local
data directory that do not already exist in the cluster, it considers those
shards to belong to a "dangling" index. Importing dangling indices
into the cluster using `gateway.auto_import_dangling_indices` is not safe.
Instead, use the <<dangling-indices-api,Dangling indices API>>. Neither
mechanism provides any guarantees as to whether the imported data truly
represents the latest state of the data when the index was still part of
the cluster.
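
As a sketch of this workflow, you can list the cluster's dangling indices
and then import one by its UUID. The UUID below is a placeholder, and the
required `accept_data_loss` parameter acknowledges that the import comes
with no guarantees:

[source,console]
----
GET /_dangling
----

[source,console]
----
POST /_dangling/zmM4e0JtBkeUjiHD-MihPQ?accept_data_loss=true
----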

`gateway.auto_import_dangling_indices`::
deprecated:[7.9.0, This setting will be removed in 8.0. You should use the dedicated dangling indices API instead.]
Whether to automatically import dangling indices into the cluster
state, provided no indices already exist with the same name. Defaults
to `false`.

WARNING: The auto-import functionality was intended as a best effort to help
users who lose all master nodes. For example, if a new master node were to be
started which was unaware of the other indices in the cluster, adding the
old nodes would cause the old indices to be imported, instead of being
deleted. However, there are several issues with automatic importing, and
its use is strongly discouraged in favour of the
<<dangling-indices-api,dedicated API>>.

WARNING: Losing all master nodes is a situation that should be avoided at
all costs, as it puts your cluster's metadata and data at risk.
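
If a dangling index turns out to be unwanted, for example because it was
deleted while the node holding it was offline, the dedicated API can also
delete it instead of importing it. The UUID below is again a placeholder:

[source,console]
----
DELETE /_dangling/zmM4e0JtBkeUjiHD-MihPQ?accept_data_loss=true
----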