[[modules-gateway]]
== Local Gateway

The local gateway module stores the cluster state and shard data across full
cluster restarts.

The following _static_ settings, which must be set on every master node,
control how long a freshly elected master should wait before it tries to
recover the cluster state and the cluster's data (a combined example follows
the list below):
`gateway.expected_nodes`::

    The number of (data or master) nodes that are expected to be in the cluster.
    Recovery of local shards will start as soon as the expected number of
    nodes have joined the cluster. Defaults to `0`.

`gateway.expected_master_nodes`::

    The number of master nodes that are expected to be in the cluster.
    Recovery of local shards will start as soon as the expected number of
    master nodes have joined the cluster. Defaults to `0`.

`gateway.expected_data_nodes`::

    The number of data nodes that are expected to be in the cluster.
    Recovery of local shards will start as soon as the expected number of
    data nodes have joined the cluster. Defaults to `0`.

`gateway.recover_after_time`::

    If the expected number of nodes is not achieved, the recovery process waits
    for the configured amount of time before trying to recover regardless.
    Defaults to `5m` if one of the `expected_nodes` settings is configured.

Once the `recover_after_time` duration has timed out, recovery will start
as long as the following conditions are met:

`gateway.recover_after_nodes`::

    Recover as long as this many data or master nodes have joined the cluster.

`gateway.recover_after_master_nodes`::

    Recover as long as this many master nodes have joined the cluster.

`gateway.recover_after_data_nodes`::

    Recover as long as this many data nodes have joined the cluster.

NOTE: These settings only take effect on a full cluster restart.
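For example, a cluster with five data nodes might combine these settings in
`elasticsearch.yml` on every master node along the following lines (the node
counts and timeout shown are illustrative only):

[source,yaml]
--------------------------------------------------
# Start recovery as soon as 5 data nodes have joined ...
gateway.expected_data_nodes: 5
# ... or, failing that, after 10 minutes have passed ...
gateway.recover_after_time: 10m
# ... provided at least 3 data nodes are present.
gateway.recover_after_data_nodes: 3
--------------------------------------------------

With this configuration, recovery of local shards begins as soon as five data
nodes have joined the cluster; otherwise it begins after ten minutes, as long
as at least three data nodes have joined.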
=== Dangling indices

When a node joins the cluster, any shards stored in its local data
directory which do not already exist in the cluster will be imported into the
cluster. This functionality is intended as a best effort to help users who
lose all master nodes. If a new master node is started which is unaware of
the other indices in the cluster, adding the old nodes will cause the old
indices to be imported, instead of being deleted.