[[modules-gateway]]
=== Local gateway settings

The local gateway stores the cluster state and shard data across full
cluster restarts.

The following _static_ settings, which must be set on every master node,
control how long a freshly elected master should wait before it tries to
recover the cluster state and the cluster's data:
`gateway.expected_nodes`::
deprecated:[7.7.0, This setting will be removed in 8.0. You should use `gateway.expected_data_nodes` instead.]
The number of (data or master) nodes that are expected to be in the cluster.
Recovery of local shards will start as soon as the expected number of
nodes have joined the cluster. Defaults to `0`.

`gateway.expected_master_nodes`::
deprecated:[7.7.0, This setting will be removed in 8.0. You should use `gateway.expected_data_nodes` instead.]
The number of master nodes that are expected to be in the cluster.
Recovery of local shards will start as soon as the expected number of
master nodes have joined the cluster. Defaults to `0`.

`gateway.expected_data_nodes`::
The number of data nodes that are expected to be in the cluster.
Recovery of local shards will start as soon as the expected number of
data nodes have joined the cluster. Defaults to `0`.
`gateway.recover_after_time`::
If the expected number of nodes is not achieved, the recovery process waits
for the configured amount of time before trying to recover regardless.
Defaults to `5m` if one of the `expected_nodes` settings is configured.
+
Once the `recover_after_time` duration has timed out, recovery will start
as long as the following conditions are met:
`gateway.recover_after_nodes`::
deprecated:[7.7.0, This setting will be removed in 8.0. You should use `gateway.recover_after_data_nodes` instead.]
Recover as long as this many data or master nodes have joined the cluster.

`gateway.recover_after_master_nodes`::
deprecated:[7.7.0, This setting will be removed in 8.0. You should use `gateway.recover_after_data_nodes` instead.]
Recover as long as this many master nodes have joined the cluster.

`gateway.recover_after_data_nodes`::
Recover as long as this many data nodes have joined the cluster.

NOTE: These settings only take effect on a full cluster restart.
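For example, the settings above could be combined in `elasticsearch.yml` like
this (the node counts and timeout are illustrative values, not
recommendations):

[source,yaml]
----
# Start recovering local shards as soon as 3 data nodes have joined.
gateway.expected_data_nodes: 3

# If 3 data nodes have not joined, wait up to 5 minutes...
gateway.recover_after_time: 5m

# ...and then start recovery anyway, provided at least 2 data nodes
# have joined the cluster.
gateway.recover_after_data_nodes: 2
----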
[[dangling-indices]]
==== Dangling indices

When a node joins the cluster, if it finds any shards stored in its local data
directory that do not already exist in the cluster, it will consider those
shards to be "dangling". Importing dangling indices into the cluster using
`gateway.auto_import_dangling_indices` is not safe. Instead, use the
<<dangling-indices-api,Dangling indices API>>. Neither mechanism provides any
guarantees as to whether the imported data truly represents the latest state
of the data when the index was still part of the cluster.
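For example, you can list any dangling indices and then import one by its
index UUID (`<index-uuid>` below is a placeholder; importing requires
explicitly accepting possible data loss):

[source,console]
----
GET /_dangling

POST /_dangling/<index-uuid>?accept_data_loss=true
----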
`gateway.auto_import_dangling_indices`::
deprecated:[7.9.0, This setting will be removed in 8.0. You should use the dedicated dangling indices API instead.]
Whether to automatically import dangling indices into the cluster
state, provided no indices already exist with the same name. Defaults
to `false`.
WARNING: The auto-import functionality was intended as a best-effort way to
help users who lose all master nodes. For example, if a new master node were
to be started which was unaware of the other indices in the cluster, adding
the old nodes would cause the old indices to be imported, instead of being
deleted. However, there are several issues with automatic importing, and
its use is strongly discouraged in favour of the
<<dangling-indices-api,dedicated API>>.

WARNING: Losing all master nodes is a situation that should be avoided at
all costs, as it puts your cluster's metadata and data at risk.