
[[misc-cluster-settings]]
==== Miscellaneous cluster settings

[[cluster-read-only]]
===== Metadata

An entire cluster may be set to read-only with the following settings:

`cluster.blocks.read_only`::
(<<dynamic-cluster-setting,Dynamic>>)
Make the whole cluster read only (indices do not accept write
operations) and prevent metadata from being modified (indices cannot be
created or deleted).

`cluster.blocks.read_only_allow_delete`::
(<<dynamic-cluster-setting,Dynamic>>)
Identical to `cluster.blocks.read_only` but allows indices to be deleted
to free up resources.
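
For example, a request along these lines makes the whole cluster read-only;
setting the value back to `false` removes the block:

[source,console]
-------------------------------
PUT /_cluster/settings
{
  "persistent": {
    "cluster.blocks.read_only": true
  }
}
-------------------------------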

WARNING: Don't rely on these settings to prevent changes to your cluster. Any
user with access to the <<cluster-update-settings,cluster-update-settings>>
API can make the cluster read-write again.

[[cluster-shard-limit]]
===== Cluster shard limit

There is a soft limit on the number of shards in a cluster, based on the number
of nodes in the cluster. This is intended to prevent operations which may
unintentionally destabilize the cluster.

IMPORTANT: This limit is intended as a safety net, not a sizing recommendation. The
exact number of shards your cluster can safely support depends on your hardware
configuration and workload, but should remain well below this limit in almost
all cases, as the default limit is set quite high.

If an operation, such as creating a new index, restoring a snapshot of an index,
or opening a closed index, would lead to the number of shards in the cluster
going over this limit, the operation will fail with an error indicating the
shard limit.

If the cluster is already over the limit, due to changes in node membership or
setting changes, all operations that create or open indices will fail until
either the limit is increased as described below, or some indices are
<<indices-open-close,closed>> or <<indices-delete-index,deleted>> to bring the
number of shards below the limit.

The cluster shard limit defaults to 1,000 shards per data node.
Both primary and replica shards of all open indices count toward the limit,
including unassigned shards.
For example, an open index with 5 primary shards and 2 replicas counts as 15 shards.
Closed indices do not contribute to the shard count.

You can dynamically adjust the cluster shard limit with the following setting:

`cluster.max_shards_per_node`::
(<<dynamic-cluster-setting,Dynamic>>)
Controls the number of shards allowed in the cluster per data node.
With the default setting, a 3-node cluster allows 3,000 shards total, across all open indices.
If you reduce the limit to 500, the cluster would allow 1,500 shards total.

NOTE: If there are no data nodes in the cluster, the limit will not be enforced.
This allows the creation of indices during cluster creation if dedicated master
nodes are set up before data nodes.
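
For example, a request like the following (using the value of 500 mentioned
above, chosen purely for illustration) lowers the per-node shard limit:

[source,console]
-------------------------------
PUT /_cluster/settings
{
  "persistent": {
    "cluster.max_shards_per_node": 500
  }
}
-------------------------------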

[[user-defined-data]]
===== User-defined cluster metadata

User-defined metadata can be stored and retrieved using the Cluster Settings API.
This can be used to store arbitrary, infrequently-changing data about the cluster
without the need to create an index to store it. This data may be stored using
any key prefixed with `cluster.metadata.`. For example, to store the email
address of the administrator of a cluster under the key `cluster.metadata.administrator`,
issue this request:

[source,console]
-------------------------------
PUT /_cluster/settings
{
  "persistent": {
    "cluster.metadata.administrator": "sysadmin@example.com"
  }
}
-------------------------------
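
The value can be read back with the Cluster Get Settings API; in the request
below, the optional `filter_path` parameter is used only to trim the response
to the metadata keys:

[source,console]
-------------------------------
GET /_cluster/settings?filter_path=persistent.cluster.metadata
-------------------------------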

IMPORTANT: User-defined cluster metadata is not intended to store sensitive or
confidential information. Any information stored in user-defined cluster
metadata will be viewable by anyone with access to the
<<cluster-get-settings,Cluster Get Settings>> API, and is recorded in the
{es} logs.

[[cluster-max-tombstones]]
===== Index tombstones

The cluster state maintains index tombstones to explicitly denote indices that
have been deleted. The number of tombstones maintained in the cluster state is
controlled by the following setting:

`cluster.indices.tombstones.size`::
(<<static-cluster-setting,Static>>)
Index tombstones prevent nodes that are not part of the cluster when a delete
occurs from joining the cluster and reimporting the index as though the delete
was never issued. To keep the cluster state from growing huge we only keep the
last `cluster.indices.tombstones.size` deletes, which defaults to 500. You can
increase it if you expect nodes to be absent from the cluster and miss more
than 500 deletes. We think that is rare, thus the default. Tombstones don't take
up much space, but we also think that a number like 50,000 is probably too big.

include::{es-repo-dir}/indices/dangling-indices-list.asciidoc[tag=dangling-index-description]

You can use the <<dangling-indices-api,Dangling indices API>> to manage
this situation.
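
For example, a request along these lines lists any dangling indices the cluster
has detected, so you can decide whether to import or delete them:

[source,console]
-------------------------------
GET /_dangling
-------------------------------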

[[cluster-logger]]
===== Logger

The settings which control logging can be updated <<dynamic-cluster-setting,dynamically>> with the
`logger.` prefix. For instance, to increase the logging level of the
`indices.recovery` module to `DEBUG`, issue this request:

[source,console]
-------------------------------
PUT /_cluster/settings
{
  "transient": {
    "logger.org.elasticsearch.indices.recovery": "DEBUG"
  }
}
-------------------------------
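
To revert the change, the same logger setting can be set to `null`, which
restores its default level:

[source,console]
-------------------------------
PUT /_cluster/settings
{
  "transient": {
    "logger.org.elasticsearch.indices.recovery": null
  }
}
-------------------------------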

[[persistent-tasks-allocation]]
===== Persistent tasks allocation

Plugins can create a kind of task called a persistent task. Persistent tasks are
usually long-lived and are stored in the cluster state, allowing the
tasks to be revived after a full cluster restart.

Every time a persistent task is created, the master node takes care of
assigning the task to a node of the cluster, and the assigned node will then
pick up the task and execute it locally. The process of assigning persistent
tasks to nodes is controlled by the following settings:

`cluster.persistent_tasks.allocation.enable`::
+
--
(<<dynamic-cluster-setting,Dynamic>>)
Enable or disable allocation for persistent tasks:

* `all` - (default) Allows persistent tasks to be assigned to nodes
* `none` - No allocations are allowed for any type of persistent task

This setting does not affect the persistent tasks that are already being executed.
Only newly created persistent tasks, or tasks that must be reassigned (after a node
has left the cluster, for example), are impacted by this setting.
--

`cluster.persistent_tasks.allocation.recheck_interval`::
(<<dynamic-cluster-setting,Dynamic>>)
The master node will automatically check whether persistent tasks need to
be assigned when the cluster state changes significantly. However, there
may be other factors, such as memory usage, that affect whether persistent
tasks can be assigned to nodes but do not cause the cluster state to change.
This setting controls how often assignment checks are performed to react to
these factors. The default is 30 seconds. The minimum permitted value is 10
seconds.
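
For example, a request along the following lines stops new or reassigned
persistent tasks from being allocated, while leaving tasks that are already
running untouched:

[source,console]
-------------------------------
PUT /_cluster/settings
{
  "persistent": {
    "cluster.persistent_tasks.allocation.enable": "none"
  }
}
-------------------------------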