[[misc-cluster-settings]]
=== Miscellaneous cluster settings

[[cluster-name]]
include::{es-ref-dir}/setup/important-settings/cluster-name.asciidoc[]

[discrete]
[[cluster-read-only]]
==== Metadata

An entire cluster may be set to read-only with the following setting:

`cluster.blocks.read_only`::
(<<dynamic-cluster-setting,Dynamic>>)
Make the whole cluster read-only: indices do not accept write operations, and
cluster metadata may not be modified, for example by creating or deleting
indices. Defaults to `false`.

`cluster.blocks.read_only_allow_delete`::
(<<dynamic-cluster-setting,Dynamic>>)
Identical to `cluster.blocks.read_only`, except that it allows indices to be
deleted in order to free up resources. Defaults to `false`.

WARNING: Don't rely on this setting to prevent changes to your cluster. Any
user with access to the <<cluster-update-settings,cluster-update-settings>>
API can make the cluster read-write again.
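
As a minimal sketch, either block can be applied through the cluster update
settings API. For example, the following request puts the whole cluster into
read-only mode:

[source,console]
-------------------------------
PUT /_cluster/settings
{
  "persistent": {
    "cluster.blocks.read_only": true
  }
}
-------------------------------

Setting the same key back to `false` (or to `null` to remove the persistent
setting) makes the cluster writable again.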

[discrete]
[[cluster-shard-limit]]
==== Cluster shard limits

There is a limit on the number of shards in a cluster, based on the number of
nodes in the cluster. This is intended to prevent a runaway process from
creating too many shards, which can harm performance and in extreme cases may
destabilize your cluster.

[IMPORTANT]
====
These limits are intended as a safety net to protect against runaway shard
creation and are not a sizing recommendation. The exact number of shards your
cluster can safely support depends on your hardware configuration and workload,
and may be smaller than the default limits.

We do not recommend increasing these limits beyond the defaults. Clusters with
more shards may appear to run well in normal operation, but may take a very
long time to recover from temporary disruptions such as a network partition or
an unexpected node restart, and may encounter problems when performing
maintenance activities such as a rolling restart or upgrade.
====

If an operation, such as creating a new index, restoring a snapshot of an
index, or opening a closed index, would lead to the number of shards in the
cluster going over this limit, the operation will fail with an error indicating
the shard limit. To resolve this, either scale out your cluster by adding
nodes, or <<indices-delete-index,delete some indices>> to bring the number of
shards below the limit.

If a cluster is already over the limit, perhaps due to changes in node
membership or setting changes, all operations that create or open indices will
fail.

The cluster shard limit defaults to 1000 shards per non-frozen data node for
normal (non-frozen) indices and 3000 shards per frozen data node for frozen
indices. Both primary and replica shards of all open indices count toward the
limit, including unassigned shards. For example, an open index with 5 primary
shards and 2 replicas counts as 15 shards. Closed indices do not contribute to
the shard count.

You can dynamically adjust the cluster shard limit with the following setting:

[[cluster-max-shards-per-node]]
`cluster.max_shards_per_node`::
+
--
(<<dynamic-cluster-setting,Dynamic>>)
Limits the total number of primary and replica shards for the cluster. {es}
calculates the limit as follows:

`cluster.max_shards_per_node * number of non-frozen data nodes`

Shards for closed indices do not count toward this limit. Defaults to `1000`.
A cluster with no data nodes is unlimited.

{es} rejects any request that creates more shards than this limit allows. For
example, a cluster with a `cluster.max_shards_per_node` setting of `100` and
three data nodes has a shard limit of 300. If the cluster already contains 296
shards, {es} rejects any request that adds five or more shards to the cluster.

Note that if `cluster.max_shards_per_node` is set to a higher value than the
default, the limits for <<vm-max-map-count, mmap count>> and
<<file-descriptors, open file descriptors>> might also require adjustment.

Notice that frozen shards have their own independent limit.
--

[[cluster-max-shards-per-node-frozen]]
`cluster.max_shards_per_node.frozen`::
+
--
(<<dynamic-cluster-setting,Dynamic>>)
Limits the total number of primary and replica frozen shards for the cluster.
{es} calculates the limit as follows:

`cluster.max_shards_per_node.frozen * number of frozen data nodes`

Shards for closed indices do not count toward this limit. Defaults to `3000`.
A cluster with no frozen data nodes is unlimited.

{es} rejects any request that creates more frozen shards than this limit
allows. For example, a cluster with a `cluster.max_shards_per_node.frozen`
setting of `100` and three frozen data nodes has a frozen shard limit of 300.
If the cluster already contains 296 frozen shards, {es} rejects any request
that adds five or more frozen shards to the cluster.
--

NOTE: These limits only apply to actions which create shards and do not limit
the number of shards assigned to each node. To limit the number of shards
assigned to each node, use the
<<cluster-total-shards-per-node,`cluster.routing.allocation.total_shards_per_node`>>
setting.
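
As a hedged sketch, you can compare the current shard counts against the limit
and, if you accept the trade-offs described above, raise the limit through the
cluster update settings API. The `filter_path` parameter below only trims the
response, and the value `1200` is purely illustrative:

[source,console]
-------------------------------
GET /_cluster/health?filter_path=active_shards,unassigned_shards

PUT /_cluster/settings
{
  "persistent": {
    "cluster.max_shards_per_node": 1200
  }
}
-------------------------------

As noted above, increasing the limit beyond the default is generally not
recommended.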

[discrete]
[[user-defined-data]]
==== User-defined cluster metadata

User-defined metadata can be stored and retrieved using the Cluster Settings
API. This can be used to store arbitrary, infrequently-changing data about the
cluster without the need to create an index to store it. This data may be
stored using any key prefixed with `cluster.metadata.`. For example, to store
the email address of the administrator of a cluster under the key
`cluster.metadata.administrator`, issue this request:

[source,console]
-------------------------------
PUT /_cluster/settings
{
  "persistent": {
    "cluster.metadata.administrator": "sysadmin@example.com"
  }
}
-------------------------------
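
As a small follow-up, the stored value can be read back with the cluster get
settings API; the `filter_path` parameter is optional and only narrows the
response to the relevant keys:

[source,console]
-------------------------------
GET /_cluster/settings?filter_path=persistent.cluster*
-------------------------------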

IMPORTANT: User-defined cluster metadata is not intended to store sensitive or
confidential information. Any information stored in user-defined cluster
metadata will be viewable by anyone with access to the
<<cluster-get-settings,Cluster Get Settings>> API, and is recorded in the
{es} logs.

[discrete]
[[cluster-max-tombstones]]
==== Index tombstones

The cluster state maintains index tombstones to explicitly denote indices that
have been deleted. The number of tombstones maintained in the cluster state is
controlled by the following setting:

`cluster.indices.tombstones.size`::
(<<static-cluster-setting,Static>>)
Index tombstones prevent nodes that are not part of the cluster when a delete
occurs from joining the cluster and reimporting the index as though the delete
was never issued. To keep the cluster state from growing too large, only the
last `cluster.indices.tombstones.size` deletes are kept, which defaults to 500.
You can increase this value if you expect nodes to be absent from the cluster
and miss more than 500 deletes, but this is expected to be rare. Tombstones
don't take up much space, but a value as large as 50,000 is probably too big.

include::{es-ref-dir}/indices/dangling-indices-list.asciidoc[tag=dangling-index-description]

You can use the <<dangling-indices-api,Dangling indices API>> to manage
this situation.
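
As a brief example, the dangling indices API can be used to list any such
indices so you can decide whether to import or delete them:

[source,console]
-------------------------------
GET /_dangling
-------------------------------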

[discrete]
[[cluster-logger]]
==== Logger

The settings which control logging can be updated
<<dynamic-cluster-setting,dynamically>> with the `logger.` prefix. For
instance, to increase the logging level of the `indices.recovery` module to
`DEBUG`, issue this request:

[source,console]
-------------------------------
PUT /_cluster/settings
{
  "persistent": {
    "logger.org.elasticsearch.indices.recovery": "DEBUG"
  }
}
-------------------------------
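
To return the logger to its default level, set the same key back to `null`,
which removes the persistent setting (the same pattern works for any dynamic
cluster setting):

[source,console]
-------------------------------
PUT /_cluster/settings
{
  "persistent": {
    "logger.org.elasticsearch.indices.recovery": null
  }
}
-------------------------------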

[discrete]
[[persistent-tasks-allocation]]
==== Persistent tasks allocation

Plugins can create a kind of task called a persistent task. Those tasks are
usually long-lived and are stored in the cluster state, allowing the tasks to
be revived after a full cluster restart.

Every time a persistent task is created, the master node takes care of
assigning the task to a node of the cluster, and the assigned node will then
pick up the task and execute it locally. The process of assigning persistent
tasks to nodes is controlled by the following settings:

`cluster.persistent_tasks.allocation.enable`::
+
--
(<<dynamic-cluster-setting,Dynamic>>)
Enable or disable allocation for persistent tasks:

* `all` - (default) Allows persistent tasks to be assigned to nodes
* `none` - No allocations are allowed for any type of persistent task

This setting does not affect the persistent tasks that are already being
executed. Only newly created persistent tasks, or tasks that must be reassigned
(after a node left the cluster, for example), are impacted by this setting.
--

`cluster.persistent_tasks.allocation.recheck_interval`::
(<<dynamic-cluster-setting,Dynamic>>)
The master node will automatically check whether persistent tasks need to
be assigned when the cluster state changes significantly. However, there
may be other factors, such as memory usage, that affect whether persistent
tasks can be assigned to nodes but do not cause the cluster state to change.
This setting controls how often assignment checks are performed to react to
these factors. The default is 30 seconds. The minimum permitted value is 10
seconds.
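
As a sketch of how the allocation setting is typically used, the following
request temporarily stops new persistent task assignments, for example ahead of
maintenance; setting the value back to `all` (or to `null`) re-enables
assignment afterwards:

[source,console]
-------------------------------
PUT /_cluster/settings
{
  "persistent": {
    "cluster.persistent_tasks.allocation.enable": "none"
  }
}
-------------------------------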