
[[misc-cluster]]
=== Miscellaneous cluster settings

[[cluster-read-only]]
==== Metadata

An entire cluster may be set to read-only with the following _dynamic_ setting:

`cluster.blocks.read_only`::
    Make the whole cluster read only (indices do not accept write
    operations), metadata is not allowed to be modified (create or delete
    indices).

`cluster.blocks.read_only_allow_delete`::
    Identical to `cluster.blocks.read_only`, but it allows indices to be
    deleted to free up resources.

WARNING: Don't rely on this setting to prevent changes to your cluster. Any
user with access to the <<cluster-update-settings,cluster-update-settings>>
API can make the cluster read-write again.
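
Because these are dynamic settings, they can be changed on a running cluster
with the <<cluster-update-settings,cluster-update-settings>> API. A minimal
sketch of enabling the read-only block (the `transient` scope would work
equally well):

[source,js]
-------------------------------
PUT /_cluster/settings
{
  "persistent": {
    "cluster.blocks.read_only": true
  }
}
-------------------------------
// CONSOLE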

[[cluster-shard-limit]]
==== Cluster Shard Limit

In Elasticsearch 7.0 and later, there will be a soft limit on the number of
shards in a cluster, based on the number of nodes in the cluster. This is
intended to prevent operations which may unintentionally destabilize the
cluster. Prior to 7.0, actions which would result in the cluster going over
the limit will issue a deprecation warning.

NOTE: You can set the system property `es.enforce_max_shards_per_node` to
`true` to opt in to strict enforcement of the shard limit. If this system
property is set, actions which would result in the cluster going over the
limit will result in an error, rather than a deprecation warning. This
property will be removed in Elasticsearch 7.0, as strict enforcement of the
limit will be the default and only behavior.

If an operation, such as creating a new index, restoring a snapshot of an
index, or opening a closed index would lead to the number of shards in the
cluster going over this limit, the operation will issue a deprecation warning.

If the cluster is already over the limit, due to changes in node membership
or setting changes, all operations that create or open indices will issue
warnings until either the limit is increased as described below, or some
indices are <<indices-open-close,closed>> or <<indices-delete-index,deleted>>
to bring the number of shards below the limit.

Replicas count towards this limit, but closed indices do not. An index with 5
primary shards and 2 replicas will be counted as 15 shards. Any closed index
is counted as 0, no matter how many shards and replicas it contains.

The limit defaults to 1,000 shards per node, and can be dynamically adjusted
using the following property:

`cluster.max_shards_per_node`::
    Controls the number of shards allowed in the cluster per node.

For example, a 3-node cluster with the default setting would allow 3,000
shards total, across all open indices. If the above setting is changed to
1,500, then the cluster would allow 4,500 shards total.
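
Since this is a dynamic cluster setting, it can be adjusted with the
<<cluster-update-settings,cluster-update-settings>> API. A minimal sketch of
raising the limit to 1,500 shards per node:

[source,js]
-------------------------------
PUT /_cluster/settings
{
  "persistent": {
    "cluster.max_shards_per_node": 1500
  }
}
-------------------------------
// CONSOLE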

[[user-defined-data]]
==== User Defined Cluster Metadata

User-defined metadata can be stored and retrieved using the Cluster Settings
API. This can be used to store arbitrary, infrequently-changing data about
the cluster without the need to create an index to store it. This data may be
stored using any key prefixed with `cluster.metadata.`. For example, to store
the email address of the administrator of a cluster under the key
`cluster.metadata.administrator`, issue this request:

[source,js]
-------------------------------
PUT /_cluster/settings
{
  "persistent": {
    "cluster.metadata.administrator": "sysadmin@example.com"
  }
}
-------------------------------
// CONSOLE
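
To read the value back, one option (not shown in the original request above)
is to fetch the cluster settings, assuming the `flat_settings` query
parameter is available on this endpoint so the key is returned in the same
dotted form used when it was stored:

[source,js]
-------------------------------
GET /_cluster/settings?flat_settings=true
-------------------------------
// CONSOLE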

IMPORTANT: User-defined cluster metadata is not intended to store sensitive or
confidential information. Any information stored in user-defined cluster
metadata will be viewable by anyone with access to the
<<cluster-get-settings,Cluster Get Settings>> API, and is recorded in the
{es} logs.

[[cluster-max-tombstones]]
==== Index Tombstones

The cluster state maintains index tombstones to explicitly denote indices
that have been deleted. The number of tombstones maintained in the cluster
state is controlled by the following property, which cannot be updated
dynamically:

`cluster.indices.tombstones.size`::
    Index tombstones prevent nodes that are not part of the cluster when a
    delete occurs from joining the cluster and reimporting the index as though
    the delete was never issued. To keep the cluster state from growing huge
    we only keep the last `cluster.indices.tombstones.size` deletes, which
    defaults to 500. You can increase it if you expect nodes to be absent from
    the cluster and miss more than 500 deletes. We think that is rare, thus the
    default. Tombstones don't take up much space, but we also think that a
    number like 50,000 is probably too big.

[[cluster-logger]]
==== Logger

The settings which control logging can be updated dynamically with the
`logger.` prefix. For instance, to increase the logging level of the
`indices.recovery` module to `DEBUG`, issue this request:

[source,js]
-------------------------------
PUT /_cluster/settings
{
  "transient": {
    "logger.org.elasticsearch.indices.recovery": "DEBUG"
  }
}
-------------------------------
// CONSOLE
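
When the extra verbosity is no longer needed, the override can usually be
removed by setting the same key back to `null`, which falls back to the level
configured in the logging configuration. A minimal sketch:

[source,js]
-------------------------------
PUT /_cluster/settings
{
  "transient": {
    "logger.org.elasticsearch.indices.recovery": null
  }
}
-------------------------------
// CONSOLE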

[[persistent-tasks-allocation]]
==== Persistent Tasks Allocations

Plugins can create a kind of task called a persistent task. Those tasks are
usually long-lived and are stored in the cluster state, allowing the tasks to
be revived after a full cluster restart.

Every time a persistent task is created, the master node takes care of
assigning the task to a node of the cluster, and the assigned node will then
pick up the task and execute it locally. The process of assigning persistent
tasks to nodes is controlled by the following property, which can be updated
dynamically:

`cluster.persistent_tasks.allocation.enable`::
+
--
Enable or disable allocation for persistent tasks:

* `all` - (default) Allows persistent tasks to be assigned to nodes
* `none` - No allocations are allowed for any type of persistent task

This setting does not affect the persistent tasks that are already being
executed. Only newly created persistent tasks, or tasks that must be
reassigned (after a node left the cluster, for example), are impacted by this
setting (see the example below).
--
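
For instance, a minimal sketch of disabling new persistent task assignments,
for example while performing maintenance (set it back to `all` to re-enable
assignments afterwards):

[source,js]
-------------------------------
PUT /_cluster/settings
{
  "transient": {
    "cluster.persistent_tasks.allocation.enable": "none"
  }
}
-------------------------------
// CONSOLE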