[[misc-cluster]]
=== Miscellaneous cluster settings

[[cluster-read-only]]
==== Metadata

An entire cluster may be set to read-only with the following _dynamic_
settings:

`cluster.blocks.read_only`::
      Make the whole cluster read-only (indices do not accept write
      operations) and disallow metadata modifications, such as creating
      or deleting indices.

`cluster.blocks.read_only_allow_delete`::
      Identical to `cluster.blocks.read_only`, but allows indices to be
      deleted to free up resources.

WARNING: Don't rely on these settings to prevent changes to your cluster. Any
user with access to the <<cluster-update-settings,cluster-update-settings>>
API can make the cluster read-write again.
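For example, the whole cluster can be made read-only with a request like the
following; setting the value back to `false` makes the cluster writable
again:

[source,console]
-------------------------------
PUT /_cluster/settings
{
  "persistent": {
    "cluster.blocks.read_only": true
  }
}
-------------------------------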
[[cluster-shard-limit]]
==== Cluster Shard Limit

There is a soft limit on the number of shards in a cluster, based on the
number of nodes in the cluster. This is intended to prevent operations which
may unintentionally destabilize the cluster.

IMPORTANT: This limit is intended as a safety net, not a sizing
recommendation. The exact number of shards your cluster can safely support
depends on your hardware configuration and workload, but should remain well
below this limit in almost all cases, as the default limit is set quite high.

If an operation, such as creating a new index, restoring a snapshot of an
index, or opening a closed index, would lead to the number of shards in the
cluster going over this limit, the operation will fail with an error
indicating the shard limit.

If the cluster is already over the limit, due to changes in node membership
or setting changes, all operations that create or open indices will fail
until either the limit is increased as described below, or some indices are
<<indices-open-close,closed>> or <<indices-delete-index,deleted>> to bring
the number of shards below the limit.

Replicas count towards this limit, but closed indices do not. An index with
5 primary shards and 2 replicas will be counted as 15 shards. Any closed
index is counted as 0, no matter how many shards and replicas it contains.

The limit defaults to 1,000 shards per data node, and can be dynamically
adjusted using the following property:

`cluster.max_shards_per_node`::
     Controls the number of shards allowed in the cluster per data node.

For example, a 3-node cluster with the default setting would allow 3,000
shards total, across all open indices. If the above setting is changed to
500, then the cluster would allow 1,500 shards total.

NOTE: If there are no data nodes in the cluster, the limit will not be
enforced. This allows the creation of indices during cluster creation if
dedicated master nodes are set up before data nodes.

[[user-defined-data]]
==== User Defined Cluster Metadata

User-defined metadata can be stored and retrieved using the Cluster Settings
API. This can be used to store arbitrary, infrequently-changing data about
the cluster without the need to create an index to store it. This data may
be stored using any key prefixed with `cluster.metadata.`. For example, to
store the email address of the administrator of a cluster under the key
`cluster.metadata.administrator`, issue this request:

[source,console]
-------------------------------
PUT /_cluster/settings
{
  "persistent": {
    "cluster.metadata.administrator": "sysadmin@example.com"
  }
}
-------------------------------

IMPORTANT: User-defined cluster metadata is not intended to store sensitive
or confidential information. Any information stored in user-defined cluster
metadata will be viewable by anyone with access to the
<<cluster-get-settings,Cluster Get Settings>> API, and is recorded in the
{es} logs.

[[cluster-max-tombstones]]
==== Index Tombstones

The cluster state maintains index tombstones to explicitly denote indices
that have been deleted. The number of tombstones maintained in the cluster
state is controlled by the following property, which cannot be updated
dynamically:

`cluster.indices.tombstones.size`::
Index tombstones prevent nodes that are not part of the cluster when a
delete occurs from joining the cluster and reimporting the index as though
the delete was never issued. To keep the cluster state from growing huge, we
only keep the last `cluster.indices.tombstones.size` deletes, which defaults
to 500. You can increase it if you expect nodes to be absent from the
cluster and miss more than 500 deletes. We think that is rare, thus the
default. Tombstones don't take up much space, but we also think that a
number like 50,000 is probably too big.

[[cluster-logger]]
==== Logger

The settings which control logging can be updated dynamically with the
`logger.` prefix. For instance, to increase the logging level of the
`indices.recovery` module to `DEBUG`, issue this request:

[source,console]
-------------------------------
PUT /_cluster/settings
{
  "transient": {
    "logger.org.elasticsearch.indices.recovery": "DEBUG"
  }
}
-------------------------------

[[persistent-tasks-allocation]]
==== Persistent Tasks Allocations

Plugins can create a kind of task called a persistent task. Persistent tasks
are usually long-lived and are stored in the cluster state, allowing them to
be revived after a full cluster restart.

Every time a persistent task is created, the master node takes care of
assigning the task to a node of the cluster, and the assigned node will then
pick up the task and execute it locally. The process of assigning persistent
tasks to nodes is controlled by the following properties, which can be
updated dynamically:

`cluster.persistent_tasks.allocation.enable`::
+
--
Enable or disable allocation for persistent tasks:

* `all` - (default) Allows persistent tasks to be assigned to nodes
* `none` - No allocations are allowed for any type of persistent task

This setting does not affect persistent tasks that are already being
executed. Only newly created persistent tasks, or tasks that must be
reassigned (after a node leaves the cluster, for example), are impacted by
this setting.
--

`cluster.persistent_tasks.allocation.recheck_interval`::
     The master node will automatically check whether persistent tasks need
     to be assigned when the cluster state changes significantly. However,
     there may be other factors, such as memory usage, that affect whether
     persistent tasks can be assigned to nodes but do not cause the cluster
     state to change. This setting controls how often assignment checks are
     performed to react to these factors. The default is 30 seconds. The
     minimum permitted value is 10 seconds.
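For example, before cluster maintenance you might pause new persistent task
assignments with a request like the following; setting the value back to
`all` re-enables assignment:

[source,console]
-------------------------------
PUT /_cluster/settings
{
  "persistent": {
    "cluster.persistent_tasks.allocation.enable": "none"
  }
}
-------------------------------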