[[misc-cluster]]
=== Miscellaneous cluster settings

[[cluster-read-only]]
==== Metadata

An entire cluster may be set to read-only with the following _dynamic_ setting:

`cluster.blocks.read_only`::
      Make the whole cluster read-only (indices do not accept write
      operations), and do not allow metadata to be modified (no creating or
      deleting indices).

`cluster.blocks.read_only_allow_delete`::
      Identical to `cluster.blocks.read_only`, but allows indices to be deleted
      to free up resources.

WARNING: Don't rely on this setting to prevent changes to your cluster. Any
user with access to the <<cluster-update-settings,cluster-update-settings>>
API can make the cluster read-write again.

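For example, the cluster can be made read-only through the cluster settings
API. A minimal sketch:

[source,console]
-------------------------------
PUT /_cluster/settings
{
  "persistent": {
    "cluster.blocks.read_only": true
  }
}
-------------------------------

Setting the value back to `false` (or resetting it to `null`) makes the
cluster writable again.
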
[[cluster-shard-limit]]
==== Cluster Shard Limit

There is a soft limit on the number of shards in a cluster, based on the number
of nodes in the cluster. This is intended to prevent operations which may
unintentionally destabilize the cluster.

IMPORTANT: This limit is intended as a safety net, not a sizing recommendation. The
exact number of shards your cluster can safely support depends on your hardware
configuration and workload, but should remain well below this limit in almost
all cases, as the default limit is set quite high.

If an operation, such as creating a new index, restoring a snapshot of an index,
or opening a closed index, would push the number of shards in the cluster over
this limit, the operation will fail with an error indicating the shard limit.

If the cluster is already over the limit, due to changes in node membership or
setting changes, all operations that create or open indices will fail until
either the limit is increased as described below, or some indices are
<<indices-open-close,closed>> or <<indices-delete-index,deleted>> to bring the
number of shards below the limit.

Replicas count towards this limit, but closed indices do not. An index with 5
primary shards and 2 replicas counts as 15 shards (5 primaries × 3 copies of
each). Any closed index counts as 0, no matter how many shards and replicas it
contains.

The limit defaults to 1,000 shards per data node, and can be dynamically
adjusted using the following property:

`cluster.max_shards_per_node`::
      Controls the number of shards allowed in the cluster per data node.

For example, a 3-node cluster with the default setting would allow 3,000 shards
total, across all open indices. If the above setting is changed to 500, then
the cluster would allow 1,500 shards total.

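To make that change, update the setting through the cluster settings API. A
minimal sketch, using the value of 500 from the example above:

[source,console]
-------------------------------
PUT /_cluster/settings
{
  "persistent": {
    "cluster.max_shards_per_node": 500
  }
}
-------------------------------
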
NOTE: If there are no data nodes in the cluster, the limit will not be enforced.
This allows the creation of indices during cluster creation if dedicated master
nodes are set up before data nodes.

[[user-defined-data]]
==== User Defined Cluster Metadata

User-defined metadata can be stored and retrieved using the Cluster Settings API.
This can be used to store arbitrary, infrequently-changing data about the cluster
without the need to create an index to store it. This data may be stored using
any key prefixed with `cluster.metadata.`. For example, to store the email
address of the administrator of a cluster under the key `cluster.metadata.administrator`,
issue this request:

[source,console]
-------------------------------
PUT /_cluster/settings
{
  "persistent": {
    "cluster.metadata.administrator": "sysadmin@example.com"
  }
}
-------------------------------

IMPORTANT: User-defined cluster metadata is not intended to store sensitive or
confidential information. Any information stored in user-defined cluster
metadata will be viewable by anyone with access to the
<<cluster-get-settings,Cluster Get Settings>> API, and is recorded in the
{es} logs.

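To read a stored value back, query the same API. A minimal sketch, using the
standard `filter_path` query parameter to trim the response down to the
metadata keys:

[source,console]
-------------------------------
GET /_cluster/settings?filter_path=persistent.cluster.metadata.*
-------------------------------
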
[[cluster-max-tombstones]]
==== Index Tombstones

The cluster state maintains index tombstones to explicitly denote indices that
have been deleted. The number of tombstones maintained in the cluster state is
controlled by the following property, which cannot be updated dynamically:

`cluster.indices.tombstones.size`::
      Index tombstones prevent nodes that are not part of the cluster when a delete
      occurs from joining the cluster and reimporting the index as though the delete
      was never issued. To keep the cluster state from growing huge we only keep the
      last `cluster.indices.tombstones.size` deletes, which defaults to 500. You can
      increase it if you expect nodes to be absent from the cluster and miss more
      than 500 deletes. We think that is rare, thus the default. Tombstones don't take
      up much space, but we also think that a number like 50,000 is probably too big.

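Because this setting is not dynamic, it has to be set in `elasticsearch.yml`
before the node starts. A sketch, where `1000` is an arbitrary illustrative
value:

[source,yaml]
-------------------------------
# Keep the last 1000 index deletes as tombstones (illustrative value)
cluster.indices.tombstones.size: 1000
-------------------------------
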
[[cluster-logger]]
==== Logger

The settings which control logging can be updated dynamically with the
`logger.` prefix. For instance, to increase the logging level of the
`indices.recovery` module to `DEBUG`, issue this request:

[source,console]
-------------------------------
PUT /_cluster/settings
{
  "transient": {
    "logger.org.elasticsearch.indices.recovery": "DEBUG"
  }
}
-------------------------------

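To restore the default level later, clear the transient setting by assigning
it `null`:

[source,console]
-------------------------------
PUT /_cluster/settings
{
  "transient": {
    "logger.org.elasticsearch.indices.recovery": null
  }
}
-------------------------------
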
[[persistent-tasks-allocation]]
==== Persistent Tasks Allocations

Plugins can create a kind of task called a persistent task. Such tasks are
usually long-lived and are stored in the cluster state, allowing them to be
revived after a full cluster restart.

Every time a persistent task is created, the master node takes care of
assigning the task to a node of the cluster, and the assigned node will then
pick up the task and execute it locally. The process of assigning persistent
tasks to nodes is controlled by the following properties, which can be updated
dynamically:

`cluster.persistent_tasks.allocation.enable`::
+
--
Enable or disable allocation for persistent tasks:

* `all` - (default) Allows persistent tasks to be assigned to nodes
* `none` - No allocations are allowed for any type of persistent task

This setting does not affect the persistent tasks that are already being executed.
Only newly created persistent tasks, or tasks that must be reassigned (after a node
left the cluster, for example), are impacted by this setting.
--

`cluster.persistent_tasks.allocation.recheck_interval`::
      The master node will automatically check whether persistent tasks need to
      be assigned when the cluster state changes significantly. However, there
      may be other factors, such as memory usage, that affect whether persistent
      tasks can be assigned to nodes but do not cause the cluster state to change.
      This setting controls how often assignment checks are performed to react to
      these factors. The default is 30 seconds. The minimum permitted value is 10
      seconds.

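For example, to pause assignment of new persistent tasks and adjust how often
assignment is rechecked, both settings can be updated in one request. A sketch,
where `20s` is an arbitrary illustrative value:

[source,console]
-------------------------------
PUT /_cluster/settings
{
  "persistent": {
    "cluster.persistent_tasks.allocation.enable": "none",
    "cluster.persistent_tasks.allocation.recheck_interval": "20s"
  }
}
-------------------------------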