- // tag::non-frozen-nodes-cloud[]
- **Use {kib}**
- //tag::kibana-api-ex[]
- . Log in to the {ess-console}[{ecloud} console].
- +
- . On the **Elasticsearch Service** panel, click the name of your deployment.
- +
- NOTE: If the name of your deployment is disabled, your {kib} instances might be
- unhealthy, in which case please contact https://support.elastic.co[Elastic Support].
- If your deployment doesn't include {kib}, all you need to do is
- {cloud}/ec-access-kibana.html[enable it first].
- . Open your deployment's side navigation menu (located under the Elastic logo in the upper left corner)
- and go to **Dev Tools > Console**.
- +
- [role="screenshot"]
- image::images/kibana-console.png[{kib} Console,align="center"]
- +
- . Check the current status of the cluster according to the shards capacity indicator:
- +
- [source,console]
- ----
- GET _health_report/shards_capacity
- ----
- +
- The response will look like this:
- +
- [source,console-result]
- ----
- {
-   "cluster_name": "...",
-   "indicators": {
-     "shards_capacity": {
-       "status": "yellow",
-       "symptom": "Cluster is close to reaching the configured maximum number of shards for data nodes.",
-       "details": {
-         "data": {
-           "max_shards_in_cluster": 1000, <1>
-           "current_used_shards": 988 <2>
-         },
-         "frozen": {
-           "max_shards_in_cluster": 3000,
-           "current_used_shards": 0
-         }
-       },
-       "impacts": [
-         ...
-       ],
-       "diagnosis": [
-         ...
-       ]
-     }
-   }
- }
- ----
- // TESTRESPONSE[skip:the result is for illustrating purposes only]
- +
- <1> Current value of the setting `cluster.max_shards_per_node`.
- <2> Current number of open shards across the cluster.
- +
- . Update the <<cluster-max-shards-per-node,`cluster.max_shards_per_node`>> setting to a higher value:
- +
- [source,console]
- ----
- PUT _cluster/settings
- {
- "persistent" : {
- "cluster.max_shards_per_node": 1200
- }
- }
- ----
- +
- This increase should only be temporary. As a long-term solution, we recommend
- you add nodes to the oversharded data tier or
- <<reduce-cluster-shard-count,reduce your cluster's shard count>> on nodes that do not belong
- to the frozen tier.
- . To verify that the change has fixed the issue, you can get the current
- status of the `shards_capacity` indicator by checking the `data` section of the
- <<health-api-example,health API>>:
- +
- [source,console]
- ----
- GET _health_report/shards_capacity
- ----
- +
- The response will look like this:
- +
- [source,console-result]
- ----
- {
-   "cluster_name": "...",
-   "indicators": {
-     "shards_capacity": {
-       "status": "green",
-       "symptom": "The cluster has enough room to add new shards.",
-       "details": {
-         "data": {
-           "max_shards_in_cluster": 1200
-         },
-         "frozen": {
-           "max_shards_in_cluster": 3000
-         }
-       }
-     }
-   }
- }
- ----
- // TESTRESPONSE[skip:the result is for illustrating purposes only]
- . When a long-term solution is in place, we recommend you reset the
- `cluster.max_shards_per_node` limit.
- +
- [source,console]
- ----
- PUT _cluster/settings
- {
- "persistent" : {
- "cluster.max_shards_per_node": null
- }
- }
- ----
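- +
- Optionally, you can confirm that the override has been removed by retrieving the cluster settings. After the reset, `cluster.max_shards_per_node` should no longer appear in the `persistent` section of the response:
- +
- [source,console]
- ----
- GET _cluster/settings?flat_settings=true
- ----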
- // end::non-frozen-nodes-cloud[]
- // tag::non-frozen-nodes-self-managed[]
- Check the current status of the cluster according to the shards capacity indicator:
- [source,console]
- ----
- GET _health_report/shards_capacity
- ----
- The response will look like this:
- [source,console-result]
- ----
- {
-   "cluster_name": "...",
-   "indicators": {
-     "shards_capacity": {
-       "status": "yellow",
-       "symptom": "Cluster is close to reaching the configured maximum number of shards for data nodes.",
-       "details": {
-         "data": {
-           "max_shards_in_cluster": 1000, <1>
-           "current_used_shards": 988 <2>
-         },
-         "frozen": {
-           "max_shards_in_cluster": 3000
-         }
-       },
-       "impacts": [
-         ...
-       ],
-       "diagnosis": [
-         ...
-       ]
-     }
-   }
- }
- ----
- // TESTRESPONSE[skip:the result is for illustrating purposes only]
- <1> Current value of the setting `cluster.max_shards_per_node`.
- <2> Current number of open shards across the cluster.
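- If you want to cross-check these numbers outside the health report, the cluster health API returns the number of data nodes and the number of active shards. The two APIs count shards slightly differently, so treat this only as a rough comparison:
- [source,console]
- ----
- GET _cluster/health?filter_path=status,number_of_data_nodes,active_shards
- ----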
- Using the <<cluster-update-settings,`cluster settings API`>>, update the
- <<cluster-max-shards-per-node,`cluster.max_shards_per_node`>> setting:
- [source,console]
- ----
- PUT _cluster/settings
- {
- "persistent" : {
- "cluster.max_shards_per_node": 1200
- }
- }
- ----
- This increase should only be temporary. As a long-term solution, we recommend
- you add nodes to the oversharded data tier or
- <<reduce-cluster-shard-count,reduce your cluster's shard count>> on nodes that do not belong
- to the frozen tier. To verify that the change has fixed the issue, you can get the current
- status of the `shards_capacity` indicator by checking the `data` section of the
- <<health-api-example,health API>>:
- [source,console]
- ----
- GET _health_report/shards_capacity
- ----
- The response will look like this:
- [source,console-result]
- ----
- {
-   "cluster_name": "...",
-   "indicators": {
-     "shards_capacity": {
-       "status": "green",
-       "symptom": "The cluster has enough room to add new shards.",
-       "details": {
-         "data": {
-           "max_shards_in_cluster": 1200
-         },
-         "frozen": {
-           "max_shards_in_cluster": 3000
-         }
-       }
-     }
-   }
- }
- ----
- // TESTRESPONSE[skip:the result is for illustrating purposes only]
- When a long-term solution is in place, we recommend you reset the
- `cluster.max_shards_per_node` limit.
- [source,console]
- ----
- PUT _cluster/settings
- {
- "persistent" : {
- "cluster.max_shards_per_node": null
- }
- }
- ----
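- Optionally, confirm that the override has been removed by retrieving the cluster settings. After the reset, `cluster.max_shards_per_node` should no longer appear in the `persistent` section of the response:
- [source,console]
- ----
- GET _cluster/settings?flat_settings=true
- ----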
- // end::non-frozen-nodes-self-managed[]
- // tag::frozen-nodes-cloud[]
- **Use {kib}**
- //tag::kibana-api-ex[]
- . Log in to the {ess-console}[{ecloud} console].
- +
- . On the **Elasticsearch Service** panel, click the name of your deployment.
- +
- NOTE: If the name of your deployment is disabled, your {kib} instances might be
- unhealthy, in which case please contact https://support.elastic.co[Elastic Support].
- If your deployment doesn't include {kib}, all you need to do is
- {cloud}/ec-access-kibana.html[enable it first].
- . Open your deployment's side navigation menu (located under the Elastic logo in the upper left corner)
- and go to **Dev Tools > Console**.
- +
- [role="screenshot"]
- image::images/kibana-console.png[{kib} Console,align="center"]
- . Check the current status of the cluster according to the shards capacity indicator:
- +
- [source,console]
- ----
- GET _health_report/shards_capacity
- ----
- +
- The response will look like this:
- +
- [source,console-result]
- ----
- {
-   "cluster_name": "...",
-   "indicators": {
-     "shards_capacity": {
-       "status": "yellow",
-       "symptom": "Cluster is close to reaching the configured maximum number of shards for frozen nodes.",
-       "details": {
-         "data": {
-           "max_shards_in_cluster": 1000
-         },
-         "frozen": {
-           "max_shards_in_cluster": 3000, <1>
-           "current_used_shards": 2998 <2>
-         }
-       },
-       "impacts": [
-         ...
-       ],
-       "diagnosis": [
-         ...
-       ]
-     }
-   }
- }
- ----
- // TESTRESPONSE[skip:the result is for illustrating purposes only]
- <1> Current value of the setting `cluster.max_shards_per_node.frozen`.
- <2> Current number of open shards used by frozen nodes across the cluster.
- +
- . Update the <<cluster-max-shards-per-node-frozen,`cluster.max_shards_per_node.frozen`>> setting:
- +
- [source,console]
- ----
- PUT _cluster/settings
- {
- "persistent" : {
- "cluster.max_shards_per_node.frozen": 3200
- }
- }
- ----
- +
- This increase should only be temporary. As a long-term solution, we recommend
- you add nodes to the oversharded data tier or
- <<reduce-cluster-shard-count,reduce your cluster's shard count>> on nodes that belong
- to the frozen tier.
- . To verify that the change has fixed the issue, you can get the current
- status of the `shards_capacity` indicator by checking the `frozen` section of the
- <<health-api-example,health API>>:
- +
- [source,console]
- ----
- GET _health_report/shards_capacity
- ----
- +
- The response will look like this:
- +
- [source,console-result]
- ----
- {
-   "cluster_name": "...",
-   "indicators": {
-     "shards_capacity": {
-       "status": "green",
-       "symptom": "The cluster has enough room to add new shards.",
-       "details": {
-         "data": {
-           "max_shards_in_cluster": 1000
-         },
-         "frozen": {
-           "max_shards_in_cluster": 3200
-         }
-       }
-     }
-   }
- }
- ----
- // TESTRESPONSE[skip:the result is for illustrating purposes only]
- +
- . When a long-term solution is in place, we recommend you reset the
- `cluster.max_shards_per_node.frozen` limit.
- +
- [source,console]
- ----
- PUT _cluster/settings
- {
- "persistent" : {
- "cluster.max_shards_per_node.frozen": null
- }
- }
- ----
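- +
- Optionally, you can confirm that the override has been removed by retrieving the cluster settings. After the reset, `cluster.max_shards_per_node.frozen` should no longer appear in the `persistent` section of the response:
- +
- [source,console]
- ----
- GET _cluster/settings?flat_settings=true
- ----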
- // end::frozen-nodes-cloud[]
- // tag::frozen-nodes-self-managed[]
- Check the current status of the cluster according to the shards capacity indicator:
- [source,console]
- ----
- GET _health_report/shards_capacity
- ----
- The response will look like this:
- [source,console-result]
- ----
- {
-   "cluster_name": "...",
-   "indicators": {
-     "shards_capacity": {
-       "status": "yellow",
-       "symptom": "Cluster is close to reaching the configured maximum number of shards for frozen nodes.",
-       "details": {
-         "data": {
-           "max_shards_in_cluster": 1000
-         },
-         "frozen": {
-           "max_shards_in_cluster": 3000, <1>
-           "current_used_shards": 2998 <2>
-         }
-       },
-       "impacts": [
-         ...
-       ],
-       "diagnosis": [
-         ...
-       ]
-     }
-   }
- }
- ----
- // TESTRESPONSE[skip:the result is for illustrating purposes only]
- <1> Current value of the setting `cluster.max_shards_per_node.frozen`.
- <2> Current number of open shards used by frozen nodes across the cluster.
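- Because this limit scales with the number of nodes in the frozen tier, it can also help to check which nodes carry the frozen role and how many shards each node currently holds. One way to do this is with the `_cat` APIs (the exact columns vary by version):
- [source,console]
- ----
- GET _cat/nodes?v=true&h=name,node.role
- GET _cat/allocation?v=true
- ----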
- Using the <<cluster-update-settings,`cluster settings API`>>, update the
- <<cluster-max-shards-per-node-frozen,`cluster.max_shards_per_node.frozen`>> setting:
- [source,console]
- ----
- PUT _cluster/settings
- {
- "persistent" : {
- "cluster.max_shards_per_node.frozen": 3200
- }
- }
- ----
- This increase should only be temporary. As a long-term solution, we recommend
- you add nodes to the oversharded data tier or
- <<reduce-cluster-shard-count,reduce your cluster's shard count>> on nodes that belong
- to the frozen tier. To verify that the change has fixed the issue, you can get the current
- status of the `shards_capacity` indicator by checking the `frozen` section of the
- <<health-api-example,health API>>:
- [source,console]
- ----
- GET _health_report/shards_capacity
- ----
- The response will look like this:
- [source,console-result]
- ----
- {
-   "cluster_name": "...",
-   "indicators": {
-     "shards_capacity": {
-       "status": "green",
-       "symptom": "The cluster has enough room to add new shards.",
-       "details": {
-         "data": {
-           "max_shards_in_cluster": 1000
-         },
-         "frozen": {
-           "max_shards_in_cluster": 3200
-         }
-       }
-     }
-   }
- }
- ----
- // TESTRESPONSE[skip:the result is for illustrating purposes only]
- When a long-term solution is in place, we recommend you reset the
- `cluster.max_shards_per_node.frozen` limit.
- [source,console]
- ----
- PUT _cluster/settings
- {
- "persistent" : {
- "cluster.max_shards_per_node.frozen": null
- }
- }
- ----
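- Optionally, confirm that the override has been removed by retrieving the cluster settings. After the reset, `cluster.max_shards_per_node.frozen` should no longer appear in the `persistent` section of the response:
- [source,console]
- ----
- GET _cluster/settings?flat_settings=true
- ----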
- // end::frozen-nodes-self-managed[]