
How-to docs for increasing the total number of shards per node (#86214)

Co-authored-by: Lee Hinman <dakrone@users.noreply.github.com>
Co-authored-by: Leaf-Lin <39002973+Leaf-Lin@users.noreply.github.com>
Andrei Dan · 3 years ago · commit 21785c9a77

+ 2 - 0
docs/reference/cluster.asciidoc

@@ -82,6 +82,8 @@ include::cluster/get-settings.asciidoc[]
 
 include::cluster/health.asciidoc[]
 
+include::health/health.asciidoc[]
+
 include::cluster/reroute.asciidoc[]
 
 include::cluster/state.asciidoc[]

+ 1 - 1
docs/reference/health/health.asciidoc

@@ -61,7 +61,7 @@ include::{es-repo-dir}/rest-api/common-parms.asciidoc[tag=cluster-health-status]
 `components`::
     (object) Information about the health of the cluster components.
 
-[[cluster-health-api-example]]
+[[health-api-example]]
 ==== {api-examples-title}
 
 [source,console]

+ 1 - 0
docs/reference/how-to.asciidoc

@@ -30,3 +30,4 @@ include::how-to/fix-common-cluster-issues.asciidoc[]
 include::how-to/size-your-shards.asciidoc[]
 
 include::how-to/use-elasticsearch-for-time-series-data.asciidoc[]
+

+ 2 - 0
docs/reference/index.asciidoc

@@ -67,6 +67,8 @@ include::commands/index.asciidoc[]
 
 include::how-to.asciidoc[]
 
+include::troubleshooting.asciidoc[]
+
 include::rest-api/index.asciidoc[]
 
 include::migration/index.asciidoc[]

+ 40 - 0
docs/reference/tab-widgets/troubleshooting/data/increase-cluster-shard-limit-widget.asciidoc

@@ -0,0 +1,40 @@
+++++
+<div class="tabs" data-tab-group="host">
+  <div role="tablist" aria-label="Cluster shards limit">
+    <button role="tab"
+            aria-selected="true"
+            aria-controls="cloud-tab-cluster-total-shards"
+            id="cloud-cluster-total-shards">
+      Elasticsearch Service
+    </button>
+    <button role="tab"
+            aria-selected="false"
+            aria-controls="self-managed-tab-cluster-total-shards"
+            id="self-managed-cluster-total-shards"
+            tabindex="-1">
+      Self-managed
+    </button>
+  </div>
+  <div tabindex="0"
+       role="tabpanel"
+       id="cloud-tab-cluster-total-shards"
+       aria-labelledby="cloud-cluster-total-shards">
+++++
+
+include::increase-cluster-shard-limit.asciidoc[tag=cloud]
+
+++++
+  </div>
+  <div tabindex="0"
+       role="tabpanel"
+       id="self-managed-tab-cluster-total-shards"
+       aria-labelledby="self-managed-cluster-total-shards"
+       hidden="">
+++++
+
+include::increase-cluster-shard-limit.asciidoc[tag=self-managed]
+
+++++
+  </div>
+</div>
+++++

+ 192 - 0
docs/reference/tab-widgets/troubleshooting/data/increase-cluster-shard-limit.asciidoc

@@ -0,0 +1,192 @@
+//////////////////////////
+
+[source,console]
+--------------------------------------------------
+PUT my-index-000001
+
+--------------------------------------------------
+// TESTSETUP
+
+[source,console]
+--------------------------------------------------
+PUT _cluster/settings
+{
+  "persistent" : {
+    "cluster.routing.allocation.total_shards_per_node" : null
+  }
+}
+
+DELETE my-index-000001
+--------------------------------------------------
+// TEARDOWN
+
+//////////////////////////
+
+// tag::cloud[]
+To get the shards assigned, we'll need to increase the number of shards that
+can be collocated on a node in the cluster. We'll achieve this by inspecting
+the system-wide `cluster.routing.allocation.total_shards_per_node`
+<<cluster-get-settings, cluster setting>> and increasing the configured value.
+
+**Use {kib}**
+
+//tag::kibana-api-ex[]
+. Log in to the {ess-console}[{ecloud} console].
++
+
+. On the **Elasticsearch Service** panel, click the name of your deployment. 
++
+
+NOTE:
+If the name of your deployment is disabled, your {kib} instances might be
+unhealthy, in which case contact https://support.elastic.co[Elastic Support].
+Alternatively, your deployment might not include {kib}, in which case all you
+need to do is {cloud}/ec-access-kibana.html[enable Kibana first].
+
+. Open your deployment's side navigation menu (placed under the Elastic logo in the upper left corner)
+and go to **Dev Tools > Console**.
++
+[role="screenshot"]
+image::images/kibana-console.png[{kib} Console,align="center"]
+
+. Inspect the `cluster.routing.allocation.total_shards_per_node` <<cluster-get-settings, cluster setting>> 
+for the index with unassigned shards:
++
+[source,console]
+----
+GET /_cluster/settings?flat_settings
+----
++
+The response will look like this:
++
+[source,console-result]
+----
+{
+  "persistent": {
+    "cluster.routing.allocation.total_shards_per_node": "300" <1>
+  },
+  "transient": {}
+}
+----
+// TESTRESPONSE[skip:the result is for illustrating purposes only as don't want to change a cluster-wide setting]
+
++
+<1> Represents the current configured value for the total number of shards
+that can reside on one node in the system.
+
+. <<cluster-update-settings,Increase>> the value for the total number of shards 
+that can be assigned on one node to a higher value:
++
+[source,console]
+----
+PUT _cluster/settings
+{
+  "persistent" : {
+    "cluster.routing.allocation.total_shards_per_node" : 400 <1>
+  }
+}
+----
+// TEST[continued]
+
++
+<1> The new value for the system-wide `total_shards_per_node` setting is
+increased from the previous value of `300` to `400`.
+The `total_shards_per_node` setting can also be set to `null`, which removes
+the upper bound on how many shards can be collocated on one node in the
+system.
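As a sketch, the inspect-then-increase flow above can be automated. The helper below is hypothetical (not an Elasticsearch API): it takes the flat-settings response from `GET /_cluster/settings?flat_settings` and builds the body for the `PUT _cluster/settings` request, raising the limit by a chosen increment.

```python
SETTING = "cluster.routing.allocation.total_shards_per_node"

def raise_shard_limit(get_settings_response: dict, increment: int = 100) -> dict:
    """Build a PUT _cluster/settings body that raises the per-node shard limit.

    Reads the current value from a flat-settings GET response. Transient
    settings take precedence over persistent ones, mirroring how the
    cluster resolves them.
    """
    current = (get_settings_response.get("transient", {}).get(SETTING)
               or get_settings_response.get("persistent", {}).get(SETTING))
    if current is None:
        raise ValueError(f"{SETTING} is not set; there is no limit to raise")
    return {"persistent": {SETTING: int(current) + increment}}

# Using the example response shown above (limit currently 300):
response = {"persistent": {SETTING: "300"}, "transient": {}}
print(raise_shard_limit(response))
# {'persistent': {'cluster.routing.allocation.total_shards_per_node': 400}}
```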
+
+//end::kibana-api-ex[]
+// end::cloud[]
+
+// tag::self-managed[]
+To get the shards assigned, you can add more nodes to your {es} cluster and
+assign the index's target tier <<assign-data-tier, node role>> to the new
+nodes.
+
+To inspect which tier an index is targeting for assignment, use the
+<<indices-get-settings, get index setting>> API to retrieve the configured
+value for the `index.routing.allocation.include._tier_preference` setting:
+
+[source,console]
+----
+GET /my-index-000001/_settings/index.routing.allocation.include._tier_preference?flat_settings
+----
+// TEST[continued]
+
+
+The response will look like this:
+
+[source,console-result]
+----
+{
+  "my-index-000001": {
+    "settings": {
+      "index.routing.allocation.include._tier_preference": "data_warm,data_hot" <1>
+    }
+  }
+}
+----
+// TESTRESPONSE[skip:the result is for illustrating purposes only]
+
+
+<1> A comma-separated list of the data tier node roles the index is allowed to
+be allocated on. The first tier in the list has the highest priority and is
+the tier the index is targeting. In this example, the tier preference is
+`data_warm,data_hot`, so the index is targeting the `warm` tier and more nodes
+with the `data_warm` role are needed in the {es} cluster.
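The tier-targeting rule above can be sketched in a few lines of Python. The helper is hypothetical (not part of Elasticsearch): given the `_tier_preference` value, the first entry is the targeted tier, so that is the node role to add capacity for.

```python
def target_tier(tier_preference: str) -> str:
    """Return the data tier node role the index is targeting.

    The tier preference is a comma-separated list ordered by priority,
    so the first entry is the tier the index wants to be allocated on.
    """
    tiers = [t.strip() for t in tier_preference.split(",") if t.strip()]
    if not tiers:
        raise ValueError("empty tier preference")
    return tiers[0]

# For the example response above, the index targets the warm tier:
print(target_tier("data_warm,data_hot"))  # data_warm
```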
+
+
+Alternatively, if adding more nodes to the {es} cluster is not desired,
+inspect the system-wide `cluster.routing.allocation.total_shards_per_node`
+<<cluster-get-settings, cluster setting>> and increase the configured value:
+
+
+. Inspect the `cluster.routing.allocation.total_shards_per_node` <<cluster-get-settings, cluster setting>> 
+for the index with unassigned shards:
++
+[source,console]
+----
+GET /_cluster/settings?flat_settings
+----
++
+The response will look like this:
++
+[source,console-result]
+----
+{
+  "persistent": {
+    "cluster.routing.allocation.total_shards_per_node": "300" <1>
+  },
+  "transient": {}
+}
+----
+// TESTRESPONSE[skip:the result is for illustrating purposes only as don't want to change a cluster-wide setting]
+
++
+<1> Represents the current configured value for the total number of shards
+that can reside on one node in the system.
+
+. <<cluster-update-settings,Increase>> the value for the total number of shards 
+that can be assigned on one node to a higher value:
++
+[source,console]
+----
+PUT _cluster/settings
+{
+  "persistent" : {
+    "cluster.routing.allocation.total_shards_per_node" : 400 <1>
+  }
+}
+----
+// TEST[continued]
+
++
+<1> The new value for the system-wide `total_shards_per_node` setting is
+increased from the previous value of `300` to `400`.
+The `total_shards_per_node` setting can also be set to `null`, which removes
+the upper bound on how many shards can be collocated on one node in the
+system.
+
+// end::self-managed[]
+

+ 40 - 0
docs/reference/tab-widgets/troubleshooting/data/total-shards-per-node-widget.asciidoc

@@ -0,0 +1,40 @@
+++++
+<div class="tabs" data-tab-group="host">
+  <div role="tablist" aria-label="Total shards per node">
+    <button role="tab"
+            aria-selected="true"
+            aria-controls="cloud-tab-total-shards"
+            id="cloud-total-shards">
+      Elasticsearch Service
+    </button>
+    <button role="tab"
+            aria-selected="false"
+            aria-controls="self-managed-tab-total-shards"
+            id="self-managed-total-shards"
+            tabindex="-1">
+      Self-managed
+    </button>
+  </div>
+  <div tabindex="0"
+       role="tabpanel"
+       id="cloud-tab-total-shards"
+       aria-labelledby="cloud-total-shards">
+++++
+
+include::total-shards-per-node.asciidoc[tag=cloud]
+
+++++
+  </div>
+  <div tabindex="0"
+       role="tabpanel"
+       id="self-managed-tab-total-shards"
+       aria-labelledby="self-managed-total-shards"
+       hidden="">
+++++
+
+include::total-shards-per-node.asciidoc[tag=self-managed]
+
+++++
+  </div>
+</div>
+++++

+ 186 - 0
docs/reference/tab-widgets/troubleshooting/data/total-shards-per-node.asciidoc

@@ -0,0 +1,186 @@
+//////////////////////////
+
+[source,console]
+--------------------------------------------------
+PUT my-index-000001
+{
+  "settings": {
+    "index.routing.allocation.total_shards_per_node": "1"
+  }
+}
+
+--------------------------------------------------
+// TESTSETUP
+
+[source,console]
+--------------------------------------------------
+DELETE my-index-000001
+--------------------------------------------------
+// TEARDOWN
+
+//////////////////////////
+
+// tag::cloud[]
+To get the shards assigned, we'll need to increase the number of shards that
+can be collocated on a node. We'll achieve this by inspecting the
+`index.routing.allocation.total_shards_per_node`
+<<indices-get-settings, index setting>> and increasing the configured value
+for the indices that have unassigned shards.
+
+
+**Use {kib}**
+
+//tag::kibana-api-ex[]
+. Log in to the {ess-console}[{ecloud} console].
++
+
+. On the **Elasticsearch Service** panel, click the name of your deployment. 
++
+
+NOTE:
+If the name of your deployment is disabled, your {kib} instances might be
+unhealthy, in which case contact https://support.elastic.co[Elastic Support].
+Alternatively, your deployment might not include {kib}, in which case all you
+need to do is {cloud}/ec-access-kibana.html[enable Kibana first].
+
+. Open your deployment's side navigation menu (placed under the Elastic logo in the upper left corner)
+and go to **Dev Tools > Console**.
++
+[role="screenshot"]
+image::images/kibana-console.png[{kib} Console,align="center"]
+
+. Inspect the `index.routing.allocation.total_shards_per_node` <<indices-get-settings, index setting>> 
+for the index with unassigned shards:
++
+[source,console]
+----
+GET /my-index-000001/_settings/index.routing.allocation.total_shards_per_node?flat_settings
+----
++
+The response will look like this:
++
+[source,console-result]
+----
+{
+  "my-index-000001": {
+    "settings": {
+      "index.routing.allocation.total_shards_per_node": "1" <1>
+    }
+  }
+}
+----
++
+<1> Represents the current configured value for the total number of shards
+that can reside on one node for the `my-index-000001` index.
+
+. <<indices-update-settings,Increase>> the value for the total number of shards 
+that can be assigned on one node to a higher value:
++
+[source,console]
+----
+PUT /my-index-000001/_settings
+{
+  "index" : {
+    "routing.allocation.total_shards_per_node" : "2" <1>
+  }
+}
+----
+// TEST[continued]
+
++
+<1> The new value for the `total_shards_per_node` setting for the
+`my-index-000001` index is increased from the previous value of `1` to `2`.
+The `total_shards_per_node` setting can also be set to `-1`, which removes the
+upper bound on how many shards of the same index can reside on one node.
+
+//end::kibana-api-ex[]
+// end::cloud[]
+
+// tag::self-managed[]
+To get the shards assigned, you can add more nodes to your {es} cluster and
+assign the index's target tier <<assign-data-tier, node role>> to the new
+nodes.
+
+To inspect which tier an index is targeting for assignment, use the
+<<indices-get-settings, get index setting>> API to retrieve the configured
+value for the `index.routing.allocation.include._tier_preference` setting:
+
+[source,console]
+----
+GET /my-index-000001/_settings/index.routing.allocation.include._tier_preference?flat_settings
+----
+// TEST[continued]
+
+
+The response will look like this:
+
+[source,console-result]
+----
+{
+  "my-index-000001": {
+    "settings": {
+      "index.routing.allocation.include._tier_preference": "data_warm,data_hot" <1>
+    }
+  }
+}
+----
+// TESTRESPONSE[skip:the result is for illustrating purposes only]
+
+
+<1> A comma-separated list of the data tier node roles the index is allowed to
+be allocated on. The first tier in the list has the highest priority and is
+the tier the index is targeting. In this example, the tier preference is
+`data_warm,data_hot`, so the index is targeting the `warm` tier and more nodes
+with the `data_warm` role are needed in the {es} cluster.
+
+
+Alternatively, if adding more nodes to the {es} cluster is not desired,
+inspect the `index.routing.allocation.total_shards_per_node`
+<<indices-get-settings, index setting>> and increase the configured value to
+allow more shards of the index to be assigned on the same node.
+
+. Inspect the `index.routing.allocation.total_shards_per_node` <<indices-get-settings, index setting>> 
+for the index with unassigned shards:
++
+[source,console]
+----
+GET /my-index-000001/_settings/index.routing.allocation.total_shards_per_node?flat_settings
+----
+
++
+The response will look like this:
+
++
+[source,console-result]
+----
+{
+  "my-index-000001": {
+    "settings": {
+      "index.routing.allocation.total_shards_per_node": "1" <1>
+    }
+  }
+}
+----
+
++
+<1> Represents the current configured value for the total number of shards
+that can reside on one node for the `my-index-000001` index.
+
+. <<indices-update-settings,Increase>> the total number of shards that can be assigned on one node or
+reset the value to unbounded (`-1`):
++
+[source,console]
+----
+PUT /my-index-000001/_settings
+{
+  "index" : {
+    "routing.allocation.total_shards_per_node" : -1
+  }
+}
+----
+// TEST[continued]
+
+// end::self-managed[]
+

+ 17 - 0
docs/reference/troubleshooting.asciidoc

@@ -0,0 +1,17 @@
+[[troubleshooting]]
+= Troubleshooting
+
+[partintro]
+--
+This section provides a series of troubleshooting guides to help you fix
+problems that an Elasticsearch deployment might encounter.
+
+The <<health-api,health API>> can help you report and diagnose the problems
+presented in this section.
+--
+
+include::troubleshooting/data/increase-shard-limit.asciidoc[]
+
+include::troubleshooting/data/increase-cluster-shard-limit.asciidoc[]
+
+

+ 20 - 0
docs/reference/troubleshooting/data/increase-cluster-shard-limit.asciidoc

@@ -0,0 +1,20 @@
+[[increase-cluster-shard-limit]]
+== Total number of shards per node has been reached
+
+Elasticsearch tries to take advantage of all the available resources by
+distributing data (index shards) among the cluster nodes.
+
+Users might want to influence this data distribution by configuring the
+<<cluster-total-shards-per-node,`cluster.routing.allocation.total_shards_per_node`>>
+system setting to restrict the number of shards that can be hosted on a single
+node in the system, regardless of the index.
+Configurations limiting how many shards can be hosted on a single node can
+lead to shards being unassigned when the cluster doesn't have enough nodes to
+satisfy the configuration.
+
+To fix this, follow these steps:
+
+include::{es-repo-dir}/tab-widgets/troubleshooting/data/increase-cluster-shard-limit-widget.asciidoc[]
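To see why shards go unassigned, note that under this setting the cluster can hold at most `nodes × total_shards_per_node` shards. The helpers below are a hypothetical back-of-the-envelope sketch (the shard counts are made-up examples), not an Elasticsearch API:

```python
import math

def min_limit_for_full_assignment(total_shards: int, nodes: int) -> int:
    """Smallest total_shards_per_node value at which every shard fits."""
    return math.ceil(total_shards / nodes)

def unassigned_shards(total_shards: int, nodes: int, limit: int) -> int:
    """Shards left unassigned when each node can hold at most `limit` shards."""
    return max(0, total_shards - nodes * limit)

# e.g. 1000 shards on a 3-node cluster with the limit set to 300:
print(unassigned_shards(1000, nodes=3, limit=300))   # 100
print(min_limit_for_full_assignment(1000, nodes=3))  # 334
```

In this made-up scenario, raising the setting to at least `334` (or adding a fourth node) would let every shard be assigned.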
+
+
+

+ 18 - 0
docs/reference/troubleshooting/data/increase-shard-limit.asciidoc

@@ -0,0 +1,18 @@
+[[increase-shard-limit]]
+== Total number of shards for an index on a single node exceeded 
+
+Elasticsearch tries to take advantage of all the available resources by 
+distributing data (index shards) among nodes in the cluster.
+
+Users might want to influence this data distribution by configuring the
+<<total-shards-per-node,`index.routing.allocation.total_shards_per_node`>>
+index setting to a custom value (e.g. `1` for a highly trafficked index).
+Configurations limiting how many shards of an index can be located on one node
+can lead to shards being unassigned when the cluster doesn't have enough nodes
+to satisfy the index configuration.
+
+To fix this, follow these steps:
+
+include::{es-repo-dir}/tab-widgets/troubleshooting/data/total-shards-per-node-widget.asciidoc[]
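For the index-level limit, the arithmetic is similar: an index with `p` primaries and `r` replicas has `p × (1 + r)` shards, and if at most `k` of them may share a node, the cluster needs at least `⌈p × (1 + r) / k⌉` nodes. A hypothetical sketch (this gives a lower bound only, since Elasticsearch also never places a replica on the same node as its primary):

```python
import math

def nodes_needed(primaries: int, replicas: int, per_node_limit: int) -> int:
    """Lower bound on the number of nodes required to assign every shard of
    an index when at most `per_node_limit` of its shards may share a node."""
    total_shards = primaries * (1 + replicas)
    return math.ceil(total_shards / per_node_limit)

# e.g. 3 primaries with 1 replica and total_shards_per_node set to 1
# need 6 nodes; raising the limit to 2 halves that requirement:
print(nodes_needed(3, replicas=1, per_node_limit=1))  # 6
print(nodes_needed(3, replicas=1, per_node_limit=2))  # 3
```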
+
+
+