//////////////////////////
[source,console]
--------------------------------------------------
PUT my-index-000001
--------------------------------------------------
// TESTSETUP

[source,console]
--------------------------------------------------
PUT _cluster/settings
{
  "persistent" : {
    "cluster.routing.allocation.total_shards_per_node" : null
  }
}

DELETE my-index-000001
--------------------------------------------------
// TEARDOWN
//////////////////////////

// tag::cloud[]
In order to get the shards assigned, we'll need to increase the number of shards
that can be collocated on a node in the cluster.
We'll achieve this by inspecting the system-wide `cluster.routing.allocation.total_shards_per_node`
<<cluster-get-settings, cluster setting>> and increasing the configured value.

**Use {kib}**

//tag::kibana-api-ex[]
. Log in to the {ess-console}[{ecloud} console].

. On the **Elasticsearch Service** panel, click the name of your deployment.
+
NOTE: If the name of your deployment is disabled, your {kib} instances might be
unhealthy, in which case please contact https://support.elastic.co[Elastic Support].
If your deployment doesn't include {kib}, all you need to do is
{cloud}/ec-access-kibana.html[enable it first].

. Open your deployment's side navigation menu (placed under the Elastic logo in the upper left corner)
and go to **Dev Tools > Console**.
+
[role="screenshot"]
image::images/kibana-console.png[{kib} Console,align="center"]

. Inspect the `cluster.routing.allocation.total_shards_per_node` <<cluster-get-settings, cluster setting>>:
+
[source,console]
----
GET /_cluster/settings?flat_settings
----
+
The response will look like this:
+
[source,console-result]
----
{
  "persistent": {
    "cluster.routing.allocation.total_shards_per_node": "300" <1>
  },
  "transient": {}
}
----
// TESTRESPONSE[skip:the result is for illustrating purposes only as don't want to change a cluster-wide setting]
+
<1> Represents the current configured value for the total number of shards
that can reside on one node in the system.

. <<cluster-update-settings,Increase>> the value for the total number of shards
that can be assigned on one node to a higher value:
+
[source,console]
----
PUT _cluster/settings
{
  "persistent" : {
    "cluster.routing.allocation.total_shards_per_node" : 400 <1>
  }
}
----
// TEST[continued]
+
<1> The new value for the system-wide `total_shards_per_node` configuration
is increased from the previous value of `300` to `400`.
The `total_shards_per_node` configuration can also be set to `null`, which
removes the upper bound on how many shards can be collocated on one node
in the system.
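
If you later decide to remove the cluster-wide limit altogether, a minimal sketch
of the request that resets the setting back to `null` (no upper bound):

[source,console]
----
PUT _cluster/settings
{
  "persistent" : {
    "cluster.routing.allocation.total_shards_per_node" : null
  }
}
----
// TEST[continued]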
//end::kibana-api-ex[]
// end::cloud[]

// tag::self-managed[]
In order to get the shards assigned, you can add more nodes to your {es} cluster
and assign the index's target tier <<assign-data-tier, node role>> to the new
nodes.

To inspect which tier an index is targeting for assignment, use the <<indices-get-settings, get index setting>>
API to retrieve the configured value for the `index.routing.allocation.include._tier_preference`
setting:

[source,console]
----
GET /my-index-000001/_settings/index.routing.allocation.include._tier_preference?flat_settings
----
// TEST[continued]

The response will look like this:

[source,console-result]
----
{
  "my-index-000001": {
    "settings": {
      "index.routing.allocation.include._tier_preference": "data_warm,data_hot" <1>
    }
  }
}
----
// TESTRESPONSE[skip:the result is for illustrating purposes only]
<1> Represents a comma-separated list of data tier node roles this index is allowed
to be allocated on. The first tier in the list has the highest priority and is
the tier the index is targeting. In this example the tier preference is
`data_warm,data_hot`, so the index is targeting the `warm` tier and more nodes
with the `data_warm` role are needed in the {es} cluster.
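
For illustration, a minimal sketch of how a newly added node could be granted the
`data_warm` role through its `elasticsearch.yml` configuration (the exact role
set depends on your architecture, so treat this as an example rather than a
recommended configuration):

[source,yaml]
----
# Grant this node the data_warm role so the warm tier gains capacity.
node.roles: [ data_warm ]
----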

Alternatively, if adding more nodes to the {es} cluster is not desired, you can
inspect the system-wide `cluster.routing.allocation.total_shards_per_node`
<<cluster-get-settings, cluster setting>> and increase the configured value:

. Inspect the `cluster.routing.allocation.total_shards_per_node` <<cluster-get-settings, cluster setting>>:
+
[source,console]
----
GET /_cluster/settings?flat_settings
----
+
The response will look like this:
+
[source,console-result]
----
{
  "persistent": {
    "cluster.routing.allocation.total_shards_per_node": "300" <1>
  },
  "transient": {}
}
----
// TESTRESPONSE[skip:the result is for illustrating purposes only as don't want to change a cluster-wide setting]
+
<1> Represents the current configured value for the total number of shards
that can reside on one node in the system.

. <<cluster-update-settings,Increase>> the value for the total number of shards
that can be assigned on one node to a higher value:
+
[source,console]
----
PUT _cluster/settings
{
  "persistent" : {
    "cluster.routing.allocation.total_shards_per_node" : 400 <1>
  }
}
----
// TEST[continued]
+
<1> The new value for the system-wide `total_shards_per_node` configuration
is increased from the previous value of `300` to `400`.
The `total_shards_per_node` configuration can also be set to `null`, which
removes the upper bound on how many shards can be collocated on one node
in the system.
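
If you prefer to remove the cluster-wide limit altogether, a minimal sketch of
the request that resets the setting back to `null` (no upper bound):

[source,console]
----
PUT _cluster/settings
{
  "persistent" : {
    "cluster.routing.allocation.total_shards_per_node" : null
  }
}
----
// TEST[continued]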
// end::self-managed[]