
[[red-yellow-cluster-status]]
=== Red or yellow cluster status

A red or yellow cluster status indicates one or more shards are missing or
unallocated. These unassigned shards increase your risk of data loss and can
degrade cluster performance.

[discrete]
[[diagnose-cluster-status]]
==== Diagnose your cluster status

**Check your cluster status**

Use the <<cluster-health,cluster health API>>.

[source,console]
----
GET _cluster/health?filter_path=status,*_shards
----

A healthy cluster has a green `status` and zero `unassigned_shards`. A yellow
status means only replicas are unassigned. A red status means one or
more primary shards are unassigned.

**View unassigned shards**

To view unassigned shards, use the <<cat-shards,cat shards API>>.

[source,console]
----
GET _cat/shards?v=true&h=index,shard,prirep,state,node,unassigned.reason&s=state
----

Unassigned shards have a `state` of `UNASSIGNED`. The `prirep` value is `p` for
primary shards and `r` for replicas.

To understand why an unassigned shard is not being assigned and what action
you must take to allow {es} to assign it, use the
<<cluster-allocation-explain,cluster allocation explanation API>>.

[source,console]
----
GET _cluster/allocation/explain?filter_path=index,node_allocation_decisions.node_name,node_allocation_decisions.deciders.*
{
  "index": "my-index",
  "shard": 0,
  "primary": false,
  "current_node": "my-node"
}
----
// TEST[s/^/PUT my-index\n/]
// TEST[s/"primary": false,/"primary": false/]
// TEST[s/"current_node": "my-node"//]

[discrete]
[[fix-red-yellow-cluster-status]]
==== Fix a red or yellow cluster status

A shard can become unassigned for several reasons. The following tips outline the
most common causes and their solutions.

**Re-enable shard allocation**

You typically disable allocation during a <<restart-cluster,restart>> or other
cluster maintenance. If you forgot to re-enable allocation afterward, {es} will
be unable to assign shards. To re-enable allocation, reset the
`cluster.routing.allocation.enable` cluster setting.

[source,console]
----
PUT _cluster/settings
{
  "persistent" : {
    "cluster.routing.allocation.enable" : null
  }
}
----

**Recover lost nodes**

Shards often become unassigned when a data node leaves the cluster. This can
occur for several reasons, ranging from connectivity issues to hardware failure.
After you resolve the issue and recover the node, it will rejoin the cluster.
{es} will then automatically allocate any unassigned shards.

To avoid wasting resources on temporary issues, {es} <<delayed-allocation,delays
allocation>> by one minute by default. If you've recovered a node and don't want
to wait for the delay period, you can call the <<cluster-reroute,cluster reroute
API>> with no arguments to start the allocation process. The process runs
asynchronously in the background.

[source,console]
----
POST _cluster/reroute?metric=none
----

**Fix allocation settings**

Misconfigured allocation settings can result in an unassigned primary shard.
These settings include:

* <<shard-allocation-filtering,Shard allocation>> index settings
* <<cluster-shard-allocation-filtering,Allocation filtering>> cluster settings
* <<shard-allocation-awareness,Allocation awareness>> cluster settings

To review your allocation settings, use the <<indices-get-settings,get index
settings>> and <<cluster-get-settings,cluster get settings>> APIs.

[source,console]
----
GET my-index/_settings?flat_settings=true&include_defaults=true

GET _cluster/settings?flat_settings=true&include_defaults=true
----
// TEST[s/^/PUT my-index\n/]

You can change the settings using the <<indices-update-settings,update index
settings>> and <<cluster-update-settings,cluster update settings>> APIs.
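
For example, if a leftover <<shard-allocation-filtering,allocation filter>> is
blocking assignment, one option is to reset that setting to `null`. The
`_name` attribute and `my-index` below are placeholders; use the index and
setting you identified in the previous step.

[source,console]
----
PUT my-index/_settings
{
  "index.routing.allocation.require._name": null
}
----
// TEST[s/^/PUT my-index\n/]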

**Allocate or reduce replicas**

To protect against hardware failure, {es} will not assign a replica to the same
node as its primary shard. If no other data nodes are available to host the
replica, it remains unassigned. To fix this, you can:

* Add a data node to the same tier to host the replica.

* Change the `index.number_of_replicas` index setting to reduce the number of
replicas for each primary shard. We recommend keeping at least one replica per
primary.
+
[source,console]
----
PUT _settings
{
  "index.number_of_replicas": 1
}
----
// TEST[s/^/PUT my-index\n/]

**Free up or increase disk space**

{es} uses a <<disk-based-shard-allocation,low disk watermark>> to ensure data
nodes have enough disk space for incoming shards. By default, {es} does not
allocate shards to nodes using more than 85% of disk space.

To check the current disk space of your nodes, use the <<cat-allocation,cat
allocation API>>.

[source,console]
----
GET _cat/allocation?v=true&h=node,shards,disk.*
----

If your nodes are running low on disk space, you have a few options:

* Upgrade your nodes to increase disk space.

* Delete unneeded indices to free up space. If you use {ilm-init}, you can
update your lifecycle policy to use <<ilm-searchable-snapshot,searchable
snapshots>> or add a delete phase. If you no longer need to search the data, you
can use a <<snapshot-restore,snapshot>> to store it off-cluster.

* If you no longer write to an index, use the <<indices-forcemerge,force merge
API>> or {ilm-init}'s <<ilm-forcemerge,force merge action>> to merge its
segments into larger ones.
+
[source,console]
----
POST my-index/_forcemerge
----
// TEST[s/^/PUT my-index\n/]

* If an index is read-only, use the <<indices-shrink-index,shrink index API>> or
{ilm-init}'s <<ilm-shrink,shrink action>> to reduce its primary shard count.
+
[source,console]
----
POST my-index/_shrink/my-shrunken-index
----
// TEST[s/^/PUT my-index\n{"settings":{"index.number_of_shards":2,"blocks.write":true}}\n/]

* If your node has a large disk capacity, you can increase the low disk
watermark or set it to an explicit byte value.
+
[source,console]
----
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.disk.watermark.low": "30gb"
  }
}
----
// TEST[s/"30gb"/null/]

**Reduce JVM memory pressure**

Shard allocation requires JVM heap memory. High JVM memory pressure can trigger
<<circuit-breaker,circuit breakers>> that stop allocation and leave shards
unassigned. See <<high-jvm-memory-pressure>>.
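
As a quick check, you can view each node's current heap usage with the
<<cluster-nodes-stats,nodes stats API>>. The `filter_path` parameter below is
optional and only narrows the response to the relevant fields.

[source,console]
----
GET _nodes/stats?filter_path=nodes.*.name,nodes.*.jvm.mem.heap_used_percent
----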

**Recover data for a lost primary shard**

If a node containing a primary shard is lost, {es} can typically replace it
using a replica on another node. If you can't recover the node and replicas
don't exist or are irrecoverable, you'll need to re-add the missing data from a
<<snapshot-restore,snapshot>> or the original data source.

WARNING: Only use this option if node recovery is no longer possible. This
process allocates an empty primary shard. If the node later rejoins the cluster,
{es} will overwrite its primary shard with data from this newer empty shard,
resulting in data loss.

Use the <<cluster-reroute,cluster reroute API>> to manually allocate the
unassigned primary shard to another data node in the same tier. Set
`accept_data_loss` to `true`.

[source,console]
----
POST _cluster/reroute?metric=none
{
  "commands": [
    {
      "allocate_empty_primary": {
        "index": "my-index",
        "shard": 0,
        "node": "my-node",
        "accept_data_loss": "true"
      }
    }
  ]
}
----
// TEST[s/^/PUT my-index\n/]
// TEST[catch:bad_request]

If you backed up the missing index data to a snapshot, use the
<<restore-snapshot-api,restore snapshot API>> to restore the individual index.
Alternatively, you can index the missing data from the original data source.
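
For example, assuming the data was backed up to a snapshot named `my_snapshot`
in a registered repository named `my_repository` (both placeholder names), a
restore of just the affected index could look like the following. Delete or
close the existing red index first, because {es} can't restore into an open
index with the same name.

[source,console]
----
POST _snapshot/my_repository/my_snapshot/_restore
{
  "indices": "my-index"
}
----
// TEST[skip:requires an existing snapshot repository]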