[[cluster-allocation-explain]]
=== Cluster allocation explain API
++++
<titleabbrev>Cluster allocation explain</titleabbrev>
++++

.New API reference
[sidebar]
--
For the most up-to-date API details, refer to {api-es}/group/endpoint-cluster[Cluster APIs].
--

Provides an explanation for a shard's current <<index-modules-allocation,allocation>>.

[source,console]
----
GET _cluster/allocation/explain
{
  "index": "my-index-000001",
  "shard": 0,
  "primary": false,
  "current_node": "my-node"
}
----
// TEST[setup:my_index]
// TEST[s/"primary": false,/"primary": false/]
// TEST[s/"current_node": "my-node"//]

[[cluster-allocation-explain-api-request]]
==== {api-request-title}

`GET _cluster/allocation/explain`

`POST _cluster/allocation/explain`

[[cluster-allocation-explain-api-prereqs]]
==== {api-prereq-title}

* If the {es} {security-features} are enabled, you must have the `monitor` or
`manage` <<privileges-list-cluster,cluster privilege>> to use this API.

[[cluster-allocation-explain-api-desc]]
==== {api-description-title}

The purpose of the cluster allocation explain API is to provide
explanations for shard allocations in the cluster. For unassigned shards,
the explain API provides an explanation for why the shard is unassigned.
For assigned shards, the explain API provides an explanation for why the
shard is remaining on its current node and has not moved or rebalanced to
another node. This API can be very useful when attempting to diagnose why a
shard is unassigned or why a shard continues to remain on its current node when
you might expect otherwise.

[[cluster-allocation-explain-api-query-params]]
==== {api-query-parms-title}

`include_disk_info`::
(Optional, Boolean) If `true`, returns information about disk usage and
shard sizes. Defaults to `false`.

`include_yes_decisions`::
(Optional, Boolean) If `true`, returns `YES` decisions in the explanation.
Defaults to `false`.
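
For example, the following request (a sketch that reuses the example index from the start of this page) asks for disk usage information and the `YES` decisions in addition to the standard explanation:

[source,console]
----
GET _cluster/allocation/explain?include_disk_info=true&include_yes_decisions=true
{
  "index": "my-index-000001",
  "shard": 0,
  "primary": true
}
----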

[[cluster-allocation-explain-api-request-body]]
==== {api-request-body-title}

`current_node`::
(Optional, string) Specifies the node ID or the name of the node currently
holding the shard to explain. To explain an unassigned shard, omit this
parameter.

`index`::
(Optional, string) Specifies the name of the index that you would like an
explanation for.

`primary`::
(Optional, Boolean) If `true`, returns explanation for the primary shard
for the given shard ID.

`shard`::
(Optional, integer) Specifies the ID of the shard that you would like an
explanation for.
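
Putting the body parameters together, a request for an unassigned shard copy might look like the following sketch. `current_node` is omitted because the shard copy in question is not assigned to any node:

[source,console]
----
POST _cluster/allocation/explain
{
  "index": "my-index-000001",
  "shard": 0,
  "primary": true
}
----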

[[cluster-allocation-explain-api-examples]]
==== {api-examples-title}

===== Unassigned primary shard

====== Conflicting settings

The following request gets an allocation explanation for an unassigned primary
shard.

////
[source,console]
----
PUT my-index-000001?master_timeout=1s&timeout=1s
{
  "settings": {
    "index.routing.allocation.include._name": "nonexistent_node",
    "index.routing.allocation.include._tier_preference": null
  }
}
----
////

[source,console]
----
GET _cluster/allocation/explain
{
  "index": "my-index-000001",
  "shard": 0,
  "primary": true
}
----
// TEST[continued]

The API response indicates the shard can only be allocated to a nonexistent
node.

[source,console-result]
----
{
  "index" : "my-index-000001",
  "shard" : 0,
  "primary" : true,
  "current_state" : "unassigned", <1>
  "unassigned_info" : {
    "reason" : "INDEX_CREATED", <2>
    "at" : "2017-01-04T18:08:16.600Z",
    "last_allocation_status" : "no"
  },
  "can_allocate" : "no", <3>
  "allocate_explanation" : "Elasticsearch isn't allowed to allocate this shard to any of the nodes in the cluster. Choose a node to which you expect this shard to be allocated, find this node in the node-by-node explanation, and address the reasons which prevent Elasticsearch from allocating this shard there.",
  "node_allocation_decisions" : [
    {
      "node_id" : "8qt2rY-pT6KNZB3-hGfLnw",
      "node_name" : "node-0",
      "transport_address" : "127.0.0.1:9401",
      "roles" : ["data", "data_cold", "data_content", "data_frozen", "data_hot", "data_warm", "ingest", "master", "ml", "remote_cluster_client", "transform"],
      "node_attributes" : {},
      "node_decision" : "no", <4>
      "weight_ranking" : 1,
      "deciders" : [
        {
          "decider" : "filter", <5>
          "decision" : "NO",
          "explanation" : "node does not match index setting [index.routing.allocation.include] filters [_name:\"nonexistent_node\"]" <6>
        }
      ]
    }
  ]
}
----
// TESTRESPONSE[s/"at" : "[^"]*"/"at" : $body.$_path/]
// TESTRESPONSE[s/"node_id" : "[^"]*"/"node_id" : $body.$_path/]
// TESTRESPONSE[s/"transport_address" : "[^"]*"/"transport_address" : $body.$_path/]
// TESTRESPONSE[s/"roles" : \[("[a-z_]*",)*("[a-z_]*")\]/"roles" : $body.$_path/]
// TESTRESPONSE[s/"node_attributes" : \{\}/"node_attributes" : $body.$_path/]
<1> The current state of the shard.
<2> The reason for the shard originally becoming unassigned.
<3> Whether the shard can currently be allocated.
<4> Whether the shard can be allocated to this particular node.
<5> The decider which led to the `no` decision for the node.
<6> An explanation as to why the decider returned a `no` decision, with a helpful hint pointing to the setting that led to the decision. In this example, a newly created index has <<indices-get-settings,an index setting>> that requires that it only be allocated to a node named `nonexistent_node`, which does not exist, so the index is unable to allocate.

See https://www.youtube.com/watch?v=5z3n2VgusLE[this video] for a walkthrough of troubleshooting a node and index setting mismatch.
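
In a case like this, one possible fix (a sketch, assuming the filter itself is the mistake rather than the node name) is to remove the conflicting index-level filter so that the shard can be allocated to an existing node:

[source,console]
----
PUT my-index-000001/_settings
{
  "index.routing.allocation.include._name": null
}
----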

[[maximum-number-of-retries-exceeded]]
====== Maximum number of retries exceeded

The following response contains an allocation explanation for an unassigned
primary shard that has reached the maximum number of allocation retry attempts.

[source,js]
----
{
  "index" : "my-index-000001",
  "shard" : 0,
  "primary" : true,
  "current_state" : "unassigned",
  "unassigned_info" : {
    "at" : "2017-01-04T18:03:28.464Z",
    "details" : "failed shard on node [mEKjwwzLT1yJVb8UxT6anw]: failed recovery, failure RecoveryFailedException",
    "reason": "ALLOCATION_FAILED",
    "failed_allocation_attempts": 5,
    "last_allocation_status": "no"
  },
  "can_allocate": "no",
  "allocate_explanation": "cannot allocate because allocation is not permitted to any of the nodes",
  "node_allocation_decisions" : [
    {
      "node_id" : "3sULLVJrRneSg0EfBB-2Ew",
      "node_name" : "node_t0",
      "transport_address" : "127.0.0.1:9400",
      "roles" : ["data_content", "data_hot"],
      "node_decision" : "no",
      "store" : {
        "matching_size" : "4.2kb",
        "matching_size_in_bytes" : 4325
      },
      "deciders" : [
        {
          "decider": "max_retry",
          "decision" : "NO",
          "explanation": "shard has exceeded the maximum number of retries [5] on failed allocation attempts - manually call [POST /_cluster/reroute?retry_failed&metric=none] to retry, [unassigned_info[[reason=ALLOCATION_FAILED], at[2024-07-30T21:04:12.166Z], failed_attempts[5], failed_nodes[[mEKjwwzLT1yJVb8UxT6anw]], delayed=false, details[failed shard on node [mEKjwwzLT1yJVb8UxT6anw]: failed recovery, failure RecoveryFailedException], allocation_status[deciders_no]]]"
        }
      ]
    }
  ]
}
----
// NOTCONSOLE

When Elasticsearch is unable to allocate a shard, it will attempt to retry allocation up to
the maximum number of retries allowed. After this, Elasticsearch will stop attempting to
allocate the shard in order to prevent infinite retries which may impact cluster
performance. Run the <<cluster-reroute,cluster reroute>> API to retry allocation, which
will allocate the shard if the issue preventing allocation has been resolved.
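
For example, once the underlying problem has been addressed, the following request (mirroring the command suggested in the decider output above) asks {es} to retry the shards that previously exceeded the retry limit:

[source,console]
----
POST _cluster/reroute?retry_failed&metric=none
----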

====== No valid shard copy

The following response contains an allocation explanation for an unassigned
primary shard that was previously allocated.

[source,js]
----
{
  "index" : "my-index-000001",
  "shard" : 0,
  "primary" : true,
  "current_state" : "unassigned",
  "unassigned_info" : {
    "reason" : "NODE_LEFT",
    "at" : "2017-01-04T18:03:28.464Z",
    "details" : "node_left[OIWe8UhhThCK0V5XfmdrmQ]",
    "last_allocation_status" : "no_valid_shard_copy"
  },
  "can_allocate" : "no_valid_shard_copy",
  "allocate_explanation" : "Elasticsearch can't allocate this shard because there are no copies of its data in the cluster. Elasticsearch will allocate this shard when a node holding a good copy of its data joins the cluster. If no such node is available, restore this index from a recent snapshot."
}
----
// NOTCONSOLE

If a shard is unassigned with an allocation status of `no_valid_shard_copy`, then you should <<fix-cluster-status-recover-nodes,make sure that all nodes are in the cluster>>. If all the nodes containing in-sync copies of a shard are lost, then you can <<fix-cluster-status-restore,recover the data for the shard>>.

See https://www.youtube.com/watch?v=6OAg9IyXFO4[this video] for a walkthrough of troubleshooting `no_valid_shard_copy`.
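
If no node holding a good copy of the data rejoins the cluster, restoring the index from a snapshot is the remaining option. The following is a minimal sketch, assuming a registered repository named `my_repository` that contains a snapshot named `my_snapshot` which includes the index; the existing red index is deleted first, which discards whatever data is left of it:

[source,console]
----
DELETE my-index-000001

POST _snapshot/my_repository/my_snapshot/_restore
{
  "indices": "my-index-000001"
}
----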

===== Unassigned replica shard

====== Allocation delayed

The following response contains an allocation explanation for a replica that's
unassigned due to <<delayed-allocation,delayed allocation>>.

[source,js]
----
{
  "index" : "my-index-000001",
  "shard" : 0,
  "primary" : false,
  "current_state" : "unassigned",
  "unassigned_info" : {
    "reason" : "NODE_LEFT",
    "at" : "2017-01-04T18:53:59.498Z",
    "details" : "node_left[G92ZwuuaRY-9n8_tc-IzEg]",
    "last_allocation_status" : "no_attempt"
  },
  "can_allocate" : "allocation_delayed",
  "allocate_explanation" : "The node containing this shard copy recently left the cluster. Elasticsearch is waiting for it to return. If the node does not return within [%s] then Elasticsearch will allocate this shard to another node. Please wait.",
  "configured_delay" : "1m", <1>
  "configured_delay_in_millis" : 60000,
  "remaining_delay" : "59.8s", <2>
  "remaining_delay_in_millis" : 59824,
  "node_allocation_decisions" : [
    {
      "node_id" : "pmnHu_ooQWCPEFobZGbpWw",
      "node_name" : "node_t2",
      "transport_address" : "127.0.0.1:9402",
      "roles" : ["data_content", "data_hot"],
      "node_decision" : "yes"
    },
    {
      "node_id" : "3sULLVJrRneSg0EfBB-2Ew",
      "node_name" : "node_t0",
      "transport_address" : "127.0.0.1:9400",
      "roles" : ["data_content", "data_hot"],
      "node_decision" : "no",
      "store" : { <3>
        "matching_size" : "4.2kb",
        "matching_size_in_bytes" : 4325
      },
      "deciders" : [
        {
          "decider" : "same_shard",
          "decision" : "NO",
          "explanation" : "a copy of this shard is already allocated to this node [[my-index-000001][0], node[3sULLVJrRneSg0EfBB-2Ew], [P], s[STARTED], a[id=eV9P8BN1QPqRc3B4PLx6cg]]"
        }
      ]
    }
  ]
}
----
// NOTCONSOLE
<1> The configured delay before allocating a replica shard whose previous copy was lost because the node holding it left the cluster.
<2> The remaining delay before allocating the replica shard.
<3> Information about the shard data found on a node.
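
The length of the delay comes from the `index.unassigned.node_left.delayed_timeout` index setting, which defaults to one minute. If your nodes routinely take longer than that to rejoin, you can raise it, for example (a sketch; pick a value that matches your environment):

[source,console]
----
PUT my-index-000001/_settings
{
  "settings": {
    "index.unassigned.node_left.delayed_timeout": "5m"
  }
}
----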

====== Allocation throttled

The following response contains an allocation explanation for a replica that's
queued to allocate but currently waiting on other queued shards.

[source,js]
----
{
  "index" : "my-index-000001",
  "shard" : 0,
  "primary" : false,
  "current_state" : "unassigned",
  "unassigned_info" : {
    "reason" : "NODE_LEFT",
    "at" : "2017-01-04T18:53:59.498Z",
    "details" : "node_left[G92ZwuuaRY-9n8_tc-IzEg]",
    "last_allocation_status" : "no_attempt"
  },
  "can_allocate": "throttled",
  "allocate_explanation": "Elasticsearch is currently busy with other activities. It expects to be able to allocate this shard when those activities finish. Please wait.",
  "node_allocation_decisions" : [
    {
      "node_id" : "3sULLVJrRneSg0EfBB-2Ew",
      "node_name" : "node_t0",
      "transport_address" : "127.0.0.1:9400",
      "roles" : ["data_content", "data_hot"],
      "node_decision" : "no",
      "deciders" : [
        {
          "decider": "throttling",
          "decision": "THROTTLE",
          "explanation": "reached the limit of incoming shard recoveries [2], cluster setting [cluster.routing.allocation.node_concurrent_incoming_recoveries=2] (can also be set via [cluster.routing.allocation.node_concurrent_recoveries])"
        }
      ]
    }
  ]
}
----
// NOTCONSOLE

This is a transient message that might appear when a large number of shards are allocating.
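
Usually no action is needed and the shard is allocated once the in-flight recoveries complete. If recoveries are persistently backed up, you can raise the concurrency limit named in the decider output, for example (a sketch; higher values put more recovery load on each node):

[source,console]
----
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.node_concurrent_recoveries": 4
  }
}
----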

===== Assigned shard

====== Cannot remain on current node

The following response contains an allocation explanation for an assigned shard.
The response indicates the shard is not allowed to remain on its current node
and must be reallocated.

[source,js]
----
{
  "index" : "my-index-000001",
  "shard" : 0,
  "primary" : true,
  "current_state" : "started",
  "current_node" : {
    "id" : "8lWJeJ7tSoui0bxrwuNhTA",
    "name" : "node_t1",
    "transport_address" : "127.0.0.1:9401",
    "roles" : ["data_content", "data_hot"]
  },
  "can_remain_on_current_node" : "no", <1>
  "can_remain_decisions" : [ <2>
    {
      "decider" : "filter",
      "decision" : "NO",
      "explanation" : "node does not match index setting [index.routing.allocation.include] filters [_name:\"nonexistent_node\"]"
    }
  ],
  "can_move_to_other_node" : "no", <3>
  "move_explanation" : "This shard may not remain on its current node, but Elasticsearch isn't allowed to move it to another node. Choose a node to which you expect this shard to be allocated, find this node in the node-by-node explanation, and address the reasons which prevent Elasticsearch from allocating this shard there.",
  "node_allocation_decisions" : [
    {
      "node_id" : "_P8olZS8Twax9u6ioN-GGA",
      "node_name" : "node_t0",
      "transport_address" : "127.0.0.1:9400",
      "roles" : ["data_content", "data_hot"],
      "node_decision" : "no",
      "weight_ranking" : 1,
      "deciders" : [
        {
          "decider" : "filter",
          "decision" : "NO",
          "explanation" : "node does not match index setting [index.routing.allocation.include] filters [_name:\"nonexistent_node\"]"
        }
      ]
    }
  ]
}
----
// NOTCONSOLE
<1> Whether the shard is allowed to remain on its current node.
<2> The deciders that factored into the decision of why the shard is not allowed to remain on its current node.
<3> Whether the shard is allowed to be allocated to another node.

====== Must remain on current node

The following response contains an allocation explanation for a shard that must
remain on its current node. Moving the shard to another node would not improve
cluster balance.

[source,js]
----
{
  "index" : "my-index-000001",
  "shard" : 0,
  "primary" : true,
  "current_state" : "started",
  "current_node" : {
    "id" : "wLzJm4N4RymDkBYxwWoJsg",
    "name" : "node_t0",
    "transport_address" : "127.0.0.1:9400",
    "roles" : ["data_content", "data_hot"],
    "weight_ranking" : 1
  },
  "can_remain_on_current_node" : "yes",
  "can_rebalance_cluster" : "yes", <1>
  "can_rebalance_to_other_node" : "no", <2>
  "rebalance_explanation" : "Elasticsearch cannot rebalance this shard to another node since there is no node to which allocation is permitted which would improve the cluster balance. If you expect this shard to be rebalanced to another node, find this node in the node-by-node explanation and address the reasons which prevent Elasticsearch from rebalancing this shard there.",
  "node_allocation_decisions" : [
    {
      "node_id" : "oE3EGFc8QN-Tdi5FFEprIA",
      "node_name" : "node_t1",
      "transport_address" : "127.0.0.1:9401",
      "roles" : ["data_content", "data_hot"],
      "node_decision" : "worse_balance", <3>
      "weight_ranking" : 1
    }
  ]
}
----
// NOTCONSOLE
<1> Whether rebalancing is allowed on the cluster.
<2> Whether the shard can be rebalanced to another node.
<3> The reason the shard cannot be rebalanced to the node, in this case indicating that it offers no better balance than the current node.

===== No arguments

If you call the API with no arguments, {es} retrieves an allocation explanation
for an arbitrary unassigned primary or replica shard, returning any unassigned primary shards first.

[source,console]
----
GET _cluster/allocation/explain
----
// TEST[catch:bad_request]

If the cluster contains no unassigned shards, the API returns a `400` error.