[[fix-common-cluster-issues]]
== Fix common cluster issues

This guide describes how to fix common errors and problems with {es} clusters.

[discrete]
=== Error: disk usage exceeded flood-stage watermark, index has read-only-allow-delete block

This error indicates a data node is critically low on disk space and has reached
the <<cluster-routing-flood-stage,flood-stage disk usage watermark>>. To prevent
a full disk, when a node reaches this watermark, {es} blocks writes to any index
with a shard on the node. If the block affects related system indices, {kib} and
other {stack} features may become unavailable.

{es} will automatically remove the write block when the affected node's disk
usage goes below the <<cluster-routing-watermark-high,high disk watermark>>. To
achieve this, {es} automatically moves some of the affected node's shards to
other nodes in the same data tier.

To verify that shards are moving off the affected node, use the <<cat-shards,cat
shards API>>.

[source,console]
----
GET _cat/shards?v=true
----

If shards remain on the node, use the <<cluster-allocation-explain,cluster
allocation explanation API>> to get an explanation for their allocation status.

[source,console]
----
GET _cluster/allocation/explain
{
  "index": "my-index",
  "shard": 0,
  "primary": false,
  "current_node": "my-node"
}
----
// TEST[s/^/PUT my-index\n/]
// TEST[s/"primary": false,/"primary": false/]
// TEST[s/"current_node": "my-node"//]

To immediately restore write operations, you can temporarily increase the disk
watermarks and remove the write block.

[source,console]
----
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.disk.watermark.low": "90%",
    "cluster.routing.allocation.disk.watermark.low.max_headroom": "100GB",
    "cluster.routing.allocation.disk.watermark.high": "95%",
    "cluster.routing.allocation.disk.watermark.high.max_headroom": "20GB",
    "cluster.routing.allocation.disk.watermark.flood_stage": "97%",
    "cluster.routing.allocation.disk.watermark.flood_stage.max_headroom": "5GB",
    "cluster.routing.allocation.disk.watermark.flood_stage.frozen": "97%",
    "cluster.routing.allocation.disk.watermark.flood_stage.frozen.max_headroom": "5GB"
  }
}

PUT */_settings?expand_wildcards=all
{
  "index.blocks.read_only_allow_delete": null
}
----
// TEST[s/^/PUT my-index\n/]

As a long-term solution, we recommend you add nodes to the affected data tiers
or upgrade existing nodes to increase disk space. To free up additional disk
space, you can delete unneeded indices using the <<indices-delete-index,delete
index API>>.

[source,console]
----
DELETE my-index
----
// TEST[s/^/PUT my-index\n/]

When a long-term solution is in place, reset or reconfigure the disk watermarks.

[source,console]
----
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.disk.watermark.low": null,
    "cluster.routing.allocation.disk.watermark.low.max_headroom": null,
    "cluster.routing.allocation.disk.watermark.high": null,
    "cluster.routing.allocation.disk.watermark.high.max_headroom": null,
    "cluster.routing.allocation.disk.watermark.flood_stage": null,
    "cluster.routing.allocation.disk.watermark.flood_stage.max_headroom": null,
    "cluster.routing.allocation.disk.watermark.flood_stage.frozen": null,
    "cluster.routing.allocation.disk.watermark.flood_stage.frozen.max_headroom": null
  }
}
----

[discrete]
[[circuit-breaker-errors]]
=== Circuit breaker errors

{es} uses <<circuit-breaker,circuit breakers>> to prevent nodes from running out
of JVM heap memory. If {es} estimates an operation would exceed a circuit
breaker, it stops the operation and returns an error.

By default, the <<parent-circuit-breaker,parent circuit breaker>> triggers at
95% JVM memory usage. To prevent errors, we recommend taking steps to reduce
memory pressure if usage consistently exceeds 85%.

[discrete]
[[diagnose-circuit-breaker-errors]]
==== Diagnose circuit breaker errors

**Error messages**

If a request triggers a circuit breaker, {es} returns an error with a `429` HTTP
status code.

[source,js]
----
{
  "error": {
    "type": "circuit_breaking_exception",
    "reason": "[parent] Data too large, data for [<http_request>] would be [123848638/118.1mb], which is larger than the limit of [123273216/117.5mb], real usage: [120182112/114.6mb], new bytes reserved: [3666526/3.4mb]",
    "bytes_wanted": 123848638,
    "bytes_limit": 123273216,
    "durability": "TRANSIENT"
  },
  "status": 429
}
----
// NOTCONSOLE

{es} also writes circuit breaker errors to <<logging,`elasticsearch.log`>>. This
is helpful when automated processes, such as allocation, trigger a circuit
breaker.

[source,txt]
----
Caused by: org.elasticsearch.common.breaker.CircuitBreakingException: [parent] Data too large, data for [<transport_request>] would be [num/numGB], which is larger than the limit of [num/numGB], usages [request=0/0b, fielddata=num/numKB, in_flight_requests=num/numGB, accounting=num/numGB]
----

**Check JVM memory usage**

If you've enabled Stack Monitoring, you can view JVM memory usage in {kib}. In
the main menu, click **Stack Monitoring**. On the Stack Monitoring **Overview**
page, click **Nodes**. The **JVM Heap** column lists the current memory usage
for each node.

You can also use the <<cat-nodes,cat nodes API>> to get the current
`heap.percent` for each node.

[source,console]
----
GET _cat/nodes?v=true&h=name,node*,heap*
----

See <<high-jvm-memory-pressure>> for more details.

To get the JVM memory usage for each circuit breaker, use the
<<cluster-nodes-stats,node stats API>>.

[source,console]
----
GET _nodes/stats/breaker
----

[discrete]
[[prevent-circuit-breaker-errors]]
==== Prevent circuit breaker errors

**Reduce JVM memory pressure**

High JVM memory pressure often causes circuit breaker errors. See
<<high-jvm-memory-pressure>>.

**Avoid using fielddata on `text` fields**

For high-cardinality `text` fields, fielddata can use a large amount of JVM
memory. To avoid this, {es} disables fielddata on `text` fields by default. If
you've enabled fielddata and triggered the <<fielddata-circuit-breaker,fielddata
circuit breaker>>, consider disabling it and using a `keyword` field instead.
See <<fielddata-mapping-param>>.
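
If you still need to aggregate or sort on the field, one option is to add a
`keyword` multi-field and use that instead. The following sketch adds a
`keyword` sub-field to a hypothetical `my-field` on `my-index`; the names are
illustrative.

[source,console]
----
PUT my-index/_mapping
{
  "properties": {
    "my-field": {
      "type": "text",
      "fields": {
        "keyword": { <1>
          "type": "keyword"
        }
      }
    }
  }
}
----
// TEST[s/^/PUT my-index\n/]

<1> Aggregate and sort on `my-field.keyword` rather than enabling fielddata on
the `text` field.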

**Clear the fielddata cache**

If you've triggered the fielddata circuit breaker and can't disable fielddata,
use the <<indices-clearcache,clear cache API>> to clear the fielddata cache.
This may disrupt any in-flight searches that use fielddata.

[source,console]
----
POST _cache/clear?fielddata=true
----
// TEST[s/^/PUT my-index\n/]

[discrete]
[[high-cpu-usage]]
=== High CPU usage

{es} uses <<modules-threadpool,thread pools>> to manage CPU resources for
concurrent operations. High CPU usage typically means one or more thread pools
are running low on available threads.

If a thread pool is depleted, {es} will <<rejected-requests,reject requests>>
related to the thread pool. For example, if the `search` thread pool is
depleted, {es} will reject search requests until more threads are available.

[discrete]
[[diagnose-high-cpu-usage]]
==== Diagnose high CPU usage

**Check CPU usage**

include::{es-repo-dir}/tab-widgets/cpu-usage-widget.asciidoc[]

**Check hot threads**

If a node has high CPU usage, use the <<cluster-nodes-hot-threads,nodes hot
threads API>> to check for resource-intensive threads running on the node.

[source,console]
----
GET _nodes/my-node,my-other-node/hot_threads
----
// TEST[s/\/my-node,my-other-node//]

This API returns a breakdown of any hot threads in plain text.

[discrete]
[[reduce-cpu-usage]]
==== Reduce CPU usage

The following tips outline the most common causes of high CPU usage and their
solutions.

**Scale your cluster**

Heavy indexing and search loads can deplete smaller thread pools. To better
handle heavy workloads, add more nodes to your cluster or upgrade your existing
nodes to increase capacity.

**Spread out bulk requests**

While more efficient than individual requests, large <<docs-bulk,bulk indexing>>
or <<search-multi-search,multi-search>> requests still require CPU resources. If
possible, submit smaller requests and allow more time between them.
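
For example, instead of indexing thousands of documents in one request, you
might submit the work in smaller batches like the following sketch. The index
name and documents are illustrative; tune the batch size to your workload.

[source,console]
----
POST _bulk
{ "index": { "_index": "my-index" } }
{ "message": "first document in a small batch" }
{ "index": { "_index": "my-index" } }
{ "message": "second document in a small batch" }
----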

**Cancel long-running searches**

Long-running searches can block threads in the `search` thread pool. To check
for these searches, use the <<tasks,task management API>>.

[source,console]
----
GET _tasks?actions=*search&detailed
----

The response's `description` contains the search request and its queries.
`running_time_in_nanos` shows how long the search has been running.

[source,console-result]
----
{
  "nodes" : {
    "oTUltX4IQMOUUVeiohTt8A" : {
      "name" : "my-node",
      "transport_address" : "127.0.0.1:9300",
      "host" : "127.0.0.1",
      "ip" : "127.0.0.1:9300",
      "tasks" : {
        "oTUltX4IQMOUUVeiohTt8A:464" : {
          "node" : "oTUltX4IQMOUUVeiohTt8A",
          "id" : 464,
          "type" : "transport",
          "action" : "indices:data/read/search",
          "description" : "indices[my-index], search_type[QUERY_THEN_FETCH], source[{\"query\":...}]",
          "start_time_in_millis" : 4081771730000,
          "running_time_in_nanos" : 13991383,
          "cancellable" : true
        }
      }
    }
  }
}
----
// TESTRESPONSE[skip: no way to get tasks]

To cancel a search and free up resources, use the API's `_cancel` endpoint.

[source,console]
----
POST _tasks/oTUltX4IQMOUUVeiohTt8A:464/_cancel
----

For additional tips on how to track and avoid resource-intensive searches, see
<<avoid-expensive-searches,Avoid expensive searches>>.

[discrete]
[[high-jvm-memory-pressure]]
=== High JVM memory pressure

High JVM memory usage can degrade cluster performance and trigger
<<circuit-breaker-errors,circuit breaker errors>>. To prevent this, we recommend
taking steps to reduce memory pressure if a node's JVM memory usage consistently
exceeds 85%.

[discrete]
[[diagnose-high-jvm-memory-pressure]]
==== Diagnose high JVM memory pressure

**Check JVM memory pressure**

include::{es-repo-dir}/tab-widgets/jvm-memory-pressure-widget.asciidoc[]

**Check garbage collection logs**

As memory usage increases, garbage collection becomes more frequent and takes
longer. You can track the frequency and length of garbage collection events in
<<logging,`elasticsearch.log`>>. For example, the following event states {es}
spent more than 50% (21 seconds) of the last 40 seconds performing garbage
collection.

[source,log]
----
[timestamp_short_interval_from_last][INFO ][o.e.m.j.JvmGcMonitorService] [node_id] [gc][number] overhead, spent [21s] collecting in the last [40s]
----

[discrete]
[[reduce-jvm-memory-pressure]]
==== Reduce JVM memory pressure

**Reduce your shard count**

Every shard uses memory. In most cases, a small set of large shards uses fewer
resources than many small shards. For tips on reducing your shard count, see
<<size-your-shards>>.

[[avoid-expensive-searches]]
**Avoid expensive searches**

Expensive searches can use large amounts of memory. To better track expensive
searches on your cluster, enable <<index-modules-slowlog,slow logs>>.

Expensive searches may have a large <<paginate-search-results,`size` argument>>,
use aggregations with a large number of buckets, or include
<<query-dsl-allow-expensive-queries,expensive queries>>. To prevent expensive
searches, consider the following setting changes:

* Lower the `size` limit using the
<<index-max-result-window,`index.max_result_window`>> index setting.

* Decrease the maximum number of allowed aggregation buckets using the
<<search-settings-max-buckets,`search.max_buckets`>> cluster setting.

* Disable expensive queries using the
<<query-dsl-allow-expensive-queries,`search.allow_expensive_queries`>> cluster
setting.

[source,console]
----
PUT _settings
{
  "index.max_result_window": 5000
}

PUT _cluster/settings
{
  "persistent": {
    "search.max_buckets": 20000,
    "search.allow_expensive_queries": false
  }
}
----
// TEST[s/^/PUT my-index\n/]

**Prevent mapping explosions**

Defining too many fields or nesting fields too deeply can lead to
<<mapping-limit-settings,mapping explosions>> that use large amounts of memory.
To prevent mapping explosions, use the <<mapping-settings-limit,mapping limit
settings>> to limit the number of field mappings.
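
For example, the following sketch lowers the total field limit for a
hypothetical `my-index` below its default of `1000`; the value `500` is
illustrative, so pick a limit that fits your data.

[source,console]
----
PUT my-index/_settings
{
  "index.mapping.total_fields.limit": 500
}
----
// TEST[s/^/PUT my-index\n/]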

**Spread out bulk requests**

While more efficient than individual requests, large <<docs-bulk,bulk indexing>>
or <<search-multi-search,multi-search>> requests can still create high JVM
memory pressure. If possible, submit smaller requests and allow more time
between them.

**Upgrade node memory**

Heavy indexing and search loads can cause high JVM memory pressure. To better
handle heavy workloads, upgrade your nodes to increase their memory capacity.

[discrete]
[[red-yellow-cluster-status]]
=== Red or yellow cluster status

A red or yellow cluster status indicates one or more shards are missing or
unallocated. These unassigned shards increase your risk of data loss and can
degrade cluster performance.

[discrete]
[[diagnose-cluster-status]]
==== Diagnose your cluster status

**Check your cluster status**

Use the <<cluster-health,cluster health API>>.

[source,console]
----
GET _cluster/health?filter_path=status,*_shards
----

A healthy cluster has a green `status` and zero `unassigned_shards`. A yellow
status means only replicas are unassigned. A red status means one or more
primary shards are unassigned.

**View unassigned shards**

To view unassigned shards, use the <<cat-shards,cat shards API>>.

[source,console]
----
GET _cat/shards?v=true&h=index,shard,prirep,state,node,unassigned.reason&s=state
----

Unassigned shards have a `state` of `UNASSIGNED`. The `prirep` value is `p` for
primary shards and `r` for replicas.

To understand why an unassigned shard is not being assigned and what action
you must take to allow {es} to assign it, use the
<<cluster-allocation-explain,cluster allocation explanation API>>.

[source,console]
----
GET _cluster/allocation/explain?filter_path=index,node_allocation_decisions.node_name,node_allocation_decisions.deciders.*
{
  "index": "my-index",
  "shard": 0,
  "primary": false,
  "current_node": "my-node"
}
----
// TEST[s/^/PUT my-index\n/]
// TEST[s/"primary": false,/"primary": false/]
// TEST[s/"current_node": "my-node"//]

[discrete]
[[fix-red-yellow-cluster-status]]
==== Fix a red or yellow cluster status

A shard can become unassigned for several reasons. The following tips outline
the most common causes and their solutions.

**Re-enable shard allocation**

You typically disable allocation during a <<restart-cluster,restart>> or other
cluster maintenance. If you forgot to re-enable allocation afterward, {es} will
be unable to assign shards. To re-enable allocation, reset the
`cluster.routing.allocation.enable` cluster setting.

[source,console]
----
PUT _cluster/settings
{
  "persistent" : {
    "cluster.routing.allocation.enable" : null
  }
}
----

**Recover lost nodes**

Shards often become unassigned when a data node leaves the cluster. This can
occur for several reasons, ranging from connectivity issues to hardware failure.
After you resolve the issue and recover the node, it will rejoin the cluster.
{es} will then automatically allocate any unassigned shards.

To avoid wasting resources on temporary issues, {es} <<delayed-allocation,delays
allocation>> by one minute by default. If you've recovered a node and don't want
to wait for the delay period, you can call the <<cluster-reroute,cluster reroute
API>> with no arguments to start the allocation process. The process runs
asynchronously in the background.

[source,console]
----
POST _cluster/reroute?metric=none
----

**Fix allocation settings**

Misconfigured allocation settings can result in an unassigned primary shard.
These settings include:

* <<shard-allocation-filtering,Shard allocation>> index settings
* <<cluster-shard-allocation-filtering,Allocation filtering>> cluster settings
* <<shard-allocation-awareness,Allocation awareness>> cluster settings

To review your allocation settings, use the <<indices-get-settings,get index
settings>> and <<cluster-get-settings,cluster get settings>> APIs.

[source,console]
----
GET my-index/_settings?flat_settings=true&include_defaults=true

GET _cluster/settings?flat_settings=true&include_defaults=true
----
// TEST[s/^/PUT my-index\n/]

You can change the settings using the <<indices-update-settings,update index
settings>> and <<cluster-update-settings,cluster update settings>> APIs.
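
For example, if a leftover allocation filter is keeping shards off your nodes,
you can clear it as in the following sketch;
`index.routing.allocation.require._name` is just one of several filtering
settings you might need to reset.

[source,console]
----
PUT my-index/_settings
{
  "index.routing.allocation.require._name": null
}
----
// TEST[s/^/PUT my-index\n/]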

**Allocate or reduce replicas**

To protect against hardware failure, {es} will not assign a replica to the same
node as its primary shard. If no other data nodes are available to host the
replica, it remains unassigned. To fix this, you can:

* Add a data node to the same tier to host the replica.

* Change the `index.number_of_replicas` index setting to reduce the number of
replicas for each primary shard. We recommend keeping at least one replica per
primary.

[source,console]
----
PUT _settings
{
  "index.number_of_replicas": 1
}
----
// TEST[s/^/PUT my-index\n/]

**Free up or increase disk space**

{es} uses a <<disk-based-shard-allocation,low disk watermark>> to ensure data
nodes have enough disk space for incoming shards. By default, {es} does not
allocate shards to nodes using more than 85% of disk space.

To check the current disk space of your nodes, use the <<cat-allocation,cat
allocation API>>.

[source,console]
----
GET _cat/allocation?v=true&h=node,shards,disk.*
----

If your nodes are running low on disk space, you have a few options:

* Upgrade your nodes to increase disk space.

* Delete unneeded indices to free up space. If you use {ilm-init}, you can
update your lifecycle policy to use <<ilm-searchable-snapshot,searchable
snapshots>> or add a delete phase. If you no longer need to search the data, you
can use a <<snapshot-restore,snapshot>> to store it off-cluster.

* If you no longer write to an index, use the <<indices-forcemerge,force merge
API>> or {ilm-init}'s <<ilm-forcemerge,force merge action>> to merge its
segments into larger ones.
+
[source,console]
----
POST my-index/_forcemerge
----
// TEST[s/^/PUT my-index\n/]

* If an index is read-only, use the <<indices-shrink-index,shrink index API>> or
{ilm-init}'s <<ilm-shrink,shrink action>> to reduce its primary shard count.
+
[source,console]
----
POST my-index/_shrink/my-shrunken-index
----
// TEST[s/^/PUT my-index\n{"settings":{"index.number_of_shards":2,"blocks.write":true}}\n/]

* If your node has a large disk capacity, you can increase the low disk
watermark or set it to an explicit byte value.
+
[source,console]
----
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.disk.watermark.low": "30gb"
  }
}
----
// TEST[s/"30gb"/null/]

**Reduce JVM memory pressure**

Shard allocation requires JVM heap memory. High JVM memory pressure can trigger
<<circuit-breaker,circuit breakers>> that stop allocation and leave shards
unassigned. See <<high-jvm-memory-pressure>>.

**Recover data for a lost primary shard**

If a node containing a primary shard is lost, {es} can typically replace it
using a replica on another node. If you can't recover the node and replicas
don't exist or are irrecoverable, you'll need to re-add the missing data from a
<<snapshot-restore,snapshot>> or the original data source.

WARNING: Only use this option if node recovery is no longer possible. This
process allocates an empty primary shard. If the node later rejoins the cluster,
{es} will overwrite its primary shard with data from this newer empty shard,
resulting in data loss.

Use the <<cluster-reroute,cluster reroute API>> to manually allocate the
unassigned primary shard to another data node in the same tier. Set
`accept_data_loss` to `true`.

[source,console]
----
POST _cluster/reroute?metric=none
{
  "commands": [
    {
      "allocate_empty_primary": {
        "index": "my-index",
        "shard": 0,
        "node": "my-node",
        "accept_data_loss": "true"
      }
    }
  ]
}
----
// TEST[s/^/PUT my-index\n/]
// TEST[catch:bad_request]

If you backed up the missing index data to a snapshot, use the
<<restore-snapshot-api,restore snapshot API>> to restore the individual index.
Alternatively, you can index the missing data from the original data source.
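
For example, a restore request might look like the following sketch, where
`my_repository` and `my_snapshot` are placeholders for your own repository and
snapshot names.

[source,console]
----
POST _snapshot/my_repository/my_snapshot/_restore
{
  "indices": "my-index"
}
----
// TEST[skip:requires a snapshot repository and snapshot]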

[discrete]
[[rejected-requests]]
=== Rejected requests

When {es} rejects a request, it stops the operation and returns an error with a
`429` response code. Rejected requests are commonly caused by:

* A <<high-cpu-usage,depleted thread pool>>. A depleted `search` or `write`
thread pool returns a `TOO_MANY_REQUESTS` error message.
* A <<circuit-breaker-errors,circuit breaker error>>.
* High <<index-modules-indexing-pressure,indexing pressure>> that exceeds the
<<memory-limits,`indexing_pressure.memory.limit`>>.
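
To gauge current indexing pressure, you can check the `indexing_pressure`
section of the node stats response, as in the following sketch that uses
`filter_path` to trim the output.

[source,console]
----
GET _nodes/stats?human&filter_path=nodes.*.indexing_pressure
----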

[discrete]
[[check-rejected-tasks]]
==== Check rejected tasks

To check the number of rejected tasks for each thread pool, use the
<<cat-thread-pool,cat thread pool API>>. A high ratio of `rejected` to
`completed` tasks, particularly in the `search` and `write` thread pools, means
{es} regularly rejects requests.

[source,console]
----
GET /_cat/thread_pool?v=true&h=id,name,active,rejected,completed
----

[discrete]
[[prevent-rejected-requests]]
==== Prevent rejected requests

**Fix high CPU and memory usage**

If {es} regularly rejects requests and other tasks, your cluster likely has high
CPU usage or high JVM memory pressure. For tips, see <<high-cpu-usage>> and
<<high-jvm-memory-pressure>>.

**Prevent circuit breaker errors**

If you regularly trigger circuit breaker errors, see <<circuit-breaker-errors>>
for tips on diagnosing and preventing them.

[discrete]
[[task-queue-backlog]]
=== Task queue backlog

A backlogged task queue can prevent tasks from completing and put the cluster
into an unhealthy state. Resource constraints, a large number of tasks being
triggered at once, and long-running tasks can all contribute to a backlogged
task queue.

[discrete]
[[diagnose-task-queue-backlog]]
==== Diagnose a task queue backlog

**Check the thread pool status**

A <<high-cpu-usage,depleted thread pool>> can result in
<<rejected-requests,rejected requests>>. You can use the <<cat-thread-pool,cat
thread pool API>> to see the number of active threads in each thread pool and
how many tasks are queued, how many have been rejected, and how many have
completed.

[source,console]
----
GET /_cat/thread_pool?v&s=t,n&h=type,name,node_name,active,queue,rejected,completed
----

**Inspect the hot threads on each node**

If a particular thread pool queue is backed up, you can periodically poll the
<<cluster-nodes-hot-threads,nodes hot threads API>> to determine if the thread
has sufficient resources to progress and gauge how quickly it is progressing.

[source,console]
----
GET /_nodes/hot_threads
----

**Look for long-running tasks**

Long-running tasks can also cause a backlog. You can use the <<tasks,task
management API>> to get information about the tasks that are running. Check the
`running_time_in_nanos` to identify tasks that are taking an excessive amount of
time to complete.

[source,console]
----
GET /_tasks?filter_path=nodes.*.tasks
----

[discrete]
[[resolve-task-queue-backlog]]
==== Resolve a task queue backlog

**Increase available resources**

If tasks are progressing slowly and the queue is backing up, you might need to
take steps to <<reduce-cpu-usage,reduce CPU usage>>. In some cases, increasing
the thread pool size might help. For example, the `force_merge` thread pool
defaults to a single thread. Increasing the size to 2 might help reduce a
backlog of force merge requests.
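
Thread pool sizes are static settings, so a change like the following sketch
goes in `elasticsearch.yml` on each node and takes effect after a restart. The
value `2` is illustrative, not a general recommendation.

[source,yaml]
----
thread_pool.force_merge.size: 2
----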

**Cancel stuck tasks**

If you find the active task's hot thread isn't progressing and there's a
backlog, consider canceling the task.
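
If the task's `cancellable` flag is `true`, you can cancel it with the task
management API's `_cancel` endpoint, as shown earlier. The task ID below is a
placeholder.

[source,console]
----
POST _tasks/oTUltX4IQMOUUVeiohTt8A:464/_cancel
----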