[[circuit-breaker-errors]]
=== Circuit breaker errors

{es} uses <<circuit-breaker,circuit breakers>> to prevent nodes from running out
of JVM heap memory. If {es} estimates an operation would exceed a circuit
breaker, it stops the operation and returns an error.

By default, the <<parent-circuit-breaker,parent circuit breaker>> triggers at
95% JVM memory usage. To prevent errors, we recommend taking steps to reduce
memory pressure if usage consistently exceeds 85%.
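
To check the limits currently configured on your cluster, you can read the
breaker settings from the cluster get settings API. This is a minimal sketch;
the `filter_path` below matches the `indices.breaker.*` settings whether they
are defaults or explicit overrides:

[source,console]
----
GET _cluster/settings?include_defaults=true&filter_path=*.indices.breaker
----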

[discrete]
[[diagnose-circuit-breaker-errors]]
==== Diagnose circuit breaker errors

**Error messages**

If a request triggers a circuit breaker, {es} returns an error with a `429` HTTP
status code.

[source,js]
----
{
  "error": {
    "type": "circuit_breaking_exception",
    "reason": "[parent] Data too large, data for [<http_request>] would be [123848638/118.1mb], which is larger than the limit of [123273216/117.5mb], real usage: [120182112/114.6mb], new bytes reserved: [3666526/3.4mb]",
    "bytes_wanted": 123848638,
    "bytes_limit": 123273216,
    "durability": "TRANSIENT"
  },
  "status": 429
}
----
// NOTCONSOLE

{es} also writes circuit breaker errors to <<logging,`elasticsearch.log`>>. This
is helpful when automated processes, such as allocation, trigger a circuit
breaker.

[source,txt]
----
Caused by: org.elasticsearch.common.breaker.CircuitBreakingException: [parent] Data too large, data for [<transport_request>] would be [num/numGB], which is larger than the limit of [num/numGB], usages [request=0/0b, fielddata=num/numKB, in_flight_requests=num/numGB, accounting=num/numGB]
----

**Check JVM memory usage**

If you've enabled Stack Monitoring, you can view JVM memory usage in {kib}. In
the main menu, click **Stack Monitoring**. On the Stack Monitoring **Overview**
page, click **Nodes**. The **JVM Heap** column lists the current memory usage
for each node.

You can also use the <<cat-nodes,cat nodes API>> to get the current
`heap.percent` for each node.

[source,console]
----
GET _cat/nodes?v=true&h=name,node*,heap*
----

To get the JVM memory usage for each circuit breaker, use the
<<cluster-nodes-stats,node stats API>>.

[source,console]
----
GET _nodes/stats/breaker
----
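
The response lists every breaker on every node. If you only care about one
breaker, such as the parent breaker, you can narrow the response with
`filter_path`:

[source,console]
----
GET _nodes/stats/breaker?filter_path=nodes.*.breakers.parent
----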

[discrete]
[[prevent-circuit-breaker-errors]]
==== Prevent circuit breaker errors

**Reduce JVM memory pressure**

High JVM memory pressure often causes circuit breaker errors. See
<<high-jvm-memory-pressure>>.

**Avoid using fielddata on `text` fields**

For high-cardinality `text` fields, fielddata can use a large amount of JVM
memory. To avoid this, {es} disables fielddata on `text` fields by default. If
you've enabled fielddata and triggered the <<fielddata-circuit-breaker,fielddata
circuit breaker>>, consider disabling it and using a `keyword` field instead.
See <<fielddata>>.
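
For example, a common pattern is to keep the `text` field for full-text search
and add a `keyword` multi-field for aggregations and sorting. This is a minimal
sketch; the index and field names are placeholders:

[source,console]
----
PUT my-index-000001
{
  "mappings": {
    "properties": {
      "my_field": {
        "type": "text",
        "fields": {
          "keyword": {
            "type": "keyword"
          }
        }
      }
    }
  }
}
----

Aggregations and sorts can then target `my_field.keyword`, which uses doc values
instead of loading fielddata into the JVM heap.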

**Clear the fielddata cache**

If you've triggered the fielddata circuit breaker and can't disable fielddata,
use the <<indices-clearcache,clear cache API>> to clear the fielddata cache.
This may disrupt any in-flight searches that use fielddata.

[source,console]
----
POST _cache/clear?fielddata=true
----
// TEST[s/^/PUT my-index\n/]
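
The request above clears the fielddata cache for all indices. If only a few
indices hold large fielddata entries, you can scope the request to them; the
index name below is a placeholder:

[source,console]
----
POST my-index-000001/_cache/clear?fielddata=true
----
// TEST[s/^/PUT my-index-000001\n/]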