
[[fix-common-cluster-issues]]
== Fix common cluster issues

This guide describes how to fix common problems with {es} clusters.

[discrete]
[[circuit-breaker-errors]]
=== Circuit breaker errors

{es} uses <<circuit-breaker,circuit breakers>> to prevent nodes from running out
of JVM heap memory. If {es} estimates an operation would exceed a circuit
breaker, it stops the operation and returns an error.

By default, the <<parent-circuit-breaker,parent circuit breaker>> triggers at
95% JVM memory usage. To prevent errors, we recommend taking steps to reduce
memory pressure if usage consistently exceeds 85%.
[discrete]
[[diagnose-circuit-breaker-errors]]
==== Diagnose circuit breaker errors

**Error messages**

If a request triggers a circuit breaker, {es} returns an error with a `429` HTTP
status code.

[source,js]
----
{
  "error": {
    "type": "circuit_breaking_exception",
    "reason": "[parent] Data too large, data for [<http_request>] would be [123848638/118.1mb], which is larger than the limit of [123273216/117.5mb], real usage: [120182112/114.6mb], new bytes reserved: [3666526/3.4mb]",
    "bytes_wanted": 123848638,
    "bytes_limit": 123273216,
    "durability": "TRANSIENT"
  },
  "status": 429
}
----
// NOTCONSOLE
{es} also writes circuit breaker errors to <<logging,`elasticsearch.log`>>. This
is helpful when automated processes, such as allocation, trigger a circuit
breaker.

[source,txt]
----
Caused by: org.elasticsearch.common.breaker.CircuitBreakingException: [parent] Data too large, data for [<transport_request>] would be [num/numGB], which is larger than the limit of [num/numGB], usages [request=0/0b, fielddata=num/numKB, in_flight_requests=num/numGB, accounting=num/numGB]
----
**Check JVM memory usage**

If you've enabled Stack Monitoring, you can view JVM memory usage in {kib}. In
the main menu, click **Stack Monitoring**. On the Stack Monitoring **Overview**
page, click **Nodes**. The **JVM Heap** column lists the current memory usage
for each node.

You can also use the <<cat-nodes,cat nodes API>> to get the current
`heap.percent` for each node.

[source,console]
----
GET _cat/nodes?v=true&h=name,node*,heap*
----

To get the JVM memory usage for each circuit breaker, use the
<<cluster-nodes-stats,node stats API>>.

[source,console]
----
GET _nodes/stats/breaker
----
[discrete]
[[prevent-circuit-breaker-errors]]
==== Prevent circuit breaker errors

**Reduce JVM memory pressure**

High JVM memory pressure often causes circuit breaker errors. See
<<high-jvm-memory-pressure>>.

**Avoid using fielddata on `text` fields**

For high-cardinality `text` fields, fielddata can use a large amount of JVM
memory. To avoid this, {es} disables fielddata on `text` fields by default. If
you've enabled fielddata and triggered the <<fielddata-circuit-breaker,fielddata
circuit breaker>>, consider disabling it and using a `keyword` field instead.
See <<fielddata>>.
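A common replacement pattern is a multi-field mapping that keeps the `text`
field for full-text search and adds a `keyword` sub-field for aggregations and
sorting. A minimal sketch, with illustrative index and field names:

[source,console]
----
PUT my-index
{
  "mappings": {
    "properties": {
      "my-field": {
        "type": "text",
        "fields": {
          "keyword": {
            "type": "keyword"
          }
        }
      }
    }
  }
}
----

You can then aggregate on `my-field.keyword` without enabling fielddata.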
**Clear the fielddata cache**

If you've triggered the fielddata circuit breaker and can't disable fielddata,
use the <<indices-clearcache,clear cache API>> to clear the fielddata cache.
This may disrupt any in-flight searches that use fielddata.

[source,console]
----
POST _all/_cache/clear?fielddata=true
----
// TEST[s/^/PUT my-index\n/]
[discrete]
[[high-jvm-memory-pressure]]
=== High JVM memory pressure

High JVM memory usage can degrade cluster performance and trigger
<<circuit-breaker-errors,circuit breaker errors>>. To prevent this, we recommend
taking steps to reduce memory pressure if a node's JVM memory usage consistently
exceeds 85%.

[discrete]
[[diagnose-high-jvm-memory-pressure]]
==== Diagnose high JVM memory pressure

**Check JVM memory pressure**

include::{es-repo-dir}/tab-widgets/code.asciidoc[]
include::{es-repo-dir}/tab-widgets/jvm-memory-pressure-widget.asciidoc[]
**Check garbage collection logs**

As memory usage increases, garbage collection becomes more frequent and takes
longer. You can track the frequency and length of garbage collection events in
<<logging,`elasticsearch.log`>>. For example, the following event states {es}
spent more than 50% (21 seconds) of the last 40 seconds performing garbage
collection.

[source,log]
----
[timestamp_short_interval_from_last][INFO ][o.e.m.j.JvmGcMonitorService] [node_id] [gc][number] overhead, spent [21s] collecting in the last [40s]
----
[discrete]
[[reduce-jvm-memory-pressure]]
==== Reduce JVM memory pressure

**Reduce your shard count**

Every shard uses memory. In most cases, a small set of large shards uses fewer
resources than many small shards. For tips on reducing your shard count, see
<<size-your-shards>>.

**Avoid expensive searches**

Expensive searches can use large amounts of memory. To better track expensive
searches on your cluster, enable <<index-modules-slowlog,slow logs>>.
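For example, to log a warning for any query that takes longer than ten seconds,
you could set a slow log threshold on an index. A sketch with an illustrative
index name and threshold:

[source,console]
----
PUT my-index/_settings
{
  "index.search.slowlog.threshold.query.warn": "10s"
}
----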
Expensive searches may have a large <<paginate-search-results,`size` argument>>,
use aggregations with a large number of buckets, or include
<<query-dsl-allow-expensive-queries,expensive queries>>. To prevent expensive
searches, consider the following setting changes:

* Lower the `size` limit using the
<<index-max-result-window,`index.max_result_window`>> index setting.

* Decrease the maximum number of allowed aggregation buckets using the
<<search-settings-max-buckets,`search.max_buckets`>> cluster setting.

* Disable expensive queries using the
<<query-dsl-allow-expensive-queries,`search.allow_expensive_queries`>> cluster
setting.

[source,console]
----
PUT _all/_settings
{
  "index.max_result_window": 5000
}

PUT _cluster/settings
{
  "persistent": {
    "search.max_buckets": 20000,
    "search.allow_expensive_queries": false
  }
}
----
// TEST[s/^/PUT my-index\n/]
**Prevent mapping explosions**

Defining too many fields or nesting fields too deeply can lead to
<<mapping-limit-settings,mapping explosions>> that use large amounts of memory.
To prevent mapping explosions, use the <<mapping-settings-limit,mapping limit
settings>> to limit the number of field mappings.
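For example, to cap an index at 500 total field mappings, you could lower
`index.mapping.total_fields.limit` from its default of 1000. The index name and
value below are illustrative:

[source,console]
----
PUT my-index/_settings
{
  "index.mapping.total_fields.limit": 500
}
----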
**Spread out bulk requests**

While more efficient than individual requests, large <<docs-bulk,bulk indexing>>
or <<search-multi-search,multi-search>> requests can still create high JVM
memory pressure. If possible, submit smaller requests and allow more time
between them.
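For example, rather than one very large request, you might send several smaller
bulk requests like the following, pausing between them. The index name and
documents are illustrative:

[source,console]
----
POST my-index/_bulk
{ "index": {} }
{ "message": "first document in a small batch" }
{ "index": {} }
{ "message": "second document in a small batch" }
----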
**Upgrade node memory**

Heavy indexing and search loads can cause high JVM memory pressure. To better
handle heavy workloads, upgrade your nodes to increase their memory capacity.
[discrete]
[[red-yellow-cluster-status]]
=== Red or yellow cluster status

A red or yellow cluster status indicates one or more shards are missing or
unallocated. These unassigned shards increase your risk of data loss and can
degrade cluster performance.

[discrete]
[[diagnose-cluster-status]]
==== Diagnose your cluster status

**Check your cluster status**

Use the <<cluster-health,cluster health API>>.

[source,console]
----
GET _cluster/health?filter_path=status,*_shards
----

A healthy cluster has a green `status` and zero `unassigned_shards`. A yellow
status means only replicas are unassigned. A red status means one or more
primary shards are unassigned.
**View unassigned shards**

To view unassigned shards, use the <<cat-shards,cat shards API>>.

[source,console]
----
GET _cat/shards?v=true&h=index,shard,prirep,state,node,unassigned.reason&s=state
----

Unassigned shards have a `state` of `UNASSIGNED`. The `prirep` value is `p` for
primary shards and `r` for replicas. The `unassigned.reason` describes why the
shard remains unassigned.

To get a more in-depth explanation of an unassigned shard's allocation status,
use the <<cluster-allocation-explain,cluster allocation explain API>>. You can
often use details in the response to resolve the issue.

[source,console]
----
GET _cluster/allocation/explain?filter_path=index,node_allocation_decisions.node_name,node_allocation_decisions.deciders.*
{
  "index": "my-index",
  "shard": 0,
  "primary": false,
  "current_node": "my-node"
}
----
// TEST[s/^/PUT my-index\n/]
// TEST[s/"primary": false,/"primary": false/]
// TEST[s/"current_node": "my-node"//]
[discrete]
[[fix-red-yellow-cluster-status]]
==== Fix a red or yellow cluster status

A shard can become unassigned for several reasons. The following tips outline
the most common causes and their solutions.

**Re-enable shard allocation**

You typically disable allocation during a <<restart-cluster,restart>> or other
cluster maintenance. If you forgot to re-enable allocation afterward, {es} will
be unable to assign shards. To re-enable allocation, reset the
`cluster.routing.allocation.enable` cluster setting.

[source,console]
----
PUT _cluster/settings
{
  "persistent" : {
    "cluster.routing.allocation.enable" : null
  }
}
----
**Recover lost nodes**

Shards often become unassigned when a data node leaves the cluster. This can
occur for several reasons, ranging from connectivity issues to hardware failure.
After you resolve the issue and recover the node, it will rejoin the cluster.
{es} will then automatically allocate any unassigned shards.

To avoid wasting resources on temporary issues, {es} <<delayed-allocation,delays
allocation>> by one minute by default. If you've recovered a node and don't want
to wait for the delay period, you can call the <<cluster-reroute,cluster reroute
API>> with no arguments to start the allocation process. The process runs
asynchronously in the background.

[source,console]
----
POST _cluster/reroute
----
**Fix allocation settings**

Misconfigured allocation settings can result in an unassigned primary shard.
These settings include:

* <<shard-allocation-filtering,Shard allocation>> index settings
* <<cluster-shard-allocation-filtering,Allocation filtering>> cluster settings
* <<shard-allocation-awareness,Allocation awareness>> cluster settings

To review your allocation settings, use the <<indices-get-settings,get index
settings>> and <<cluster-get-settings,get cluster settings>> APIs.

[source,console]
----
GET my-index/_settings?flat_settings=true&include_defaults=true

GET _cluster/settings?flat_settings=true&include_defaults=true
----
// TEST[s/^/PUT my-index\n/]
You can change the settings using the <<indices-update-settings,update index
settings>> and <<cluster-update-settings,update cluster settings>> APIs.
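For example, if a leftover allocation filter is blocking assignment, you could
reset it to its default with a `null` value. The index name and filter setting
below are illustrative:

[source,console]
----
PUT my-index/_settings
{
  "index.routing.allocation.require._name": null
}
----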
**Allocate or reduce replicas**

To protect against hardware failure, {es} will not assign a replica to the same
node as its primary shard. If no other data nodes are available to host the
replica, it remains unassigned. To fix this, you can:

* Add a data node to the same tier to host the replica.

* Change the `index.number_of_replicas` index setting to reduce the number of
replicas for each primary shard. We recommend keeping at least one replica per
primary.

[source,console]
----
PUT _all/_settings
{
  "index.number_of_replicas": 1
}
----
// TEST[s/^/PUT my-index\n/]
**Free up or increase disk space**

{es} uses a <<disk-based-shard-allocation,low disk watermark>> to ensure data
nodes have enough disk space for incoming shards. By default, {es} does not
allocate shards to nodes using more than 85% of disk space.

To check the current disk space of your nodes, use the <<cat-allocation,cat
allocation API>>.

[source,console]
----
GET _cat/allocation?v=true&h=node,shards,disk.*
----

If your nodes are running low on disk space, you have a few options:

* Upgrade your nodes to increase disk space.

* Delete unneeded indices to free up space. If you use {ilm-init}, you can
update your lifecycle policy to use <<ilm-searchable-snapshot,searchable
snapshots>> or add a delete phase. If you no longer need to search the data, you
can use a <<snapshot-restore,snapshot>> to store it off-cluster.

* If you no longer write to an index, use the <<indices-forcemerge,force merge
API>> or {ilm-init}'s <<ilm-forcemerge,force merge action>> to merge its
segments into larger ones.
+
[source,console]
----
POST my-index/_forcemerge
----
// TEST[s/^/PUT my-index\n/]

* If an index is read-only, use the <<indices-shrink-index,shrink index API>> or
{ilm-init}'s <<ilm-shrink,shrink action>> to reduce its primary shard count.
+
[source,console]
----
POST my-index/_shrink/my-shrunken-index
----
// TEST[s/^/PUT my-index\n{"settings":{"index.number_of_shards":2,"blocks.write":true}}\n/]

* If your node has a large disk capacity, you can increase the low disk
watermark or set it to an explicit byte value.
+
[source,console]
----
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.disk.watermark.low": "30gb"
  }
}
----
// TEST[s/"30gb"/null/]
**Reduce JVM memory pressure**

Shard allocation requires JVM heap memory. High JVM memory pressure can trigger
<<circuit-breaker,circuit breakers>> that stop allocation and leave shards
unassigned. See <<high-jvm-memory-pressure>>.
**Recover data for a lost primary shard**

If a node containing a primary shard is lost, {es} can typically replace it
using a replica on another node. If you can't recover the node and replicas
don't exist or are irrecoverable, you'll need to re-add the missing data from a
<<snapshot-restore,snapshot>> or the original data source.

WARNING: Only use this option if node recovery is no longer possible. This
process allocates an empty primary shard. If the node later rejoins the cluster,
{es} will overwrite its primary shard with data from this newer empty shard,
resulting in data loss.

Use the <<cluster-reroute,cluster reroute API>> to manually allocate the
unassigned primary shard to another data node in the same tier. Set
`accept_data_loss` to `true`.

[source,console]
----
POST _cluster/reroute
{
  "commands": [
    {
      "allocate_empty_primary": {
        "index": "my-index",
        "shard": 0,
        "node": "my-node",
        "accept_data_loss": "true"
      }
    }
  ]
}
----
// TEST[s/^/PUT my-index\n/]
// TEST[catch:bad_request]

If you backed up the missing index data to a snapshot, use the
<<restore-snapshot-api,restore snapshot API>> to restore the individual index.
Alternatively, you can index the missing data from the original data source.
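A restore sketch, assuming you've already registered a snapshot repository and
taken a snapshot that contains the index. The repository, snapshot, and index
names are illustrative, and any existing index with the same name must be
deleted or closed before you restore over it:

[source,console]
----
POST _snapshot/my-repository/my-snapshot/_restore
{
  "indices": "my-index"
}
----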