[[cluster-fault-detection]]
=== Cluster fault detection

The elected master periodically checks each of the nodes in the cluster to
ensure that they are still connected and healthy. Each node in the cluster also
periodically checks the health of the elected master. These checks are known
respectively as _follower checks_ and _leader checks_.

Elasticsearch allows these checks to occasionally fail or time out without
taking any action. It considers a node to be faulty only after a number of
consecutive checks have failed. You can control fault detection behavior with
<<modules-discovery-settings,`cluster.fault_detection.*` settings>>.
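
For reference, here is an `elasticsearch.yml` sketch listing these settings
with their default values; there is normally no need to change them, and the
exact defaults may vary between versions:

[source,yaml]
----
# Follower checks: the elected master checks every other node
cluster.fault_detection.follower_check.interval: 1s
cluster.fault_detection.follower_check.timeout: 10s
cluster.fault_detection.follower_check.retry_count: 3

# Leader checks: every other node checks the elected master
cluster.fault_detection.leader_check.interval: 1s
cluster.fault_detection.leader_check.timeout: 10s
cluster.fault_detection.leader_check.retry_count: 3
----

With these defaults a completely unresponsive node is removed after roughly 30
seconds: three consecutive checks, each timing out after 10 seconds.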

If the elected master detects that a node has disconnected, however, this
situation is treated as an immediate failure. The master bypasses the timeout
and retry setting values and attempts to remove the node from the cluster.
Similarly, if a node detects that the elected master has disconnected, this
situation is treated as an immediate failure. The node bypasses the timeout and
retry settings and restarts its discovery phase to try and find or elect a new
master.

[[cluster-fault-detection-filesystem-health]]
Additionally, each node periodically verifies that its data path is healthy by
writing a small file to disk and then deleting it again. If a node discovers
its data path is unhealthy then it is removed from the cluster until the data
path recovers. You can control this behavior with the
<<modules-discovery-settings,`monitor.fs.health` settings>>.
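
For example, a minimal `elasticsearch.yml` sketch of these settings with what
are believed to be their default values:

[source,yaml]
----
monitor.fs.health.enabled: true                    # run the periodic filesystem health checks
monitor.fs.health.refresh_interval: 2m             # how often each data path is checked
monitor.fs.health.slow_path_logging_threshold: 5s  # log a warning if a check takes longer than this
----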

[[cluster-fault-detection-cluster-state-publishing]]
The elected master node will also remove nodes from the cluster if nodes are
unable to apply an updated cluster state within a reasonable time. The timeout
defaults to 2 minutes starting from the beginning of the cluster state update.
Refer to <<cluster-state-publishing>> for a more detailed description.
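
The 2 minute figure is the sum of two settings, shown here as an illustrative
`elasticsearch.yml` sketch with their default values:

[source,yaml]
----
cluster.publish.timeout: 30s       # time allowed for the publication itself
cluster.follower_lag.timeout: 90s  # additional time allowed for a node to apply the committed state
----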

[[cluster-fault-detection-troubleshooting]]
==== Troubleshooting an unstable cluster

Normally, a node will only leave a cluster if deliberately shut down. If a node
leaves the cluster unexpectedly, it's important to address the cause. A cluster
in which nodes leave unexpectedly is unstable and can create several issues.
For instance:

* The cluster health may be yellow or red.
* Some shards will be initializing and other shards may be failing.
* Search, indexing, and monitoring operations may fail and report exceptions in
logs.
* The `.security` index may be unavailable, blocking access to the cluster.
* The master may appear busy due to frequent cluster state updates.

To troubleshoot a cluster in this state, first ensure the cluster has a
<<discovery-troubleshooting,stable master>>. Next, focus on the nodes
unexpectedly leaving the cluster ahead of all other issues. It will not be
possible to solve other issues until the cluster has a stable master node and
stable node membership.

Diagnostics and statistics are usually not useful in an unstable cluster. These
tools only offer a view of the state of the cluster at a single point in time.
Instead, look at the cluster logs to see the pattern of behavior over time.
Focus particularly on logs from the elected master. When a node leaves the
cluster, logs for the elected master include a message like this (with line
breaks added to make it easier to read):

[source,text]
----
[2022-03-21T11:02:35,513][INFO ][o.e.c.c.NodeLeftExecutor] [instance-0000000000]
node-left: [{instance-0000000004}{bfcMDTiDRkietFb9v_di7w}{aNlyORLASam1ammv2DzYXA}{172.27.47.21}{172.27.47.21:19054}{m}]
with reason [disconnected]
----

This message says that the `NodeLeftExecutor` on the elected master
(`instance-0000000000`) processed a `node-left` task, identifying the node that
was removed and the reason for its removal. When the node joins the cluster
again, logs for the elected master will include a message like this (with line
breaks added to make it easier to read):

[source,text]
----
[2022-03-21T11:02:59,892][INFO ][o.e.c.c.NodeJoinExecutor] [instance-0000000000]
node-join: [{instance-0000000004}{bfcMDTiDRkietFb9v_di7w}{UNw_RuazQCSBskWZV8ID_w}{172.27.47.21}{172.27.47.21:19054}{m}]
with reason [joining after restart, removed [24s] ago with reason [disconnected]]
----

This message says that the `NodeJoinExecutor` on the elected master
(`instance-0000000000`) processed a `node-join` task, identifying the node that
was added to the cluster and the reason for the task.

Other nodes may log similar messages, but report fewer details:

[source,text]
----
[2020-01-29T11:02:36,985][INFO ][o.e.c.s.ClusterApplierService]
[instance-0000000001] removed {
{instance-0000000004}{bfcMDTiDRkietFb9v_di7w}{aNlyORLASam1ammv2DzYXA}{172.27.47.21}{172.27.47.21:19054}{m}
{tiebreaker-0000000003}{UNw_RuazQCSBskWZV8ID_w}{bltyVOQ-RNu20OQfTHSLtA}{172.27.161.154}{172.27.161.154:19251}{mv}
}, term: 14, version: 1653415, reason: Publication{term=14, version=1653415}
----

These messages are not especially useful for troubleshooting, so focus on the
ones from the `NodeLeftExecutor` and `NodeJoinExecutor`, which are only emitted
on the elected master and which contain more details. If you don't see the
messages from the `NodeLeftExecutor` and `NodeJoinExecutor`, check that:

* You're looking at the logs for the elected master node.
* The logs cover the correct time period.
* Logging is enabled at `INFO` level.
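
A quick way to locate these messages is to search the elected master's log
directly. A sketch, assuming a default log location and a cluster named
`my-cluster`:

[source,sh]
----
# adjust the path to match your installation's log directory and cluster name
grep -E "NodeLeftExecutor|NodeJoinExecutor" /var/log/elasticsearch/my-cluster.log
----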

Nodes will also log a message containing `master node changed` whenever they
start or stop following the elected master. You can use these messages to
determine each node's view of the state of the master over time.

If a node restarts, it will leave the cluster and then join the cluster again.
When it rejoins, the `NodeJoinExecutor` will log that it processed a
`node-join` task indicating that the node is `joining after restart`. If a node
is unexpectedly restarting, look at the node's logs to see why it is shutting
down.

If the node did not restart then you should look at the reason for its
departure more closely. Each reason has different troubleshooting steps,
described below. There are three possible reasons:

* `disconnected`: The connection from the master node to the removed node was
closed.

* `lagging`: The master published a cluster state update, but the removed node
did not apply it within the permitted timeout. By default, this timeout is 2
minutes. Refer to <<modules-discovery-settings>> for information about the
settings which control this mechanism.

* `followers check retry count exceeded`: The master sent a number of
consecutive health checks to the removed node. These checks were rejected or
timed out. By default, each health check times out after 10 seconds and {es}
removes the node after three consecutively failed health checks. Refer to
<<modules-discovery-settings>> for information about the settings which
control this mechanism.

===== Diagnosing `disconnected` nodes

Nodes typically leave the cluster with reason `disconnected` when they shut
down, but if they rejoin the cluster without restarting then there is some
other problem.

{es} is designed to run on a fairly reliable network. It opens a number of TCP
connections between nodes and expects these connections to remain open forever.
If a connection is closed then {es} will try and reconnect, so the occasional
blip should have limited impact on the cluster even if the affected node
briefly leaves the cluster. In contrast, repeatedly dropped connections will
severely affect the cluster's operation.

The connections from the elected master node to every other node in the cluster
are particularly important. The elected master never spontaneously closes its
outbound connections to other nodes. Similarly, once a connection is fully
established, a node never spontaneously closes its inbound connections unless
the node is shutting down.

If you see a node unexpectedly leave the cluster with the `disconnected`
reason, something other than {es} likely caused the connection to close. A
common cause is a misconfigured firewall with an improper timeout or another
policy that's <<long-lived-connections,incompatible with {es}>>. It could also
be caused by general connectivity issues, such as packet loss due to faulty
hardware or network congestion. If you're an advanced user, you can get more
detailed information about network exceptions by configuring the following
loggers:

[source,yaml]
----
logger.org.elasticsearch.transport.TcpTransport: DEBUG
logger.org.elasticsearch.xpack.core.security.transport.netty4.SecurityNetty4Transport: DEBUG
----
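
Logger levels are dynamic cluster settings, so these loggers can usually also
be enabled at runtime without a restart. A sketch using the cluster update
settings API (set the values back to `null` afterwards to restore the
defaults):

[source,console]
----
PUT _cluster/settings
{
  "persistent": {
    "logger.org.elasticsearch.transport.TcpTransport": "DEBUG",
    "logger.org.elasticsearch.xpack.core.security.transport.netty4.SecurityNetty4Transport": "DEBUG"
  }
}
----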

In extreme cases, you may need to take packet captures using `tcpdump` to
determine whether messages between nodes are being dropped or rejected by some
other device on the network.
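
For example, a `tcpdump` sketch that captures transport traffic between this
node and a suspect node; the interface name, node IP address, and port are
assumptions to substitute with your own values:

[source,sh]
----
# capture traffic to and from the suspect node on the default transport port 9300
tcpdump -i eth0 -w node-disconnects.pcap host 172.27.47.21 and port 9300
----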

===== Diagnosing `lagging` nodes

{es} needs every node to process cluster state updates reasonably quickly. If a
node takes too long to process a cluster state update, it can be harmful to the
cluster. The master will remove these nodes with the `lagging` reason. Refer to
<<modules-discovery-settings>> for information about the settings which control
this mechanism.

Lagging is typically caused by performance issues on the removed node. However,
a node may also lag due to severe network delays. To rule out network delays,
ensure that `net.ipv4.tcp_retries2` is <<system-config-tcpretries,configured
properly>>. Log messages that contain `warn threshold` may provide more
information about the root cause.
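
On Linux you can inspect and adjust this value with `sysctl`. A sketch, where
the value `5` follows the linked recommendation and the file name used for
persistence is only an example:

[source,sh]
----
sysctl net.ipv4.tcp_retries2                # show the current value
sudo sysctl -w net.ipv4.tcp_retries2=5      # apply the recommended value until the next reboot
echo "net.ipv4.tcp_retries2=5" | sudo tee /etc/sysctl.d/20-tcp-retries2.conf   # persist across reboots
----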

If you're an advanced user, you can get more detailed information about what
the node was doing when it was removed by configuring the following logger:

[source,yaml]
----
logger.org.elasticsearch.cluster.coordination.LagDetector: DEBUG
----

When this logger is enabled, {es} will attempt to run the
<<cluster-nodes-hot-threads>> API on the faulty node and report the results in
the logs on the elected master. The results are compressed, encoded, and split
into chunks to avoid truncation:

[source,text]
----
[DEBUG][o.e.c.c.LagDetector ] [master] hot threads from node [{node}{g3cCUaMDQJmQ2ZLtjr-3dg}{10.0.0.1:9300}] lagging at version [183619] despite commit of cluster state version [183620] [part 1]: H4sIAAAAAAAA/x...
[DEBUG][o.e.c.c.LagDetector ] [master] hot threads from node [{node}{g3cCUaMDQJmQ2ZLtjr-3dg}{10.0.0.1:9300}] lagging at version [183619] despite commit of cluster state version [183620] [part 2]: p7x3w1hmOQVtuV...
[DEBUG][o.e.c.c.LagDetector ] [master] hot threads from node [{node}{g3cCUaMDQJmQ2ZLtjr-3dg}{10.0.0.1:9300}] lagging at version [183619] despite commit of cluster state version [183620] [part 3]: v7uTboMGDbyOy+...
[DEBUG][o.e.c.c.LagDetector ] [master] hot threads from node [{node}{g3cCUaMDQJmQ2ZLtjr-3dg}{10.0.0.1:9300}] lagging at version [183619] despite commit of cluster state version [183620] [part 4]: 4tse0RnPnLeDNN...
[DEBUG][o.e.c.c.LagDetector ] [master] hot threads from node [{node}{g3cCUaMDQJmQ2ZLtjr-3dg}{10.0.0.1:9300}] lagging at version [183619] despite commit of cluster state version [183620] (gzip compressed, base64-encoded, and split into 4 parts on preceding log lines)
----

To reconstruct the output, base64-decode the data and decompress it using
`gzip`. For instance, on Unix-like systems:

[source,sh]
----
cat lagdetector.log | sed -e 's/.*://' | base64 --decode | gzip --decompress
----

===== Diagnosing `follower check retry count exceeded` nodes

Nodes sometimes leave the cluster with reason `follower check retry count
exceeded` when they shut down, but if they rejoin the cluster without
restarting then there is some other problem.

{es} needs every node to respond to network messages successfully and
reasonably quickly. If a node rejects requests or does not respond at all then
it can be harmful to the cluster. If enough consecutive checks fail then the
master will remove the node with reason `follower check retry count exceeded`
and will indicate in the `node-left` message how many of the consecutive
unsuccessful checks failed and how many of them timed out. Refer to
<<modules-discovery-settings>> for information about the settings which control
this mechanism.

Timeouts and failures may be due to network delays or performance problems on
the affected nodes. Ensure that `net.ipv4.tcp_retries2` is
<<system-config-tcpretries,configured properly>> to eliminate network delays as
a possible cause for this kind of instability. Log messages containing
`warn threshold` may give further clues about the cause of the instability.

If the last check failed with an exception then the exception is reported, and
typically indicates the problem that needs to be addressed. If any of the
checks timed out, it may be necessary to understand the detailed sequence of
steps involved in a successful check. Here is an example of such a sequence:

. The master's `FollowerChecker`, running on thread
`elasticsearch[master][scheduler][T#1]`, tells the `TransportService` to send
the check request message to a follower node.

. The master's `TransportService`, running on thread
`elasticsearch[master][transport_worker][T#2]`, passes the check request message
onto the operating system.

. The operating system on the master converts the message into one or more
packets and sends them out over the network.

. Miscellaneous routers, firewalls, and other devices between the master node
and the follower node forward the packets, possibly fragmenting or
defragmenting them on the way.

. The operating system on the follower node receives the packets and notifies
{es} that they've been received.

. The follower's `TransportService`, running on thread
`elasticsearch[follower][transport_worker][T#3]`, reads the incoming packets.
It then reconstructs and processes the check request. Usually, the check
quickly succeeds. If so, the same thread immediately constructs a response and
passes it back to the operating system.

. If the check doesn't immediately succeed (for example, an election started
recently) then:

.. The follower's `FollowerChecker`, running on thread
`elasticsearch[follower][cluster_coordination][T#4]`, processes the request. It
constructs a response and tells the `TransportService` to send the response
back to the master.

.. The follower's `TransportService`, running on thread
`elasticsearch[follower][transport_worker][T#3]`, passes the response to the
operating system.

. The operating system on the follower converts the response into one or more
packets and sends them out over the network.

. Miscellaneous routers, firewalls, and other devices between master and
follower forward the packets, possibly fragmenting or defragmenting them on the
way.

. The operating system on the master receives the packets and notifies {es}
that they've been received.

. The master's `TransportService`, running on thread
`elasticsearch[master][transport_worker][T#2]`, reads the incoming packets,
reconstructs the check response, and processes it as long as the check didn't
already time out.

There are a lot of different things that can delay the completion of a check
and cause it to time out. Here are some examples for each step:

. There may be a long garbage collection (GC) or virtual machine (VM) pause
after passing the check request to the `TransportService`.

. There may be a long wait for the specific `transport_worker` thread to become
available, or there may be a long GC or VM pause before passing the check
request onto the operating system.

. A system fault (for example, a broken network card) on the master may delay
sending the message over the network, possibly indefinitely.

. Intermediate devices may delay, drop, or corrupt packets along the way. The
operating system for the master will wait and retransmit any unacknowledged or
corrupted packets up to `net.ipv4.tcp_retries2` times. We recommend
<<system-config-tcpretries,reducing this value>> since the default represents a
very long delay.

. A system fault (for example, a broken network card) on the follower may delay
receiving the message from the network.

. There may be a long wait for the specific `transport_worker` thread to become
available, or there may be a long GC or VM pause during the processing of the
request on the follower.

. There may be a long wait for the `cluster_coordination` thread to become
available, or for the specific `transport_worker` thread to become available
again. There may also be a long GC or VM pause during the processing of the
request.

. A system fault (for example, a broken network card) on the follower may delay
sending the response over the network.

. Intermediate devices may delay, drop, or corrupt packets along the way again,
causing retransmissions.

. A system fault (for example, a broken network card) on the master may delay
receiving the message from the network.

. There may be a long wait for the specific `transport_worker` thread to become
available to process the response, or a long GC or VM pause.

To determine why follower checks are timing out, we can narrow down the reason
for the delay as follows:

* GC pauses are recorded in the GC logs that {es} emits by default, and also
usually by the `JvmMonitorService` in the main node logs. Use these logs to
confirm whether or not GC is resulting in delays.

* VM pauses also affect other processes on the same host. A VM pause also
typically causes a discontinuity in the system clock, which {es} will report in
its logs.

* Packet captures will reveal system-level and network-level faults, especially
if you capture the network traffic simultaneously at the elected master and the
faulty node. The connection used for follower checks is not used for any other
traffic so it can be easily identified from the flow pattern alone, even if TLS
is in use: almost exactly every second there will be a few hundred bytes sent
each way, first the request by the master and then the response by the
follower. You should be able to observe any retransmissions, packet loss, or
other delays on such a connection.

* Long waits for particular threads to be available can be identified by taking
stack dumps (for example, using `jstack`) or a profiling trace (for example,
using Java Flight Recorder) in the few seconds leading up to a node departure.
+
By default the follower checks will time out after 30s, so if node departures
are unpredictable then capture stack dumps every 15s to be sure that at least
one stack dump was taken at the right time. A simple capture loop is sketched
after this list.
+
The <<cluster-nodes-hot-threads>> API sometimes yields useful information, but
bear in mind that this API also requires a number of `transport_worker` and
`generic` threads across all the nodes in the cluster. The API may be affected
by the very problem you're trying to diagnose. `jstack` is much more reliable
since it doesn't require any JVM threads.
+
The threads involved in the follower checks are `transport_worker` and
`cluster_coordination` threads, for which there should never be a long wait.
There may also be evidence of long waits for threads in the {es} logs. See
<<modules-network-threading-model>> for more information.
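
For instance, a minimal shell sketch that captures a stack dump of the {es}
process every 15 seconds; the process ID is an assumption, and `jps` or `ps`
can be used to find the real one:

[source,sh]
----
ES_PID=12345   # example PID of the Elasticsearch JVM
while true; do
  jstack "$ES_PID" > "jstack-$(date +%Y%m%dT%H%M%S).txt"   # one stack dump per file
  sleep 15
done
----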

===== Diagnosing `ShardLockObtainFailedException` failures

If a node leaves and rejoins the cluster then {es} will usually shut down and
re-initialize its shards. If the shards do not shut down quickly enough then
{es} may fail to re-initialize them due to a `ShardLockObtainFailedException`.

To gather more information about the reason for shards shutting down slowly,
configure the following logger:

[source,yaml]
----
logger.org.elasticsearch.env.NodeEnvironment: DEBUG
----

When this logger is enabled, {es} will attempt to run the
<<cluster-nodes-hot-threads>> API whenever it encounters a
`ShardLockObtainFailedException`. The results are compressed, encoded, and
split into chunks to avoid truncation:

[source,text]
----
[DEBUG][o.e.e.NodeEnvironment ] [master] hot threads while failing to obtain shard lock for [index][0] [part 1]: H4sIAAAAAAAA/x...
[DEBUG][o.e.e.NodeEnvironment ] [master] hot threads while failing to obtain shard lock for [index][0] [part 2]: p7x3w1hmOQVtuV...
[DEBUG][o.e.e.NodeEnvironment ] [master] hot threads while failing to obtain shard lock for [index][0] [part 3]: v7uTboMGDbyOy+...
[DEBUG][o.e.e.NodeEnvironment ] [master] hot threads while failing to obtain shard lock for [index][0] [part 4]: 4tse0RnPnLeDNN...
[DEBUG][o.e.e.NodeEnvironment ] [master] hot threads while failing to obtain shard lock for [index][0] (gzip compressed, base64-encoded, and split into 4 parts on preceding log lines)
----

To reconstruct the output, base64-decode the data and decompress it using
`gzip`. For instance, on Unix-like systems:

[source,sh]
----
cat shardlock.log | sed -e 's/.*://' | base64 --decode | gzip --decompress
----