
[[high-availability-cluster-design]]
== Designing for resilience

Distributed systems like {es} are designed to keep working even if some of
their components have failed. As long as there are enough well-connected
nodes to take over their responsibilities, an {es} cluster can continue
operating normally if some of its nodes are unavailable or disconnected.

There is a limit to how small a resilient cluster can be. All {es} clusters
require:

- One <<modules-discovery-quorums,elected master node>>
- At least one node for each <<modules-node,role>>
- At least one copy of every <<scalability,shard>>

A resilient cluster requires redundancy for every required cluster component.
This means a resilient cluster must have:

- At least three master-eligible nodes
- At least two nodes of each role
- At least two copies of each shard (one primary and one or more replicas,
unless the index is a <<searchable-snapshots,searchable snapshot index>>)

A resilient cluster needs three master-eligible nodes so that if one of
them fails then the remaining two still form a majority and can hold a
successful election.

Similarly, redundancy of nodes of each role means that if a node for a
particular role fails, another node can take on its responsibilities.

Finally, a resilient cluster should have at least two copies of each shard. If
one copy fails then there should be another good copy to take over. {es}
automatically rebuilds any failed shard copies on the remaining nodes in order
to restore the cluster to full health after a failure.

Failures temporarily reduce the total capacity of your cluster. In addition,
after a failure the cluster must perform additional background activities to
restore itself to health. You should make sure that your cluster has the
capacity to handle your workload even if some nodes fail.

Depending on your needs and budget, an {es} cluster can consist of a single
node, hundreds of nodes, or any number in between. When designing a smaller
cluster, you should typically focus on making it resilient to single-node
failures. Designers of larger clusters must also consider cases where multiple
nodes fail at the same time. The following pages give some recommendations for
building resilient clusters of various sizes:

- <<high-availability-cluster-small-clusters>>
- <<high-availability-cluster-design-large-clusters>>

[[high-availability-cluster-small-clusters]]
=== Resilience in small clusters

In smaller clusters, it is most important to be resilient to single-node
failures. This section gives some guidance on making your cluster as resilient
as possible to the failure of an individual node.

[[high-availability-cluster-design-one-node]]
==== One-node clusters

If your cluster consists of one node, that single node must do everything.
To accommodate this, {es} assigns nodes every role by default.

A single node cluster is not resilient. If the node fails, the cluster will
stop working. Because there are no replicas in a one-node cluster, you cannot
store your data redundantly. However, by default at least one replica is
required for a <<cluster-health,`green` cluster health status>>. To ensure your
cluster can report a `green` status, override the default by setting
<<dynamic-index-settings,`index.number_of_replicas`>> to `0` on every index.
If the node fails, you may need to restore an older copy of any lost indices
from a <<modules-snapshots,snapshot>>.

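For example, the following request applies this override to an existing index
(the index name `my-index` is a placeholder):

[source,console]
----
PUT /my-index/_settings
{
  "index": {
    "number_of_replicas": 0
  }
}
----
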
Because they are not resilient to any failures, we do not recommend using
one-node clusters in production.

[[high-availability-cluster-design-two-nodes]]
==== Two-node clusters

If you have two nodes, we recommend they both be data nodes. You should also
ensure every shard is stored redundantly on both nodes by setting
<<dynamic-index-settings,`index.number_of_replicas`>> to `1` on every index
that is not a <<searchable-snapshots,searchable snapshot index>>. This is the
default behaviour but may be overridden by an <<index-templates,index
template>>. <<dynamic-index-settings,Auto-expand replicas>> can also achieve
the same thing, but it's not necessary to use this feature in such a small
cluster.

We recommend you set `node.master: false` on one of your two nodes so that it is
not <<master-node,master-eligible>>. This means you can be certain which of your
nodes is the elected master of the cluster. The cluster can tolerate the loss of
the other master-ineligible node. If you don't set `node.master: false` on one
node, both nodes are master-eligible. This means both nodes are required for a
master election. Since the election will fail if either node is unavailable,
your cluster cannot reliably tolerate the loss of either node.

By default, each node is assigned every role. We recommend you assign both nodes
all other roles except master eligibility. If one node fails, the other node can
handle its tasks.

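For instance, the master-ineligible node's `elasticsearch.yml` might contain
just this one-line override, leaving every other role at its default:

[source,yaml]
----
# Make this node master-ineligible so the other node is always the
# elected master. All other roles keep their default values.
node.master: false
----
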
You should avoid sending client requests to just one of your nodes. If you do
and this node fails, such requests will not receive responses even if the
remaining node is a healthy cluster on its own. Ideally, you should balance your
client requests across both nodes. A good way to do this is to specify the
addresses of both nodes when configuring the client to connect to your cluster.
Alternatively, you can use a resilient load balancer to balance client requests
across the nodes in your cluster.

Because it's not resilient to failures, we do not recommend deploying a two-node
cluster in production.

[[high-availability-cluster-design-two-nodes-plus]]
==== Two-node clusters with a tiebreaker

Because master elections are majority-based, the two-node cluster described
above is tolerant to the loss of one of its nodes but not the
other one. You cannot configure a two-node cluster so that it can tolerate
the loss of _either_ node because this is theoretically impossible. You might
expect that if either node fails then {es} can elect the remaining node as the
master, but it is impossible to tell the difference between the failure of a
remote node and a mere loss of connectivity between the nodes. If both nodes
were capable of running independent elections, a loss of connectivity would
lead to a {wikipedia}/Split-brain_(computing)[split-brain
problem] and therefore data loss. {es} avoids this and
protects your data by electing neither node as master until that node can be
sure that it has the latest cluster state and that there is no other master in
the cluster. This could result in the cluster having no master until
connectivity is restored.

You can solve this problem by adding a third node and making all three nodes
master-eligible. A <<modules-discovery-quorums,master election>> requires only
two of the three master-eligible nodes. This means the cluster can tolerate the
loss of any single node. This third node acts as a tiebreaker in cases where the
two original nodes are disconnected from each other. You can reduce the resource
requirements of this extra node by making it a <<voting-only-node,dedicated
voting-only master-eligible node>>, also known as a dedicated tiebreaker.
Because it has no other roles, a dedicated tiebreaker does not need to be as
powerful as the other two nodes. It will not perform any searches nor coordinate
any client requests and cannot be elected as the master of the cluster.

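As a sketch, the dedicated tiebreaker's `elasticsearch.yml` might disable every
role except voting-only master eligibility, using the same style of role
settings as the `node.master` setting mentioned earlier:

[source,yaml]
----
# Dedicated voting-only tiebreaker: master-eligible for voting purposes
# only, with no data or ingest responsibilities.
node.master: true
node.voting_only: true
node.data: false
node.ingest: false
----
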
The two original nodes should not be voting-only master-eligible nodes since a
resilient cluster requires at least three master-eligible nodes, at least two
of which are not voting-only master-eligible nodes. If two of your three nodes
are voting-only master-eligible nodes then the elected master must be the third
node. This node then becomes a single point of failure.

We recommend assigning both non-tiebreaker nodes all other roles. This creates
redundancy by ensuring any task in the cluster can be handled by either node.

You should not send any client requests to the dedicated tiebreaker node.
You should also avoid sending client requests to just one of the other two
nodes. If you do, and this node fails, then any requests will not
receive responses, even if the remaining nodes form a healthy cluster. Ideally,
you should balance your client requests across both of the non-tiebreaker
nodes. You can do this by specifying the addresses of both nodes
when configuring your client to connect to your cluster. Alternatively, you can
use a resilient load balancer to balance client requests across the appropriate
nodes in your cluster. The {ess-trial}[Elastic Cloud] service
provides such a load balancer.

A two-node cluster with an additional tiebreaker node is the smallest possible
cluster that is suitable for production deployments.

[[high-availability-cluster-design-three-nodes]]
==== Three-node clusters

If you have three nodes, we recommend they all be <<data-node,data nodes>> and
every index that is not a <<searchable-snapshots,searchable snapshot index>>
should have at least one replica. Nodes are data nodes by default. You may
prefer for some indices to have two replicas so that each node has a copy of
each shard in those indices. You should also configure each node to be
<<master-node,master-eligible>> so that any two of them can hold a master
election without needing to communicate with the third node. Nodes are
master-eligible by default. This cluster will be resilient to the loss of any
single node.

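For example, to give an existing index two replicas so that each of the three
nodes holds a complete copy of its data (`my-index` is again a placeholder):

[source,console]
----
PUT /my-index/_settings
{
  "index": {
    "number_of_replicas": 2
  }
}
----
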
You should avoid sending client requests to just one of your nodes. If you do,
and this node fails, then any requests will not receive responses even if the
remaining two nodes form a healthy cluster. Ideally, you should balance your
client requests across all three nodes. You can do this by specifying the
addresses of multiple nodes when configuring your client to connect to your
cluster. Alternatively you can use a resilient load balancer to balance client
requests across your cluster. The {ess-trial}[Elastic Cloud]
service provides such a load balancer.

[[high-availability-cluster-design-three-plus-nodes]]
==== Clusters with more than three nodes

Once your cluster grows to more than three nodes, you can start to specialise
these nodes according to their responsibilities, allowing you to scale their
resources independently as needed. You can have as many <<data-node,data
nodes>>, <<ingest,ingest nodes>>, <<ml-node,{ml} nodes>>, etc. as needed to
support your workload. As your cluster grows larger, we recommend using
dedicated nodes for each role. This allows you to independently scale resources
for each task.

However, it is good practice to limit the number of master-eligible nodes in
the cluster to three. Master nodes do not scale like other node types since
the cluster always elects just one of them as the master of the cluster. If
there are too many master-eligible nodes then master elections may take
longer to complete. In larger clusters, we recommend you
configure some of your nodes as dedicated master-eligible nodes and avoid
sending any client requests to these dedicated nodes. Your cluster may become
unstable if the master-eligible nodes are overwhelmed with unnecessary extra
work that could be handled by one of the other nodes.

You may configure one of your master-eligible nodes to be a
<<voting-only-node,voting-only node>> so that it can never be elected as the
master node. For instance, you may have two dedicated master nodes and a third
node that is both a data node and a voting-only master-eligible node. This
third voting-only node will act as a tiebreaker in master elections but will
never become the master itself.

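Under the same style of role settings shown earlier, this arrangement might
look like the following sketch:

[source,yaml]
----
# elasticsearch.yml on each of the two dedicated master nodes
node.master: true
node.data: false
node.ingest: false

# elasticsearch.yml on the third node, a voting-only data node that
# breaks ties in elections but can never become the master itself
node.master: true
node.voting_only: true
node.data: true
----
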
[[high-availability-cluster-design-small-cluster-summary]]
==== Summary

The cluster will be resilient to the loss of any node as long as:

- The <<cluster-health,cluster health status>> is `green` (see the check below).
- There are at least two data nodes.
- Every index that is not a <<searchable-snapshots,searchable snapshot index>>
has at least one replica of each shard, in addition to the primary.
- The cluster has at least three master-eligible nodes, as long as at least two
of these nodes are not voting-only master-eligible nodes.
- Clients are configured to send their requests to more than one node or are
configured to use a load balancer that balances the requests across an
appropriate set of nodes. The {ess-trial}[Elastic Cloud] service provides such
a load balancer.

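A quick way to verify the first condition is the cluster health API:

[source,console]
----
GET _cluster/health
----
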
[[high-availability-cluster-design-large-clusters]]
=== Resilience in larger clusters

It's not unusual for nodes to share common infrastructure, such as network
interconnects or a power supply. If so, you should plan for the failure of this
infrastructure and ensure that such a failure would not affect too many of your
nodes. It is common practice to group all the nodes sharing some infrastructure
into _zones_ and to plan for the failure of any whole zone at once.

{es} expects node-to-node connections to be reliable, have low latency, and
have adequate bandwidth. Many {es} tasks require multiple round-trips between
nodes. A slow or unreliable interconnect may have a significant effect on the
performance and stability of your cluster.

For example, a few milliseconds of latency added to each round-trip can quickly
accumulate into a noticeable performance penalty. An unreliable network may
have frequent network partitions. {es} will automatically recover from a
network partition as quickly as it can but your cluster may be partly
unavailable during a partition and will need to spend time and resources to
resynchronize any missing data and rebalance itself once the partition heals.
Recovering from a failure may involve copying a large amount of data between
nodes so the recovery time is often determined by the available bandwidth.

If you've divided your cluster into zones, the network connections within each
zone are typically of higher quality than the connections between the zones.
Ensure the network connections between zones are of sufficiently high quality.
You will see the best results by locating all your zones within a single data
center with each zone having its own independent power supply and other
supporting infrastructure. You can also _stretch_ your cluster across nearby
data centers as long as the network interconnection between each pair of data
centers is good enough.

[[high-availability-cluster-design-min-network-perf]]
There is no specific minimum network performance required to run a healthy {es}
cluster. In theory, a cluster will work correctly even if the round-trip
latency between nodes is several hundred milliseconds. In practice, if your
network is that slow then the cluster performance will be very poor. In
addition, slow networks are often unreliable enough to cause network partitions
that lead to periods of unavailability.

If you want your data to be available in multiple data centers that are further
apart or not well connected, deploy a separate cluster in each data center and
use <<modules-cross-cluster-search,{ccs}>> or <<xpack-ccr,{ccr}>> to link the
clusters together. These features are designed to perform well even if the
cluster-to-cluster connections are less reliable or performant than the network
within each cluster.

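As a minimal sketch, you might register one cluster as a remote of the other
(the alias `dc2` and the seed address are placeholders):

[source,console]
----
PUT _cluster/settings
{
  "persistent": {
    "cluster": {
      "remote": {
        "dc2": {
          "seeds": ["dc2-node-1:9300"]
        }
      }
    }
  }
}
----

A {ccs} request can then address the remote cluster's indices with the `dc2:`
prefix, for example `GET dc2:my-index/_search`.
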
After losing a whole zone's worth of nodes, a properly-designed cluster may be
functional but running with significantly reduced capacity. You may need
to provision extra nodes to restore acceptable performance in your
cluster when handling such a failure.

For resilience against whole-zone failures, it is important that there is a copy
of each shard in more than one zone, which can be achieved by placing data
nodes in multiple zones and configuring <<allocation-awareness,shard allocation
awareness>>. You should also ensure that client requests are sent to nodes in
more than one zone.

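As a sketch, this takes two settings: a custom node attribute recording each
node's zone, and an awareness setting naming that attribute (the attribute name
`zone` and the value `zone-1` are placeholders):

[source,yaml]
----
# On each node: record which zone the node belongs to
node.attr.zone: zone-1

# On every node: tell the allocator to spread copies of each shard
# across different values of the `zone` attribute
cluster.routing.allocation.awareness.attributes: zone
----
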
You should consider all node roles and ensure that each role is split
redundantly across two or more zones. For instance, if you are using
<<ingest,ingest pipelines>> or {ml}, you should have ingest or {ml} nodes in two
or more zones. However, the placement of master-eligible nodes requires a little
more care because a resilient cluster needs at least two of the three
master-eligible nodes in order to function. The following sections explore the
options for placing master-eligible nodes across multiple zones.

[[high-availability-cluster-design-two-zones]]
==== Two-zone clusters

If you have two zones, you should have a different number of
master-eligible nodes in each zone so that the zone with more nodes will
contain a majority of them and will be able to survive the loss of the other
zone. For instance, if you have three master-eligible nodes then you may put
all of them in one zone or you may put two in one zone and the third in the
other zone. You should not place an equal number of master-eligible nodes in
each zone. If you place the same number of master-eligible nodes in each zone,
neither zone has a majority of its own. Therefore, the cluster may not survive
the loss of either zone.

[[high-availability-cluster-design-two-zones-plus]]
==== Two-zone clusters with a tiebreaker

The two-zone deployment described above is tolerant to the loss of one of its
zones but not to the loss of the other one because master elections are
majority-based. You cannot configure a two-zone cluster so that it can tolerate
the loss of _either_ zone because this is theoretically impossible. You might
expect that if either zone fails then {es} can elect a node from the remaining
zone as the master but it is impossible to tell the difference between the
failure of a remote zone and a mere loss of connectivity between the zones. If
both zones were capable of running independent elections then a loss of
connectivity would lead to a
{wikipedia}/Split-brain_(computing)[split-brain problem] and
therefore data loss. {es} avoids this and protects your data by not electing
a node from either zone as master until that node can be sure that it has the
latest cluster state and that there is no other master in the cluster. This may
mean there is no master at all until connectivity is restored.

You can solve this by placing one master-eligible node in each of your two
zones and adding a single extra master-eligible node in an independent third
zone. The extra master-eligible node acts as a tiebreaker in cases
where the two original zones are disconnected from each other. The extra
tiebreaker node should be a <<voting-only-node,dedicated voting-only
master-eligible node>>, also known as a dedicated tiebreaker. A dedicated
tiebreaker need not be as powerful as the other two nodes since it has no other
roles and will not perform any searches nor coordinate any client requests nor
be elected as the master of the cluster.

You should use <<allocation-awareness,shard allocation awareness>> to ensure
that there is a copy of each shard in each zone. This means either zone remains
fully available if the other zone fails.

All master-eligible nodes, including voting-only nodes, are on the critical path
for publishing cluster state updates. Because of this, these nodes require
reasonably fast persistent storage and a reliable, low-latency network
connection to the rest of the cluster. If you add a tiebreaker node in a third
independent zone then you must make sure it has adequate resources and good
connectivity to the rest of the cluster.

[[high-availability-cluster-design-three-zones]]
==== Clusters with three or more zones

If you have three zones then you should have one master-eligible node in each
zone. If you have more than three zones then you should choose three of the
zones and put a master-eligible node in each of these three zones. This will
mean that the cluster can still elect a master even if one of the zones fails.

As always, your indices should have at least one replica in case a node fails,
unless they are <<searchable-snapshots,searchable snapshot indices>>. You
should also use <<allocation-awareness,shard allocation awareness>> to limit
the number of copies of each shard in each zone. For instance, if you have an
index with one or two replicas configured then allocation awareness will ensure
that the replicas of the shard are in a different zone from the primary. This
means that a copy of every shard will still be available if one zone fails, so
the availability of the index is unaffected by such a failure.

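One option for enforcing this limit is _forced awareness_; the sketch below
assumes three zones named `zone-a`, `zone-b`, and `zone-c`:

[source,yaml]
----
# Spread copies of each shard across zones. With forced awareness,
# Elasticsearch leaves copies unassigned rather than crowding extra
# copies into the remaining zones while a whole zone is offline.
cluster.routing.allocation.awareness.attributes: zone
cluster.routing.allocation.awareness.force.zone.values: zone-a,zone-b,zone-c
----
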
[[high-availability-cluster-design-large-cluster-summary]]
==== Summary

The cluster will be resilient to the loss of any zone as long as:

- The <<cluster-health,cluster health status>> is `green`.
- There are at least two zones containing data nodes.
- Every index that is not a <<searchable-snapshots,searchable snapshot index>>
has at least one replica of each shard, in addition to the primary.
- Shard allocation awareness is configured to avoid concentrating all copies of
a shard within a single zone.
- The cluster has at least three master-eligible nodes. At least two of these
nodes are not voting-only master-eligible nodes, and they are spread evenly
across at least three zones.
- Clients are configured to send their requests to nodes in more than one zone
or are configured to use a load balancer that balances the requests across an
appropriate set of nodes. The {ess-trial}[Elastic Cloud] service provides such
a load balancer.