[[cluster-state-publishing]]
=== Publishing the cluster state

The elected master node is the only node in a cluster that can make changes to
the cluster state. The elected master node processes one batch of cluster state
updates at a time, computing the required changes and publishing the updated
cluster state to all the other nodes in the cluster. Each publication starts
with the elected master broadcasting the updated cluster state to all nodes in
the cluster. Each node responds with an acknowledgement but does not yet apply
the newly-received state. Once the elected master has collected
acknowledgements from enough master-eligible nodes, the new cluster state is
said to be _committed_ and the master broadcasts another message instructing
nodes to apply the now-committed state. Each node receives this message,
applies the updated state, and then sends a second acknowledgement back to the
master.
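
Below is a minimal sketch of this two-phase flow. The class, method, and
message names are hypothetical and for illustration only, and the sketch
assumes that "enough" master-eligible nodes means a majority; it is not
Elasticsearch's internal API.

[source,java]
----
import java.util.Set;

// Illustrative sketch of the two-phase publication described above.
class TwoPhasePublication {

    // Placeholder for the transport layer: sends a message to every node in
    // `nodes` and returns the subset of nodes that acknowledged it.
    Set<String> broadcastAndAwaitAcks(String message, Set<String> nodes) {
        return nodes; // stub: assume every node acknowledges
    }

    void publish(long version, Set<String> allNodes, Set<String> masterEligible) {
        // Phase 1: broadcast the updated state; each node acknowledges receipt
        // but does not apply the state yet.
        Set<String> acks = broadcastAndAwaitAcks("publish v" + version, allNodes);

        // The state is committed once enough master-eligible nodes have
        // acknowledged it (assumed here to be a majority).
        long eligibleAcks = acks.stream().filter(masterEligible::contains).count();
        if (eligibleAcks * 2 <= masterEligible.size()) {
            throw new IllegalStateException("failed to commit state v" + version);
        }

        // Phase 2: instruct all nodes to apply the now-committed state; each
        // node applies it and sends a second acknowledgement back.
        broadcastAndAwaitAcks("apply v" + version, allNodes);
    }
}
----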

The elected master allows a limited amount of time for each cluster state
update to be completely published to all nodes. This time limit is defined by
the `cluster.publish.timeout` setting, which defaults to `30s`, measured from
the time the publication started. If this time is reached before the new
cluster state is committed then the cluster state change is rejected and the
elected master considers itself to have failed. It stands down and starts
trying to elect a new master node.
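
The commit deadline can be sketched as follows; the helper methods here are
hypothetical placeholders, not Elasticsearch's API.

[source,java]
----
import java.time.Duration;
import java.time.Instant;

// Illustrative sketch of the cluster.publish.timeout check described above.
class PublishTimeout {
    static final Duration PUBLISH_TIMEOUT = Duration.ofSeconds(30); // cluster.publish.timeout

    boolean awaitCommitUntil(Instant deadline) {
        return true; // stub: block until the state is committed or `deadline` passes
    }

    void standDownAndStartElection() { /* stub */ }

    // Returns true if the state was committed in time. Otherwise the change is
    // rejected and the elected master considers itself to have failed.
    boolean waitForCommit(Instant publicationStart) {
        // The timeout is measured from the time the publication started.
        Instant deadline = publicationStart.plus(PUBLISH_TIMEOUT);
        if (!awaitCommitUntil(deadline)) {
            standDownAndStartElection();
            return false;
        }
        return true;
    }
}
----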

If the new cluster state is committed before `cluster.publish.timeout` has
elapsed, the elected master node considers the change to have succeeded. It
waits until the timeout has elapsed or until it has received acknowledgements
that each node in the cluster has applied the updated state, and then starts
processing and publishing the next cluster state update. If some
acknowledgements have not been received (i.e. some nodes have not yet confirmed
that they have applied the current update), these nodes are said to be
_lagging_ since their cluster states have fallen behind the elected master's
latest state. The elected master waits for the lagging nodes to catch up for a
further time, `cluster.follower_lag.timeout`, which defaults to `90s`. If a
node has still not successfully applied the cluster state update within this
time then it is considered to have failed and the elected master removes it
from the cluster.
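
Continuing the sketch above, lag handling might look like this; again, the
helpers are hypothetical placeholders.

[source,java]
----
import java.time.Duration;
import java.time.Instant;
import java.util.Set;

// Illustrative sketch of lagging-node handling after the state is committed.
class FollowerLag {
    static final Duration FOLLOWER_LAG_TIMEOUT = Duration.ofSeconds(90); // cluster.follower_lag.timeout

    // Stub: waits until `deadline` (or until every node has acknowledged) and
    // returns the nodes that have NOT yet acknowledged applying the state.
    Set<String> awaitApplyAcksUntil(Set<String> nodes, Instant deadline) {
        return Set.of();
    }

    void removeFromCluster(String node) { /* stub */ }

    void finishPublication(Set<String> allNodes, Instant publishDeadline) {
        // Wait until cluster.publish.timeout elapses or every node has applied
        // the state; any node still missing an acknowledgement is lagging.
        Set<String> lagging = awaitApplyAcksUntil(allNodes, publishDeadline);

        // Lagging nodes get a further cluster.follower_lag.timeout to catch up.
        Instant lagDeadline = Instant.now().plus(FOLLOWER_LAG_TIMEOUT);
        for (String node : awaitApplyAcksUntil(lagging, lagDeadline)) {
            removeFromCluster(node); // still not applied: remove from the cluster
        }
    }
}
----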

Cluster state updates are typically published as diffs to the previous cluster
state, which reduces the time and network bandwidth needed to publish a cluster
state update. For example, when updating the mappings for only a subset of the
indices in the cluster state, only the updates for those indices need to be
published to the nodes in the cluster, as long as those nodes have the previous
cluster state. If a node is missing the previous cluster state, for example
when rejoining a cluster, the elected master will publish the full cluster
state to that node so that it can receive future updates as diffs.
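
The choice between a diff and a full state might be sketched as follows; the
types and serialization helpers are hypothetical placeholders, not
Elasticsearch's wire format.

[source,java]
----
// Illustrative sketch of diff-based cluster state publication.
class StateEncoder {
    record ClusterState(long version) {}

    byte[] diff(ClusterState from, ClusterState to) { return new byte[0]; } // stub
    byte[] serializeFull(ClusterState state) { return new byte[0]; }        // stub

    // `lastAckedState` is the last state the target node is known to hold, or
    // null if it is unknown (for example, because the node is rejoining).
    byte[] encodeFor(ClusterState newState, ClusterState lastAckedState) {
        if (lastAckedState != null) {
            // The node has the previous state, so only the changes need to be
            // sent, e.g. updated mappings for a subset of the indices.
            return diff(lastAckedState, newState);
        }
        // Otherwise publish the full state so future updates can be diffs.
        return serializeFull(newState);
    }
}
----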

NOTE: {es} is a peer-to-peer based system, in which nodes communicate with one
another directly. The high-throughput APIs (index, delete, search) do not
normally interact with the elected master node. The responsibility of the
elected master node is to maintain the global cluster state, which includes
reassigning shards when nodes join or leave the cluster. Each time the cluster
state is changed, the new state is published to all nodes in the cluster as
described above.

The performance characteristics of cluster state updates are a function of the
speed of the storage on each master-eligible node, as well as the reliability
and latency of the network interconnections between all nodes in the cluster.
You must therefore ensure that the storage and networking available to the
nodes in your cluster are good enough to meet your performance goals.