[[restart-upgrade]]
=== Full cluster restart upgrade

Elasticsearch requires a full cluster restart when upgrading across major
versions. Rolling upgrades are not supported across major versions. Consult
this <<setup-upgrade,table>> to verify that a full cluster restart is
required.

The process to perform an upgrade with a full cluster restart is as follows:

==== Step 1: Disable shard allocation

When you shut down a node, the allocation process will immediately try to
replicate the shards that were on that node to other nodes in the cluster,
causing a lot of wasted I/O. This can be avoided by disabling allocation
before shutting down a node:

[source,js]
--------------------------------------------------
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.enable": "none"
  }
}
--------------------------------------------------
// CONSOLE
// TEST[skip:indexes don't assign]

==== Step 2: Perform a synced flush

Shard recovery will be much faster if you stop indexing and issue a
<<indices-synced-flush,synced flush>> request:

[source,sh]
--------------------------------------------------
POST _flush/synced
--------------------------------------------------
// CONSOLE

A synced flush request is a ``best effort'' operation. It will fail if there
are any pending indexing operations, but it is safe to reissue the request
multiple times if necessary.
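
If you want to script the retry, one possible sketch (assuming the node
listens on `localhost:9200` and that the `jq` JSON processor is installed) is
to repeat the request until the `_shards.failed` count in the response reaches
zero:

[source,sh]
--------------------------------------------------
# Reissue the synced flush until no shard reports a failure.
# Assumes Elasticsearch on localhost:9200 and jq on the PATH.
until [ "$(curl -s -X POST 'localhost:9200/_flush/synced' | jq '._shards.failed')" -eq 0 ]; do
  sleep 5
done
--------------------------------------------------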

==== Step 3: Shut down and upgrade all nodes

Stop all Elasticsearch services on all nodes in the cluster. Each node can be
upgraded following the same procedure described in <<upgrade-node>>.
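
How you stop a node depends on how Elasticsearch was installed. As an
illustrative sketch, on a systemd-based package installation you would run:

[source,sh]
--------------------------------------------------
# Stop the Elasticsearch service on this node (DEB/RPM package installs);
# adapt to your init system or process manager if it differs.
sudo systemctl stop elasticsearch.service
--------------------------------------------------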

==== Step 4: Upgrade any plugins

Elasticsearch plugins must be upgraded when upgrading a node. Use the
`elasticsearch-plugin` script to install the correct version of any plugins
that you need.
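
Upgrading a plugin means removing the old version and installing the one that
matches the new Elasticsearch version. For example, with the `analysis-icu`
plugin standing in for whatever plugins you run:

[source,sh]
--------------------------------------------------
# Run from the Elasticsearch home directory on each upgraded node;
# analysis-icu is only an example: repeat for each installed plugin.
bin/elasticsearch-plugin remove analysis-icu
bin/elasticsearch-plugin install analysis-icu
--------------------------------------------------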

==== Step 5: Start the cluster

If you have dedicated master nodes -- nodes with `node.master` set to
`true` (the default) and `node.data` set to `false` -- then it is a good idea
to start them first. Wait for them to form a cluster and to elect a master
before proceeding with the data nodes. You can check progress by looking at
the logs.
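
As with shutting down, how you start a node depends on the installation. A
sketch for a systemd-based package install:

[source,sh]
--------------------------------------------------
# Start the upgraded node (DEB/RPM package installs).
sudo systemctl start elasticsearch.service
--------------------------------------------------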

As soon as the <<master-election,minimum number of master-eligible nodes>>
have discovered each other, they will form a cluster and elect a master. From
that point on, the <<cat-health,`_cat/health`>> and <<cat-nodes,`_cat/nodes`>>
APIs can be used to monitor nodes joining the cluster:

[source,sh]
--------------------------------------------------
GET _cat/health
GET _cat/nodes
--------------------------------------------------
// CONSOLE

Use these APIs to check that all nodes have successfully joined the cluster.

==== Step 6: Wait for yellow

As soon as each node has joined the cluster, it will start to recover any
primary shards that are stored locally. Initially, the
<<cat-health,`_cat/health`>> request will report a `status` of `red`, meaning
that not all primary shards have been allocated.

Once each node has recovered its local shards, the `status` will become
`yellow`, meaning all primary shards have been recovered, but not all replica
shards are allocated. This is to be expected because allocation is still
disabled.
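
Instead of polling `_cat/health`, you can ask the `_cluster/health` API to
block until the cluster reaches `yellow` (the `60s` timeout here is an
arbitrary example):

[source,sh]
--------------------------------------------------
GET _cluster/health?wait_for_status=yellow&timeout=60s
--------------------------------------------------
// CONSOLE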

==== Step 7: Reenable allocation

Delaying the allocation of replicas until all nodes have joined the cluster
allows the master to allocate replicas to nodes which already have local shard
copies. At this point, with all the nodes in the cluster, it is safe to
reenable shard allocation:

[source,js]
------------------------------------------------------
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.enable": "all"
  }
}
------------------------------------------------------
// CONSOLE

The cluster will now start allocating replica shards to all data nodes. At this
point it is safe to resume indexing and searching, but your cluster will
recover more quickly if you can delay indexing and searching until all shards
have recovered.

You can monitor progress with the <<cat-health,`_cat/health`>> and
<<cat-recovery,`_cat/recovery`>> APIs:

[source,sh]
--------------------------------------------------
GET _cat/health
GET _cat/recovery
--------------------------------------------------
// CONSOLE

Once the `status` column in the `_cat/health` output has reached `green`, all
primary and replica shards have been successfully allocated.
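
Alternatively, the same `_cluster/health` request shown in Step 6 can wait for
`green` directly:

[source,sh]
--------------------------------------------------
GET _cluster/health?wait_for_status=green&timeout=60s
--------------------------------------------------
// CONSOLE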