
[[rolling-upgrades]]
=== Rolling upgrades

A rolling upgrade allows the Elasticsearch cluster to be upgraded one node at
a time, with no downtime for end users. Running multiple versions of
Elasticsearch in the same cluster for any length of time beyond that required
for an upgrade is not supported, as shards will not be replicated from the
more recent version to the older version.

Consult this <<setup-upgrade,table>> to verify that rolling upgrades are
supported for your version of Elasticsearch.

To perform a rolling upgrade:

. *Disable shard allocation*
+
--

When you shut down a node, the allocation process will wait for one minute
before starting to replicate the shards that were on that node to other nodes
in the cluster, causing a lot of wasted I/O. This can be avoided by disabling
allocation before shutting down a node:

[source,js]
--------------------------------------------------
PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.enable": "none"
  }
}
--------------------------------------------------
// CONSOLE
// TEST[skip:indexes don't assign]
--

. *Stop non-essential indexing and perform a synced flush (Optional)*
+
--

You may happily continue indexing during the upgrade. However, shard recovery
will be much faster if you temporarily stop non-essential indexing and issue a
<<indices-synced-flush, synced-flush>> request:

[source,js]
--------------------------------------------------
POST _flush/synced
--------------------------------------------------
// CONSOLE

A synced flush request is a ``best effort'' operation. It will fail if there
are any pending indexing operations, but it is safe to reissue the request
multiple times if necessary.
--

. [[upgrade-node]] *Stop and upgrade a single node*
+
--

Shut down one of the nodes in the cluster *before* starting the upgrade.

[TIP]
================================================
When using the zip or tarball packages, the `config`, `data`, `logs` and
`plugins` directories are placed within the Elasticsearch home directory by
default.

It is a good idea to place these directories in a different location so that
there is no chance of deleting them when upgrading Elasticsearch. These custom
paths can be <<path-settings,configured>> with the `CONF_DIR` environment
variable and the `path.logs` and `path.data` settings.

The <<deb,Debian>> and <<rpm,RPM>> packages place these directories in the
appropriate place for each operating system.
================================================
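
For example, with a zip or tarball install you might point the node at
external config, data and log locations when starting it. A minimal sketch,
assuming the illustrative paths `/etc/elasticsearch`, `/var/data/elasticsearch`
and `/var/log/elasticsearch`:

[source,sh]
--------------------------------------------------
# Illustrative paths only -- adjust to your own layout.
./bin/elasticsearch \
    -E path.conf=/etc/elasticsearch \
    -E path.data=/var/data/elasticsearch \
    -E path.logs=/var/log/elasticsearch
--------------------------------------------------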

To upgrade using a <<deb,Debian>> or <<rpm,RPM>> package:

* Use `rpm` or `dpkg` to install the new package. All files should be
  placed in their proper locations, and config files should not be
  overwritten.

To upgrade using a zip or compressed tarball:

* Extract the zip or tarball to a new directory, to be sure that you don't
  overwrite the `config` or `data` directories.

* Either copy the files in the `config` directory from your old installation
  to your new installation, or set the environment variable `ES_JVM_OPTIONS`
  to the location of the `jvm.options` file and use the `-E path.conf=`
  option on the command line to point to an external config directory.

* Either copy the files in the `data` directory from your old installation
  to your new installation, or configure the location of the data directory
  in the `config/elasticsearch.yml` file, with the `path.data` setting.
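
Putting these steps together, the upgrade of a single node might look roughly
like the following. This is a sketch only: the package file names, versions
and directory locations are placeholders, not values prescribed by this guide.

[source,sh]
--------------------------------------------------
# Debian/RPM packages: install the new package over the old one.
sudo dpkg -i elasticsearch-<version>.deb    # Debian/Ubuntu
sudo rpm -U elasticsearch-<version>.rpm     # RHEL/CentOS

# Zip/tarball: extract into a new directory so that the existing
# `config` and `data` directories are not overwritten, then copy
# the old config files across.
tar -xzf elasticsearch-<version>.tar.gz -C /opt
cp -a /opt/elasticsearch-<old-version>/config/. /opt/elasticsearch-<version>/config/
--------------------------------------------------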
--

. *Upgrade any plugins*
+
--

Elasticsearch plugins must be upgraded when upgrading a node. Use the
`elasticsearch-plugin` script to install the correct version of any plugins
that you need.
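
For instance, a plugin can usually be brought up to date by removing the old
version and installing the one that matches the new Elasticsearch version. A
sketch, using `analysis-icu` purely as an example plugin name:

[source,sh]
--------------------------------------------------
# Run from the Elasticsearch home directory of the upgraded node.
# analysis-icu is only an example; repeat for each installed plugin.
./bin/elasticsearch-plugin remove analysis-icu
./bin/elasticsearch-plugin install analysis-icu
--------------------------------------------------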
--

. *Start the upgraded node*
+
--

Start the now upgraded node and confirm that it joins the cluster by checking
the log file or by checking the output of this request:

[source,sh]
--------------------------------------------------
GET _cat/nodes
--------------------------------------------------
// CONSOLE
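
How the node is actually started depends on how it was installed. A sketch for
the two common cases, assuming a systemd-based package install or a tarball
install run from its home directory:

[source,sh]
--------------------------------------------------
# Debian/RPM package on a systemd-based system:
sudo systemctl start elasticsearch.service

# Zip/tarball install, run as a daemon with a pid file:
./bin/elasticsearch -d -p pid
--------------------------------------------------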
--

. *Reenable shard allocation*
+
--

Once the node has joined the cluster, reenable shard allocation to start using
the node:

[source,js]
--------------------------------------------------
PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.enable": "all"
  }
}
--------------------------------------------------
// CONSOLE
--

. *Wait for the node to recover*
+
--

You should wait for the cluster to finish shard allocation before upgrading
the next node. You can check on progress with the <<cat-health,`_cat/health`>>
request:

[source,sh]
--------------------------------------------------
GET _cat/health
--------------------------------------------------
// CONSOLE

Wait for the `status` column to move from `yellow` to `green`. Status `green`
means that all primary and replica shards have been allocated.
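
Instead of polling `_cat/health` by hand, the cluster health API can block
until a given status is reached. A sketch using `curl`, assuming the node is
reachable on `localhost:9200`; note that during a rolling upgrade the cluster
may legitimately stay `yellow` (see the note below), in which case the call
simply returns once the timeout expires:

[source,sh]
--------------------------------------------------
# Returns as soon as the cluster reaches green, or when the timeout expires.
curl -XGET 'http://localhost:9200/_cluster/health?wait_for_status=green&timeout=60s&pretty'
--------------------------------------------------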

[IMPORTANT]
====================================================
During a rolling upgrade, primary shards assigned to a node with the higher
version will never have their replicas assigned to a node with the lower
version, because the newer version may have a different data format which is
not understood by the older version.

If it is not possible to assign the replica shards to another node with the
higher version -- e.g. if there is only one node with the higher version in
the cluster -- then the replica shards will remain unassigned and the
cluster health will remain status `yellow`.

In this case, check that there are no initializing or relocating shards (the
`init` and `relo` columns) before proceeding.

As soon as another node is upgraded, the replicas should be assigned and the
cluster health will reach status `green`.
====================================================

Shards that have not been <<indices-synced-flush,sync-flushed>> may take some time to
recover. The recovery status of individual shards can be monitored with the
<<cat-recovery,`_cat/recovery`>> request:

[source,sh]
--------------------------------------------------
GET _cat/recovery
--------------------------------------------------
// CONSOLE

If you stopped indexing, then it is safe to resume indexing as soon as
recovery has completed.
--

. *Repeat*
+
--

When the cluster is stable and the node has recovered, repeat the above steps
for all remaining nodes.
--