
[[restart-upgrade]]
== Full cluster restart upgrade

To upgrade directly to {es} {version} from versions 6.0-6.6, you must shut down
all nodes in the cluster, upgrade each node to {version}, and restart the cluster.

NOTE: If you are running a version prior to 6.0,
{stack-ref-68}/upgrading-elastic-stack.html[upgrade to 6.8]
and reindex your old indices or bring up a new {version} cluster and
<<reindex-upgrade-remote, reindex from remote>>.

include::preparing_to_upgrade.asciidoc[]
[discrete]
=== Upgrading your cluster

To perform a full cluster restart upgrade to {version}:

. *Disable shard allocation.*
+
--
include::disable-shard-alloc.asciidoc[]
--
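+
As a sketch (the actual commands live in `disable-shard-alloc.asciidoc` and may differ), disabling replica allocation typically means setting `cluster.routing.allocation.enable` to `primaries`:
+
[source,console]
--------------------------------------------------
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.enable": "primaries"
  }
}
--------------------------------------------------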
. *Stop indexing and perform a flush.*
+
--
Performing a <<indices-flush, flush>> speeds up shard recovery.

[source,console]
--------------------------------------------------
POST /_flush
--------------------------------------------------
--
. *Temporarily stop the tasks associated with active {ml} jobs and {dfeeds}.* (Optional)
+
--
include::close-ml.asciidoc[]
--
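+
As a sketch (the included snippet may use a different approach), one way to pause {ml} activity across the cluster is the set upgrade mode API:
+
[source,console]
--------------------------------------------------
POST _ml/set_upgrade_mode?enabled=true
--------------------------------------------------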
. *Shut down all nodes.*
+
--
include::shut-down-node.asciidoc[]
--
. *Upgrade all nodes.*
+
--
include::remove-xpack.asciidoc[]
--
+
--
include::upgrade-node.asciidoc[]
--
+
--
include::set-paths-tip.asciidoc[]
--
+
If upgrading from a 6.x cluster, you must also
<<modules-discovery-bootstrap-cluster,configure cluster bootstrapping>> by
setting the <<initial_master_nodes,`cluster.initial_master_nodes` setting>> on
the master-eligible nodes.
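+
A minimal sketch in `elasticsearch.yml`, assuming three master-eligible nodes whose node names are `master-a`, `master-b`, and `master-c` (hypothetical names; substitute your own):
+
[source,yaml]
--------------------------------------------------
cluster.initial_master_nodes:
  - master-a
  - master-b
  - master-c
--------------------------------------------------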
. *Upgrade any plugins.*
+
Use the `elasticsearch-plugin` script to install the upgraded version of each
installed {es} plugin. All plugins must be upgraded when you upgrade
a node.
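+
For example, assuming the `analysis-icu` plugin is installed (substitute your own plugins and installation path), you might run:
+
[source,sh]
--------------------------------------------------
./bin/elasticsearch-plugin remove analysis-icu
./bin/elasticsearch-plugin install analysis-icu
--------------------------------------------------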
. If you use {es} {security-features} to define realms, verify that your realm
settings are up to date. The format of realm settings changed in version 7.0; in
particular, the placement of the realm type changed. See
<<realm-settings,Realm settings>>.
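+
As an illustration (the realm name `realm1` is hypothetical), a `native` realm that was configured in 6.x with the type in a separate setting moves the type into the setting key itself in 7.0:
+
[source,yaml]
--------------------------------------------------
# 6.x format
xpack.security.authc.realms.realm1:
  type: native
  order: 0

# 7.0 format: the realm type becomes part of the setting key
xpack.security.authc.realms.native.realm1:
  order: 0
--------------------------------------------------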
. *Start each upgraded node.*
+
--
If you have dedicated master nodes, start them first and wait for them to
form a cluster and elect a master before proceeding with your data nodes.
You can check progress by looking at the logs.

As soon as enough master-eligible nodes have discovered each other, they form a
cluster and elect a master. At that point, you can use
<<cat-health,`_cat/health`>> and <<cat-nodes,`_cat/nodes`>> to monitor nodes
joining the cluster:

[source,console]
--------------------------------------------------
GET _cat/health

GET _cat/nodes
--------------------------------------------------

The `status` column returned by `_cat/health` shows the health of the
cluster: `red`, `yellow`, or `green`.
--
. *Wait for all nodes to join the cluster and report a status of yellow.*
+
--
When a node joins the cluster, it begins to recover any primary shards that
are stored locally. The <<cat-health,`_cat/health`>> API initially reports
a `status` of `red`, indicating that not all primary shards have been allocated.

Once a node recovers its local shards, the cluster `status` switches to `yellow`,
indicating that all primary shards have been recovered, but not all replica
shards are allocated. This is to be expected because you have not yet
reenabled allocation. Delaying the allocation of replicas until all nodes
are `yellow` allows the master to allocate replicas to nodes that
already have local shard copies.
--
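+
If you prefer not to poll `_cat/health` manually, the cluster health API can block until the cluster reaches `yellow` (the timeout value here is illustrative):
+
[source,console]
--------------------------------------------------
GET _cluster/health?wait_for_status=yellow&timeout=60s
--------------------------------------------------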
. *Reenable allocation.*
+
--
When all nodes have joined the cluster and recovered their primary shards,
reenable allocation by restoring `cluster.routing.allocation.enable` to its
default:

[source,console]
------------------------------------------------------
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.enable": null
  }
}
------------------------------------------------------

Once allocation is reenabled, the cluster starts allocating replica shards to
the data nodes. At this point it is safe to resume indexing and searching,
but your cluster will recover more quickly if you can wait until all primary
and replica shards have been successfully allocated and the status of all nodes
is `green`.

You can monitor progress with the <<cat-health,`_cat/health`>> and
<<cat-recovery,`_cat/recovery`>> APIs:

[source,console]
--------------------------------------------------
GET _cat/health

GET _cat/recovery
--------------------------------------------------
--
. *Restart machine learning jobs.*
+
--
include::open-ml.asciidoc[]
--