
[discrete]
[[breaking_80_node_changes]]
==== Node changes

//NOTE: The notable-breaking-changes tagged regions are re-used in the
//Installation and Upgrade Guide
//tag::notable-breaking-changes[]
// end::notable-breaking-changes[]

.The `node.max_local_storage_nodes` setting has been removed.
[%collapsible]
====
*Details* +
The `node.max_local_storage_nodes` setting was deprecated in 7.x and
has been removed in 8.0. Nodes should be run on separate data paths
to ensure that each node is consistently assigned to the same data path.

*Impact* +
Discontinue use of the `node.max_local_storage_nodes` setting. Specifying this
setting in `elasticsearch.yml` will result in an error on startup.
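
For example, if your `elasticsearch.yml` contains a line like the following,
delete it before upgrading. The value shown here is only illustrative:

[source,yaml]
--------------------------------------------------
# Remove this line; the value is an example only.
node.max_local_storage_nodes: 2
--------------------------------------------------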
====

.The layout of the data folder has changed.
[%collapsible]
====
*Details* +
Each node's data is now stored directly in the data directory set by the
`path.data` setting, rather than in `${path.data}/nodes/0`, because the removal
of the `node.max_local_storage_nodes` setting means that nodes may no longer
share a data path.

*Impact* +
At startup, {es} will automatically migrate the data path to the new layout.
This automatic migration will not proceed if the data path contains data for
more than one node. You should move to a configuration in which each node has
its own data path before upgrading.

If you try to upgrade a configuration in which there is data for more than one
node in a data path then the automatic migration will fail and {es}
will refuse to start. To resolve this you will need to perform the migration
manually. The data for the extra nodes are stored in folders named
`${path.data}/nodes/1`, `${path.data}/nodes/2` and so on, and you should move
each of these folders to an appropriate location and then configure the
corresponding node to use this location for its data path. If your nodes each
have more than one data path in their `path.data` settings then you should move
all the corresponding subfolders in parallel. Each node uses the same subfolder
(e.g. `nodes/2`) across all its data paths.
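
The following is a minimal sketch of such a manual migration on Linux. The
paths shown are hypothetical; substitute the paths your nodes actually use.

[source,sh]
--------------------------------------------------
# Move the second node's data out of the shared data path
# (hypothetical paths; adjust to your layout).
mv /var/lib/elasticsearch/nodes/1 /var/lib/elasticsearch-node-1

# Then point that node at the new location in its elasticsearch.yml:
#   path.data: /var/lib/elasticsearch-node-1
--------------------------------------------------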
====

.Support for multiple data paths has been removed.
[%collapsible]
====
*Details* +
In earlier versions the `path.data` setting accepted a list of data paths, but
if you specified multiple paths then the behaviour was unintuitive and usually
did not give the desired outcomes. Support for multiple data paths is now
removed.

*Impact* +
Specify a single path in `path.data`. If needed, you can create a filesystem
which spans multiple disks with a hardware virtualisation layer such as RAID,
or a software virtualisation layer such as Logical Volume Manager (LVM) on
Linux or Storage Spaces on Windows. If you wish to use multiple data paths on a
single machine then you must run one node for each data path.
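
For example, a node that previously listed several paths must move to a single
data path. The paths below are illustrative:

[source,yaml]
--------------------------------------------------
# Formerly a list of paths (no longer supported in 8.0):
#path.data: ["/mnt/data-1", "/mnt/data-2"]

# Each node must now use a single data path:
path.data: /mnt/data-1
--------------------------------------------------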

If you currently use multiple data paths in a
{ref}/high-availability-cluster-design.html[highly available cluster] then you
can migrate to a setup that uses a single path for each node without downtime
using a process similar to a
{ref}/restart-cluster.html#restart-cluster-rolling[rolling restart]: shut each
node down in turn and replace it with one or more nodes each configured to use
a single data path. In more detail, follow this process for each node that
currently has multiple data paths. In principle you can perform this migration
during a rolling upgrade to 8.0, but we recommend migrating to a
single-data-path setup before starting to upgrade.

1. Take a snapshot to protect your data in case of disaster.

2. Optionally, migrate the data away from the target node by using an
{ref}/modules-cluster.html#cluster-shard-allocation-filtering[allocation filter]:
+
[source,console]
--------------------------------------------------
PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.exclude._name": "target-node-name"
  }
}
--------------------------------------------------
+
You can use the {ref}/cat-allocation.html[cat allocation API] to track progress
of this data migration. If some shards do not migrate then the
{ref}/cluster-allocation-explain.html[cluster allocation explain API] will help
you to determine why.

3. Follow the steps in the
{ref}/restart-cluster.html#restart-cluster-rolling[rolling restart process]
up to and including shutting the target node down.

4. Ensure your cluster health is `yellow` or `green`, so that there is a copy
of every shard assigned to at least one of the other nodes in your cluster. A
health check example follows this list.

5. If applicable, remove the allocation filter applied in the earlier step.
+
[source,console]
--------------------------------------------------
PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.exclude._name": null
  }
}
--------------------------------------------------

6. Discard the data held by the stopped node by deleting the contents of its
data paths.

7. Reconfigure your storage. For instance, combine your disks into a single
filesystem using LVM or Storage Spaces. Ensure that your reconfigured storage
has sufficient space for the data that it will hold.

8. Reconfigure your node by adjusting the `path.data` setting in its
`elasticsearch.yml` file. If needed, install more nodes each with their own
`path.data` setting pointing at a separate data path.

9. Start the new nodes and follow the rest of the
{ref}/restart-cluster.html#restart-cluster-rolling[rolling restart process] for
them.

10. Ensure your cluster health is `green`, so that every shard has been
assigned.
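
Steps 4 and 10 refer to the cluster health status, which you can check with the
{ref}/cluster-health.html[cluster health API], for example:

[source,console]
--------------------------------------------------
GET _cluster/health
--------------------------------------------------

The `status` field in the response reports `green`, `yellow`, or `red`.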

You can alternatively add some number of single-data-path nodes to your
cluster, migrate all your data over to these new nodes using
{ref}/modules-cluster.html#cluster-shard-allocation-filtering[allocation filters],
and then remove the old nodes from the cluster. This approach will temporarily
double the size of your cluster, so it will only work if you have the capacity
to expand your cluster like this.

If you currently use multiple data paths but your cluster is not highly
available then you can migrate to a non-deprecated configuration by taking
a snapshot, creating a new cluster with the desired configuration and restoring
the snapshot into it.
====

.Closed indices created in {es} 6.x and earlier versions are not supported.
[%collapsible]
====
*Details* +
In earlier versions a node would start up even if it had data from indices
created in a version before the previous major version, as long as those
indices were closed. {es} now ensures that it is compatible with every index,
open or closed, at startup time.

*Impact* +
Reindex closed indices created in {es} 6.x or before with {es} 7.x if they need
to be carried forward to {es} 8.x.
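
For example, on a 7.x cluster and using hypothetical index names, the reindex
could look like this: open the closed index, then reindex it into a new index,
which the 7.x cluster creates in a format that 8.x can read.

[source,console]
--------------------------------------------------
POST /old-index-from-6x/_open

POST _reindex
{
  "source": { "index": "old-index-from-6x" },
  "dest":   { "index": "new-index-for-8x" }
}
--------------------------------------------------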
====