[[node-tool]]
== elasticsearch-node

The `elasticsearch-node` command enables you to perform certain unsafe
operations on a node that are only possible while it is shut down. This command
allows you to adjust the <<modules-node,role>> of a node, unsafely edit cluster
settings, and may be able to recover some data after a disaster or start a node
even if it is incompatible with the data on disk.

[discrete]
=== Synopsis

[source,shell]
--------------------------------------------------
bin/elasticsearch-node repurpose|unsafe-bootstrap|detach-cluster|override-version|remove-settings|remove-customs
  [-E <KeyValuePair>]
  [-h, --help] ([-s, --silent] | [-v, --verbose])
--------------------------------------------------

[discrete]
=== Description

This tool has a number of modes:

* `elasticsearch-node repurpose` can be used to delete unwanted data from a
  node if it used to be a <<data-node,data node>> or a
  <<master-node,master-eligible node>> but has been repurposed not to have one
  or other of these roles.

* `elasticsearch-node remove-settings` can be used to remove persistent settings
  from the cluster state in cases where it contains incompatible settings that
  prevent the cluster from forming.

* `elasticsearch-node remove-customs` can be used to remove custom metadata
  from the cluster state in cases where it contains broken metadata that
  prevents the cluster state from being loaded.

* `elasticsearch-node unsafe-bootstrap` can be used to perform _unsafe cluster
  bootstrapping_. It forces one of the nodes to form a brand-new cluster on
  its own, using its local copy of the cluster metadata.

* `elasticsearch-node detach-cluster` enables you to move nodes from one
  cluster to another. This can be used to move nodes into a new cluster
  created with the `elasticsearch-node unsafe-bootstrap` command. If unsafe
  cluster bootstrapping was not possible, it also enables you to move nodes
  into a brand-new cluster.

* `elasticsearch-node override-version` enables you to start up a node
  even if the data in the data path was written by an incompatible version of
  {es}. This may sometimes allow you to downgrade to an earlier version of
  {es}.

[[node-tool-repurpose]]
[discrete]
==== Changing the role of a node

There may be situations where you want to repurpose a node without following
the <<change-node-role,proper repurposing processes>>. The `elasticsearch-node
repurpose` tool allows you to delete any excess on-disk data and start a node
after repurposing it.

The intended use is:

* Stop the node
* Update `elasticsearch.yml` by setting `node.roles` as desired
* Run `elasticsearch-node repurpose` on the node
* Start the node
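
For example, the whole procedure might look like the following sketch. The
systemd service name is an assumption about your environment; stop and start
{es} however you normally do.

[source,shell]
----
# Sketch of the repurposing workflow; the service name assumes a
# systemd-managed installation.
systemctl stop elasticsearch.service

# Edit elasticsearch.yml so that node.roles lists only the desired roles,
# for example:  node.roles: [ "master" ]

# Delete the excess on-disk data, then restart the node.
bin/elasticsearch-node repurpose
systemctl start elasticsearch.service
----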

If you run `elasticsearch-node repurpose` on a node without the `data` role and
with the `master` role then it will delete any remaining shard data on that
node, but it will leave the index and cluster metadata alone. If you run
`elasticsearch-node repurpose` on a node without the `data` and `master` roles
then it will delete any remaining shard data and index metadata, but it will
leave the cluster metadata alone.

[WARNING]
Running this command can lead to data loss for the indices mentioned if the
data contained is not available on other nodes in the cluster. Only run this
tool if you understand and accept the possible consequences, and only after
determining that the node cannot be repurposed cleanly.

The tool provides a summary of the data to be deleted and asks for confirmation
before making any changes. You can get detailed information about the affected
indices and shards by passing the verbose (`-v`) option.

[discrete]
==== Removing persistent cluster settings

There may be situations where a node contains persistent cluster
settings that prevent the cluster from forming. Since the cluster cannot form,
it is not possible to remove these settings using the
<<cluster-update-settings>> API.

The `elasticsearch-node remove-settings` tool allows you to forcefully remove
those persistent settings from the on-disk cluster state. The tool takes as
parameters a list of the settings to be removed, and also supports wildcard
patterns.

The intended use is:

* Stop the node
* Run `elasticsearch-node remove-settings name-of-setting-to-remove` on the node
* Repeat for all other master-eligible nodes
* Start the nodes
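
As a sketch, that procedure might look like the shell commands below; the
setting name is only a placeholder and the systemd service name is an
assumption about your environment.

[source,shell]
----
# On each master-eligible node in turn, with the node stopped; the setting
# name is a placeholder.
systemctl stop elasticsearch.service
bin/elasticsearch-node remove-settings name-of-setting-to-remove

# Once the setting has been removed on every master-eligible node, start
# the nodes again.
systemctl start elasticsearch.service
----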

[discrete]
==== Removing custom metadata from the cluster state

There may be situations where a node contains custom metadata, typically
provided by plugins, that prevents the node from starting up and loading
the cluster state from disk.

The `elasticsearch-node remove-customs` tool allows you to forcefully remove
the problematic custom metadata. The tool takes as parameters a list of the
custom metadata names to be removed, and also supports wildcard patterns.

The intended use is:

* Stop the node
* Run `elasticsearch-node remove-customs name-of-custom-to-remove` on the node
* Repeat for all other master-eligible nodes
* Start the nodes
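
For instance, a minimal sketch of that procedure, using a wildcard to match
all custom metadata entries from a hypothetical plugin:

[source,shell]
----
# With the node stopped, remove the matching custom metadata; the pattern
# below is purely illustrative.
bin/elasticsearch-node remove-customs 'my_plugin*'

# Repeat on the other master-eligible nodes, then start them all again.
----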

[discrete]
==== Recovering data after a disaster

Sometimes {es} nodes are temporarily stopped, perhaps because of the need to
perform some maintenance activity or perhaps because of a hardware failure.
After you resolve the temporary condition and restart the node,
it will rejoin the cluster and continue normally. Depending on your
configuration, your cluster may be able to remain completely available even
while one or more of its nodes are stopped.

Sometimes it might not be possible to restart a node after it has stopped. For
example, the node's host may suffer from a hardware problem that cannot be
repaired. If the cluster is still available then you can start up a fresh node
on another host and {es} will bring this node into the cluster in place of the
failed node.

Each node stores its data in the data directories defined by the
<<path-settings,`path.data` setting>>. This means that in a disaster you can
also restart a node by moving its data directories to another host, presuming
that those data directories can be recovered from the faulty host.

{es} <<modules-discovery-quorums,requires a response from a majority of the
master-eligible nodes>> in order to elect a master and to update the cluster
state. This means that if you have three master-eligible nodes then the cluster
will remain available even if one of them has failed. However, if two of the
three master-eligible nodes fail then the cluster will be unavailable until at
least one of them is restarted.

In very rare circumstances it may not be possible to restart enough nodes to
restore the cluster's availability. If such a disaster occurs, you should
build a new cluster from a recent snapshot and re-import any data that was
ingested since that snapshot was taken.

However, if the disaster is serious enough then it may not be possible to
recover from a recent snapshot either. Unfortunately in this case there is no
way forward that does not risk data loss, but it may be possible to use the
`elasticsearch-node` tool to construct a new cluster that contains some of the
data from the failed cluster.

[[node-tool-override-version]]
[discrete]
==== Bypassing version checks

The data that {es} writes to disk is designed to be read by the current version
and a limited set of future versions. It cannot generally be read by older
versions, nor by versions that are more than one major version newer. The data
stored on disk includes the version of the node that wrote it, and {es} checks
that it is compatible with this version when starting up.

In rare circumstances it may be desirable to bypass this check and start up an
{es} node using data that was written by an incompatible version. This may not
work if the format of the stored data has changed, and it is a risky process
because it is possible for the format to change in ways that {es} may
misinterpret, silently leading to data loss.

To bypass this check, you can use the `elasticsearch-node override-version`
tool to overwrite the version number stored in the data path with the current
version, causing {es} to believe that it is compatible with the on-disk data.

[[node-tool-unsafe-bootstrap]]
[discrete]
===== Unsafe cluster bootstrapping

If there is at least one remaining master-eligible node, but it is not possible
to restart a majority of them, then the `elasticsearch-node unsafe-bootstrap`
command will unsafely override the cluster's <<modules-discovery-voting,voting
configuration>> as if performing another
<<modules-discovery-bootstrap-cluster,cluster bootstrapping process>>.
The target node can then form a new cluster on its own by using
the cluster metadata held locally on the target node.

[WARNING]
These steps can lead to arbitrary data loss since the target node may not hold
the latest cluster metadata, and this out-of-date metadata may make it
impossible to use some or all of the indices in the cluster.

Since unsafe bootstrapping forms a new cluster containing a single node, once
you have run it you must use the <<node-tool-detach-cluster,`elasticsearch-node
detach-cluster` tool>> to migrate any other surviving nodes from the failed
cluster into this new cluster.

When you run the `elasticsearch-node unsafe-bootstrap` tool it will analyse the
state of the node and ask for confirmation before taking any action. Before
asking for confirmation it reports the term and version of the cluster state on
the node on which it runs as follows:

[source,txt]
----
Current node cluster state (term, version) pair is (4, 12)
----

If you have a choice of nodes on which to run this tool then you should choose
one with a term that is as large as possible. If there is more than one
node with the same term, pick the one with the largest version. This
information identifies the node with the freshest cluster state, which
minimizes the quantity of data that might be lost. For example, if the first
node reports `(4, 12)` and a second node reports `(5, 3)`, then the second node
is preferred since its term is larger. However, if the second node reports
`(3, 17)` then the first node is preferred since its term is larger. If the
second node reports `(4, 10)` then it has the same term as the first node, but
has a smaller version, so the first node is preferred.
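
If several nodes survive, one convenient way to apply this rule is to sort the
reported pairs numerically by term and then by version, as in the sketch below;
the node names and pairs shown are hypothetical.

[source,shell]
----
# Rank hypothetical surviving nodes by (term, version); the last line of
# output is the node with the freshest cluster state.
printf '%s\n' 'node_1 4 12' 'node_2 5 3' 'node_3 4 10' | sort -k2,2n -k3,3n
----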

[WARNING]
Running this command can lead to arbitrary data loss. Only run this tool if you
understand and accept the possible consequences and have exhausted all other
possibilities for recovery of your cluster.

The sequence of operations for using this tool is as follows:

1. Make sure you have really lost access to at least half of the
master-eligible nodes in the cluster, and they cannot be repaired or recovered
by moving their data paths to healthy hardware.
2. Stop **all** remaining nodes.
3. Choose one of the remaining master-eligible nodes to become the new elected
master as described above.
4. On this node, run the `elasticsearch-node unsafe-bootstrap` command as shown
below. Verify that the tool reported `Master node was successfully
bootstrapped`.
5. Start this node and verify that it is elected as the master node.
6. Run the <<node-tool-detach-cluster,`elasticsearch-node detach-cluster`
tool>>, described below, on every other node in the cluster.
7. Start all other nodes and verify that each one joins the cluster.
8. Investigate the data in the cluster to discover if any was lost during this
process.
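
Condensed into commands, steps 2 to 7 might look like the sketch below. The
systemd service name is an assumption about your environment; start {es}
however you normally do.

[source,shell]
----
# With every surviving node stopped:

# On the chosen node (the one with the highest (term, version) pair),
# form the new one-node cluster and start it.
bin/elasticsearch-node unsafe-bootstrap
systemctl start elasticsearch.service

# On every other surviving node, detach it from the failed cluster and
# then start it so that it joins the new cluster.
bin/elasticsearch-node detach-cluster
systemctl start elasticsearch.service
----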

When you run the tool it will make sure that the node that is being used to
bootstrap the cluster is not running. It is important that all other
master-eligible nodes are also stopped while this tool is running, but the tool
does not check this.

The message `Master node was successfully bootstrapped` does not mean that
there has been no data loss; it just means that the tool was able to complete
its job.

[[node-tool-detach-cluster]]
[discrete]
===== Detaching nodes from their cluster

It is unsafe for nodes to move between clusters, because different clusters
have completely different cluster metadata. There is no way to safely merge the
metadata from two clusters together.

To protect against inadvertently joining the wrong cluster, each cluster
creates a unique identifier, known as the _cluster UUID_, when it first starts
up. Every node records the UUID of its cluster and refuses to join a
cluster with a different UUID.

However, if a node's cluster has permanently failed then it may be desirable to
try and move it into a new cluster. The `elasticsearch-node detach-cluster`
command lets you detach a node from its cluster by resetting its cluster UUID.
It can then join another cluster with a different UUID.

For example, after unsafe cluster bootstrapping you will need to detach all the
other surviving nodes from their old cluster so they can join the new,
unsafely-bootstrapped cluster.

Unsafe cluster bootstrapping is only possible if there is at least one
surviving master-eligible node. If there are no remaining master-eligible nodes
then the cluster metadata is completely lost. However, the individual data
nodes also contain a copy of the index metadata corresponding with their
shards. This sometimes allows a new cluster to import these shards as
<<modules-gateway-dangling-indices,dangling indices>>. You can sometimes
recover some indices after the loss of all master-eligible nodes in a cluster
by creating a new cluster and then using the `elasticsearch-node
detach-cluster` command to move any surviving nodes into this new cluster.

There is a risk of data loss when importing a dangling index because data nodes
may not have the most recent copy of the index metadata and do not have any
information about <<docs-replication,which shard copies are in-sync>>. This
means that a stale shard copy may be selected to be the primary, and some of
the shards may be incompatible with the imported mapping.
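
If you do go down this route, the dangling indices can be inspected and then
imported once the new cluster is up. A minimal sketch, assuming a version of
{es} that exposes the dangling indices API and using a hypothetical index UUID:

[source,shell]
----
# List the dangling indices that the new cluster can see.
curl -X GET "localhost:9200/_dangling?pretty"

# Import one of them by its UUID; the UUID below is hypothetical, and the
# accept_data_loss flag acknowledges that the import may lose data.
curl -X POST "localhost:9200/_dangling/zmM4e0JtBkeUjiHD-MihPQ?accept_data_loss=true"
----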

[WARNING]
Execution of this command can lead to arbitrary data loss. Only run this tool
if you understand and accept the possible consequences and have exhausted all
other possibilities for recovery of your cluster.

The sequence of operations for using this tool is as follows:

1. Make sure you have really lost access to every one of the master-eligible
nodes in the cluster, and they cannot be repaired or recovered by moving their
data paths to healthy hardware.
2. Start a new cluster and verify that it is healthy. This cluster may comprise
one or more brand-new master-eligible nodes, or may be an unsafely-bootstrapped
cluster formed as described above.
3. Stop **all** remaining data nodes.
4. On each data node, run the `elasticsearch-node detach-cluster` tool as shown
below. Verify that the tool reported `Node was successfully detached from the
cluster`.
5. If necessary, configure each data node to
<<modules-discovery-hosts-providers,discover the new cluster>>.
6. Start each data node and verify that it has joined the new cluster.
7. Wait for all recoveries to have completed, and investigate the data in the
cluster to discover if any was lost during this process.
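
On each data node, steps 3 to 6 might look like the sketch below; the discovery
setting shown is one way of pointing the node at the new cluster, and the
address and service name are placeholders for your environment.

[source,shell]
----
# With the data node stopped, detach it from the failed cluster.
bin/elasticsearch-node detach-cluster

# If necessary, point the node at the new cluster, for example by setting
# discovery.seed_hosts in elasticsearch.yml (the address is a placeholder):
#   discovery.seed_hosts: ["10.0.0.1"]

# Start the node and check that it joins the new cluster.
systemctl start elasticsearch.service
----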

The message `Node was successfully detached from the cluster` does not mean
that there has been no data loss; it just means that the tool was able to
complete its job.

[discrete]
=== Parameters

`repurpose`:: Delete excess data when a node's roles are changed.

`unsafe-bootstrap`:: Specifies to unsafely bootstrap this node as a new
one-node cluster.

`detach-cluster`:: Specifies to unsafely detach this node from its cluster so
it can join a different cluster.

`override-version`:: Overwrites the version number stored in the data path so
that a node can start despite being incompatible with the on-disk data.

`remove-settings`:: Forcefully removes the provided persistent cluster settings
from the on-disk cluster state.

`remove-customs`:: Forcefully removes the provided custom metadata from the
on-disk cluster state.

`-E <KeyValuePair>`:: Configures a setting.

`-h, --help`:: Returns all of the command parameters.

`-s, --silent`:: Shows minimal output.

`-v, --verbose`:: Shows verbose output.
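
The `-E` option can be used to pass settings to the tool itself, for example to
point it at a non-default data directory; the path below is purely
illustrative.

[source,shell]
----
# Hypothetical example: run the tool against a non-default data directory.
bin/elasticsearch-node repurpose -E path.data=/mnt/elasticsearch-data
----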

[discrete]
=== Examples

[discrete]
==== Repurposing a node as a dedicated master node

In this example, a former data node is repurposed as a dedicated master node.
First update the node's settings to `node.roles: [ "master" ]` in its
`elasticsearch.yml` config file. Then run the `elasticsearch-node repurpose`
command to find and remove excess shard data:

[source,txt]
----
node$ ./bin/elasticsearch-node repurpose
WARNING: Elasticsearch MUST be stopped before running this tool.
Found 2 shards in 2 indices to clean up
Use -v to see list of paths and indices affected
Node is being re-purposed as master and no-data. Clean-up of shard data will be performed.
Do you want to proceed?
Confirm [y/N] y
Node successfully repurposed to master and no-data.
----

[discrete]
==== Repurposing a node as a coordinating-only node

In this example, a node that previously held data is repurposed as a
coordinating-only node. First update the node's settings to `node.roles: []` in
its `elasticsearch.yml` config file. Then run the `elasticsearch-node repurpose`
command to find and remove excess shard data and index metadata:

[source,txt]
----
node$ ./bin/elasticsearch-node repurpose
WARNING: Elasticsearch MUST be stopped before running this tool.
Found 2 indices (2 shards and 2 index meta data) to clean up
Use -v to see list of paths and indices affected
Node is being re-purposed as no-master and no-data. Clean-up of index data will be performed.
Do you want to proceed?
Confirm [y/N] y
Node successfully repurposed to no-master and no-data.
----

[discrete]
==== Removing persistent cluster settings

If your nodes contain persistent cluster settings that prevent the cluster
from forming, and that therefore cannot be removed using the
<<cluster-update-settings>> API, you can run the following commands to remove
one or more cluster settings:

[source,txt]
----
node$ ./bin/elasticsearch-node remove-settings xpack.monitoring.exporters.my_exporter.host
WARNING: Elasticsearch MUST be stopped before running this tool.
The following settings will be removed:
xpack.monitoring.exporters.my_exporter.host: "10.1.2.3"
You should only run this tool if you have incompatible settings in the
cluster state that prevent the cluster from forming.
This tool can cause data loss and its use should be your last resort.
Do you want to proceed?
Confirm [y/N] y
Settings were successfully removed from the cluster state
----

You can also use wildcards to remove multiple settings, for example:

[source,txt]
----
node$ ./bin/elasticsearch-node remove-settings xpack.monitoring.*
----

[discrete]
==== Removing custom metadata from the cluster state

If the on-disk cluster state contains custom metadata that prevents the node
from starting up and loading the cluster state, you can run the following
command to remove this custom metadata:

[source,txt]
----
node$ ./bin/elasticsearch-node remove-customs snapshot_lifecycle
WARNING: Elasticsearch MUST be stopped before running this tool.
The following customs will be removed:
snapshot_lifecycle
You should only run this tool if you have broken custom metadata in the
cluster state that prevents the cluster state from being loaded.
This tool can cause data loss and its use should be your last resort.
Do you want to proceed?
Confirm [y/N] y
Customs were successfully removed from the cluster state
----

[discrete]
==== Unsafe cluster bootstrapping

Suppose your cluster had five master-eligible nodes and you have permanently
lost three of them, leaving two nodes remaining.

* Run the tool on the first remaining node, but answer `n` at the confirmation
step.

[source,txt]
----
node_1$ ./bin/elasticsearch-node unsafe-bootstrap
WARNING: Elasticsearch MUST be stopped before running this tool.
Current node cluster state (term, version) pair is (4, 12)
You should only run this tool if you have permanently lost half or more
of the master-eligible nodes in this cluster, and you cannot restore the
cluster from a snapshot. This tool can cause arbitrary data loss and its
use should be your last resort. If you have multiple surviving master
eligible nodes, you should run this tool on the node with the highest
cluster state (term, version) pair.
Do you want to proceed?
Confirm [y/N] n
----

* Run the tool on the second remaining node, and again answer `n` at the
confirmation step.

[source,txt]
----
node_2$ ./bin/elasticsearch-node unsafe-bootstrap
WARNING: Elasticsearch MUST be stopped before running this tool.
Current node cluster state (term, version) pair is (5, 3)
You should only run this tool if you have permanently lost half or more
of the master-eligible nodes in this cluster, and you cannot restore the
cluster from a snapshot. This tool can cause arbitrary data loss and its
use should be your last resort. If you have multiple surviving master
eligible nodes, you should run this tool on the node with the highest
cluster state (term, version) pair.
Do you want to proceed?
Confirm [y/N] n
----

* Since the second node has a greater term it has a fresher cluster state, so
it is better to unsafely bootstrap the cluster using this node:

[source,txt]
----
node_2$ ./bin/elasticsearch-node unsafe-bootstrap
WARNING: Elasticsearch MUST be stopped before running this tool.
Current node cluster state (term, version) pair is (5, 3)
You should only run this tool if you have permanently lost half or more
of the master-eligible nodes in this cluster, and you cannot restore the
cluster from a snapshot. This tool can cause arbitrary data loss and its
use should be your last resort. If you have multiple surviving master
eligible nodes, you should run this tool on the node with the highest
cluster state (term, version) pair.
Do you want to proceed?
Confirm [y/N] y
Master node was successfully bootstrapped
----

[discrete]
==== Detaching nodes from their cluster

After unsafely bootstrapping a new cluster, run the `elasticsearch-node
detach-cluster` command to detach all remaining nodes from the failed cluster
so they can join the new cluster:

[source,txt]
----
node_3$ ./bin/elasticsearch-node detach-cluster
WARNING: Elasticsearch MUST be stopped before running this tool.
You should only run this tool if you have permanently lost all of the
master-eligible nodes in this cluster and you cannot restore the cluster
from a snapshot, or you have already unsafely bootstrapped a new cluster
by running `elasticsearch-node unsafe-bootstrap` on a master-eligible
node that belonged to the same cluster as this node. This tool can cause
arbitrary data loss and its use should be your last resort.
Do you want to proceed?
Confirm [y/N] y
Node was successfully detached from the cluster
----

[discrete]
==== Bypassing version checks

Run the `elasticsearch-node override-version` command to overwrite the version
stored in the data path so that a node can start despite being incompatible
with the data stored in the data path:

[source,txt]
----
node$ ./bin/elasticsearch-node override-version
WARNING: Elasticsearch MUST be stopped before running this tool.
This data path was last written by Elasticsearch version [x.x.x] and may no
longer be compatible with Elasticsearch version [y.y.y]. This tool will bypass
this compatibility check, allowing a version [y.y.y] node to start on this data
path, but a version [y.y.y] node may not be able to read this data or may read
it incorrectly leading to data loss.
You should not use this tool. Instead, continue to use a version [x.x.x] node
on this data path. If necessary, you can use reindex-from-remote to copy the
data from here into an older cluster.
Do you want to proceed?
Confirm [y/N] y
Successfully overwrote this node's metadata to bypass its version compatibility checks.
----