[[node-tool]]
== elasticsearch-node

The `elasticsearch-node` command enables you to perform certain unsafe operations on a node that are only possible while it is shut down.
This command allows you to adjust the <<modules-node,role>> of a node and unsafely edit cluster settings. It may also be able to recover some data after a disaster, or start a node even if it is incompatible with the data on disk.

[discrete]
=== Synopsis

[source,shell]
--------------------------------------------------
bin/elasticsearch-node repurpose|unsafe-bootstrap|detach-cluster|override-version|remove-settings|remove-customs
  [-E <KeyValuePair>]
  [-h, --help] ([-s, --silent] | [-v, --verbose])
--------------------------------------------------

[discrete]
=== Description

This tool has a number of modes:

* `elasticsearch-node repurpose` can be used to delete unwanted data from a node if it used to be a <<data-node,data node>> or a <<master-node,master-eligible node>> but has been repurposed not to have one or other of these roles.

* `elasticsearch-node remove-settings` can be used to remove persistent settings from the cluster state if it contains incompatible settings that prevent the cluster from forming.

* `elasticsearch-node remove-customs` can be used to remove custom metadata from the cluster state if it contains broken metadata that prevents the cluster state from being loaded.

* `elasticsearch-node unsafe-bootstrap` can be used to perform _unsafe cluster bootstrapping_. It forces one of the nodes to form a brand-new cluster on its own, using its local copy of the cluster metadata.

* `elasticsearch-node detach-cluster` enables you to move nodes from one cluster to another. This can be used to move nodes into a new cluster created with the `elasticsearch-node unsafe-bootstrap` command. If unsafe cluster bootstrapping was not possible, it also enables you to move nodes into a brand-new cluster.

* `elasticsearch-node override-version` enables you to start up a node even if the data in the data path was written by an incompatible version of {es}.
This may sometimes allow you to downgrade to an earlier version of {es}.

[[node-tool-repurpose]]
[discrete]
==== Changing the role of a node

There may be situations where you want to repurpose a node without following the <<change-node-role,proper repurposing processes>>. The `elasticsearch-node repurpose` tool allows you to delete any excess on-disk data and start a node after repurposing it.

The intended use is:

* Stop the node
* Update `elasticsearch.yml` by setting `node.roles` as desired
* Run `elasticsearch-node repurpose` on the node
* Start the node

If you run `elasticsearch-node repurpose` on a node without the `data` role and with the `master` role then it will delete any remaining shard data on that node, but it will leave the index and cluster metadata alone. If you run `elasticsearch-node repurpose` on a node without the `data` and `master` roles then it will delete any remaining shard data and index metadata, but it will leave the cluster metadata alone.

[WARNING]
Running this command can lead to data loss for the indices mentioned if the data contained is not available on other nodes in the cluster. Only run this tool if you understand and accept the possible consequences, and only after determining that the node cannot be repurposed cleanly.

The tool provides a summary of the data to be deleted and asks for confirmation before making any changes. You can get detailed information about the affected indices and shards by passing the verbose (`-v`) option.

[discrete]
==== Removing persistent cluster settings

There may be situations where a node contains persistent cluster settings that prevent the cluster from forming. Since the cluster cannot form, it is not possible to remove these settings using the <<cluster-update-settings>> API.

The `elasticsearch-node remove-settings` tool allows you to forcefully remove those persistent settings from the on-disk cluster state.
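The tool's wildcard patterns behave much like shell globs, so their effect can be previewed with ordinary shell pattern matching. A minimal sketch of this analogy (the setting names below are hypothetical examples, not taken from any particular cluster):

```shell
# Shell-glob analogy for wildcard matching of setting names (names are hypothetical).
for s in xpack.monitoring.exporters.my_exporter.host cluster.routing.allocation.enable; do
  case "$s" in
    xpack.monitoring.*) echo "would remove: $s" ;;
    *)                  echo "would keep:   $s" ;;
  esac
done
# -> would remove: xpack.monitoring.exporters.my_exporter.host
# -> would keep:   cluster.routing.allocation.enable
```

A pattern such as `xpack.monitoring.*` therefore selects every setting under that prefix in a single invocation.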
The tool takes as parameters a list of settings to remove, and also supports wildcard patterns.

The intended use is:

* Stop the node
* Run `elasticsearch-node remove-settings name-of-setting-to-remove` on the node
* Repeat for all other master-eligible nodes
* Start the nodes

[discrete]
==== Removing custom metadata from the cluster state

There may be situations where a node contains custom metadata, typically provided by plugins, that prevents the node from starting up and loading the cluster state from disk.

The `elasticsearch-node remove-customs` tool allows you to forcefully remove the problematic custom metadata. The tool takes as parameters a list of custom metadata names to remove, and also supports wildcard patterns.

The intended use is:

* Stop the node
* Run `elasticsearch-node remove-customs name-of-custom-to-remove` on the node
* Repeat for all other master-eligible nodes
* Start the nodes

[discrete]
==== Recovering data after a disaster

Sometimes {es} nodes are temporarily stopped, perhaps because of the need to perform some maintenance activity or perhaps because of a hardware failure. After you resolve the temporary condition and restart the node, it will rejoin the cluster and continue normally. Depending on your configuration, your cluster may be able to remain completely available even while one or more of its nodes are stopped.

Sometimes it might not be possible to restart a node after it has stopped. For example, the node's host may suffer from a hardware problem that cannot be repaired. If the cluster is still available then you can start up a fresh node on another host and {es} will bring this node into the cluster in place of the failed node.

Each node stores its data in the data directories defined by the <<path-settings,`path.data` setting>>.
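For instance, everything the tool operates on lives under whatever directory `path.data` names in `elasticsearch.yml`; the path below is only an illustrative assumption, so substitute your own installation's value:

```yaml
# elasticsearch.yml -- the directory holding this node's on-disk state.
# The path shown is an example, not a recommendation.
path.data: /var/lib/elasticsearch
```

This directory is what you would move to a healthy host in the recovery scenario described here.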
This means that in a disaster you can also restart a node by moving its data directories to another host, presuming that those data directories can be recovered from the faulty host.

{es} <<modules-discovery-quorums,requires a response from a majority of the master-eligible nodes>> in order to elect a master and to update the cluster state. This means that if you have three master-eligible nodes then the cluster will remain available even if one of them has failed. However if two of the three master-eligible nodes fail then the cluster will be unavailable until at least one of them is restarted.

In very rare circumstances it may not be possible to restart enough nodes to restore the cluster's availability. If such a disaster occurs, you should build a new cluster from a recent snapshot and re-import any data that was ingested since that snapshot was taken.

However, if the disaster is serious enough then it may not be possible to recover from a recent snapshot either. Unfortunately in this case there is no way forward that does not risk data loss, but it may be possible to use the `elasticsearch-node` tool to construct a new cluster that contains some of the data from the failed cluster.

[[node-tool-override-version]]
[discrete]
==== Bypassing version checks

The data that {es} writes to disk is designed to be read by the current version and a limited set of future versions. It cannot generally be read by older versions, nor by versions that are more than one major version newer. The data stored on disk includes the version of the node that wrote it, and {es} checks that it is compatible with this version when starting up.

In rare circumstances it may be desirable to bypass this check and start up an {es} node using data that was written by an incompatible version.
This may not work if the format of the stored data has changed, and it is a risky process because it is possible for the format to change in ways that {es} may misinterpret, silently leading to data loss.

To bypass this check, you can use the `elasticsearch-node override-version` tool to overwrite the version number stored in the data path with the current version, causing {es} to believe that it is compatible with the on-disk data.

[[node-tool-unsafe-bootstrap]]
[discrete]
===== Unsafe cluster bootstrapping

If there is at least one remaining master-eligible node, but it is not possible to restart a majority of them, then the `elasticsearch-node unsafe-bootstrap` command will unsafely override the cluster's <<modules-discovery-voting,voting configuration>> as if performing another <<modules-discovery-bootstrap-cluster,cluster bootstrapping process>>. The target node can then form a new cluster on its own by using the cluster metadata held locally on the target node.

[WARNING]
These steps can lead to arbitrary data loss since the target node may not hold the latest cluster metadata, and this out-of-date metadata may make it impossible to use some or all of the indices in the cluster.

Since unsafe bootstrapping forms a new cluster containing a single node, once you have run it you must use the <<node-tool-detach-cluster,`elasticsearch-node detach-cluster` tool>> to migrate any other surviving nodes from the failed cluster into this new cluster.

When you run the `elasticsearch-node unsafe-bootstrap` tool it will analyse the state of the node and ask for confirmation before taking any action. Before asking for confirmation it reports the term and version of the cluster state on the node on which it runs as follows:

[source,txt]
----
Current node cluster state (term, version) pair is (4, 12)
----

If you have a choice of nodes on which to run this tool then you should choose one with a term that is as large as possible.
If there is more than one node with the same term, pick the one with the largest version. This information identifies the node with the freshest cluster state, which minimizes the quantity of data that might be lost. For example, if the first node reports `(4, 12)` and a second node reports `(5, 3)`, then the second node is preferred since its term is larger. However if the second node reports `(3, 17)` then the first node is preferred since its term is larger. If the second node reports `(4, 10)` then it has the same term as the first node, but has a smaller version, so the first node is preferred.

[WARNING]
Running this command can lead to arbitrary data loss. Only run this tool if you understand and accept the possible consequences and have exhausted all other possibilities for recovery of your cluster.

The sequence of operations for using this tool is as follows:

1. Make sure you have really lost access to at least half of the master-eligible nodes in the cluster, and they cannot be repaired or recovered by moving their data paths to healthy hardware.
2. Stop **all** remaining nodes.
3. Choose one of the remaining master-eligible nodes to become the new elected master as described above.
4. On this node, run the `elasticsearch-node unsafe-bootstrap` command as shown below. Verify that the tool reported `Master node was successfully bootstrapped`.
5. Start this node and verify that it is elected as the master node.
6. Run the <<node-tool-detach-cluster,`elasticsearch-node detach-cluster` tool>>, described below, on every other node in the cluster.
7. Start all other nodes and verify that each one joins the cluster.
8. Investigate the data in the cluster to discover if any was lost during this process.

When you run the tool it will make sure that the node that is being used to bootstrap the cluster is not running.
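The node selection in step 3 is a lexicographic comparison of the reported `(term, version)` pairs: compare terms first, and use the version only to break ties. A small shell sketch of this selection, using the example pairs discussed above:

```shell
# Pick the freshest cluster state from the example (term, version) pairs above.
# Numeric sort by term, then by version; the last line is the preferred node.
printf '%s\n' '4 12' '5 3' '3 17' | sort -n -k1,1 -k2,2 | tail -n 1
# -> 5 3   (highest term wins; version only breaks ties between equal terms)
```

Note that `(5, 3)` beats `(4, 12)` even though its version is smaller, exactly as described in the text.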
It is important that all other master-eligible nodes are also stopped while this tool is running, but the tool does not check this.

The message `Master node was successfully bootstrapped` does not mean that there has been no data loss, it just means that the tool was able to complete its job.

[[node-tool-detach-cluster]]
[discrete]
===== Detaching nodes from their cluster

It is unsafe for nodes to move between clusters, because different clusters have completely different cluster metadata. There is no way to safely merge the metadata from two clusters together.

To protect against inadvertently joining the wrong cluster, each cluster creates a unique identifier, known as the _cluster UUID_, when it first starts up. Every node records the UUID of its cluster and refuses to join a cluster with a different UUID.

However, if a node's cluster has permanently failed then it may be desirable to try and move it into a new cluster. The `elasticsearch-node detach-cluster` command lets you detach a node from its cluster by resetting its cluster UUID. It can then join another cluster with a different UUID.

For example, after unsafe cluster bootstrapping you will need to detach all the other surviving nodes from their old cluster so they can join the new, unsafely-bootstrapped cluster.

Unsafe cluster bootstrapping is only possible if there is at least one surviving master-eligible node. If there are no remaining master-eligible nodes then the cluster metadata is completely lost. However, the individual data nodes also contain a copy of the index metadata corresponding with their shards. This sometimes allows a new cluster to import these shards as <<modules-gateway-dangling-indices,dangling indices>>.
You can sometimes recover some indices after the loss of all the master-eligible nodes in a cluster by creating a new cluster and then using the `elasticsearch-node detach-cluster` command to move any surviving nodes into this new cluster.

There is a risk of data loss when importing a dangling index because data nodes may not have the most recent copy of the index metadata and do not have any information about <<docs-replication,which shard copies are in-sync>>. This means that a stale shard copy may be selected to be the primary, and some of the shards may be incompatible with the imported mapping.

[WARNING]
Execution of this command can lead to arbitrary data loss. Only run this tool if you understand and accept the possible consequences and have exhausted all other possibilities for recovery of your cluster.

The sequence of operations for using this tool is as follows:

1. Make sure you have really lost access to every one of the master-eligible nodes in the cluster, and they cannot be repaired or recovered by moving their data paths to healthy hardware.
2. Start a new cluster and verify that it is healthy. This cluster may comprise one or more brand-new master-eligible nodes, or may be an unsafely-bootstrapped cluster formed as described above.
3. Stop **all** remaining data nodes.
4. On each data node, run the `elasticsearch-node detach-cluster` tool as shown below. Verify that the tool reported `Node was successfully detached from the cluster`.
5. If necessary, configure each data node to <<modules-discovery-hosts-providers,discover the new cluster>>.
6. Start each data node and verify that it has joined the new cluster.
7. Wait for all recoveries to have completed, and investigate the data in the cluster to discover if any was lost during this process.

The message `Node was successfully detached from the cluster` does not mean that there has been no data loss, it just means that the tool was able to complete its job.

[discrete]
[[node-tool-parameters]]
=== Parameters

`repurpose`:: Delete excess data when a node's roles are changed.

`unsafe-bootstrap`:: Specifies to unsafely bootstrap this node as a new one-node cluster.

`detach-cluster`:: Specifies to unsafely detach this node from its cluster so it can join a different cluster.

`override-version`:: Overwrites the version number stored in the data path so that a node can start despite being incompatible with the on-disk data.

`remove-settings`:: Forcefully removes the provided persistent cluster settings from the on-disk cluster state.

`remove-customs`:: Forcefully removes the provided custom metadata from the on-disk cluster state.

`-E <KeyValuePair>`:: Configures a setting.

`-h, --help`:: Returns all of the command parameters.

`-s, --silent`:: Shows minimal output.

`-v, --verbose`:: Shows verbose output.

[discrete]
=== Examples

[discrete]
==== Repurposing a node as a dedicated master node

In this example, a former data node is repurposed as a dedicated master node. First update the node's settings to `node.roles: [ "master" ]` in its `elasticsearch.yml` config file. Then run the `elasticsearch-node repurpose` command to find and remove excess shard data:

[source,txt]
----
node$ ./bin/elasticsearch-node repurpose

    WARNING: Elasticsearch MUST be stopped before running this tool.

Found 2 shards in 2 indices to clean up
Use -v to see list of paths and indices affected
Node is being re-purposed as master and no-data. Clean-up of shard data will be performed.
Do you want to proceed?
Confirm [y/N] y
Node successfully repurposed to master and no-data.
----

[discrete]
==== Repurposing a node as a coordinating-only node

In this example, a node that previously held data is repurposed as a coordinating-only node.
First update the node's settings to `node.roles: []` in its `elasticsearch.yml` config file. Then run the `elasticsearch-node repurpose` command to find and remove excess shard data and index metadata:

[source,txt]
----
node$ ./bin/elasticsearch-node repurpose

    WARNING: Elasticsearch MUST be stopped before running this tool.

Found 2 indices (2 shards and 2 index meta data) to clean up
Use -v to see list of paths and indices affected
Node is being re-purposed as no-master and no-data. Clean-up of index data will be performed.
Do you want to proceed?
Confirm [y/N] y
Node successfully repurposed to no-master and no-data.
----

[discrete]
==== Removing persistent cluster settings

If your nodes contain persistent cluster settings that prevent the cluster from forming, and that therefore cannot be removed using the <<cluster-update-settings>> API, you can run the following commands to remove one or more cluster settings:

[source,txt]
----
node$ ./bin/elasticsearch-node remove-settings xpack.monitoring.exporters.my_exporter.host

    WARNING: Elasticsearch MUST be stopped before running this tool.

The following settings will be removed:
xpack.monitoring.exporters.my_exporter.host: "10.1.2.3"

You should only run this tool if you have incompatible settings in the
cluster state that prevent the cluster from forming.
This tool can cause data loss and its use should be your last resort.

Do you want to proceed?

Confirm [y/N] y

Settings were successfully removed from the cluster state
----

You can also use wildcards to remove multiple settings, for example:

[source,txt]
----
node$ ./bin/elasticsearch-node remove-settings xpack.monitoring.*
----

[discrete]
==== Removing custom metadata from the cluster state

If the on-disk cluster state contains custom metadata that prevents the node from starting up and loading the cluster state, you can run the following commands to remove this custom metadata:

[source,txt]
----
node$ ./bin/elasticsearch-node remove-customs snapshot_lifecycle

    WARNING: Elasticsearch MUST be stopped before running this tool.

The following customs will be removed:
snapshot_lifecycle

You should only run this tool if you have broken custom metadata in the
cluster state that prevents the cluster state from being loaded.
This tool can cause data loss and its use should be your last resort.

Do you want to proceed?

Confirm [y/N] y

Customs were successfully removed from the cluster state
----

[discrete]
==== Unsafe cluster bootstrapping

Suppose your cluster had five master-eligible nodes and you have permanently lost three of them, leaving two nodes remaining.

* Run the tool on the first remaining node, but answer `n` at the confirmation step.

[source,txt]
----
node_1$ ./bin/elasticsearch-node unsafe-bootstrap

    WARNING: Elasticsearch MUST be stopped before running this tool.

Current node cluster state (term, version) pair is (4, 12)

You should only run this tool if you have permanently lost half or more
of the master-eligible nodes in this cluster, and you cannot restore the
cluster from a snapshot. This tool can cause arbitrary data loss and its
use should be your last resort. If you have multiple surviving master
eligible nodes, you should run this tool on the node with the highest
cluster state (term, version) pair.

Do you want to proceed?

Confirm [y/N] n
----

* Run the tool on the second remaining node, and again answer `n` at the confirmation step.

[source,txt]
----
node_2$ ./bin/elasticsearch-node unsafe-bootstrap

    WARNING: Elasticsearch MUST be stopped before running this tool.

Current node cluster state (term, version) pair is (5, 3)

You should only run this tool if you have permanently lost half or more
of the master-eligible nodes in this cluster, and you cannot restore the
cluster from a snapshot. This tool can cause arbitrary data loss and its
use should be your last resort. If you have multiple surviving master
eligible nodes, you should run this tool on the node with the highest
cluster state (term, version) pair.

Do you want to proceed?

Confirm [y/N] n
----

* Since the second node has a greater term it has a fresher cluster state, so it is better to unsafely bootstrap the cluster using this node:

[source,txt]
----
node_2$ ./bin/elasticsearch-node unsafe-bootstrap

    WARNING: Elasticsearch MUST be stopped before running this tool.

Current node cluster state (term, version) pair is (5, 3)

You should only run this tool if you have permanently lost half or more
of the master-eligible nodes in this cluster, and you cannot restore the
cluster from a snapshot. This tool can cause arbitrary data loss and its
use should be your last resort. If you have multiple surviving master
eligible nodes, you should run this tool on the node with the highest
cluster state (term, version) pair.

Do you want to proceed?

Confirm [y/N] y

Master node was successfully bootstrapped
----

[discrete]
==== Detaching nodes from their cluster

After unsafely bootstrapping a new cluster, run the `elasticsearch-node detach-cluster` command to detach all remaining nodes from the failed cluster so they can join the new cluster:

[source,txt]
----
node_3$ ./bin/elasticsearch-node detach-cluster

    WARNING: Elasticsearch MUST be stopped before running this tool.

You should only run this tool if you have permanently lost all of the
master-eligible nodes in this cluster and you cannot restore the cluster
from a snapshot, or you have already unsafely bootstrapped a new cluster
by running `elasticsearch-node unsafe-bootstrap` on a master-eligible
node that belonged to the same cluster as this node. This tool can cause
arbitrary data loss and its use should be your last resort.

Do you want to proceed?

Confirm [y/N] y

Node was successfully detached from the cluster
----

[discrete]
==== Bypassing version checks

Run the `elasticsearch-node override-version` command to overwrite the version stored in the data path so that a node can start despite being incompatible with the data stored in the data path:

[source,txt]
----
node$ ./bin/elasticsearch-node override-version

    WARNING: Elasticsearch MUST be stopped before running this tool.

This data path was last written by Elasticsearch version [x.x.x] and may no
longer be compatible with Elasticsearch version [y.y.y]. This tool will bypass
this compatibility check, allowing a version [y.y.y] node to start on this data
path, but a version [y.y.y] node may not be able to read this data or may read
it incorrectly leading to data loss.

You should not use this tool. Instead, continue to use a version [x.x.x] node
on this data path. If necessary, you can use reindex-from-remote to copy the
data from here into an older cluster.

Do you want to proceed?

Confirm [y/N] y

Successfully overwrote this node's metadata to bypass its version compatibility checks.
----
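The reindex-from-remote suggestion in this output refers to the `_reindex` API with a `source.remote` block, run against the older cluster so that it pulls documents from a node still serving this data. A hedged sketch of such a request; the host and index name are assumptions for illustration only:

```console
POST _reindex
{
  "source": {
    "remote": { "host": "http://node-with-this-data.example.com:9200" },
    "index": "my-index"
  },
  "dest": { "index": "my-index" }
}
```

The remote host must be reachable from the older cluster and listed in its reindex allowlist; consult the reindex documentation for your version before relying on this approach.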