
[[cluster-reroute]]
== Cluster Reroute

The reroute command allows you to explicitly execute a cluster reroute
allocation command including specific commands. For example, a shard can
be moved from one node to another explicitly, an allocation can be
cancelled, or an unassigned shard can be explicitly allocated on a
specific node.

Here is a short example of a simple reroute API call:
[source,js]
--------------------------------------------------
curl -XPOST 'localhost:9200/_cluster/reroute' -d '{
    "commands" : [
        {
            "move" : {
                "index" : "test", "shard" : 0,
                "from_node" : "node1", "to_node" : "node2"
            }
        },
        {
            "allocate_replica" : {
                "index" : "test", "shard" : 1, "node" : "node3"
            }
        }
    ]
}'
--------------------------------------------------
An important aspect to remember is that once an allocation occurs, the
cluster will aim at rebalancing its state back to an even state. For
example, if the allocation includes moving a shard from `node1` to
`node2`, in an `even` state, then another shard will be moved from
`node2` to `node1` to even things out.

The cluster can be set to disable allocations, which means that only the
explicit allocations will be performed. Only once all commands have been
applied will the cluster aim to rebalance its state.
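
For instance, allocation can be disabled cluster-wide before issuing the
explicit commands. A minimal sketch, assuming the standard
`cluster.routing.allocation.enable` cluster setting (which belongs to the
cluster settings API, not to the reroute API itself):

[source,js]
--------------------------------------------------
curl -XPUT 'localhost:9200/_cluster/settings' -d '{
    "transient" : {
        "cluster.routing.allocation.enable" : "none"
    }
}'
--------------------------------------------------
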
Another option is to run the commands in `dry_run` (as a URI flag, or in
the request body). This will cause the commands to apply to the current
cluster state, and return the resulting cluster state after the commands
(and rebalancing) have been applied.

If the `explain` parameter is specified, a detailed explanation of why the
commands could or could not be executed is returned.
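
As a sketch, both parameters can be combined to try out the earlier move
command without changing the actual cluster state (the index and node
names are the same illustrative ones used above):

[source,js]
--------------------------------------------------
curl -XPOST 'localhost:9200/_cluster/reroute?dry_run&explain' -d '{
    "commands" : [
        {
            "move" : {
                "index" : "test", "shard" : 0,
                "from_node" : "node1", "to_node" : "node2"
            }
        }
    ]
}'
--------------------------------------------------
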
The commands supported are:
`move`::
    Move a started shard from one node to another node. Accepts
    `index` and `shard` for index name and shard number, `from_node` for the
    node to move the shard from, and `to_node` for the node to move the
    shard to.

`cancel`::
    Cancel allocation of a shard (or recovery). Accepts `index`
    and `shard` for index name and shard number, and `node` for the node to
    cancel the shard allocation on. It also accepts an `allow_primary` flag to
    explicitly specify that it is allowed to cancel allocation for a primary
    shard. This can be used to force resynchronization of existing replicas
    from the primary shard by cancelling them and allowing them to be
    reinitialized through the standard reallocation process. A sketch of a
    `cancel` call is shown after this list.

`allocate_replica`::
    Allocate an unassigned replica shard to a node. Accepts the
    `index` and `shard` for index name and shard number, and `node` to
    allocate the shard to. Takes <<modules-cluster,allocation deciders>> into account.
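
As referenced above, here is a minimal sketch of a `cancel` command. The
index and node names are illustrative only:

[source,js]
--------------------------------------------------
curl -XPOST 'localhost:9200/_cluster/reroute' -d '{
    "commands" : [
        {
            "cancel" : {
                "index" : "test", "shard" : 0, "node" : "node2"
            }
        }
    ]
}'
--------------------------------------------------
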
Two more commands are available that allow the allocation of a primary shard
to a node. These commands should however be used with extreme care, as primary
shard allocation is usually fully automatically handled by Elasticsearch.
Reasons why a primary shard cannot be automatically allocated include the following:

- A new index was created but there is no node which satisfies the allocation deciders.
- An up-to-date shard copy of the data cannot be found on the current data nodes in
  the cluster. To prevent data loss, the system does not automatically promote a stale
  shard copy to primary.

As a manual override, two commands to forcefully allocate primary shards
are available:
`allocate_stale_primary`::
    Allocate a primary shard to a node that holds a stale copy. Accepts the
    `index` and `shard` for index name and shard number, and `node` to
    allocate the shard to. Using this command may lead to data loss
    for the provided shard id. If a node which has the good copy of the
    data rejoins the cluster later on, that data will be overwritten with
    the data of the stale copy that was forcefully allocated with this
    command. To ensure that these implications are well understood,
    this command requires the special field `accept_data_loss` to be
    explicitly set to `true` for it to work. A sketch of such a call is
    shown after this list.

`allocate_empty_primary`::
    Allocate an empty primary shard to a node. Accepts the
    `index` and `shard` for index name and shard number, and `node` to
    allocate the shard to. Using this command leads to a complete loss
    of all data that was indexed into this shard, if it was previously
    started. If a node which has a copy of the
    data rejoins the cluster later on, that data will be deleted!
    To ensure that these implications are well understood,
    this command requires the special field `accept_data_loss` to be
    explicitly set to `true` for it to work.
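
As referenced above, here is a minimal sketch of forcefully allocating a
stale primary copy. The index and node names are illustrative, and the
command is rejected unless `accept_data_loss` is set to `true`:

[source,js]
--------------------------------------------------
curl -XPOST 'localhost:9200/_cluster/reroute' -d '{
    "commands" : [
        {
            "allocate_stale_primary" : {
                "index" : "test", "shard" : 0,
                "node" : "node3", "accept_data_loss" : true
            }
        }
    ]
}'
--------------------------------------------------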