[[allocation-awareness]]
=== Shard Allocation Awareness

When running nodes on multiple VMs on the same physical server, on multiple
racks, or across multiple zones or domains, it is more likely that two nodes on
the same physical server, in the same rack, or in the same zone or domain will
crash at the same time than that two unrelated nodes will crash
simultaneously.

If Elasticsearch is _aware_ of the physical configuration of your hardware, it
can ensure that the primary shard and its replica shards are spread across
different physical servers, racks, or zones, to minimise the risk of losing
all shard copies at the same time.

The shard allocation awareness settings allow you to tell Elasticsearch about
your hardware configuration.
As an example, let's assume we have several racks. When we start a node, we
can tell it which rack it is in by assigning it an arbitrary metadata
attribute called `rack_id` -- we could use any attribute name. For example:

[source,sh]
----------------------
./bin/elasticsearch -Enode.attr.rack_id=rack_one <1>
----------------------
<1> This setting could also be specified in the `elasticsearch.yml` config file.
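
For reference, the equivalent `elasticsearch.yml` entry would look like this:

[source,yaml]
--------------------------------
node.attr.rack_id: rack_one
--------------------------------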
Now we need to set up _shard allocation awareness_ by telling Elasticsearch
which attributes to use. This can be configured in the `elasticsearch.yml`
file on *all* master-eligible nodes, or it can be set (and changed) with the
<<cluster-update-settings,cluster-update-settings>> API.

For our example, we'll set the value in the config file:

[source,yaml]
--------------------------------------------------------
cluster.routing.allocation.awareness.attributes: rack_id
--------------------------------------------------------
With this config in place, let's say we start two nodes with
`node.attr.rack_id` set to `rack_one`, and we create an index with 5 primary
shards and 1 replica of each primary. All primaries and replicas are
allocated across the two nodes.

Now, if we start two more nodes with `node.attr.rack_id` set to `rack_two`,
Elasticsearch will move shards across to the new nodes, ensuring (if possible)
that no two copies of the same shard will be in the same rack. However, if
`rack_two` fails, taking down both of its nodes, Elasticsearch will still
allocate the lost shard copies to nodes in `rack_one`.
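
To make the example concrete, here is a sketch of creating such an index
(the index name `my_index` and the `localhost:9200` address are illustrative):

[source,sh]
--------------------------------
curl -XPUT 'localhost:9200/my_index?pretty' -H 'Content-Type: application/json' -d'
{
  "settings": {
    "number_of_shards": 5,
    "number_of_replicas": 1
  }
}'
--------------------------------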
.Prefer local shards
*********************************************
When executing search or GET requests with shard awareness enabled,
Elasticsearch will prefer using local shards -- shards in the same awareness
group -- to execute the request. This is usually faster than crossing rack
or zone boundaries.
*********************************************
Multiple awareness attributes can be specified, in which case each attribute
is considered separately when deciding where to allocate the shards.

[source,yaml]
-------------------------------------------------------------
cluster.routing.allocation.awareness.attributes: rack_id,zone
-------------------------------------------------------------

NOTE: When using awareness attributes, shards will not be allocated to nodes
that don't have values set for those attributes.
NOTE: The number of copies of a shard that can be allocated to a group of
nodes sharing the same awareness attribute value is determined by the number
of distinct attribute values. When the number of nodes in the groups is
unbalanced and there are many replicas, replica shards may be left unassigned.
[float]
[[forced-awareness]]
=== Forced Awareness

Imagine that you have two zones and enough hardware across the two zones to
host all of your primary and replica shards. But perhaps the hardware in a
single zone, while sufficient to host half the shards, would be unable to host
*ALL* the shards.

With ordinary awareness, if one zone lost contact with the other zone,
Elasticsearch would assign all of the missing replica shards to a single zone.
But in this example, this sudden extra load would cause the hardware in the
remaining zone to be overloaded.

Forced awareness solves this problem by *NEVER* allowing copies of the same
shard to be allocated to the same zone.
For example, let's say we have an awareness attribute called `zone`, and we
know we are going to have two zones, `zone1` and `zone2`. Here is how we can
force awareness on a node:

[source,yaml]
-------------------------------------------------------------------
cluster.routing.allocation.awareness.force.zone.values: zone1,zone2 <1>
cluster.routing.allocation.awareness.attributes: zone
-------------------------------------------------------------------
<1> We must list all possible values that the `zone` attribute can have.
Now, if we start 2 nodes with `node.attr.zone` set to `zone1` and create an
index with 5 shards and 1 replica, the index will be created, but only the 5
primary shards will be allocated, with no replicas. Only when we start more
nodes with `node.attr.zone` set to `zone2` will the replicas be allocated.
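
Starting the nodes for this example might look like the following (the number
of nodes per zone is illustrative):

[source,sh]
----------------------
./bin/elasticsearch -Enode.attr.zone=zone1
./bin/elasticsearch -Enode.attr.zone=zone2
----------------------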
The `cluster.routing.allocation.awareness.*` settings can all be updated
dynamically on a live cluster with the
<<cluster-update-settings,cluster-update-settings>> API.
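
For instance, a sketch of setting the awareness attribute dynamically,
assuming a cluster reachable on `localhost:9200`:

[source,sh]
-------------------------------------------------------------------
curl -XPUT 'localhost:9200/_cluster/settings?pretty' -H 'Content-Type: application/json' -d'
{
  "persistent": {
    "cluster.routing.allocation.awareness.attributes": "rack_id"
  }
}'
-------------------------------------------------------------------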