
[[indices-shadow-replicas]]
== Shadow replica indices

experimental[]

If you would like to use a shared filesystem, you can use the shadow replicas
settings to choose where on disk the data for an index should be kept, as well
as how Elasticsearch should replay operations on all the replica shards of an
index.

In order to fully utilize the `index.data_path` and `index.shadow_replicas`
settings, you need to allow Elasticsearch to use the same data directory for
multiple instances by setting `node.add_id_to_custom_path` to `false` in
`elasticsearch.yml`:

[source,yaml]
--------------------------------------------------
node.add_id_to_custom_path: false
--------------------------------------------------

You will also need to indicate to the security manager where the custom indices
will be, so that the correct permissions can be applied. You can do this by
setting the `path.shared_data` setting in `elasticsearch.yml`:

[source,yaml]
--------------------------------------------------
path.shared_data: /opt/data
--------------------------------------------------

This means that Elasticsearch can read and write to files in any subdirectory
of the `path.shared_data` location.

You can then create an index with a custom data path, where each node will use
this path for the data:

[WARNING]
========================
Because shadow replicas do not index the document on replica shards, it's
possible for the replica's known mapping to be behind the index's known mapping
if the latest cluster state has not yet been processed on the node containing
the replica. Because of this, it is highly recommended to use pre-defined
mappings when using shadow replicas.
========================

[source,js]
--------------------------------------------------
curl -XPUT 'localhost:9200/my_index' -d '
{
    "index" : {
        "number_of_shards" : 1,
        "number_of_replicas" : 4,
        "data_path": "/opt/data/my_index",
        "shadow_replicas": true
    }
}'
--------------------------------------------------

[WARNING]
========================
In the above example, the "/opt/data/my_index" path is a shared filesystem that
must be available on every node in the Elasticsearch cluster. You must also
ensure that the Elasticsearch process has the correct permissions to read from
and write to the directory used in the `index.data_path` setting.
========================

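As a sketch of that permissions setup, assuming the Elasticsearch process runs
as a user named `elasticsearch` (both the user name and the path are
assumptions; adjust them for your environment), the shared directory could be
prepared like this:

[source,sh]
--------------------------------------------------
# Create the index's data directory on the shared mount
mkdir -p /opt/data/my_index
# Give the Elasticsearch user ownership so it can read and write
chown -R elasticsearch:elasticsearch /opt/data
--------------------------------------------------
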
The `data_path` does not have to contain the index name; in this case
"my_index" was used, but it could easily also have been "/opt/data/".

An index that has been created with the `index.shadow_replicas` setting set to
"true" will not replicate document operations to any of the replica shards;
instead, it will only continually refresh. Once segments are available on the
filesystem where the shadow replica resides (after an Elasticsearch "flush"), a
regular refresh (governed by the `index.refresh_interval`) can be used to make
the new data searchable.

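For example, a flush can be triggered by hand with the flush API; once it
completes, the new segments are on the shared filesystem and the next refresh
on the shadow replicas can pick them up:

[source,js]
--------------------------------------------------
curl -XPOST 'localhost:9200/my_index/_flush'
--------------------------------------------------
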
NOTE: Since documents are only indexed on the primary shard, realtime GET
requests could fail to return a document if executed on the replica shard;
therefore, GET API requests automatically have the `?preference=_primary` flag
set if there is no preference flag already set.

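If you want to make that routing explicit rather than rely on the automatic
behavior, the same flag can be passed by hand (the type name `doc` and the
document id `1` here are purely illustrative):

[source,js]
--------------------------------------------------
curl -XGET 'localhost:9200/my_index/doc/1?preference=_primary'
--------------------------------------------------
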
In order to ensure the data is being synchronized in a fast enough manner, you
may need to tune the flush threshold for the index to a desired number. A flush
is needed to fsync segment files to disk, so they will be visible to all other
replica nodes. Users should test what flush threshold levels they are
comfortable with, as increased flushing can impact indexing performance.

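As a sketch, the translog flush threshold can be lowered through the update
settings API so that flushes, and therefore segment visibility on the replicas,
happen more often; the `512mb` value here is an arbitrary example, not a
recommendation:

[source,js]
--------------------------------------------------
curl -XPUT 'localhost:9200/my_index/_settings' -d '
{
    "index.translog.flush_threshold_size": "512mb"
}'
--------------------------------------------------
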
The Elasticsearch cluster will still detect the loss of a primary shard, and
transform the replica into a primary in this situation. This transformation
will take slightly longer, since no `IndexWriter` is maintained for each shadow
replica.

Below is the list of settings that can be changed using the update
settings API:

`index.data_path` (string)::
    Path to use for the index's data. Note that by default Elasticsearch will
    append the node ordinal to the path to ensure multiple instances of
    Elasticsearch on the same machine do not share a data directory.

`index.shadow_replicas`::
    Boolean value indicating this index should use shadow replicas. Defaults to
    `false`.

`index.shared_filesystem`::
    Boolean value indicating this index uses a shared filesystem. Defaults to
    `true` if `index.shadow_replicas` is set to true, `false` otherwise.

`index.shared_filesystem.recover_on_any_node`::
    Boolean value indicating whether the primary shards for the index should be
    allowed to recover on any node in the cluster. If a node holding a copy of
    the shard is found, recovery prefers that node. Defaults to `false`.

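For instance, `index.shared_filesystem.recover_on_any_node` could be enabled on
an existing index through the update settings API:

[source,js]
--------------------------------------------------
curl -XPUT 'localhost:9200/my_index/_settings' -d '
{
    "index.shared_filesystem.recover_on_any_node": true
}'
--------------------------------------------------
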
=== Node level settings related to shadow replicas

These are non-dynamic settings that need to be configured in
`elasticsearch.yml`:

`node.add_id_to_custom_path`::
    Boolean setting indicating whether Elasticsearch should append the node's
    ordinal to the custom data path. For example, if this is enabled and a path
    of "/tmp/foo" is used, the first locally-running node will use "/tmp/foo/0",
    the second will use "/tmp/foo/1", the third "/tmp/foo/2", etc. Defaults to
    `true`.