
[[modules-tribe]]
== Tribe node

deprecated[5.4.0,The `tribe` node is deprecated in favour of <<modules-cross-cluster-search>> and will be removed in Elasticsearch 7.0.]
The _tribes_ feature allows a _tribe node_ to act as a federated client across
multiple clusters.

The tribe node works by retrieving the cluster state from all connected
clusters and merging them into a global cluster state. With this information
at hand, it is able to perform read and write operations against the nodes in
all clusters as if they were local. Note that a tribe node needs to be able
to connect to each single node in every configured cluster.

The `elasticsearch.yml` config file for a tribe node just needs to list the
clusters that should be joined, for instance:
[source,yaml]
--------------------------------
tribe:
    t1: <1>
        cluster.name: cluster_one
    t2: <1>
        cluster.name: cluster_two
--------------------------------
<1> `t1` and `t2` are arbitrary names representing the connection to each
cluster.

The example above configures connections to two clusters, named `t1` and `t2`
respectively. The tribe node will create a <<modules-node,node client>> to
connect to each cluster using <<unicast,unicast discovery>> by default. Any
other settings for the connection can be configured under `tribe.{name}`, just
like the `cluster.name` in the example.
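For instance, the unicast hosts used to discover one of the clusters can be
listed explicitly under the same `tribe.{name}` prefix (the host names below
are placeholders, not part of the original example):

[source,yaml]
--------------------------------
tribe:
    t1:
        cluster.name: cluster_one
        discovery.zen.ping.unicast.hosts: ["node1.example.com", "node2.example.com"]
    t2:
        cluster.name: cluster_two
--------------------------------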
The merged global cluster state means that almost all operations work in the
same way as a single cluster: distributed search, suggest, percolation,
indexing, etc.
However, there are a few exceptions:

* The merged view cannot handle indices with the same name in multiple
clusters. By default it will pick one of them; see the `tribe.on_conflict`
setting below.
* Master level read operations (eg <<cluster-state>>, <<cluster-health>>)
will automatically execute with a local flag set to true since there is
no master.
* Master level write operations (eg <<indices-create-index>>) are not
allowed. These should be performed on a single cluster.
The tribe node can be configured to block all write operations and all
metadata operations with:

[source,yaml]
--------------------------------
tribe:
    blocks:
        write:    true
        metadata: true
--------------------------------
The tribe node can also configure blocks on selected indices:

[source,yaml]
--------------------------------
tribe:
    blocks:
        write.indices:    hk*,ldn*
        metadata.indices: hk*,ldn*
--------------------------------
When there is a conflict and multiple clusters hold the same index, by default
the tribe node will pick one of them. This can be configured using the
`tribe.on_conflict` setting. It defaults to `any`, but can be set to `drop`
(drop indices that have a conflict), or `prefer_[tribeName]` to prefer the
index from a specific tribe.
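For example, using the tribe names from the earlier configuration, the
following would make the tribe node prefer the copy of any conflicting index
held by the `t1` cluster:

[source,yaml]
--------------------------------
tribe:
    on_conflict: prefer_t1
--------------------------------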
[float]
=== Tribe node settings

The tribe node starts a node client for each listed cluster. The following
configuration options are passed down from the tribe node to each node client:

* `node.name` (used to derive the `node.name` for each node client)
* `network.host`
* `network.bind_host`
* `network.publish_host`
* `transport.host`
* `transport.bind_host`
* `transport.publish_host`
* `path.home`
* `path.logs`
* `shield.*`
Almost any setting (except for `path.*`) may be configured at the node client
level itself, in which case it will override any passed through setting from
the tribe node. Settings you may want to set at the node client level
include:

* `network.host`
* `network.bind_host`
* `network.publish_host`
* `transport.host`
* `transport.bind_host`
* `transport.publish_host`
* `cluster.name`
* `discovery.zen.ping.unicast.hosts`
[source,yaml]
------------------------
network.host: 192.168.1.5 <1>

tribe:
    t1:
        cluster.name: cluster_one
    t2:
        cluster.name: cluster_two
        network.host: 10.1.2.3 <2>
------------------------
<1> The `network.host` setting is inherited by `t1`.
<2> The `t2` node client overrides the setting inherited from the tribe node.