[role="xpack"]
[[xpack-ccr]]
== {ccr-cap}

With {ccr}, you can replicate indices across clusters to:

* Continue handling search requests in the event of a datacenter outage
* Prevent search volume from impacting indexing throughput
* Reduce search latency by processing search requests in geo-proximity to the
user

{ccr-cap} uses an active-passive model. You index to a _leader_ index, and the
data is replicated to one or more read-only _follower_ indices. Before you can
add a follower index to a cluster, you must configure the _remote cluster_ that
contains the leader index.

When the leader index receives writes, the follower indices pull changes from
the leader index on the remote cluster. You can manually create follower
indices, or configure auto-follow patterns to automatically create follower
indices for new time series indices.
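
For example, a minimal uni-directional setup might look like the following
sketch. The cluster alias `leader-cluster`, the seed address, and the index
names are placeholders; substitute your own values. First register the remote
cluster that contains the leader index, then create a follower index that
replicates it:

[source,console]
----
PUT /_cluster/settings
{
  "persistent": {
    "cluster": {
      "remote": {
        "leader-cluster": {
          "seeds": ["127.0.0.1:9300"]
        }
      }
    }
  }
}

PUT /follower-index/_ccr/follow
{
  "remote_cluster": "leader-cluster",
  "leader_index": "leader-index"
}
----

To create follower indices automatically for new indices whose names match a
pattern, define an auto-follow pattern instead (again with placeholder names):

[source,console]
----
PUT /_ccr/auto_follow/my-auto-follow-pattern
{
  "remote_cluster": "leader-cluster",
  "leader_index_patterns": ["metrics-*"],
  "follow_index_pattern": "{{leader_index}}-copy"
}
----
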

You configure {ccr} clusters in a uni-directional or bi-directional setup:

* In a uni-directional configuration, one cluster contains only
leader indices, and the other cluster contains only follower indices.
* In a bi-directional configuration, each cluster contains both leader and
follower indices.

In a uni-directional configuration, the cluster containing follower indices
must be running **the same or a newer** version of {es} as the remote cluster.
If newer, the versions must also be compatible, as outlined in the following
matrix.

[%collapsible]
[[ccr-version-compatibility]]
.Version compatibility matrix
====
include::../modules/remote-clusters-shared.asciidoc[tag=remote-cluster-compatibility-matrix]
====

[discrete]
[[ccr-multi-cluster-architectures]]
=== Multi-cluster architectures

Use {ccr} to construct several multi-cluster architectures within the Elastic
Stack:

* <<ccr-disaster-recovery,Disaster recovery>> in case a primary cluster fails,
with a secondary cluster serving as a hot backup
* <<ccr-data-locality,Data locality>> to maintain multiple copies of the
dataset close to the application servers (and users), and reduce costly latency
* <<ccr-centralized-reporting,Centralized reporting>> for minimizing network
traffic and latency in querying multiple geo-distributed {es} clusters, or for
preventing search load from interfering with indexing by offloading search to a
secondary cluster

Watch the
https://www.elastic.co/webinars/replicate-elasticsearch-data-with-cross-cluster-replication-ccr[{ccr} webinar] to learn more about the following use cases.
Then, <<ccr-getting-started-tutorial,set up {ccr}>> on your local machine and work
through the demo from the webinar.

IMPORTANT: In all of these use cases, you must
<<manually-configure-security,configure security>> independently on every
cluster. The security configuration is not replicated when configuring {ccr} for
disaster recovery. To ensure that the {es} `security` feature state is backed up,
<<back-up-specific-feature-state,take snapshots>> regularly. You can then restore
the native users, roles, and tokens from your security configuration.

[discrete]
[[ccr-disaster-recovery]]
==== Disaster recovery and high availability

Disaster recovery provides your mission-critical applications with the
tolerance to withstand datacenter or region outages. This use case is the
most common deployment of {ccr}. You can configure clusters in different
architectures to support disaster recovery and high availability:

* <<ccr-single-datacenter-recovery>>
* <<ccr-multiple-datacenter-recovery>>
* <<ccr-chained-replication>>
* <<ccr-bi-directional-replication>>

[discrete]
[[ccr-single-datacenter-recovery]]
===== Single disaster recovery datacenter

In this configuration, data is replicated from the production datacenter to the
disaster recovery datacenter. Because the follower indices replicate the leader
index, your application can use the disaster recovery datacenter if the
production datacenter is unavailable.

image::images/ccr-arch-disaster-recovery.png[Production datacenter that replicates data to a disaster recovery datacenter]

[discrete]
[[ccr-multiple-datacenter-recovery]]
===== Multiple disaster recovery datacenters

You can replicate data from one datacenter to multiple datacenters. This
configuration provides both disaster recovery and high availability, ensuring
that data is replicated in two datacenters if the primary datacenter is
unavailable.

In the following diagram, data from Datacenter A is replicated to
Datacenter B and Datacenter C, which both have a read-only copy of the leader
index from Datacenter A.

image::images/ccr-arch-multiple-dcs.png[Production datacenter that replicates data to two other datacenters]

[discrete]
[[ccr-chained-replication]]
===== Chained replication

You can replicate data across multiple datacenters to form a replication
chain. In the following diagram, Datacenter A contains the leader index.
Datacenter B replicates data from Datacenter A, and Datacenter C replicates
from the follower indices in Datacenter B. The connection between these
datacenters forms a chained replication pattern.

image::images/ccr-arch-chain-dcs.png[Three datacenters connected to form a replication chain]

[discrete]
[[ccr-bi-directional-replication]]
===== Bi-directional replication

In a https://www.elastic.co/blog/bi-directional-replication-with-elasticsearch-cross-cluster-replication-ccr[bi-directional replication] setup, all clusters have access to view
all data, and all clusters have an index to write to without manually
implementing failover. Applications can write to the local index within each
datacenter, and read across multiple indices for a global view of all
information.

This configuration requires no manual intervention when a cluster or datacenter
is unavailable. In the following diagram, if Datacenter A is unavailable, you
can continue using Datacenter B without manual failover. When Datacenter A
comes online, replication resumes between the clusters.

image::images/ccr-arch-bi-directional.png[Bi-directional configuration where each cluster contains both a leader index and follower indices]

This configuration is particularly useful for index-only workloads, where no updates
to document values occur. In this configuration, documents indexed by {es} are
immutable. Clients are located in each datacenter alongside the {es}
cluster, and do not communicate with clusters in different datacenters.

[discrete]
[[ccr-data-locality]]
==== Data locality

Bringing data closer to your users or application server can reduce latency
and response time. This methodology also applies when replicating data in {es}.
For example, you can replicate a product catalog or reference dataset to 20 or
more datacenters around the world to minimize the distance between the data and
the application server.

In the following diagram, data is replicated from one datacenter to three
additional datacenters, each in their own region. The central datacenter
contains the leader index, and the additional datacenters contain follower
indices that replicate data in that particular region. This configuration
puts data closer to the application accessing it.

image::images/ccr-arch-data-locality.png[A centralized datacenter replicated across three other datacenters, each in their own region]

[discrete]
[[ccr-centralized-reporting]]
==== Centralized reporting

Using a centralized reporting cluster is useful when querying across a large
network is inefficient. In this configuration, you replicate data from many
smaller clusters to the centralized reporting cluster.

For example, a large global bank might have 100 {es} clusters around the world
that are distributed across different regions for each bank branch. Using
{ccr}, the bank can replicate events from all 100 clusters to a central cluster
to analyze and aggregate events locally for reporting. Rather than maintaining
a mirrored cluster, the bank can use {ccr} to replicate specific indices.

In the following diagram, data from three datacenters in different regions is
replicated to a centralized reporting cluster. This configuration enables you
to copy data from regional hubs to a central cluster, where you can run all
reports locally.

image::images/ccr-arch-central-reporting.png[Three clusters in different regions sending data to a centralized reporting cluster for analysis]

[discrete]
[[ccr-replication-mechanics]]
=== Replication mechanics

Although you <<ccr-getting-started-tutorial,set up {ccr}>> at the index level, {es}
achieves replication at the shard level. When a follower index is created,
each shard in that index pulls changes from its corresponding shard in the
leader index, which means that a follower index has the same number of
shards as its leader index. All operations on the leader are replicated by the
follower, such as operations to create, update, or delete a document.
These requests can be served from any copy of the leader shard (primary or
replica).

When a follower shard sends a read request, the leader shard responds with
any new operations, limited by the read parameters that you establish when
configuring the follower index. If no new operations are available, the
leader shard waits up to the configured timeout for new operations. If the
timeout elapses, the leader shard responds to the follower shard that there
are no new operations. The follower shard updates shard statistics and
immediately sends another read request to the leader shard. This
communication model ensures that network connections between the remote
cluster and the local cluster are continually in use, avoiding forceful
termination by an external source such as a firewall.

If a read request fails, the cause of the failure is inspected. If the
cause of the failure is deemed to be recoverable (such as a network
failure), the follower shard enters into a retry loop. Otherwise, the
follower shard pauses <<ccr-pause-replication,until you resume it>>.
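
For example, you can inspect a paused follower and resume it with the
follower-side {ccr} APIs. The follow stats API reports read and write activity
per follower shard, including any fatal exception that caused replication to
pause. The index name below is a placeholder:

[source,console]
----
GET /follower-index/_ccr/stats

POST /follower-index/_ccr/pause_follow

POST /follower-index/_ccr/resume_follow
----
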

[discrete]
[[ccr-update-leader-index]]
==== Processing updates

You can't manually modify a follower index's mappings or aliases. To make
changes, you must update the leader index. Because they are read-only, follower
indices reject writes in all configurations.

NOTE: Although changes to aliases on the leader index are replicated to follower
indices, write indices are ignored. Follower indices can't accept direct writes,
so if any leader aliases have `is_write_index` set to `true`, that value is
forced to `false`.

For example, suppose you index a document named `doc_1` in Datacenter A, which
replicates to Datacenter B. If a client connects to Datacenter B and attempts
to update `doc_1`, the request fails. To update `doc_1`, the client must
connect to Datacenter A and update the document in the leader index.

When a follower shard receives operations from the leader shard, it places
those operations in a write buffer. The follower shard submits bulk write
requests using operations from the write buffer. If the write buffer exceeds
its configured limits, no additional read requests are sent. This configuration
provides back-pressure against read requests, allowing the follower shard
to resume sending read requests when the write buffer is no longer full.

To manage how operations are replicated from the leader index, you can
configure settings when
<<ccr-getting-started-follower-index,creating the follower index>>.
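
As a sketch, the follow request accepts optional parameters that bound the read
and write sides described above. The values shown here are illustrative
defaults, not tuning recommendations, and the index names are placeholders:

[source,console]
----
PUT /follower-index/_ccr/follow
{
  "remote_cluster": "leader-cluster",
  "leader_index": "leader-index",
  "max_read_request_operation_count": 5120,
  "read_poll_timeout": "1m",
  "max_write_buffer_count": 2147483647,
  "max_write_buffer_size": "512mb",
  "max_retry_delay": "500ms"
}
----
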

Changes to the index mapping on the leader index are replicated to the
follower index as soon as possible. This is true for index settings as well,
except for some settings that are local to the leader index. For example,
a change to the number of replicas on the leader index is not replicated to
the follower index; you manage that setting on each cluster independently.

If you apply a non-dynamic settings change to the leader index that is
needed by the follower index, the follower index closes itself, applies the
settings update, and then re-opens itself. The follower index is unavailable
for reads and cannot replicate writes during this cycle.

[discrete]
[[ccr-remote-recovery]]
=== Initializing followers using remote recovery

When you create a follower index, you cannot use it until it is fully
initialized. The _remote recovery_ process builds a new copy of a shard on a
follower node by copying data from the primary shard in the leader cluster.

{es} uses this remote recovery process to bootstrap a follower index using the
data from the leader index. This process provides the follower with a copy of
the current state of the leader index, even if a complete history of changes
is not available on the leader due to Lucene segment merging.

Remote recovery is a network-intensive process that transfers all of the Lucene
segment files from the leader cluster to the follower cluster. The follower
requests that a recovery session be initiated on the primary shard in the
leader cluster. The follower then requests file chunks concurrently from the
leader. By default, the process concurrently requests five 1 MB file
chunks. This default behavior is designed to support leader and follower
clusters with high network latency between them.

TIP: You can modify dynamic <<ccr-recovery-settings,remote recovery settings>>
to rate-limit the transmitted data and manage the resources consumed by remote
recoveries.

Use the <<cat-recovery,recovery API>> on the cluster containing the follower
index to obtain information about an in-progress remote recovery. Because {es}
implements remote recoveries using the
<<snapshot-restore,snapshot and restore>> infrastructure, running remote
recoveries are labelled as type `snapshot` in the recovery API.
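
For example, a sketch of rate-limiting remote recovery traffic and then
checking recovery progress might look like this (the `20mb` limit is
illustrative, not a recommendation):

[source,console]
----
PUT /_cluster/settings
{
  "persistent": {
    "ccr.indices.recovery.max_bytes_per_sec": "20mb"
  }
}

GET /_cat/recovery?v=true
----
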

[discrete]
[[ccr-leader-requirements]]
=== Replicating a leader requires soft deletes

{ccr-cap} works by replaying the history of individual write
operations that were performed on the shards of the leader index. {es} needs to
retain the
<<index-modules-history-retention,history of these operations>> on the leader
shards so that they can be pulled by the follower shard tasks. The underlying
mechanism used to retain these operations is _soft deletes_.

A soft delete occurs whenever an existing document is deleted or updated. By
retaining these soft deletes up to configurable limits, the history of
operations can be retained on the leader shards and made available to the
follower shard tasks as they replay the history of operations.

The <<ccr-index-soft-deletes-retention-period,`index.soft_deletes.retention_lease.period`>>
setting defines the maximum time to retain a shard history retention lease
before it is considered expired. This setting determines how long the cluster
containing your follower index can be offline, which is 12 hours by default. If
a shard copy recovers after its retention lease expires, but the missing
operations are still available on the leader index, then {es} establishes a
new lease and copies the missing operations. However, {es} does not guarantee to
retain unleased operations, so it is also possible that some of the missing
operations have been discarded by the leader and are now completely
unavailable. If this happens, the follower cannot recover automatically, so
you must <<ccr-recreate-follower-index,recreate it>>.
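
If the cluster containing your follower index might be offline for longer than
the default, you can lengthen the lease period on the leader index. This is a
sketch with a placeholder index name and an illustrative `24h` value:

[source,console]
----
PUT /leader-index/_settings
{
  "index.soft_deletes.retention_lease.period": "24h"
}
----
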

Soft deletes must be enabled for indices that you want to use as leader
indices. Soft deletes are enabled by default on new indices created on
or after {es} 7.0.0.

// tag::ccr-existing-indices-tag[]
IMPORTANT: {ccr-cap} cannot be used on existing indices where soft deletes are
disabled, such as indices created in {es} versions before 7.0.0. You must
<<docs-reindex,reindex>> your data into a new index with soft deletes
enabled.
// end::ccr-existing-indices-tag[]
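
A minimal reindex into a new index might look like the following sketch, with
placeholder index names. Because soft deletes are enabled by default on new
indices, the destination index can then serve as a leader:

[source,console]
----
POST /_reindex
{
  "source": {
    "index": "old-index"
  },
  "dest": {
    "index": "new-index"
  }
}
----
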

[discrete]
[[ccr-learn-more]]
=== Use {ccr}

The following sections provide more information about how to configure
and use {ccr}:

* <<ccr-getting-started-tutorial>>
* <<ccr-managing>>
* <<ccr-auto-follow>>
* <<ccr-upgrading>>

[discrete]
[[ccr-limitations]]
=== {ccr-cap} limitations

{ccr-cap} is designed to replicate user-generated indices only, and doesn't
currently replicate any of the following:

* <<system-indices,System indices>>
* {ml-docs}/machine-learning-intro.html[Machine learning jobs]
* <<index-templates,Index templates>>
* <<index-lifecycle-management,{ilm-cap}>> and
<<snapshot-lifecycle-management-api,{slm}>> policies
* {ref}/mapping-roles.html[User permissions and role mappings]
* <<snapshots-register-repository,Snapshot repository settings>>
* <<modules-cluster,Cluster settings>>
* <<searchable-snapshots,Searchable snapshots>>

If you want to replicate any of this data, you must replicate it to a remote
cluster manually.

NOTE: Data for <<searchable-snapshots,searchable snapshot>> indices is stored in
the snapshot repository. {ccr-cap} won't replicate these indices completely, even
though they're partially or fully cached on the {es} nodes. To achieve
searchable snapshots in a remote cluster, configure snapshot repositories on
the remote cluster and use the same {ilm} policy from the local cluster to move
data into the cold or frozen tiers on the remote cluster.

include::getting-started.asciidoc[]
include::managing.asciidoc[]
include::auto-follow.asciidoc[]
include::upgrading.asciidoc[]
include::uni-directional-disaster-recovery.asciidoc[]
include::bi-directional-disaster-recovery.asciidoc[]