[role="xpack"]
[testenv="platinum"]
[[xpack-ccr]]
== {ccr-cap}

With {ccr}, you can replicate indices across clusters to:

* Continue handling search requests in the event of a datacenter outage
* Prevent search volume from impacting indexing throughput
* Reduce search latency by processing search requests in geo-proximity to the
user

{ccr-cap} uses an active-passive model. You index to a _leader_ index, and the
data is replicated to one or more read-only _follower_ indices. Before you can
add a follower index to a cluster, you must configure the _remote cluster_ that
contains the leader index.

When the leader index receives writes, the follower indices pull changes from
the leader index on the remote cluster.
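For example, you can register the remote cluster with the cluster update
settings API and then create a follower index that pulls from it. This is a
minimal sketch: the `leader` alias, the seed address, and the
`leader_index`/`follower_index` names are placeholders, not values from this
document.

[source,console]
----
PUT /_cluster/settings
{
  "persistent": {
    "cluster": {
      "remote": {
        "leader": { <1>
          "seeds": ["127.0.0.1:9300"] <2>
        }
      }
    }
  }
}

PUT /follower_index/_ccr/follow <3>
{
  "remote_cluster": "leader",
  "leader_index": "leader_index"
}
----
<1> A connection alias for the remote cluster; `leader` is a placeholder name.
<2> The transport address of a seed node in the remote cluster.
<3> Creates `follower_index`, which replicates `leader_index` from the remote
cluster registered under the `leader` alias.

See <<ccr-getting-started>> for the full procedure.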
You can manually create follower
indices, or configure auto-follow patterns to automatically create follower
indices for new time series indices.

You configure {ccr} clusters in a uni-directional or bi-directional setup:

* In a uni-directional configuration, one cluster contains only
leader indices, and the other cluster contains only follower indices.
* In a bi-directional configuration, each cluster contains both leader and
follower indices.

In a uni-directional configuration, the cluster containing follower indices
must be running **the same or newer** version of {es} as the remote cluster.
If newer, the versions must also be compatible as outlined in the following
matrix.

[%collapsible]
[[ccr-version-compatibility]]
.Version compatibility matrix
====
include::../modules/remote-clusters.asciidoc[tag=remote-cluster-compatibility-matrix]
====

[discrete]
[[ccr-multi-cluster-architectures]]
=== Multi-cluster architectures

Use {ccr} to construct several multi-cluster architectures within the Elastic
Stack:

* <<ccr-disaster-recovery,Disaster recovery>> in case a primary cluster fails,
with a secondary cluster serving as a hot backup
* <<ccr-data-locality,Data locality>> to maintain multiple copies of the
dataset close to the application servers (and users), and reduce costly latency
* <<ccr-centralized-reporting,Centralized reporting>> for minimizing network
traffic and latency in querying multiple geo-distributed {es} clusters, or for
preventing search load from interfering with indexing by offloading search to a
secondary cluster

Watch the
https://www.elastic.co/webinars/replicate-elasticsearch-data-with-cross-cluster-replication-ccr[{ccr} webinar] to learn more about the following use cases.
Then, <<ccr-getting-started,set up {ccr}>> on your local machine and work
through the demo from the webinar.

[discrete]
[[ccr-disaster-recovery]]
==== Disaster recovery and high availability

Disaster recovery provides your mission-critical applications with the
tolerance to withstand datacenter or region outages.
This use case is the
most common deployment of {ccr}. You can configure clusters in different
architectures to support disaster recovery and high availability:

* <<ccr-single-datacenter-recovery>>
* <<ccr-multiple-datacenter-recovery>>
* <<ccr-chained-replication>>
* <<ccr-bi-directional-replication>>

[discrete]
[[ccr-single-datacenter-recovery]]
===== Single disaster recovery datacenter

In this configuration, data is replicated from the production datacenter to the
disaster recovery datacenter. Because the follower indices replicate the leader
index, your application can use the disaster recovery datacenter if the
production datacenter is unavailable.

image::images/ccr-arch-disaster-recovery.png[Production datacenter that replicates data to a disaster recovery datacenter]

[discrete]
[[ccr-multiple-datacenter-recovery]]
===== Multiple disaster recovery datacenters

You can replicate data from one datacenter to multiple datacenters. This
configuration provides both disaster recovery and high availability, ensuring
that data is replicated in two datacenters if the primary datacenter is down
or unavailable.

In the following diagram, data from Datacenter A is replicated to
Datacenter B and Datacenter C, which both have a read-only copy of the leader
index from Datacenter A.

image::images/ccr-arch-multiple-dcs.png[Production datacenter that replicates data to two other datacenters]

[discrete]
[[ccr-chained-replication]]
===== Chained replication

You can replicate data across multiple datacenters to form a replication
chain. In the following diagram, Datacenter A contains the leader index.
Datacenter B replicates data from Datacenter A, and Datacenter C replicates
from the follower indices in Datacenter B.
The connection between these
datacenters forms a chained replication pattern.

image::images/ccr-arch-chain-dcs.png[Three datacenters connected to form a replication chain]

[discrete]
[[ccr-bi-directional-replication]]
===== Bi-directional replication

In a https://www.elastic.co/blog/bi-directional-replication-with-elasticsearch-cross-cluster-replication-ccr[bi-directional replication] setup, all clusters have access to view
all data, and all clusters have an index to write to without manually
implementing failover. Applications can write to the local index within each
datacenter, and read across multiple indices for a global view of all
information.

This configuration requires no manual intervention when a cluster or datacenter
is unavailable. In the following diagram, if Datacenter A is unavailable, you
can continue using Datacenter B without manual failover. When Datacenter A
comes online, replication resumes between the clusters.

image::images/ccr-arch-bi-directional.png[Bi-directional configuration where each cluster contains both a leader index and follower indices]

NOTE: This configuration is useful for index-only workloads, where no updates
to document values occur. In this configuration, documents indexed by {es} are
immutable. Clients are located in each datacenter alongside the {es}
cluster, and do not communicate with clusters in different datacenters.

[discrete]
[[ccr-data-locality]]
==== Data locality

Bringing data closer to your users or application server can reduce latency
and response time. This methodology also applies when replicating data in {es}.
For example, you can replicate a product catalog or reference dataset to 20 or
more datacenters around the world to minimize the distance between the data and
the application server.

In the following diagram, data is replicated from one datacenter to three
additional datacenters, each in their own region.
The central datacenter
contains the leader index, and the additional datacenters contain follower
indices that replicate data in that particular region. This configuration
puts data closer to the application accessing it.

image::images/ccr-arch-data-locality.png[A centralized datacenter replicated across three other datacenters, each in their own region]

[discrete]
[[ccr-centralized-reporting]]
==== Centralized reporting

Using a centralized reporting cluster is useful when querying across a large
network is inefficient. In this configuration, you replicate data from many
smaller clusters to the centralized reporting cluster.

For example, a large global bank might have 100 {es} clusters around the world
that are distributed across different regions for each bank branch. Using
{ccr}, the bank can replicate events from all 100 clusters to a central
cluster to analyze and aggregate events locally for reporting. Rather than
maintaining a mirrored cluster, the bank can use {ccr} to replicate specific
indices.

In the following diagram, data from three datacenters in different regions is
replicated to a centralized reporting cluster. This configuration enables you
to copy data from regional hubs to a central cluster, where you can run all
reports locally.

image::images/ccr-arch-central-reporting.png[Three clusters in different regions sending data to a centralized reporting cluster for analysis]

[discrete]
[[ccr-replication-mechanics]]
=== Replication mechanics

Although you <<ccr-getting-started,set up {ccr}>> at the index level, {es}
achieves replication at the shard level. When a follower index is created,
each shard in that index pulls changes from its corresponding shard in the
leader index, which means that a follower index has the same number of
shards as its leader index.
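Per-shard replication progress can be inspected with the follower stats API; a
minimal sketch, where `follower_index` is a placeholder for the name of your
follower index:

[source,console]
----
GET /follower_index/_ccr/stats
----

The response reports counters for each follower shard, such as the number of
operations read from the leader and the size of the outstanding write buffer.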
All operations on the leader are replicated by the
follower, such as operations to create, update, or delete a document.
These requests can be served from any copy of the leader shard (primary or
replica).

When a follower shard sends a read request, the leader shard responds with
any new operations, limited by the read parameters that you establish when
configuring the follower index. If no new operations are available, the
leader shard waits up to the configured timeout for new operations. If the
timeout elapses, the leader shard responds to the follower shard that there
are no new operations. The follower shard updates shard statistics and
immediately sends another read request to the leader shard. This
communication model ensures that network connections between the remote
cluster and the local cluster are continually in use, avoiding forceful
termination by an external source such as a firewall.

If a read request fails, the cause of the failure is inspected. If the
cause of the failure is deemed to be recoverable (such as a network
failure), the follower shard enters into a retry loop. Otherwise, the
follower shard pauses <<ccr-pause-replication,until you resume it>>.

When a follower shard receives operations from the leader shard, it places
those operations in a write buffer. The follower shard submits bulk write
requests using operations from the write buffer. If the write buffer exceeds
its configured limits, no additional read requests are sent.
This configuration
provides back-pressure against read requests, allowing the follower shard
to resume sending read requests when the write buffer is no longer full.

To manage how operations are replicated from the leader index, you can
configure settings when
<<ccr-getting-started-follower-index,creating the follower index>>.

The follower index automatically retrieves some updates applied to the leader
index, while other updates are retrieved as needed:

[cols="3"]
|===
h| Update type h| Automatic h| As needed

| Alias    | {yes-icon} | {no-icon}
| Mapping  | {no-icon}  | {yes-icon}
| Settings | {no-icon}  | {yes-icon}
|===

For example, changing the number of replicas on the leader index is not
replicated by the follower index, so that setting might not be retrieved.

NOTE: You cannot manually modify a follower index's mappings or aliases.

If you apply a non-dynamic settings change to the leader index that is
needed by the follower index, the follower index closes itself, applies the
settings update, and then re-opens itself. The follower index is unavailable
for reads and cannot replicate writes during this cycle.

[discrete]
[[ccr-remote-recovery]]
=== Initializing followers using remote recovery

When you create a follower index, you cannot use it until it is fully
initialized. The _remote recovery_ process builds a new copy of a shard on a
follower node by copying data from the primary shard in the leader cluster.

{es} uses this remote recovery process to bootstrap a follower index using the
data from the leader index. This process provides the follower with a copy of
the current state of the leader index, even if a complete history of changes
is not available on the leader due to Lucene segment merging.

Remote recovery is a network-intensive process that transfers all of the Lucene
segment files from the leader cluster to the follower cluster. The follower
requests that a recovery session be initiated on the primary shard in the
leader cluster.
The follower then requests file chunks concurrently from the
leader. By default, the process concurrently requests five 1MB file
chunks. This default behavior is designed to support leader and follower
clusters with high network latency between them.

TIP: You can modify dynamic <<ccr-recovery-settings,remote recovery settings>>
to rate-limit the transmitted data and manage the resources consumed by remote
recoveries.

Use the <<cat-recovery,recovery API>> on the cluster containing the follower
index to obtain information about an in-progress remote recovery. Because {es}
implements remote recoveries using the
<<snapshot-restore,snapshot and restore>> infrastructure, running remote
recoveries are labelled as type `snapshot` in the recovery API.

[discrete]
[[ccr-leader-requirements]]
=== Replicating a leader requires soft deletes

{ccr-cap} works by replaying the history of individual write
operations that were performed on the shards of the leader index. {es} needs to
retain the
<<index-modules-history-retention,history of these operations>> on the leader
shards so that they can be pulled by the follower shard tasks. The underlying
mechanism used to retain these operations is _soft deletes_.

A soft delete occurs whenever an existing document is deleted or updated. By
retaining these soft deletes up to configurable limits, the history of
operations can be retained on the leader shards and made available to the
follower shard tasks as they replay the history of operations.

The <<ccr-index-soft-deletes-retention-period,`index.soft_deletes.retention_lease.period`>> setting defines the
maximum time to retain a shard history retention lease before it is
considered expired. This setting determines how long the cluster containing
your leader index can be offline, which is 12 hours by default.
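For example, the default 12 hour lease period could be extended when creating
the leader index; a sketch, in which the index name `leader_index` and the
`24h` value are illustrative rather than recommendations:

[source,console]
----
PUT /leader_index
{
  "settings": {
    "index.soft_deletes.retention_lease.period": "24h"
  }
}
----

A longer lease period retains more history on the leader shards, at the cost
of additional disk space for the retained soft deletes.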
If a shard copy
recovers after its retention lease expires, then {es} falls back to copying
the entire index, because it can no longer replay the missing history.

Soft deletes must be enabled for indices that you want to use as leader
indices. Soft deletes are enabled by default on new indices created on
or after {es} 7.0.0.

// tag::ccr-existing-indices-tag[]
IMPORTANT: {ccr-cap} cannot be used on existing indices created using {es}
7.0.0 or earlier, where soft deletes are disabled. You must
<<docs-reindex,reindex>> your data into a new index with soft deletes
enabled.
// end::ccr-existing-indices-tag[]

[discrete]
[[ccr-learn-more]]
=== Use {ccr}

The following sections provide more information about how to configure
and use {ccr}:

* <<ccr-getting-started>>
* <<ccr-managing>>
* <<ccr-auto-follow>>
* <<ccr-upgrading>>

include::getting-started.asciidoc[]
include::managing.asciidoc[]
include::auto-follow.asciidoc[]
include::upgrading.asciidoc[]