
[role="xpack"]
[testenv="platinum"]
[[ccr-overview]]
=== Overview

{ccr-cap} is configured on an index-by-index basis. For each configured
replication there is a replication source index called the _leader index_ and a
replication target index called the _follower index_.

Replication is active-passive: the leader index can be written to directly,
while the follower index cannot directly receive writes.

Replication is pull-based: replication is driven by the follower index. This
simplifies state management on the leader index and means that {ccr} does not
interfere with indexing on the leader index.

In {ccr}, the cluster performing the pull is known as the _local cluster_. The
cluster being replicated is known as the _remote cluster_.
==== Prerequisites

* {ccr-cap} requires <<modules-remote-clusters, remote clusters>>.
* The {es} version of the local cluster must be **the same as or newer** than
the remote cluster. If newer, the versions must also be compatible as outlined
in the following matrix.

include::../modules/remote-clusters.asciidoc[tag=remote-cluster-compatibility-matrix]
==== Configuring replication

Replication can be configured in two ways:

* Manually creating specific follower indices (in {kib} or by using the
{ref}/ccr-put-follow.html[create follower API])
* Automatically creating follower indices from auto-follow patterns (in {kib}
or by using the
{ref}/ccr-put-auto-follow-pattern.html[create auto-follow pattern API])
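For example, the following request creates an auto-follow pattern that
automatically follows any index on the remote cluster whose name matches
`leader_index*`. The pattern name and index names here are illustrative:

[source,console]
--------------------------------------------------
PUT /_ccr/auto_follow/my_auto_follow_pattern
{
  "remote_cluster" : "remote_cluster",
  "leader_index_patterns" : ["leader_index*"]
}
--------------------------------------------------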
For more information about managing {ccr} in {kib}, see
{kibana-ref}/working-remote-clusters.html[Working with remote clusters].

NOTE: You must also <<ccr-requirements,configure the leader index>>.
When you initiate replication either manually or through an auto-follow
pattern, the follower index is created on the local cluster. Once the follower
index is created, the <<remote-recovery, remote recovery>> process copies all
of the Lucene segment files from the remote cluster to the local cluster.

By default, if you initiate following manually (by using {kib} or the create
follower API), the recovery process is asynchronous with respect to the
{ref}/ccr-put-follow.html[create follower request]. The request returns before
the <<remote-recovery, remote recovery>> process completes. If you would like
to wait for the process to complete, you can use the `wait_for_active_shards`
parameter.
//////////////////////////

[source,console]
--------------------------------------------------
PUT /follower_index/_ccr/follow?wait_for_active_shards=1
{
  "remote_cluster" : "remote_cluster",
  "leader_index" : "leader_index"
}
--------------------------------------------------
// TESTSETUP
// TEST[setup:remote_cluster_and_leader_index]

[source,console]
--------------------------------------------------
POST /follower_index/_ccr/pause_follow
--------------------------------------------------
// TEARDOWN

//////////////////////////
==== The mechanics of replication

While replication is managed at the index level, replication is performed at
the shard level. When a follower index is created, it is automatically
configured to have an identical number of shards as the leader index. A
follower shard task in the follower index pulls from the corresponding leader
shard in the leader index by sending read requests for new operations. These
read requests can be served by any copy of the leader shard (primary or
replica).

For each read request sent by the follower shard task, if there are new
operations available on the leader shard, the leader shard responds with
operations limited by the read parameters that you established when you
configured the follower index. If there are no new operations available on the
leader shard, the leader shard waits up to a configured timeout for new
operations. If new operations occur within that timeout, the leader shard
immediately responds with those new operations. Otherwise, if the timeout
elapses, the leader shard replies that there are no new operations. The
follower shard task updates some statistics and immediately sends another read
request to the leader shard. This keeps the network connections between the
remote cluster and the local cluster in continual use, avoiding forceful
termination by an external source (such as a firewall).

If a read request fails, the cause of the failure is inspected. If the cause
is deemed recoverable (for example, a network failure), the follower shard
task enters a retry loop. Otherwise, the follower shard task is paused and
requires user intervention before it can be resumed with the
{ref}/ccr-post-resume-follow.html[resume follower API].

When operations are received by the follower shard task, they are placed in a
write buffer. The follower shard task manages this write buffer and submits
bulk write requests from this write buffer to the follower shard. The write
buffer and these write requests are managed by the write parameters that you
established when you configured the follower index. The write buffer serves as
back-pressure against read requests: if the write buffer exceeds its
configured limits, no additional read requests are sent by the follower shard
task. The follower shard task resumes sending read requests when the write
buffer no longer exceeds its configured limits.
NOTE: The intricacies of how operations are replicated from the leader are
governed by settings that you can configure when you create the follower index
in {kib} or by using the {ref}/ccr-put-follow.html[create follower API].
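For example, the following create follower request sets a few of these read
and write parameters explicitly. The parameter names belong to the create
follower API; the values shown are illustrative:

[source,console]
--------------------------------------------------
PUT /follower_index/_ccr/follow
{
  "remote_cluster" : "remote_cluster",
  "leader_index" : "leader_index",
  "max_read_request_operation_count" : 5120,
  "read_poll_timeout" : "1m",
  "max_write_buffer_count" : 2147483647
}
--------------------------------------------------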
Mapping updates applied to the leader index are automatically retrieved as
needed by the follower index. It is not possible to manually modify the
mapping of a follower index.

Settings updates applied to the leader index that are needed by the follower
index are automatically retrieved as needed by the follower index. Not all
settings updates are needed by the follower index. For example, changing the
number of replicas on the leader index is not replicated by the follower
index.

Alias updates applied to the leader index are automatically retrieved by the
follower index. It is not possible to manually modify an alias of a follower
index.

NOTE: If you apply a non-dynamic settings change to the leader index that is
needed by the follower index, the follower index goes through a cycle of
closing itself, applying the settings update, and then re-opening itself. The
follower index is unavailable for reads and does not replicate writes during
this cycle.
==== Inspecting the progress of replication

You can inspect the progress of replication at the shard level with the
{ref}/ccr-get-follow-stats.html[get follower stats API]. This API gives you
insight into the reads and writes managed by the follower shard task. It also
reports read exceptions that can be retried and fatal exceptions that require
user intervention.
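For example, to retrieve follower stats for a follower index named
`follower_index`:

[source,console]
--------------------------------------------------
GET /follower_index/_ccr/stats
--------------------------------------------------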
==== Pausing and resuming replication

You can pause replication with the
{ref}/ccr-post-pause-follow.html[pause follower API] and then later resume
replication with the {ref}/ccr-post-resume-follow.html[resume follower API].
Using these APIs in tandem enables you to adjust the read and write parameters
on the follower shard task if your initial configuration is not suitable for
your use case.
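For example, the following requests pause the follower and then resume it with
an adjusted read parameter. `read_poll_timeout` is one of the parameters
accepted by the resume follower API; the value shown is illustrative:

[source,console]
--------------------------------------------------
POST /follower_index/_ccr/pause_follow

POST /follower_index/_ccr/resume_follow
{
  "read_poll_timeout" : "30s"
}
--------------------------------------------------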
==== Leader index retaining operations for replication

If the follower is unable to replicate operations from a leader for a period
of time, the following process can fail due to the leader lacking a complete
history of operations necessary for replication.

Operations replicated to the follower are identified using a sequence number
generated when the operation was initially performed. Lucene segment files are
occasionally merged in order to optimize searches and save space. When these
merges occur, it is possible for operations associated with deleted or updated
documents to be pruned during the merge. When the follower requests the
sequence number of a pruned operation, the process fails due to the operation
missing on the leader.

This scenario is not possible in an append-only workflow. As documents are
never deleted or updated, the underlying operation will not be pruned.

{es} attempts to mitigate this potential issue for update workflows using a
Lucene feature called soft deletes. When a document is updated or deleted, the
underlying operation is retained in the Lucene index for a period of time.
This period of time is governed by the
`index.soft_deletes.retention_lease.period` setting, which can be
<<ccr-requirements,configured on the leader index>>.

When a follower initiates the index following, it acquires a retention lease
from the leader. This informs the leader that it should not allow a soft
delete to be pruned until either the follower indicates that it has received
the operation or the lease expires. It is valuable to have monitoring in place
to detect a follower replication issue prior to the lease expiring so that the
problem can be remedied before the follower falls fatally behind.
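For example, the retention period might be increased when creating a leader
index on the remote cluster. The index name and the value shown here are
illustrative:

[source,console]
--------------------------------------------------
PUT /my_leader_index
{
  "settings" : {
    "index.soft_deletes.retention_lease.period" : "7d"
  }
}
--------------------------------------------------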
==== Remedying a follower that has fallen behind

If a follower falls sufficiently behind a leader that it can no longer
replicate operations, this can be detected in {kib} or by using the
{ref}/ccr-get-follow-stats.html[get follow stats API]. It is reported as an
`indices[].fatal_exception`.

In order to restart the follower, you must pause the following process, close
the index, and create the follower index again. For example:

[source,console]
----------------------------------------------------------------------
POST /follower_index/_ccr/pause_follow

POST /follower_index/_close

PUT /follower_index/_ccr/follow?wait_for_active_shards=1
{
  "remote_cluster" : "remote_cluster",
  "leader_index" : "leader_index"
}
----------------------------------------------------------------------

Re-creating the follower index is a destructive action. All of the existing
Lucene segment files are deleted on the follower cluster. The
<<remote-recovery, remote recovery>> process copies the Lucene segment files
from the leader again. After the follower index initializes, the following
process starts again.
==== Terminating replication

You can terminate replication with the
{ref}/ccr-post-unfollow.html[unfollow API]. This API converts a follower index
to a regular (non-follower) index.
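For example, assuming a follower index named `follower_index`, the following
sequence pauses the following process, closes the index, and then converts it
into a regular index. The follower must be paused and closed before it can be
unfollowed:

[source,console]
--------------------------------------------------
POST /follower_index/_ccr/pause_follow

POST /follower_index/_close

POST /follower_index/_ccr/unfollow
--------------------------------------------------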