[role="xpack"]
[testenv="platinum"]
[[ccr-overview]]
=== Overview

{ccr-cap} is done on an index-by-index basis: replication is configured at the
index level. For each configured replication there is a replication source
index called the _leader index_ and a replication target index called the
_follower index_.

Replication is active-passive. This means that while the leader index can be
written to directly, the follower index cannot directly receive writes.

Replication is pull-based. This means that replication is driven by the
follower index. This simplifies state management on the leader index and means
that {ccr} does not interfere with indexing on the leader index.

IMPORTANT: {ccr-cap} requires <<modules-remote-clusters, remote clusters>>.
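
Remote clusters are registered through cluster settings. As a minimal sketch,
assuming a hypothetical remote cluster alias `leader` whose seed node is
reachable on `127.0.0.1:9300`, registration might look like:

[source,console]
--------------------------------------------------
PUT /_cluster/settings
{
  "persistent" : {
    "cluster" : {
      "remote" : {
        "leader" : {
          "seeds" : [ "127.0.0.1:9300" ]
        }
      }
    }
  }
}
--------------------------------------------------
// TEST[skip:sketch for illustration]
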

==== Configuring replication

Replication can be configured in two ways:

* Manually creating specific follower indices (in {kib} or by using the
{ref}/ccr-put-follow.html[create follower API])
* Automatically creating follower indices from auto-follow patterns (in {kib}
or by using the
{ref}/ccr-put-auto-follow-pattern.html[create auto-follow pattern API])
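
For example, an auto-follow pattern that follows any remote index matching
`leader_index*` could be sketched as follows (the pattern name, remote cluster
alias, and index patterns are placeholders):

[source,console]
--------------------------------------------------
PUT /_ccr/auto_follow/my_auto_follow_pattern
{
  "remote_cluster" : "remote_cluster",
  "leader_index_patterns" : [ "leader_index*" ],
  "follow_index_pattern" : "{{leader_index}}-follower"
}
--------------------------------------------------
// TEST[skip:sketch for illustration]
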

For more information about managing {ccr} in {kib}, see
{kibana-ref}/working-remote-clusters.html[Working with remote clusters].

NOTE: You must also <<ccr-requirements,configure the leader index>>.

When you initiate replication either manually or through an auto-follow
pattern, the follower index is created on the local cluster. Once the follower
index is created, the <<remote-recovery, remote recovery>> process copies all
of the Lucene segment files from the remote cluster to the local cluster.

By default, if you initiate following manually (by using {kib} or the create
follower API), the recovery process is asynchronous with respect to the
{ref}/ccr-put-follow.html[create follower request]. The request returns before
the <<remote-recovery, remote recovery>> process completes. If you would like
to wait for the process to complete, you can use the `wait_for_active_shards`
parameter.
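
For example, a create follower request that waits for the primary shard of the
follower to become active might look like this (the cluster alias and index
names are placeholders):

[source,console]
--------------------------------------------------
PUT /follower_index/_ccr/follow?wait_for_active_shards=1
{
  "remote_cluster" : "remote_cluster",
  "leader_index" : "leader_index"
}
--------------------------------------------------
// TEST[skip:sketch for illustration]
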

//////////////////////////

[source,console]
--------------------------------------------------
PUT /follower_index/_ccr/follow?wait_for_active_shards=1
{
  "remote_cluster" : "remote_cluster",
  "leader_index" : "leader_index"
}
--------------------------------------------------
// TESTSETUP
// TEST[setup:remote_cluster_and_leader_index]

[source,console]
--------------------------------------------------
POST /follower_index/_ccr/pause_follow
--------------------------------------------------
// TEARDOWN

//////////////////////////

==== The mechanics of replication

While replication is managed at the index level, replication is performed at
the shard level. When a follower index is created, it is automatically
configured to have an identical number of shards as the leader index. A
follower shard task in the follower index pulls from the corresponding leader
shard in the leader index by sending read requests for new operations. These
read requests can be served from any copy of the leader shard (primary or
replicas).

For each read request sent by the follower shard task, if there are new
operations available on the leader shard, the leader shard responds with
operations limited by the read parameters that you established when you
configured the follower index. If there are no new operations available on the
leader shard, the leader shard waits up to a configured timeout for new
operations. If new operations occur within that timeout, the leader shard
immediately responds with those new operations. Otherwise, if the timeout
elapses, the leader shard replies that there are no new operations. The
follower shard task updates some statistics and immediately sends another read
request to the leader shard. This ensures that the network connections between
the remote cluster and the local cluster are continually in use, avoiding
forceful termination by an external source (such as a firewall).

If a read request fails, the cause of the failure is inspected. If the cause of
the failure is deemed to be recoverable (for example, a network failure), the
follower shard task enters a retry loop. Otherwise, the follower shard task is
paused and requires user intervention before it can be resumed with the
{ref}/ccr-post-resume-follow.html[resume follower API].
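
A paused follower can be resumed with default parameters by invoking the
resume follower API on the follower index, for example:

[source,console]
--------------------------------------------------
POST /follower_index/_ccr/resume_follow
--------------------------------------------------
// TEST[skip:sketch for illustration]
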

When operations are received by the follower shard task, they are placed in a
write buffer. The follower shard task manages this write buffer and submits
bulk write requests from this write buffer to the follower shard. The write
buffer and these write requests are managed by the write parameters that you
established when you configured the follower index. The write buffer serves as
back-pressure against read requests. If the write buffer exceeds its configured
limits, no additional read requests are sent by the follower shard task. The
follower shard task resumes sending read requests when the write buffer no
longer exceeds its configured limits.

NOTE: The intricacies of how operations are replicated from the leader are
governed by settings that you can configure when you create the follower index
in {kib} or by using the {ref}/ccr-put-follow.html[create follower API].
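
As an illustration, some of these read and write parameters can be supplied on
the create follower request itself; the values below are arbitrary, and the
cluster alias and index names are placeholders:

[source,console]
--------------------------------------------------
PUT /another_follower_index/_ccr/follow
{
  "remote_cluster" : "remote_cluster",
  "leader_index" : "leader_index",
  "max_read_request_operation_count" : 1024,
  "max_outstanding_read_requests" : 8,
  "max_write_buffer_count" : 65536,
  "read_poll_timeout" : "1m"
}
--------------------------------------------------
// TEST[skip:sketch for illustration]
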

Mapping updates applied to the leader index are automatically retrieved as
needed by the follower index. It is not possible to manually modify the
mapping of a follower index.

Settings updates applied to the leader index that are needed by the follower
index are automatically retrieved as needed by the follower index. Not all
settings updates are needed by the follower index. For example, changing the
number of replicas on the leader index is not replicated by the follower index.

Alias updates applied to the leader index are automatically retrieved by the
follower index. It is not possible to manually modify an alias of a follower
index.

NOTE: If you apply a non-dynamic settings change to the leader index that is
needed by the follower index, the follower index will go through a cycle of
closing itself, applying the settings update, and then re-opening itself. The
follower index will be unavailable for reads and will not replicate writes
during this cycle.

==== Inspecting the progress of replication

You can inspect the progress of replication at the shard level with the
{ref}/ccr-get-follow-stats.html[get follower stats API]. This API gives you
insight into the reads and writes managed by the follower shard task. It also
reports read exceptions that can be retried and fatal exceptions that require
user intervention.
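
For example, to retrieve the follower stats for a follower index:

[source,console]
--------------------------------------------------
GET /follower_index/_ccr/stats
--------------------------------------------------
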

==== Pausing and resuming replication

You can pause replication with the
{ref}/ccr-post-pause-follow.html[pause follower API] and then later resume
replication with the {ref}/ccr-post-resume-follow.html[resume follower API].
Using these APIs in tandem enables you to adjust the read and write parameters
on the follower shard task if your initial configuration is not suitable for
your use case.
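
For example, to pause a follower and then resume it with an adjusted read
parameter (the value below is arbitrary):

[source,console]
--------------------------------------------------
POST /follower_index/_ccr/pause_follow

POST /follower_index/_ccr/resume_follow
{
  "max_read_request_operation_count" : 2048
}
--------------------------------------------------
// TEST[skip:sketch for illustration]
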

==== Leader index retaining operations for replication

If the follower is unable to replicate operations from a leader for a period of
time, the following process can fail because the leader lacks a complete
history of the operations necessary for replication.

Operations replicated to the follower are identified using a sequence number
generated when the operation was initially performed. Lucene segment files are
occasionally merged in order to optimize searches and save space. When these
merges occur, it is possible for operations associated with deleted or updated
documents to be pruned during the merge. When the follower requests the
sequence number of a pruned operation, the process will fail because the
operation is missing on the leader.

This scenario is not possible in an append-only workflow. Because documents are
never deleted or updated, the underlying operations will not be pruned.

Elasticsearch attempts to mitigate this potential issue for update workflows
using a Lucene feature called soft deletes. When a document is updated or
deleted, the underlying operation is retained in the Lucene index for a period
of time. This period of time is governed by the
`index.soft_deletes.retention_lease.period` setting, which can be
<<ccr-requirements,configured on the leader index>>.
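
As a sketch, this retention period could be set when creating a leader index
(the index name and the seven-day value are illustrative):

[source,console]
--------------------------------------------------
PUT /my_leader_index
{
  "settings" : {
    "index.soft_deletes.retention_lease.period" : "7d"
  }
}
--------------------------------------------------
// TEST[skip:sketch for illustration]
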

When a follower initiates the index following, it acquires a retention lease
from the leader. This informs the leader that it should not allow a soft delete
to be pruned until either the follower indicates that it has received the
operation or the lease expires. It is valuable to have monitoring in place to
detect a follower replication issue prior to the lease expiring so that the
problem can be remedied before the follower falls fatally behind.

==== Remedying a follower that has fallen behind

If a follower falls sufficiently far behind a leader that it can no longer
replicate operations, this can be detected in {kib} or by using the
{ref}/ccr-get-follow-stats.html[get follow stats API]. It will be reported as
an `indices[].fatal_exception`.

In order to restart the follower, you must pause the following process, close
the index, and then create the follower index again. For example:

[source,console]
----------------------------------------------------------------------
POST /follower_index/_ccr/pause_follow

POST /follower_index/_close

PUT /follower_index/_ccr/follow?wait_for_active_shards=1
{
  "remote_cluster" : "remote_cluster",
  "leader_index" : "leader_index"
}
----------------------------------------------------------------------

Re-creating the follower index is a destructive action. All of the existing
Lucene segment files are deleted on the follower cluster. The
<<remote-recovery, remote recovery>> process copies the Lucene segment files
from the leader again. After the follower index initializes, the following
process starts again.

==== Terminating replication

You can terminate replication with the
{ref}/ccr-post-unfollow.html[unfollow API]. This API converts a follower index
to a regular (non-follower) index.
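
Because the unfollow API requires the follower index to be paused and closed,
a full termination sequence might look like this (`follower_index` is a
placeholder):

[source,console]
--------------------------------------------------
POST /follower_index/_ccr/pause_follow

POST /follower_index/_close

POST /follower_index/_unfollow

POST /follower_index/_open
--------------------------------------------------
// TEST[skip:sketch for illustration]
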