[role="xpack"]
[testenv="platinum"]
[[ccr-overview]]
== Overview

{ccr-cap} is done on an index-by-index basis. Replication is
configured at the index level. For each configured replication there is a
replication source index called the _leader index_ and a replication target
index called the _follower index_.

Replication is active-passive. This means that while the leader index
can directly be written into, the follower index can not directly receive
writes.

Replication is pull-based. This means that replication is driven by the
follower index. This simplifies state management on the leader index and means
that {ccr} does not interfere with indexing on the leader index.

[float]
=== Configuring replication

Replication can be configured in two ways:

* Manually creating specific follower indices (in {kib} or by using the
{ref}/ccr-put-follow.html[create follower API])
* Automatically creating follower indices from auto-follow patterns (in {kib} or
by using the {ref}/ccr-put-auto-follow-pattern.html[create auto-follow pattern API])

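For example, a minimal auto-follow pattern might look like the following
sketch. The pattern name `my_auto_follow_pattern`, the remote cluster alias
`remote_cluster`, and the index patterns shown here are illustrative
placeholders, not required values:

[source,js]
--------------------------------------------------
PUT /_ccr/auto_follow/my_auto_follow_pattern
{
  "remote_cluster" : "remote_cluster",
  "leader_index_patterns" : ["leader_index*"],
  "follow_index_pattern" : "{{leader_index}}-follower"
}
--------------------------------------------------

New indices on the remote cluster whose names match `leader_index*` would then
be followed automatically, with each follower named after its leader index
plus the `-follower` suffix.
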
For more information about managing {ccr} in {kib}, see
{kibana-ref}/working-remote-clusters.html[Working with remote clusters].

NOTE: You must also <<ccr-requirements,configure the leader index>>.

When you initiate replication either manually or through an auto-follow pattern, the
follower index is created on the local cluster. Once the follower index is created,
the <<remote-recovery, remote recovery>> process copies all of the Lucene segment
files from the remote cluster to the local cluster.

By default, if you initiate following manually (by using {kib} or the create follower API),
the recovery process is asynchronous with respect to the
{ref}/ccr-put-follow.html[create follower request]. The request returns before
the <<remote-recovery, remote recovery>> process completes. If you would like to wait for
the process to complete, you can use the `wait_for_active_shards` parameter.

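For example, the following sketch creates a follower index and waits for its
primary shard to become active before the request returns. The index and
cluster names are placeholders for your own values:

[source,js]
--------------------------------------------------
PUT /follower_index/_ccr/follow?wait_for_active_shards=1
{
  "remote_cluster" : "remote_cluster",
  "leader_index" : "leader_index"
}
--------------------------------------------------
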
//////////////////////////

[source,js]
--------------------------------------------------
PUT /follower_index/_ccr/follow?wait_for_active_shards=1
{
  "remote_cluster" : "remote_cluster",
  "leader_index" : "leader_index"
}
--------------------------------------------------
// CONSOLE
// TESTSETUP
// TEST[setup:remote_cluster_and_leader_index]

[source,js]
--------------------------------------------------
POST /follower_index/_ccr/pause_follow
--------------------------------------------------
// CONSOLE
// TEARDOWN

//////////////////////////

[float]
=== The mechanics of replication

While replication is managed at the index level, replication is performed at the
shard level. When a follower index is created, it is automatically
configured to have an identical number of shards as the leader index. A follower
shard task in the follower index pulls from the corresponding leader shard in
the leader index by sending read requests for new operations. These read
requests can be served from any copy of the leader shard (primary or replicas).

For each read request sent by the follower shard task, if there are new
operations available on the leader shard, the leader shard responds with
operations limited by the read parameters that you established when you
configured the follower index. If there are no new operations available on the
leader shard, the leader shard waits up to a configured timeout for new
operations. If new operations occur within that timeout, the leader shard
immediately responds with those new operations. Otherwise, if the timeout
elapses, the leader shard replies that there are no new operations. The
follower shard task updates some statistics and immediately sends another read
request to the leader shard. This ensures that the network connections between
the remote cluster and the local cluster are continually being used so as to
avoid forceful termination by an external source (such as a firewall).

If a read request fails, the cause of the failure is inspected. If the
cause of the failure is deemed to be a failure that can be recovered from (for
example, a network failure), the follower shard task enters into a retry
loop. Otherwise, the follower shard task is paused and requires user
intervention before it can be resumed with the
{ref}/ccr-post-resume-follow.html[resume follower API].

When operations are received by the follower shard task, they are placed in a
write buffer. The follower shard task manages this write buffer and submits
bulk write requests from this write buffer to the follower shard. The write
buffer and these write requests are managed by the write parameters that you
established when you configured the follower index. The write buffer serves as
back-pressure against read requests. If the write buffer exceeds its configured
limits, no additional read requests are sent by the follower shard task. The
follower shard task resumes sending read requests when the write buffer no
longer exceeds its configured limits.

NOTE: The intricacies of how operations are replicated from the leader are
governed by settings that you can configure when you create the follower index
in {kib} or by using the {ref}/ccr-put-follow.html[create follower API].

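For example, a create follower request that sets a few of these read and write
parameters explicitly might look like the following sketch. The parameter
values shown are illustrative only, not recommended settings; parameters that
are omitted keep their defaults:

[source,js]
--------------------------------------------------
PUT /follower_index/_ccr/follow
{
  "remote_cluster" : "remote_cluster",
  "leader_index" : "leader_index",
  "max_read_request_operation_count" : 5120,
  "read_poll_timeout" : "1m",
  "max_write_buffer_count" : 1000000
}
--------------------------------------------------
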
Mapping updates applied to the leader index are automatically retrieved
as-needed by the follower index. It is not possible to manually modify the
mapping of a follower index.

Settings updates applied to the leader index that are needed by the follower
index are automatically retrieved as-needed by the follower index. Not all
settings updates are needed by the follower index. For example, changing the
number of replicas on the leader index is not replicated by the follower index.

Alias updates applied to the leader index are automatically retrieved by the
follower index. It is not possible to manually modify an alias of a follower
index.

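For example, a dynamic replica change on the leader like the following sketch
is not copied to the follower, which keeps its own `number_of_replicas`
setting. The index name here is a placeholder:

[source,js]
--------------------------------------------------
PUT /leader_index/_settings
{
  "index.number_of_replicas" : 2
}
--------------------------------------------------
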
NOTE: If you apply a non-dynamic settings change to the leader index that is
needed by the follower index, the follower index will go through a cycle of
closing itself, applying the settings update, and then re-opening itself. The
follower index will be unavailable for reads and not replicating writes
during this cycle.

[float]
=== Inspecting the progress of replication

You can inspect the progress of replication at the shard level with the
{ref}/ccr-get-follow-stats.html[get follower stats API]. This API gives you
insight into the reads and writes managed by the follower shard task. It also
reports read exceptions that can be retried and fatal exceptions that require
user intervention.

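For example, the following sketch retrieves shard-level stats for a follower
index named `follower_index` (the index name is a placeholder):

[source,js]
--------------------------------------------------
GET /follower_index/_ccr/stats
--------------------------------------------------

The response includes per-shard read and write statistics, along with any
fatal exception that has paused the follower shard task.
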
[float]
=== Pausing and resuming replication

You can pause replication with the
{ref}/ccr-post-pause-follow.html[pause follower API] and then later resume
replication with the {ref}/ccr-post-resume-follow.html[resume follower API].

Using these APIs in tandem enables you to adjust the read and write parameters
on the follower shard task if your initial configuration is not suitable for
your use case.

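For example, a pause followed by a resume that adjusts one of the read
parameters might look like the following sketch. The `read_poll_timeout` value
is illustrative only:

[source,js]
--------------------------------------------------
POST /follower_index/_ccr/pause_follow

POST /follower_index/_ccr/resume_follow
{
  "read_poll_timeout" : "30s"
}
--------------------------------------------------
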
[float]
=== Leader index retaining operations for replication

If the follower is unable to replicate operations from a leader for a period of
time, the following process can fail due to the leader lacking a complete history
of operations necessary for replication.

Operations replicated to the follower are identified using a sequence number
generated when the operation was initially performed. Lucene segment files are
occasionally merged in order to optimize searches and save space. When these
merges occur, it is possible for operations associated with deleted or updated
documents to be pruned during the merge. When the follower requests the sequence
number for a pruned operation, the process will fail due to the operation missing
on the leader.

This scenario is not possible in an append-only workflow. As documents are never
deleted or updated, the underlying operation will not be pruned.

Elasticsearch attempts to mitigate this potential issue for update workflows using
a Lucene feature called soft deletes. When a document is updated or deleted, the
underlying operation is retained in the Lucene index for a period of time. This
period of time is governed by the `index.soft_deletes.retention_lease.period`
setting which can be <<ccr-requirements,configured on the leader index>>.

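For example, a sketch that raises the retention period when creating a leader
index might look like the following. The index name and the `7d` value are
illustrative, not recommendations:

[source,js]
--------------------------------------------------
PUT /leader_index
{
  "settings" : {
    "index.soft_deletes.retention_lease.period" : "7d"
  }
}
--------------------------------------------------
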
When a follower initiates the index following, it acquires a retention lease from
the leader. This informs the leader that it should not allow a soft delete to be
pruned until either the follower indicates that it has received the operation or
the lease expires. It is valuable to have monitoring in place to detect a follower
replication issue prior to the lease expiring so that the problem can be remedied
before the follower falls fatally behind.

[float]
=== Remedying a follower that has fallen behind

If a follower falls sufficiently behind a leader that it can no longer replicate
operations, this can be detected in {kib} or by using the
{ref}/ccr-get-follow-stats.html[get follow stats API]. It will be reported as an
`indices[].fatal_exception`.

In order to restart the follower, you must pause the following process, close the
index, and create the follower index again. For example:

  154. ["source","js"]
  155. ----------------------------------------------------------------------
  156. POST /follower_index/_ccr/pause_follow
  157. POST /follower_index/_close
  158. PUT /follower_index/_ccr/follow?wait_for_active_shards=1
  159. {
  160. "remote_cluster" : "remote_cluster",
  161. "leader_index" : "leader_index"
  162. }
  163. ----------------------------------------------------------------------
  164. // CONSOLE
Re-creating the follower index is a destructive action. All of the existing Lucene
segment files are deleted on the follower cluster. The
<<remote-recovery, remote recovery>> process copies the Lucene segment
files from the leader again. After the follower index initializes, the
following process starts again.

[float]
=== Terminating replication

You can terminate replication with the
{ref}/ccr-post-unfollow.html[unfollow API]. This API converts a follower index
to a regular (non-follower) index.

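One common sequence, assuming a follower named `follower_index`, is to pause
the follower, close it, convert it, and then re-open it. The sketch below
illustrates that sequence; the index name is a placeholder:

[source,js]
--------------------------------------------------
POST /follower_index/_ccr/pause_follow

POST /follower_index/_close

POST /follower_index/_ccr/unfollow

POST /follower_index/_open
--------------------------------------------------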