
[[searchable-snapshots]]
== {search-snaps-cap}

{search-snaps-cap} let you use <<snapshot-restore,snapshots>> to search
infrequently accessed and read-only data in a very cost-effective fashion. The
<<cold-tier,cold>> and <<frozen-tier,frozen>> data tiers use {search-snaps} to
reduce your storage and operating costs.

{search-snaps-cap} eliminate the need for <<scalability,replica shards>> after
rolling over from the hot tier, potentially halving the local storage needed to
search your data. {search-snaps-cap} rely on the same snapshot mechanism you
already use for backups and have minimal impact on your snapshot repository
storage costs.

[discrete]
[[using-searchable-snapshots]]
=== Using {search-snaps}

Searching a {search-snap} index is the same as searching any other index.

By default, {search-snap} indices have no replicas. The underlying snapshot
provides resilience and the query volume is expected to be low enough that a
single shard copy will be sufficient. However, if you need to support a higher
query volume, you can add replicas by adjusting the `index.number_of_replicas`
index setting.

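For example, the following request adds a replica to a mounted index; the index
name `restored-my-index` is illustrative and depends on how the index was
mounted:

[source,console]
----
PUT /restored-my-index/_settings
{
  "index": {
    "number_of_replicas": 1
  }
}
----
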
If a node fails and {search-snap} shards need to be recovered elsewhere, there
is a brief window of time while {es} allocates the shards to other nodes where
the cluster health will not be `green`. Searches that hit these shards may fail
or return partial results until the shards are reallocated to healthy nodes.

You typically manage {search-snaps} through {ilm-init}. The
<<ilm-searchable-snapshot, searchable snapshots>> action automatically converts
a regular index into a {search-snap} index when it reaches the `cold` or
`frozen` phase. You can also make indices in existing snapshots searchable by
manually mounting them using the <<searchable-snapshots-api-mount-snapshot,
mount snapshot>> API.

To mount an index from a snapshot that contains multiple indices, we recommend
creating a <<clone-snapshot-api, clone>> of the snapshot that contains only the
index you want to search, and mounting the clone. You should not delete a
snapshot if it has any mounted indices, so creating a clone enables you to
manage the lifecycle of the backup snapshot independently of any {search-snaps}.
If you use {ilm-init} to manage your {search-snaps} then it will automatically
look after cloning the snapshot as needed.

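For example, the following requests clone a single index out of a multi-index
snapshot and then mount the clone; the repository, snapshot, and index names
are illustrative:

[source,console]
----
PUT /_snapshot/my_repository/my_snapshot/_clone/my_snapshot_clone
{
  "indices": "my-index"
}

POST /_snapshot/my_repository/my_snapshot_clone/_mount?wait_for_completion=true
{
  "index": "my-index"
}
----
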
You can control the allocation of the shards of {search-snap} indices using the
same mechanisms as for regular indices. For example, you could use
<<shard-allocation-filtering>> to restrict {search-snap} shards to a subset of
your nodes.

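As a sketch, assuming your nodes carry a custom node attribute named `box_type`
(a hypothetical attribute used only for illustration), you could restrict a
mounted index to nodes with a particular value:

[source,console]
----
PUT /restored-my-index/_settings
{
  "index.routing.allocation.require.box_type": "cold"
}
----
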
The speed of recovery of a {search-snap} index is limited by the repository
setting `max_restore_bytes_per_sec` and the node setting
`indices.recovery.max_bytes_per_sec` just like a normal restore operation. By
default `max_restore_bytes_per_sec` is unlimited, but the default for
`indices.recovery.max_bytes_per_sec` depends on the configuration of the node.
See <<recovery-settings>>.

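For example, to raise the node-level recovery rate across the cluster (the
value shown is illustrative, not a recommendation):

[source,console]
----
PUT /_cluster/settings
{
  "persistent": {
    "indices.recovery.max_bytes_per_sec": "100mb"
  }
}
----
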
We recommend that you <<indices-forcemerge, force-merge>> indices to a single
segment per shard before taking a snapshot that will be mounted as a
{search-snap} index. Each read from a snapshot repository takes time and costs
money, and the fewer segments there are the fewer reads are needed to restore
the snapshot or to respond to a search.

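A minimal sketch, assuming an index named `my-index` that is no longer being
written to:

[source,console]
----
POST /my-index/_forcemerge?max_num_segments=1
----
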
[TIP]
====
{search-snaps-cap} are ideal for managing a large archive of historical data.
Historical information is typically searched less frequently than recent data
and therefore may not need the performance benefits of replicas.

For more complex or time-consuming searches, you can use <<async-search>> with
{search-snaps}, as shown in the example below.
====

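As an illustration, the following request starts an asynchronous search against
a mounted index; the index name, field name, and timeout are examples only:

[source,console]
----
POST /restored-my-index/_async_search?wait_for_completion_timeout=2s
{
  "query": {
    "match": {
      "message": "error"
    }
  }
}
----
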
[[searchable-snapshots-repository-types]]
// tag::searchable-snapshot-repo-types[]
Use any of the following repository types with searchable snapshots:

* <<repository-s3,AWS S3>>
* <<repository-gcs,Google Cloud Storage>>
* <<repository-azure,Azure Blob Storage>>
* {plugins}/repository-hdfs.html[Hadoop Distributed File System (HDFS)]
* <<snapshots-filesystem-repository,Shared filesystems>> such as NFS
* <<snapshots-read-only-repository,Read-only HTTP and HTTPS repositories>>

You can also use alternative implementations of these repository types, for
instance <<repository-s3-client,MinIO>>, as long as they are fully compatible.
Use the <<repo-analysis-api>> API to analyze your repository's suitability for
use with searchable snapshots.
// end::searchable-snapshot-repo-types[]

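For example, a small analysis of a repository named `my_repository` might look
like this; the parameter values are deliberately tiny and purely illustrative,
and larger values give more representative results:

[source,console]
----
POST /_snapshot/my_repository/_analyze?blob_count=10&max_blob_size=1mb
----
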
[discrete]
[[how-searchable-snapshots-work]]
=== How {search-snaps} work

When an index is mounted from a snapshot, {es} allocates its shards to data
nodes within the cluster. The data nodes then automatically retrieve the
relevant shard data from the repository onto local storage, based on the
<<searchable-snapshot-mount-storage-options,mount options>> specified. If
possible, searches use data from local storage. If the data is not available
locally, {es} downloads the data that it needs from the snapshot repository.

If a node holding one of these shards fails, {es} automatically allocates the
affected shards on another node, and that node restores the relevant shard data
from the repository. No replicas are needed, and no complicated monitoring or
orchestration is necessary to restore lost shards. Although searchable snapshot
indices have no replicas by default, you may add replicas to these indices by
adjusting `index.number_of_replicas`. Replicas of {search-snap} shards are
recovered by copying data from the snapshot repository, just like primaries of
{search-snap} shards. In contrast, replicas of regular indices are restored by
copying data from the primary.

[discrete]
[[searchable-snapshot-mount-storage-options]]
==== Mount options

To search a snapshot, you must first mount it locally as an index. Usually
{ilm-init} will do this automatically, but you can also call the
<<searchable-snapshots-api-mount-snapshot,mount snapshot>> API yourself. There
are two options for mounting an index from a snapshot, each with different
performance characteristics and local storage footprints:

[[fully-mounted]]
Fully mounted index::
Fully caches the snapshotted index's shards in the {es} cluster. {ilm-init} uses
this option in the `hot` and `cold` phases.
+
Search performance for a fully mounted index is normally comparable to a regular
index, since there is minimal need to access the snapshot repository. While
recovery is ongoing, search performance may be slower than with a regular index
because a search may need some data that has not yet been retrieved into the
local cache. If that happens, {es} will eagerly retrieve the data needed to
complete the search in parallel with the ongoing recovery. On-disk data is
preserved across restarts, such that the node does not need to re-download data
that is already stored on the node after a restart.
+
Indices managed by {ilm-init} are prefixed with `restored-` when fully mounted.

[[partially-mounted]]
Partially mounted index::
Uses a local cache containing only recently searched parts of the snapshotted
index's data. This cache has a fixed size and is shared across shards of
partially mounted indices allocated on the same data node. {ilm-init} uses this
option in the `frozen` phase.
+
If a search requires data that is not in the cache, {es} fetches the missing
data from the snapshot repository. Searches that require these fetches are
slower, but the fetched data is stored in the cache so that similar searches can
be served more quickly in future. {es} will evict infrequently used data from
the cache to free up space. The cache is cleared when a node is restarted.
+
Although slower than a fully mounted index or a regular index, a partially
mounted index still returns search results quickly, even for large data sets,
because the layout of data in the repository is heavily optimized for search.
Many searches will need to retrieve only a small subset of the total shard data
before returning results.
+
Indices managed by {ilm-init} are prefixed with `partial-` when partially
mounted.

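If you mount a snapshot yourself rather than through {ilm-init}, the `storage`
parameter of the <<searchable-snapshots-api-mount-snapshot,mount snapshot>> API
selects between these two options. The following sketch mounts an index as a
partially mounted index; the repository, snapshot, and index names are
illustrative:

[source,console]
----
POST /_snapshot/my_repository/my_snapshot/_mount?storage=shared_cache&wait_for_completion=true
{
  "index": "my-index",
  "renamed_index": "partial-my-index"
}
----
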
To partially mount an index, you must have one or more nodes with a shared cache
available. By default, dedicated frozen data tier nodes (nodes with the
`data_frozen` role and no other data roles) have a shared cache configured using
the greater of 90% of total disk space and total disk space minus a headroom of
100 GB.

Using a dedicated frozen tier is highly recommended for production use. If you
do not have a dedicated frozen tier, you must configure the
`xpack.searchable.snapshot.shared_cache.size` setting to reserve space for the
cache on one or more nodes. Partially mounted indices are only allocated to
nodes that have a shared cache.

[[searchable-snapshots-shared-cache]]
`xpack.searchable.snapshot.shared_cache.size`::
(<<static-cluster-setting,Static>>)
Disk space reserved for the shared cache of partially mounted indices. Accepts a
percentage of total disk space or an absolute <<byte-units,byte value>>.
Defaults to `90%` of total disk space for dedicated frozen data tier nodes.
Otherwise defaults to `0b`.

`xpack.searchable.snapshot.shared_cache.size.max_headroom`::
(<<static-cluster-setting,Static>>, <<byte-units,byte value>>)
For dedicated frozen tier nodes, the max headroom to maintain. If
`xpack.searchable.snapshot.shared_cache.size` is not explicitly set, this
setting defaults to `100GB`. Otherwise it defaults to `-1` (not set). You can
only configure this setting if `xpack.searchable.snapshot.shared_cache.size` is
set as a percentage.

To illustrate how these settings work in concert, let us look at two examples
when using the default values of the settings on a dedicated frozen node:

* A 4000 GB disk will result in a shared cache sized at 3900 GB. 90% of 4000 GB
is 3600 GB, leaving 400 GB headroom. The default `max_headroom` of 100 GB takes
effect, and the result is therefore 3900 GB.
* A 400 GB disk will result in a shared cache sized at 360 GB. 90% of 400 GB is
360 GB, which is larger than the 300 GB left after subtracting the 100 GB
headroom.

You can configure the settings in `elasticsearch.yml`:

[source,yaml]
----
xpack.searchable.snapshot.shared_cache.size: 4TB
----

IMPORTANT: You can only configure these settings on nodes with the
<<data-frozen-node,`data_frozen`>> role. Additionally, nodes with a shared cache
can only have a single <<path-settings,data path>>.

{es} also uses a dedicated system index named `.snapshot-blob-cache` to speed up
the recoveries of {search-snap} shards. This index is used as an additional
caching layer on top of the partially or fully mounted data and contains the
minimal required data to start the {search-snap} shards. {es} automatically
deletes the documents that are no longer used in this index. This periodic
cleanup can be tuned using the following settings:

`searchable_snapshots.blob_cache.periodic_cleanup.interval`::
(<<dynamic-cluster-setting,Dynamic>>)
The interval at which the periodic cleanup of the `.snapshot-blob-cache` index
is scheduled. Defaults to every hour (`1h`).

`searchable_snapshots.blob_cache.periodic_cleanup.retention_period`::
(<<dynamic-cluster-setting,Dynamic>>)
The retention period for obsolete documents in the `.snapshot-blob-cache`
index. Defaults to one hour (`1h`).

`searchable_snapshots.blob_cache.periodic_cleanup.batch_size`::
(<<dynamic-cluster-setting,Dynamic>>)
The number of documents that are searched for and bulk-deleted at once during
the periodic cleanup of the `.snapshot-blob-cache` index. Defaults to `100`.

`searchable_snapshots.blob_cache.periodic_cleanup.pit_keep_alive`::
(<<dynamic-cluster-setting,Dynamic>>)
The value used for the <<point-in-time-keep-alive,point-in-time keep alive>>
requests executed during the periodic cleanup of the `.snapshot-blob-cache`
index. Defaults to `10m`.

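Since these settings are dynamic, you can adjust them at runtime through the
cluster settings API. A minimal sketch, using an illustrative two-hour interval:

[source,console]
----
PUT /_cluster/settings
{
  "persistent": {
    "searchable_snapshots.blob_cache.periodic_cleanup.interval": "2h"
  }
}
----
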
[discrete]
[[searchable-snapshots-costs]]
=== Reduce costs with {search-snaps}

In most cases, {search-snaps} reduce the costs of running a cluster by removing
the need for replica shards and for shard data to be copied between nodes.
However, if it's particularly expensive to retrieve data from a snapshot
repository in your environment, {search-snaps} may be more costly than regular
indices. Ensure that the cost structure of your operating environment is
compatible with {search-snaps} before using them.

[discrete]
[[searchable-snapshots-costs-replicas]]
==== Replica costs

For resiliency, a regular index requires multiple redundant copies of each shard
across multiple nodes. If a node fails, {es} uses the redundancy to rebuild any
lost shard copies. A {search-snap} index doesn't require replicas. If a node
containing a {search-snap} index fails, {es} can rebuild the lost shard cache
from the snapshot repository.

Without replicas, rarely-accessed {search-snap} indices require far fewer
resources. A cold data tier that contains replica-free fully-mounted
{search-snap} indices requires half the nodes and disk space of a tier
containing the same data in regular indices. The frozen tier, which contains
only partially-mounted {search-snap} indices, requires even fewer resources.

[discrete]
[[snapshot-retrieval-costs]]
==== Data transfer costs

When a shard of a regular index is moved between nodes, its contents are copied
from another node in your cluster. In many environments, the costs of moving
data between nodes are significant, especially if running in a Cloud environment
with nodes in different zones. In contrast, when mounting a {search-snap} index
or moving one of its shards, the data is always copied from the snapshot
repository. This is typically much cheaper.

WARNING: Most cloud providers charge significant fees for data transferred
between regions and for data transferred out of their platforms. You should only
mount snapshots into a cluster that is in the same region as the snapshot
repository. If you wish to search data across multiple regions, configure
multiple clusters and use <<modules-cross-cluster-search,{ccs}>> or
<<xpack-ccr,{ccr}>> instead of {search-snaps}.

[discrete]
[[back-up-restore-searchable-snapshots]]
=== Back up and restore {search-snaps}

You can use <<snapshots-take-snapshot,regular snapshots>> to back up a cluster
containing {search-snap} indices. When you restore a snapshot containing
{search-snap} indices, these indices are restored as {search-snap} indices
again.

Before you restore a snapshot containing a {search-snap} index, you must first
<<snapshots-register-repository,register the repository>> containing the
original index snapshot. When restored, the {search-snap} index mounts the
original index snapshot from its original repository. If you wish, you can use
separate repositories for regular snapshots and {search-snaps}.

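As a sketch, once the repository containing the original index snapshot is
registered, restoring the backed-up {search-snap} index is an ordinary restore
request; all names here are illustrative:

[source,console]
----
POST /_snapshot/my_backup_repository/my_backup_snapshot/_restore
{
  "indices": "restored-my-index"
}
----
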
A snapshot of a {search-snap} index contains only a small amount of metadata
which identifies its original index snapshot. It does not contain any data from
the original index. Restoring a backup will fail for any {search-snap} indices
whose original index snapshot is unavailable.

Because {search-snap} indices are not regular indices, it is not possible to use
a <<snapshots-source-only-repository,source-only repository>> to take snapshots
of {search-snap} indices.

[discrete]
[[searchable-snapshots-reliability]]
=== Reliability of {search-snaps}

The sole copy of the data in a {search-snap} index is the underlying snapshot,
stored in the repository. For example:

* You cannot unregister a repository while any of the searchable snapshots it
contains are mounted in {es}. You also cannot delete a snapshot if any of its
indices are mounted as a searchable snapshot in the same cluster.
* If you mount indices from snapshots held in a repository to which a different
cluster has write access then you must make sure that the other cluster does not
delete these snapshots.
* If you delete a snapshot while it is mounted as a searchable snapshot then the
data is lost. Similarly, if the repository fails or corrupts the contents of the
snapshot then the data is lost.
* Although {es} may have cached the data onto local storage, these caches may be
incomplete and cannot be used to recover any data after a repository failure.
You must make sure that your repository is reliable and protects against
corruption of your data while it is at rest in the repository.

The blob storage offered by all major public cloud providers typically offers
very good protection against data loss or corruption. If you manage your own
repository storage then you are responsible for its reliability.