[[searchable-snapshots]]
== {search-snaps-cap}

{search-snaps-cap} let you use <<snapshot-restore,snapshots>> to search
infrequently accessed and read-only data in a very cost-effective fashion. The
<<cold-tier,cold>> and <<frozen-tier,frozen>> data tiers use {search-snaps} to
reduce your storage and operating costs.

{search-snaps-cap} eliminate the need for <<scalability,replica shards>>,
potentially halving the local storage needed to search your data.
{search-snaps-cap} rely on the same snapshot mechanism you already use for
backups and have minimal impact on your snapshot repository storage costs.

[discrete]
[[using-searchable-snapshots]]
=== Using {search-snaps}

Searching a {search-snap} index is the same as searching any other index.

By default, {search-snap} indices have no replicas. The underlying snapshot
provides resilience and the query volume is expected to be low enough that a
single shard copy will be sufficient. However, if you need to support a higher
query volume, you can add replicas by adjusting the `index.number_of_replicas`
index setting.
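
For example, the following request adds one replica to a mounted index using
the update index settings API. The index name is a placeholder; substitute the
name of your own {search-snap} index.

[source,console]
----
# "my-mounted-index" is a placeholder name
PUT /my-mounted-index/_settings
{
  "index": {
    "number_of_replicas": 1
  }
}
----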

If a node fails and {search-snap} shards need to be recovered elsewhere, there
is a brief window of time, while {es} allocates the shards to other nodes,
during which the cluster health will not be `green`. Searches that hit these
shards may fail or return partial results until the shards are reallocated to
healthy nodes.

You typically manage {search-snaps} through {ilm-init}. The
<<ilm-searchable-snapshot, searchable snapshots>> action automatically converts
a regular index into a {search-snap} index when it reaches the `cold` or
`frozen` phase. You can also make indices in existing snapshots searchable by
manually mounting them using the <<searchable-snapshots-api-mount-snapshot,
mount snapshot>> API.
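
As a sketch, the following request mounts an index from an existing snapshot as
a fully mounted {search-snap} index. The repository, snapshot, and index names
are placeholders.

[source,console]
----
# "my-repository", "my-snapshot", and "my-index" are placeholder names
POST /_snapshot/my-repository/my-snapshot/_mount?wait_for_completion=true
{
  "index": "my-index",
  "renamed_index": "my-mounted-index"
}
----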

To mount an index from a snapshot that contains multiple indices, we recommend
creating a <<clone-snapshot-api, clone>> of the snapshot that contains only the
index you want to search, and mounting the clone. You should not delete a
snapshot if it has any mounted indices, so creating a clone enables you to
manage the lifecycle of the backup snapshot independently of any
{search-snaps}. If you use {ilm-init} to manage your {search-snaps} then it
will automatically look after cloning the snapshot as needed.
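
For illustration, the following request clones only one index from an existing
snapshot into a new snapshot, which you can then mount. The names are
placeholders.

[source,console]
----
# "my-repository", "my-snapshot", "my-index-clone", and "my-index" are placeholder names
PUT /_snapshot/my-repository/my-snapshot/_clone/my-index-clone
{
  "indices": "my-index"
}
----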

You can control the allocation of the shards of {search-snap} indices using the
same mechanisms as for regular indices. For example, you could use
<<shard-allocation-filtering>> to restrict {search-snap} shards to a subset of
your nodes.
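
A minimal sketch of such a filter, assuming your target nodes are configured
with a hypothetical custom node attribute named `box_type` set to `warm`:

[source,console]
----
# "my-mounted-index" and the "box_type" node attribute are illustrative placeholders
PUT /my-mounted-index/_settings
{
  "index.routing.allocation.require.box_type": "warm"
}
----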

The speed of recovery of a {search-snap} index is limited by the repository
setting `max_restore_bytes_per_sec` and the node setting
`indices.recovery.max_bytes_per_sec` just like a normal restore operation. By
default, `max_restore_bytes_per_sec` is unlimited, but the default for
`indices.recovery.max_bytes_per_sec` depends on the configuration of the node.
See <<recovery-settings>>.
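
For example, you could raise the node-level recovery rate limit with a dynamic
cluster settings update. The value shown is only illustrative; choose a limit
that suits your hardware and network.

[source,console]
----
# "100mb" is an illustrative value, not a recommendation
PUT /_cluster/settings
{
  "persistent": {
    "indices.recovery.max_bytes_per_sec": "100mb"
  }
}
----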

We recommend that you <<indices-forcemerge, force-merge>> indices to a single
segment per shard before taking a snapshot that will be mounted as a
{search-snap} index. Each read from a snapshot repository takes time and costs
money, and the fewer segments there are the fewer reads are needed to restore
the snapshot or to respond to a search.
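
For example, you could force-merge an index down to one segment per shard
before snapshotting it. The index name is a placeholder.

[source,console]
----
# "my-index" is a placeholder name
POST /my-index/_forcemerge?max_num_segments=1
----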

[TIP]
====
{search-snaps-cap} are ideal for managing a large archive of historical data.
Historical information is typically searched less frequently than recent data
and therefore may not need the performance benefits of replicas.

For more complex or time-consuming searches, you can use <<async-search>> with
{search-snaps}, as in the example below.
====
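
As a minimal sketch, the following request submits an async search against a
mounted index. The index name, timeout, and query are placeholders.

[source,console]
----
# "my-mounted-index" and the query are illustrative placeholders
POST /my-mounted-index/_async_search?wait_for_completion_timeout=2s
{
  "query": {
    "match_all": {}
  }
}
----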

[[searchable-snapshots-repository-types]]
// tag::searchable-snapshot-repo-types[]
Use any of the following repository types with searchable snapshots:

* {plugins}/repository-s3.html[AWS S3]
* {plugins}/repository-gcs.html[Google Cloud Storage]
* {plugins}/repository-azure.html[Azure Blob Storage]
* {plugins}/repository-hdfs.html[Hadoop Distributed File Store (HDFS)]
* <<snapshots-filesystem-repository,Shared filesystems>> such as NFS
* <<snapshots-read-only-repository,Read-only HTTP and HTTPS repositories>>

You can also use alternative implementations of these repository types, for
instance
{plugins}/repository-s3-client.html#repository-s3-compatible-services[MinIO],
as long as they are fully compatible. Use the <<repo-analysis-api>> API
to analyze your repository's suitability for use with searchable snapshots.
// end::searchable-snapshot-repo-types[]
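
As a sketch of such an analysis, the following request runs a small, quick
check against a repository named `my-repository`. The repository name and
parameter values are placeholders; a thorough analysis would use larger values.

[source,console]
----
# "my-repository" and the parameter values are illustrative placeholders
POST /_snapshot/my-repository/_analyze?blob_count=10&max_blob_size=1mb&timeout=120s
----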

[discrete]
[[how-searchable-snapshots-work]]
=== How {search-snaps} work

When an index is mounted from a snapshot, {es} allocates its shards to data
nodes within the cluster. The data nodes then automatically retrieve the
relevant shard data from the repository onto local storage, based on the
<<searchable-snapshot-mount-storage-options,mount options>> specified. If
possible, searches use data from local storage. If the data is not available
locally, {es} downloads the data that it needs from the snapshot repository.

If a node holding one of these shards fails, {es} automatically allocates the
affected shards to another node, and that node restores the relevant shard data
from the repository. No replicas are needed, and no complicated monitoring or
orchestration is necessary to restore lost shards. Although searchable snapshot
indices have no replicas by default, you may add replicas to these indices by
adjusting `index.number_of_replicas`. Replicas of {search-snap} shards are
recovered by copying data from the snapshot repository, just like primaries of
{search-snap} shards. In contrast, replicas of regular indices are restored by
copying data from the primary.

[discrete]
[[searchable-snapshot-mount-storage-options]]
==== Mount options

To search a snapshot, you must first mount it locally as an index. Usually
{ilm-init} will do this automatically, but you can also call the
<<searchable-snapshots-api-mount-snapshot,mount snapshot>> API yourself. There
are two options for mounting an index from a snapshot, each with different
performance characteristics and local storage footprints:

[[fully-mounted]]
Fully mounted index::
Loads a full copy of the snapshotted index's shards onto node-local storage
within the cluster. {ilm-init} uses this option in the `hot` and `cold` phases.
+
Search performance for a fully mounted index is normally
comparable to a regular index, since there is minimal need to access the
snapshot repository. While recovery is ongoing, search performance may be
slower than with a regular index because a search may need some data that has
not yet been retrieved into the local copy. If that happens, {es} will eagerly
retrieve the data needed to complete the search in parallel with the ongoing
recovery.
+
Indices managed by {ilm-init} are prefixed with `restored-` when fully mounted.

[[partially-mounted]]
Partially mounted index::
Uses a local cache containing only recently searched parts of the snapshotted
index's data. This cache has a fixed size and is shared across nodes in the
frozen tier. {ilm-init} uses this option in the `frozen` phase.
+
If a search requires data that is not in the cache, {es} fetches the missing
data from the snapshot repository. Searches that require these fetches are
slower, but the fetched data is stored in the cache so that similar searches
can be served more quickly in future. {es} will evict infrequently used data
from the cache to free up space.
+
Although slower than a fully mounted index or a regular index, a
partially mounted index still returns search results quickly, even for
large data sets, because the layout of data in the repository is heavily
optimized for search. Many searches will need to retrieve only a small subset of
the total shard data before returning results.
+
Indices managed by {ilm-init} are prefixed with `partial-` when partially mounted.
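
As a sketch, you can partially mount an index yourself by passing
`storage=shared_cache` to the mount snapshot API. The repository, snapshot, and
index names are placeholders.

[source,console]
----
# "my-repository", "my-snapshot", and "my-index" are placeholder names
POST /_snapshot/my-repository/my-snapshot/_mount?storage=shared_cache
{
  "index": "my-index"
}
----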

To partially mount an index, you must have one or more nodes with a shared cache
available. By default, dedicated frozen data tier nodes (nodes with the
`data_frozen` role and no other data roles) have a shared cache configured using
the greater of 90% of total disk space and total disk space minus a headroom of
100GB.

Using a dedicated frozen tier is highly recommended for production use. If you
do not have a dedicated frozen tier, you must configure the
`xpack.searchable.snapshot.shared_cache.size` setting to reserve space for the
cache on one or more nodes. Partially mounted indices
are only allocated to nodes that have a shared cache.

[[searchable-snapshots-shared-cache]]
`xpack.searchable.snapshot.shared_cache.size`::
(<<static-cluster-setting,Static>>)
Disk space reserved for the shared cache of partially mounted indices.
Accepts a percentage of total disk space or an absolute <<byte-units,byte
value>>. Defaults to `90%` of total disk space for dedicated frozen data tier
nodes. Otherwise defaults to `0b`.

`xpack.searchable.snapshot.shared_cache.size.max_headroom`::
(<<static-cluster-setting,Static>>, <<byte-units,byte value>>)
For dedicated frozen tier nodes, the max headroom to maintain. If
`xpack.searchable.snapshot.shared_cache.size` is not explicitly set, this
setting defaults to `100GB`. Otherwise it defaults to `-1` (not set). You can
only configure this setting if `xpack.searchable.snapshot.shared_cache.size` is
set as a percentage.

To illustrate how these settings work in concert, let us look at two examples
when using the default values of the settings on a dedicated frozen node:

* A 4000 GB disk will result in a shared cache sized at 3900 GB. 90% of 4000 GB
is 3600 GB, leaving 400 GB headroom. The default `max_headroom` of 100 GB
takes effect, and the result is therefore 3900 GB.
* A 400 GB disk will result in a shared cache sized at 360 GB. 90% of 400 GB is
360 GB, leaving only 40 GB headroom, which is below the 100 GB maximum, so the
`max_headroom` setting has no effect.

You can configure the settings in `elasticsearch.yml`:

[source,yaml]
----
xpack.searchable.snapshot.shared_cache.size: 4TB
----

IMPORTANT: You can only configure these settings on nodes with the
<<data-frozen-node,`data_frozen`>> role. Additionally, nodes with a shared
cache can only have a single <<path-settings,data path>>.

{es} also uses a dedicated system index named `.snapshot-blob-cache` to speed
up the recoveries of {search-snap} shards. This index is used as an additional
caching layer on top of the partially or fully mounted data and contains the
minimal required data to start the {search-snap} shards. {es} automatically
deletes the documents that are no longer used in this index. This periodic
cleanup can be tuned using the following settings:

`searchable_snapshots.blob_cache.periodic_cleanup.interval`::
(<<dynamic-cluster-setting,Dynamic>>)
The interval at which the periodic cleanup of the `.snapshot-blob-cache`
index is scheduled. Defaults to every hour (`1h`).

`searchable_snapshots.blob_cache.periodic_cleanup.retention_period`::
(<<dynamic-cluster-setting,Dynamic>>)
The retention period to keep obsolete documents in the `.snapshot-blob-cache`
index. Defaults to one hour (`1h`).

`searchable_snapshots.blob_cache.periodic_cleanup.batch_size`::
(<<dynamic-cluster-setting,Dynamic>>)
The number of documents that are searched for and bulk-deleted at once during
the periodic cleanup of the `.snapshot-blob-cache` index. Defaults to `100`.

`searchable_snapshots.blob_cache.periodic_cleanup.pit_keep_alive`::
(<<dynamic-cluster-setting,Dynamic>>)
The value used for the <<point-in-time-keep-alive,point-in-time keep alive>>
requests executed during the periodic cleanup of the `.snapshot-blob-cache`
index. Defaults to `10m`.
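
Because these settings are dynamic, you can adjust them with a cluster settings
update. The values shown are only illustrative.

[source,console]
----
# The values below are illustrative, not recommendations
PUT /_cluster/settings
{
  "persistent": {
    "searchable_snapshots.blob_cache.periodic_cleanup.interval": "2h",
    "searchable_snapshots.blob_cache.periodic_cleanup.batch_size": 200
  }
}
----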

[discrete]
[[searchable-snapshots-costs]]
=== Reduce costs with {search-snaps}

In most cases, {search-snaps} reduce the costs of running a cluster by removing
the need for replica shards and for shard data to be copied between
nodes. However, if it's particularly expensive to retrieve data from a snapshot
repository in your environment, {search-snaps} may be more costly than
regular indices. Ensure that the cost structure of your operating environment is
compatible with {search-snaps} before using them.

[discrete]
[[searchable-snapshots-costs-replicas]]
==== Replica costs

For resiliency, a regular index requires multiple redundant copies of each shard
across multiple nodes. If a node fails, {es} uses the redundancy to rebuild any
lost shard copies. A {search-snap} index doesn't require replicas. If a node
containing a {search-snap} index fails, {es} can rebuild the lost shard copy
from the snapshot repository.

Without replicas, rarely-accessed {search-snap} indices require far fewer
resources. A cold data tier that contains replica-free fully-mounted
{search-snap} indices requires half the nodes and disk space of a tier
containing the same data in regular indices. The frozen tier, which contains
only partially-mounted {search-snap} indices, requires even fewer resources.

[discrete]
[[snapshot-retrieval-costs]]
==== Data transfer costs

When a shard of a regular index is moved between nodes, its contents are copied
from another node in your cluster. In many environments, the costs of moving data
between nodes are significant, especially if running in a Cloud environment with
nodes in different zones. In contrast, when mounting a {search-snap} index or
moving one of its shards, the data is always copied from the snapshot repository.
This is typically much cheaper.

WARNING: Most cloud providers charge significant fees for data transferred
between regions and for data transferred out of their platforms. You should only
mount snapshots into a cluster that is in the same region as the snapshot
repository. If you wish to search data across multiple regions, configure
multiple clusters and use <<modules-cross-cluster-search,{ccs}>> or
<<xpack-ccr,{ccr}>> instead of {search-snaps}.

[discrete]
[[back-up-restore-searchable-snapshots]]
=== Back up and restore {search-snaps}

You can use <<snapshots-take-snapshot,regular snapshots>> to back up a
cluster containing {search-snap} indices. When you restore a snapshot
containing {search-snap} indices, these indices are restored as {search-snap}
indices again.

Before you restore a snapshot containing a {search-snap} index, you must first
<<snapshots-register-repository,register the repository>> containing the
original index snapshot. When restored, the {search-snap} index mounts the
original index snapshot from its original repository. If desired, you
can use separate repositories for regular snapshots and {search-snaps}.
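
As a minimal sketch, registering an S3 repository before such a restore might
look like the following. The repository name and bucket are placeholders, and
other repository types take different settings.

[source,console]
----
# "my-repository" and "my-snapshot-bucket" are placeholder names
PUT /_snapshot/my-repository
{
  "type": "s3",
  "settings": {
    "bucket": "my-snapshot-bucket"
  }
}
----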

A snapshot of a {search-snap} index contains only a small amount of metadata
which identifies its original index snapshot. It does not contain any data from
the original index. The restore of a backup will fail to restore any
{search-snap} indices whose original index snapshot is unavailable.

[discrete]
[[searchable-snapshots-reliability]]
=== Reliability of {search-snaps}

The sole copy of the data in a {search-snap} index is the underlying snapshot,
stored in the repository. If the repository fails or corrupts the contents of
the snapshot, then the data is lost. Although {es} may have made copies of the
data onto local storage, these copies may be incomplete and cannot be used to
recover any data after a repository failure. You must make sure that your
repository is reliable and protects against corruption of your data while it is
at rest in the repository.

The blob storage offered by all major public cloud providers typically offers
very good protection against data loss or corruption. If you manage your own
repository storage then you are responsible for its reliability.