[[searchable-snapshots]]
== {search-snaps-cap}

{search-snaps-cap} let you use <<snapshot-restore,snapshots>> to search
infrequently accessed and read-only data in a very cost-effective fashion. The
<<cold-tier,cold>> and <<frozen-tier,frozen>> data tiers use {search-snaps} to
reduce your storage and operating costs.

{search-snaps-cap} eliminate the need for <<scalability,replica shards>>,
potentially halving the local storage needed to search your data.
{search-snaps-cap} rely on the same snapshot mechanism you already use for
backups and have minimal impact on your snapshot repository storage costs.

[discrete]
[[using-searchable-snapshots]]
=== Using {search-snaps}

Searching a {search-snap} index is the same as searching any other index.

By default, {search-snap} indices have no replicas. The underlying snapshot
provides resilience and the query volume is expected to be low enough that a
single shard copy will be sufficient. However, if you need to support a higher
query volume, you can add replicas by adjusting the `index.number_of_replicas`
index setting.
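
For example, a request like the following adds one replica for each primary
shard through the update index settings API. The index name `my-mounted-index`
is a placeholder for your own mounted index:

[source,console]
----
PUT /my-mounted-index/_settings
{
  "index": {
    "number_of_replicas": 1 <1>
  }
}
----
<1> Adds one replica for each primary shard of the mounted index.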

If a node fails and {search-snap} shards need to be recovered elsewhere, there
is a brief window of time while {es} allocates the shards to other nodes during
which the cluster health will not be `green`. Searches that hit these shards
may fail or return partial results until the shards are reallocated to healthy
nodes.

You typically manage {search-snaps} through {ilm-init}. The
<<ilm-searchable-snapshot, searchable snapshots>> action automatically converts
a regular index into a {search-snap} index when it reaches the `cold` or
`frozen` phase. You can also make indices in existing snapshots searchable by
manually mounting them using the <<searchable-snapshots-api-mount-snapshot,
mount snapshot>> API.
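
For example, a minimal mount request might look like the following sketch. The
repository, snapshot, and index names are placeholders:

[source,console]
----
POST /_snapshot/my_repository/my_snapshot/_mount?wait_for_completion=true
{
  "index": "my_index", <1>
  "renamed_index": "my_mounted_index" <2>
}
----
<1> The name of the index inside the snapshot.
<2> The name to use for the mounted {search-snap} index.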

To mount an index from a snapshot that contains multiple indices, we recommend
creating a <<clone-snapshot-api, clone>> of the snapshot that contains only the
index you want to search, and mounting the clone. You should not delete a
snapshot if it has any mounted indices, so creating a clone enables you to
manage the lifecycle of the backup snapshot independently of any
{search-snaps}. If you use {ilm-init} to manage your {search-snaps} then it
will automatically look after cloning the snapshot as needed.
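
As an illustration, a clone containing only the index you want to mount could
be created with a request like this, where `my_snapshot_clone` and the other
names are placeholders:

[source,console]
----
PUT /_snapshot/my_repository/my_snapshot/_clone/my_snapshot_clone
{
  "indices": "my_index" <1>
}
----
<1> Copies only `my_index` from `my_snapshot` into the new snapshot
`my_snapshot_clone`.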

You can control the allocation of the shards of {search-snap} indices using the
same mechanisms as for regular indices. For example, you could use
<<shard-allocation-filtering>> to restrict {search-snap} shards to a subset of
your nodes.
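
For instance, if your nodes are tagged with a custom attribute such as
`node.attr.box_type` (a hypothetical attribute name used here for
illustration), you could pin the mounted index to a subset of them:

[source,console]
----
PUT /my-mounted-index/_settings
{
  "index.routing.allocation.require.box_type": "cold" <1>
}
----
<1> Allocates the shards of `my-mounted-index` only to nodes configured with
`node.attr.box_type: cold`.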

The speed of recovery of a {search-snap} index is limited by the repository
setting `max_restore_bytes_per_sec` and the node setting
`indices.recovery.max_bytes_per_sec` just like a normal restore operation. By
default `max_restore_bytes_per_sec` is unlimited, but the default for
`indices.recovery.max_bytes_per_sec` depends on the configuration of the node.
See <<recovery-settings>>.
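
For example, you might raise the node-level recovery limit dynamically with the
cluster update settings API; the `100mb` value below is purely illustrative:

[source,console]
----
PUT /_cluster/settings
{
  "persistent": {
    "indices.recovery.max_bytes_per_sec": "100mb" <1>
  }
}
----
<1> Allows each node to recover shard data at up to 100 MB per second.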

We recommend that you <<indices-forcemerge, force-merge>> indices to a single
segment per shard before taking a snapshot that will be mounted as a
{search-snap} index. Each read from a snapshot repository takes time and costs
money, and the fewer segments there are the fewer reads are needed to restore
the snapshot or to respond to a search.
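
For example, the following request force-merges `my-index` (a placeholder name)
down to a single segment per shard before you snapshot it:

[source,console]
----
POST /my-index/_forcemerge?max_num_segments=1
----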

[TIP]
====
{search-snaps-cap} are ideal for managing a large archive of historical data.
Historical information is typically searched less frequently than recent data,
so it may not need the performance benefits that replicas provide.

For more complex or time-consuming searches, you can use <<async-search>> with
{search-snaps}.
====

[[searchable-snapshots-repository-types]]
You can use any of the following repository types with searchable snapshots:

* {plugins}/repository-s3.html[AWS S3]
* {plugins}/repository-gcs.html[Google Cloud Storage]
* {plugins}/repository-azure.html[Azure Blob Storage]
* {plugins}/repository-hdfs.html[Hadoop Distributed File Store (HDFS)]
* <<snapshots-filesystem-repository,Shared filesystems>> such as NFS

You can also use alternative implementations of these repository types, for
instance
{plugins}/repository-s3-client.html#repository-s3-compatible-services[Minio],
as long as they are fully compatible. You can use the <<repo-analysis-api>> API
to analyze your repository's suitability for use with searchable snapshots.
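
For example, a basic analysis of a repository named `my_repository` (a
placeholder) might look like the following; the `blob_count` and
`max_blob_size` values shown here are illustrative and control how much test
data the analysis writes to the repository:

[source,console]
----
POST /_snapshot/my_repository/_analyze?blob_count=100&max_blob_size=10mb
----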

[discrete]
[[how-searchable-snapshots-work]]
=== How {search-snaps} work

When an index is mounted from a snapshot, {es} allocates its shards to data
nodes within the cluster. The data nodes then automatically retrieve the
relevant shard data from the repository onto local storage, based on the
<<searchable-snapshot-mount-storage-options,mount options>> specified. If
possible, searches use data from local storage. If the data is not available
locally, {es} downloads the data that it needs from the snapshot repository.

If a node holding one of these shards fails, {es} automatically allocates the
affected shards on another node, and that node restores the relevant shard data
from the repository. No replicas are needed, and no complicated monitoring or
orchestration is necessary to restore lost shards. Although searchable snapshot
indices have no replicas by default, you may add replicas to these indices by
adjusting `index.number_of_replicas`. Replicas of {search-snap} shards are
recovered by copying data from the snapshot repository, just like primaries of
{search-snap} shards. In contrast, replicas of regular indices are restored by
copying data from the primary.

[discrete]
[[searchable-snapshot-mount-storage-options]]
==== Mount options

To search a snapshot, you must first mount it locally as an index. Usually
{ilm-init} will do this automatically, but you can also call the
<<searchable-snapshots-api-mount-snapshot,mount snapshot>> API yourself. There
are two options for mounting a snapshot, each with different performance
characteristics and local storage footprints:

[[full-copy]]
Full copy::
Loads a full copy of the snapshotted index's shards onto node-local storage
within the cluster. This is the default mount option. {ilm-init} uses this
option by default in the `hot` and `cold` phases.
+
Search performance for a full-copy searchable snapshot index is normally
comparable to a regular index, since there is minimal need to access the
snapshot repository. While recovery is ongoing, search performance may be
slower than with a regular index because a search may need some data that has
not yet been retrieved into the local copy. If that happens, {es} will eagerly
retrieve the data needed to complete the search in parallel with the ongoing
recovery.

[[shared-cache]]
Shared cache::
+
experimental::[]
+
Uses a local cache containing only recently searched parts of the snapshotted
index's data. {ilm-init} uses this option by default in the `frozen` phase and
corresponding frozen tier.
+
If a search requires data that is not in the cache, {es} fetches the missing
data from the snapshot repository. Searches that require these fetches are
slower, but the fetched data is stored in the cache so that similar searches
can be served more quickly in future. {es} will evict infrequently used data
from the cache to free up space.
+
Although slower than a full local copy or a regular index, a shared-cache
searchable snapshot index still returns search results quickly, even for large
data sets, because the layout of data in the repository is heavily optimized
for search. Many searches will need to retrieve only a small subset of the
total shard data before returning results.

To mount a searchable snapshot index with the shared cache mount option, you
must configure the `xpack.searchable.snapshot.shared_cache.size` setting to
reserve space for the cache on one or more nodes. Indices mounted with the
shared cache mount option are only allocated to nodes that have this setting
configured.
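
For example, assuming the shared cache size is configured on at least one node,
you could select this mount option with the `storage` query parameter of the
mount snapshot API; the repository, snapshot, and index names below are
placeholders:

[source,console]
----
POST /_snapshot/my_repository/my_snapshot/_mount?storage=shared_cache <1>
{
  "index": "my_index"
}
----
<1> Mounts the index using the shared cache rather than a full local copy.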

[[searchable-snapshots-shared-cache]]
`xpack.searchable.snapshot.shared_cache.size`::
(<<static-cluster-setting,Static>>, <<byte-units,byte value>>)
The size of the space reserved for the shared cache. Defaults to `0b`, meaning
that the node has no shared cache.

You can configure the setting in `elasticsearch.yml`:

[source,yaml]
----
xpack.searchable.snapshot.shared_cache.size: 4TB
----

IMPORTANT: Currently, you can configure
`xpack.searchable.snapshot.shared_cache.size` on any node. In a future release,
you will only be able to configure this setting on nodes with the
<<data-frozen-node,`data_frozen`>> role.

You can set `xpack.searchable.snapshot.shared_cache.size` to any size between a
couple of gigabytes and 90% of available disk space. We only recommend larger
sizes if you use the node exclusively for a frozen tier or for searchable
snapshots.

[discrete]
[[back-up-restore-searchable-snapshots]]
=== Back up and restore {search-snaps}

You can use <<snapshot-lifecycle-management,regular snapshots>> to back up a
cluster containing {search-snap} indices. When you restore a snapshot
containing {search-snap} indices, these indices are restored as {search-snap}
indices again.

Before you restore a snapshot containing a {search-snap} index, you must first
<<snapshots-register-repository,register the repository>> containing the
original index snapshot. When restored, the {search-snap} index mounts the
original index snapshot from its original repository. If you want, you can use
separate repositories for regular snapshots and {search-snaps}.
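
For example, if the original index snapshots live in an S3 bucket, you might
register that repository before restoring; the repository name, type, and
bucket shown here are placeholders and must match the repository that actually
holds the original snapshots:

[source,console]
----
PUT /_snapshot/my_repository
{
  "type": "s3",
  "settings": {
    "bucket": "my-snapshot-bucket" <1>
  }
}
----
<1> The registered repository must point at the same storage that contains the
original index snapshots.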

A snapshot of a {search-snap} index contains only a small amount of metadata
which identifies its original index snapshot. It does not contain any data from
the original index. The restore of a backup will fail to restore any
{search-snap} indices whose original index snapshot is unavailable.

[discrete]
[[searchable-snapshots-reliability]]
=== Reliability of {search-snaps}

The sole copy of the data in a {search-snap} index is the underlying snapshot,
stored in the repository. If the repository fails or corrupts the contents of
the snapshot then the data is lost. Although {es} may have made copies of the
data onto local storage, these copies may be incomplete and cannot be used to
recover any data after a repository failure. You must make sure that your
repository is reliable and protects against corruption of your data while it is
at rest in the repository.

The blob storage offered by all major public cloud providers typically offers
very good protection against data loss or corruption. If you manage your own
repository storage then you are responsible for its reliability.

[discrete]
[[searchable-snapshots-frozen-tier-on-cloud]]
=== Configure a frozen tier on {ess}

The frozen data tier is not yet available on the {ess-trial}[{ess}]. However,
you can configure another tier to use <<shared-cache,shared snapshot caches>>.
This effectively recreates a frozen tier in your {ess} deployment. Follow these
steps:

. Choose an existing tier to use. Typically, you'll use the cold tier, but the
hot and warm tiers are also supported. You can use this tier as a shared tier,
or you can dedicate the tier exclusively to shared snapshot caches.
. Log in to the {ess-trial}[{ess} Console].
. Select your deployment from the {ess} home page or the deployments page.
. From your deployment menu, select **Edit deployment**.
. On the **Edit** page, click **Edit elasticsearch.yml** under your selected
{es} tier.
. In the `elasticsearch.yml` file, add the
<<searchable-snapshots-shared-cache,`xpack.searchable.snapshot.shared_cache.size`>>
setting. For example:
+
[source,yaml]
----
xpack.searchable.snapshot.shared_cache.size: 50GB
----
. Click **Save** and **Confirm** to apply your configuration changes.