@@ -117,16 +117,15 @@ copying data from the primary.
To search a snapshot, you must first mount it locally as an index. Usually
{ilm-init} will do this automatically, but you can also call the
<<searchable-snapshots-api-mount-snapshot,mount snapshot>> API yourself. There
-are two options for mounting a snapshot, each with different performance
-characteristics and local storage footprints:
+are two options for mounting an index from a snapshot, each with different
+performance characteristics and local storage footprints:

-[[full-copy]]
-Full copy::
+[[fully-mounted]]
+Fully mounted index::
Loads a full copy of the snapshotted index's shards onto node-local storage
-within the cluster. This is the default mount option. {ilm-init} uses this
-option by default in the `hot` and `cold` phases.
+within the cluster. {ilm-init} uses this option in the `hot` and `cold` phases.
+
-Search performance for a full-copy searchable snapshot index is normally
+Search performance for a fully mounted index is normally
comparable to a regular index, since there is minimal need to access the
snapshot repository. While recovery is ongoing, search performance may be
slower than with a regular index because a search may need some data that has
@@ -134,11 +133,11 @@ not yet been retrieved into the local copy. If that happens, {es} will eagerly
retrieve the data needed to complete the search in parallel with the ongoing
recovery.

-[[shared-cache]]
-Shared cache::
+[[partially-mounted]]
+Partially mounted index::
Uses a local cache containing only recently searched parts of the snapshotted
-index's data. {ilm-init} uses this option by default in the `frozen` phase and
-corresponding frozen tier.
+index's data. This cache has a fixed size and is shared across nodes in the
+frozen tier. {ilm-init} uses this option in the `frozen` phase.
+
If a search requires data that is not in the cache, {es} fetches the missing
data from the snapshot repository. Searches that require these fetches are
@@ -146,39 +145,39 @@ slower, but the fetched data is stored in the cache so that similar searches
can be served more quickly in future. {es} will evict infrequently used data
from the cache to free up space.
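For reference, either option can be selected when calling the mount snapshot API through its `storage` query parameter: `full_copy` (the default) produces a fully mounted index, and `shared_cache` a partially mounted one. A minimal sketch, with placeholder repository, snapshot, and index names:

[source,console]
----
POST /_snapshot/my_repository/my_snapshot/_mount?storage=shared_cache
{
  "index": "my_index"
}
----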
+
-Although slower than a full local copy or a regular index, a shared-cache
-searchable snapshot index still returns search results quickly, even for large
-data sets, because the layout of data in the repository is heavily optimized
-for search. Many searches will need to retrieve only a small subset of the
-total shard data before returning results.
-
-To mount a searchable snapshot index with the shared cache mount option, you
-must have one or more nodes with a shared cache available. By default,
-dedicated frozen data tier nodes (nodes with the `data_frozen` role and no other
-data roles) have a shared cache configured using the greater of 90% of total
-disk space and total disk space subtracted a headroom of 100GB.
+Although slower than a fully mounted index or a regular index, a
+partially mounted index still returns search results quickly, even for
+large data sets, because the layout of data in the repository is heavily
+optimized for search. Many searches will need to retrieve only a small subset of
+the total shard data before returning results.
+
+To partially mount an index, you must have one or more nodes with a shared cache
+available. By default, dedicated frozen data tier nodes (nodes with the
+`data_frozen` role and no other data roles) have a shared cache configured using
+the greater of 90% of total disk space and total disk space minus a 100GB
+headroom.

Using a dedicated frozen tier is highly recommended for production use. If you
do not have a dedicated frozen tier, you must configure the
`xpack.searchable.snapshot.shared_cache.size` setting to reserve space for the
-cache on one or more nodes. Indices mounted with the shared cache mount option
+cache on one or more nodes. Partially mounted indices
are only allocated to nodes that have a shared cache.

[[searchable-snapshots-shared-cache]]
`xpack.searchable.snapshot.shared_cache.size`::
(<<static-cluster-setting,Static>>)
-The size of the space reserved for the shared cache, either specified as a
-percentage of total disk space or an absolute <<byte-units,byte value>>.
-Defaults to 90% of total disk space on dedicated frozen data tier nodes,
-otherwise `0b`.
+Disk space reserved for the shared cache of partially mounted indices.
+Accepts a percentage of total disk space or an absolute <<byte-units,byte
+value>>. Defaults to `90%` of total disk space for dedicated frozen data tier
+nodes. Otherwise defaults to `0b`.

`xpack.searchable.snapshot.shared_cache.size.max_headroom`::
(<<static-cluster-setting,Static>>, <<byte-units,byte value>>)
-For dedicated frozen tier nodes, the max headroom to maintain. Defaults to 100GB
-on dedicated frozen tier nodes when
-`xpack.searchable.snapshot.shared_cache.size` is not explicitly set, otherwise
--1 (not set). Can only be set when `xpack.searchable.snapshot.shared_cache.size`
-is set as a percentage.
+For dedicated frozen tier nodes, the max headroom to maintain. If
+`xpack.searchable.snapshot.shared_cache.size` is not explicitly set, this
+setting defaults to `100GB`. Otherwise it defaults to `-1` (not set). You can
+only configure this setting if `xpack.searchable.snapshot.shared_cache.size` is
+set as a percentage.

To illustrate how these settings work in concert let us look at two examples
when using the default values of the settings on a dedicated frozen node:
@@ -186,7 +185,7 @@ when using the default values of the settings on a dedicated frozen node:
* A 4000 GB disk will result in a shared cache sized at 3900 GB. 90% of 4000 GB
is 3600 GB, leaving 400 GB headroom. The default `max_headroom` of 100 GB
takes effect, and the result is therefore 3900 GB.
-* A 400 GB disk will result in a shard cache sized at 360 GB.
+* A 400 GB disk will result in a shared cache sized at 360 GB.
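The default sizing rule from these examples can be sketched as a simplified model (illustrative only, not the actual {es} implementation):

```python
def shared_cache_size_gb(total_disk_gb, headroom_gb=100):
    """Default shared cache size on a dedicated frozen node: the greater of
    90% of total disk space and total disk space minus the headroom."""
    return max(total_disk_gb * 90 // 100, total_disk_gb - headroom_gb)

print(shared_cache_size_gb(4000))  # 3900 (total minus 100 GB headroom wins)
print(shared_cache_size_gb(400))   # 360  (90% of total wins)
```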

You can configure the settings in `elasticsearch.yml`:
@@ -199,11 +198,6 @@ IMPORTANT: You can only configure these settings on nodes with the
<<data-frozen-node,`data_frozen`>> role. Additionally, nodes with a shared
cache can only have a single <<path-settings,data path>>.

-You can set `xpack.searchable.snapshot.shared_cache.size` to any size between a
-couple of gigabytes up to 90% of available disk space. We only recommend larger
-sizes if you use the node exclusively on a frozen tier or for searchable
-snapshots.
-
[discrete]
[[back-up-restore-searchable-snapshots]]
=== Back up and restore {search-snaps}