@@ -1,162 +1,611 @@
[[snapshots-take-snapshot]]
== Create a snapshot

-A repository can contain multiple snapshots of the same cluster. Snapshots are identified by unique names within the
-cluster.
-
-Use the <<put-snapshot-repo-api,create or update snapshot repository API>> to
-register or update a snapshot repository, and then use the
-<<create-snapshot-api,create snapshot API>> to create a snapshot in a
-repository.
-
-The following request creates a snapshot with the name `snapshot_1` in the repository `my_backup`:
-
////
[source,console]
------------------------------------
-PUT /_snapshot/my_backup
+----
+PUT _slm/policy/nightly-snapshots
{
-  "type": "fs",
-  "settings": {
-    "location": "my_backup_location"
+  "schedule": "0 30 1 * * ?",
+  "name": "<nightly-snap-{now/d}>",
+  "repository": "my_repository",
+  "config": {
+    "indices": "*",
+    "include_global_state": true
+  },
+  "retention": {
+    "expire_after": "30d",
+    "min_count": 5,
+    "max_count": 50
  }
}
------------------------------------
+----
+// TEST[setup:setup-repository]
// TESTSETUP
////

+This guide shows you how to take snapshots of a running cluster. You can later
+<<snapshots-restore-snapshot,restore a snapshot>> to recover or transfer its
+data.
+
+In this guide, you'll learn how to:
+
+* Automate snapshot creation and retention with {slm} ({slm-init})
+* Manually take a snapshot
+* Monitor a snapshot's progress
+* Delete or cancel a snapshot
+* Back up cluster configuration files
+
+The guide also provides tips for creating dedicated cluster state snapshots and
+taking snapshots at different time intervals.
+
+[discrete]
+[[create-snapshot-prereqs]]
+=== Prerequisites
+
+include::register-repository.asciidoc[tag=kib-snapshot-prereqs]
+
+* You can only take a snapshot from a running cluster with an elected
+<<master-node,master node>>.
+
+* A snapshot repository must be <<snapshots-register-repository,registered>> and
+available to the cluster. See the verification example after this list.
+
+* The cluster's global metadata must be readable. To include an index in a
+snapshot, the index and its metadata must also be readable. Ensure there aren't
+any <<cluster-read-only,cluster blocks>> or <<index-modules-blocks,index
+blocks>> that prevent read access.
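+
+To check that a registered repository is available before you take snapshots,
+you can use the <<verify-snapshot-repo-api,verify snapshot repository API>>. A
+minimal sketch, assuming a repository named `my_repository`:
+
+[source,console]
+----
+POST _snapshot/my_repository/_verify
+----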
+
+[discrete]
+[[create-snapshot-considerations]]
+=== Considerations
+
+* Each snapshot must have a unique name within its repository. Attempts to
+create a snapshot with the same name as an existing snapshot will fail.
+
+* Snapshots are automatically deduplicated. You can take frequent snapshots with
+little impact on your storage overhead.
+
+* Each snapshot is logically independent. You can delete a snapshot without
+affecting other snapshots.
+
+* Taking a snapshot can temporarily pause shard allocations.
+See <<snapshots-shard-allocation>>.
+
+* Taking a snapshot doesn't block indexing or other requests. However, the
+snapshot won't include changes made after the snapshot process starts.
+
+* You can take multiple snapshots at the same time. The
+<<snapshot-max-concurrent-ops,`snapshot.max_concurrent_operations`>> cluster
+setting limits the maximum number of concurrent snapshot operations. See the
+example after this list for changing this limit.
+
+* If you include a data stream in a snapshot, the snapshot also includes the
+stream's backing indices and metadata.
++
+You can also include only specific backing indices in a snapshot. However, the
+snapshot won't include the data stream's metadata or its other backing indices.
+
+* A snapshot can include a data stream but exclude specific backing indices.
+When you restore such a data stream, it will contain only the backing indices
+present in the snapshot. If the stream's original write index is not in the
+snapshot, the most recent backing index from the snapshot becomes the stream's
+write index.
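+
+If the default concurrency limit doesn't fit your workload, you can update the
+setting dynamically. A minimal sketch; the value `500` is only an illustration:
+
+[source,console]
+----
+PUT _cluster/settings
+{
+  "persistent": {
+    "snapshot.max_concurrent_operations": 500
+  }
+}
+----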
+
+[discrete]
+[[automate-snapshots-slm]]
+=== Automate snapshots with {slm-init}
+
+{slm-cap} ({slm-init}) is the easiest way to regularly back up a cluster. An
+{slm-init} policy automatically takes snapshots on a preset schedule. The policy
+can also delete snapshots based on retention rules you define.
+
+TIP: {ess} deployments automatically include the `cloud-snapshot-policy`
+{slm-init} policy. {ess} uses this policy to take periodic snapshots of your
+cluster. For more information, see the {cloud}/ec-snapshot-restore.html[{ess}
+snapshot documentation].
+
+[discrete]
+[[slm-security]]
+==== {slm-init} security
+
+The following <<privileges-list-cluster,cluster privileges>> control access to
+the {slm-init} actions when {es} {security-features} are enabled:
+
+`manage_slm`::
+Allows a user to perform all {slm-init} actions, including
+creating and updating policies and starting and stopping {slm-init}.
+
+`read_slm`::
+Allows a user to perform all read-only {slm-init} actions, such as getting
+policies and checking the {slm-init} status.
+
+`cluster:admin/snapshot/*`::
+Allows a user to take and delete snapshots of any index, whether or not they
+have access to that index.
+
+You can create and manage roles to assign these privileges through {kib}
+Management.
+
+To grant the privileges necessary to create and manage {slm-init} policies and
+snapshots, you can set up a role with the `manage_slm` and
+`cluster:admin/snapshot/*` cluster privileges and full access to the {slm-init}
+history indices.
+
+For example, the following request creates an `slm-admin` role:
+
+[source,console]
+----
+POST _security/role/slm-admin
+{
+  "cluster": [ "manage_slm", "cluster:admin/snapshot/*" ],
+  "indices": [
+    {
+      "names": [ ".slm-history-*" ],
+      "privileges": [ "all" ]
+    }
+  ]
+}
+----
+// TEST[skip:security is not enabled here]
+
+To grant read-only access to {slm-init} policies and the snapshot history,
+you can set up a role with the `read_slm` cluster privilege and read access
+to the {slm-init} history indices.
+
+For example, the following request creates an `slm-read-only` role:
+
[source,console]
------------------------------------
-PUT /_snapshot/my_backup/snapshot_1?wait_for_completion=true
------------------------------------
+----
+POST _security/role/slm-read-only
+{
+  "cluster": [ "read_slm" ],
+  "indices": [
+    {
+      "names": [ ".slm-history-*" ],
+      "privileges": [ "read" ]
+    }
+  ]
+}
+----
+// TEST[skip:security is not enabled here]
+
+[discrete]
+[[create-slm-policy]]
+==== Create an {slm-init} policy
+
+To manage {slm-init} in {kib}, go to the main menu and click **Stack
+Management** > **Snapshot and Restore** > **Policies**. To create a policy,
+click **Create policy**.
+
+You can also manage {slm-init} using the
+<<snapshot-lifecycle-management-api,{slm-init} APIs>>. To create a policy, use
+the <<slm-api-put-policy,create {slm-init} policy API>>.

-The `wait_for_completion` parameter specifies whether or not the request should return immediately after snapshot
-initialization (default) or wait for snapshot completion. During snapshot initialization, information about all
-previous snapshots is loaded into memory, which means that in large repositories it may take several seconds (or
-even minutes) for this request to return even if the `wait_for_completion` parameter is set to `false`.
+The following request creates a policy that backs up the cluster state, all data
+streams, and all indices daily at 1:30 a.m. UTC.
+
+[source,console]
+----
+PUT _slm/policy/nightly-snapshots
+{
+  "schedule": "0 30 1 * * ?", <1>
+  "name": "<nightly-snap-{now/d}>", <2>
+  "repository": "my_repository", <3>
+  "config": {
+    "indices": "*", <4>
+    "include_global_state": true <5>
+  },
+  "retention": { <6>
+    "expire_after": "30d",
+    "min_count": 5,
+    "max_count": 50
+  }
+}
+----
+
+<1> When to take snapshots, written in <<schedule-cron,Cron syntax>>.
+<2> Snapshot name. Supports <<api-date-math-index-names,date math>>. To prevent
+    naming conflicts, the policy also appends a UUID to each snapshot name.
+<3> <<snapshots-register-repository,Registered snapshot repository>> used to
+    store the policy's snapshots.
+<4> Data streams and indices to include in the policy's snapshots. This
+    configuration includes all data streams and indices, including system
+    indices.
+<5> If `true`, the policy's snapshots include the cluster state. This also
+    includes all feature states by default. To only include specific feature
+    states, see <<back-up-specific-feature-state>>.
+<6> Optional retention rules. This configuration keeps snapshots for 30 days,
+    retaining at least 5 and no more than 50 snapshots regardless of age. See
+    <<slm-retention-task>> and <<snapshot-retention-limits>>.
+
+[discrete]
+[[manually-run-slm-policy]]
+==== Manually run an {slm-init} policy
+
+You can manually run an {slm-init} policy to immediately create a snapshot. This
+is useful for testing a new policy or taking a snapshot before an upgrade.
+Manually running a policy doesn't affect its snapshot schedule.
+
+To run a policy in {kib}, go to the **Policies** page and click the run icon
+under the **Actions** column. You can also use the
+<<slm-api-execute-lifecycle,execute {slm-init} policy API>>.
+
+[source,console]
+----
+POST _slm/policy/nightly-snapshots/_execute
+----
+// TEST[skip:we can't easily handle snapshots from docs tests]
+
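+If the request succeeds, the API responds with the name of the snapshot the
+policy started. A sketch of the response shape; the generated name will differ
+because the policy appends date math and a UUID:
+
+[source,console-result]
+----
+{
+  "snapshot_name": "nightly-snap-2099.05.06-i6aumekcqgmgyrcjdd5vmq"
+}
+----
+// TESTRESPONSE[skip:snapshot name is not deterministic]
+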
+The snapshot process runs in the background. To monitor its progress, see
+<<monitor-snapshot>>.
+
+[discrete]
+[[slm-retention-task]]
+==== {slm-init} retention

-By default, a snapshot backs up all data streams and open indices in the cluster. You can change this behavior by
-specifying the list of data streams and indices in the body of the snapshot request:
+{slm-init} snapshot retention is a cluster-level task that runs separately from
+a policy's snapshot schedule. To control when the {slm-init} retention task
+runs, configure the <<slm-retention-schedule,`slm.retention_schedule`>> cluster
+setting.

[source,console]
------------------------------------
-PUT /_snapshot/my_backup/snapshot_2?wait_for_completion=true
+----
+PUT _cluster/settings
{
-  "indices": "data_stream_1,index_1,index_2",
-  "ignore_unavailable": true,
-  "include_global_state": false,
-  "metadata": {
-    "taken_by": "kimchy",
-    "taken_because": "backup before upgrading"
+  "persistent" : {
+    "slm.retention_schedule" : "0 30 1 * * ?"
  }
}
------------------------------------
-// TEST[skip:cannot complete subsequent snapshot]
-
-Use the `indices` parameter to list the data streams and indices that should be included in the snapshot. This parameter supports
-<<api-multi-index,multi-target syntax>>, although the options that control the behavior of multi-index syntax
-must be supplied in the body of the request, rather than as request parameters.
-
-Data stream backups include the stream's backing indices and metadata, such as
-the current <<data-streams-generation,generation>> and timestamp field.
-
-You can also choose to include only specific backing indices in a snapshot.
-However, these backups do not include the associated data stream's
-metadata or its other backing indices.
-
-Snapshots can also include a data stream but exclude specific backing indices.
-When you restore the data stream, it will contain only backing indices present
-in the snapshot. If the stream's original write index is not in the snapshot,
-the most recent backing index from the snapshot becomes the stream's write index.
-
-[discrete]
-[[create-snapshot-process-details]]
-=== Snapshot process details
-
-The snapshot process works by taking a byte-for-byte copy of the files that
-make up each index or data stream and placing these copies in the repository.
-These files are mostly written by Lucene and contain a compact representation
-of all the data in each index or data stream in a form that is designed to be
-searched efficiently. This means that when you restore an index or data stream
-from a snapshot there is no need to rebuild these search-focused data
-structures. It also means that you can use <<searchable-snapshots>> to directly
-search the data in the repository.
-
-The snapshot process is incremental: {es} compares the files that make up the
-index or data stream against the files that already exist in the repository
-and only copies files that were created or changed
-since the last snapshot. Snapshots are very space-efficient since they reuse
-any files copied to the repository by earlier snapshots.
-
-Snapshotting does not interfere with ongoing indexing or searching operations.
-A snapshot captures a view of each shard at some point in time between the
-start and end of the snapshotting process. The snapshot may not include
-documents added to a data stream or index after the snapshot process starts.
-
-You can start multiple snapshot operations at the same time. Concurrent snapshot
-operations are limited by the `snapshot.max_concurrent_operations` cluster
-setting, which defaults to `1000`. This limit applies in total to all ongoing snapshot
-creation, cloning, and deletion operations. {es} will reject any operations
-that would exceed this limit.
-
-The snapshot process starts immediately for the primary shards that have been
-started and are not relocating at the moment. {es} waits for relocation or
-initialization of shards to complete before snapshotting them.
-
-Besides creating a copy of each data stream and index, the snapshot process can
-also store global cluster metadata, which includes persistent cluster settings,
-templates, and data stored in system indices, such as Watches and task records,
-regardless of whether those system indices are named in the `indices` section
-of the request. You can also use the create snapshot
-API's <<create-snapshot-api-feature-states,`feature_states`>> parameter to
-include only a subset of system indices in the snapshot. Snapshots do not
-store transient settings or registered snapshot repositories.
-
-While a snapshot of a particular shard is being created, the shard cannot be
-moved to another node, which can interfere with rebalancing and allocation
-filtering. {es} can only move the shard to another node (according to the current
-allocation filtering settings and rebalancing algorithm) after the snapshot
-process is finished.
-
-You can use the <<get-snapshot-api,Get snapshot API>> to retrieve information
-about ongoing and completed snapshots. See
-<<snapshots-monitor-snapshot-restore,Monitor snapshot and restore progress>>.
-
-[discrete]
-[[create-snapshot-options]]
-=== Options for creating a snapshot
-The create snapshot request supports the
-`ignore_unavailable` option. Setting it to `true` will cause data streams and indices that do not exist to be ignored during snapshot
-creation. By default, when the `ignore_unavailable` option is not set and a data stream or index is missing, the snapshot request will fail.
-
-By setting `include_global_state` to `false` it's possible to prevent the cluster global state to be stored as part of
-the snapshot.
-
-IMPORTANT: The global cluster state includes the cluster's index
-templates, such as those <<create-index-template,matching a data
-stream>>. If your snapshot includes data streams, we recommend storing the
-global state as part of the snapshot. This lets you later restored any
-templates required for a data stream.
-
-By default, the entire snapshot will fail if one or more indices participating in the snapshot do not have
-all primary shards available. You can change this behaviour by setting `partial` to `true`. The `expand_wildcards`
-option can be used to control whether hidden and closed indices will be included in the snapshot, and defaults to `all`.
-
-Use the `metadata` field to attach arbitrary metadata to the snapshot,
-such as who took the snapshot,
-why it was taken, or any other data that might be useful.
-
-Snapshot names can be automatically derived using <<date-math-index-names,date math expressions>>, similarly as when creating
-new indices. Special characters must be URI encoded.
-
-For example, use the <<create-snapshot-api,create snapshot API>> to create
-a snapshot with the current day in the name, such as `snapshot-2020.07.11`:
-
-[source,console]
------------------------------------
-PUT /_snapshot/my_backup/<snapshot-{now/d}>
-PUT /_snapshot/my_backup/%3Csnapshot-%7Bnow%2Fd%7D%3E
------------------------------------
-// TEST[continued]
-
-NOTE: You can also create snapshots that are copies of part of an existing snapshot using the <<clone-snapshot-api,clone snapshot API>>.
+----
+
+To immediately run the retention task, use the
+<<slm-api-execute-retention,execute {slm-init} retention policy API>>.
+
+[source,console]
+----
+POST _slm/_execute_retention
+----
+
+An {slm-init} policy's retention rules only apply to snapshots created using the
+policy. Other snapshots don't count toward the policy's retention limits.
+
+[discrete]
+[[snapshot-retention-limits]]
+==== Snapshot retention limits
+
+While not a hard limit, a snapshot repository shouldn't contain more than
+{max-snapshot-count} snapshots at a time. This ensures the repository's metadata
+doesn't grow to a size that may destabilize the master node. We recommend you
+set up your {slm-init} policy's retention rules to enforce this limit.
+
+[discrete]
+[[manually-create-snapshot]]
+=== Manually create a snapshot
+
+To take a snapshot without an {slm-init} policy, use the
+<<create-snapshot-api,create snapshot API>>. The snapshot name supports
+<<api-date-math-index-names,date math>>.
+
+[source,console]
+----
+# PUT _snapshot/my_repository/<my_snapshot_{now/d}>
+PUT _snapshot/my_repository/%3Cmy_snapshot_%7Bnow%2Fd%7D%3E
+----
+// TEST[s/3E/3E?wait_for_completion=true/]
+
+Depending on its size, a snapshot can take a while to complete. By default,
+the create snapshot API only initiates the snapshot process, which runs in the
+background. To block the client until the snapshot finishes, set the
+`wait_for_completion` query parameter to `true`.
+
+[source,console]
+----
+PUT _snapshot/my_repository/my_snapshot?wait_for_completion=true
+----
+
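+The create snapshot API also accepts a request body with options such as which
+data streams and indices to include. A minimal sketch; the `data_stream_1`,
+`index_1`, and `index_2` names and the `metadata` values are illustrative:
+
+[source,console]
+----
+PUT _snapshot/my_repository/my_snapshot_2?wait_for_completion=true
+{
+  "indices": "data_stream_1,index_1,index_2",
+  "ignore_unavailable": true,
+  "include_global_state": false,
+  "metadata": {
+    "taken_by": "kimchy",
+    "taken_because": "backup before upgrading"
+  }
+}
+----
+// TEST[skip:cannot complete subsequent snapshot]
+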
+You can also clone an existing snapshot using the <<clone-snapshot-api,clone
+snapshot API>>.
+
+[discrete]
+[[monitor-snapshot]]
+=== Monitor a snapshot
+
+To monitor any currently running snapshots, use the <<get-snapshot-api,get
+snapshot API>> with the `_current` request path parameter.
+
+[source,console]
+----
+GET _snapshot/my_repository/_current
+----
+
+To get a complete breakdown of each shard participating in any currently running
+snapshots, use the <<get-snapshot-status-api,get snapshot status API>>.
+
+[source,console]
+----
+GET _snapshot/_status
+----
+
+[discrete]
+[[check-slm-history]]
+==== Check {slm-init} history
+
+Use the <<slm-api-get-policy,get {slm-init} policy API>> to check when an
+{slm-init} policy last successfully started the snapshot process. A successful
+start doesn't guarantee the snapshot completed.
+
+[source,console]
+----
+GET _slm/policy/nightly-snapshots
+----
+
+To get more information about a cluster's {slm-init} execution history,
+including stats for each {slm-init} policy, use the <<slm-api-get-stats,get
+{slm-init} stats API>>. The API also returns information about the cluster's
+snapshot retention task history.
+
+[source,console]
+----
+GET _slm/stats
+----
+
+[discrete]
+[[delete-snapshot]]
+=== Delete or cancel a snapshot
+
+To delete a snapshot in {kib}, go to the **Snapshots** page and click the trash
+icon under the **Actions** column. You can also use the
+<<delete-snapshot-api,delete snapshot API>>.
+
+[source,console]
+----
+DELETE _snapshot/my_repository/my_snapshot_2099.05.06
+----
+// TEST[setup:setup-snapshots]
+
+If you delete a snapshot that's in progress, {es} cancels it. The snapshot
+process halts and deletes any files created for the snapshot. Deleting a
+snapshot doesn't delete files used by other snapshots.
+
+[discrete]
+[[back-up-config-files]]
+=== Back up configuration files
+
+If you run {es} on your own hardware, we recommend that, in addition to
+snapshots, you take regular backups of the files in each node's `$ES_PATH_CONF`
+directory using the file backup software of your choice. Snapshots don't back up
+these files.
+
+Depending on your setup, some of these configuration files may contain sensitive
+data, such as passwords or keys. If so, consider encrypting your file backups.
+
+[discrete]
+[[back-up-specific-feature-state]]
+=== Back up a specific feature state
+
+By default, a snapshot that includes the cluster state also includes all
+<<feature-state,feature states>>. Similarly, a snapshot that excludes the
+cluster state excludes all feature states by default.
+
+You can also configure a snapshot to only include specific feature states,
+regardless of the cluster state.
+
+To get a list of available features, use the <<get-features-api,get features
+API>>.
+
+[source,console]
+----
+GET _features
+----
+
+The API returns:
+
+[source,console-result]
+----
+{
+  "features": [
+    {
+      "name": "tasks",
+      "description": "Manages task results"
+    },
+    {
+      "name": "kibana",
+      "description": "Manages Kibana configuration and reports"
+    },
+    {
+      "name": "security",
+      "description": "Manages configuration for Security features, such as users and roles"
+    },
+    ...
+  ]
+}
+----
+// TESTRESPONSE[skip:response may vary based on features in test cluster]
+
+To include a specific feature state in a snapshot, specify the feature `name` in
+the `feature_states` array.
+
+For example, the following {slm-init} policy only includes feature states for
+the {kib} and {es} security features in its snapshots.
+
+[source,console]
+----
+PUT _slm/policy/nightly-snapshots
+{
+  "schedule": "0 30 2 * * ?",
+  "name": "<nightly-snap-{now/d}>",
+  "repository": "my_repository",
+  "config": {
+    "indices": "*",
+    "include_global_state": true,
+    "feature_states": [
+      "kibana",
+      "security"
+    ]
+  },
+  "retention": {
+    "expire_after": "30d",
+    "min_count": 5,
+    "max_count": 50
+  }
+}
+----
+
+Any index or data stream that's part of a feature state appears in a
+snapshot's contents. For example, if you back up the `security` feature state,
+the `.security-*` system indices appear in the <<get-snapshot-api,get snapshot
+API>>'s response under both `indices` and `feature_states`.
+
+[discrete]
+[[cluster-state-snapshots]]
+=== Dedicated cluster state snapshots
+
+Some feature states contain sensitive data. For example, the `security` feature
+state includes system indices that may contain user names and password hashes.
+
+To better protect this data, consider creating a dedicated repository and
+{slm-init} policy for snapshots of the cluster state. This lets you strictly
+limit and audit access to the repository.
+
+For example, the following {slm-init} policy only backs up the cluster state.
+The policy stores these snapshots in a dedicated repository.
+
+[source,console]
+----
+PUT _slm/policy/nightly-cluster-state-snapshots
+{
+  "schedule": "0 30 2 * * ?",
+  "name": "<nightly-cluster-state-snap-{now/d}>",
+  "repository": "my_secure_repository",
+  "config": {
+    "include_global_state": true, <1>
+    "indices": "-*" <2>
+  },
+  "retention": {
+    "expire_after": "30d",
+    "min_count": 5,
+    "max_count": 50
+  }
+}
+----
+// TEST[s/my_secure_repository/my_repository/]
+
+<1> Includes the cluster state. This also includes all feature states by
+    default.
+<2> Excludes regular data streams and indices.
+
+If you take dedicated snapshots of the cluster state, you'll need to exclude the
+cluster state and system indices from your other snapshots. For example:
+
+[source,console]
+----
+PUT _slm/policy/nightly-snapshots
+{
+  "schedule": "0 30 2 * * ?",
+  "name": "<nightly-snap-{now/d}>",
+  "repository": "my_repository",
+  "config": {
+    "include_global_state": false, <1>
+    "indices": "*,-.*" <2>
+  },
+  "retention": {
+    "expire_after": "30d",
+    "min_count": 5,
+    "max_count": 50
+  }
+}
+----
+
+<1> Excludes the cluster state. This also excludes all feature states by
+    default.
+<2> Includes all data streams and indices except system indices and other
+    indices that begin with a dot (`.`).
+
+[discrete]
+[[create-snapshots-different-time-intervals]]
+=== Create snapshots at different time intervals
+
+If you only use a single {slm-init} policy, it can be difficult to take frequent
+snapshots and retain snapshots with longer time intervals.
+
+For example, a policy that takes snapshots every 30 minutes with a maximum of
+100 snapshots will only keep snapshots for approximately two days. While this
+setup is great for backing up recent changes, it doesn't let you restore data
+from a previous week or month.
+
+To fix this, you can create multiple {slm-init} policies with the same snapshot
+repository that run on different schedules. Since a policy's retention rules
+only apply to its snapshots, a policy won't delete a snapshot created by another
+policy. However, you'll need to ensure the total number of snapshots in the
+repository doesn't exceed the <<snapshot-retention-limits,{max-snapshot-count}
+snapshot soft limit>>.
+
+For example, the following {slm-init} policy takes hourly snapshots with a
+maximum of 24 snapshots. The policy keeps its snapshots for one day.
+
+[source,console]
+----
+PUT _slm/policy/hourly-snapshots
+{
+  "name": "<hourly-snapshot-{now/d}>",
+  "schedule": "0 0 * * * ?",
+  "repository": "my_repository",
+  "config": {
+    "indices": "*",
+    "include_global_state": true
+  },
+  "retention": {
+    "expire_after": "1d",
+    "min_count": 1,
+    "max_count": 24
+  }
+}
+----
+
+The following policy takes nightly snapshots in the same snapshot repository.
+The policy keeps its snapshots for one month.
+
+[source,console]
+----
+PUT _slm/policy/daily-snapshots
+{
+  "name": "<daily-snapshot-{now/d}>",
+  "schedule": "0 45 23 * * ?", <1>
+  "repository": "my_repository",
+  "config": {
+    "indices": "*",
+    "include_global_state": true
+  },
+  "retention": {
+    "expire_after": "30d",
+    "min_count": 1,
+    "max_count": 31
+  }
+}
+----
+
+<1> Runs at 11:45 p.m. UTC every day.
+
+The following policy creates monthly snapshots in the same repository. The
+policy keeps its snapshots for one year.
+
+[source,console]
+----
+PUT _slm/policy/monthly-snapshots
+{
+  "name": "<monthly-snapshot-{now/d}>",
+  "schedule": "0 56 23 1 * ?", <1>
+  "repository": "my_repository",
+  "config": {
+    "indices": "*",
+    "include_global_state": true
+  },
+  "retention": {
+    "expire_after": "366d",
+    "min_count": 1,
+    "max_count": 12
+  }
+}
+----
+
+<1> Runs on the first of the month at 11:56 p.m. UTC.
|