- [[modules-snapshots]]
- == Snapshot And Restore
- You can store snapshots of individual indices or an entire cluster in
- a remote repository like a shared file system, S3, or HDFS. These snapshots
- are great for backups because they can be restored relatively quickly. However,
- snapshots can only be restored to versions of Elasticsearch that can read the
- indices:
- * A snapshot of an index created in 5.x can be restored to 6.x.
- * A snapshot of an index created in 2.x can be restored to 5.x.
- * A snapshot of an index created in 1.x can be restored to 2.x.
- Conversely, snapshots of indices created in 1.x **cannot** be restored to
- 5.x or 6.x, and snapshots of indices created in 2.x **cannot** be restored
- to 6.x.
- Snapshots are incremental and can contain indices created in various
- versions of Elasticsearch. If any indices in a snapshot were created in an
- incompatible version, you will not be able to restore the snapshot.
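The one-major-version restore rule above can be sketched as a small helper. This is a hypothetical illustration (not an Elasticsearch API); the `MAJORS` list encodes the release sequence, since 3.x and 4.x were skipped.

```python
# Elasticsearch major release sequence (3.x and 4.x were skipped).
MAJORS = [1, 2, 5, 6]

def can_restore(created_major: int, cluster_major: int) -> bool:
    """An index snapshot is restorable to the same major version
    or to the next major in the release sequence, per the rule above."""
    i, j = MAJORS.index(created_major), MAJORS.index(cluster_major)
    return j - i in (0, 1)

# A 2.x index snapshot restores to a 5.x cluster, but not to 6.x.
```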
- IMPORTANT: When backing up your data prior to an upgrade, keep in mind that you
- won't be able to restore snapshots after you upgrade if they contain indices
- created in a version that's incompatible with the upgrade version.
- If you end up in a situation where you need to restore a snapshot of an index
- that is incompatible with the version of the cluster you are currently running,
- you can restore it on the latest compatible version and use
- <<reindex-from-remote,reindex-from-remote>> to rebuild the index on the current
- version. Reindexing from remote is only possible if the original index has
- source enabled. Retrieving and reindexing the data can take significantly longer
- than simply restoring a snapshot. If you have a large amount of data, we
- recommend testing the reindex from remote process with a subset of your data to
- understand the time requirements before proceeding.
- [float]
- === Repositories
- You must register a snapshot repository before you can perform snapshot and
- restore operations. We recommend creating a new snapshot repository for each
- major version. The valid repository settings depend on the repository type.
- If you register the same snapshot repository with multiple clusters, only
- one cluster should have write access to the repository. All other clusters
- connected to that repository should set the repository to `readonly` mode.
- IMPORTANT: The snapshot format can change across major versions, so if you have
- clusters on different versions trying to write to the same repository, snapshots
- written by one version may not be visible to the other and the repository could
- be corrupted. While setting the repository to `readonly` on all but one of the
- clusters should work when the clusters differ by at most one major version, it
- is not a supported configuration.
- [source,js]
- -----------------------------------
- PUT /_snapshot/my_backup
- {
- "type": "fs",
- "settings": {
- "location": "my_backup_location"
- }
- }
- -----------------------------------
- // CONSOLE
- // TESTSETUP
- To retrieve information about a registered repository, use a GET request:
- [source,js]
- -----------------------------------
- GET /_snapshot/my_backup
- -----------------------------------
- // CONSOLE
- which returns:
- [source,js]
- -----------------------------------
- {
- "my_backup": {
- "type": "fs",
- "settings": {
- "location": "my_backup_location"
- }
- }
- }
- -----------------------------------
- // TESTRESPONSE
- To retrieve information about multiple repositories, specify a
- comma-delimited list of repositories. You can also use the * wildcard when
- specifying repository names. For example, the following request retrieves
- information about all of the snapshot repositories that start with `repo` or
- contain `backup`:
- [source,js]
- -----------------------------------
- GET /_snapshot/repo*,*backup*
- -----------------------------------
- // CONSOLE
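The wildcard semantics of repository-name patterns can be illustrated with Python's `fnmatch` (a hypothetical sketch of the matching behaviour, not how Elasticsearch implements it):

```python
from fnmatch import fnmatch

def matching_repositories(patterns, names):
    # A repository name matches if it matches ANY pattern in the
    # comma-delimited list, e.g. "repo*,*backup*".
    return [n for n in names if any(fnmatch(n, p) for p in patterns)]

names = ["repo1", "repo2", "nightly_backup", "other"]
matching_repositories(["repo*", "*backup*"], names)
# -> ["repo1", "repo2", "nightly_backup"]
```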
- To retrieve information about all registered snapshot repositories, omit the
- repository name or specify `_all`:
- [source,js]
- -----------------------------------
- GET /_snapshot
- -----------------------------------
- // CONSOLE
- or
- [source,js]
- -----------------------------------
- GET /_snapshot/_all
- -----------------------------------
- // CONSOLE
- [float]
- ===== Shared File System Repository
- The shared file system repository (`"type": "fs"`) uses a shared file system to store snapshots. To register
- a shared file system repository, the same shared file system must be mounted at the same location on all
- master and data nodes. This location (or one of its parent directories) must be registered in the `path.repo`
- setting on all master and data nodes.
- Assuming that the shared file system is mounted to `/mount/backups/my_backup`, the following setting should be added to
- the `elasticsearch.yml` file:
- [source,yaml]
- --------------
- path.repo: ["/mount/backups", "/mount/longterm_backups"]
- --------------
- The `path.repo` setting supports Microsoft Windows UNC paths as long as at least the server name and share are specified as
- a prefix and back slashes are properly escaped:
- [source,yaml]
- --------------
- path.repo: ["\\\\MY_SERVER\\Snapshots"]
- --------------
- After all nodes are restarted, the following command can be used to register the shared file system repository with
- the name `my_fs_backup`:
- [source,js]
- -----------------------------------
- PUT /_snapshot/my_fs_backup
- {
- "type": "fs",
- "settings": {
- "location": "/mount/backups/my_fs_backup_location",
- "compress": true
- }
- }
- -----------------------------------
- // CONSOLE
- // TEST[skip:no access to absolute path]
- If the repository location is specified as a relative path, this path will be resolved against the first path specified
- in `path.repo`:
- [source,js]
- -----------------------------------
- PUT /_snapshot/my_fs_backup
- {
- "type": "fs",
- "settings": {
- "location": "my_fs_backup_location",
- "compress": true
- }
- }
- -----------------------------------
- // CONSOLE
- // TEST[continued]
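The resolution rule can be sketched as follows. This is an illustrative helper (the function name and logic are assumptions of this sketch, not Elasticsearch code), using the example `path.repo` value from above:

```python
from pathlib import PurePosixPath

# Example path.repo entries from the elasticsearch.yml snippet above.
PATH_REPO = ["/mount/backups", "/mount/longterm_backups"]

def resolve_location(location: str) -> str:
    p = PurePosixPath(location)
    if p.is_absolute():
        return str(p)
    # A relative location is resolved against the FIRST path.repo entry.
    return str(PurePosixPath(PATH_REPO[0]) / p)

resolve_location("my_fs_backup_location")
# -> "/mount/backups/my_fs_backup_location"
```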
- The following settings are supported:
- [horizontal]
- `location`:: Location of the snapshots. Mandatory.
- `compress`:: Turns on compression of the snapshot files. Compression is applied only to metadata files (index mapping and settings). Data files are not compressed. Defaults to `true`.
- `chunk_size`:: Big files can be broken down into chunks during snapshotting if needed. The chunk size can be specified in bytes or by
- using size value notation, i.e. 1g, 10m, 5k. Defaults to `null` (unlimited chunk size).
- `max_restore_bytes_per_sec`:: Throttles per node restore rate. Defaults to `40mb` per second.
- `max_snapshot_bytes_per_sec`:: Throttles per node snapshot rate. Defaults to `40mb` per second.
- `readonly`:: Makes repository read-only. Defaults to `false`.
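The size value notation used by settings such as `chunk_size` can be illustrated with a minimal parser. This is a simplified sketch (an assumption of this example, not Elasticsearch's actual byte-size parsing, which handles more unit forms):

```python
UNITS = {"k": 1024, "m": 1024 ** 2, "g": 1024 ** 3}

def parse_size(value: str) -> int:
    """Convert size value notation such as '1g', '10m', '5k'
    (optionally with a trailing 'b', e.g. '40mb') into bytes."""
    v = value.strip().lower()
    if v.endswith("b"):
        v = v[:-1]
    if v and v[-1] in UNITS:
        return int(v[:-1]) * UNITS[v[-1]]
    return int(v)  # plain byte count

parse_size("10m")
# -> 10485760
```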
- [float]
- ===== Read-only URL Repository
- The URL repository (`"type": "url"`) can be used as an alternative read-only way to access data created by the shared file
- system repository. The URL specified in the `url` parameter should point to the root of the shared filesystem repository.
- The following settings are supported:
- [horizontal]
- `url`:: Location of the snapshots. Mandatory.
- URL repositories support the following protocols: "http", "https", "ftp", "file" and "jar". URL repositories with `http:`,
- `https:`, and `ftp:` URLs have to be whitelisted by specifying allowed URLs in the `repositories.url.allowed_urls` setting.
- This setting supports wildcards in the place of host, path, query, and fragment. For example:
- [source,yaml]
- -----------------------------------
- repositories.url.allowed_urls: ["http://www.example.org/root/*", "https://*.mydomain.com/*?*#*"]
- -----------------------------------
- URL repositories with `file:` URLs can only point to locations registered in the `path.repo` setting similar to
- shared file system repository.
- [float]
- ===== Repository plugins
- Other repository backends are available in these official plugins:
- * {plugins}/repository-s3.html[repository-s3] for S3 repository support
- * {plugins}/repository-hdfs.html[repository-hdfs] for HDFS repository support in Hadoop environments
- * {plugins}/repository-azure.html[repository-azure] for Azure storage repositories
- * {plugins}/repository-gcs.html[repository-gcs] for Google Cloud Storage repositories
- [float]
- ===== Repository Verification
- When a repository is registered, it's immediately verified on all master and data nodes to make sure that it is functional
- on all nodes currently present in the cluster. The `verify` parameter can be used to explicitly disable the repository
- verification when registering or updating a repository:
- [source,js]
- -----------------------------------
- PUT /_snapshot/my_unverified_backup?verify=false
- {
- "type": "fs",
- "settings": {
- "location": "my_unverified_backup_location"
- }
- }
- -----------------------------------
- // CONSOLE
- // TEST[continued]
- The verification process can also be executed manually by running the following command:
- [source,js]
- -----------------------------------
- POST /_snapshot/my_unverified_backup/_verify
- -----------------------------------
- // CONSOLE
- // TEST[continued]
- It returns a list of nodes where the repository was successfully verified, or an error message if the verification process failed.
- [float]
- === Snapshot
- A repository can contain multiple snapshots of the same cluster. Snapshots are identified by unique names within the
- cluster. A snapshot with the name `snapshot_1` in the repository `my_backup` can be created by executing the following
- command:
- [source,js]
- -----------------------------------
- PUT /_snapshot/my_backup/snapshot_1?wait_for_completion=true
- -----------------------------------
- // CONSOLE
- // TEST[continued]
- The `wait_for_completion` parameter specifies whether the request should return immediately after snapshot
- initialization (default) or wait for snapshot completion. During snapshot initialization, information about all
- previous snapshots is loaded into memory, which means that in large repositories it may take several seconds (or
- even minutes) for this command to return even if the `wait_for_completion` parameter is set to `false`.
- By default a snapshot of all open and started indices in the cluster is created. This behavior can be changed by
- specifying the list of indices in the body of the snapshot request.
- [source,js]
- -----------------------------------
- PUT /_snapshot/my_backup/snapshot_2?wait_for_completion=true
- {
- "indices": "index_1,index_2",
- "ignore_unavailable": true,
- "include_global_state": false
- }
- -----------------------------------
- // CONSOLE
- // TEST[continued]
- The list of indices that should be included in the snapshot can be specified using the `indices` parameter, which
- supports <<multi-index,multi index syntax>>. The snapshot request also supports the
- `ignore_unavailable` option. Setting it to `true` will cause indices that do not exist to be ignored during snapshot
- creation. By default, when the `ignore_unavailable` option is not set and an index is missing, the snapshot request will fail.
- By setting `include_global_state` to `false` it's possible to prevent the cluster global state from being stored as part of
- the snapshot. By default, the entire snapshot will fail if one or more indices participating in the snapshot don't have
- all primary shards available. This behaviour can be changed by setting `partial` to `true`.
- Snapshot names can be automatically derived using <<date-math-index-names,date math expressions>>, similar to when creating
- new indices. Note that special characters need to be URI encoded.
- For example, creating a snapshot with the current day in the name, like `snapshot-2018.05.11`, can be achieved with
- the following command:
- [source,js]
- -----------------------------------
- # PUT /_snapshot/my_backup/<snapshot-{now/d}>
- PUT /_snapshot/my_backup/%3Csnapshot-%7Bnow%2Fd%7D%3E
- -----------------------------------
- // CONSOLE
- // TEST[continued]
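The URI encoding shown above can be produced with Python's standard library, as a sketch of how a client might build such a request path:

```python
from urllib.parse import quote

# Date-math snapshot name as written in plain text.
snapshot_name = "<snapshot-{now/d}>"

# Percent-encode every reserved character for use in the request path.
quote(snapshot_name, safe="")
# -> "%3Csnapshot-%7Bnow%2Fd%7D%3E"
```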
- The index snapshot process is incremental. In the process of making the index snapshot, Elasticsearch analyzes
- the list of the index files that are already stored in the repository and copies only files that were created or
- changed since the last snapshot. This allows multiple snapshots to be preserved in the repository in a compact form.
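The incremental behaviour can be sketched conceptually: only files not already present in the repository (by name and checksum, in this hypothetical model — not Lucene's actual file tracking) need to be copied.

```python
def files_to_copy(current_index_files: dict, repository_files: dict) -> list:
    """Conceptual sketch of incremental snapshotting: return the names of
    files whose content is not already stored in the repository.
    Both arguments map file name -> checksum."""
    return [name for name, checksum in current_index_files.items()
            if repository_files.get(name) != checksum]

repo = {"seg_1": "abc", "seg_2": "def"}          # from earlier snapshots
current = {"seg_1": "abc", "seg_2": "xyz", "seg_3": "ghi"}
files_to_copy(current, repo)
# -> ["seg_2", "seg_3"]  (seg_1 is unchanged and reused)
```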
- The snapshot process is executed in a non-blocking fashion. All indexing and searching operations can continue to be
- executed against the index that is being snapshotted. However, a snapshot represents a point-in-time view of the index
- at the moment the snapshot was created, so no records that were added to the index after the snapshot process was started
- will be present in the snapshot. The snapshot process starts immediately for the primary shards that have been started
- and are not relocating at the moment. Before version 1.2.0, the snapshot operation failed if the cluster had any relocating or
- initializing primaries of indices participating in the snapshot. Starting with version 1.2.0, Elasticsearch waits for
- relocation or initialization of shards to complete before snapshotting them.
- Besides creating a copy of each index the snapshot process can also store global cluster metadata, which includes persistent
- cluster settings and templates. The transient settings and registered snapshot repositories are not stored as part of
- the snapshot.
- Only one snapshot process can be executed in the cluster at any time. While a snapshot of a particular shard is being
- created, the shard cannot be moved to another node, which can interfere with the rebalancing process and allocation
- filtering. Elasticsearch will only be able to move a shard to another node (according to the current allocation
- filtering settings and rebalancing algorithm) once the snapshot is finished.
- Once a snapshot is created, information about this snapshot can be obtained using the following command:
- [source,sh]
- -----------------------------------
- GET /_snapshot/my_backup/snapshot_1
- -----------------------------------
- // CONSOLE
- // TEST[continued]
- This command returns basic information about the snapshot, including start and end time, the version of
- Elasticsearch that created the snapshot, the list of included indices, the current state of the
- snapshot, and the list of failures that occurred during the snapshot. The snapshot `state` can be:
- [horizontal]
- `IN_PROGRESS`::
- The snapshot is currently running.
- `SUCCESS`::
- The snapshot finished and all shards were stored successfully.
- `FAILED`::
- The snapshot finished with an error and failed to store any data.
- `PARTIAL`::
- The global cluster state was stored, but data of at least one shard wasn't stored successfully.
- The `failure` section in this case should contain more detailed information about shards
- that were not processed correctly.
- `INCOMPATIBLE`::
- The snapshot was created with an old version of Elasticsearch and therefore is incompatible with
- the current version of the cluster.
- As with repositories, information about multiple snapshots can be queried in one go, with support for wildcards:
- [source,sh]
- -----------------------------------
- GET /_snapshot/my_backup/snapshot_*,some_other_snapshot
- -----------------------------------
- // CONSOLE
- // TEST[continued]
- All snapshots currently stored in the repository can be listed using the following command:
- [source,sh]
- -----------------------------------
- GET /_snapshot/my_backup/_all
- -----------------------------------
- // CONSOLE
- // TEST[continued]
- The command fails if some of the snapshots are unavailable. The boolean parameter `ignore_unavailable` can be used to
- return all snapshots that are currently available.
- Getting all snapshots in the repository can be costly on cloud-based repositories,
- both from a cost and performance perspective. If the only information required is
- the snapshot names/uuids in the repository and the indices in each snapshot, then
- the optional boolean parameter `verbose` can be set to `false` to execute a more
- performant and cost-effective retrieval of the snapshots in the repository. Note
- that setting `verbose` to `false` will omit all other information about the snapshot
- such as status information, the number of snapshotted shards, etc. The default
- value of the `verbose` parameter is `true`.
- A currently running snapshot can be retrieved using the following command:
- [source,sh]
- -----------------------------------
- GET /_snapshot/my_backup/_current
- -----------------------------------
- // CONSOLE
- // TEST[continued]
- A snapshot can be deleted from the repository using the following command:
- [source,sh]
- -----------------------------------
- DELETE /_snapshot/my_backup/snapshot_2
- -----------------------------------
- // CONSOLE
- // TEST[continued]
- When a snapshot is deleted from a repository, Elasticsearch deletes all files that are associated with the deleted
- snapshot and not used by any other snapshots. If the delete snapshot operation is executed while the snapshot is being
- created, the snapshot process will be aborted and all files created as part of it will be
- cleaned up. Therefore, the delete snapshot operation can be used to cancel long-running snapshot operations that were
- started by mistake.
- A repository can be unregistered using the following command:
- [source,sh]
- -----------------------------------
- DELETE /_snapshot/my_fs_backup
- -----------------------------------
- // CONSOLE
- // TEST[continued]
- When a repository is unregistered, Elasticsearch only removes the reference to the location where the repository is storing
- the snapshots. The snapshots themselves are left untouched and in place.
- [float]
- === Restore
- A snapshot can be restored using the following command:
- [source,sh]
- -----------------------------------
- POST /_snapshot/my_backup/snapshot_1/_restore
- -----------------------------------
- // CONSOLE
- // TEST[continued]
- By default, all indices in the snapshot are restored, and the cluster state is
- *not* restored. It's possible to select the indices that should be restored as well
- as to allow the global cluster state to be restored by using the `indices` and
- `include_global_state` options in the restore request body. The list of indices
- supports <<multi-index,multi index syntax>>. The `rename_pattern`
- and `rename_replacement` options can also be used to rename indices on restore
- using a regular expression that supports referencing the original text as
- explained
- http://docs.oracle.com/javase/6/docs/api/java/util/regex/Matcher.html#appendReplacement(java.lang.StringBuffer,%20java.lang.String)[here].
- Set `include_aliases` to `false` to prevent aliases from being restored together
- with their associated indices.
- [source,js]
- -----------------------------------
- POST /_snapshot/my_backup/snapshot_1/_restore
- {
- "indices": "index_1,index_2",
- "ignore_unavailable": true,
- "include_global_state": true,
- "rename_pattern": "index_(.+)",
- "rename_replacement": "restored_index_$1"
- }
- -----------------------------------
- // CONSOLE
- // TEST[continued]
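The `rename_pattern`/`rename_replacement` behaviour can be approximated with Python's `re.sub`. This is only a sketch: Elasticsearch uses Java's `Matcher.appendReplacement` semantics with `$1`-style backreferences, which this example translates to Python's `\1` form and handles only for the first group.

```python
import re

def rename_index(name: str, rename_pattern: str, rename_replacement: str) -> str:
    # Translate the Java-style "$1" backreference to Python's "\1"
    # (a simplification of this sketch; only group 1 is handled).
    return re.sub(rename_pattern, rename_replacement.replace("$1", r"\1"), name)

rename_index("index_1", "index_(.+)", "restored_index_$1")
# -> "restored_index_1"
```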
- The restore operation can be performed on a functioning cluster. However, an
- existing index can be only restored if it's <<indices-open-close,closed>> and
- has the same number of shards as the index in the snapshot. The restore
- operation automatically opens restored indices if they were closed and creates
- new indices if they didn't exist in the cluster. If cluster state is restored
- with `include_global_state` (defaults to `false`), the restored templates that
- don't currently exist in the cluster are added and existing templates with the
- same name are replaced by the restored templates. The restored persistent
- settings are added to the existing persistent settings.
- [float]
- ==== Partial restore
- By default, the entire restore operation will fail if one or more indices participating in the operation don't have
- snapshots of all shards available. This can occur, for example, if some shards failed to snapshot. It is still possible to
- restore such indices by setting `partial` to `true`. Note that only successfully snapshotted shards will be
- restored in this case and all missing shards will be recreated empty.
- [float]
- ==== Changing index settings during restore
- Most index settings can be overridden during the restore process. For example, the following command will restore
- the index `index_1` without creating any replicas while switching back to the default refresh interval:
- [source,js]
- -----------------------------------
- POST /_snapshot/my_backup/snapshot_1/_restore
- {
- "indices": "index_1",
- "index_settings": {
- "index.number_of_replicas": 0
- },
- "ignore_index_settings": [
- "index.refresh_interval"
- ]
- }
- -----------------------------------
- // CONSOLE
- // TEST[continued]
- Note that some settings such as `index.number_of_shards` cannot be changed during the restore operation.
- [float]
- ==== Restoring to a different cluster
- The information stored in a snapshot is not tied to a particular cluster or a cluster name. Therefore it's possible to
- restore a snapshot made from one cluster into another cluster. All that is required is registering the repository
- containing the snapshot in the new cluster and starting the restore process. The new cluster doesn't have to have the
- same size or topology. However, the version of the new cluster should be the same as or newer (by at most one major version) than the cluster that was used to create the snapshot. For example, you can restore a 1.x snapshot to a 2.x cluster, but not a 1.x snapshot to a 5.x cluster.
- If the new cluster has a smaller size, additional considerations should be made. First of all, it's necessary to make sure
- that the new cluster has enough capacity to store all indices in the snapshot. It's possible to change index settings
- during restore to reduce the number of replicas, which can help with restoring snapshots into a smaller cluster. It's also
- possible to select only a subset of the indices using the `indices` parameter.
- If indices in the original cluster were assigned to particular nodes using
- <<shard-allocation-filtering,shard allocation filtering>>, the same rules will be enforced in the new cluster. Therefore,
- if the new cluster doesn't contain nodes with the appropriate attributes that a restored index can be allocated on, the
- index will not be successfully restored unless these index allocation settings are changed during the restore operation.
- The restore operation also checks that restored persistent settings are compatible with the current cluster to avoid accidentally
- restoring incompatible settings such as `discovery.zen.minimum_master_nodes`, which could disable a smaller cluster until the
- required number of master-eligible nodes is added. If you need to restore a snapshot with incompatible persistent settings, try
- restoring it without the global cluster state.
- [float]
- === Snapshot status
- A list of currently running snapshots with their detailed status information can be obtained using the following command:
- [source,sh]
- -----------------------------------
- GET /_snapshot/_status
- -----------------------------------
- // CONSOLE
- // TEST[continued]
- In this format, the command will return information about all currently running snapshots. By specifying a repository name, it's possible
- to limit the results to a particular repository:
- [source,sh]
- -----------------------------------
- GET /_snapshot/my_backup/_status
- -----------------------------------
- // CONSOLE
- // TEST[continued]
- If both repository name and snapshot id are specified, this command will return detailed status information for the given snapshot even
- if it's not currently running:
- [source,sh]
- -----------------------------------
- GET /_snapshot/my_backup/snapshot_1/_status
- -----------------------------------
- // CONSOLE
- // TEST[continued]
- Multiple ids are also supported:
- [source,sh]
- -----------------------------------
- GET /_snapshot/my_backup/snapshot_1,snapshot_2/_status
- -----------------------------------
- // CONSOLE
- // TEST[continued]
- [float]
- === Monitoring snapshot/restore progress
- There are several ways to monitor the progress of the snapshot and restore processes while they are running. Both
- operations support the `wait_for_completion` parameter, which blocks the client until the operation is completed. This is
- the simplest method to get notified about operation completion.
- The snapshot operation can also be monitored by periodic calls to the snapshot info API:
- [source,sh]
- -----------------------------------
- GET /_snapshot/my_backup/snapshot_1
- -----------------------------------
- // CONSOLE
- // TEST[continued]
- Note that the snapshot info operation uses the same resources and thread pool as the snapshot operation. So,
- executing a snapshot info operation while large shards are being snapshotted can cause the snapshot info operation to wait
- for available resources before returning the result. On very large shards the wait time can be significant.
- To get more immediate and complete information about snapshots, the snapshot status command can be used instead:
- [source,sh]
- -----------------------------------
- GET /_snapshot/my_backup/snapshot_1/_status
- -----------------------------------
- // CONSOLE
- // TEST[continued]
- While the snapshot info API returns only basic information about the snapshot in progress, the snapshot status API returns a
- complete breakdown of the current state for each shard participating in the snapshot.
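A polling loop over the snapshot info endpoint can be sketched as below. The `fetch_state` callable is an assumption of this sketch: in practice it would issue `GET /_snapshot/my_backup/snapshot_1` with an HTTP client and extract the `state` field.

```python
import time

def wait_for_snapshot(fetch_state, poll_interval: float = 0.0) -> str:
    """Poll a snapshot's state until it leaves IN_PROGRESS, then return
    the terminal state (SUCCESS, FAILED, PARTIAL, ...)."""
    while True:
        state = fetch_state()  # hypothetical callable, see lead-in
        if state != "IN_PROGRESS":
            return state
        time.sleep(poll_interval)

# Stubbed example: the "cluster" reports IN_PROGRESS twice, then SUCCESS.
states = iter(["IN_PROGRESS", "IN_PROGRESS", "SUCCESS"])
wait_for_snapshot(lambda: next(states))
# -> "SUCCESS"
```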
- The restore process piggybacks on the standard recovery mechanism of Elasticsearch. As a result, standard recovery
- monitoring services can be used to monitor the state of restore. When the restore operation is executed, the cluster
- typically goes into the `red` state. This happens because the restore operation starts with "recovering" the primary shards of the
- restored indices. During this operation the primary shards become unavailable, which manifests itself in the `red` cluster
- state. Once recovery of the primary shards is completed, Elasticsearch switches to the standard replication process that
- creates the required number of replicas; at this point the cluster switches to the `yellow` state. Once all required replicas
- are created, the cluster switches to the `green` state.
- The cluster health operation provides only a high level status of the restore process. It's possible to get more
- detailed insight into the current state of the recovery process by using <<indices-recovery, indices recovery>> and
- <<cat-recovery, cat recovery>> APIs.
- [float]
- === Stopping currently running snapshot and restore operations
- The snapshot and restore framework allows running only one snapshot or one restore operation at a time. If a currently
- running snapshot was started by mistake, or is taking unusually long, it can be terminated using the snapshot delete operation.
- The snapshot delete operation checks whether the deleted snapshot is currently running and, if it is, stops
- that snapshot before deleting the snapshot data from the repository.
- [source,sh]
- -----------------------------------
- DELETE /_snapshot/my_backup/snapshot_1
- -----------------------------------
- // CONSOLE
- // TEST[continued]
- The restore operation uses the standard shard recovery mechanism. Therefore, any currently running restore operation can
- be canceled by deleting the indices that are being restored. Note that data for all deleted indices will be removed
- from the cluster as a result of this operation.
- [float]
- === Effect of cluster blocks on snapshot and restore operations
- Many snapshot and restore operations are affected by cluster and index blocks. For example, registering and unregistering
- repositories require global metadata write access. The snapshot operation requires that all indices and their metadata, as
- well as the global metadata, be readable. The restore operation requires the global metadata to be writable; however,
- index-level blocks are ignored during restore because indices are essentially recreated during restore.
- Note that repository content is not part of the cluster and therefore cluster blocks don't affect internal
- repository operations such as listing or deleting snapshots from an already registered repository.
|