[[modules-snapshots]]
== Snapshot And Restore

You can store snapshots of individual indices or an entire cluster in
a remote repository like a shared file system, S3, or HDFS. These snapshots
are great for backups because they can be restored relatively quickly. However,
snapshots can only be restored to versions of Elasticsearch that can read the
indices:

* A snapshot of an index created in 5.x can be restored to 6.x.
* A snapshot of an index created in 2.x can be restored to 5.x.
* A snapshot of an index created in 1.x can be restored to 2.x.

Conversely, snapshots of indices created in 1.x **cannot** be restored to
5.x or 6.x, and snapshots of indices created in 2.x **cannot** be restored
to 6.x.

Snapshots are incremental and can contain indices created in various
versions of Elasticsearch. If any indices in a snapshot were created in an
incompatible version, you will not be able to restore the snapshot.

IMPORTANT: When backing up your data prior to an upgrade, keep in mind that you
won't be able to restore snapshots after you upgrade if they contain indices
created in a version that's incompatible with the upgrade version.

If you end up in a situation where you need to restore a snapshot of an index
that is incompatible with the version of the cluster you are currently running,
you can restore it on the latest compatible version and use
<<reindex-from-remote,reindex-from-remote>> to rebuild the index on the current
version. Reindexing from remote is only possible if the original index has
source enabled. Retrieving and reindexing the data can take significantly longer
than simply restoring a snapshot. If you have a large amount of data, we
recommend testing the reindex from remote process with a subset of your data to
understand the time requirements before proceeding.
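
As a sketch, a reindex from remote request could look like the following. The
host and index names here are placeholders, and the remote host would also need
to be whitelisted in the `reindex.remote.whitelist` setting:

[source,js]
-----------------------------------
POST _reindex
{
  "source": {
    "remote": {
      "host": "http://old-cluster:9200"
    },
    "index": "incompatible_index"
  },
  "dest": {
    "index": "incompatible_index"
  }
}
-----------------------------------
// CONSOLE
// TEST[skip:placeholder remote host]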

[float]
=== Repositories

You must register a snapshot repository before you can perform snapshot and
restore operations. We recommend creating a new snapshot repository for each
major version. The valid repository settings depend on the repository type.

If you register the same snapshot repository with multiple clusters, only
one cluster should have write access to the repository. All other clusters
connected to that repository should set the repository to `readonly` mode.

NOTE: The snapshot format can change across major versions, so if you have
clusters on different major versions trying to write to the same repository,
new snapshots written by one version will not be visible to the other. While
setting the repository to `readonly` on all but one of the clusters should work
with multiple clusters differing by one major version, it is not a supported
configuration.

For example, the following request registers a shared file system repository
named `my_backup`:

[source,js]
-----------------------------------
PUT /_snapshot/my_backup
{
  "type": "fs",
  "settings": {
    "location": "my_backup_location"
  }
}
-----------------------------------
// CONSOLE
// TESTSETUP
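
On clusters that should only read from a shared repository, a read-only
registration might look like the following sketch (the repository name is
illustrative; `readonly` is one of the shared file system repository settings
described below):

[source,js]
-----------------------------------
PUT /_snapshot/my_readonly_backup
{
  "type": "fs",
  "settings": {
    "location": "my_backup_location",
    "readonly": true
  }
}
-----------------------------------
// CONSOLE
// TEST[skip:illustrative read-only registration]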

To retrieve information about a registered repository, use a GET request:

[source,js]
-----------------------------------
GET /_snapshot/my_backup
-----------------------------------
// CONSOLE

which returns:

[source,js]
-----------------------------------
{
  "my_backup": {
    "type": "fs",
    "settings": {
      "location": "my_backup_location"
    }
  }
}
-----------------------------------
// TESTRESPONSE

To retrieve information about multiple repositories, specify a
comma-delimited list of repositories. You can also use the * wildcard when
specifying repository names. For example, the following request retrieves
information about all of the snapshot repositories that start with `repo` or
contain `backup`:

[source,js]
-----------------------------------
GET /_snapshot/repo*,*backup*
-----------------------------------
// CONSOLE

To retrieve information about all registered snapshot repositories, omit the
repository name or specify `_all`:

[source,js]
-----------------------------------
GET /_snapshot
-----------------------------------
// CONSOLE

or

[source,js]
-----------------------------------
GET /_snapshot/_all
-----------------------------------
// CONSOLE

[float]
===== Shared File System Repository

The shared file system repository (`"type": "fs"`) uses the shared file system to store snapshots. In order to register
the shared file system repository it is necessary to mount the same shared filesystem to the same location on all
master and data nodes. This location (or one of its parent directories) must be registered in the `path.repo`
setting on all master and data nodes.

Assuming that the shared filesystem is mounted to `/mount/backups/my_backup`, the following setting should be added to
the `elasticsearch.yml` file:

[source,yaml]
--------------
path.repo: ["/mount/backups", "/mount/longterm_backups"]
--------------

The `path.repo` setting supports Microsoft Windows UNC paths as long as at least the server name and share are specified as
a prefix and back slashes are properly escaped:

[source,yaml]
--------------
path.repo: ["\\\\MY_SERVER\\Snapshots"]
--------------

After all nodes are restarted, the following command can be used to register the shared file system repository with
the name `my_fs_backup`:

[source,js]
-----------------------------------
PUT /_snapshot/my_fs_backup
{
  "type": "fs",
  "settings": {
    "location": "/mount/backups/my_fs_backup_location",
    "compress": true
  }
}
-----------------------------------
// CONSOLE
// TEST[skip:no access to absolute path]

If the repository location is specified as a relative path this path will be resolved against the first path specified
in `path.repo`:

[source,js]
-----------------------------------
PUT /_snapshot/my_fs_backup
{
  "type": "fs",
  "settings": {
    "location": "my_fs_backup_location",
    "compress": true
  }
}
-----------------------------------
// CONSOLE
// TEST[continued]

The following settings are supported:

[horizontal]
`location`:: Location of the snapshots. Mandatory.
`compress`:: Turns on compression of the snapshot files. Compression is applied only to metadata files (index mapping and settings). Data files are not compressed. Defaults to `true`.
`chunk_size`:: Big files can be broken down into chunks during snapshotting if needed. The chunk size can be specified in bytes or by
using size value notation, i.e. `1g`, `10m`, `5k`. Defaults to `null` (unlimited chunk size).
`max_restore_bytes_per_sec`:: Throttles per node restore rate. Defaults to `40mb` per second.
`max_snapshot_bytes_per_sec`:: Throttles per node snapshot rate. Defaults to `40mb` per second.
`readonly`:: Makes repository read-only. Defaults to `false`.
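
As a sketch, several of these settings can be combined in one registration
request (the repository name, location, and values here are illustrative):

[source,js]
-----------------------------------
PUT /_snapshot/my_throttled_backup
{
  "type": "fs",
  "settings": {
    "location": "my_throttled_backup_location",
    "chunk_size": "1g",
    "max_snapshot_bytes_per_sec": "20mb",
    "max_restore_bytes_per_sec": "20mb"
  }
}
-----------------------------------
// CONSOLE
// TEST[skip:illustrative settings combination]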

[float]
===== Read-only URL Repository

The URL repository (`"type": "url"`) can be used as an alternative read-only way to access data created by the shared file
system repository. The URL specified in the `url` parameter should point to the root of the shared filesystem repository.
The following settings are supported:

[horizontal]
`url`:: Location of the snapshots. Mandatory.

The URL repository supports the following protocols: "http", "https", "ftp", "file" and "jar". URL repositories with `http:`,
`https:`, and `ftp:` URLs have to be whitelisted by specifying allowed URLs in the `repositories.url.allowed_urls` setting.
This setting supports wildcards in the place of host, path, query, and fragment. For example:

[source,yaml]
-----------------------------------
repositories.url.allowed_urls: ["http://www.example.org/root/*", "https://*.mydomain.com/*?*#*"]
-----------------------------------

URL repositories with `file:` URLs can only point to locations registered in the `path.repo` setting, similar to the
shared file system repository.
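
For instance, a URL repository pointing at the root of an existing shared file
system repository could be registered along these lines (the URL is a
placeholder and would need to match `repositories.url.allowed_urls`):

[source,js]
-----------------------------------
PUT /_snapshot/my_url_backup
{
  "type": "url",
  "settings": {
    "url": "http://www.example.org/root/my_backup"
  }
}
-----------------------------------
// CONSOLE
// TEST[skip:placeholder URL]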

[float]
===== Repository plugins

Other repository backends are available in these official plugins:

* {plugins}/repository-s3.html[repository-s3] for S3 repository support
* {plugins}/repository-hdfs.html[repository-hdfs] for HDFS repository support in Hadoop environments
* {plugins}/repository-azure.html[repository-azure] for Azure storage repositories
* {plugins}/repository-gcs.html[repository-gcs] for Google Cloud Storage repositories

[float]
===== Repository Verification

When a repository is registered, it's immediately verified on all master and data nodes to make sure that it is functional
on all nodes currently present in the cluster. The `verify` parameter can be used to explicitly disable the repository
verification when registering or updating a repository:

[source,js]
-----------------------------------
PUT /_snapshot/my_unverified_backup?verify=false
{
  "type": "fs",
  "settings": {
    "location": "my_unverified_backup_location"
  }
}
-----------------------------------
// CONSOLE
// TEST[continued]

The verification process can also be executed manually by running the following command:

[source,js]
-----------------------------------
POST /_snapshot/my_unverified_backup/_verify
-----------------------------------
// CONSOLE
// TEST[continued]

It returns a list of nodes where the repository was successfully verified or an error message if the verification process failed.

[float]
=== Snapshot

A repository can contain multiple snapshots of the same cluster. Snapshots are identified by unique names within the
cluster. A snapshot with the name `snapshot_1` in the repository `my_backup` can be created by executing the following
command:

[source,js]
-----------------------------------
PUT /_snapshot/my_backup/snapshot_1?wait_for_completion=true
-----------------------------------
// CONSOLE
// TEST[continued]

The `wait_for_completion` parameter specifies whether or not the request should return immediately after snapshot
initialization (default) or wait for snapshot completion. During snapshot initialization, information about all
previous snapshots is loaded into memory, which means that in large repositories it may take several seconds (or
even minutes) for this command to return even if the `wait_for_completion` parameter is set to `false`.

By default a snapshot of all open and started indices in the cluster is created. This behavior can be changed by
specifying the list of indices in the body of the snapshot request.

[source,js]
-----------------------------------
PUT /_snapshot/my_backup/snapshot_2?wait_for_completion=true
{
  "indices": "index_1,index_2",
  "ignore_unavailable": true,
  "include_global_state": false
}
-----------------------------------
// CONSOLE
// TEST[continued]

The list of indices that should be included in the snapshot can be specified using the `indices` parameter, which
supports <<multi-index,multi index syntax>>. The snapshot request also supports the
`ignore_unavailable` option. Setting it to `true` will cause indices that do not exist to be ignored during snapshot
creation. By default, when the `ignore_unavailable` option is not set and an index is missing, the snapshot request will fail.
By setting `include_global_state` to `false` it's possible to prevent the cluster global state from being stored as part of
the snapshot. By default, the entire snapshot will fail if one or more indices participating in the snapshot don't have
all primary shards available. This behaviour can be changed by setting `partial` to `true`, as in the sketch below.
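
A minimal sketch of a partial snapshot request (the snapshot name is
illustrative):

[source,js]
-----------------------------------
PUT /_snapshot/my_backup/snapshot_partial?wait_for_completion=true
{
  "indices": "index_1,index_2",
  "partial": true
}
-----------------------------------
// CONSOLE
// TEST[skip:illustrative partial snapshot]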

The index snapshot process is incremental. In the process of making the index snapshot Elasticsearch analyses
the list of the index files that are already stored in the repository and copies only files that were created or
changed since the last snapshot. That allows multiple snapshots to be preserved in the repository in a compact form.
The snapshotting process is executed in a non-blocking fashion. All indexing and searching operations can continue to be
executed against the index that is being snapshotted. However, a snapshot represents the point-in-time view of the index
at the moment when the snapshot was created, so no records that were added to the index after the snapshot process was started
will be present in the snapshot. The snapshot process starts immediately for the primary shards that have been started
and are not relocating at the moment. Before version 1.2.0, the snapshot operation failed if the cluster had any relocating or
initializing primaries of indices participating in the snapshot. Starting with version 1.2.0, Elasticsearch waits for
relocation or initialization of shards to complete before snapshotting them.

Besides creating a copy of each index the snapshot process can also store global cluster metadata, which includes persistent
cluster settings and templates. The transient settings and registered snapshot repositories are not stored as part of
the snapshot.

Only one snapshot process can be executed in the cluster at any time. While a snapshot of a particular shard is being
created this shard cannot be moved to another node, which can interfere with the rebalancing process and allocation
filtering. Elasticsearch will only be able to move a shard to another node (according to the current allocation
filtering settings and rebalancing algorithm) once the snapshot is finished.

Once a snapshot is created information about this snapshot can be obtained using the following command:

[source,sh]
-----------------------------------
GET /_snapshot/my_backup/snapshot_1
-----------------------------------
// CONSOLE
// TEST[continued]

This command returns basic information about the snapshot including start and end time, version of
Elasticsearch that created the snapshot, the list of included indices, the current state of the
snapshot and the list of failures that occurred during the snapshot. The snapshot `state` can be:

[horizontal]
`IN_PROGRESS`::
  The snapshot is currently running.

`SUCCESS`::
  The snapshot finished and all shards were stored successfully.

`FAILED`::
  The snapshot finished with an error and failed to store any data.

`PARTIAL`::
  The global cluster state was stored, but data of at least one shard wasn't stored successfully.
  The `failure` section in this case should contain more detailed information about shards
  that were not processed correctly.

`INCOMPATIBLE`::
  The snapshot was created with an old version of Elasticsearch and therefore is incompatible with
  the current version of the cluster.

As with repositories, information about multiple snapshots can be queried in one go, with wildcards supported:

[source,sh]
-----------------------------------
GET /_snapshot/my_backup/snapshot_*,some_other_snapshot
-----------------------------------
// CONSOLE
// TEST[continued]

All snapshots currently stored in the repository can be listed using the following command:

[source,sh]
-----------------------------------
GET /_snapshot/my_backup/_all
-----------------------------------
// CONSOLE
// TEST[continued]

The command fails if some of the snapshots are unavailable. The boolean parameter `ignore_unavailable` can be used to
return all snapshots that are currently available.

Getting all snapshots in the repository can be costly on cloud-based repositories,
both from a cost and performance perspective. If the only information required is
the snapshot names/uuids in the repository and the indices in each snapshot, then
the optional boolean parameter `verbose` can be set to `false` to execute a more
performant and cost-effective retrieval of the snapshots in the repository. Note
that setting `verbose` to `false` will omit all other information about the snapshot
such as status information, the number of snapshotted shards, etc. The default
value of the `verbose` parameter is `true`.
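
For example, the two parameters can be combined for a lightweight listing that
also tolerates unavailable snapshots:

[source,sh]
-----------------------------------
GET /_snapshot/my_backup/_all?ignore_unavailable=true&verbose=false
-----------------------------------
// CONSOLE
// TEST[continued]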

A currently running snapshot can be retrieved using the following command:

[source,sh]
-----------------------------------
GET /_snapshot/my_backup/_current
-----------------------------------
// CONSOLE
// TEST[continued]

A snapshot can be deleted from the repository using the following command:

[source,sh]
-----------------------------------
DELETE /_snapshot/my_backup/snapshot_2
-----------------------------------
// CONSOLE
// TEST[continued]

When a snapshot is deleted from a repository, Elasticsearch deletes all files that are associated with the deleted
snapshot and not used by any other snapshots. If the delete snapshot operation is executed while the snapshot is being
created, the snapshotting process will be aborted and all files created as part of the snapshotting process will be
cleaned up. Therefore, the delete snapshot operation can be used to cancel long running snapshot operations that were
started by mistake.

A repository can be unregistered using the following command:

[source,sh]
-----------------------------------
DELETE /_snapshot/my_fs_backup
-----------------------------------
// CONSOLE
// TEST[continued]

When a repository is unregistered, Elasticsearch only removes the reference to the location where the repository is storing
the snapshots. The snapshots themselves are left untouched and in place.

[float]
=== Restore

A snapshot can be restored using the following command:

[source,sh]
-----------------------------------
POST /_snapshot/my_backup/snapshot_1/_restore
-----------------------------------
// CONSOLE
// TEST[continued]

By default, all indices in the snapshot are restored, and the cluster state is
*not* restored. It's possible to select the indices that should be restored as well
as to allow the global cluster state to be restored by using the `indices` and
`include_global_state` options in the restore request body. The list of indices
supports <<multi-index,multi index syntax>>. The `rename_pattern`
and `rename_replacement` options can also be used to rename indices on restore
using a regular expression that supports referencing the original text as
explained
http://docs.oracle.com/javase/6/docs/api/java/util/regex/Matcher.html#appendReplacement(java.lang.StringBuffer,%20java.lang.String)[here].
Set `include_aliases` to `false` to prevent aliases from being restored together
with associated indices.

[source,js]
-----------------------------------
POST /_snapshot/my_backup/snapshot_1/_restore
{
  "indices": "index_1,index_2",
  "ignore_unavailable": true,
  "include_global_state": true,
  "rename_pattern": "index_(.+)",
  "rename_replacement": "restored_index_$1"
}
-----------------------------------
// CONSOLE
// TEST[continued]
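
For instance, to restore an index without its aliases, a request along these
lines could be used (illustrative only):

[source,js]
-----------------------------------
POST /_snapshot/my_backup/snapshot_1/_restore
{
  "indices": "index_1",
  "include_aliases": false
}
-----------------------------------
// CONSOLE
// TEST[skip:illustrative include_aliases example]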

The restore operation can be performed on a functioning cluster. However, an
existing index can only be restored if it's <<indices-open-close,closed>> and
has the same number of shards as the index in the snapshot. The restore
operation automatically opens restored indices if they were closed and creates
new indices if they didn't exist in the cluster. If the cluster state is restored
with `include_global_state` (defaults to `false`), the restored templates that
don't currently exist in the cluster are added and existing templates with the
same name are replaced by the restored templates. The restored persistent
settings are added to the existing persistent settings.

[float]
==== Partial restore

By default, the entire restore operation will fail if one or more indices participating in the operation don't have
snapshots of all shards available. This can occur, for example, if some shards failed to snapshot. It is still possible to
restore such indices by setting `partial` to `true`. Please note that only successfully snapshotted shards will be
restored in this case and all missing shards will be recreated empty.
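
A minimal sketch of such a partial restore:

[source,js]
-----------------------------------
POST /_snapshot/my_backup/snapshot_1/_restore
{
  "indices": "index_1",
  "partial": true
}
-----------------------------------
// CONSOLE
// TEST[skip:illustrative partial restore]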

[float]
==== Changing index settings during restore

Most index settings can be overridden during the restore process. For example, the following command will restore
the index `index_1` without creating any replicas while switching back to the default refresh interval:

[source,js]
-----------------------------------
POST /_snapshot/my_backup/snapshot_1/_restore
{
  "indices": "index_1",
  "index_settings": {
    "index.number_of_replicas": 0
  },
  "ignore_index_settings": [
    "index.refresh_interval"
  ]
}
-----------------------------------
// CONSOLE
// TEST[continued]

Please note that some settings such as `index.number_of_shards` cannot be changed during the restore operation.

[float]
==== Restoring to a different cluster

The information stored in a snapshot is not tied to a particular cluster or a cluster name. Therefore it's possible to
restore a snapshot made from one cluster into another cluster. All that is required is registering the repository
containing the snapshot in the new cluster and starting the restore process. The new cluster doesn't have to have the
same size or topology. However, the version of the new cluster should be the same or newer (only 1 major version newer)
than the cluster that was used to create the snapshot. For example, you can restore a 1.x snapshot to a 2.x cluster,
but not a 1.x snapshot to a 5.x cluster.

If the new cluster has a smaller size additional considerations should be made. First of all it's necessary to make sure
that the new cluster has enough capacity to store all indices in the snapshot. It's possible to change index settings
during restore to reduce the number of replicas, which can help with restoring snapshots into a smaller cluster. It's also
possible to select only a subset of the indices using the `indices` parameter.

If indices in the original cluster were assigned to particular nodes using
<<shard-allocation-filtering,shard allocation filtering>>, the same rules will be enforced in the new cluster. Therefore
if the new cluster doesn't contain nodes with appropriate attributes that a restored index can be allocated on, such an
index will not be successfully restored unless these index allocation settings are changed during the restore operation,
for example as sketched below.
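
As a sketch, assuming the original index required a hypothetical node attribute
via `index.routing.allocation.require.rack_id`, that setting could be dropped
during restore:

[source,js]
-----------------------------------
POST /_snapshot/my_backup/snapshot_1/_restore
{
  "indices": "index_1",
  "ignore_index_settings": [
    "index.routing.allocation.require.rack_id"
  ]
}
-----------------------------------
// CONSOLE
// TEST[skip:hypothetical allocation attribute]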

The restore operation also checks that restored persistent settings are compatible with the current cluster to avoid accidentally
restoring incompatible settings such as `discovery.zen.minimum_master_nodes`, which could disable a smaller cluster until the
required number of master eligible nodes is added. If you need to restore a snapshot with incompatible persistent settings, try
restoring it without the global cluster state.

[float]
=== Snapshot status

A list of currently running snapshots with their detailed status information can be obtained using the following command:

[source,sh]
-----------------------------------
GET /_snapshot/_status
-----------------------------------
// CONSOLE
// TEST[continued]

In this format, the command will return information about all currently running snapshots. By specifying a repository name, it's possible
to limit the results to a particular repository:

[source,sh]
-----------------------------------
GET /_snapshot/my_backup/_status
-----------------------------------
// CONSOLE
// TEST[continued]

If both repository name and snapshot id are specified, this command will return detailed status information for the given snapshot even
if it's not currently running:

[source,sh]
-----------------------------------
GET /_snapshot/my_backup/snapshot_1/_status
-----------------------------------
// CONSOLE
// TEST[continued]

Multiple ids are also supported:

[source,sh]
-----------------------------------
GET /_snapshot/my_backup/snapshot_1,snapshot_2/_status
-----------------------------------
// CONSOLE
// TEST[continued]

[float]
=== Monitoring snapshot/restore progress

There are several ways to monitor the progress of the snapshot and restore processes while they are running. Both
operations support the `wait_for_completion` parameter, which blocks the client until the operation is completed. This is
the simplest method that can be used to get notified about operation completion.

The snapshot operation can also be monitored by periodic calls to the snapshot info:

[source,sh]
-----------------------------------
GET /_snapshot/my_backup/snapshot_1
-----------------------------------
// CONSOLE
// TEST[continued]

Please note that the snapshot info operation uses the same resources and thread pool as the snapshot operation. So,
executing a snapshot info operation while large shards are being snapshotted can cause the snapshot info operation to wait
for available resources before returning the result. On very large shards the wait time can be significant.

To get more immediate and complete information about snapshots the snapshot status command can be used instead:

[source,sh]
-----------------------------------
GET /_snapshot/my_backup/snapshot_1/_status
-----------------------------------
// CONSOLE
// TEST[continued]

While the snapshot info method returns only basic information about the snapshot in progress, the snapshot status command returns a
complete breakdown of the current state for each shard participating in the snapshot.

The restore process piggybacks on the standard recovery mechanism of Elasticsearch. As a result, standard recovery
monitoring services can be used to monitor the state of restore. When the restore operation is executed the cluster
typically goes into the `red` state. This happens because the restore operation starts with "recovering" primary shards of the
restored indices. During this operation the primary shards become unavailable, which manifests itself in the `red` cluster
state. Once recovery of primary shards is completed, Elasticsearch switches to the standard replication process that
creates the required number of replicas, and the cluster switches to the `yellow` state. Once all required replicas
are created, the cluster switches to the `green` state.

The cluster health operation provides only a high level status of the restore process. It's possible to get more
detailed insight into the current state of the recovery process by using the <<indices-recovery, indices recovery>> and
<<cat-recovery, cat recovery>> APIs.
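
For example, the cat recovery API gives a quick per-shard view of ongoing
recoveries:

[source,sh]
-----------------------------------
GET _cat/recovery?v
-----------------------------------
// CONSOLE
// TEST[continued]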

[float]
=== Stopping currently running snapshot and restore operations

The snapshot and restore framework allows running only one snapshot or one restore operation at a time. If a currently
running snapshot was executed by mistake, or takes unusually long, it can be terminated using the snapshot delete operation.
The snapshot delete operation checks whether the deleted snapshot is currently running and, if it is, the delete operation stops
that snapshot before deleting the snapshot data from the repository.

[source,sh]
-----------------------------------
DELETE /_snapshot/my_backup/snapshot_1
-----------------------------------
// CONSOLE
// TEST[continued]

The restore operation uses the standard shard recovery mechanism. Therefore, any currently running restore operation can
be canceled by deleting the indices that are being restored. Please note that data for all deleted indices will be removed
from the cluster as a result of this operation.
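
As an illustration, assuming an index named `restored_index_1` is in the middle
of being restored, its restore can be canceled by deleting the index:

[source,sh]
-----------------------------------
DELETE /restored_index_1
-----------------------------------
// CONSOLE
// TEST[skip:illustrative restore cancellation]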

[float]
=== Effect of cluster blocks on snapshot and restore operations

Many snapshot and restore operations are affected by cluster and index blocks. For example, registering and unregistering
repositories require write global metadata access. The snapshot operation requires that all indices, their metadata, and
the global metadata be readable. The restore operation requires the global metadata to be writable; however,
the index level blocks are ignored during restore because indices are essentially recreated during restore.

Please note that repository content is not part of the cluster and therefore cluster blocks don't affect internal
repository operations such as listing or deleting snapshots from an already registered repository.