[[snapshots-restore-snapshot]]
== Restore a snapshot

This guide shows you how to restore a snapshot. Snapshots are a convenient way
to store a copy of your data outside of a cluster. You can restore a snapshot
to recover indices and data streams after deletion or a hardware failure. You
can also use snapshots to transfer data between clusters.

In this guide, you'll learn how to:

* Get a list of available snapshots
* Restore an index or data stream from a snapshot
* Restore an entire cluster
* Monitor the restore operation
* Cancel an ongoing restore

This guide also provides tips for <<restore-different-cluster,restoring to
another cluster>> and <<troubleshoot-restore,troubleshooting common restore
errors>>.
[discrete]
[[restore-snapshot-prereqs]]
=== Prerequisites

include::apis/restore-snapshot-api.asciidoc[tag=restore-prereqs]

[discrete]
[[restore-snapshot-considerations]]
=== Considerations

When restoring data from a snapshot, keep the following in mind:

* If you restore a data stream, you also restore its backing indices.
* You can only restore an existing index if it's <<indices-close,closed>> and
the index in the snapshot has the same number of primary shards.
* You can't restore an existing open index. This includes backing indices for a
data stream.
* The restore operation automatically opens restored indices, including backing
indices.
* You can restore only a specific backing index from a data stream. However, the
restore operation doesn't add the restored backing index to any existing data
stream.
[discrete]
[[get-snapshot-list]]
=== Get a list of available snapshots

To view a list of available snapshots in {kib}, go to the main menu and click
*Stack Management > Snapshot and Restore*.

You can also use the <<get-snapshot-repo-api,get repository API>> and the
<<get-snapshot-api,get snapshot API>> to find snapshots that are available to
restore. First, use the get repository API to fetch a list of registered
snapshot repositories.

[source,console]
----
GET _snapshot
----
// TEST[setup:setup-snapshots]

Then use the get snapshot API to get a list of snapshots in a specific
repository.

[source,console]
----
GET _snapshot/my_repository/*?verbose=false
----
// TEST[setup:setup-snapshots]
[discrete]
[[restore-index-data-stream]]
=== Restore an index or data stream

You can restore a snapshot using {kib}'s *Snapshot and Restore* feature or the
<<restore-snapshot-api,restore snapshot API>>.

In most cases, you only need to restore a specific index or data stream from a
snapshot. However, you can't restore an existing open index.

To avoid conflicts with existing indices and data streams, use one of the
following methods:

* <<delete-restore>>
* <<rename-on-restore>>
[discrete]
[[delete-restore]]
==== Delete and restore

The simplest way to avoid conflicts is to delete an existing index or data
stream before restoring it. To prevent the accidental re-creation of the index
or data stream, we recommend you temporarily stop all indexing until the restore
operation is complete.

WARNING: If the
<<action-destructive-requires-name,`action.destructive_requires_name`>> cluster
setting is `false`, don't use the <<indices-delete-index,delete index API>> to
target the `*` or `.*` wildcard pattern. If you use {es}'s security features,
this will delete system indices required for authentication. Instead, target the
`*,-.*` wildcard pattern to exclude these system indices and other index names
that begin with a dot (`.`).

[source,console]
----
# Delete an index
DELETE my-index

# Delete a data stream
DELETE _data_stream/logs-my_app-default
----
// TEST[setup:setup-snapshots]

By default, a restore request attempts to restore all indices and data
streams in the snapshot, including system indices. If your cluster already
contains one or more of these system indices, the request will return an error.

To avoid this error, specify the indices and data streams to restore. To exclude
system indices and other index names that begin with a dot (`.`), append the
`-.*` wildcard pattern. To restore all indices and data streams except dot
indices, use `*,-.*`.

[source,console]
----
POST _snapshot/my_repository/my_snapshot_2099.05.06/_restore
{
  "indices": "my-index,logs-my_app-default"
}
----
// TEST[continued]
// TEST[s/_restore/_restore?wait_for_completion=true/]
[discrete]
[[rename-on-restore]]
==== Rename on restore

If you want to avoid deleting existing data, you can instead rename the indices
and data streams you restore. You typically use this method to compare existing
data to historical data from a snapshot. For example, you can use this method to
review documents after an accidental update or deletion.

Before you start, ensure the cluster has enough capacity for both the existing
and restored data.

The following restore snapshot API request prepends `restored-` to the name of
any restored index or data stream.

[source,console]
----
POST _snapshot/my_repository/my_snapshot_2099.05.06/_restore
{
  "indices": "my-index,logs-my_app-default",
  "rename_pattern": "(.+)",
  "rename_replacement": "restored-$1"
}
----
// TEST[setup:setup-snapshots]
// TEST[s/_restore/_restore?wait_for_completion=true/]

If the rename options produce two or more indices or data streams with the same
name, the restore operation fails.

If you rename a data stream, its backing indices are also renamed. For example,
if you rename the `logs-my_app-default` data stream to
`restored-logs-my_app-default`, the backing index
`.ds-logs-my_app-default-2099.03.09-000005` is renamed to
`.ds-restored-logs-my_app-default-2099.03.09-000005`.

When the restore operation is complete, you can compare the original and
restored data. If you no longer need an original index or data stream, you can
delete it and use a <<docs-reindex,reindex>> to rename the restored one.

[source,console]
----
# Delete the original index
DELETE my-index

# Reindex the restored index to rename it
POST _reindex
{
  "source": {
    "index": "restored-my-index"
  },
  "dest": {
    "index": "my-index"
  }
}

# Delete the original data stream
DELETE _data_stream/logs-my_app-default

# Reindex the restored data stream to rename it
POST _reindex
{
  "source": {
    "index": "restored-logs-my_app-default"
  },
  "dest": {
    "index": "logs-my_app-default",
    "op_type": "create"
  }
}
----
// TEST[continued]
[discrete]
[[restore-entire-cluster]]
=== Restore an entire cluster

In some cases, you need to restore an entire cluster from a snapshot, including
the cluster state and all system indices. These cases should be rare, such as in
the event of a catastrophic failure.

Restoring an entire cluster involves deleting important system indices,
including those used for authentication. Consider whether you can restore
specific indices or data streams instead.

If you're restoring to a different cluster, see <<restore-different-cluster>>
before you start.

. If you <<backup-cluster-configuration,backed up the cluster's configuration
files>>, you can restore them to each node. This step is optional and requires a
<<restart-upgrade,full cluster restart>>.
+
After you shut down a node, copy the backed-up configuration files over to the
node's `$ES_PATH_CONF` directory. Before restarting the node, ensure
`elasticsearch.yml` contains the appropriate node roles, node name, and
other node-specific settings.
+
If you choose to perform this step, you must repeat this process on each node in
the cluster.
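+
As a sketch, the per-node process might look like the following on a
systemd-based package installation. The backup archive location is an
assumption for illustration; adjust the paths and service management commands
to match your environment.
+
[source,sh]
----
# Stop the node before replacing its configuration
sudo systemctl stop elasticsearch.service

# Copy the backed-up configuration files into the node's config directory
sudo cp -r /mnt/backups/elasticsearch-config/* "$ES_PATH_CONF"/

# Review node-specific settings, such as node roles and the node name,
# before restarting the node
sudo cat "$ES_PATH_CONF"/elasticsearch.yml

sudo systemctl start elasticsearch.service
----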
. Temporarily stop indexing and turn off the following features:
+
--
* ILM
+
[source,console]
----
POST _ilm/stop
----
////
[source,console]
----
POST _ilm/start
----
// TEST[continued]
////

* Machine Learning
+
[source,console]
----
POST _ml/set_upgrade_mode?enabled=true
----
////
[source,console]
----
POST _ml/set_upgrade_mode?enabled=false
----
// TEST[continued]
////
////
[source,console]
----
PUT _cluster/settings
{
  "persistent": {
    "xpack.monitoring.collection.enabled": true
  }
}
----
// TEST[warning:[xpack.monitoring.collection.enabled] setting was deprecated in Elasticsearch and will be removed in a future release! See the breaking changes documentation for the next major version.]
// TEST[continued]
////

* Watcher
+
[source,console]
----
POST _watcher/_stop
----
////
[source,console]
----
POST _watcher/_start
----
// TEST[continued]
////
--
. If you use {es} security features, log in to a node host, navigate to the {es}
installation directory, and add a user with the `superuser` role to the file
realm using the <<users-command,`elasticsearch-users`>> tool.
+
For example, the following command creates a user named `restore_user`.
+
[source,sh]
----
./bin/elasticsearch-users useradd restore_user -p my_password -r superuser
----
+
Use this file realm user to authenticate requests until the restore operation is
complete.
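+
Requests can then be authenticated as this user with any HTTP client. For
example, the following `curl` command checks cluster health as `restore_user`;
the host, port, and use of HTTPS are assumptions for illustration.
+
[source,sh]
----
# Authenticate as the temporary file realm user (host and port are examples)
curl -u restore_user:my_password "https://localhost:9200/_cluster/health?pretty"
----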
. Use the <<cluster-update-settings,cluster update settings API>> to set
<<action-destructive-requires-name,`action.destructive_requires_name`>> to
`false`. This lets you delete indices and data streams using wildcards.
+
[source,console]
----
PUT _cluster/settings
{
  "persistent": {
    "action.destructive_requires_name": false
  }
}
----
// TEST[setup:setup-snapshots]

. Delete existing indices and data streams on the cluster.
+
[source,console]
----
# Delete all indices
DELETE *

# Delete all data streams
DELETE _data_stream/*
----
// TEST[continued]
. Restore the entire snapshot, including the cluster state. This also restores
any system indices in the snapshot.
+
[source,console]
----
POST _snapshot/my_repository/my_snapshot_2099.05.06/_restore
{
  "indices": "*",
  "include_global_state": true
}
----
// TEST[continued]
// TEST[s/_restore/_restore?wait_for_completion=true/]

. When the restore operation is complete, resume indexing and restart any
features you stopped:
+
--
* ILM
+
[source,console]
----
POST _ilm/start
----

* Machine Learning
+
[source,console]
----
POST _ml/set_upgrade_mode?enabled=false
----

* Watcher
+
[source,console]
----
POST _watcher/_start
----
--

. If needed, reset the `action.destructive_requires_name` cluster setting.
+
[source,console]
----
PUT _cluster/settings
{
  "persistent": {
    "action.destructive_requires_name": null
  }
}
----
[discrete]
[[monitor-restore]]
=== Monitor a restore

The restore operation uses the <<indices-recovery,shard recovery process>> to
restore an index's primary shards from a snapshot. While the restore operation
recovers primary shards, the cluster will have a `yellow`
<<cluster-health,health status>>.

After all primary shards are recovered, the replication process creates and
distributes replicas across eligible data nodes. When replication is complete,
the cluster health status typically becomes `green`.

You can monitor the cluster health status using the <<cluster-health,cluster
health API>>.

[source,console]
----
GET _cluster/health
----
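If you're scripting a restore, the cluster health API's `wait_for_status` and
`timeout` parameters let a request block until the cluster reaches a given
status. For example, the following request waits up to 60 seconds for the
cluster to report a `green` status.

[source,console]
----
GET _cluster/health?wait_for_status=green&timeout=60s
----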
To get detailed information about ongoing shard recoveries, use the
<<indices-recovery,index recovery API>>.

[source,console]
----
GET my-index/_recovery
----
// TEST[setup:setup-snapshots]

To view any unassigned shards, use the <<cat-shards,cat shards API>>.

[source,console]
----
GET _cat/shards?v=true&h=index,shard,prirep,state,node,unassigned.reason&s=state
----

Unassigned shards have a `state` of `UNASSIGNED`. The `prirep` value is `p` for
primary shards and `r` for replicas. The `unassigned.reason` describes why the
shard remains unassigned.

To get a more in-depth explanation of an unassigned shard's allocation status,
use the <<cluster-allocation-explain,cluster allocation explanation API>>.

[source,console]
----
GET _cluster/allocation/explain?filter_path=index,node_allocation_decisions.node_name,node_allocation_decisions.deciders.*
{
  "index": "my-index",
  "shard": 0,
  "primary": false,
  "current_node": "my-node"
}
----
// TEST[s/^/PUT my-index\n/]
// TEST[s/"primary": false,/"primary": false/]
// TEST[s/"current_node": "my-node"//]
[discrete]
[[cancel-restore]]
=== Cancel a restore

You can delete an index or data stream to cancel its ongoing restore. This also
deletes any existing data in the cluster for the index or data stream. Deleting
an index or data stream doesn't affect the snapshot or its data.

[source,console]
----
# Delete an index
DELETE my-index

# Delete a data stream
DELETE _data_stream/logs-my_app-default
----
// TEST[setup:setup-snapshots]
[discrete]
[[restore-different-cluster]]
=== Restore to a different cluster

TIP: {ess} can help you restore snapshots from other deployments. See
{cloud}/ec-restoring-snapshots.html#ec-restore-across-clusters[Restore across
clusters].

Snapshots aren't tied to a particular cluster or a cluster name. You can create
a snapshot in one cluster and restore it in another
<<snapshot-restore-version-compatibility,compatible cluster>>. The topology of
the clusters doesn't need to match.

To restore a snapshot, its repository must be
<<snapshots-register-repository,registered>> and available to the new cluster.
If the original cluster still has write access to the repository, register the
repository in `readonly` mode. This prevents multiple clusters from writing to
the repository at the same time and corrupting the repository's contents.
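For example, the following request registers a shared file system repository in
`readonly` mode on the new cluster. The repository type and `location` value are
illustrative; use the type and settings that match your existing repository.

[source,console]
----
PUT _snapshot/my_repository
{
  "type": "fs",
  "settings": {
    "location": "my_backup_location",
    "readonly": true
  }
}
----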
Before you start a restore operation, ensure the new cluster has enough capacity
for any data streams or indices you want to restore. If the new cluster has a
smaller capacity, you can:

* Add nodes or upgrade your hardware to increase capacity.
* Restore fewer indices and data streams.
* Reduce the <<dynamic-index-number-of-replicas,number of replicas>> for
restored indices.
+
For example, the following restore snapshot API request uses the
`index_settings` option to set `index.number_of_replicas` to `1`.
+
[source,console]
----
POST _snapshot/my_repository/my_snapshot_2099.05.06/_restore
{
  "indices": "my-index,logs-my_app-default",
  "index_settings": {
    "index.number_of_replicas": 1
  }
}
----
// TEST[setup:setup-snapshots]
// TEST[s/^/DELETE my-index\nDELETE _data_stream\/logs-my_app-default\n/]
// TEST[s/_restore/_restore?wait_for_completion=true/]

If indices or backing indices in the original cluster were assigned to
particular nodes using <<shard-allocation-filtering,shard allocation
filtering>>, the same rules are enforced in the new cluster. If the new cluster
doesn't contain nodes with the appropriate attributes, a restored index won't
allocate successfully unless you change these index allocation settings during
the restore operation.

The restore operation also checks that restored persistent settings are
compatible with the current cluster to avoid accidentally restoring incompatible
settings. If you need to restore a snapshot with incompatible persistent
settings, try restoring it without the
<<restore-snapshot-api-include-global-state,global cluster state>>.
[discrete]
[[troubleshoot-restore]]
=== Troubleshoot restore errors

Here's how to resolve common errors returned by restore requests.

[discrete]
==== Cannot restore index [<index>] because an open index with same name already exists in the cluster

You can't restore an open index that already exists. To resolve this error, try
one of the methods in <<restore-index-data-stream>>.

[discrete]
==== Cannot restore index [<index>] with [x] shards from a snapshot of index [<snapshot-index>] with [y] shards

You can only restore an existing index if it's closed and the index in the
snapshot has the same number of primary shards. This error indicates the index
in the snapshot has a different number of primary shards.

To resolve this error, try one of the methods in <<restore-index-data-stream>>.
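To confirm the mismatch, compare the existing index's primary shard count with
the one recorded in the snapshot. For example, the first request below retrieves
the `index.number_of_shards` setting of the existing index; the second uses the
get snapshot API's `index_details` parameter (available in recent {es} versions)
to include per-index details, such as shard counts, in the response.

[source,console]
----
GET my-index/_settings?filter_path=*.settings.index.number_of_shards

GET _snapshot/my_repository/my_snapshot_2099.05.06?index_details=true
----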