
[DOCS] Move snapshot-restore out of modules. (#49618)

* [DOCS] Move snapshot-restore docs out of modules.

* [DOCS] Incorporates comments from @jrodewig.

* [DOCS] Fix snippet tests
debadair 5 years ago · commit a3b851e9b9

+ 1 - 1
docs/reference/glossary.asciidoc

@@ -223,7 +223,7 @@ during the following processes:
   This type of recovery is called a *local store recovery*.
 * <<glossary-replica-shard,Primary shard replication>>.
 * Relocation of a shard to a different node in the same cluster.
-* {ref}/modules-snapshots.html#restore-snapshot[Snapshot restoration].
+* {ref}/snapshots-restore-snapshot.html[Snapshot restoration].
 // end::recovery-triggers[]
 // end::recovery-def[]
 --

+ 1 - 1
docs/reference/high-availability/backup-cluster-data.asciidoc

@@ -6,7 +6,7 @@
 
 To back up your cluster's data, you can use the <<modules-snapshots,snapshot API>>.
 
-include::{es-repo-dir}/modules/snapshots.asciidoc[tag=snapshot-intro]
+include::../snapshot-restore/index.asciidoc[tag=snapshot-intro]
 
 [TIP]
 ====

+ 1 - 1
docs/reference/high-availability/backup-cluster.asciidoc

@@ -1,7 +1,7 @@
 [[backup-cluster]]
 == Back up a cluster
 
-include::{es-repo-dir}/modules/snapshots.asciidoc[tag=backup-warning]
+include::../snapshot-restore/index.asciidoc[tag=backup-warning]
 
 To have a complete backup for your cluster:
 

+ 1 - 1
docs/reference/high-availability/restore-cluster-data.asciidoc

@@ -4,7 +4,7 @@
 <titleabbrev>Restore the data</titleabbrev>
 ++++
 
-include::{es-repo-dir}/modules/snapshots.asciidoc[tag=restore-intro]
+include::../snapshot-restore/index.asciidoc[tag=restore-intro]
 
 [TIP]
 ====

+ 2 - 2
docs/reference/ilm/getting-started-slm.asciidoc

@@ -5,7 +5,7 @@
 
 Let's get started with snapshot lifecycle management (SLM) by working through a
 hands-on scenario. The goal of this example is to automatically back up {es}
-indices using the <<modules-snapshots,snapshots>> every day at a particular
+indices using <<snapshot-restore,snapshots>> every day at a particular
 time. Once these snapshots have been created, they are kept for a configured
 amount of time and then deleted per a configured retention policy.
 
@@ -59,7 +59,7 @@ POST /_security/role/slm-read-only
 === Setting up a repository
 
 Before we can set up an SLM policy, we'll need to set up a
-<<snapshots-repositories,snapshot repository>> where the snapshots will be
+snapshot repository where the snapshots will be
 stored. Repositories can use {plugins}/repository.html[many different backends],
 including cloud storage providers. You'll probably want to use one of these in
 production, but for this example we'll use a shared file system repository:

+ 2 - 0
docs/reference/index.asciidoc

@@ -50,6 +50,8 @@ include::data-rollup-transform.asciidoc[]
 
 include::high-availability.asciidoc[]
 
+include::snapshot-restore/index.asciidoc[]
+
 include::{xes-repo-dir}/security/index.asciidoc[]
 
 include::{xes-repo-dir}/watcher/index.asciidoc[]

+ 1 - 1
docs/reference/indices/recovery.asciidoc

@@ -77,7 +77,7 @@ This type of recovery is called a local store recovery.
 
 `SNAPSHOT`::
 The recovery is related to
-a <<restore-snapshot,snapshot restoration>>.
+a <<snapshots-restore-snapshot,snapshot restoration>>.
 
 `REPLICA`::
 The recovery is related to

+ 0 - 2
docs/reference/modules.asciidoc

@@ -91,8 +91,6 @@ include::modules/node.asciidoc[]
 
 include::modules/plugins.asciidoc[]
 
-include::modules/snapshots.asciidoc[]
-
 include::modules/threadpool.asciidoc[]
 
 include::modules/transport.asciidoc[]

+ 0 - 804
docs/reference/modules/snapshots.asciidoc

@@ -1,804 +0,0 @@
-[[modules-snapshots]]
-== Snapshot And Restore
-
-// tag::snapshot-intro[]
-A snapshot is a backup taken from a running Elasticsearch cluster. You can take
-a snapshot of individual indices or of the entire cluster and store it in a
-repository on a shared filesystem, and there are plugins that support remote
-repositories on S3, HDFS, Azure, Google Cloud Storage and more.
-
-Snapshots are taken incrementally. This means that when it creates a snapshot of
-an index, Elasticsearch avoids copying any data that is already stored in the
-repository as part of an earlier snapshot of the same index. Therefore it can be
-efficient to take snapshots of your cluster quite frequently.
-// end::snapshot-intro[]
-
-// tag::restore-intro[]
-You can restore snapshots into a running cluster via the
-<<restore-snapshot,restore API>>. When you restore an index, you can alter the
-name of the restored index as well as some of its settings. There is a great
-deal of flexibility in how the snapshot and restore functionality can be used.
-// end::restore-intro[]
-
-You can automate your snapshot backup and restore process by using
-<<getting-started-snapshot-lifecycle-management, snapshot lifecycle management>>.
-
-// tag::backup-warning[]
-WARNING: You cannot back up an Elasticsearch cluster by simply taking a copy of
-the data directories of all of its nodes. Elasticsearch may be making changes to
-the contents of its data directories while it is running; copying its data
-directories cannot be expected to capture a consistent picture of their contents.
-If you try to restore a cluster from such a backup, it may fail and report
-corruption and/or missing files. Alternatively, it may appear to have succeeded
-though it silently lost some of its data. The only reliable way to back up a
-cluster is by using the snapshot and restore functionality.
-
-// end::backup-warning[]
-
-[float]
-=== Version compatibility
-
-IMPORTANT: Version compatibility refers to the underlying Lucene index
-compatibility. Follow the <<setup-upgrade,Upgrade documentation>>
-when migrating between versions.
-
-A snapshot contains a copy of the on-disk data structures that make up an
-index. This means that snapshots can only be restored to versions of
-Elasticsearch that can read the indices:
-
-* A snapshot of an index created in 6.x can be restored to 7.x.
-* A snapshot of an index created in 5.x can be restored to 6.x.
-* A snapshot of an index created in 2.x can be restored to 5.x.
-* A snapshot of an index created in 1.x can be restored to 2.x.
-
-Conversely, snapshots of indices created in 1.x **cannot** be restored to 5.x
-or 6.x, snapshots of indices created in 2.x **cannot** be restored to 6.x
-or 7.x, and snapshots of indices created in 5.x **cannot** be restored to 7.x
-or 8.x.
-
-Each snapshot can contain indices created in various versions of Elasticsearch,
-and when restoring a snapshot it must be possible to restore all of the indices
-into the target cluster. If any indices in a snapshot were created in an
-incompatible version, you will not be able restore the snapshot.
-
-IMPORTANT: When backing up your data prior to an upgrade, keep in mind that you
-won't be able to restore snapshots after you upgrade if they contain indices
-created in a version that's incompatible with the upgrade version.
-
-If you end up in a situation where you need to restore a snapshot of an index
-that is incompatible with the version of the cluster you are currently running,
-you can restore it on the latest compatible version and use
-<<reindex-from-remote,reindex-from-remote>> to rebuild the index on the current
-version. Reindexing from remote is only possible if the original index has
-source enabled. Retrieving and reindexing the data can take significantly
-longer than simply restoring a snapshot. If you have a large amount of data, we
-recommend testing the reindex from remote process with a subset of your data to
-understand the time requirements before proceeding.
-
-[float]
-[[snapshots-repositories]]
-=== Repositories
-
-You must register a snapshot repository before you can perform snapshot and
-restore operations. We recommend creating a new snapshot repository for each
-major version. The valid repository settings depend on the repository type.
-
-If you register same snapshot repository with multiple clusters, only
-one cluster should have write access to the repository. All other clusters
-connected to that repository should set the repository to `readonly` mode.
-
-IMPORTANT: The snapshot format can change across major versions, so if you have
-clusters on different versions trying to write the same repository, snapshots
-written by one version may not be visible to the other and the repository could
-be corrupted. While setting the repository to `readonly` on all but one of the
-clusters should work with multiple clusters differing by one major version, it
-is not a supported configuration.
-
-[source,console]
------------------------------------
-PUT /_snapshot/my_backup
-{
-  "type": "fs",
-  "settings": {
-    "location": "my_backup_location"
-  }
-}
------------------------------------
-// TESTSETUP
-
-To retrieve information about a registered repository, use a GET request:
-
-[source,console]
------------------------------------
-GET /_snapshot/my_backup
------------------------------------
-
-which returns:
-
-[source,console-result]
------------------------------------
-{
-  "my_backup": {
-    "type": "fs",
-    "settings": {
-      "location": "my_backup_location"
-    }
-  }
-}
------------------------------------
-
-To retrieve information about multiple repositories, specify a comma-delimited
-list of repositories. You can also use the * wildcard when
-specifying repository names. For example, the following request retrieves
-information about all of the snapshot repositories that start with `repo` or
-contain `backup`:
-
-[source,console]
------------------------------------
-GET /_snapshot/repo*,*backup*
------------------------------------
-
-To retrieve information about all registered snapshot repositories, omit the
-repository name or specify `_all`:
-
-[source,console]
------------------------------------
-GET /_snapshot
------------------------------------
-
-or
-
-[source,console]
------------------------------------
-GET /_snapshot/_all
------------------------------------
-
-[float]
-===== Shared File System Repository
-
-The shared file system repository (`"type": "fs"`) uses the shared file system to store snapshots. In order to register
-the shared file system repository it is necessary to mount the same shared filesystem to the same location on all
-master and data nodes. This location (or one of its parent directories) must be registered in the `path.repo`
-setting on all master and data nodes.
-
-Assuming that the shared filesystem is mounted to `/mount/backups/my_fs_backup_location`, the following setting should
-be added to `elasticsearch.yml` file:
-
-[source,yaml]
---------------
-path.repo: ["/mount/backups", "/mount/longterm_backups"]
---------------
-
-The `path.repo` setting supports Microsoft Windows UNC paths as long as at least server name and share are specified as
-a prefix and back slashes are properly escaped:
-
-[source,yaml]
---------------
-path.repo: ["\\\\MY_SERVER\\Snapshots"]
---------------
-
-After all nodes are restarted, the following command can be used to register the shared file system repository with
-the name `my_fs_backup`:
-
-[source,console]
------------------------------------
-PUT /_snapshot/my_fs_backup
-{
-    "type": "fs",
-    "settings": {
-        "location": "/mount/backups/my_fs_backup_location",
-        "compress": true
-    }
-}
------------------------------------
-// TEST[skip:no access to absolute path]
-
-If the repository location is specified as a relative path this path will be resolved against the first path specified
-in `path.repo`:
-
-[source,console]
------------------------------------
-PUT /_snapshot/my_fs_backup
-{
-    "type": "fs",
-    "settings": {
-        "location": "my_fs_backup_location",
-        "compress": true
-    }
-}
------------------------------------
-// TEST[continued]
-
-The following settings are supported:
-
-[horizontal]
-`location`:: Location of the snapshots. Mandatory.
-`compress`:: Turns on compression of the snapshot files. Compression is applied only to metadata files (index mapping and settings). Data files are not compressed. Defaults to `true`.
-`chunk_size`:: Big files can be broken down into chunks during snapshotting if needed. Specify the chunk size as a value and
-unit, for example: `1GB`, `10MB`, `5KB`, `500B`. Defaults to `null` (unlimited chunk size).
-`max_restore_bytes_per_sec`:: Throttles per node restore rate. Defaults to `40mb` per second.
-`max_snapshot_bytes_per_sec`:: Throttles per node snapshot rate. Defaults to `40mb` per second.
-`readonly`:: Makes repository read-only.  Defaults to `false`.
-
-[float]
-===== Read-only URL Repository
-
-The URL repository (`"type": "url"`) can be used as an alternative read-only way to access data created by the shared file
-system repository. The URL specified in the `url` parameter should point to the root of the shared filesystem repository.
-The following settings are supported:
-
-[horizontal]
-`url`:: Location of the snapshots. Mandatory.
-
-URL Repository supports the following protocols: "http", "https", "ftp", "file" and "jar". URL repositories with `http:`,
-`https:`, and `ftp:` URLs has to be whitelisted by specifying allowed URLs in the `repositories.url.allowed_urls` setting.
-This setting supports wildcards in the place of host, path, query, and fragment. For example:
-
-[source,yaml]
------------------------------------
-repositories.url.allowed_urls: ["http://www.example.org/root/*", "https://*.mydomain.com/*?*#*"]
------------------------------------
-
-URL repositories with `file:` URLs can only point to locations registered in the `path.repo` setting similar to
-shared file system repository.
-
-[float]
-[role="xpack"]
-[testenv="basic"]
-===== Source Only Repository
-
-A source repository enables you to create minimal, source-only snapshots that take up to 50% less space on disk.
-Source only snapshots contain stored fields and index metadata. They do not include index or doc values structures
-and are not searchable when restored. After restoring a source-only snapshot, you must <<docs-reindex,reindex>>
-the data into a new index.
-
-Source repositories delegate to another snapshot repository for storage.
-
-
-[IMPORTANT]
-==================================================
-
-Source only snapshots are only supported if the `_source` field is enabled and no source-filtering is applied.
-When you restore a source only snapshot:
-
- * The restored index is read-only and can only serve `match_all` search or scroll requests to enable reindexing.
-
- * Queries other than `match_all` and `_get` requests are not supported.
-
- * The mapping of the restored index is empty, but the original mapping is available from the types top
-   level `meta` element.
-
-==================================================
-
-When you create a source repository, you must specify the type and name of the delegate repository
-where the snapshots will be stored:
-
-[source,console]
------------------------------------
-PUT _snapshot/my_src_only_repository
-{
-  "type": "source",
-  "settings": {
-    "delegate_type": "fs",
-    "location": "my_backup_location"
-  }
-}
------------------------------------
-// TEST[continued]
-
-[float]
-===== Repository plugins
-
-Other repository backends are available in these official plugins:
-
-* {plugins}/repository-s3.html[repository-s3] for S3 repository support
-* {plugins}/repository-hdfs.html[repository-hdfs] for HDFS repository support in Hadoop environments
-* {plugins}/repository-azure.html[repository-azure] for Azure storage repositories
-* {plugins}/repository-gcs.html[repository-gcs] for Google Cloud Storage repositories
-
-[float]
-===== Repository Verification
-When a repository is registered, it's immediately verified on all master and data nodes to make sure that it is functional
-on all nodes currently present in the cluster. The `verify` parameter can be used to explicitly disable the repository
-verification when registering or updating a repository:
-
-[source,console]
------------------------------------
-PUT /_snapshot/my_unverified_backup?verify=false
-{
-  "type": "fs",
-  "settings": {
-    "location": "my_unverified_backup_location"
-  }
-}
------------------------------------
-// TEST[continued]
-
-The verification process can also be executed manually by running the following command:
-
-[source,console]
------------------------------------
-POST /_snapshot/my_unverified_backup/_verify
------------------------------------
-// TEST[continued]
-
-It returns a list of nodes where repository was successfully verified or an error message if verification process failed.
-
-[float]
-===== Repository Cleanup
-Repositories can over time accumulate data that is not referenced by any existing snapshot. This is a result of the data safety guarantees
-the snapshot functionality provides in failure scenarios during snapshot creation and the decentralized nature of the snapshot creation
-process. This unreferenced data does in no way negatively impact the performance or safety of a snapshot repository but leads to higher
-than necessary storage use. In order to clean up this unreferenced data, users can call the cleanup endpoint for a repository which will
-trigger a complete accounting of the repositories contents and subsequent deletion of all unreferenced data that was found.
-
-[source,console]
------------------------------------
-POST /_snapshot/my_repository/_cleanup
------------------------------------
-// TEST[continued]
-
-The response to a cleanup request looks as follows:
-
-[source,console-result]
---------------------------------------------------
-{
-  "results": {
-    "deleted_bytes": 20,
-    "deleted_blobs": 5
-  }
-}
---------------------------------------------------
-
-Depending on the concrete repository implementation the numbers shown for bytes free as well as the number of blobs removed will either
-be an approximation or an exact result. Any non-zero value for the number of blobs removed implies that unreferenced blobs were found and
-subsequently cleaned up.
-
-Please note that most of the cleanup operations executed by this endpoint are automatically executed when deleting any snapshot from a
-repository. If you regularly delete snapshots, you will in most cases not get any or only minor space savings from using this functionality
-and should lower your frequency of invoking it accordingly.
-
-[float]
-[[snapshots-take-snapshot]]
-=== Snapshot
-
-A repository can contain multiple snapshots of the same cluster. Snapshots are identified by unique names within the
-cluster. A snapshot with the name `snapshot_1` in the repository `my_backup` can be created by executing the following
-command:
-
-[source,console]
------------------------------------
-PUT /_snapshot/my_backup/snapshot_1?wait_for_completion=true
------------------------------------
-// TEST[continued]
-
-The `wait_for_completion` parameter specifies whether or not the request should return immediately after snapshot
-initialization (default) or wait for snapshot completion. During snapshot initialization, information about all
-previous snapshots is loaded into the memory, which means that in large repositories it may take several seconds (or
-even minutes) for this command to return even if the `wait_for_completion` parameter is set to `false`.
-
-By default a snapshot of all open and started indices in the cluster is created. This behavior can be changed by
-specifying the list of indices in the body of the snapshot request.
-
-[source,console]
------------------------------------
-PUT /_snapshot/my_backup/snapshot_2?wait_for_completion=true
-{
-  "indices": "index_1,index_2",
-  "ignore_unavailable": true,
-  "include_global_state": false,
-  "metadata": {
-    "taken_by": "kimchy",
-    "taken_because": "backup before upgrading"
-  }
-}
------------------------------------
-// TEST[continued]
-
-The list of indices that should be included into the snapshot can be specified using the `indices` parameter that
-supports <<multi-index,multi index syntax>>. The snapshot request also supports the
-`ignore_unavailable` option. Setting it to `true` will cause indices that do not exist to be ignored during snapshot
-creation. By default, when `ignore_unavailable` option is not set and an index is missing the snapshot request will fail.
-By setting `include_global_state` to false it's possible to prevent the cluster global state to be stored as part of
-the snapshot. By default, the entire snapshot will fail if one or more indices participating in the snapshot don't have
-all primary shards available. This behaviour can be changed by setting `partial` to `true`.
-
-The `metadata` field can be used to attach arbitrary metadata to the snapshot. This may be a record of who took the snapshot,
-why it was taken, or any other data that might be useful.
-
-Snapshot names can be automatically derived using <<date-math-index-names,date math expressions>>, similarly as when creating
-new indices. Note that special characters need to be URI encoded.
-
-For example, creating a snapshot with the current day in the name, like `snapshot-2018.05.11`, can be achieved with
-the following command:
-
-[source,console]
------------------------------------
-# PUT /_snapshot/my_backup/<snapshot-{now/d}>
-PUT /_snapshot/my_backup/%3Csnapshot-%7Bnow%2Fd%7D%3E
------------------------------------
-// TEST[continued]
-
-
-The index snapshot process is incremental. In the process of making the index snapshot Elasticsearch analyses
-the list of the index files that are already stored in the repository and copies only files that were created or
-changed since the last snapshot. That allows multiple snapshots to be preserved in the repository in a compact form.
-Snapshotting process is executed in non-blocking fashion. All indexing and searching operation can continue to be
-executed against the index that is being snapshotted. However, a snapshot represents the point-in-time view of the index
-at the moment when snapshot was created, so no records that were added to the index after the snapshot process was started
-will be present in the snapshot. The snapshot process starts immediately for the primary shards that has been started
-and are not relocating at the moment. Before version 1.2.0, the snapshot operation fails if the cluster has any relocating or
-initializing primaries of indices participating in the snapshot. Starting with version 1.2.0, Elasticsearch waits for
-relocation or initialization of shards to complete before snapshotting them.
-
-Besides creating a copy of each index the snapshot process can also store global cluster metadata, which includes persistent
-cluster settings and templates. The transient settings and registered snapshot repositories are not stored as part of
-the snapshot.
-
-Only one snapshot process can be executed in the cluster at any time. While snapshot of a particular shard is being
-created this shard cannot be moved to another node, which can interfere with rebalancing process and allocation
-filtering. Elasticsearch will only be able to move a shard to another node (according to the current allocation
-filtering settings and rebalancing algorithm) once the snapshot is finished.
-
-Once a snapshot is created information about this snapshot can be obtained using the following command:
-
-[source,console]
------------------------------------
-GET /_snapshot/my_backup/snapshot_1
------------------------------------
-// TEST[continued]
-
-This command returns basic information about the snapshot including start and end time, version of
-Elasticsearch that created the snapshot, the list of included indices, the current state of the
-snapshot and the list of failures that occurred during the snapshot. The snapshot `state` can be
-
-[horizontal]
-`IN_PROGRESS`::
-
-  The snapshot is currently running.
-
-`SUCCESS`::
-
-  The snapshot finished and all shards were stored successfully.
-
-`FAILED`::
-
-  The snapshot finished with an error and failed to store any data.
-
-`PARTIAL`::
-
-  The global cluster state was stored, but data of at least one shard wasn't stored successfully.
-  The `failure` section in this case should contain more detailed information about shards
-  that were not processed correctly.
-
-`INCOMPATIBLE`::
-
-  The snapshot was created with an old version of Elasticsearch and therefore is incompatible with
-  the current version of the cluster.
-
-
-Similar as for repositories, information about multiple snapshots can be queried in one go, supporting wildcards as well:
-
-[source,console]
------------------------------------
-GET /_snapshot/my_backup/snapshot_*,some_other_snapshot
------------------------------------
-// TEST[continued]
-
-All snapshots currently stored in the repository can be listed using the following command:
-
-[source,console]
------------------------------------
-GET /_snapshot/my_backup/_all
------------------------------------
-// TEST[continued]
-
-The command fails if some of the snapshots are unavailable. The boolean parameter `ignore_unavailable` can be used to
-return all snapshots that are currently available.
-
-Getting all snapshots in the repository can be costly on cloud-based repositories,
-both from a cost and performance perspective.  If the only information required is
-the snapshot names/uuids in the repository and the indices in each snapshot, then
-the optional boolean parameter `verbose` can be set to `false` to execute a more
-performant and cost-effective retrieval of the snapshots in the repository.  Note
-that setting `verbose` to `false` will omit all other information about the snapshot
-such as status information, the number of snapshotted shards, etc.  The default
-value of the `verbose` parameter is `true`.
-
-It is also possible to retrieve snapshots from multiple repositories in one go, for example:
-
-[source,console]
------------------------------------
-GET /_snapshot/_all
-GET /_snapshot/my_backup,my_fs_backup
-GET /_snapshot/my*/snap*
------------------------------------
-// TEST[skip:no my_fs_backup]
-
-A currently running snapshot can be retrieved using the following command:
-
-[source,console]
------------------------------------
-GET /_snapshot/my_backup/_current
------------------------------------
-// TEST[continued]
-
-A snapshot can be deleted from the repository using the following command:
-
-[source,console]
------------------------------------
-DELETE /_snapshot/my_backup/snapshot_2
------------------------------------
-// TEST[continued]
-
-When a snapshot is deleted from a repository, Elasticsearch deletes all files that are associated with the deleted
-snapshot and not used by any other snapshots. If the deleted snapshot operation is executed while the snapshot is being
-created the snapshotting process will be aborted and all files created as part of the snapshotting process will be
-cleaned. Therefore, the delete snapshot operation can be used to cancel long running snapshot operations that were
-started by mistake.
-
-A repository can be unregistered using the following command:
-
-[source,console]
------------------------------------
-DELETE /_snapshot/my_backup
------------------------------------
-// TEST[continued]
-
-When a repository is unregistered, Elasticsearch only removes the reference to the location where the repository is storing
-the snapshots. The snapshots themselves are left untouched and in place.
-
-[float]
-[[restore-snapshot]]
-=== Restore
-
-A snapshot can be restored using the following command:
-
-[source,console]
------------------------------------
-POST /_snapshot/my_backup/snapshot_1/_restore
------------------------------------
-// TEST[continued]
-
-By default, all indices in the snapshot are restored, and the cluster state is
-*not* restored. It's possible to select indices that should be restored as well
-as to allow the global cluster state from being restored by using `indices` and
-`include_global_state` options in the restore request body. The list of indices
-supports <<multi-index,multi index syntax>>. The `rename_pattern`
-and `rename_replacement` options can be also used to rename indices on restore
-using regular expression that supports referencing the original text as
-explained
-http://docs.oracle.com/javase/6/docs/api/java/util/regex/Matcher.html#appendReplacement(java.lang.StringBuffer,%20java.lang.String)[here].
-Set `include_aliases` to `false` to prevent aliases from being restored together
-with associated indices
-
-[source,console]
------------------------------------
-POST /_snapshot/my_backup/snapshot_1/_restore
-{
-  "indices": "index_1,index_2",
-  "ignore_unavailable": true,
-  "include_global_state": true,
-  "rename_pattern": "index_(.+)",
-  "rename_replacement": "restored_index_$1"
-}
------------------------------------
-// TEST[continued]
-
-The restore operation can be performed on a functioning cluster. However, an
-existing index can be only restored if it's <<indices-open-close,closed>> and
-has the same number of shards as the index in the snapshot. The restore
-operation automatically opens restored indices if they were closed and creates
-new indices if they didn't exist in the cluster. If cluster state is restored
-with `include_global_state` (defaults to `false`), the restored templates that
-don't currently exist in the cluster are added and existing templates with the
-same name are replaced by the restored templates. The restored persistent
-settings are added to the existing persistent settings.
-
-[float]
-==== Partial restore
-
-By default, the entire restore operation will fail if one or more indices participating in the operation don't have
-snapshots of all shards available. It can occur if some shards failed to snapshot for example. It is still possible to
-restore such indices by setting `partial` to `true`. Please note, that only successfully snapshotted shards will be
-restored in this case and all missing shards will be recreated empty.
-
-
-[float]
-==== Changing index settings during restore
-
-Most of index settings can be overridden during the restore process. For example, the following command will restore
-the index `index_1` without creating any replicas while switching back to default refresh interval:
-
-[source,console]
------------------------------------
-POST /_snapshot/my_backup/snapshot_1/_restore
-{
-  "indices": "index_1",
-  "index_settings": {
-    "index.number_of_replicas": 0
-  },
-  "ignore_index_settings": [
-    "index.refresh_interval"
-  ]
-}
------------------------------------
-// TEST[continued]
-
-Please note, that some settings such as `index.number_of_shards` cannot be changed during restore operation.
-
-[float]
-==== Restoring to a different cluster
-
-The information stored in a snapshot is not tied to a particular cluster or a cluster name. Therefore it's possible to
-restore a snapshot made from one cluster into another cluster. All that is required is registering the repository
-containing the snapshot in the new cluster and starting the restore process. The new cluster doesn't have to have the
-same size or topology.  However, the version of the new cluster should be the same or newer (only 1 major version newer) than the cluster that was used to create the snapshot.  For example, you can restore a 1.x snapshot to a 2.x cluster, but not a 1.x snapshot to a 5.x cluster.
-
-If the new cluster has a smaller size additional considerations should be made. First of all it's necessary to make sure
-that new cluster have enough capacity to store all indices in the snapshot. It's possible to change indices settings
-during restore to reduce the number of replicas, which can help with restoring snapshots into smaller cluster. It's also
-possible to select only subset of the indices using the `indices` parameter.
-
-If indices in the original cluster were assigned to particular nodes using
-<<shard-allocation-filtering,shard allocation filtering>>, the same rules will be enforced in the new cluster. Therefore
-if the new cluster doesn't contain nodes with appropriate attributes that a restored index can be allocated on, such
-index will not be successfully restored unless these index allocation settings are changed during restore operation.
-
-The restore operation also checks that restored persistent settings are compatible with the current cluster to avoid accidentally
-restoring incompatible settings. If you need to restore a snapshot with incompatible persistent settings, try restoring it without
-the global cluster state.
-
-[float]
-=== Snapshot status
-
-A list of currently running snapshots with their detailed status information can be obtained using the following command:
-
-[source,console]
------------------------------------
-GET /_snapshot/_status
------------------------------------
-// TEST[continued]
-
-In this format, the command will return information about all currently running snapshots. By specifying a repository name, it's possible
-to limit the results to a particular repository:
-
-[source,console]
------------------------------------
-GET /_snapshot/my_backup/_status
------------------------------------
-// TEST[continued]
-
-If both repository name and snapshot id are specified, this command will return detailed status information for the given snapshot even
-if it's not currently running:
-
-[source,console]
------------------------------------
-GET /_snapshot/my_backup/snapshot_1/_status
------------------------------------
-// TEST[continued]
-
-The output looks similar to the following:
-
-[source,console-result]
---------------------------------------------------
-{
-  "snapshots": [
-    {
-      "snapshot": "snapshot_1",
-      "repository": "my_backup",
-      "uuid": "XuBo4l4ISYiVg0nYUen9zg",
-      "state": "SUCCESS",
-      "include_global_state": true,
-      "shards_stats": {
-        "initializing": 0,
-        "started": 0,
-        "finalizing": 0,
-        "done": 5,
-        "failed": 0,
-        "total": 5
-      },
-      "stats": {
-        "incremental": {
-          "file_count": 8,
-          "size_in_bytes": 4704
-        },
-        "processed": {
-          "file_count": 7,
-          "size_in_bytes": 4254
-        },
-        "total": {
-          "file_count": 8,
-          "size_in_bytes": 4704
-        },
-        "start_time_in_millis": 1526280280355,
-        "time_in_millis": 358
-      }
-    }
-  ]
-}
---------------------------------------------------
-
-The output is composed of different sections. The `stats` sub-object provides details on the number and size of files that were
-snapshotted. As snapshots are incremental, copying only the Lucene segments that are not already in the repository,
-the `stats` object contains a `total` section for all the files that are referenced by the snapshot, as well as an `incremental` section
-for those files that actually needed to be copied over as part of the incremental snapshotting. In case of a snapshot that's still
-in progress, there's also a `processed` section that contains information about the files that are in the process of being copied.
-
-Multiple ids are also supported:
-
-[source,console]
------------------------------------
-GET /_snapshot/my_backup/snapshot_1,snapshot_2/_status
------------------------------------
-// TEST[continued]
-
-[float]
-[[monitor-snapshot-restore-progress]]
-=== Monitoring snapshot/restore progress
-
-There are several ways to monitor the progress of the snapshot and restores processes while they are running. Both
-operations support `wait_for_completion` parameter that would block client until the operation is completed. This is
-the simplest method that can be used to get notified about operation completion.
-
-The snapshot operation can be also monitored by periodic calls to the snapshot info:
-
-[source,console]
------------------------------------
-GET /_snapshot/my_backup/snapshot_1
------------------------------------
-// TEST[continued]
-
-Please note that snapshot info operation uses the same resources and thread pool as the snapshot operation. So,
-executing a snapshot info operation while large shards are being snapshotted can cause the snapshot info operation to wait
-for available resources before returning the result. On very large shards the wait time can be significant.
-
-To get more immediate and complete information about snapshots the snapshot status command can be used instead:
-
-[source,console]
------------------------------------
-GET /_snapshot/my_backup/snapshot_1/_status
------------------------------------
-// TEST[continued]
-
-While snapshot info method returns only basic information about the snapshot in progress, the snapshot status returns
-complete breakdown of the current state for each shard participating in the snapshot.
-
-The restore process piggybacks on the standard recovery mechanism of the Elasticsearch. As a result, standard recovery
-monitoring services can be used to monitor the state of restore. When restore operation is executed the cluster
-typically goes into `red` state. It happens because the restore operation starts with "recovering" primary shards of the
-restored indices. During this operation the primary shards become unavailable which manifests itself in the `red` cluster
-state. Once recovery of primary shards is completed Elasticsearch is switching to standard replication process that
-creates the required number of replicas at this moment cluster switches to the `yellow` state. Once all required replicas
-are created, the cluster switches to the `green` states.
-
-The cluster health operation provides only a high level status of the restore process. It's possible to get more
-detailed insight into the current state of the recovery process by using <<indices-recovery, index recovery>> and
-<<cat-recovery, cat recovery>> APIs.
-
-[float]
-=== Stopping currently running snapshot and restore operations
-
-The snapshot and restore framework allows running only one snapshot or one restore operation at a time. If a currently
-running snapshot was executed by mistake, or takes unusually long, it can be terminated using the snapshot delete operation.
-The snapshot delete operation checks if the deleted snapshot is currently running and if it does, the delete operation stops
-that snapshot before deleting the snapshot data from the repository.
-
-[source,console]
------------------------------------
-DELETE /_snapshot/my_backup/snapshot_1
------------------------------------
-// TEST[continued]
-
-The restore operation uses the standard shard recovery mechanism. Therefore, any currently running restore operation can
-be canceled by deleting indices that are being restored. Please note that data for all deleted indices will be removed
-from the cluster as a result of this operation.
-
-[float]
-=== Effect of cluster blocks on snapshot and restore operations
-Many snapshot and restore operations are affected by cluster and index blocks. For example, registering and unregistering
-repositories require write global metadata access. The snapshot operation requires that all indices and their metadata as
-well as the global metadata were readable. The restore operation requires the global metadata to be writable, however
-the index level blocks are ignored during restore because indices are essentially recreated during restore.
-Please note that a repository content is not part of the cluster and therefore cluster blocks don't affect internal
-repository operations such as listing or deleting snapshots from an already registered repository.

+ 16 - 1
docs/reference/redirects.asciidoc

@@ -300,8 +300,23 @@ See <<ml-get-bucket>>,
 [[ml-results-overall-buckets]]
 <<ml-get-overall-buckets>>.
 
+[role="exclude",id="modules-snapshots"]
+=== Snapshot module
+
+See <<snapshot-restore>>.
+
+[role="exclude",id="restore-snapshot"]
+=== Restore snapshot
+
+See <<snapshots-restore-snapshot>>.
+
+[role="exclude",id="snapshots-repositories"]
+=== Snapshot repositories
+
+See <<snapshots-register-repository>>.
+
 [role="exclude",id="ml-dfa-analysis-objects"]
 === Analysis configuration objects
 
 This page was deleted. 
-See <<put-dfanalytics>>.
+See <<put-dfanalytics>>.

+ 90 - 0
docs/reference/snapshot-restore/index.asciidoc

@@ -0,0 +1,90 @@
+[[snapshot-restore]]
+= Snapshot and restore
+
+[partintro]
+--
+
+// tag::snapshot-intro[]
+A _snapshot_ is a backup taken from a running {es} cluster. 
+You can take snapshots of individual indices or of the entire cluster. 
+Snapshots can be stored in either local or remote repositories. 
+Remote repositories can reside on S3, HDFS, Azure, Google Cloud Storage, 
+and other platforms supported by a repository plugin.
+
+Snapshots are incremental: each snapshot of an index only stores data that 
+is not part of an earlier snapshot.  
+This enables you to take frequent snapshots with minimal overhead.
+// end::snapshot-intro[] 
+
+// tag::restore-intro[]
+You can restore snapshots to a running cluster with the <<snapshots-restore-snapshot,restore API>>. 
+By default, all indices in the snapshot are restored. 
+Alternatively, you can restore specific indices or restore the cluster state from a snapshot. 
+When restoring indices, you can modify the index name and selected index settings. 
+// end::restore-intro[]
+
+You must <<snapshots-register-repository, register a snapshot repository>> 
+before you can <<snapshots-take-snapshot, take snapshots>>.
+
+You can use <<getting-started-snapshot-lifecycle-management, snapshot lifecycle management>>
+to automatically take and manage snapshots.
+
+// tag::backup-warning[]
+WARNING: You cannot back up an {es} cluster by simply copying
+the data directories of all of its nodes. {es} may be making changes to
+the contents of its data directories while it is running; copying its data
+directories cannot be expected to capture a consistent picture of their contents.
+If you try to restore a cluster from such a backup, it may fail and report
+corruption and/or missing files. Alternatively, it may appear to have succeeded
+though it silently lost some of its data. The only reliable way to back up a
+cluster is by using the snapshot and restore functionality.
+
+// end::backup-warning[]
+
+[float]
+=== Version compatibility
+
+IMPORTANT: Version compatibility refers to the underlying Lucene index
+compatibility. Follow the <<setup-upgrade,Upgrade documentation>>
+when migrating between versions.
+
+A snapshot contains a copy of the on-disk data structures that make up an
+index. This means that snapshots can only be restored to versions of
+{es} that can read the indices:
+
+* A snapshot of an index created in 6.x can be restored to 7.x.
+* A snapshot of an index created in 5.x can be restored to 6.x.
+* A snapshot of an index created in 2.x can be restored to 5.x.
+* A snapshot of an index created in 1.x can be restored to 2.x.
+
+Conversely, snapshots of indices created in 1.x **cannot** be restored to 5.x
+or 6.x, snapshots of indices created in 2.x **cannot** be restored to 6.x
+or 7.x, and snapshots of indices created in 5.x **cannot** be restored to 7.x
+or 8.x.
+
+Each snapshot can contain indices created in various versions of {es},
+and when restoring a snapshot it must be possible to restore all of the indices
+into the target cluster. If any indices in a snapshot were created in an
+incompatible version, you will not be able to restore the snapshot.
+
+IMPORTANT: When backing up your data prior to an upgrade, keep in mind that you
+won't be able to restore snapshots after you upgrade if they contain indices
+created in a version that's incompatible with the upgrade version.
+
+If you end up in a situation where you need to restore a snapshot of an index
+that is incompatible with the version of the cluster you are currently running,
+you can restore it on the latest compatible version and use
+<<reindex-from-remote,reindex-from-remote>> to rebuild the index on the current
+version. Reindexing from remote is only possible if the original index has
+source enabled. Retrieving and reindexing the data can take significantly
+longer than simply restoring a snapshot. If you have a large amount of data, we
+recommend testing the reindex from remote process with a subset of your data to
+understand the time requirements before proceeding.
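+
+For example, a minimal sketch of a reindex-from-remote request; the remote host and the index names are placeholders
+for your environment:
+
+[source,console]
+-----------------------------------
+# http://oldhost:9200 and the index names are placeholders
+POST /_reindex
+{
+  "source": {
+    "remote": {
+      "host": "http://oldhost:9200"
+    },
+    "index": "source_index"
+  },
+  "dest": {
+    "index": "dest_index"
+  }
+}
+-----------------------------------
+// TEST[skip:uses a placeholder remote host]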
+
+--
+
+include::register-repository.asciidoc[]
+include::take-snapshot.asciidoc[]
+include::restore-snapshot.asciidoc[]
+include::monitor-snapshot-restore.asciidoc[]
+

+ 89 - 0
docs/reference/snapshot-restore/monitor-snapshot-restore.asciidoc

@@ -0,0 +1,89 @@
+[[snapshots-monitor-snapshot-restore]]
+== Monitor snapshot and restore progress
+
+++++
+<titleabbrev>Monitor snapshot and restore</titleabbrev>
+++++
+
+There are several ways to monitor the progress of snapshot and restore operations while they are running. Both
+operations support the `wait_for_completion` parameter, which blocks the client until the operation completes. This is
+the simplest way to be notified when an operation finishes.
+
+////
+[source,console]
+-----------------------------------
+PUT /_snapshot/my_backup
+{
+  "type": "fs",
+  "settings": {
+    "location": "my_backup_location"
+  }
+}
+
+PUT /_snapshot/my_backup/snapshot_1?wait_for_completion=true
+-----------------------------------
+// TESTSETUP
+
+////
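+
+For example, the following sketch blocks until the snapshot completes; it assumes the `my_backup` repository is
+already registered:
+
+[source,console]
+-----------------------------------
+PUT /_snapshot/my_backup/snapshot_2?wait_for_completion=true
+-----------------------------------
+// TEST[continued]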
+
+The snapshot operation can also be monitored by periodic calls to the snapshot info API:
+
+[source,console]
+-----------------------------------
+GET /_snapshot/my_backup/snapshot_1
+-----------------------------------
+
+Note that the snapshot info operation uses the same resources and thread pool as the snapshot operation, so
+executing a snapshot info operation while large shards are being snapshotted can cause it to wait
+for available resources before returning the result. On very large shards the wait time can be significant.
+
+To get more immediate and complete information about snapshots, use the snapshot status command instead:
+
+[source,console]
+-----------------------------------
+GET /_snapshot/my_backup/snapshot_1/_status
+-----------------------------------
+// TEST[continued]
+
+While the snapshot info method returns only basic information about a snapshot in progress, the snapshot status
+command returns a complete breakdown of the current state for each shard participating in the snapshot.
+
+The restore process piggybacks on the standard recovery mechanism of Elasticsearch. As a result, standard recovery
+monitoring services can be used to monitor the state of a restore. When a restore operation is executed, the cluster
+typically goes into the `red` state because the restore operation starts by recovering the primary shards of the
+restored indices, and the primary shards are unavailable during this phase. Once recovery of the primary shards is
+complete, Elasticsearch switches to the standard replication process and creates the required number of replicas, at
+which point the cluster moves to the `yellow` state. Once all required replicas are created, the cluster switches to
+the `green` state.
+
+The cluster health operation provides only a high-level status of the restore process. You can get more
+detailed insight into the current state of the recovery process with the <<indices-recovery, index recovery>> and
+<<cat-recovery, cat recovery>> APIs.
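+
+For example, a sketch of both calls; `restored_index_1` is a placeholder for an index that is being restored:
+
+[source,console]
+-----------------------------------
+# restored_index_1 is a placeholder index name
+GET /restored_index_1/_recovery
+
+GET /_cat/recovery?v
+-----------------------------------
+// TEST[skip:placeholder index name]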
+
+[float]
+=== Stop snapshot and restore operations
+
+The snapshot and restore framework allows running only one snapshot or one restore operation at a time. If a currently
+running snapshot was started by mistake, or is taking unusually long, you can terminate it with the snapshot delete operation.
+The snapshot delete operation checks whether the specified snapshot is currently running and, if it is, stops
+that snapshot before deleting the snapshot data from the repository.
+
+[source,console]
+-----------------------------------
+DELETE /_snapshot/my_backup/snapshot_1
+-----------------------------------
+// TEST[continued]
+
+The restore operation uses the standard shard recovery mechanism. Therefore, any currently running restore operation can
+be canceled by deleting the indices that are being restored. Note that the data for all deleted indices will be removed
+from the cluster as a result of this operation.
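+
+For example, assuming an index named `restored_index_1` is currently being restored, the following sketch cancels the
+restore by deleting that index:
+
+[source,console]
+-----------------------------------
+# restored_index_1 is a placeholder index name
+DELETE /restored_index_1
+-----------------------------------
+// TEST[skip:placeholder index name]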
+
+[float]
+=== Effect of cluster blocks on snapshot and restore 
+
+Many snapshot and restore operations are affected by cluster and index blocks. For example, registering and unregistering
+repositories require write access to the global cluster metadata. The snapshot operation requires that all indices, their
+metadata, and the global metadata be readable. The restore operation requires the global metadata to be writable; however,
+index-level blocks are ignored during restore because indices are essentially recreated during restore.
+Note that repository content is not part of the cluster, and therefore cluster blocks don't affect internal
+repository operations such as listing or deleting snapshots from an already registered repository.

+ 290 - 0
docs/reference/snapshot-restore/register-repository.asciidoc

@@ -0,0 +1,290 @@
+[[snapshots-register-repository]]
+== Register a snapshot repository
+
+++++
+<titleabbrev>Register repository</titleabbrev>
+++++
+
+You must register a snapshot repository before you can perform snapshot and
+restore operations. We recommend creating a new snapshot repository for each
+major version. The valid repository settings depend on the repository type.
+
+If you register the same snapshot repository with multiple clusters, only
+one cluster should have write access to the repository. All other clusters
+connected to that repository should set the repository to `readonly` mode.
+
+IMPORTANT: The snapshot format can change across major versions, so if you have
+clusters on different versions trying to write to the same repository, snapshots
+written by one version may not be visible to the other and the repository could
+be corrupted. While setting the repository to `readonly` on all but one of the
+clusters should work with multiple clusters differing by one major version, it
+is not a supported configuration.
+
+[source,console]
+-----------------------------------
+PUT /_snapshot/my_backup
+{
+  "type": "fs",
+  "settings": {
+    "location": "my_backup_location"
+  }
+}
+-----------------------------------
+// TESTSETUP
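+
+On clusters that should not write to a shared repository, a `readonly` registration might look like the following
+sketch; the repository name `my_readonly_backup` is illustrative:
+
+[source,console]
+-----------------------------------
+# my_readonly_backup is an illustrative repository name
+PUT /_snapshot/my_readonly_backup
+{
+  "type": "fs",
+  "settings": {
+    "location": "my_backup_location",
+    "readonly": true
+  }
+}
+-----------------------------------
+// TEST[skip:illustrative example]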
+
+To retrieve information about a registered repository, use a GET request:
+
+[source,console]
+-----------------------------------
+GET /_snapshot/my_backup
+-----------------------------------
+
+which returns:
+
+[source,console-result]
+-----------------------------------
+{
+  "my_backup": {
+    "type": "fs",
+    "settings": {
+      "location": "my_backup_location"
+    }
+  }
+}
+-----------------------------------
+
+To retrieve information about multiple repositories, specify a comma-delimited
+list of repositories. You can also use the `*` wildcard when
+specifying repository names. For example, the following request retrieves
+information about all of the snapshot repositories that start with `repo` or
+contain `backup`:
+
+[source,console]
+-----------------------------------
+GET /_snapshot/repo*,*backup*
+-----------------------------------
+
+To retrieve information about all registered snapshot repositories, omit the
+repository name or specify `_all`:
+
+[source,console]
+-----------------------------------
+GET /_snapshot
+-----------------------------------
+
+or
+
+[source,console]
+-----------------------------------
+GET /_snapshot/_all
+-----------------------------------
+
+[float]
+[[snapshots-filesystem-repository]]
+=== Shared file system repository
+
+The shared file system repository (`"type": "fs"`) uses the shared file system to store snapshots. To register
+a shared file system repository, you must mount the same shared filesystem to the same location on all
+master and data nodes. This location (or one of its parent directories) must be registered in the `path.repo`
+setting on all master and data nodes.
+
+Assuming that the shared filesystem is mounted to `/mount/backups/my_fs_backup_location`, the following setting should
+be added to the `elasticsearch.yml` file:
+
+[source,yaml]
+--------------
+path.repo: ["/mount/backups", "/mount/longterm_backups"]
+--------------
+
+The `path.repo` setting supports Microsoft Windows UNC paths as long as at least the server name and share are specified
+as a prefix and backslashes are properly escaped:
+
+[source,yaml]
+--------------
+path.repo: ["\\\\MY_SERVER\\Snapshots"]
+--------------
+
+After all nodes are restarted, the following command can be used to register the shared file system repository with
+the name `my_fs_backup`:
+
+[source,console]
+-----------------------------------
+PUT /_snapshot/my_fs_backup
+{
+    "type": "fs",
+    "settings": {
+        "location": "/mount/backups/my_fs_backup_location",
+        "compress": true
+    }
+}
+-----------------------------------
+// TEST[skip:no access to absolute path]
+
+If the repository location is specified as a relative path, this path will be resolved against the first path specified
+in `path.repo`:
+
+[source,console]
+-----------------------------------
+PUT /_snapshot/my_fs_backup
+{
+    "type": "fs",
+    "settings": {
+        "location": "my_fs_backup_location",
+        "compress": true
+    }
+}
+-----------------------------------
+// TEST[continued]
+
+The following settings are supported:
+
+[horizontal]
+`location`:: Location of the snapshots. Mandatory.
+`compress`:: Turns on compression of the snapshot files. Compression is applied only to metadata files (index mapping and settings). Data files are not compressed. Defaults to `true`.
+`chunk_size`:: Big files can be broken down into chunks during snapshotting if needed. Specify the chunk size as a value and
+unit, for example: `1GB`, `10MB`, `5KB`, `500B`. Defaults to `null` (unlimited chunk size).
+`max_restore_bytes_per_sec`:: Throttles the per-node restore rate. Defaults to `40mb` per second.
+`max_snapshot_bytes_per_sec`:: Throttles the per-node snapshot rate. Defaults to `40mb` per second.
+`readonly`:: Makes the repository read-only. Defaults to `false`.
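+
+For example, a sketch that registers `my_fs_backup` with several of these settings; the values are illustrative,
+not recommendations:
+
+[source,console]
+-----------------------------------
+PUT /_snapshot/my_fs_backup
+{
+    "type": "fs",
+    "settings": {
+        "location": "my_fs_backup_location",
+        "compress": true,
+        "chunk_size": "1GB",
+        "max_snapshot_bytes_per_sec": "80mb"
+    }
+}
+-----------------------------------
+// TEST[continued]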
+
+[float]
+[[snapshots-read-only-repository]]
+=== Read-only URL repository
+
+The URL repository (`"type": "url"`) can be used as an alternative read-only way to access data created by the shared file
+system repository. The URL specified in the `url` parameter should point to the root of the shared filesystem repository.
+The following settings are supported:
+
+[horizontal]
+`url`:: Location of the snapshots. Mandatory.
+
+The URL repository supports the following protocols: `http`, `https`, `ftp`, `file`, and `jar`. URL repositories with
+`http:`, `https:`, and `ftp:` URLs have to be whitelisted by specifying allowed URLs in the `repositories.url.allowed_urls`
+setting. This setting supports wildcards in the place of host, path, query, and fragment. For example:
+
+[source,yaml]
+-----------------------------------
+repositories.url.allowed_urls: ["http://www.example.org/root/*", "https://*.mydomain.com/*?*#*"]
+-----------------------------------
+
+URL repositories with `file:` URLs can only point to locations registered in the `path.repo` setting, similar to
+the shared file system repository.
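+
+For example, a sketch that registers a read-only URL repository over the shared file system location used above; the
+repository name `my_url_backup` is illustrative:
+
+[source,console]
+-----------------------------------
+# my_url_backup is an illustrative repository name
+PUT /_snapshot/my_url_backup
+{
+    "type": "url",
+    "settings": {
+        "url": "file:/mount/backups/my_fs_backup_location"
+    }
+}
+-----------------------------------
+// TEST[skip:no access to absolute path]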
+
+[float]
+[role="xpack"]
+[testenv="basic"]
+[[snapshots-source-only-repository]]
+=== Source only repository
+
+A source repository enables you to create minimal, source-only snapshots that take up to 50% less space on disk.
+Source only snapshots contain stored fields and index metadata. They do not include index or doc values structures
+and are not searchable when restored. After restoring a source-only snapshot, you must <<docs-reindex,reindex>>
+the data into a new index.
+
+Source repositories delegate to another snapshot repository for storage.
+
+[IMPORTANT]
+==================================================
+
+Source only snapshots are only supported if the `_source` field is enabled and no source-filtering is applied.
+When you restore a source only snapshot:
+
+ * The restored index is read-only and can only serve `match_all` search or scroll requests to enable reindexing.
+
+ * Queries other than `match_all` and `_get` requests are not supported.
+
+ * The mapping of the restored index is empty, but the original mapping is available from the type's top-level
+   `meta` element.
+
+==================================================
+
+When you create a source repository, you must specify the type and name of the delegate repository
+where the snapshots will be stored:
+
+[source,console]
+-----------------------------------
+PUT _snapshot/my_src_only_repository
+{
+  "type": "source",
+  "settings": {
+    "delegate_type": "fs",
+    "location": "my_backup_location"
+  }
+}
+-----------------------------------
+// TEST[continued]
+
+[float]
+[[snapshots-repository-plugins]]
+=== Repository plugins
+
+Other repository backends are available in these official plugins:
+
+* {plugins}/repository-s3.html[repository-s3] for S3 repository support
+* {plugins}/repository-hdfs.html[repository-hdfs] for HDFS repository support in Hadoop environments
+* {plugins}/repository-azure.html[repository-azure] for Azure storage repositories
+* {plugins}/repository-gcs.html[repository-gcs] for Google Cloud Storage repositories
+
+[float]
+[[snapshots-repository-verification]]
+=== Repository verification
+When a repository is registered, it's immediately verified on all master and data nodes to make sure that it is functional
+on all nodes currently present in the cluster. The `verify` parameter can be used to explicitly disable the repository
+verification when registering or updating a repository:
+
+[source,console]
+-----------------------------------
+PUT /_snapshot/my_unverified_backup?verify=false
+{
+  "type": "fs",
+  "settings": {
+    "location": "my_unverified_backup_location"
+  }
+}
+-----------------------------------
+// TEST[continued]
+
+The verification process can also be executed manually by running the following command:
+
+[source,console]
+-----------------------------------
+POST /_snapshot/my_unverified_backup/_verify
+-----------------------------------
+// TEST[continued]
+
+It returns a list of nodes on which the repository was successfully verified, or an error message if the verification process failed.
+
+[float]
+[[snapshots-repository-cleanup]]
+=== Repository cleanup
+Over time, repositories can accumulate data that is not referenced by any existing snapshot. This is a result of the data safety guarantees
+the snapshot functionality provides in failure scenarios during snapshot creation and of the decentralized nature of the snapshot creation
+process. This unreferenced data in no way negatively impacts the performance or safety of a snapshot repository, but it leads to higher
+than necessary storage use. To clean up this unreferenced data, you can call the cleanup endpoint for a repository, which
+triggers a complete accounting of the repository's contents and then deletes any unreferenced data that was found.
+
+[source,console]
+-----------------------------------
+POST /_snapshot/my_repository/_cleanup
+-----------------------------------
+// TEST[continued]
+
+The response to a cleanup request looks as follows:
+
+[source,console-result]
+--------------------------------------------------
+{
+  "results": {
+    "deleted_bytes": 20,
+    "deleted_blobs": 5
+  }
+}
+--------------------------------------------------
+
+Depending on the concrete repository implementation, the number of bytes freed and the number of blobs removed are either
+an approximation or an exact result. Any non-zero value for the number of blobs removed implies that unreferenced blobs were found and
+subsequently cleaned up.
+
+Note that most of the cleanup operations executed by this endpoint are performed automatically when a snapshot is deleted from a
+repository. If you delete snapshots regularly, running this cleanup will in most cases yield little or no additional space savings,
+so you should lower the frequency with which you invoke it accordingly.

+ 205 - 0
docs/reference/snapshot-restore/restore-snapshot.asciidoc

@@ -0,0 +1,205 @@
+[[snapshots-restore-snapshot]]
+== Restore indices from a snapshot
+
+++++
+<titleabbrev>Restore a snapshot</titleabbrev>
+++++
+
+////
+[source,console]
+-----------------------------------
+PUT /_snapshot/my_backup
+{
+  "type": "fs",
+  "settings": {
+    "location": "my_backup_location"
+  }
+}
+
+PUT /_snapshot/my_backup/snapshot_1?wait_for_completion=true
+-----------------------------------
+// TESTSETUP
+
+////
+
+A snapshot can be restored using the following command:
+
+[source,console]
+-----------------------------------
+POST /_snapshot/my_backup/snapshot_1/_restore
+-----------------------------------
+
+By default, all indices in the snapshot are restored, and the cluster state is
+*not* restored. Use the `indices` option in the restore request body to select
+which indices to restore, and set the `include_global_state` option to `true`
+to also restore the global cluster state. The list of indices
+supports <<multi-index,multi index syntax>>. The `rename_pattern`
+and `rename_replacement` options can also be used to rename indices on restore
+using a regular expression that supports referencing the original text, as
+explained
+http://docs.oracle.com/javase/6/docs/api/java/util/regex/Matcher.html#appendReplacement(java.lang.StringBuffer,%20java.lang.String)[here].
+Set `include_aliases` to `false` to prevent aliases from being restored together
+with their associated indices.
+
+[source,console]
+-----------------------------------
+POST /_snapshot/my_backup/snapshot_1/_restore
+{
+  "indices": "index_1,index_2",
+  "ignore_unavailable": true,
+  "include_global_state": true,
+  "rename_pattern": "index_(.+)",
+  "rename_replacement": "restored_index_$1"
+}
+-----------------------------------
+// TEST[continued]
+
+The restore operation can be performed on a functioning cluster. However, an
+existing index can only be restored if it's <<indices-open-close,closed>> and
+has the same number of shards as the index in the snapshot. The restore
+operation automatically opens restored indices if they were closed and creates
+new indices if they didn't exist in the cluster. If the cluster state is restored
+with `include_global_state` (defaults to `false`), restored templates that
+don't currently exist in the cluster are added and existing templates with the
+same name are replaced by the restored templates. Restored persistent
+settings are added to the existing persistent settings.
+
+[float]
+=== Partial restore
+
+By default, the entire restore operation fails if one or more indices participating in the operation don't have
+snapshots of all shards available. This can occur, for example, if some shards failed to snapshot. You can still
+restore such indices by setting `partial` to `true`. Note that only successfully snapshotted shards are
+restored in this case; all missing shards are recreated empty.
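+
+For example, the following sketch restores `snapshot_1` even if some of its
+shards are unavailable; any missing shards are recreated empty:
+
+[source,console]
+-----------------------------------
+POST /_snapshot/my_backup/snapshot_1/_restore
+{
+  "partial": true
+}
+-----------------------------------
+// TEST[skip:cannot restore into the same cluster]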
+
+
+[float]
+=== Changing index settings during restore
+
+Most index settings can be overridden during the restore process. For example, the following command restores
+the index `index_1` without creating any replicas and switches back to the default refresh interval:
+
+[source,console]
+-----------------------------------
+POST /_snapshot/my_backup/snapshot_1/_restore
+{
+  "indices": "index_1",
+  "ignore_unavailable": true,
+  "index_settings": {
+    "index.number_of_replicas": 0
+  },
+  "ignore_index_settings": [
+    "index.refresh_interval"
+  ]
+}
+-----------------------------------
+// TEST[continued]
+
+Note that some settings, such as `index.number_of_shards`, cannot be changed during the restore operation.
+
+[float]
+=== Restoring to a different cluster
+
+The information stored in a snapshot is not tied to a particular cluster or a cluster name. Therefore, it's possible to
+restore a snapshot made from one cluster into another cluster. All that is required is to register the repository
+containing the snapshot in the new cluster and start the restore process. The new cluster doesn't have to have the
+same size or topology. However, the version of the new cluster must be the same as, or at most one major version newer
+than, the version of the cluster that was used to create the snapshot. For example, you can restore a 1.x snapshot to a
+2.x cluster, but not a 1.x snapshot to a 5.x cluster.
+
+If the new cluster is smaller, additional considerations apply. First of all, you must make sure
+that the new cluster has enough capacity to store all indices in the snapshot. It's possible to change index settings
+during restore to reduce the number of replicas, which can help with restoring snapshots into a smaller cluster. It's also
+possible to select only a subset of the indices using the `indices` parameter.
+
+If indices in the original cluster were assigned to particular nodes using
+<<shard-allocation-filtering,shard allocation filtering>>, the same rules are enforced in the new cluster. If the
+new cluster doesn't contain nodes with the appropriate attributes on which a restored index can be allocated, the
+index will not be successfully restored unless these index allocation settings are changed during the restore operation.
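+
+For example, the following sketch drops a hypothetical allocation filter
+during restore (`index.routing.allocation.require.box_type` is an assumption
+for illustration) so the restored indices can be allocated on any node in the
+new cluster:
+
+[source,console]
+-----------------------------------
+POST /_snapshot/my_backup/snapshot_1/_restore
+{
+  "ignore_index_settings": [
+    "index.routing.allocation.require.box_type"
+  ]
+}
+-----------------------------------
+// TEST[skip:hypothetical example]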
+
+The restore operation also checks that restored persistent settings are compatible with the current cluster to avoid accidentally
+restoring incompatible settings. If you need to restore a snapshot with incompatible persistent settings, try restoring it without
+the global cluster state.
+
+[float]
+=== Snapshot status
+
+A list of currently running snapshots with their detailed status information can be obtained using the following command:
+
+[source,console]
+-----------------------------------
+GET /_snapshot/_status
+-----------------------------------
+// TEST[continued]
+
+In this format, the command will return information about all currently running snapshots. By specifying a repository name, it's possible
+to limit the results to a particular repository:
+
+[source,console]
+-----------------------------------
+GET /_snapshot/my_backup/_status
+-----------------------------------
+// TEST[continued]
+
+If both the repository name and snapshot ID are specified, this command returns detailed status information for the given snapshot, even
+if it's not currently running:
+
+[source,console]
+-----------------------------------
+GET /_snapshot/my_backup/snapshot_1/_status
+-----------------------------------
+// TEST[continued]
+
+The output looks similar to the following:
+
+[source,console-result]
+--------------------------------------------------
+{
+  "snapshots": [
+    {
+      "snapshot": "snapshot_1",
+      "repository": "my_backup",
+      "uuid": "XuBo4l4ISYiVg0nYUen9zg",
+      "state": "SUCCESS",
+      "include_global_state": true,
+      "shards_stats": {
+        "initializing": 0,
+        "started": 0,
+        "finalizing": 0,
+        "done": 5,
+        "failed": 0,
+        "total": 5
+      },
+      "stats": {
+        "incremental": {
+          "file_count": 8,
+          "size_in_bytes": 4704
+        },
+        "processed": {
+          "file_count": 7,
+          "size_in_bytes": 4254
+        },
+        "total": {
+          "file_count": 8,
+          "size_in_bytes": 4704
+        },
+        "start_time_in_millis": 1526280280355,
+        "time_in_millis": 358
+      }
+    }
+  ]
+}
+--------------------------------------------------
+// TESTRESPONSE[skip: No snapshot status to validate.]
+
+The output is composed of different sections. The `stats` sub-object provides details on the number and size of files that were
+snapshotted. As snapshots are incremental, copying only the Lucene segments that are not already in the repository,
+the `stats` object contains a `total` section for all the files that are referenced by the snapshot, as well as an `incremental` section
+for those files that actually needed to be copied over as part of the incremental snapshotting. In case of a snapshot that's still
+in progress, there's also a `processed` section that contains information about the files that are in the process of being copied.
+
+Multiple snapshot IDs are also supported:
+
+[source,console]
+-----------------------------------
+GET /_snapshot/my_backup/snapshot_1,snapshot_2/_status
+-----------------------------------
+// TEST[skip: no snapshot_2 to get]

+ 207 - 0
docs/reference/snapshot-restore/take-snapshot.asciidoc

@@ -0,0 +1,207 @@
+[[snapshots-take-snapshot]]
+== Take a snapshot of one or more indices
+
+++++
+<titleabbrev>Take a snapshot</titleabbrev>
+++++
+
+A repository can contain multiple snapshots of the same cluster. Snapshots are identified by unique names within the
+cluster. A snapshot with the name `snapshot_1` in the repository `my_backup` can be created by executing the following
+command:
+
+////
+[source,console]
+-----------------------------------
+PUT /_snapshot/my_backup
+{
+  "type": "fs",
+  "settings": {
+    "location": "my_backup_location"
+  }
+}
+-----------------------------------
+// TESTSETUP
+////
+
+[source,console]
+-----------------------------------
+PUT /_snapshot/my_backup/snapshot_1?wait_for_completion=true
+-----------------------------------
+
+The `wait_for_completion` parameter specifies whether the request should return immediately after snapshot
+initialization (the default) or wait for snapshot completion. During snapshot initialization, information about all
+previous snapshots is loaded into memory, which means that in large repositories it may take several seconds (or
+even minutes) for this command to return even if the `wait_for_completion` parameter is set to `false`.
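+
+For example, a long-running snapshot can be started without waiting for it to
+finish and then monitored with the status API (the snapshot name is an
+assumption for illustration):
+
+[source,console]
+-----------------------------------
+PUT /_snapshot/my_backup/snapshot_async?wait_for_completion=false
+
+GET /_snapshot/my_backup/snapshot_async/_status
+-----------------------------------
+// TEST[skip:hypothetical example]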
+
+By default, a snapshot of all open and started indices in the cluster is created. You can change this behavior by
+specifying the list of indices in the body of the snapshot request.
+
+[source,console]
+-----------------------------------
+PUT /_snapshot/my_backup/snapshot_2?wait_for_completion=true
+{
+  "indices": "index_1,index_2",
+  "ignore_unavailable": true,
+  "include_global_state": false,
+  "metadata": {
+    "taken_by": "kimchy",
+    "taken_because": "backup before upgrading"
+  }
+}
+-----------------------------------
+// TEST[skip:cannot complete subsequent snapshot]
+
+The list of indices that should be included in the snapshot can be specified using the `indices` parameter, which
+supports <<multi-index,multi index syntax>>. The snapshot request also supports the
+`ignore_unavailable` option. Setting it to `true` causes indices that do not exist to be ignored during snapshot
+creation. By default, when the `ignore_unavailable` option is not set and an index is missing, the snapshot request fails.
+By setting `include_global_state` to `false`, you can prevent the cluster's global state from being stored as part of
+the snapshot. By default, the entire snapshot fails if one or more indices participating in the snapshot don't have
+all primary shards available. You can change this behavior by setting `partial` to `true`.
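+
+For example, the following sketch creates a partial snapshot of a single index
+that succeeds even if some primary shards are unavailable (the snapshot name is
+an assumption for illustration):
+
+[source,console]
+-----------------------------------
+PUT /_snapshot/my_backup/partial_snapshot?wait_for_completion=true
+{
+  "indices": "index_1",
+  "partial": true
+}
+-----------------------------------
+// TEST[skip:hypothetical example]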
+
+The `metadata` field can be used to attach arbitrary metadata to the snapshot. This may be a record of who took the snapshot,
+why it was taken, or any other data that might be useful.
+
+Snapshot names can be automatically derived using <<date-math-index-names,date math expressions>>, in the same way as when creating
+new indices. Note that special characters must be URI encoded.
+
+For example, creating a snapshot with the current day in the name, like `snapshot-2018.05.11`, can be achieved with
+the following command:
+
+[source,console]
+-----------------------------------
+# PUT /_snapshot/my_backup/<snapshot-{now/d}>
+PUT /_snapshot/my_backup/%3Csnapshot-%7Bnow%2Fd%7D%3E
+-----------------------------------
+// TEST[continued]
+
+
+The index snapshot process is incremental. When making an index snapshot, Elasticsearch analyzes
+the list of the index files that are already stored in the repository and copies only the files that were created or
+changed since the last snapshot. This allows multiple snapshots to be preserved in the repository in a compact form.
+The snapshot process is executed in a non-blocking fashion. All indexing and searching operations can continue to be
+executed against the index that is being snapshotted. However, a snapshot represents a point-in-time view of the index
+at the moment the snapshot was created, so no records that were added to the index after the snapshot process started
+will be present in the snapshot. The snapshot process starts immediately for the primary shards that have been started
+and are not relocating at the moment. Before version 1.2.0, the snapshot operation failed if the cluster had any relocating or
+initializing primaries of indices participating in the snapshot. Starting with version 1.2.0, Elasticsearch waits for
+relocation or initialization of shards to complete before snapshotting them.
+
+Besides creating a copy of each index, the snapshot process can also store global cluster metadata, which includes persistent
+cluster settings and templates. Transient settings and registered snapshot repositories are not stored as part of
+the snapshot.
+
+Only one snapshot process can be executed in the cluster at any time. While a snapshot of a particular shard is being
+created, the shard cannot be moved to another node, which can interfere with the rebalancing process and allocation
+filtering. Elasticsearch can only move the shard to another node (according to the current allocation
+filtering settings and rebalancing algorithm) once the snapshot is finished.
+
+Once a snapshot is created, information about this snapshot can be obtained using the following command:
+
+[source,console]
+-----------------------------------
+GET /_snapshot/my_backup/snapshot_1
+-----------------------------------
+// TEST[continued]
+
+This command returns basic information about the snapshot, including the start and end time, the version of
+Elasticsearch that created the snapshot, the list of included indices, the current state of the
+snapshot, and the list of failures that occurred during the snapshot. The snapshot `state` can be:
+
+[horizontal]
+`IN_PROGRESS`::
+
+  The snapshot is currently running.
+
+`SUCCESS`::
+
+  The snapshot finished and all shards were stored successfully.
+
+`FAILED`::
+
+  The snapshot finished with an error and failed to store any data.
+
+`PARTIAL`::
+
+  The global cluster state was stored, but data of at least one shard wasn't stored successfully.
+  The `failure` section in this case should contain more detailed information about shards
+  that were not processed correctly.
+
+`INCOMPATIBLE`::
+
+  The snapshot was created with an old version of Elasticsearch and therefore is incompatible with
+  the current version of the cluster.
+
+
+As with repositories, information about multiple snapshots can be queried in one request, and wildcards are supported:
+
+[source,console]
+-----------------------------------
+GET /_snapshot/my_backup/snapshot_*,some_other_snapshot
+-----------------------------------
+// TEST[continued]
+
+All snapshots currently stored in the repository can be listed using the following command:
+
+[source,console]
+-----------------------------------
+GET /_snapshot/my_backup/_all
+-----------------------------------
+// TEST[continued]
+
+The command fails if some of the snapshots are unavailable. The boolean parameter `ignore_unavailable` can be used to
+return all snapshots that are currently available.
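+
+For example, the following sketch lists whatever snapshots are currently
+available, skipping any that cannot be read:
+
+[source,console]
+-----------------------------------
+GET /_snapshot/my_backup/_all?ignore_unavailable=true
+-----------------------------------
+// TEST[continued]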
+
+Getting all snapshots in the repository can be costly on cloud-based repositories,
+both from a cost and performance perspective. If the only information required is
+the snapshot names/uuids in the repository and the indices in each snapshot, then
+the optional boolean parameter `verbose` can be set to `false` to execute a more
+performant and cost-effective retrieval of the snapshots in the repository. Note
+that setting `verbose` to `false` omits all other information about the snapshot,
+such as status information, the number of snapshotted shards, etc. The default
+value of the `verbose` parameter is `true`.
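+
+For example, the following sketch retrieves only the names, UUIDs, and indices
+of the snapshots in the repository:
+
+[source,console]
+-----------------------------------
+GET /_snapshot/my_backup/_all?verbose=false
+-----------------------------------
+// TEST[continued]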
+
+It is also possible to retrieve snapshots from multiple repositories in one go, for example:
+
+[source,console]
+-----------------------------------
+GET /_snapshot/_all
+GET /_snapshot/my_backup,my_fs_backup
+GET /_snapshot/my*/snap*
+-----------------------------------
+// TEST[skip:no my_fs_backup]
+
+A currently running snapshot can be retrieved using the following command:
+
+[source,console]
+-----------------------------------
+GET /_snapshot/my_backup/_current
+-----------------------------------
+// TEST[continued]
+
+A snapshot can be deleted from the repository using the following command:
+
+[source,console]
+-----------------------------------
+DELETE /_snapshot/my_backup/snapshot_2
+-----------------------------------
+// TEST[continued]
+
+When a snapshot is deleted from a repository, Elasticsearch deletes all files that are associated with the deleted
+snapshot and not used by any other snapshots. If the delete snapshot operation is executed while the snapshot is being
+created, the snapshot process is aborted and all files created as part of it are
+cleaned up. Therefore, the delete snapshot operation can be used to cancel long-running snapshot operations that were
+started by mistake.
+
+A repository can be unregistered using the following command:
+
+[source,console]
+-----------------------------------
+DELETE /_snapshot/my_backup
+-----------------------------------
+// TEST[continued]
+
+When a repository is unregistered, Elasticsearch only removes the reference to the location where the repository is storing
+the snapshots. The snapshots themselves are left untouched and in place.
+
+