[role="xpack"]
[testenv="basic"]
[[getting-started-snapshot-lifecycle-management]]
== Getting started with snapshot lifecycle management

Let's get started with snapshot lifecycle management (SLM) by working through a
hands-on scenario. The goal of this example is to automatically back up {es}
indices using <<modules-snapshots,snapshots>> every day at a particular time.
Once these snapshots have been created, they are kept for a configured amount
of time and then deleted per the configured retention policy.

[float]
[[slm-and-security]]
=== Security and SLM

Before starting, it's important to understand the privileges that are needed
when configuring SLM if you are using the security plugin. There are two
built-in cluster privileges that can be used to assist: `manage_slm` and
`read_slm`. It's also good to note that the `cluster:admin/snapshot/*`
permission allows taking and deleting snapshots even for indices the role may
not have access to.

An example of configuring an administrator role for SLM follows:

[source,console]
-----------------------------------
POST /_security/role/slm-admin
{
  "cluster": ["manage_slm", "cluster:admin/snapshot/*"],
  "indices": [
    {
      "names": [".slm-history-*"],
      "privileges": ["all"]
    }
  ]
}
-----------------------------------
// TEST[skip:security is not enabled here]

Or, for a read-only role that can retrieve policies (but not update, execute, or
delete them), as well as only view the history index:

[source,console]
-----------------------------------
POST /_security/role/slm-read-only
{
  "cluster": ["read_slm"],
  "indices": [
    {
      "names": [".slm-history-*"],
      "privileges": ["read"]
    }
  ]
}
-----------------------------------
// TEST[skip:security is not enabled here]
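
Once a role is in place, you would typically assign it to a user. As a quick
illustration (the username and password here are hypothetical), the read-only
role could be granted like this:

[source,console]
-----------------------------------
POST /_security/user/snapshot_viewer
{
  "password" : "changeme-example",
  "roles" : [ "slm-read-only" ]
}
-----------------------------------
// TEST[skip:security is not enabled here]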

[float]
[[slm-gs-create-policy]]
=== Setting up a repository

Before we can set up an SLM policy, we'll need to set up a
<<snapshots-repositories,snapshot repository>> where the snapshots will be
stored. Repositories can use {plugins}/repository.html[many different backends],
including cloud storage providers. You'll probably want to use one of these in
production, but for this example we'll use a shared file system repository:

[source,console]
-----------------------------------
PUT /_snapshot/my_repository
{
  "type": "fs",
  "settings": {
    "location": "my_backup_location"
  }
}
-----------------------------------
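
If you want to confirm that the repository was registered correctly and is
accessible from the cluster, you can verify it; a successful response lists the
nodes that were able to access the repository:

[source,console]
-----------------------------------
POST /_snapshot/my_repository/_verify
-----------------------------------
// TEST[continued]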

[float]
=== Setting up a policy

Now that we have a repository in place, we can create a policy to automatically
take snapshots. Policies are written in JSON and define when to take snapshots,
what the snapshots should be named, and which indices should be included, among
other things. We'll use the <<slm-api-put,Put Policy>> API to create the policy.

When configuring a policy, you can also optionally configure retention. See the
<<slm-retention,SLM retention>> documentation for full details on how retention
works.

[source,console]
--------------------------------------------------
PUT /_slm/policy/nightly-snapshots
{
  "schedule": "0 30 1 * * ?", <1>
  "name": "<nightly-snap-{now/d}>", <2>
  "repository": "my_repository", <3>
  "config": { <4>
    "indices": ["*"] <5>
  },
  "retention": { <6>
    "expire_after": "30d", <7>
    "min_count": 5, <8>
    "max_count": 50 <9>
  }
}
--------------------------------------------------
// TEST[continued]
<1> When the snapshot should be taken, using
<<schedule-cron,Cron syntax>>, in this case at 1:30AM each day
<2> The name each snapshot should be given, using
<<date-math-index-names,date math>> to include the current date in the name of
the snapshot
<3> The repository the snapshot should be stored in
<4> The configuration to be used for the snapshot requests (see below)
<5> Which indices should be included in the snapshot, in this case, every index
<6> Optional retention configuration
<7> Keep snapshots for 30 days
<8> Always keep at least 5 successful snapshots
<9> Keep no more than 50 successful snapshots, even if they're less than 30 days old

This policy will take a snapshot of every index each day at 1:30AM UTC.
Snapshots are incremental, allowing frequent snapshots to be stored efficiently,
so don't be afraid to configure a policy to take frequent snapshots.
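
For example, a policy that snapshots every hour would only need a different
`schedule`. A sketch, with an illustrative policy name (note that, as the
example output later in this guide shows, SLM appends a unique suffix to each
snapshot name, so the names stay distinct even with a daily date in them):

[source,console]
--------------------------------------------------
PUT /_slm/policy/hourly-snapshots
{
  "schedule": "0 0 * * * ?",
  "name": "<hourly-snap-{now/d}>",
  "repository": "my_repository",
  "config": {
    "indices": ["*"]
  }
}
--------------------------------------------------
// TEST[skip:illustrative example only]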

In addition to specifying the indices that should be included in the snapshot,
the `config` field can be used to customize other aspects of the snapshot. You
can use any option allowed in <<snapshots-take-snapshot,a regular snapshot
request>>, so you can specify, for example, whether the snapshot should fail in
special cases, such as if one of the specified indices cannot be found.
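
As a concrete sketch (the option values here are chosen purely for
illustration), a policy could fail the snapshot when an index is missing and
leave the cluster state out of the snapshot:

[source,console]
--------------------------------------------------
PUT /_slm/policy/nightly-snapshots
{
  "schedule": "0 30 1 * * ?",
  "name": "<nightly-snap-{now/d}>",
  "repository": "my_repository",
  "config": {
    "indices": ["*"],
    "ignore_unavailable": false,
    "include_global_state": false
  }
}
--------------------------------------------------
// TEST[skip:illustrative example only]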

[float]
=== Making sure the policy works

While snapshots taken by SLM policies can be viewed through the standard snapshot
API, SLM also keeps track of policy successes and failures in ways that are a bit
easier to use to make sure the policy is working. Once a policy has executed at
least once, when you view the policy using the <<slm-api-get,Get Policy API>>,
some metadata will be returned indicating whether the snapshot was successfully
initiated or not.
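
For reference, viewing the snapshots themselves through the standard snapshot
API is a single request against the repository we registered earlier:

[source,console]
--------------------------------------------------
GET /_snapshot/my_repository/_all
--------------------------------------------------
// TEST[continued]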

Instead of waiting for our policy to run, let's tell SLM to take a snapshot
using the configuration from our policy right now, rather than waiting for
1:30AM.

[source,console]
--------------------------------------------------
POST /_slm/policy/nightly-snapshots/_execute
--------------------------------------------------
// TEST[skip:we can't easily handle snapshots from docs tests]

This request will kick off a snapshot for our policy right now, regardless of
the schedule in the policy. This is useful for taking snapshots before making
a configuration change, upgrading, or, for our purposes, making sure the policy
is going to work successfully. The policy will continue to run on its configured
schedule after this manually triggered execution.
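
The execute request returns the name of the snapshot that was just initiated,
which is handy if you want to look it up through the snapshot API afterwards.
The name below is illustrative; yours will have a different date and suffix:

[source,console-result]
--------------------------------------------------
{
  "snapshot_name": "nightly-snap-2019.04.24-tmtnyjtrsxkhbrrdcgg18a"
}
--------------------------------------------------
// TESTRESPONSE[skip:we can't easily handle snapshots from docs tests]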

[source,console]
--------------------------------------------------
GET /_slm/policy/nightly-snapshots?human
--------------------------------------------------
// TEST[continued]

This request will return a response that includes the policy, along with
information about the last time the policy succeeded and failed, and the next
time the policy will be executed.

[source,console-result]
--------------------------------------------------
{
  "nightly-snapshots" : {
    "version": 1,
    "modified_date": "2019-04-23T01:30:00.000Z",
    "modified_date_millis": 1556048137314,
    "policy" : {
      "schedule": "0 30 1 * * ?",
      "name": "<nightly-snap-{now/d}>",
      "repository": "my_repository",
      "config": {
        "indices": ["*"]
      },
      "retention": {
        "expire_after": "30d",
        "min_count": 5,
        "max_count": 50
      }
    },
    "last_success": { <1>
      "snapshot_name": "nightly-snap-2019.04.24-tmtnyjtrsxkhbrrdcgg18a", <2>
      "time_string": "2019-04-24T16:43:49.316Z",
      "time": 1556124229316
    },
    "last_failure": { <3>
      "snapshot_name": "nightly-snap-2019.04.02-lohisb5ith2n8hxacaq3mw",
      "time_string": "2019-04-02T01:30:00.000Z",
      "time": 1556042030000,
      "details": "{\"type\":\"index_not_found_exception\",\"reason\":\"no such index [important]\",\"resource.type\":\"index_or_alias\",\"resource.id\":\"important\",\"index_uuid\":\"_na_\",\"index\":\"important\",\"stack_trace\":\"[important] IndexNotFoundException[no such index [important]]\\n\\tat org.elasticsearch.cluster.metadata.IndexNameExpressionResolver$WildcardExpressionResolver.indexNotFoundException(IndexNameExpressionResolver.java:762)\\n\\tat org.elasticsearch.cluster.metadata.IndexNameExpressionResolver$WildcardExpressionResolver.innerResolve(IndexNameExpressionResolver.java:714)\\n\\tat org.elasticsearch.cluster.metadata.IndexNameExpressionResolver$WildcardExpressionResolver.resolve(IndexNameExpressionResolver.java:670)\\n\\tat org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.concreteIndices(IndexNameExpressionResolver.java:163)\\n\\tat org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.concreteIndexNames(IndexNameExpressionResolver.java:142)\\n\\tat org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.concreteIndexNames(IndexNameExpressionResolver.java:102)\\n\\tat org.elasticsearch.snapshots.SnapshotsService$1.execute(SnapshotsService.java:280)\\n\\tat org.elasticsearch.cluster.ClusterStateUpdateTask.execute(ClusterStateUpdateTask.java:47)\\n\\tat org.elasticsearch.cluster.service.MasterService.executeTasks(MasterService.java:687)\\n\\tat org.elasticsearch.cluster.service.MasterService.calculateTaskOutputs(MasterService.java:310)\\n\\tat org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:210)\\n\\tat org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:142)\\n\\tat org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150)\\n\\tat org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188)\\n\\tat org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:688)\\n\\tat org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:252)\\n\\tat org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:215)\\n\\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\\n\\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\\n\\tat java.base/java.lang.Thread.run(Thread.java:834)\\n\"}"
    },
    "next_execution": "2019-04-24T01:30:00.000Z", <4>
    "next_execution_millis": 1556048160000
  }
}
--------------------------------------------------
// TESTRESPONSE[skip:the presence of last_failure and last_success is asynchronous and will be present for users, but is untestable]
<1> Information about the last time the policy successfully initiated a snapshot
<2> The name of the snapshot that was successfully initiated
<3> Information about the last time the policy failed to initiate a snapshot
<4> The next time the policy will execute

NOTE: This metadata only indicates whether the request to initiate the snapshot
was made successfully or not. After the snapshot has been successfully started,
it is possible for the snapshot to fail if, for example, the connection to a
remote repository is lost while copying files.

If you're following along, the returned SLM policy shouldn't have a
`last_failure` field; it's included above only as an example. You should,
however, see a `last_success` field and a snapshot name. If you do, you've
successfully taken your first snapshot using SLM!

While only the most recent success and failure are available through the Get
Policy API, all policy executions are recorded to a history index, which may be
queried by searching the index pattern `.slm-history*`.
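
For example, to see the recorded executions for our policy, you could search
that pattern directly. A minimal sketch, assuming the history documents carry
`policy` and `@timestamp` fields:

[source,console]
--------------------------------------------------
GET /.slm-history*/_search
{
  "query": {
    "match": { "policy": "nightly-snapshots" }
  },
  "sort": [
    { "@timestamp": "desc" }
  ]
}
--------------------------------------------------
// TEST[skip:no history index exists in docs tests]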

That's it! We have our first SLM policy set up to periodically take snapshots
so that our backups are always up to date. You can read more details in the
<<snapshot-lifecycle-management-api,SLM API documentation>> and the
<<modules-snapshots,general snapshot documentation>>.