[DOCS] Align with ILM changes. (#55953)

* [DOCS] Align with ILM changes.

* Apply suggestions from code review

Co-authored-by: James Rodewig <james.rodewig@elastic.co>
Co-authored-by: Lee Hinman <dakrone@users.noreply.github.com>

* Incorporated review comments.
debadair · 5 years ago · commit f7cd772402

+ 2 - 3
docs/reference/ilm/ilm-tutorial.asciidoc

@@ -43,9 +43,8 @@ A lifecycle policy specifies the phases in the index lifecycle
 and the actions to perform in each phase. A lifecycle can have up to four phases:
 `hot`, `warm`, `cold`, and `delete`. 
 
-You can define and manage policies through the {kib} Management UI, 
-which invokes the {ilm-init} <<ilm-put-lifecycle, put policy>> API to create policies
-according to the options you specify.
+You can define and manage policies through {kib} Management or with the 
+<<ilm-put-lifecycle, put policy>> API.
 
 For example, you might define a `timeseries_policy` that has two phases:
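As a sketch of what such a two-phase policy might look like, the request below creates it with the put policy API (the `50GB`/`30d`/`90d` thresholds are illustrative values, not mandated by this change):

[source,console]
-----------------------------------
PUT _ilm/policy/timeseries_policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_size": "50GB",  <1>
            "max_age": "30d"
          }
        }
      },
      "delete": {
        "min_age": "90d",  <2>
        "actions": {
          "delete": {}
        }
      }
    }
  }
}
-----------------------------------
<1> Roll over to a new index when the current one reaches either threshold.
<2> Delete the index 90 days after rollover.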
  

+ 18 - 7
docs/reference/settings/ilm-settings.asciidoc

@@ -1,38 +1,47 @@
 [role="xpack"]
 [[ilm-settings]]
-=== {ilm-cap} settings
+=== {ilm-cap} settings in {es}
+[subs="attributes"]
+++++
+<titleabbrev>{ilm-cap} settings</titleabbrev>
+++++
 
-These are the settings available for configuring Index Lifecycle Management
+These are the settings available for configuring <<index-lifecycle-management, {ilm}>> ({ilm-init}).
 
 ==== Cluster level settings
 
 `xpack.ilm.enabled`::
+(boolean)
 deprecated:[7.8.0,Basic License features are always enabled] +
 This deprecated setting has no effect and will be removed in Elasticsearch 8.0.
 
-`indices.lifecycle.poll_interval`::
-(<<time-units, time units>>) How often {ilm} checks for indices that meet policy
-criteria. Defaults to `10m`.
-
 `indices.lifecycle.history_index_enabled`::
+(boolean)
 Whether ILM's history index is enabled. If enabled, ILM will record the
 history of actions taken as part of ILM policies to the `ilm-history-*`
 indices. Defaults to `true`.
 
+`indices.lifecycle.poll_interval`::
+(<<cluster-update-settings,Dynamic>>, <<time-units, time unit value>>) 
+How often {ilm} checks for indices that meet policy criteria. Defaults to `10m`.
+
 ==== Index level settings
 These index-level {ilm-init} settings are typically configured through index
 templates. For more information, see <<ilm-gs-create-policy>>.
 
 `index.lifecycle.name`::
+(<<indices-update-settings, Dynamic>>, string) 
 The name of the policy to use to manage the index.
 
 `index.lifecycle.rollover_alias`::
+(<<indices-update-settings,Dynamic>>, string) 
 The index alias to update when the index rolls over. Specify when using a
 policy that contains a rollover action. When the index rolls over, the alias is
 updated to reflect that the index is no longer the write index. For more
 information about rollover, see <<using-policies-rollover>>.
 
 `index.lifecycle.parse_origination_date`::
+(<<indices-update-settings,Dynamic>>, boolean) 
 When configured to `true` the origination date will be parsed from the index
 name. The index format must match the pattern `^.*-{date_format}-\\d+`, where
 the `date_format` is `yyyy.MM.dd` and the trailing digits are optional (an
@@ -41,6 +50,8 @@ index that was rolled over would normally match the full format eg.
 the index creation will fail.
 
 `index.lifecycle.origination_date`::
+(<<indices-update-settings,Dynamic>>, long) 
 The timestamp that will be used to calculate the index age for its phase
 transitions. This allows the users to create an index containing old data and
-use the original creation date of the old data to calculate the index age.  Must be a long (Unix epoch) value.
+use the original creation date of the old data to calculate the index age.  
+Must be a long (Unix epoch) value.

+ 33 - 0
docs/reference/settings/slm-settings.asciidoc

@@ -0,0 +1,33 @@
+[role="xpack"]
+[[slm-settings]]
+=== {slm-cap} settings in {es}
+[subs="attributes"]
+++++
+<titleabbrev>{slm-cap} settings</titleabbrev>
+++++
+
+These are the settings available for configuring 
+<<snapshot-lifecycle-management, {slm}>> ({slm-init}).
+
+==== Cluster-level settings
+
+[[slm-history-index-enabled]]
+`slm.history_index_enabled`::
+(boolean)
+Controls whether {slm-init} records the history of actions taken as part of {slm-init} policies
+to the `slm-history-*` indices. Defaults to `true`.
+
+[[slm-retention-schedule]]
+`slm.retention_schedule`::
+(<<cluster-update-settings,Dynamic>>, <<schedule-cron,cron scheduler value>>) 
+Controls when the <<slm-retention,retention task>> runs.
+Can be a periodic or absolute time schedule.
+Supports all values supported by the <<schedule-cron,cron scheduler>>.
+Defaults to daily at 1:30am UTC: `0 30 1 * * ?`.
+
+[[slm-retention-duration]]
+`slm.retention_duration`::
+(<<cluster-update-settings,Dynamic>>, <<time-units,time value>>)
+Limits how long {slm-init} should spend deleting old snapshots.
+Defaults to one hour: `1h`.
+

+ 9 - 7
docs/reference/setup.asciidoc

@@ -45,18 +45,20 @@ include::setup/jvm-options.asciidoc[]
 
 include::setup/secure-settings.asciidoc[]
 
-include::settings/ccr-settings.asciidoc[]
+include::settings/audit-settings.asciidoc[]
 
 include::modules/indices/circuit_breaker.asciidoc[]
 
-include::modules/indices/recovery.asciidoc[]
-
-include::modules/indices/indexing_buffer.asciidoc[]
+include::settings/ccr-settings.asciidoc[]
 
 include::modules/indices/fielddata.asciidoc[]
 
 include::settings/ilm-settings.asciidoc[]
 
+include::modules/indices/recovery.asciidoc[]
+
+include::modules/indices/indexing_buffer.asciidoc[]
+
 include::settings/license-settings.asciidoc[]
 
 include::setup/logging-config.asciidoc[]
@@ -69,13 +71,13 @@ include::modules/network.asciidoc[]
 
 include::modules/indices/query_cache.asciidoc[]
 
-include::modules/indices/request_cache.asciidoc[]
-
 include::modules/indices/search-settings.asciidoc[]
 
 include::settings/security-settings.asciidoc[]
 
-include::settings/audit-settings.asciidoc[]
+include::modules/indices/request_cache.asciidoc[]
+
+include::settings/slm-settings.asciidoc[]
 
 include::settings/sql-settings.asciidoc[]
 

+ 1 - 0
docs/reference/slm/apis/slm-put.asciidoc

@@ -83,6 +83,7 @@ Repository used to store snapshots created by this policy. This repository must
 exist prior to the policy's creation. You can create a repository using the 
 <<modules-snapshots,snapshot repository API>>.
 
+[[slm-api-put-retention]]
 `retention`::
 (Optional, object)
 Retention rules used to retain and delete snapshots created by the policy.
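For context, a `retention` object is supplied as part of the policy body. This sketch assumes a registered repository named `my_repository`; the rule values mirror the tutorial example elsewhere in these docs:

[source,console]
-----------------------------------
PUT _slm/policy/daily-snapshots
{
  "schedule": "0 30 1 * * ?",
  "name": "<daily-snap-{now/d}>",
  "repository": "my_repository",
  "config": {
    "indices": ["*"]
  },
  "retention": {
    "expire_after": "30d",  <1>
    "min_count": 5,         <2>
    "max_count": 50         <3>
  }
}
-----------------------------------
// TEST[skip:repository is not registered here]
<1> Snapshots older than 30 days are eligible for deletion.
<2> Always keep at least 5 successful snapshots.
<3> Keep no more than 50 successful snapshots.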

File diff suppressed because it is too large
+ 83 - 71
docs/reference/slm/getting-started-slm.asciidoc


+ 8 - 58
docs/reference/slm/index.asciidoc

@@ -1,71 +1,21 @@
 [role="xpack"]
 [testenv="basic"]
 [[snapshot-lifecycle-management]]
-== Manage the snapshot lifecycle
+== {slm-init}: Manage the snapshot lifecycle
 
 You can set up snapshot lifecycle policies to automate the timing, frequency, and retention of snapshots.
 Snapshot policies can apply to multiple indices.
 
-The snapshot lifecycle management (SLM) <<snapshot-lifecycle-management-api, CRUD APIs>> provide
-the building blocks for the snapshot policy features that are part of the Management application in {kib}.
-The  Snapshot and Restore UI makes it easy to set up policies, register snapshot repositories,
-view and manage snapshots, and restore indices.
+The {slm} ({slm-init}) <<snapshot-lifecycle-management-api, CRUD APIs>> provide
+the building blocks for the snapshot policy features that are part of {kib} Management.
+{kibana-ref}/snapshot-repositories.html[Snapshot and Restore] makes it easy to 
+set up policies, register snapshot repositories, view and manage snapshots, and restore indices.
 
-You can stop and restart SLM to temporarily pause automatic backups while performing
+You can stop and restart {slm-init} to temporarily pause automatic backups while performing
 upgrades or other maintenance.
 
-[float]
-[[slm-and-security]]
-=== Security and SLM
-
-Two built-in cluster privileges control access to the SLM actions when
-{es} {security-features} are enabled:
-
-`manage_slm`:: Allows a user to perform all SLM actions, including creating and updating policies
-and starting and stopping SLM.
-
-`read_slm`:: Allows a user to perform all read-only SLM actions,
-such as getting policies and checking the SLM status.
-
-`cluster:admin/snapshot/*`:: Allows a user to take and delete snapshots of any
-index, whether or not they have access to that index.
-
-For example, the following request configures an `slm-admin` role that grants the privileges
-necessary for administering SLM.
-
-[source,console]
------------------------------------
-POST /_security/role/slm-admin
-{
-  "cluster": ["manage_slm", "cluster:admin/snapshot/*"],
-  "indices": [
-    {
-      "names": [".slm-history-*"],
-      "privileges": ["all"]
-    }
-  ]
-}
------------------------------------
-// TEST[skip:security is not enabled here]
-
-Or, for a read-only role that can retrieve policies (but not update, execute, or
-delete them), as well as only view the history index:
-
-[source,console]
------------------------------------
-POST /_security/role/slm-read-only
-{
-  "cluster": ["read_slm"],
-  "indices": [
-    {
-      "names": [".slm-history-*"],
-      "privileges": ["read"]
-    }
-  ]
-}
------------------------------------
-// TEST[skip:security is not enabled here]
-
 include::getting-started-slm.asciidoc[]
 
+include::slm-security.asciidoc[]
+
 include::slm-retention.asciidoc[]
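Pausing and resuming automatic backups, as described above, uses the {slm-init} stop and start APIs:

[source,console]
-----------------------------------
POST _slm/stop     <1>

GET _slm/status    <2>

POST _slm/start    <3>
-----------------------------------
<1> Halt all {slm-init} operations and stop the {slm-init} plugin.
<2> Verify that {slm-init} is `STOPPED` before starting maintenance.
<3> Resume automatic snapshot policies when maintenance is complete.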

+ 30 - 54
docs/reference/slm/slm-retention.asciidoc

@@ -3,30 +3,34 @@
 [[slm-retention]]
 === Snapshot retention
 
-Automatic deletion of older snapshots is an optional feature of snapshot lifecycle management.
-Retention is run as a cluster level task that is not associated with a particular policy's schedule
-(though the configuration of which snapshots to keep is done on a per-policy basis). Retention
-configuration consists of two parts—The first a cluster-level configuration for when retention is
-run and for how long, the second configured on a policy for which snapshots should be eligible for
-retention.
-
-The cluster level settings for retention are shown below, and can be changed dynamically using the
-<<cluster-update-settings>> API:
-
-|=====================================
-| Setting | Default value | Description
-
-| `slm.retention_schedule` | `0 30 1 * * ?` | A periodic or absolute time schedule for when
-  retention should be run. Supports all values supported by the cron scheduler: <<schedule-cron,Cron
-  scheduler configuration>>. Retention can also be manually run using the
-  <<slm-api-execute-retention>> API. Defaults to daily at 1:30am UTC.
-
-| `slm.retention_duration` | `"1h"` | A limit of how long SLM should spend deleting old snapshots.
-|=====================================
-
-Policy level configuration for retention is done inside the `retention` object when creating or
-updating a policy. All of the retention configurations options are optional.
-
+You can include a retention policy in an {slm-init} policy to automatically delete old snapshots. 
+Retention runs as a cluster-level task and is not associated with a particular policy's schedule.
+The retention criteria are evaluated as part of the retention task, not when the policy executes.
+For the retention task to automatically delete snapshots, 
+you need to include a <<slm-api-put-retention,`retention`>> object in your {slm-init} policy.
+
+To control when the retention task runs, configure 
+<<slm-retention-schedule,`slm.retention_schedule`>> in the cluster settings.
+You can define the schedule as a periodic or absolute <<schedule-cron, cron schedule>>.
+The <<slm-retention-duration,`slm.retention_duration`>> setting limits how long 
+{slm-init} should spend deleting old snapshots.
+
+You can update the schedule and duration dynamically with the 
+<<cluster-update-settings, update settings>> API.
+You can run the retention task manually with the 
+<<slm-api-execute-retention, execute retention>> API. 
+
+The retention task only considers snapshots initiated through {slm-init} policies,  
+either according to the policy schedule or through the 
+<<slm-api-execute-lifecycle, execute lifecycle>> API. 
+Manual snapshots are ignored and don't count toward the retention limits.
+
+If multiple policies snapshot to the same repository, they can define differing retention criteria. 
+
+To retrieve information about the snapshot retention task history, 
+use the <<slm-api-get-stats, get stats>> API:
+
+////
 [source,console]
 --------------------------------------------------
 PUT /_slm/policy/daily-snapshots
@@ -46,35 +50,7 @@ PUT /_slm/policy/daily-snapshots
 <2> Keep snapshots for 30 days
 <3> Always keep at least 5 successful snapshots
 <4> Keep no more than 50 successful snapshots
-
-Supported configuration for retention from within a policy are as follows. The default value for
-each is unset unless specified by the user in the policy configuration.
-
-NOTE: The oldest snapshots are always deleted first, in the case of a `max_count` of 5 for a policy
-with 6 snapshots, the oldest snapshot will be deleted.
-
-|=====================================
-| Setting | Description
-| `expire_after` | A timevalue for how old a snapshot must be in order to be eligible for deletion.
-| `min_count` | A minimum number of snapshots to keep, regardless of age.
-| `max_count` | The maximum number of snapshots to keep, regardless of age.
-|=====================================
-
-As an example, the retention setting in the policy configured about would read in English as:
-
-____
-Remove snapshots older than thirty days, but always keep the latest five snapshots. If there are
-more than fifty snapshots, remove the oldest surplus snapshots until there are no more than fifty
-successful snapshots.
-____
-
-If multiple policies are configured to snapshot to the same repository, or manual snapshots have
-been taken without using the <<slm-api-execute-lifecycle>> API, they are treated as not
-eligible for retention, and do not count towards any limits. This allows multiple policies to have
-differing retention configuration while using the same snapshot repository.
-
-Statistics for snapshot retention can be retrieved using the 
-<<slm-api-get-stats>> API:
+////
 
 [source,console]
 --------------------------------------------------
@@ -82,7 +58,7 @@ GET /_slm/stats
 --------------------------------------------------
 // TEST[continued]
 
-Which returns a response
+The response includes the following statistics:
 
 [source,js]
 --------------------------------------------------

+ 58 - 0
docs/reference/slm/slm-security.asciidoc

@@ -0,0 +1,58 @@
+[[slm-and-security]]
+=== Security and {slm-init}
+
+Two built-in cluster privileges control access to the {slm-init} actions when
+{es} {security-features} are enabled:
+
+`manage_slm`:: Allows a user to perform all {slm-init} actions, including creating and updating policies
+and starting and stopping {slm-init}.
+
+`read_slm`:: Allows a user to perform all read-only {slm-init} actions,
+such as getting policies and checking the {slm-init} status.
+
+`cluster:admin/snapshot/*`:: Allows a user to take and delete snapshots of any
+index, whether or not they have access to that index.
+
+You can create and manage roles to assign these privileges through {kib} Management.
+
+To grant the privileges necessary to create and manage {slm-init} policies and snapshots,
+you can set up a role with the `manage_slm` and `cluster:admin/snapshot/*` cluster privileges
+and full access to the {slm-init} history indices. 
+
+For example, the following request creates an `slm-admin` role:
+
+[source,console]
+-----------------------------------
+POST /_security/role/slm-admin
+{
+  "cluster": ["manage_slm", "cluster:admin/snapshot/*"],
+  "indices": [
+    {
+      "names": [".slm-history-*"],
+      "privileges": ["all"]
+    }
+  ]
+}
+-----------------------------------
+// TEST[skip:security is not enabled here]
+
+To grant read-only access to {slm-init} policies and the snapshot history, 
+you can set up a role with the `read_slm` cluster privilege and read access
+to the {slm-init} history indices. 
+
+For example, the following request creates an `slm-read-only` role:
+
+[source,console]
+-----------------------------------
+POST /_security/role/slm-read-only
+{
+  "cluster": ["read_slm"],
+  "indices": [
+    {
+      "names": [".slm-history-*"],
+      "privileges": ["read"]
+    }
+  ]
+}
+-----------------------------------
+// TEST[skip:security is not enabled here]

Some files were not shown because too many files changed in this diff