Browse files

[DOCS] Adds missing query parameters in get influencer and get snapshot APIs (#80801)

Lisa Cawley 3 years ago
parent
commit
fffac5bd08

+ 7 - 25
docs/reference/ml/anomaly-detection/apis/get-influencer.asciidoc

@@ -40,6 +40,11 @@ include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=job-id-anomaly-detection]
 (Optional, Boolean)
 include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=desc-results]

+`end`::
+(Optional, string) Returns influencers with timestamps earlier than this time.
+Defaults to `-1`, which means it is unset and results are not limited to 
+specific timestamps.
+
 `exclude_interim`::
 (Optional, Boolean)
 include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=exclude-interim-results]
@@ -69,22 +74,8 @@ timestamps.
 [[ml-get-influencer-request-body]]
 == {api-request-body-title}

-`desc`::
-(Optional, Boolean)
-include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=desc-results]
-
-`end`::
-(Optional, string) Returns influencers with timestamps earlier than this time.
-Defaults to `-1`, which means it is unset and results are not limited to 
-specific timestamps.
-
-`exclude_interim`::
-(Optional, Boolean)
-include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=exclude-interim-results]
-
-`influencer_score`::
-(Optional, double) Returns influencers with anomaly scores greater than or
-equal to this value. Defaults to `0.0`.
+You can also specify the query parameters (such as `desc` and
+`end`) in the request body.

 `page`.`from`::
 (Optional, integer) Skips the specified number of influencers. Defaults to `0`.
@@ -93,15 +84,6 @@ equal to this value. Defaults to `0.0`.
 (Optional, integer) Specifies the maximum number of influencers to obtain.
 Defaults to `100`.

-`sort`::
-(Optional, string) Specifies the sort field for the requested influencers. By
-default, the influencers are sorted by the `influencer_score` value.
-
-`start`::
-(Optional, string) Returns influencers with timestamps after this time. Defaults 
-to `-1`, which means it is unset and results are not limited to specific 
-timestamps.
-
 [[ml-get-influencer-results]]
 == {api-response-body-title}

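Not part of the commit, but as a minimal sketch of what the new note describes: the same options can be sent either as query parameters or in the request body of the get influencers API. The job ID `my_job` and the chosen values are illustrative placeholders.

[source,console]
----
# Options passed as query parameters
GET _ml/anomaly_detectors/my_job/results/influencers?desc=true&exclude_interim=true

# The same options in the request body, plus paging
GET _ml/anomaly_detectors/my_job/results/influencers
{
  "desc": true,
  "exclude_interim": true,
  "page": { "from": 0, "size": 100 }
}
----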

+ 19 - 8
docs/reference/ml/anomaly-detection/apis/get-snapshot.asciidoc

@@ -33,15 +33,13 @@ include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=job-id-anomaly-detection]
 include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=snapshot-id]
 +
 --
-You can multiple snapshots for a single job in a single API request
-by using a comma-separated list of `<snapshot_id>` or a wildcard expression.
-You can get all snapshots for all calendars by using `_all`,
-by specifying `*` as the `<snapshot_id>`, or by omitting the `<snapshot_id>`.
+You can get information for multiple snapshots by using a comma-separated list
+or a wildcard expression. You can get all snapshots by using `_all`, by
+specifying `*` as the snapshot ID, or by omitting the snapshot ID.
 --

-
-[[ml-get-snapshot-request-body]]
-== {api-request-body-title}
+[[ml-get-snapshot-query-parms]]
+== {api-query-parms-title}

 `desc`::
   (Optional, Boolean) If true, the results are sorted in descending order.
@@ -66,6 +64,19 @@ by specifying `*` as the `<snapshot_id>`, or by omitting the `<snapshot_id>`.
   (Optional, string) Returns snapshots with timestamps after this time. Defaults
   to unset, which means results are not limited to specific timestamps.

+[[ml-get-snapshot-request-body]]
+== {api-request-body-title}
+
+You can also specify the query parameters (such as `desc` and
+`end`) in the request body.
+
+`page`.`from`::
+(Optional, integer) Skips the specified number of snapshots. Defaults to `0`.
+
+`page`.`size`::
+(Optional, integer) Specifies the maximum number of snapshots to obtain. 
+Defaults to `100`.
+
 [role="child_attributes"]
 [[ml-get-snapshot-results]]
 == {api-response-body-title}
@@ -150,7 +161,7 @@ server-time.
 Contains one of the following values.
 +
 --
-* `hard_limit`: The internal models require more space that the configured
+* `hard_limit`: The internal models require more space than the configured
 memory limit. Some incoming data could not be processed.
 * `ok`: The internal models stayed below the configured value.
 * `soft_limit`: The internal models require more than 60% of the configured
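Again not part of the commit: a minimal sketch of the snapshot-ID forms and the query-parameter/request-body equivalence described above, with `my_job` as a placeholder job ID.

[source,console]
----
# All snapshots for the job: use `_all`, `*`, or simply omit the snapshot ID
GET _ml/anomaly_detectors/my_job/model_snapshots/_all

# Options such as `desc` can go in the query string or, as here, in the request body
GET _ml/anomaly_detectors/my_job/model_snapshots
{
  "desc": true,
  "page": { "from": 0, "size": 100 }
}
----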

+ 11 - 11
docs/reference/ml/ml-shared.asciidoc

@@ -109,19 +109,19 @@ Contains messages relating to the selection of a node.
 end::assignment-explanation-dfanalytics[]

 tag::assignment-memory-basis[]
-Where should the memory requirement used for deciding which node the job
-will run on come from? The possible values are:
+Indicates where to find the memory requirement that is used to decide where the
+job runs. The possible values are:
 +
 --
-* `model_memory_limit`: The job's memory requirement will be calculated on
-the basis that its model memory will grow to the `model_memory_limit`
-specified in the `analysis_limits` of its config.
-* `current_model_bytes`: The job's memory requirement will be calculated on
-the basis that its current model memory size is a good reflection of what
-it will be in the future.
-* `peak_model_bytes`: The job's memory requirement will be calculated on
-the basis that its peak model memory size is a good reflection of what
-the model size will be in the future.
+* `model_memory_limit`: The job's memory requirement is calculated on the basis
+that its model memory will grow to the `model_memory_limit` specified in the
+`analysis_limits` of its config.
+* `current_model_bytes`: The job's memory requirement is calculated on the basis
+that its current model memory size is a good reflection of what it will be in
+the future.
+* `peak_model_bytes`: The job's memory requirement is calculated on the basis
+that its peak model memory size is a good reflection of what the model size will
+be in the future.
 --
 end::assignment-memory-basis[]