
[ML] Update API documentation for anomaly score explanation (#91177)

This PR updates the API documentation to match the UI.

Co-authored-by: lcawl <lcawley@elastic.co>
Valeriy Khakhutskyy, 3 years ago
Commit 7c4186ddbc
1 file changed, 20 insertions(+), 12 deletions(-)

docs/reference/ml/anomaly-detection/apis/get-record.asciidoc (+20 −12)

@@ -76,6 +76,7 @@ default, the records are sorted by the `record_score` value.
 (Optional, string) Returns records with timestamps after this time. Defaults to
 `-1`, which means it is unset and results are not limited to specific timestamps.
 
+[role="child_attributes"]
 [[ml-get-record-request-body]]
 == {api-request-body-title}
 
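For reference, a minimal request sketch showing the `start` parameter described above passed in the request body rather than as a query parameter. The job ID placeholder, the epoch-millisecond timestamp, and the sort settings are illustrative only and are not part of this change:

[source,console]
----
GET _ml/anomaly_detectors/<job_id>/results/records
{
  "sort": "record_score",
  "desc": true,
  "start": "1454944100000"
}
----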
@@ -96,6 +97,7 @@ You can also specify the query parameters in the request body; the exception are
 to `100`.
 ====
 
+[role="child_attributes"]
 [[ml-get-record-results]]
 == {api-response-body-title}
 
@@ -113,7 +115,8 @@ initial anomaly score.
 [%collapsible%open]
 ====
 `anomaly_characteristics_impact`::::
-(Optional, integer) Impact of the statistical properties of the detected anomalous interval.
+(Optional, integer) Impact from the duration and magnitude of the detected anomaly 
+relative to the historical average.
 
 `anomaly_length`::::
 (Optional, integer) Length of the detected anomaly in the number of buckets.
@@ -122,19 +125,23 @@ initial anomaly score.
 (Optional, string) Type of the detected anomaly: spike or dip.
 
 `high_variance_penalty`::::
-(Optional, boolean) Indicates reduction of anomaly score for the bucket with large confidence intervals.
+(Optional, boolean) Indicates a reduction of the anomaly score for the bucket with large 
+confidence intervals. If a bucket has large confidence intervals, the score is reduced.
 
 `incomplete_bucket_penalty`::::
-(Optional, boolean) Indicates reduction of anomaly score if the bucket contains fewer samples than historically expected.
+(Optional, boolean) If the bucket contains fewer samples than expected, the score is 
+reduced.
 
 `lower_confidence_bound`::::
 (Optional, double) Lower bound of the 95% confidence interval.
 
 `multi_bucket_impact`::::
-(Optional, integer) Impact of the deviation between actual and typical in the past 12 buckets."
+(Optional, integer) Impact of the deviation between actual and typical values in the 
+past 12 buckets.
 
 `single_bucket_impact`::::
-(Optional, integer) Impact of the deviation between actual and typical in the current bucket.
+(Optional, integer) Impact of the deviation between actual and typical values in the 
+current bucket.
 
 `typical_value`::::
 (Optional, double) Typical (expected) value for this bucket.
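To show how the fields above fit together, here is a hedged sketch of the anomaly score explanation object as it might appear inside a record result. All values are invented for illustration, the object is assumed to be named `anomaly_score_explanation` as in the PR title, and not every optional field is present in every record:

[source,js]
----
// Illustrative values only; not actual API output.
"anomaly_score_explanation": {
  "anomaly_type": "dip",
  "anomaly_length": 2,
  "single_bucket_impact": 60,
  "multi_bucket_impact": 10,
  "anomaly_characteristics_impact": 30,
  "high_variance_penalty": false,
  "incomplete_bucket_penalty": false,
  "lower_confidence_bound": 93.2,
  "typical_value": 100.4
}
----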
@@ -161,7 +168,8 @@ This property contains an array of anomaly records that are the causes for the
 anomaly that has been identified for the over field. If no over fields exist,
 this field is not present. This sub-resource contains the most anomalous records
 for the `over_field_name`. For scalability reasons, a maximum of the 10 most
-significant causes of the anomaly are returned. As part of the core analytical modeling, these low-level anomaly records are aggregated for their parent over
+significant causes of the anomaly are returned. As part of the core analytical 
+modeling, these low-level anomaly records are aggregated for their parent over
 field record. The causes resource contains similar elements to the record
 resource, namely `actual`, `typical`, `geo_results.actual_point`,
 `geo_results.typical_point`, `*_field_name` and `*_field_value`. Probability and
@@ -209,6 +217,12 @@ include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=is-interim]
 (string)
 include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=job-id-anomaly-detection]
 
+`multi_bucket_impact`::
+(number) An indication of how strongly an anomaly is multi bucket or single
+bucket. The value is on a scale of `-5.0` to `+5.0` where `-5.0` means the
+anomaly is purely single bucket and `+5.0` means the anomaly is purely multi
+bucket.
+
 `over_field_name`::
 (string)
 include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=over-field-name]
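As a hedged illustration of the `multi_bucket_impact` scale described above (values invented, and a real record contains many more fields), a record whose anomaly is driven almost entirely by the current bucket might carry a value near `-5.0`:

[source,js]
----
// Illustrative values only; not actual API output.
{
  "job_id": "<job_id>",
  "is_interim": false,
  "probability": 1.3e-9,
  "multi_bucket_impact": -5.0,
  "record_score": 94.1
}
----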
@@ -229,12 +243,6 @@ include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=partition-field-name]
 of over 300 decimal places, so the `record_score` is provided as a
 human-readable and friendly interpretation of this.
 
-`multi_bucket_impact`::
-(number) An indication of how strongly an anomaly is multi bucket or single
-bucket. The value is on a scale of `-5.0` to `+5.0` where `-5.0` means the
-anomaly is purely single bucket and `+5.0` means the anomaly is purely multi
-bucket.
-
 `record_score`::
 (number) A normalized score between 0-100, which is based on the probability of
 the anomalousness of this record. Unlike `initial_record_score`, this value will