[role="xpack"]
[testenv="platinum"]
[[ml-get-record]]
= Get records API
++++
<titleabbrev>Get records</titleabbrev>
++++

Retrieves anomaly records for an {anomaly-job}.

[[ml-get-record-request]]
== {api-request-title}

`GET _ml/anomaly_detectors/<job_id>/results/records`

[[ml-get-record-prereqs]]
== {api-prereq-title}

* If the {es} {security-features} are enabled, you must have `monitor_ml`,
`monitor`, `manage_ml`, or `manage` cluster privileges to use this API. You also
need `read` index privilege on the index that stores the results. The
`machine_learning_admin` and `machine_learning_user` roles provide these
privileges. See <<security-privileges>>, <<built-in-roles>>, and
{ml-docs-setup-privileges}.

[[ml-get-record-desc]]
== {api-description-title}

Records contain the detailed analytical results. They describe the anomalous
activity that has been identified in the input data based on the detector
configuration.

There can be many anomaly records, depending on the characteristics and size of
the input data. In practice, there are often too many to process manually. The
{ml-features} therefore perform a sophisticated aggregation of the anomaly
records into buckets.

The number of record results depends on the number of anomalies found in each
bucket, which relates to the number of time series being modeled and the number
of detectors.

[[ml-get-record-path-parms]]
== {api-path-parms-title}

`<job_id>`::
(Required, string)
include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=job-id-anomaly-detection]

[[ml-get-record-request-body]]
== {api-request-body-title}

`desc`::
(Optional, boolean)
include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=desc-results]

`end`::
(Optional, string) Returns records with timestamps earlier than this time.

`exclude_interim`::
(Optional, boolean)
include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=exclude-interim-results]

`page`.`from`::
(Optional, integer) Skips the specified number of records.

`page`.`size`::
(Optional, integer) Specifies the maximum number of records to obtain.

`record_score`::
(Optional, double) Returns records with anomaly scores greater than or equal to
this value.

`sort`::
(Optional, string) Specifies the sort field for the requested records. By
default, the records are sorted by the `anomaly_score` value.

`start`::
(Optional, string) Returns records with timestamps after this time.
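
For instance, the following request combines paging, a score threshold, and a
sort order (a minimal sketch; the job ID `my_job` is a hypothetical
placeholder):

[source,console]
--------------------------------------------------
GET _ml/anomaly_detectors/my_job/results/records
{
  "record_score": 80.0,
  "sort": "record_score",
  "desc": true,
  "page": { "from": 0, "size": 25 }
}
--------------------------------------------------
// TEST[skip:hypothetical job]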

[[ml-get-record-results]]
== {api-response-body-title}

The API returns an array of record objects, which have the following properties:

`actual`::
(array) The actual value for the bucket.

`bucket_span`::
(number)
include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=bucket-span-results]

`by_field_name`::
(string)
include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=by-field-name]

`by_field_value`::
(string) The value of `by_field_name`.

`causes`::
(array) For population analysis, an over field must be specified in the
detector. This property contains an array of anomaly records that are the
causes for the anomaly that has been identified for the over field. If no over
fields exist, this field is not present. This sub-resource contains the most
anomalous records for the `over_field_name`. For scalability reasons, a maximum
of the 10 most significant causes of the anomaly are returned. As part of the
core analytical modeling, these low-level anomaly records are aggregated for
their parent over field record. The causes resource contains similar elements
to the record resource, namely `actual`, `typical`, `geo_results.actual_point`,
`geo_results.typical_point`, `*_field_name` and `*_field_value`. Probability
and scores are not applicable to causes.
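
To illustrate the shape of this sub-resource, a single `causes` entry might
look like the following fragment (hypothetical values for a population
analysis over an IP address field):

[source,js]
----
"causes" : [
  {
    "typical" : [ 31.2 ],
    "actual" : [ 1204.0 ],
    "over_field_name" : "clientip",
    "over_field_value" : "192.168.0.1"
  }
]
----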

`detector_index`::
(number) A unique identifier for the detector.

`field_name`::
(string) Certain functions require a field to operate on, for example, `sum()`.
For those functions, this value is the name of the field to be analyzed.

`function`::
(string) The function in which the anomaly occurs, as specified in the detector
configuration. For example, `max`.

`function_description`::
(string) The description of the function in which the anomaly occurs, as
specified in the detector configuration.

`geo_results.actual_point`::
(string) The actual value for the bucket formatted as a `geo_point`. If the
detector function is `lat_long`, this is a comma-delimited string of the
latitude and longitude.

`geo_results.typical_point`::
(string) The typical value for the bucket formatted as a `geo_point`. If the
detector function is `lat_long`, this is a comma-delimited string of the
latitude and longitude.
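
For example, for a `lat_long` detector function, both points are rendered as a
single `"lat,lon"` string (the coordinates below are hypothetical):

[source,js]
----
"geo_results" : {
  "actual_point" : "40.7128,-74.0060",
  "typical_point" : "39.2904,-76.6122"
}
----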

`influencers`::
(array) If `influencers` was specified in the detector configuration, this
array contains influencers that contributed to or were to blame for an anomaly.

`initial_record_score`::
(number) A normalized score between 0 and 100, which is based on the
probability of the anomalousness of this record. This is the initial value that
was calculated at the time the bucket was processed.

`is_interim`::
(boolean)
include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=is-interim]

`job_id`::
(string)
include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=job-id-anomaly-detection]

`multi_bucket_impact`::
(number) An indication of how strongly an anomaly is multi bucket or single
bucket. The value is on a scale of `-5.0` to `+5.0`, where `-5.0` means the
anomaly is purely single bucket and `+5.0` means the anomaly is purely multi
bucket.

`over_field_name`::
(string)
include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=over-field-name]

`over_field_value`::
(string) The value of `over_field_name`.

`partition_field_name`::
(string)
include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=partition-field-name]

`partition_field_value`::
(string) The value of `partition_field_name`.

`probability`::
(number) The probability of the individual anomaly occurring, in the range 0 to
1. For example, 0.0000772031. This value can be held to a high precision of
over 300 decimal places, so the `record_score` is provided as a human-readable
and friendly interpretation of this.

`record_score`::
(number) A normalized score between 0 and 100, which is based on the
probability of the anomalousness of this record. Unlike `initial_record_score`,
this value will be updated by a re-normalization process as new data is
analyzed.

`result_type`::
(string) Internal. This is always set to `record`.

`timestamp`::
(date)
include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=timestamp-results]

`typical`::
(array) The typical value for the bucket, according to analytical modeling.

NOTE: Additional record properties are added, depending on the fields being
analyzed. For example, if `hostname` is analyzed as a _by field_, then a field
`hostname` is added to the result document. This information enables you to
filter the anomaly results more easily.
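
For example, a record from a detector that analyzes `hostname` as a _by field_
might contain a fragment like this (hypothetical values):

[source,js]
----
{
  "by_field_name" : "hostname",
  "by_field_value" : "web-01",
  "hostname" : "web-01"
}
----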

[[ml-get-record-example]]
== {api-examples-title}

[source,console]
--------------------------------------------------
GET _ml/anomaly_detectors/low_request_rate/results/records
{
  "sort": "record_score",
  "desc": true,
  "start": "1454944100000"
}
--------------------------------------------------
// TEST[skip:Kibana sample data]

In this example, the API returns the following results:

[source,js]
----
{
  "count" : 4,
  "records" : [
    {
      "job_id" : "low_request_rate",
      "result_type" : "record",
      "probability" : 1.3882308899968812E-4,
      "multi_bucket_impact" : -5.0,
      "record_score" : 94.98554565630553,
      "initial_record_score" : 94.98554565630553,
      "bucket_span" : 3600,
      "detector_index" : 0,
      "is_interim" : false,
      "timestamp" : 1577793600000,
      "function" : "low_count",
      "function_description" : "count",
      "typical" : [
        28.254208230188834
      ],
      "actual" : [
        0.0
      ]
    },
    ...
  ]
}
----