[role="xpack"]
[[ml-get-record]]
= Get records API
++++
<titleabbrev>Get records</titleabbrev>
++++

Retrieves anomaly records for an {anomaly-job}.

[[ml-get-record-request]]
== {api-request-title}

`GET _ml/anomaly_detectors/<job_id>/results/records`

[[ml-get-record-prereqs]]
== {api-prereq-title}

Requires the `monitor_ml` cluster privilege. This privilege is included in the
`machine_learning_user` built-in role.

[[ml-get-record-desc]]
== {api-description-title}

Records contain the detailed analytical results. They describe the anomalous
activity that has been identified in the input data based on the detector
configuration.

There can be many anomaly records depending on the characteristics and size of
the input data. In practice, there are often too many to be able to manually
process them. The {ml-features} therefore perform a sophisticated aggregation of
the anomaly records into buckets.

The number of record results depends on the number of anomalies found in each
bucket, which relates to the number of time series being modeled and the number
of detectors.

[[ml-get-record-path-parms]]
== {api-path-parms-title}

`<job_id>`::
(Required, string)
include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=job-id-anomaly-detection]

[[ml-get-record-query-parms]]
== {api-query-parms-title}

`desc`::
(Optional, Boolean)
include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=desc-results]

`end`::
(Optional, string) Returns records with timestamps earlier than this time.
Defaults to `-1`, which means it is unset and results are not limited to
specific timestamps.

`exclude_interim`::
(Optional, Boolean)
include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=exclude-interim-results]

`from`::
(Optional, integer) Skips the specified number of records. Defaults to `0`.

`record_score`::
(Optional, double) Returns records with anomaly scores greater than or equal to
this value. Defaults to `0.0`.

`size`::
(Optional, integer) Specifies the maximum number of records to obtain. Defaults
to `100`.

`sort`::
(Optional, string) Specifies the sort field for the requested records. By
default, the records are sorted by the `record_score` value.

`start`::
(Optional, string) Returns records with timestamps after this time. Defaults to
`-1`, which means it is unset and results are not limited to specific timestamps.
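
For example, the following request combines several of these query parameters
to return at most 25 records with a `record_score` of at least 75, sorted by
timestamp in descending order. The job name `my-job` is a placeholder:

[source,console]
--------------------------------------------------
GET _ml/anomaly_detectors/my-job/results/records?record_score=75&sort=timestamp&desc=true&size=25
--------------------------------------------------
// TEST[skip:example only]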

[role="child_attributes"]
[[ml-get-record-request-body]]
== {api-request-body-title}

You can also specify the query parameters in the request body; the exceptions
are `from` and `size`, which are replaced by `page`:

`page`::
+
.Properties of `page`
[%collapsible%open]
====
`from`:::
(Optional, integer) Skips the specified number of records. Defaults to `0`.

`size`:::
(Optional, integer) Specifies the maximum number of records to obtain. Defaults
to `100`.
====
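
For example, the following request uses `page` in the request body to skip the
first 100 records and return the next 50. The job name `my-job` is a
placeholder:

[source,console]
--------------------------------------------------
GET _ml/anomaly_detectors/my-job/results/records
{
  "page": {
    "from": 100,
    "size": 50
  }
}
--------------------------------------------------
// TEST[skip:example only]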

[role="child_attributes"]
[[ml-get-record-results]]
== {api-response-body-title}

The API returns an array of record objects, which have the following properties:

`actual`::
(array) The actual value for the bucket.

//Begin anomaly_score_explanation
`anomaly_score_explanation`::
(object) When present, it provides information about the factors impacting the
initial anomaly score.
+
.Properties of `anomaly_score_explanation`
[%collapsible%open]
====
`anomaly_characteristics_impact`::::
(Optional, integer) Impact from the duration and magnitude of the detected
anomaly relative to the historical average.

`anomaly_length`::::
(Optional, integer) Length of the detected anomaly in the number of buckets.

`anomaly_type`::::
(Optional, string) Type of the detected anomaly: `spike` or `dip`.

`high_variance_penalty`::::
(Optional, boolean) Indicates a reduction of the anomaly score if the bucket
has large confidence intervals.

`incomplete_bucket_penalty`::::
(Optional, boolean) Indicates a reduction of the anomaly score if the bucket
contains fewer samples than expected.

`lower_confidence_bound`::::
(Optional, double) Lower bound of the 95% confidence interval.

`multimodal_distribution`::::
(Optional, boolean) Indicates whether the bucket values' probability
distribution has several modes. When there are multiple modes, the typical
value may not be the most likely.

`multi_bucket_impact`::::
(Optional, integer) Impact of the deviation between actual and typical values
in the past 12 buckets.

`single_bucket_impact`::::
(Optional, integer) Impact of the deviation between actual and typical values
in the current bucket.

`typical_value`::::
(Optional, double) Typical (expected) value for this bucket.

`upper_confidence_bound`::::
(Optional, double) Upper bound of the 95% confidence interval.
====
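+
For example, an `anomaly_score_explanation` object might look like the
following sketch. The values are illustrative only:
+
[source,js]
----
"anomaly_score_explanation" : {
  "anomaly_type" : "dip",
  "anomaly_length" : 2,
  "single_bucket_impact" : 60,
  "multi_bucket_impact" : 5,
  "anomaly_characteristics_impact" : 15,
  "lower_confidence_bound" : 21.3,
  "typical_value" : 28.2,
  "upper_confidence_bound" : 37.4
}
----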
//End anomaly_score_explanation

`bucket_span`::
(number)
include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=bucket-span-results]

`by_field_name`::
(string)
include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=by-field-name]

`by_field_value`::
(string) The value of `by_field_name`.

`causes`::
(array) For population analysis, an over field must be specified in the
detector. This property contains an array of anomaly records that are the
causes for the anomaly that has been identified for the over field. If no over
fields exist, this field is not present. This sub-resource contains the most
anomalous records for the `over_field_name`. For scalability reasons, a maximum
of the 10 most significant causes of the anomaly are returned. As part of the
core analytical modeling, these low-level anomaly records are aggregated for
their parent over field record. The causes resource contains similar elements
to the record resource, namely `actual`, `typical`, `geo_results.actual_point`,
`geo_results.typical_point`, `*_field_name` and `*_field_value`. Probability
and scores are not applicable to causes.
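+
For example, a population analysis record might include a `causes` array such
as the following sketch. The field names and values are hypothetical:
+
[source,js]
----
"causes" : [
  {
    "over_field_name" : "user",
    "over_field_value" : "suspicious_user",
    "actual" : [ 842.0 ],
    "typical" : [ 12.6 ]
  }
]
----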

`detector_index`::
(number) A unique identifier for the detector.

`field_name`::
(string) Certain functions require a field to operate on, for example, `sum()`.
For those functions, this value is the name of the field to be analyzed.

`function`::
(string) The function in which the anomaly occurs, as specified in the detector
configuration. For example, `max`.

`function_description`::
(string) The description of the function in which the anomaly occurs, as
specified in the detector configuration.

`geo_results`::
(Optional, object) If the detector function is `lat_long`, this object contains
comma-delimited strings for the latitude and longitude of the actual and
typical values.
+
.Properties of `geo_results`
[%collapsible%open]
====
`actual_point`::
(string) The actual value for the bucket formatted as a `geo_point`.

`typical_point`::
(string) The typical value for the bucket formatted as a `geo_point`.
====
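+
For example, a `lat_long` detector might produce a `geo_results` object such as
the following. The coordinates are illustrative only:
+
[source,js]
----
"geo_results" : {
  "actual_point" : "40.7128,-74.0060",
  "typical_point" : "37.7749,-122.4194"
}
----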

`influencers`::
(array) If `influencers` was specified in the detector configuration, this
array contains influencers that contributed to or were to blame for an anomaly.

`initial_record_score`::
(number) A normalized score between 0 and 100, which is based on the
probability of the anomalousness of this record. This is the initial value that
was calculated at the time the bucket was processed.

`is_interim`::
(Boolean)
include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=is-interim]

`job_id`::
(string)
include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=job-id-anomaly-detection]

`multi_bucket_impact`::
(number) An indication of how strongly an anomaly is multi bucket or single
bucket. The value is on a scale of `-5.0` to `+5.0`, where `-5.0` means the
anomaly is purely single bucket and `+5.0` means the anomaly is purely multi
bucket.

`over_field_name`::
(string)
include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=over-field-name]

`over_field_value`::
(string) The value of `over_field_name`.

`partition_field_name`::
(string)
include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=partition-field-name]

`partition_field_value`::
(string) The value of `partition_field_name`.

`probability`::
(number) The probability of the individual anomaly occurring, in the range 0 to
1. For example, 0.0000772031. This value can be held to a high precision of
over 300 decimal places, so the `record_score` is provided as a human-readable
and friendly interpretation of this.

`record_score`::
(number) A normalized score between 0 and 100, which is based on the
probability of the anomalousness of this record. Unlike `initial_record_score`,
this value is updated by a re-normalization process as new data is analyzed.

`result_type`::
(string) Internal. This is always set to `record`.

`timestamp`::
(date)
include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=timestamp-results]

`typical`::
(array) The typical value for the bucket, according to analytical modeling.

NOTE: Additional record properties are added, depending on the fields being
analyzed. For example, if it's analyzing `hostname` as a _by field_, then a
field `hostname` is added to the result document. This information enables you
to filter the anomaly results more easily.

[[ml-get-record-example]]
== {api-examples-title}

[source,console]
--------------------------------------------------
GET _ml/anomaly_detectors/low_request_rate/results/records
{
  "sort": "record_score",
  "desc": true,
  "start": "1454944100000"
}
--------------------------------------------------
// TEST[skip:Kibana sample data]

In this example, the API returns the following results:

[source,js]
----
{
  "count" : 4,
  "records" : [
    {
      "job_id" : "low_request_rate",
      "result_type" : "record",
      "probability" : 1.3882308899968812E-4,
      "multi_bucket_impact" : -5.0,
      "record_score" : 94.98554565630553,
      "initial_record_score" : 94.98554565630553,
      "bucket_span" : 3600,
      "detector_index" : 0,
      "is_interim" : false,
      "timestamp" : 1577793600000,
      "function" : "low_count",
      "function_description" : "count",
      "typical" : [
        28.254208230188834
      ],
      "actual" : [
        0.0
      ]
    },
    ...
  ]
}
----