
[role="xpack"]
[testenv="platinum"]
[[evaluate-dfanalytics]]
=== Evaluate {dfanalytics} API
[subs="attributes"]
++++
<titleabbrev>Evaluate {dfanalytics}</titleabbrev>
++++

Evaluates the {dfanalytics} for an annotated index.

experimental[]

[[ml-evaluate-dfanalytics-request]]
==== {api-request-title}

`POST _ml/data_frame/_evaluate`
[[ml-evaluate-dfanalytics-prereq]]
==== {api-prereq-title}

* You must have the `monitor_ml` privilege to use this API. For more
information, see {stack-ov}/security-privileges.html[Security privileges] and
{stack-ov}/built-in-roles.html[Built-in roles].
[[ml-evaluate-dfanalytics-desc]]
==== {api-description-title}

This API evaluates an executed analysis on an index that is already annotated
with a field that contains the results of the analytics (the `ground truth`)
for each {dataframe} row.

Evaluation is typically performed by calculating a set of metrics that capture
various aspects of the quality of the results over the data for which you have
the `ground truth`. Different metrics are suitable for different types of
analyses. This API packages together commonly used metrics for various
analyses.
[[ml-evaluate-dfanalytics-request-body]]
==== {api-request-body-title}

`index`::
(Required, object) Defines the `index` in which the evaluation will be
performed.

`query`::
(Optional, object) A query that selects data from the index. It uses the same
{es} query domain-specific language (DSL) as the query object in an {es} search
POST body. By default, this property has the following value:
`{"match_all": {}}`.

`evaluation`::
(Required, object) Defines the type of evaluation you want to perform. For
example: `binary_soft_classification`. See
<<ml-evaluate-dfanalytics-resources>>.
////
[[ml-evaluate-dfanalytics-results]]
==== {api-response-body-title}

`binary_soft_classification`::
(object) If you chose to do binary soft classification, the API returns the
following evaluation metrics:

`auc_roc`::: TBD

`confusion_matrix`::: TBD

`precision`::: TBD

`recall`::: TBD
////
[[ml-evaluate-dfanalytics-example]]
==== {api-examples-title}

[source,js]
--------------------------------------------------
POST _ml/data_frame/_evaluate
{
  "index": "my_analytics_dest_index",
  "evaluation": {
    "binary_soft_classification": {
      "actual_field": "is_outlier",
      "predicted_probability_field": "ml.outlier_score"
    }
  }
}
--------------------------------------------------
// CONSOLE
// TEST[skip:TBD]
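You can also supply the optional `query` property to restrict the evaluation to
a subset of the annotated index. The request below is a sketch; the `timestamp`
field and the range value are hypothetical and must match fields in your own
index:

[source,js]
--------------------------------------------------
POST _ml/data_frame/_evaluate
{
  "index": "my_analytics_dest_index",
  "query": {
    "range": {
      "timestamp": {
        "gte": "now-7d"
      }
    }
  },
  "evaluation": {
    "binary_soft_classification": {
      "actual_field": "is_outlier",
      "predicted_probability_field": "ml.outlier_score"
    }
  }
}
--------------------------------------------------
// CONSOLE
// TEST[skip:TBD]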
The API returns the following results:

[source,js]
----
{
  "binary_soft_classification": {
    "auc_roc": {
      "score": 0.92584757746414444
    },
    "confusion_matrix": {
      "0.25": {
        "tp": 5,
        "fp": 9,
        "tn": 204,
        "fn": 5
      },
      "0.5": {
        "tp": 1,
        "fp": 5,
        "tn": 208,
        "fn": 9
      },
      "0.75": {
        "tp": 0,
        "fp": 4,
        "tn": 209,
        "fn": 10
      }
    },
    "precision": {
      "0.25": 0.35714285714285715,
      "0.5": 0.16666666666666666,
      "0.75": 0
    },
    "recall": {
      "0.25": 0.5,
      "0.5": 0.1,
      "0.75": 0
    }
  }
}
----
// TESTRESPONSE
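The `precision` and `recall` values in the response follow the standard
definitions `tp / (tp + fp)` and `tp / (tp + fn)` at each probability
threshold. As a quick sketch (not part of the API), you can recompute them from
the returned `confusion_matrix`:

```python
# Recompute precision and recall from the confusion matrix in the
# example response above. At each threshold:
#   precision = tp / (tp + fp)
#   recall    = tp / (tp + fn)
confusion_matrix = {
    "0.25": {"tp": 5, "fp": 9, "tn": 204, "fn": 5},
    "0.5":  {"tp": 1, "fp": 5, "tn": 208, "fn": 9},
    "0.75": {"tp": 0, "fp": 4, "tn": 209, "fn": 10},
}

def precision(m):
    denom = m["tp"] + m["fp"]
    return m["tp"] / denom if denom else 0

def recall(m):
    denom = m["tp"] + m["fn"]
    return m["tp"] / denom if denom else 0

for threshold, m in confusion_matrix.items():
    print(threshold, precision(m), recall(m))
# → 0.25 0.35714285714285715 0.5
# → 0.5 0.16666666666666666 0.1
# → 0.75 0.0 0.0
```

The printed values match the `precision` and `recall` objects in the response
at every threshold.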