[role="xpack"]
[testenv="platinum"]
[[ml-evaluate-dfanalytics-resources]]
=== {dfanalytics-cap} evaluation resources

Evaluation configuration objects relate to the <<evaluate-dfanalytics>>.

[discrete]
[[ml-evaluate-dfanalytics-properties]]
==== {api-definitions-title}

`evaluation`::
(object) Defines the type of evaluation you want to perform. The value of this
object can be different depending on the type of evaluation you want to
perform. For example, it can contain <<binary-sc-resources>>.
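
For example, an evaluation request body might look like the following sketch,
assuming the binary soft classification type is selected with the
`binary_soft_classification` key. The index name `my-outlier-results` and the
field names `is_outlier` and `ml.outlier_score` are placeholders for your own
data:

[source,console]
--------------------------------------------------
POST _ml/data_frame/_evaluate
{
  "index": "my-outlier-results",
  "evaluation": {
    "binary_soft_classification": {
      "actual_field": "is_outlier",
      "predicted_probability_field": "ml.outlier_score"
    }
  }
}
--------------------------------------------------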

[[binary-sc-resources]]
==== Binary soft classification configuration objects

Binary soft classification evaluates the results of an analysis which outputs
the probability that each {dataframe} row belongs to a certain class. For
example, in the context of outlier detection, the analysis outputs the
probability that each row is an outlier.

[discrete]
[[binary-sc-resources-properties]]
===== {api-definitions-title}

`actual_field`::
(string) The field of the `index` which contains the ground truth. The data
type of this field can be boolean or integer. If the data type is integer, the
value has to be either `0` (false) or `1` (true).

`predicted_probability_field`::
(string) The field of the `index` that defines the probability of whether the
item belongs to the class in question or not. It's the field that contains the
results of the analysis.

`metrics`::
(object) Specifies the metrics that are used for the evaluation. A complete
example request is shown after this list. Available metrics:

`auc_roc`::
(object) The AUC ROC (area under the curve of the receiver operating
characteristic) score and optionally the curve.
Default value is `{"includes_curve": false}`.

`precision`::
(object) Sets the different thresholds of the {olscore} at which the metric is
calculated.
Default value is `{"at": [0.25, 0.50, 0.75]}`.

`recall`::
(object) Sets the different thresholds of the {olscore} at which the metric is
calculated.
Default value is `{"at": [0.25, 0.50, 0.75]}`.

`confusion_matrix`::
(object) Sets the different thresholds of the {olscore} at which the metrics
(`tp` - true positive, `fp` - false positive, `tn` - true negative, `fn` -
false negative) are calculated.
Default value is `{"at": [0.25, 0.50, 0.75]}`.
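
For illustration, the following sketch requests all four metrics explicitly,
using the default thresholds for `precision`, `recall`, and `confusion_matrix`
and enabling the ROC curve. As above, the index and field names are
placeholders, not required values:

[source,console]
--------------------------------------------------
POST _ml/data_frame/_evaluate
{
  "index": "my-outlier-results",
  "evaluation": {
    "binary_soft_classification": {
      "actual_field": "is_outlier",
      "predicted_probability_field": "ml.outlier_score",
      "metrics": {
        "auc_roc": { "includes_curve": true },
        "precision": { "at": [0.25, 0.50, 0.75] },
        "recall": { "at": [0.25, 0.50, 0.75] },
        "confusion_matrix": { "at": [0.25, 0.50, 0.75] }
      }
    }
  }
}
--------------------------------------------------

With a request of this shape, the response is expected to contain one entry per
requested metric; for example, the `confusion_matrix` entry reports the `tp`,
`fp`, `tn`, and `fn` counts at each of the requested thresholds.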