
--
:api: evaluate-data-frame
:request: EvaluateDataFrameRequest
:response: EvaluateDataFrameResponse
--
[role="xpack"]
[id="{upid}-{api}"]
=== Evaluate Data Frame API

The Evaluate Data Frame API is used to evaluate an ML algorithm that ran on a {dataframe}.
The API accepts an +{request}+ object and returns an +{response}+.

[id="{upid}-{api}-request"]
==== Evaluate Data Frame Request

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests-file}[{api}-request]
--------------------------------------------------
<1> Constructing a new evaluation request
<2> Reference to an existing index
<3> The query with which to select data from indices
<4> Kind of evaluation to perform
<5> Name of the field in the index whose value denotes the actual (i.e. ground truth) label for an example. The value must be either true or false
<6> Name of the field in the index whose value denotes the probability (as per some ML algorithm) of the example being classified as positive
<7> The remaining parameters are the metrics to be calculated based on the two fields described above
<8> https://en.wikipedia.org/wiki/Precision_and_recall[Precision] calculated at thresholds: 0.4, 0.5 and 0.6
<9> https://en.wikipedia.org/wiki/Precision_and_recall[Recall] calculated at thresholds: 0.5 and 0.7
<10> https://en.wikipedia.org/wiki/Confusion_matrix[Confusion matrix] calculated at threshold 0.5
<11> https://en.wikipedia.org/wiki/Receiver_operating_characteristic#Area_under_the_curve[AUC ROC] calculated and the curve points returned
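The tagged snippet above is pulled in from the client's test sources at build time, so its body is not visible here. As a rough sketch only — the index and field names are placeholders, and the class and package names are assumptions based on the 7.x high-level REST client, which may differ in other versions — the request construction looks roughly like:

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
// Sketch only: "my-index", "actual_label" and "predicted_probability" are
// placeholder names, not values mandated by the API.
EvaluateDataFrameRequest request = new EvaluateDataFrameRequest(
    "my-index",                                       // index holding the analyzed data
    new QueryConfig(QueryBuilders.matchAllQuery()),   // query selecting the evaluated documents
    new BinarySoftClassification(                     // kind of evaluation
        "actual_label",                               // ground-truth field with true/false values
        "predicted_probability",                      // predicted probability of the positive class
        PrecisionMetric.at(0.4, 0.5, 0.6),
        RecallMetric.at(0.5, 0.7),
        ConfusionMatrixMetric.at(0.5),
        AucRocMetric.withCurve()));
--------------------------------------------------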
include::../execution.asciidoc[]
[id="{upid}-{api}-response"]
==== Response

The returned +{response}+ contains the requested evaluation metrics.

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
include-tagged::{doc-tests-file}[{api}-response]
--------------------------------------------------
<1> Fetching all the calculated metrics results
<2> Fetching the precision metric by name
<3> Fetching the precision at a given (0.4) threshold
<4> Fetching the confusion matrix metric by name
<5> Fetching the confusion matrix at a given (0.5) threshold
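As with the request, the response snippet is included at build time. Reading the metrics back can be sketched as follows — method names such as `getMetricByName` and `getScoreByThreshold` are assumptions based on the 7.x client and may differ between versions:

["source","java",subs="attributes,callouts,macros"]
--------------------------------------------------
// Sketch only: `response` is the EvaluateDataFrameResponse returned by the
// client's evaluateDataFrame call for the request built above.
List<EvaluationMetric.Result> metrics = response.getMetrics();

PrecisionMetric.Result precisionResult =
    response.getMetricByName(PrecisionMetric.NAME);
double precisionAt04 = precisionResult.getScoreByThreshold("0.4");

ConfusionMatrixMetric.Result confusionMatrixResult =
    response.getMetricByName(ConfusionMatrixMetric.NAME);
ConfusionMatrixMetric.ConfusionMatrix confusionMatrix =
    confusionMatrixResult.getScoreByThreshold("0.5");
--------------------------------------------------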