@@ -1,7 +1,7 @@
[role="xpack"]
[testenv="platinum"]
[[evaluate-dfanalytics]]
-=== Evaluate {dfanalytics} API
+= Evaluate {dfanalytics} API

[subs="attributes"]
++++
@@ -14,13 +14,13 @@ experimental[]


[[ml-evaluate-dfanalytics-request]]
-==== {api-request-title}
+== {api-request-title}

`POST _ml/data_frame/_evaluate`


[[ml-evaluate-dfanalytics-prereq]]
-==== {api-prereq-title}
+== {api-prereq-title}

If the {es} {security-features} are enabled, you must have the following
privileges:
@@ -31,7 +31,7 @@ For more information, see <<security-privileges>> and <<built-in-roles>>.


[[ml-evaluate-dfanalytics-desc]]
-==== {api-description-title}
+== {api-description-title}

The API packages together commonly used evaluation metrics for various types of
machine learning features. This has been designed for use on indexes created by
@@ -40,7 +40,7 @@ result field to be present.


[[ml-evaluate-dfanalytics-request-body]]
-==== {api-request-body-title}
+== {api-request-body-title}

`evaluation`::
(Required, object) Defines the type of evaluation you want to perform.
@@ -64,10 +64,10 @@ performed.
source index. See <<query-dsl>>.

[[ml-evaluate-dfanalytics-resources]]
-==== {dfanalytics-cap} evaluation resources
+== {dfanalytics-cap} evaluation resources

[[binary-sc-resources]]
-===== Binary soft classification evaluation objects
+=== Binary soft classification evaluation objects

Binary soft classification evaluates the results of an analysis which outputs
the probability that each document belongs to a certain class. For example, in
@@ -109,7 +109,7 @@ document is an outlier.


[[regression-evaluation-resources]]
-===== {regression-cap} evaluation objects
+=== {regression-cap} evaluation objects

{regression-cap} evaluation evaluates the results of a {regression} analysis
which outputs a prediction of values.
@@ -145,7 +145,7 @@ which outputs a prediction of values.


[[classification-evaluation-resources]]
-==== {classification-cap} evaluation objects
+== {classification-cap} evaluation objects

{classification-cap} evaluation evaluates the results of a {classanalysis} which
outputs a prediction that identifies to which of the classes each document
@@ -178,7 +178,7 @@ belongs.

////
[[ml-evaluate-dfanalytics-results]]
-==== {api-response-body-title}
+== {api-response-body-title}

`binary_soft_classification`::
(object) If you chose to do binary soft classification, the API returns the
@@ -195,11 +195,11 @@ belongs.


[[ml-evaluate-dfanalytics-example]]
-==== {api-examples-title}
+== {api-examples-title}


[[ml-evaluate-binary-soft-class-example]]
-===== Binary soft classification
+=== Binary soft classification

[source,console]
--------------------------------------------------
@@ -261,7 +261,7 @@ The API returns the following results:


[[ml-evaluate-regression-example]]
-===== {regression-cap}
+=== {regression-cap}

[source,console]
--------------------------------------------------
@@ -375,7 +375,7 @@ calculated by the {reganalysis}.


[[ml-evaluate-classification-example]]
-===== {classification-cap}
+=== {classification-cap}

[source,console]
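
The patch excerpt is cut off at the `[source,console]` opener above, so the example request body itself is not part of this diff. As a separate illustration only, not taken from the patch, a binary soft classification evaluation request to the endpoint documented here generally has the following shape; the index name `my_analytics_dest_index` and the field names `is_outlier` and `ml.outlier_score` are placeholders for the destination index and fields of an actual analytics job.

[source,console]
--------------------------------------------------
POST _ml/data_frame/_evaluate
{
  "index": "my_analytics_dest_index",
  "evaluation": {
    "binary_soft_classification": {
      "actual_field": "is_outlier",
      "predicted_probability_field": "ml.outlier_score"
    }
  }
}
--------------------------------------------------

Here `actual_field` points at the ground-truth labels and `predicted_probability_field` points at the probability produced by the analysis, matching the binary soft classification evaluation object described in the context lines of the patch.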