[role="xpack"]
[testenv="platinum"]
[[ml-dfanalytics-resources]]
=== {dfanalytics-cap} job resources

{dfanalytics-cap} resources relate to APIs such as <<put-dfanalytics>> and
<<get-dfanalytics>>.
[discrete]
[[ml-dfanalytics-properties]]
==== {api-definitions-title}

`analysis`::
(object) The type of analysis that is performed on the `source`. For example:
`outlier_detection` or `regression`. For more information, see
<<dfanalytics-types>>.

`analyzed_fields`::
(object) You can specify `includes` patterns, `excludes` patterns, or both. If
`analyzed_fields` is not set, only the relevant fields are included; for
example, all the numeric fields for {oldetection}. For the supported field
types, see <<ml-put-dfanalytics-supported-fields>>.

`includes`:::
(array) An array of strings that defines the fields that are included in
the analysis.

`excludes`:::
(array) An array of strings that defines the fields that are excluded
from the analysis.
[source,console]
--------------------------------------------------
PUT _ml/data_frame/analytics/loganalytics
{
  "source": {
    "index": "logdata"
  },
  "dest": {
    "index": "logdata_out"
  },
  "analysis": {
    "outlier_detection": {}
  },
  "analyzed_fields": {
    "includes": [ "request.bytes", "response.counts.error" ],
    "excludes": [ "source.geo" ]
  }
}
--------------------------------------------------
// TEST[setup:setup_logdata]
`description`::
(Optional, string) A description of the job.

`dest`::
(object) The destination configuration of the analysis.

`index`:::
(Required, string) Defines the _destination index_ that stores the results of
the {dfanalytics-job}.

`results_field`:::
(Optional, string) Defines the name of the field in which to store the
results of the analysis. Defaults to `ml`.

`id`::
(string) The unique identifier for the {dfanalytics-job}. This identifier can
contain lowercase alphanumeric characters (a-z and 0-9), hyphens, and
underscores. It must start and end with alphanumeric characters. This property
is informational; you cannot change the identifier for existing jobs.
`model_memory_limit`::
(string) The approximate maximum amount of memory resources that are
permitted for analytical processing. The default value for {dfanalytics-jobs}
is `1gb`. If your `elasticsearch.yml` file contains an
`xpack.ml.max_model_memory_limit` setting, an error occurs when you try to
create {dfanalytics-jobs} that have `model_memory_limit` values greater than
that setting. For more information, see <<ml-settings>>.
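
For example, a cluster-wide cap can be set in `elasticsearch.yml`; the
`512mb` value below is illustrative, not a recommendation:

[source,yaml]
--------------------------------------------------
# Jobs whose model_memory_limit exceeds this value are rejected at creation.
xpack.ml.max_model_memory_limit: 512mb
--------------------------------------------------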
`source`::
(object) The source configuration, consisting of an `index` and optionally a
`query` object.

`index`:::
(Required, string or array) Index or indices on which to perform the
analysis. It can be a single index or index pattern, or an array of
indices or patterns.

`query`:::
(Optional, object) The {es} query domain-specific language
(<<query-dsl,DSL>>). This value corresponds to the query object in an {es}
search POST body. All the options that are supported by {es} can be used,
as this object is passed verbatim to {es}. By default, this property has
the following value: `{"match_all": {}}`.
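
For example, a `query` can restrict the analysis to a subset of the source
index. The request below is illustrative; it reuses the `logdata` index and
fields from the earlier example and analyzes only documents that recorded at
least one error:

[source,console]
--------------------------------------------------
PUT _ml/data_frame/analytics/loganalytics_errors_only
{
  "source": {
    "index": "logdata",
    "query": {
      "range": { "response.counts.error": { "gt": 0 } }
    }
  },
  "dest": {
    "index": "logdata_errors_out"
  },
  "analysis": {
    "outlier_detection": {}
  }
}
--------------------------------------------------
// TEST[skip:illustrative example]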
[[dfanalytics-types]]
==== Analysis objects

{dfanalytics-cap} resources contain `analysis` objects. For example, when you
create a {dfanalytics-job}, you must define the type of analysis it performs.

[discrete]
[[oldetection-resources]]
==== {oldetection-cap} configuration objects

An `outlier_detection` configuration object has the following properties:
`compute_feature_influence`::
(boolean) If `true`, the feature influence calculation is enabled. Defaults to
`true`.

`feature_influence_threshold`::
(double) The minimum {olscore} that a document needs to have in order to
calculate its {fiscore}. Value range: 0-1 (`0.1` by default).

`method`::
(string) Sets the method that {oldetection} uses. If the method is not set,
{oldetection} uses an ensemble of different methods and normalizes and
combines their individual {olscores} to obtain the overall {olscore}. We
recommend using the ensemble method. Available methods are `lof`, `ldof`,
`distance_kth_nn`, and `distance_knn`.

`n_neighbors`::
(integer) Defines how many nearest neighbors each method of {oldetection}
uses to calculate its {olscore}. When the value is not set, different values
are used for different ensemble members, which helps improve diversity in the
ensemble. Therefore, only override this value if you are confident that the
value you choose is appropriate for the data set.

`outlier_fraction`::
(double) Sets the proportion of the data set that is assumed to be outlying
prior to {oldetection}. For example, 0.05 means it is assumed that 5% of
values are real outliers and 95% are inliers.

`standardization_enabled`::
(boolean) If `true`, the following operation is performed on the columns
before computing outlier scores: `(x_i - mean(x_i)) / sd(x_i)`. Defaults to
`true`. For more information, see
https://en.wikipedia.org/wiki/Feature_scaling#Standardization_(Z-score_Normalization)[this wiki page about standardization].
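
An `outlier_detection` object that overrides several of these defaults might
look like the following. The parameter values are illustrative only, not
recommendations; as noted above, the defaults are usually preferable:

[source,console]
--------------------------------------------------
PUT _ml/data_frame/analytics/loganalytics_custom
{
  "source": {
    "index": "logdata"
  },
  "dest": {
    "index": "logdata_custom_out"
  },
  "analysis": {
    "outlier_detection": {
      "method": "distance_knn",
      "n_neighbors": 5,
      "feature_influence_threshold": 0.2,
      "outlier_fraction": 0.05
    }
  }
}
--------------------------------------------------
// TEST[skip:illustrative example]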
[discrete]
[[regression-resources]]
==== {regression-cap} configuration objects

[source,console]
--------------------------------------------------
PUT _ml/data_frame/analytics/house_price_regression_analysis
{
  "source": {
    "index": "houses_sold_last_10_yrs" <1>
  },
  "dest": {
    "index": "house_price_predictions" <2>
  },
  "analysis": {
    "regression": { <3>
      "dependent_variable": "price" <4>
    }
  }
}
--------------------------------------------------
// TEST[skip:TBD]
<1> Training data is taken from source index `houses_sold_last_10_yrs`.
<2> Analysis results are output to destination index
`house_price_predictions`.
<3> The regression analysis configuration object.
<4> Regression analysis uses the `price` field to train on. As no other
parameters have been specified, it trains on 100% of eligible data, stores its
prediction in the destination index field `price_prediction`, and uses built-in
hyperparameter optimization to give minimum validation errors.
[float]
[[regression-resources-standard]]
===== Standard parameters

include::{docdir}/ml/ml-shared.asciidoc[tag=dependent_variable]
+
--
The data type of the field must be numeric.
--

include::{docdir}/ml/ml-shared.asciidoc[tag=prediction_field_name]

include::{docdir}/ml/ml-shared.asciidoc[tag=training_percent]
[float]
[[regression-resources-advanced]]
===== Advanced parameters

Advanced parameters are for fine-tuning {reganalysis}. If these parameters are
not supplied, their values are set automatically by
<<ml-hyperparameter-optimization,hyperparameter optimization>> to give minimum
validation error. It is highly recommended to use the default values unless
you fully understand the function of these parameters.

include::{docdir}/ml/ml-shared.asciidoc[tag=eta]

include::{docdir}/ml/ml-shared.asciidoc[tag=feature_bag_fraction]

include::{docdir}/ml/ml-shared.asciidoc[tag=maximum_number_trees]

include::{docdir}/ml/ml-shared.asciidoc[tag=gamma]

include::{docdir}/ml/ml-shared.asciidoc[tag=lambda]
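
The standard parameters above can be supplied alongside `dependent_variable`
in the same `regression` object. For example (the index names, field names,
and parameter values below are illustrative only):

[source,console]
--------------------------------------------------
PUT _ml/data_frame/analytics/house_price_regression_tuned
{
  "source": {
    "index": "houses_sold_last_10_yrs"
  },
  "dest": {
    "index": "house_price_predictions_tuned"
  },
  "analysis": {
    "regression": {
      "dependent_variable": "price",
      "training_percent": 80,
      "prediction_field_name": "predicted_price"
    }
  }
}
--------------------------------------------------
// TEST[skip:illustrative example]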
[discrete]
[[classification-resources]]
==== {classification-cap} configuration objects

[float]
[[classification-resources-standard]]
===== Standard parameters

include::{docdir}/ml/ml-shared.asciidoc[tag=dependent_variable]
+
--
The data type of the field must be numeric or boolean.
--

`num_top_classes`::
(Optional, integer) Defines the number of categories for which the predicted
probabilities are reported. It must be non-negative. If it is greater than the
total number of categories to predict (in version {version} of the {stack},
that total is two), all category probabilities are reported. Defaults to 2.

include::{docdir}/ml/ml-shared.asciidoc[tag=prediction_field_name]

include::{docdir}/ml/ml-shared.asciidoc[tag=training_percent]
[float]
[[classification-resources-advanced]]
===== Advanced parameters

Advanced parameters are for fine-tuning {classanalysis}. If these parameters
are not supplied, their values are set automatically by
<<ml-hyperparameter-optimization,hyperparameter optimization>> to give minimum
validation error. It is highly recommended to use the default values unless
you fully understand the function of these parameters.

include::{docdir}/ml/ml-shared.asciidoc[tag=eta]

include::{docdir}/ml/ml-shared.asciidoc[tag=feature_bag_fraction]

include::{docdir}/ml/ml-shared.asciidoc[tag=maximum_number_trees]

include::{docdir}/ml/ml-shared.asciidoc[tag=gamma]

include::{docdir}/ml/ml-shared.asciidoc[tag=lambda]
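
A minimal `classification` configuration follows the same shape as the
{regression} example earlier in this page. The index and field names below are
illustrative only:

[source,console]
--------------------------------------------------
PUT _ml/data_frame/analytics/loan_default_classification
{
  "source": {
    "index": "loan_applications"
  },
  "dest": {
    "index": "loan_default_predictions"
  },
  "analysis": {
    "classification": {
      "dependent_variable": "defaulted",
      "num_top_classes": 2
    }
  }
}
--------------------------------------------------
// TEST[skip:illustrative example]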
[[ml-hyperparameter-optimization]]
===== Hyperparameter optimization

If you don't supply {regression} or {classification} parameters, hyperparameter
optimization is performed by default to set a value for the undefined
parameters. The starting point is calculated for data-dependent parameters by
examining the loss on the training data. Subject to the size constraint, this
operation provides an upper bound on the improvement in validation loss.

A fixed number of rounds is used for optimization which depends on the number
of parameters being optimized. The optimization starts with random search,
then Bayesian optimization is performed targeting maximum expected
improvement. If you override any parameters, the optimization calculates the
values of the remaining parameters accordingly and uses the values you
provided for the overridden parameters. The number of rounds is reduced
accordingly. The validation error is estimated in each round by using 4-fold
cross-validation.
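
The k-fold error-estimation step used in each round can be sketched as
follows. This is an illustrative re-implementation of the idea in Python, not
the actual {stack} code; the function names are hypothetical:

[source,python]
--------------------------------------------------
import statistics

def four_fold_cv_error(rows, train_and_score, k=4):
    """Estimate validation error by k-fold cross-validation.

    rows: the data set, as a list of examples.
    train_and_score: a caller-supplied function that trains a model on the
    training split and returns its error on the held-out split.
    """
    fold_errors = []
    for fold in range(k):
        # Every k-th row (offset by `fold`) is held out for validation;
        # the rest form the training split for this fold.
        held_out = [r for i, r in enumerate(rows) if i % k == fold]
        training = [r for i, r in enumerate(rows) if i % k != fold]
        fold_errors.append(train_and_score(training, held_out))
    # The validation-error estimate is the mean error over all folds.
    return statistics.mean(fold_errors)
--------------------------------------------------

Each candidate hyperparameter setting is scored this way, and the search
keeps the setting with the lowest estimated validation error.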