
[role="xpack"]
[testenv="platinum"]
[[put-dfanalytics]]
=== Create {dfanalytics-jobs} API

[subs="attributes"]
++++
<titleabbrev>Create {dfanalytics-jobs}</titleabbrev>
++++

Instantiates a {dfanalytics-job}.

experimental[]

[[ml-put-dfanalytics-request]]
==== {api-request-title}

`PUT _ml/data_frame/analytics/<data_frame_analytics_id>`

[[ml-put-dfanalytics-prereq]]
==== {api-prereq-title}

* You must have the `machine_learning_admin` built-in role to use this API. You
must also have `read` and `view_index_metadata` privileges on the source index
and `read`, `create_index`, and `index` privileges on the destination index. For
more information, see <<security-privileges>> and <<built-in-roles>>.

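For example, the index privileges could be granted with a role like the
following. This is only a minimal sketch; the role name and the index names
(`logdata`, `logdata_out`) are placeholders for your own source and destination
indices:

[source,console]
--------------------------------------------------
PUT _security/role/dfanalytics_logdata
{
  "indices": [
    {
      "names": [ "logdata" ],
      "privileges": [ "read", "view_index_metadata" ]
    },
    {
      "names": [ "logdata_out" ],
      "privileges": [ "read", "create_index", "index" ]
    }
  ]
}
--------------------------------------------------
// TEST[skip:illustrative role example]
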
[[ml-put-dfanalytics-desc]]
==== {api-description-title}

This API creates a {dfanalytics-job} that performs an analysis on the source
index and stores the outcome in a destination index.

The destination index is created automatically if it does not exist. The
`index.number_of_shards` and `index.number_of_replicas` settings of the source
index are copied over to the destination index. When the source index matches
multiple indices, these settings are set to the maximum values found in the
source indices.

An attempt is also made to copy the mappings of the source indices over to the
destination index; however, if the mappings of any of the fields don't match
among the source indices, the attempt fails with an error message.

If the destination index already exists, it is used as is. This makes it
possible to set up the destination index in advance with custom settings and
mappings.

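For example, you might pre-create the destination index with explicit settings
and mappings before creating the job. This is a minimal sketch, assuming a
destination index named `logdata_out` and a numeric field `response_time`:

[source,console]
--------------------------------------------------
PUT logdata_out
{
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 1
  },
  "mappings": {
    "properties": {
      "response_time": { "type": "double" }
    }
  }
}
--------------------------------------------------
// TEST[skip:illustrative destination index example]
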
[[ml-put-dfanalytics-supported-fields]]
===== Supported fields

====== {oldetection-cap}

{oldetection-cap} requires numeric or boolean data to analyze. The algorithms
don't support missing values; therefore, fields that have data types other than
numeric or boolean are ignored. Documents where included fields contain missing
values, null values, or an array are also ignored. Therefore, the `dest` index
may contain documents that don't have an {olscore}.

====== {regression-cap}

{regression-cap} supports fields that are numeric, `boolean`, `text`, `keyword`,
and `ip`. It is also tolerant of missing values. Fields that are supported are
included in the analysis; other fields are ignored. Documents where included
fields contain an array with two or more values are also ignored. Documents in
the `dest` index that don't contain a results field are not included in the
{reganalysis}.

====== {classification-cap}

{classification-cap} supports fields that are numeric, `boolean`, `text`,
`keyword`, and `ip`. It is also tolerant of missing values. Fields that are
supported are included in the analysis; other fields are ignored. Documents
where included fields contain an array with two or more values are also ignored.
Documents in the `dest` index that don't contain a results field are not
included in the {classanalysis}.

{classanalysis-cap} can be improved by mapping ordinal variable values to a
single number. For example, in the case of age ranges, you can model the values
as "0-14" = 0, "15-24" = 1, "25-34" = 2, and so on.

[[ml-put-dfanalytics-path-params]]
==== {api-path-parms-title}

`<data_frame_analytics_id>`::
(Required, string) An identifier that uniquely identifies the
{dfanalytics-job}. This identifier can contain lowercase alphanumeric
characters (a-z and 0-9), hyphens, and underscores. It must start and end with
alphanumeric characters.

[[ml-put-dfanalytics-request-body]]
==== {api-request-body-title}

`analysis`::
(Required, object) Defines the type of {dfanalytics} you want to perform on
your source index. For example: `outlier_detection`. See
<<dfanalytics-types>>.

`analyzed_fields`::
(Optional, object) Specify `includes` and/or `excludes` patterns to select
which fields are included in the analysis. If `analyzed_fields` is not set,
only the relevant fields are included. For example, all the numeric fields
for {oldetection}. For the supported field types, see
<<ml-put-dfanalytics-supported-fields>>. Also see <<explain-dfanalytics>>,
which helps you understand field selection. A request body that combines
`analyzed_fields` with a custom `source` is shown after this parameter list.

`includes`:::
(Optional, array) An array of strings that defines the fields that are
included in the analysis.

`excludes`:::
(Optional, array) An array of strings that defines the fields that are
excluded from the analysis. You do not need to add fields with unsupported
data types to `excludes`; these fields are excluded from the analysis
automatically.

`description`::
(Optional, string) A description of the job.

`dest`::
(Required, object) The destination configuration, consisting of `index` and
optionally `results_field` (`ml` by default).

`index`:::
(Required, string) Defines the _destination index_ to store the results of
the {dfanalytics-job}.

`results_field`:::
(Optional, string) Defines the name of the field in which to store the
results of the analysis. Defaults to `ml`.

`model_memory_limit`::
(Optional, string) The approximate maximum amount of memory resources that are
permitted for analytical processing. The default value for {dfanalytics-jobs}
is `1gb`. If your `elasticsearch.yml` file contains an
`xpack.ml.max_model_memory_limit` setting, an error occurs when you try to
create {dfanalytics-jobs} that have `model_memory_limit` values greater than
that setting. For more information, see <<ml-settings>>.

`source`::
(Required, object) The configuration of how to source the analysis data. It
requires an `index`. Optionally, `query` and `_source` may be specified.

`index`:::
(Required, string or array) Index or indices on which to perform the
analysis. It can be a single index or index pattern as well as an array of
indices or patterns.

`query`:::
(Optional, object) The {es} query domain-specific language
(<<query-dsl,DSL>>). This value corresponds to the query object in an {es}
search POST body. All the options that are supported by {es} can be used,
as this object is passed verbatim to {es}. By default, this property has
the following value: `{"match_all": {}}`.

`_source`:::
(Optional, object) Specify `includes` and/or `excludes` patterns to select
which fields are present in the destination. Fields that are excluded
cannot be included in the analysis.

`includes`::::
(array) An array of strings that defines the fields that are included in
the destination.

`excludes`::::
(array) An array of strings that defines the fields that are excluded
from the destination.

`allow_lazy_start`::
(Optional, boolean) Whether this job should be allowed to start when there
is insufficient {ml} node capacity for it to be immediately assigned to a node.
The default is `false`, which means that the <<start-dfanalytics>>
will return an error if a {ml} node with capacity to run the
job cannot immediately be found. (However, this is also subject to
the cluster-wide `xpack.ml.max_lazy_ml_nodes` setting - see
<<advanced-ml-settings>>.) If this option is set to `true` then
the <<start-dfanalytics>> will not return an error, and the job will
wait in the `starting` state until sufficient {ml} node capacity
is available.

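For example, a request body that combines several of these properties might
look like the following. This is only a sketch; the job name, index names, and
field names are hypothetical:

[source,console]
--------------------------------------------------
PUT _ml/data_frame/analytics/weblog_outliers
{
  "description": "Outlier detection on filtered web log data",
  "source": {
    "index": "weblogs",
    "query": {
      "term": { "status": 200 }
    },
    "_source": {
      "excludes": [ "message" ]
    }
  },
  "dest": {
    "index": "weblogs_outliers",
    "results_field": "ml"
  },
  "analysis": {
    "outlier_detection": {}
  },
  "analyzed_fields": {
    "includes": [ "bytes", "response_time" ],
    "excludes": []
  },
  "model_memory_limit": "500mb",
  "allow_lazy_start": true
}
--------------------------------------------------
// TEST[skip:illustrative combined configuration example]
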
[[ml-put-dfanalytics-example]]
==== {api-examples-title}

[[ml-put-dfanalytics-example-od]]
===== {oldetection-cap} example

The following example creates the `loganalytics` {dfanalytics-job}; the
analysis type is `outlier_detection`:

[source,console]
--------------------------------------------------
PUT _ml/data_frame/analytics/loganalytics
{
  "description": "Outlier detection on log data",
  "source": {
    "index": "logdata"
  },
  "dest": {
    "index": "logdata_out"
  },
  "analysis": {
    "outlier_detection": {
      "compute_feature_influence": true,
      "outlier_fraction": 0.05,
      "standardization_enabled": true
    }
  }
}
--------------------------------------------------
// TEST[setup:setup_logdata]

The API returns the following result:

[source,console-result]
----
{
  "id": "loganalytics",
  "description": "Outlier detection on log data",
  "source": {
    "index": ["logdata"],
    "query": {
      "match_all": {}
    }
  },
  "dest": {
    "index": "logdata_out",
    "results_field": "ml"
  },
  "analysis": {
    "outlier_detection": {
      "compute_feature_influence": true,
      "outlier_fraction": 0.05,
      "standardization_enabled": true
    }
  },
  "model_memory_limit": "1gb",
  "create_time": 1562265491319,
  "version": "8.0.0",
  "allow_lazy_start": false
}
----
// TESTRESPONSE[s/1562265491319/$body.$_path/]
// TESTRESPONSE[s/"version": "8.0.0"/"version": $body.version/]

[[ml-put-dfanalytics-example-r]]
===== {regression-cap} examples

The following example creates the `house_price_regression_analysis`
{dfanalytics-job}; the analysis type is `regression`:

[source,console]
--------------------------------------------------
PUT _ml/data_frame/analytics/house_price_regression_analysis
{
  "source": {
    "index": "houses_sold_last_10_yrs"
  },
  "dest": {
    "index": "house_price_predictions"
  },
  "analysis": {
    "regression": {
      "dependent_variable": "price"
    }
  }
}
--------------------------------------------------
// TEST[skip:TBD]

The API returns the following result:

[source,console-result]
----
{
  "id": "house_price_regression_analysis",
  "source": {
    "index": [
      "houses_sold_last_10_yrs"
    ],
    "query": {
      "match_all": {}
    }
  },
  "dest": {
    "index": "house_price_predictions",
    "results_field": "ml"
  },
  "analysis": {
    "regression": {
      "dependent_variable": "price",
      "training_percent": 100
    }
  },
  "model_memory_limit": "1gb",
  "create_time": 1567168659127,
  "version": "8.0.0",
  "allow_lazy_start": false
}
----
// TESTRESPONSE[s/1567168659127/$body.$_path/]
// TESTRESPONSE[s/"version": "8.0.0"/"version": $body.version/]

The following example creates a job and specifies a training percent:

[source,console]
--------------------------------------------------
PUT _ml/data_frame/analytics/student_performance_mathematics_0.3
{
  "source": {
    "index": "student_performance_mathematics"
  },
  "dest": {
    "index": "student_performance_mathematics_reg"
  },
  "analysis": {
    "regression": {
      "dependent_variable": "G3",
      "training_percent": 70 <1>
    }
  }
}
--------------------------------------------------
// TEST[skip:TBD]

<1> The `training_percent` defines the percentage of the data set that will be
used for training the model.

[[ml-put-dfanalytics-example-c]]
===== {classification-cap} example

The following example creates the `loan_classification` {dfanalytics-job}; the
analysis type is `classification`:

[source,console]
--------------------------------------------------
PUT _ml/data_frame/analytics/loan_classification
{
  "source": {
    "index": "loan-applicants"
  },
  "dest": {
    "index": "loan-applicants-classified"
  },
  "analysis": {
    "classification": {
      "dependent_variable": "label",
      "training_percent": 75,
      "num_top_classes": 2
    }
  }
}
--------------------------------------------------
// TEST[skip:TBD]