[role="xpack"]
[testenv="platinum"]
[[put-dfanalytics]]
=== Create {dfanalytics-jobs} API

[subs="attributes"]
++++
<titleabbrev>Create {dfanalytics-jobs}</titleabbrev>
++++

Instantiates a {dfanalytics-job}.

experimental[]

[[ml-put-dfanalytics-request]]
==== {api-request-title}

`PUT _ml/data_frame/analytics/<data_frame_analytics_id>`

[[ml-put-dfanalytics-prereq]]
==== {api-prereq-title}

If the {es} {security-features} are enabled, you must have the following
built-in roles and privileges:

* `machine_learning_admin`
* `kibana_user` (UI only)
* source index: `read`, `view_index_metadata`
* destination index: `read`, `create_index`, `manage` and `index`
* cluster: `monitor` (UI only)

For more information, see <<security-privileges>> and <<built-in-roles>>.
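
For example, the following request sketches a custom role that grants the index
privileges listed above. This is only an illustration: the role name and the
index names are hypothetical placeholders, and you still need the
`machine_learning_admin` built-in role in addition to it.

[source,console]
--------------------------------------------------
PUT _security/role/dfa_user
{
  "indices": [
    {
      "names": [ "logdata" ], <1>
      "privileges": [ "read", "view_index_metadata" ]
    },
    {
      "names": [ "logdata_out" ], <2>
      "privileges": [ "read", "create_index", "manage", "index" ]
    }
  ]
}
--------------------------------------------------
// TEST[skip:illustrative example only]

<1> Hypothetical source index name.
<2> Hypothetical destination index name.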

[[ml-put-dfanalytics-desc]]
==== {api-description-title}

This API creates a {dfanalytics-job} that performs an analysis on the source
index and stores the outcome in a destination index.

The destination index will be automatically created if it does not exist. The
`index.number_of_shards` and `index.number_of_replicas` settings of the source
index will be copied to the destination index. When the source index matches
multiple indices, these settings will be set to the maximum values found in the
source indices.

The API also attempts to copy the mappings of the source indices to the
destination index. If the mappings of any field do not match among the source
indices, the attempt fails with an error message.

If the destination index already exists, it will be used as is. This makes it
possible to set up the destination index in advance with custom settings and
mappings.
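
For example, you could create the destination index ahead of time along the
following lines. This is a minimal sketch rather than part of this API: the
index name, settings, and mapped field are placeholder values to adapt to your
own data.

[source,console]
--------------------------------------------------
PUT logdata_out <1>
{
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 1
  },
  "mappings": {
    "properties": {
      "request": {
        "properties": {
          "bytes": { "type": "long" }
        }
      }
    }
  }
}
--------------------------------------------------
// TEST[skip:illustrative example only]

<1> Hypothetical destination index name; the settings and mappings shown are
examples only.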

[[ml-put-dfanalytics-supported-fields]]
===== Supported fields

====== {oldetection-cap}

{oldetection-cap} requires numeric or boolean data to analyze. The algorithms
don't support missing values; therefore, fields that have data types other than
numeric or boolean are ignored. Documents where included fields contain missing
values, null values, or an array are also ignored. Therefore the `dest` index
may contain documents that don't have an {olscore}.

====== {regression-cap}

{regression-cap} supports fields that are numeric, `boolean`, `text`, `keyword`,
and `ip`. It is also tolerant of missing values. Fields that are supported are
included in the analysis; other fields are ignored. Documents where included
fields contain an array with two or more values are also ignored. Documents in
the `dest` index that don't contain a results field are not included in the
{reganalysis}.

====== {classification-cap}

{classification-cap} supports fields that are numeric, `boolean`, `text`,
`keyword`, and `ip`. It is also tolerant of missing values. Fields that are
supported are included in the analysis; other fields are ignored. Documents
where included fields contain an array with two or more values are also ignored.
Documents in the `dest` index that don't contain a results field are not
included in the {classanalysis}.

{classanalysis-cap} can be improved by mapping ordinal variable values to a
single number. For example, in the case of age ranges, you can model the values
as "0-14" = 0, "15-24" = 1, "25-34" = 2, and so on.

[[ml-put-dfanalytics-path-params]]
==== {api-path-parms-title}

`<data_frame_analytics_id>`::
(Required, string)
include::{docdir}/ml/ml-shared.asciidoc[tag=job-id-data-frame-analytics-define]

[[ml-put-dfanalytics-request-body]]
==== {api-request-body-title}

`analysis`::
(Required, object)
include::{docdir}/ml/ml-shared.asciidoc[tag=analysis]

`analyzed_fields`::
(Optional, object)
include::{docdir}/ml/ml-shared.asciidoc[tag=analyzed-fields]

[source,console]
--------------------------------------------------
PUT _ml/data_frame/analytics/loganalytics
{
  "source": {
    "index": "logdata"
  },
  "dest": {
    "index": "logdata_out"
  },
  "analysis": {
    "outlier_detection": {
    }
  },
  "analyzed_fields": {
    "includes": [ "request.bytes", "response.counts.error" ],
    "excludes": [ "source.geo" ]
  }
}
--------------------------------------------------
// TEST[setup:setup_logdata]

`description`::
(Optional, string)
include::{docdir}/ml/ml-shared.asciidoc[tag=description-dfa]

`dest`::
(Required, object)
include::{docdir}/ml/ml-shared.asciidoc[tag=dest]

`model_memory_limit`::
(Optional, string)
include::{docdir}/ml/ml-shared.asciidoc[tag=model-memory-limit-dfa]

`source`::
(Required, object)
include::{docdir}/ml/ml-shared.asciidoc[tag=source-put-dfa]

`allow_lazy_start`::
(Optional, boolean)
include::{docdir}/ml/ml-shared.asciidoc[tag=allow-lazy-start]

[[ml-put-dfanalytics-example]]
==== {api-examples-title}

[[ml-put-dfanalytics-example-preprocess]]
===== Preprocessing actions example

The following example shows how to limit the scope of the analysis to certain
fields, specify excluded fields in the destination index, and use a query to
filter your data before analysis.

[source,console]
--------------------------------------------------
PUT _ml/data_frame/analytics/model-flight-delays-pre
{
  "source": {
    "index": [
      "kibana_sample_data_flights" <1>
    ],
    "query": { <2>
      "range": {
        "DistanceKilometers": {
          "gt": 0
        }
      }
    },
    "_source": { <3>
      "includes": [],
      "excludes": [
        "FlightDelay",
        "FlightDelayType"
      ]
    }
  },
  "dest": { <4>
    "index": "df-flight-delays",
    "results_field": "ml-results"
  },
  "analysis": {
    "regression": {
      "dependent_variable": "FlightDelayMin",
      "training_percent": 90
    }
  },
  "analyzed_fields": { <5>
    "includes": [],
    "excludes": [
      "FlightNum"
    ]
  },
  "model_memory_limit": "100mb"
}
--------------------------------------------------
// TEST[skip:setup kibana sample data]

<1> The source index to analyze.
<2> This query filters out entire documents that will not be present in the
destination index.
<3> The `_source` object defines fields in the dataset that will be included or
excluded in the destination index. In this case, `includes` does not specify any
fields, so the default behavior takes place: all the fields of the source index
will be included except the ones that are explicitly specified in `excludes`.
<4> Defines the destination index that contains the results of the analysis and
the fields of the source index specified in the `_source` object. Also defines
the name of the `results_field`.
<5> Specifies fields to be included in or excluded from the analysis. This does
not affect whether the fields will be present in the destination index; it only
affects whether they are used in the analysis.

In this example, we can see that all the fields of the source index are included
in the destination index except `FlightDelay` and `FlightDelayType` because
these are defined as excluded fields by the `excludes` parameter of the
`_source` object. The `FlightNum` field is included in the destination index;
however, it is not included in the analysis because it is explicitly specified
as an excluded field by the `excludes` parameter of the `analyzed_fields` object.

[[ml-put-dfanalytics-example-od]]
===== {oldetection-cap} example

The following example creates the `loganalytics` {dfanalytics-job}; the analysis
type is `outlier_detection`:

[source,console]
--------------------------------------------------
PUT _ml/data_frame/analytics/loganalytics
{
  "description": "Outlier detection on log data",
  "source": {
    "index": "logdata"
  },
  "dest": {
    "index": "logdata_out"
  },
  "analysis": {
    "outlier_detection": {
      "compute_feature_influence": true,
      "outlier_fraction": 0.05,
      "standardization_enabled": true
    }
  }
}
--------------------------------------------------
// TEST[setup:setup_logdata]

The API returns the following result:

[source,console-result]
----
{
  "id": "loganalytics",
  "description": "Outlier detection on log data",
  "source": {
    "index": ["logdata"],
    "query": {
      "match_all": {}
    }
  },
  "dest": {
    "index": "logdata_out",
    "results_field": "ml"
  },
  "analysis": {
    "outlier_detection": {
      "compute_feature_influence": true,
      "outlier_fraction": 0.05,
      "standardization_enabled": true
    }
  },
  "model_memory_limit": "1gb",
  "create_time" : 1562265491319,
  "version" : "8.0.0",
  "allow_lazy_start" : false
}
----
// TESTRESPONSE[s/1562265491319/$body.$_path/]
// TESTRESPONSE[s/"version": "8.0.0"/"version": $body.version/]

[[ml-put-dfanalytics-example-r]]
===== {regression-cap} examples

The following example creates the `house_price_regression_analysis`
{dfanalytics-job}; the analysis type is `regression`:

[source,console]
--------------------------------------------------
PUT _ml/data_frame/analytics/house_price_regression_analysis
{
  "source": {
    "index": "houses_sold_last_10_yrs"
  },
  "dest": {
    "index": "house_price_predictions"
  },
  "analysis": {
    "regression": {
      "dependent_variable": "price"
    }
  }
}
--------------------------------------------------
// TEST[skip:TBD]

The API returns the following result:

[source,console-result]
----
{
  "id" : "house_price_regression_analysis",
  "source" : {
    "index" : [
      "houses_sold_last_10_yrs"
    ],
    "query" : {
      "match_all" : { }
    }
  },
  "dest" : {
    "index" : "house_price_predictions",
    "results_field" : "ml"
  },
  "analysis" : {
    "regression" : {
      "dependent_variable" : "price",
      "training_percent" : 100
    }
  },
  "model_memory_limit" : "1gb",
  "create_time" : 1567168659127,
  "version" : "8.0.0",
  "allow_lazy_start" : false
}
----
// TESTRESPONSE[s/1567168659127/$body.$_path/]
// TESTRESPONSE[s/"version": "8.0.0"/"version": $body.version/]

The following example creates a job and specifies a training percent:

[source,console]
--------------------------------------------------
PUT _ml/data_frame/analytics/student_performance_mathematics_0.3
{
  "source": {
    "index": "student_performance_mathematics"
  },
  "dest": {
    "index": "student_performance_mathematics_reg"
  },
  "analysis": {
    "regression": {
      "dependent_variable": "G3",
      "training_percent": 70, <1>
      "randomize_seed": 19673948271 <2>
    }
  }
}
--------------------------------------------------
// TEST[skip:TBD]

<1> The `training_percent` defines the percentage of the data set that will be
used for training the model.
<2> The `randomize_seed` is the seed used to randomly pick which data is used
for training.

[[ml-put-dfanalytics-example-c]]
===== {classification-cap} example

The following example creates the `loan_classification` {dfanalytics-job}; the
analysis type is `classification`:

[source,console]
--------------------------------------------------
PUT _ml/data_frame/analytics/loan_classification
{
  "source" : {
    "index": "loan-applicants"
  },
  "dest" : {
    "index": "loan-applicants-classified"
  },
  "analysis" : {
    "classification": {
      "dependent_variable": "label",
      "training_percent": 75,
      "num_top_classes": 2
    }
  }
}
--------------------------------------------------
// TEST[skip:TBD]