[role="xpack"]
[testenv="platinum"]
[[put-dfanalytics]]
=== Create {dfanalytics-jobs} API
[subs="attributes"]
++++
<titleabbrev>Create {dfanalytics-jobs}</titleabbrev>
++++

Instantiates a {dfanalytics-job}.

experimental[]

[[ml-put-dfanalytics-request]]
==== {api-request-title}

`PUT _ml/data_frame/analytics/<data_frame_analytics_id>`

[[ml-put-dfanalytics-prereq]]
==== {api-prereq-title}

* You must have the `machine_learning_admin` built-in role to use this API. You
must also have `read` and `view_index_metadata` privileges on the source index
and `read`, `create_index`, and `index` privileges on the destination index.
For more information, see {stack-ov}/security-privileges.html[Security
privileges] and {stack-ov}/built-in-roles.html[Built-in roles].
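
For example, a role that grants the required index privileges on a source index
named `logdata` and a destination index named `logdata_out` could look like the
following sketch. The role name and index names are illustrative only, and the
`machine_learning_admin` built-in role must still be assigned to the user
separately:

[source,js]
--------------------------------------------------
PUT _security/role/logdata_dfa_access
{
  "indices": [
    {
      "names": [ "logdata" ],
      "privileges": [ "read", "view_index_metadata" ]
    },
    {
      "names": [ "logdata_out" ],
      "privileges": [ "read", "create_index", "index" ]
    }
  ]
}
--------------------------------------------------
// NOTCONSOLE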

[[ml-put-dfanalytics-desc]]
==== {api-description-title}

This API creates a {dfanalytics-job} that performs an analysis on the source
index and stores the outcome in a destination index.

The destination index will be created automatically if it does not exist. The
`index.number_of_shards` and `index.number_of_replicas` settings of the source
index will be copied to the destination index. When the source index matches
multiple indices, these settings will be set to the maximum values found in the
source indices.

The API also attempts to copy the mappings of the source indices to the
destination index. However, if the mappings of any field differ among the
source indices, the attempt fails with an error message.

If the destination index already exists, it will be used as is. This makes it
possible to set up the destination index in advance with custom settings and
mappings.
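
For example, you could create the destination index with custom settings and
mappings before you create the job. This is only a sketch; the index name
matches the example later on this page, while the `response_time` field and the
setting values are illustrative assumptions:

[source,js]
--------------------------------------------------
PUT logdata_out
{
  "settings": {
    "index.number_of_shards": 1,
    "index.number_of_replicas": 1
  },
  "mappings": {
    "properties": {
      "response_time": { "type": "float" }
    }
  }
}
--------------------------------------------------
// NOTCONSOLE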

[[ml-put-dfanalytics-path-params]]
==== {api-path-parms-title}

`<data_frame_analytics_id>`::
(Required, string) A unique identifier for the {dfanalytics-job}. This
identifier can contain lowercase alphanumeric characters (a-z and 0-9),
hyphens, and underscores. It must start and end with alphanumeric characters.

[[ml-put-dfanalytics-request-body]]
==== {api-request-body-title}

`analysis`::
(Required, object) Defines the type of {dfanalytics} you want to perform on
your source index. For example: `outlier_detection`. See <<dfanalytics-types>>.

`analyzed_fields`::
(Optional, object) You can specify `includes` and/or `excludes` patterns. If
`analyzed_fields` is not set, only the relevant fields will be included. For
example, all the numeric fields for {oldetection}. See the sketch after this
list for an illustrative example.

`dest`::
(Required, object) The destination configuration, consisting of `index` and
optionally `results_field` (`ml` by default). See
<<ml-dfanalytics-properties,{dfanalytics} properties>>.

`model_memory_limit`::
(Optional, string) The approximate maximum amount of memory resources that are
permitted for analytical processing. The default value for {dfanalytics-jobs}
is `1gb`. If your `elasticsearch.yml` file contains an
`xpack.ml.max_model_memory_limit` setting, an error occurs when you try to
create {dfanalytics-jobs} that have `model_memory_limit` values greater than
that setting. For more information, see <<ml-settings>>.

`source`::
(Required, object) The source configuration, consisting of `index` and
optionally a `query`. See
<<ml-dfanalytics-properties,{dfanalytics} properties>>.
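
The following sketch shows how these optional properties fit together in a
single request body. The job identifier, the query, the field names, and the
memory value are illustrative assumptions, not a tested example:

[source,js]
--------------------------------------------------
PUT _ml/data_frame/analytics/loganalytics_filtered
{
  "source": {
    "index": "logdata",
    "query": {
      "range": { "response_time": { "gte": 0 } }
    }
  },
  "dest": {
    "index": "logdata_out",
    "results_field": "ml"
  },
  "analysis": {
    "outlier_detection": {}
  },
  "analyzed_fields": {
    "includes": [ "response_time", "bytes_sent" ],
    "excludes": [ "client_ip" ]
  },
  "model_memory_limit": "2gb"
}
--------------------------------------------------
// NOTCONSOLE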

[[ml-put-dfanalytics-example]]
==== {api-examples-title}

The following example creates the `loganalytics` {dfanalytics-job}; the
analysis type is `outlier_detection`:

[source,js]
--------------------------------------------------
PUT _ml/data_frame/analytics/loganalytics
{
  "source": {
    "index": "logdata"
  },
  "dest": {
    "index": "logdata_out"
  },
  "analysis": {
    "outlier_detection": {}
  }
}
--------------------------------------------------
// CONSOLE
// TEST[setup:setup_logdata]

The API returns the following result:

[source,js]
----
{
  "id": "loganalytics",
  "source": {
    "index": ["logdata"],
    "query": {
      "match_all": {}
    }
  },
  "dest": {
    "index": "logdata_out",
    "results_field": "ml"
  },
  "analysis": {
    "outlier_detection": {}
  },
  "model_memory_limit": "1gb",
  "create_time": 1562265491319,
  "version": "8.0.0"
}
----
// TESTRESPONSE[s/1562265491319/$body.$_path/]
// TESTRESPONSE[s/"version": "8.0.0"/"version": $body.version/]