
[role="xpack"]
[testenv="platinum"]
[[start-dfanalytics]]
= Start {dfanalytics-jobs} API

[subs="attributes"]
++++
<titleabbrev>Start {dfanalytics-jobs}</titleabbrev>
++++

Starts a {dfanalytics-job}.
[[ml-start-dfanalytics-request]]
== {api-request-title}

`POST _ml/data_frame/analytics/<data_frame_analytics_id>/_start`
[[ml-start-dfanalytics-prereq]]
== {api-prereq-title}

Requires the following privileges:

* cluster: `manage_ml` (the `machine_learning_admin` built-in role grants this
privilege)
* source indices: `read`, `view_index_metadata`
* destination index: `read`, `create_index`, `manage` and `index`
[[ml-start-dfanalytics-desc]]
== {api-description-title}

A {dfanalytics-job} can be started and stopped multiple times throughout its
lifecycle.

If the destination index does not exist, it is created automatically the first
time you start the {dfanalytics-job}. The `index.number_of_shards` and
`index.number_of_replicas` settings for the destination index are copied from
the source index. If there are multiple source indices, the destination index
copies the highest setting values. The mappings for the destination index are
also copied from the source indices. If there are any mapping conflicts, the
job fails to start.

If the destination index exists, it is used as is. You can therefore set up the
destination index in advance with custom settings and mappings.
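For example, you could create the destination index ahead of time with explicit
settings. This is a minimal illustration; the index name `loganalytics-dest` and
the setting values shown are examples, not requirements:

[source,console]
--------------------------------------------------
PUT loganalytics-dest
{
  "settings": {
    "index.number_of_shards": 1,
    "index.number_of_replicas": 0
  }
}
--------------------------------------------------
// TEST[skip:illustrative example]

When the job starts, it finds this index already present and writes its results
into it rather than creating a new one.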
IMPORTANT: When {es} {security-features} are enabled, the {dfanalytics-job}
remembers which user created it and runs the job using those credentials. If you
provided <<http-clients-secondary-authorization,secondary authorization headers>>
when you created the job, those credentials are used instead.
[[ml-start-dfanalytics-path-params]]
== {api-path-parms-title}

`<data_frame_analytics_id>`::
(Required, string)
include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=job-id-data-frame-analytics-define]
[[ml-start-dfanalytics-query-params]]
== {api-query-parms-title}

`timeout`::
(Optional, <<time-units,time units>>)
include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=timeout-start]
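For example, the following request waits up to two minutes for the job to
start before the request times out. The `2m` value is illustrative; choose a
timeout that suits your cluster:

[source,console]
--------------------------------------------------
POST _ml/data_frame/analytics/loganalytics/_start?timeout=2m
--------------------------------------------------
// TEST[skip:illustrative example]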
[[ml-start-dfanalytics-response-body]]
== {api-response-body-title}

`acknowledged`::
(Boolean) For a successful response, this value is always `true`. On failure, an
exception is returned instead.

`node`::
(string) The ID of the node that the job was started on. If the job is allowed
to open lazily and has not yet been assigned to a node, this value is an empty
string.
[[ml-start-dfanalytics-example]]
== {api-examples-title}

The following example starts the `loganalytics` {dfanalytics-job}:

[source,console]
--------------------------------------------------
POST _ml/data_frame/analytics/loganalytics/_start
--------------------------------------------------
// TEST[skip:setup:logdata_job]

When the {dfanalytics-job} starts, you receive the following results:

[source,console-result]
----
{
  "acknowledged" : true,
  "node" : "node-1"
}
----