[role="xpack"]
[testenv="platinum"]
[[update-dfanalytics]]
=== Update {dfanalytics-jobs} API

[subs="attributes"]
++++
<titleabbrev>Update {dfanalytics-jobs}</titleabbrev>
++++

Updates an existing {dfanalytics-job}.

experimental[]

[[ml-update-dfanalytics-request]]
==== {api-request-title}

`POST _ml/data_frame/analytics/<data_frame_analytics_id>/_update`

[[ml-update-dfanalytics-prereq]]
==== {api-prereq-title}

If the {es} {security-features} are enabled, you must have the following
built-in roles and privileges:

* `machine_learning_admin`
* `kibana_admin` (UI only)
* source indices: `read`, `view_index_metadata`
* destination index: `read`, `create_index`, `manage`, and `index`
* cluster: `monitor` (UI only)

For more information, see <<security-privileges>> and <<built-in-roles>>.

NOTE: The {dfanalytics-job} remembers which roles the user who created it had at
the time of creation. When you start the job, it performs the analysis using
those same roles. If you provide
<<http-clients-secondary-authorization,secondary authorization headers>>,
those credentials are used instead.
[[ml-update-dfanalytics-desc]]
==== {api-description-title}

This API updates an existing {dfanalytics-job} that performs an analysis on the
source indices and stores the outcome in a destination index.

[[ml-update-dfanalytics-path-params]]
==== {api-path-parms-title}

`<data_frame_analytics_id>`::
(Required, string)
include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=job-id-data-frame-analytics-define]

[role="child_attributes"]
[[ml-update-dfanalytics-request-body]]
==== {api-request-body-title}

`allow_lazy_start`::
(Optional, boolean)
Specifies whether this job can start when there is insufficient {ml} node
capacity for it to be immediately assigned to a node. The default is `false`; if
a {ml} node with capacity to run the job cannot immediately be found, the API
returns an error. However, this is also subject to the cluster-wide
`xpack.ml.max_lazy_ml_nodes` setting. See <<advanced-ml-settings>>. If this
option is set to `true`, the API does not return an error and the job waits in
the `starting` state until sufficient {ml} node capacity is available.

`description`::
(Optional, string)
include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=description-dfa]

`model_memory_limit`::
(Optional, string)
The approximate maximum amount of memory resources that are permitted for
analytical processing. The default value for {dfanalytics-jobs} is `1gb`. If
your `elasticsearch.yml` file contains an `xpack.ml.max_model_memory_limit`
setting, an error occurs when you try to create {dfanalytics-jobs} that have
`model_memory_limit` values greater than that setting. For more information, see
<<ml-settings>>.
[[ml-update-dfanalytics-example]]
==== {api-examples-title}

[[ml-update-dfanalytics-example-preprocess]]
===== Updating model memory limit example

The following example shows how to update the model memory limit for the
existing {dfanalytics} configuration.

[source,console]
--------------------------------------------------
POST _ml/data_frame/analytics/model-flight-delays/_update
{
  "model_memory_limit": "200mb"
}
--------------------------------------------------
// TEST[skip:setup kibana sample data]
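[[ml-update-dfanalytics-example-multiple]]
===== Updating multiple properties example

Because every property in the request body is optional, several can be changed
in a single call. The following sketch is illustrative only: it assumes a
{dfanalytics-job} named `loganalytics` already exists, and the values shown are
placeholders.

[source,console]
--------------------------------------------------
POST _ml/data_frame/analytics/loganalytics/_update
{
  "description": "Outlier detection on log data",
  "model_memory_limit": "2gb",
  "allow_lazy_start": true
}
--------------------------------------------------
// TEST[skip:illustrative example only]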