[role="xpack"]
[[start-trained-model-deployment]]
= Start trained model deployment API
[subs="attributes"]
++++
<titleabbrev>Start trained model deployment</titleabbrev>
++++

experimental::[]

Starts a new trained model deployment.
[[start-trained-model-deployment-request]]
== {api-request-title}

`POST _ml/trained_models/<model_id>/deployment/_start`
[[start-trained-model-deployment-prereq]]
== {api-prereq-title}

Requires the `manage_ml` cluster privilege. This privilege is included in the
`machine_learning_admin` built-in role.
[[start-trained-model-deployment-desc]]
== {api-description-title}

Currently, only `pytorch` models are supported for deployment. When deployed,
the model attempts to allocate to every machine learning node. Once deployed,
the model can be used by the <<inference-processor,{infer-cap} processor>>
in an ingest pipeline or directly in the <<infer-trained-model-deployment>> API.
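For example, a deployed model can be referenced from an ingest pipeline via the
{infer-cap} processor. The following is a hypothetical sketch; the pipeline
name, `target_field`, and `field_map` values are illustrative and depend on
your data:

[source,console]
--------------------------------------------------
PUT _ingest/pipeline/ner-pipeline
{
  "processors": [
    {
      "inference": {
        "model_id": "elastic__distilbert-base-uncased-finetuned-conll03-english",
        "target_field": "ml.ner",
        "field_map": {
          "message": "text_field"
        }
      }
    }
  ]
}
--------------------------------------------------
// TEST[skip:requires a deployed model]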
[[start-trained-model-deployment-path-params]]
== {api-path-parms-title}

`<model_id>`::
(Required, string)
include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=model-id]
[[start-trained-model-deployment-query-params]]
== {api-query-parms-title}

`inference_threads`::
(Optional, integer)
Sets the number of threads used by the inference process. Increasing this value
generally increases inference speed. Because inference is compute bound, any
value greater than the number of hardware threads available on the machine does
not increase inference speed. If this setting is greater than the number of
hardware threads, it is automatically reduced to a value below that number.
Defaults to 1.

`model_threads`::
(Optional, integer)
The number of threads used when sending inference requests to the model.
Increasing this value generally increases throughput. If this setting is
greater than the number of hardware threads, it is automatically reduced to a
value below that number. Defaults to 1.
[NOTE]
=============================================
If the sum of `inference_threads` and `model_threads` is greater than the
number of hardware threads, then the number of `inference_threads` is reduced.
=============================================
`queue_capacity`::
(Optional, integer)
Controls how many inference requests are allowed in the queue at a time.
Every machine learning node in the cluster where the model can be allocated
has a queue of this size; when the number of requests exceeds this value,
new requests are rejected with a 429 error. Defaults to 1024.
`timeout`::
(Optional, time)
Controls the amount of time to wait for the model to deploy. Defaults to
20 seconds.

`wait_for`::
(Optional, string)
Specifies the allocation status to wait for before returning. Defaults to
`started`. The value `starting` indicates deployment is starting but is not yet
running on any node. The value `started` indicates the model has started on at
least one node. The value `fully_allocated` indicates the deployment has
started on all valid nodes.
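Taken together, these settings are tuned when the deployment is started. As a
hypothetical example, on a machine with eight hardware threads you might start
a deployment with four inference threads, two model threads, and a larger
request queue (the specific values here are illustrative, not recommendations):

[source,console]
--------------------------------------------------
POST _ml/trained_models/elastic__distilbert-base-uncased-finetuned-conll03-english/deployment/_start?inference_threads=4&model_threads=2&queue_capacity=2048
--------------------------------------------------
// TEST[skip:requires an uploaded model]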
[[start-trained-model-deployment-example]]
== {api-examples-title}

The following example starts a new deployment for the
`elastic__distilbert-base-uncased-finetuned-conll03-english` trained model:

[source,console]
--------------------------------------------------
POST _ml/trained_models/elastic__distilbert-base-uncased-finetuned-conll03-english/deployment/_start?wait_for=started&timeout=1m
--------------------------------------------------
// TEST[skip:TBD]
The API returns the following results:

[source,console-result]
----
{
    "allocation": {
        "task_parameters": {
            "model_id": "elastic__distilbert-base-uncased-finetuned-conll03-english",
            "model_bytes": 265632637
        },
        "routing_table": {
            "uckeG3R8TLe2MMNBQ6AGrw": {
                "routing_state": "started",
                "reason": ""
            }
        },
        "allocation_state": "started",
        "start_time": "2021-11-02T11:50:34.766591Z"
    }
}
----