[role="xpack"]
[testenv="basic"]
[[infer-trained-model-deployment]]
= Infer trained model deployment API

[subs="attributes"]
++++
<titleabbrev>Infer trained model deployment</titleabbrev>
++++

Evaluates a trained model.

[[infer-trained-model-deployment-request]]
== {api-request-title}

`POST _ml/trained_models/<model_id>/deployment/_infer`

////
[[infer-trained-model-deployment-prereq]]
== {api-prereq-title}
////

////
[[infer-trained-model-deployment-desc]]
== {api-description-title}
////

[[infer-trained-model-deployment-path-params]]
== {api-path-parms-title}

`<model_id>`::
(Required, string)
include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=model-id]

[[infer-trained-model-deployment-query-params]]
== {api-query-parms-title}

`timeout`::
(Optional, time)
Controls the amount of time to wait for {infer} results. Defaults to 10 seconds.
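
For example, to allow up to 30 seconds for {infer} results (using the same
`model2` deployment as the examples below):

[source,console]
--------------------------------------------------
POST _ml/trained_models/model2/deployment/_infer?timeout=30s
{
  "input": "The movie was awesome!!"
}
--------------------------------------------------
// TEST[skip:TBD]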

[[infer-trained-model-request-body]]
== {api-request-body-title}

`input`::
(Required, string)
The input text for evaluation.

////
[[infer-trained-model-deployment-results]]
== {api-response-body-title}
////

////
[[ml-get-trained-models-response-codes]]
== {api-response-codes-title}
////

[[infer-trained-model-deployment-example]]
== {api-examples-title}

The response depends on the task the model is trained for. If it is a
sentiment analysis task, the response is the score. For example:

[source,console]
--------------------------------------------------
POST _ml/trained_models/model2/deployment/_infer
{
  "input": "The movie was awesome!!"
}
--------------------------------------------------
// TEST[skip:TBD]

The API returns scores in this case, for example:

[source,console-result]
----
{
  "positive" : 0.9998062667902223,
  "negative" : 1.9373320977752957E-4
}
----
// NOTCONSOLE

For named entity recognition (NER) tasks, the response contains the recognized
entities and their type. For example:

[source,console]
--------------------------------------------------
POST _ml/trained_models/model2/deployment/_infer
{
  "input": "Hi my name is Josh and I live in Berlin"
}
--------------------------------------------------
// TEST[skip:TBD]

The API returns the recognized entities in this case, for example:

[source,console-result]
----
{
  "entities" : [
    {
      "label" : "person",
      "score" : 0.9988716330253505,
      "word" : "Josh"
    },
    {
      "label" : "location",
      "score" : 0.9980872542990656,
      "word" : "Berlin"
    }
  ]
}
----
// NOTCONSOLE
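
Outside of Console, the endpoint can be called from any HTTP client. The
following is a minimal Python sketch using only the standard library; the
cluster address, disabled security, and the `model2` deployment ID are
assumptions matching the examples above, not part of this API's contract:

```python
import json
from urllib import request

ES_URL = "http://localhost:9200"   # assumption: local cluster, security disabled
MODEL_ID = "model2"                # assumption: a started deployment with this ID

def build_infer_request(text, timeout="10s"):
    """Build a POST request for the trained model deployment _infer endpoint."""
    url = f"{ES_URL}/_ml/trained_models/{MODEL_ID}/deployment/_infer?timeout={timeout}"
    body = json.dumps({"input": text}).encode("utf-8")
    return request.Request(
        url,
        data=body,
        method="POST",
        headers={"Content-Type": "application/json"},
    )

# Sending the request requires a running cluster:
# with request.urlopen(build_infer_request("The movie was awesome!!")) as resp:
#     print(json.load(resp))
```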