[role="xpack"]
[[put-inference-api]]
=== Create {infer} API

experimental[]

Creates a model to perform an {infer} task.

IMPORTANT: The {infer} APIs enable you to use certain services, such as ELSER,
OpenAI, or Hugging Face, in your cluster. This is not the same feature that you
can use on an ML node with custom {ml} models. If you want to train and use your
own model, use the <<ml-df-trained-models-apis>>.

[discrete]
[[put-inference-api-request]]
==== {api-request-title}

`PUT /_inference/<task_type>/<model_id>`

[discrete]
[[put-inference-api-prereqs]]
==== {api-prereq-title}

* Requires the `manage` <<privileges-list-cluster,cluster privilege>>.

[discrete]
[[put-inference-api-desc]]
==== {api-description-title}

The create {infer} API enables you to create and configure an {infer} model to
perform a specific {infer} task.

The following services are available through the {infer} API:

* ELSER
* OpenAI
* Hugging Face

[discrete]
[[put-inference-api-path-params]]
==== {api-path-parms-title}

`<model_id>`::
(Required, string)
The unique identifier of the model.

`<task_type>`::
(Required, string)
The type of the {infer} task that the model will perform. Available task types:
* `sparse_embedding`
* `text_embedding`

[discrete]
[[put-inference-api-request-body]]
==== {api-request-body-title}

`service`::
(Required, string)
The type of service supported for the specified task type.
Available services:
* `elser`: specify the `sparse_embedding` task type to use the ELSER service.
* `openai`: specify the `text_embedding` task type to use the OpenAI service.
* `hugging_face`: specify the `text_embedding` task type to use the Hugging Face
service.

`service_settings`::
(Required, object)
Settings used to install the {infer} model. These settings are specific to the
`service` you specified.
+
.`service_settings` for `elser`
[%collapsible%closed]
=====
`num_allocations`:::
(Required, integer)
The number of model allocations to create.

`num_threads`:::
(Required, integer)
The number of threads used by each model allocation.
=====
+
.`service_settings` for `openai`
[%collapsible%closed]
=====
`api_key`:::
(Required, string)
A valid API key of your OpenAI account. You can find your OpenAI API keys in
your OpenAI account under the
https://platform.openai.com/api-keys[API keys section].
+
IMPORTANT: You need to provide the API key only once, during the {infer} model
creation. The <<get-inference-api>> does not retrieve your API key. After
creating the {infer} model, you cannot change the associated API key. If you
want to use a different API key, delete the {infer} model and recreate it with
the same name and the updated API key, as sketched at the end of this section.

`organization_id`:::
(Optional, string)
The unique identifier of your organization. You can find the Organization ID in
your OpenAI account under
https://platform.openai.com/account/organization[**Settings** > **Organizations**].

`url`:::
(Optional, string)
The URL endpoint to use for the requests. Can be changed for testing purposes.
Defaults to `https://api.openai.com/v1/embeddings`.
=====
+
.`service_settings` for `hugging_face`
[%collapsible%closed]
=====
`api_key`:::
(Required, string)
A valid access token for your Hugging Face account. You can find your existing
access tokens or create a new one on the
https://huggingface.co/settings/tokens[settings page].
+
IMPORTANT: You need to provide the API key only once, during the {infer} model
creation. The <<get-inference-api>> does not retrieve your API key. After
creating the {infer} model, you cannot change the associated API key. If you
want to use a different API key, delete the {infer} model and recreate it with
the same name and the updated API key.

`url`:::
(Required, string)
The URL endpoint to use for the requests.
=====

`task_settings`::
(Optional, object)
Settings to configure the {infer} task. These settings are specific to the
`<task_type>` you specified.
+
.`task_settings` for `text_embedding`
[%collapsible%closed]
=====
`model`:::
(Optional, string)
The name of the model to use for the {infer} task. Refer to the
https://platform.openai.com/docs/guides/embeddings/what-are-embeddings[OpenAI documentation]
for the list of available text embedding models.
=====
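
As noted in the `api_key` descriptions above, the API key associated with a
model cannot be changed after creation; to rotate a key, delete the model and
recreate it under the same name with the new key. The following is a sketch of
that workflow for a hypothetical OpenAI model, assuming the delete {infer} API
(not covered on this page) follows the same `<task_type>/<model_id>` path
pattern:

[source,console]
------------------------------------------------------------
DELETE _inference/text_embedding/openai_embeddings

PUT _inference/text_embedding/openai_embeddings
{
  "service": "openai",
  "service_settings": {
    "api_key": "<new_api_key>"
  },
  "task_settings": {
    "model": "text-embedding-ada-002"
  }
}
------------------------------------------------------------
// TEST[skip:TBD]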

[discrete]
[[put-inference-api-example]]
==== {api-examples-title}

This section contains example API calls for every service type.

[discrete]
[[inference-example-elser]]
===== ELSER service

The following example shows how to create an {infer} model called
`my-elser-model` to perform a `sparse_embedding` task type.

[source,console]
------------------------------------------------------------
PUT _inference/sparse_embedding/my-elser-model
{
  "service": "elser",
  "service_settings": {
    "num_allocations": 1,
    "num_threads": 1
  },
  "task_settings": {}
}
------------------------------------------------------------
// TEST[skip:TBD]

Example response:

[source,console-result]
------------------------------------------------------------
{
  "model_id": "my-elser-model",
  "task_type": "sparse_embedding",
  "service": "elser",
  "service_settings": {
    "num_allocations": 1,
    "num_threads": 1
  },
  "task_settings": {}
}
------------------------------------------------------------
// NOTCONSOLE
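
Once created, the stored configuration of the model can be retrieved with the
<<get-inference-api>>. As a sketch, reusing the model ID from the example above:

[source,console]
------------------------------------------------------------
GET _inference/sparse_embedding/my-elser-model
------------------------------------------------------------
// TEST[skip:TBD]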

[discrete]
[[inference-example-openai]]
===== OpenAI service

The following example shows how to create an {infer} model called
`openai_embeddings` to perform a `text_embedding` task type.

[source,console]
------------------------------------------------------------
PUT _inference/text_embedding/openai_embeddings
{
  "service": "openai",
  "service_settings": {
    "api_key": "<api_key>"
  },
  "task_settings": {
    "model": "text-embedding-ada-002"
  }
}
------------------------------------------------------------
// TEST[skip:TBD]
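
The optional `organization_id` and `url` service settings described above can
be provided in the same request. The following sketch uses placeholder values,
and the `url` shown is simply the documented default:

[source,console]
------------------------------------------------------------
PUT _inference/text_embedding/openai_embeddings
{
  "service": "openai",
  "service_settings": {
    "api_key": "<api_key>",
    "organization_id": "<organization_id>",
    "url": "https://api.openai.com/v1/embeddings"
  },
  "task_settings": {
    "model": "text-embedding-ada-002"
  }
}
------------------------------------------------------------
// TEST[skip:TBD]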

[discrete]
[[inference-example-hugging-face]]
===== Hugging Face service

The following example shows how to create an {infer} model called
`hugging-face-embeddings` to perform a `text_embedding` task type.

[source,console]
------------------------------------------------------------
PUT _inference/text_embedding/hugging-face-embeddings
{
  "service": "hugging_face",
  "service_settings": {
    "api_key": "<access_token>", <1>
    "url": "<url_endpoint>" <2>
  }
}
------------------------------------------------------------
// TEST[skip:TBD]
<1> A valid Hugging Face access token. You can find it on the
https://huggingface.co/settings/tokens[settings page of your account].
<2> The {infer} endpoint URL you created on Hugging Face.

Create a new {infer} endpoint on
https://ui.endpoints.huggingface.co/[the Hugging Face endpoint page] to get an
endpoint URL. On the new endpoint creation page, select the model you want to
use (for example, `intfloat/e5-small-v2`), then select the `Sentence Embeddings`
task under the Advanced configuration section. Create the endpoint and copy the
URL after the endpoint initialization has finished.
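
After the Hugging Face endpoint has initialized, you can try the new model with
a perform {infer} request. This is a sketch only; it assumes the perform {infer}
endpoint follows the `POST _inference/<task_type>/<model_id>` pattern and
accepts the text to embed in an `input` field:

[source,console]
------------------------------------------------------------
POST _inference/text_embedding/hugging-face-embeddings
{
  "input": "The quick brown fox jumps over the lazy dog"
}
------------------------------------------------------------
// TEST[skip:TBD]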