[DOCS] Documents that deployment_id can be used as inference_id in certain cases. (#121055) (#121059)

István Zoltán Szabó · 8 months ago · commit 4614cb1f59
1 changed file with 4 additions and 1 deletion
+ 4 - 1
docs/reference/query-dsl/sparse-vector-query.asciidoc

@@ -62,11 +62,14 @@ GET _search
 (Required, string) The name of the field that contains the token-weight pairs to be searched against.
 
 `inference_id`::
-(Optional, string) The <<inference-apis,inference ID>> to use to convert the query text into token-weight pairs.
+(Optional, string)
+The <<inference-apis,inference ID>> to use to convert the query text into token-weight pairs.
 It must be the same inference ID that was used to create the tokens from the input text.
 Only one of `inference_id` and `query_vector` is allowed.
 If `inference_id` is specified, `query` must also be specified.
 If all queried fields are of type <<semantic-text, semantic_text>>, the inference ID associated with the `semantic_text` field will be inferred.
+You can use the `deployment_id` of a {ml} trained model deployment as an `inference_id`.
+For example, if you download and deploy the ELSER model in the {ml-cap} trained models UI in {kib}, you can use the `deployment_id` of that deployment as the `inference_id`.
 
 `query`::
 (Optional, string) The query text you want to use for search.
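For illustration, the documented option could be exercised as follows. This is a sketch, not part of the commit: the index name, field name, and the deployment ID `my-elser-deployment` are placeholders, assuming an ELSER deployment created through the {ml-cap} trained models UI in {kib}.

[source,console]
----
GET my-index/_search
{
  "query": {
    "sparse_vector": {
      "field": "content_embedding",
      "inference_id": "my-elser-deployment",
      "query": "How do I configure a snapshot repository?"
    }
  }
}
----

Because `inference_id` is specified, `query` must also be specified, per the parameter description above.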