
[DOCS] Move ESQL docs from docs-content repo (#133301)

Liam Thompson 1 month ago
parent
commit
4ede4ad685

+ 37 - 12
docs/reference/query-languages/esql.md

@@ -2,23 +2,48 @@
 navigation_title: "{{esql}}"
 mapped_pages:
   - https://www.elastic.co/guide/en/elasticsearch/reference/current/esql-language.html
+  - https://www.elastic.co/guide/en/elasticsearch/reference/current/esql.html
+  - https://www.elastic.co/guide/en/elasticsearch/reference/current/esql-getting-started.html
+  - https://www.elastic.co/guide/en/elasticsearch/reference/current/esql-using.html
+  - https://www.elastic.co/guide/en/elasticsearch/reference/current/esql-examples.html
+products:
+  - id: elasticsearch
 ---
 
 # {{esql}} reference [esql-language]
 
-:::{note}
-This section provides detailed **reference information** about the {{esql}} language, including syntax, functions, and operators.
+**Elasticsearch Query Language ({{esql}})** is a piped query language for filtering, transforming, and analyzing data.
 
-For overview, conceptual, and getting started information, refer to the [{{esql}} language overview](docs-content://explore-analyze/query-filter/languages/esql.md) in the **Explore and analyze** section.
-:::
+## What's {{esql}}? [_the_esql_compute_engine]
 
-{{esql}} is a piped query language for exploring and analyzing data in {{es}}. It is designed to be easy to use and understand, while also being powerful enough to handle complex data processing.
+You can author {{esql}} queries to find specific events, perform statistical analysis, and create visualizations. It supports a wide range of commands, functions, and operators for data operations such as filtering, aggregation, time-series analysis, and more. It initially supported a subset of the features available in Query DSL, but it is rapidly evolving with every {{serverless-full}} and Stack release.
 
-This reference section provides detailed technical information about {{esql}} features, syntax, and behavior:
+{{esql}} is designed to be easy to read and write, making it accessible for users with varying levels of technical expertise. It is particularly useful for data analysts, security professionals, and developers who need to work with large datasets in Elasticsearch.
+
+## How does it work? [search-analyze-data-esql]
+
+{{esql}} uses pipes (`|`) to manipulate and transform data in a step-by-step fashion. This approach allows you to compose a series of operations, where the output of one operation becomes the input for the next, enabling complex data transformations and analysis.
+
+Here's a simple example of an {{esql}} query:
+
+```esql
+FROM sample_data
+| SORT @timestamp DESC
+| LIMIT 3
+```
+
+Note that each line in the query represents a step in the data processing pipeline:
+- The `FROM` clause specifies the index or data stream to query
+- The `SORT` clause sorts the data by the `@timestamp` field in descending order
+- The `LIMIT` clause restricts the output to the top 3 results
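+
+Pipes can be chained further to filter and aggregate in the same query. The following sketch builds on the example above (the `event_duration` and `client_ip` fields are illustrative; substitute fields from your own index): it filters rows, computes an aggregate per group, and sorts the result:
+
+```esql
+FROM sample_data
+| WHERE event_duration > 5000000
+| STATS median_duration = MEDIAN(event_duration) BY client_ip
+| SORT median_duration DESC
+```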
+
+### User interfaces
+
+You can interact with {{esql}} in two ways:
+
+- **Programmatic access**: Use {{esql}} syntax with the {{es}} `_query` endpoint.
+  - Refer to [](esql/esql-rest.md).
+
+- **Interactive interfaces**: Work with {{esql}} through Elastic user interfaces including Kibana Discover, Dashboards, Dev Tools, and analysis tools in Elastic Security and Observability.
+  - Refer to [Using {{esql}} in {{kib}}](docs-content://explore-analyze/query-filter/languages/esql-kibana.md).
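+
+For example, a programmatic call through the `_query` endpoint might look like the following sketch (assuming a `sample_data` index exists):
+
+```console
+POST /_query?format=txt
+{
+  "query": """
+FROM sample_data
+| SORT @timestamp DESC
+| LIMIT 3
+  """
+}
+```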
 
-* [Syntax reference](esql/esql-syntax-reference.md): Learn the basic syntax of commands, functions, and operators
-* [Advanced workflows](esql/esql-advanced.md): Learn how to handle more complex tasks with these guides, including how to extract, transform, and combine data from multiple indices
-* [Types and fields](esql/esql-types-and-fields.md): Learn about how {{esql}} handles different data types and special fields
-* [Limitations](esql/limitations.md): Learn about the current limitations of {{esql}}
-* [Examples](esql/esql-examples.md): Explore some example queries
-* [Troubleshooting](esql/esql-troubleshooting.md): Learn how to diagnose and resolve issues with {{esql}}

+ 507 - 0
docs/reference/query-languages/esql/esql-cross-clusters.md

@@ -0,0 +1,507 @@
+---
+navigation_title: Query across clusters
+mapped_pages:
+  - https://www.elastic.co/guide/en/elasticsearch/reference/current/esql-cross-clusters.html
+applies_to:
+  stack: preview 9.0, ga 9.1
+  serverless: unavailable
+products:
+  - id: elasticsearch
+---
+
+
+# Use ES|QL across clusters [esql-cross-clusters]
+
+With {{esql}}, you can execute a single query across multiple clusters.
+
+
+## Prerequisites [esql-ccs-prerequisites]
+
+* {{ccs-cap}} requires remote clusters. To set up remote clusters, see [*Remote clusters*](docs-content://deploy-manage/remote-clusters.md).
+
+    To ensure your remote cluster configuration supports {{ccs}}, see [Supported {{ccs}} configurations](docs-content://solutions/search/cross-cluster-search.md#ccs-supported-configurations).
+
+* For full {{ccs}} capabilities, the local and remote cluster must be on the same [subscription level](https://www.elastic.co/subscriptions).
+* The local coordinating node must have the [`remote_cluster_client`](docs-content://deploy-manage/distributed-architecture/clusters-nodes-shards/node-roles.md#remote-node) node role.
+* If you use [sniff mode](docs-content://deploy-manage/remote-clusters/remote-clusters-self-managed.md#sniff-mode), the local coordinating node must be able to connect to seed and gateway nodes on the remote cluster.
+
+    We recommend using gateway nodes capable of serving as coordinating nodes. The seed nodes can be a subset of these gateway nodes.
+
+* If you use [proxy mode](docs-content://deploy-manage/remote-clusters/remote-clusters-self-managed.md#proxy-mode), the local coordinating node must be able to connect to the configured `proxy_address`. The proxy at this address must be able to route connections to gateway and coordinating nodes on the remote cluster.
+* {{ccs-cap}} requires different security privileges on the local cluster and remote cluster. See [Configure privileges for {{ccs}}](docs-content://deploy-manage/remote-clusters/remote-clusters-cert.md#remote-clusters-privileges-ccs) and [*Remote clusters*](docs-content://deploy-manage/remote-clusters.md).
+
+
+## Security model [esql-ccs-security-model]
+
+{{es}} supports two security models for cross-cluster search (CCS):
+
+* [TLS certificate authentication](#esql-ccs-security-model-certificate)
+* [API key authentication](#esql-ccs-security-model-api-key)
+
+::::{tip}
+To check which security model is being used to connect your clusters, run `GET _remote/info`. If you’re using the API key authentication method, you’ll see the `"cluster_credentials"` key in the response.
+
+::::
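+
+The check itself is a single request:
+
+```console
+GET _remote/info
+```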
+
+
+
+### TLS certificate authentication [esql-ccs-security-model-certificate]
+
+::::{admonition} Deprecated in 9.0.0.
+:class: warning
+
+Use [API key authentication](#esql-ccs-security-model-api-key) instead.
+::::
+
+
+TLS certificate authentication secures remote clusters with mutual TLS. This could be the preferred model when a single administrator has full control over both clusters. We generally recommend that roles and their privileges be identical in both clusters.
+
+Refer to [TLS certificate authentication](docs-content://deploy-manage/remote-clusters/remote-clusters-cert.md) for prerequisites and detailed setup instructions.
+
+
+### API key authentication [esql-ccs-security-model-api-key]
+
+The following information pertains to using {{esql}} across clusters with the [**API key based security model**](docs-content://deploy-manage/remote-clusters/remote-clusters-api-key.md). You’ll need to follow the steps on that page for the **full setup instructions**. This page only contains additional information specific to {{esql}}.
+
+API key based cross-cluster search (CCS) enables more granular control over allowed actions between clusters. This may be the preferred model when you have different administrators for different clusters and want more control over who can access what data. In this model, cluster administrators must explicitly define the access given to clusters and users.
+
+You will need to:
+
+* Create an API key on the **remote cluster** using the [Create cross-cluster API key](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-cross-cluster-api-key) API or using the [Kibana API keys UI](docs-content://deploy-manage/api-keys/elasticsearch-api-keys.md).
+* Add the API key to the keystore on the **local cluster**, as part of the steps in [configuring the local cluster](docs-content://deploy-manage/remote-clusters/remote-clusters-api-key.md#remote-clusters-security-api-key-local-actions). All cross-cluster requests from the local cluster are bound by the API key’s privileges.
+
+Using {{esql}} with the API key based security model requires some additional permissions that may not be needed when using the traditional query DSL based search. The following example API call creates a role that can query remote indices using {{esql}} when using the API key based security model. The final privilege, `remote_cluster`, is required to allow remote enrich operations.
+
+```console
+POST /_security/role/remote1
+{
+  "cluster": ["cross_cluster_search"], <1>
+  "indices": [
+    {
+      "names" : [""], <2>
+      "privileges": ["read"]
+    }
+  ],
+  "remote_indices": [ <3>
+    {
+      "names": [ "logs-*" ],
+      "privileges": [ "read","read_cross_cluster" ], <4>
+      "clusters" : ["my_remote_cluster"] <5>
+    }
+  ],
+   "remote_cluster": [ <6>
+        {
+            "privileges": [
+                "monitor_enrich"
+            ],
+            "clusters": [
+                "my_remote_cluster"
+            ]
+        }
+    ]
+}
+```
+
+1. The `cross_cluster_search` cluster privilege is required for the *local* cluster.
+2. Typically, users will have permissions to read both local and remote indices. However, for cases where the role is intended to ONLY search the remote cluster, the `read` permission is still required for the local cluster. To provide read access to the local cluster, but disallow reading any indices in the local cluster, the `names` field may be an empty string.
+3. The indices allowed read access to the remote cluster. The configured [cross-cluster API key](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-cross-cluster-api-key) must also allow this index to be read.
+4. The `read_cross_cluster` privilege is always required when using {{esql}} across clusters with the API key based security model.
+5. The remote clusters to which these privileges apply. This remote cluster must be configured with a [cross-cluster API key](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-cross-cluster-api-key) and connected to the remote cluster before the remote index can be queried. Verify connection using the [Remote cluster info](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-remote-info) API.
+6. Required to allow remote enrichment. Without this, the user cannot read from the `.enrich` indices on the remote cluster. The `remote_cluster` security privilege was introduced in version **8.15.0**.
+
+
+You will then need a user or API key with the permissions you created above. The following example API call creates a user with the `remote1` role.
+
+```console
+POST /_security/user/remote_user
+{
+  "password" : "<PASSWORD>",
+  "roles" : [ "remote1" ]
+}
+```
+
+Remember that all cross-cluster requests from the local cluster are bound by the cross cluster API key’s privileges, which are controlled by the remote cluster’s administrator.
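+
+Alternatively, instead of a native user, you can create an API key that embeds the same privileges in its role descriptors. This is a sketch (the key name is illustrative); it mirrors the `remote1` role created above:
+
+```console
+POST /_security/api_key
+{
+  "name": "esql-ccs-api-key",
+  "role_descriptors": {
+    "remote1": {
+      "cluster": ["cross_cluster_search"],
+      "indices": [
+        { "names": [""], "privileges": ["read"] }
+      ],
+      "remote_indices": [
+        { "names": ["logs-*"], "privileges": ["read", "read_cross_cluster"], "clusters": ["my_remote_cluster"] }
+      ],
+      "remote_cluster": [
+        { "privileges": ["monitor_enrich"], "clusters": ["my_remote_cluster"] }
+      ]
+    }
+  }
+}
+```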
+
+::::{tip}
+Cross-cluster API keys created in versions prior to 8.15.0 will need to be replaced or updated to add the new permissions required for {{esql}} with ENRICH.
+
+::::
+
+
+
+## Remote cluster setup [ccq-remote-cluster-setup]
+
+Once the security model is configured, you can add remote clusters.
+
+The following [cluster update settings](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-put-settings) API request adds three remote clusters: `cluster_one`, `cluster_two`, and `cluster_three`.
+
+```console
+PUT _cluster/settings
+{
+  "persistent": {
+    "cluster": {
+      "remote": {
+        "cluster_one": {
+          "seeds": [
+            "35.238.149.1:9300"
+          ],
+          "skip_unavailable": true
+        },
+        "cluster_two": {
+          "seeds": [
+            "35.238.149.2:9300"
+          ],
+          "skip_unavailable": false
+        },
+        "cluster_three": {  <1>
+          "seeds": [
+            "35.238.149.3:9300"
+          ]
+        }
+      }
+    }
+  }
+}
+```
+
+1. Since `skip_unavailable` was not set on `cluster_three`, it uses the default of `true`. See the [Optional remote clusters](#ccq-skip-unavailable-clusters) section for details.
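+
+You can change the setting for an existing remote cluster with a flattened settings update. For example, to make `cluster_three` required rather than optional:
+
+```console
+PUT _cluster/settings
+{
+  "persistent": {
+    "cluster.remote.cluster_three.skip_unavailable": false
+  }
+}
+```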
+
+
+
+## Query across multiple clusters [ccq-from]
+
+In the `FROM` command, specify data streams and indices on remote clusters using the format `<remote_cluster_name>:<target>`. For instance, the following {{esql}} request queries the `my-index-000001` index on a single remote cluster named `cluster_one`:
+
+```esql
+FROM cluster_one:my-index-000001
+| LIMIT 10
+```
+
+Similarly, this {{esql}} request queries the `my-index-000001` index from three clusters:
+
+* The local ("querying") cluster
+* Two remote clusters, `cluster_one` and `cluster_two`
+
+```esql
+FROM my-index-000001,cluster_one:my-index-000001,cluster_two:my-index-000001
+| LIMIT 10
+```
+
+Likewise, this {{esql}} request queries the `my-index-000001` index from all remote clusters (`cluster_one`, `cluster_two`, and `cluster_three`):
+
+```esql
+FROM *:my-index-000001
+| LIMIT 10
+```
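+
+When querying several clusters at once, it can be useful to see which cluster and index each row came from. One way to do this is with the `METADATA` option of the `FROM` command, which exposes the `_index` field:
+
+```esql
+FROM *:my-index-000001 METADATA _index
+| KEEP _index
+| LIMIT 10
+```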
+
+
+## Cross-cluster metadata [ccq-cluster-details]
+
+Using the `"include_ccs_metadata": true` option, users can request that ES|QL {{ccs}} responses include metadata about the search on each cluster (when the response format is JSON). Here we show an example using the async search endpoint. {{ccs-cap}} metadata is also present in the synchronous search endpoint response when requested. If the search returns partial results and there are partial shard or remote cluster failures, `_clusters` metadata containing the failures will be included in the response regardless of the `include_ccs_metadata` parameter.
+
+```console
+POST /_query/async?format=json
+{
+  "query": """
+    FROM my-index-000001,cluster_one:my-index-000001,cluster_two:my-index*
+    | STATS COUNT(http.response.status_code) BY user.id
+    | LIMIT 2
+  """,
+  "include_ccs_metadata": true
+}
+```
+
+Which returns:
+
+```console-result
+{
+  "is_running": false,
+  "took": 42,  <1>
+  "is_partial": false, <7>
+  "columns" : [
+    {
+      "name" : "COUNT(http.response.status_code)",
+      "type" : "long"
+    },
+    {
+      "name" : "user.id",
+      "type" : "keyword"
+    }
+  ],
+  "values" : [
+    [4, "elkbee"],
+    [1, "kimchy"]
+  ],
+  "_clusters": {  <2>
+    "total": 3,
+    "successful": 3,
+    "running": 0,
+    "skipped": 0,
+    "partial": 0,
+    "failed": 0,
+    "details": { <3>
+      "(local)": { <4>
+        "status": "successful",
+        "indices": "blogs",
+        "took": 41,  <5>
+        "_shards": { <6>
+          "total": 13,
+          "successful": 13,
+          "skipped": 0,
+          "failed": 0
+        }
+      },
+      "cluster_one": {
+        "status": "successful",
+        "indices": "cluster_one:my-index-000001",
+        "took": 38,
+        "_shards": {
+          "total": 4,
+          "successful": 4,
+          "skipped": 0,
+          "failed": 0
+        }
+      },
+      "cluster_two": {
+        "status": "successful",
+        "indices": "cluster_two:my-index*",
+        "took": 40,
+        "_shards": {
+          "total": 18,
+          "successful": 18,
+          "skipped": 1,
+          "failed": 0
+        }
+      }
+    }
+  }
+}
+```
+
+1. How long the entire search (across all clusters) took, in milliseconds.
+2. This section of counters shows all possible cluster search states and how many cluster searches are currently in that state. The clusters can have one of the following statuses: **running**, **successful** (searches on all shards were successful), **skipped** (the search failed on a cluster marked with `skip_unavailable`=`true`), **failed** (the search failed on a cluster marked with `skip_unavailable`=`false`) or **partial** (the search was [interrupted](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-esql) before finishing or has partially failed).
+3. The `_clusters/details` section shows metadata about the search on each cluster.
+4. If you included indices from the local cluster you sent the request to in your {{ccs}}, it is identified as "(local)".
+5. How long (in milliseconds) the search took on each cluster. This can be useful to determine which clusters have slower response times than others.
+6. The shard details for the search on that cluster, including a count of shards that were skipped due to the can-match phase results. Shards are skipped when they cannot have any matching data and therefore are not included in the full ES|QL query.
+7. The `is_partial` field is set to `true` if the search has partial results for any reason, for example due to partial shard failures,
+failures in remote clusters, or if the async query was stopped by calling the [async query stop API](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-esql).
+
+The cross-cluster metadata can be used to determine whether any data came back from a cluster. For instance, in the query below, the wildcard expression for `cluster_two` did not resolve to a concrete index (or indices). The cluster is, therefore, marked as *skipped* and the total number of shards searched is set to zero.
+
+```console
+POST /_query/async?format=json
+{
+  "query": """
+    FROM cluster_one:my-index*,cluster_two:logs*
+    | STATS COUNT(http.response.status_code) BY user.id
+    | LIMIT 2
+  """,
+  "include_ccs_metadata": true
+}
+```
+
+Which returns:
+
+```console-result
+{
+  "is_running": false,
+  "took": 55,
+  "is_partial": true, <3>
+  "columns": [
+     ...
+  ],
+  "values": [
+     ...
+  ],
+  "_clusters": {
+    "total": 2,
+    "successful": 1,
+    "running": 0,
+    "skipped": 1, <1>
+    "partial": 0,
+    "failed": 0,
+    "details": {
+      "cluster_one": {
+        "status": "successful",
+        "indices": "cluster_one:my-index*",
+        "took": 38,
+        "_shards": {
+          "total": 4,
+          "successful": 4,
+          "skipped": 0,
+          "failed": 0
+        }
+      },
+      "cluster_two": {
+        "status": "skipped", <1>
+        "indices": "cluster_two:logs*",
+        "took": 0,
+        "_shards": {
+          "total": 0, <2>
+          "successful": 0,
+          "skipped": 0,
+          "failed": 0
+        }
+      }
+    }
+  }
+}
+```
+
+1. This cluster is marked as *skipped*, since there were no matching indices on that cluster.
+2. Indicates that no shards were searched (due to not having any matching indices).
+3. Since one of the clusters is skipped, the search result is marked as partial.
+
+
+
+## Enrich across clusters [ccq-enrich]
+
+Enrich in {{esql}} across clusters operates similarly to [local enrich](commands/enrich.md). If the enrich policy and its enrich indices are consistent across all clusters, simply write the enrich command as you would without remote clusters. In this default mode, {{esql}} can execute the enrich command on either the local cluster or the remote clusters, aiming to minimize computation or inter-cluster data transfer. Ensuring that the policy exists with consistent data on both the local cluster and the remote clusters is critical for ES|QL to produce a consistent query result.
+
+::::{tip}
+Enrich in {{esql}} across clusters using the API key based security model was introduced in version **8.15.0**. Cross-cluster API keys created in versions prior to 8.15.0 will need to be replaced or updated to use the new required permissions. Refer to the example in the [API key authentication](#esql-ccs-security-model-api-key) section.
+
+::::
+
+
+In the following example, the `ENRICH` command with the `hosts` policy can be executed on either the local cluster or the remote cluster `cluster_one`.
+
+```esql
+FROM my-index-000001,cluster_one:my-index-000001
+| ENRICH hosts ON ip
+| LIMIT 10
+```
+
+When an {{esql}} query enriches data from remote clusters only, the enrich operation can still happen on the local cluster. This means the below query requires the `hosts` enrich policy to exist on the local cluster as well.
+
+```esql
+FROM cluster_one:my-index-000001,cluster_two:my-index-000001
+| LIMIT 10
+| ENRICH hosts ON ip
+```
+
+
+### Enrich with coordinator mode [esql-enrich-coordinator]
+
+{{esql}} provides the enrich `_coordinator` mode to force {{esql}} to execute the enrich command on the local cluster. Use this mode when the enrich policy is not available on the remote clusters, or when maintaining consistent enrich indices across clusters is challenging.
+
+```esql
+FROM my-index-000001,cluster_one:my-index-000001
+| ENRICH _coordinator:hosts ON ip
+| SORT host_name
+| LIMIT 10
+```
+
+::::{important}
+Enrich with the `_coordinator` mode usually increases inter-cluster data transfer and workload on the local cluster.
+
+::::
+
+
+
+### Enrich with remote mode [esql-enrich-remote]
+
+{{esql}} also provides the enrich `_remote` mode to force {{esql}} to execute the enrich command independently on each remote cluster where the target indices reside. This mode is useful for managing different enrich data on each cluster, such as detailed information of hosts for each region where the target (main) indices contain log events from these hosts.
+
+In the below example, the `hosts` enrich policy must exist on every cluster that holds target indices: the local (querying) cluster (because local indices are included) and the remote clusters `cluster_one` and `cluster_two`.
+
+```esql
+FROM my-index-000001,cluster_one:my-index-000001,cluster_two:my-index-000001
+| ENRICH _remote:hosts ON ip
+| SORT host_name
+| LIMIT 10
+```
+
+A `_remote` enrich cannot be executed after a [`STATS`](commands/stats-by.md) command. The following example would result in an error:
+
+```esql
+FROM my-index-000001,cluster_one:my-index-000001,cluster_two:my-index-000001
+| STATS COUNT(*) BY ip
+| ENRICH _remote:hosts ON ip
+| SORT host_name
+| LIMIT 10
+```
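+
+One way to avoid this error, where the query logic allows it, is to run the `_remote` enrich before the `STATS` command, so the enrichment still executes on each remote cluster:
+
+```esql
+FROM my-index-000001,cluster_one:my-index-000001,cluster_two:my-index-000001
+| ENRICH _remote:hosts ON ip
+| STATS COUNT(*) BY host_name
+| SORT host_name
+| LIMIT 10
+```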
+
+
+### Multiple enrich commands [esql-multi-enrich]
+
+You can include multiple enrich commands in the same query with different modes. {{esql}} will attempt to execute them accordingly. For example, this query performs two enriches, first with the `hosts` policy on any cluster and then with the `vendors` policy on the local cluster.
+
+```esql
+FROM my-index-000001,cluster_one:my-index-000001,cluster_two:my-index-000001
+| ENRICH hosts ON ip
+| ENRICH _coordinator:vendors ON os
+| LIMIT 10
+```
+
+A `_remote` enrich command can’t be executed after a `_coordinator` enrich command. The following example would result in an error.
+
+```esql
+FROM my-index-000001,cluster_one:my-index-000001,cluster_two:my-index-000001
+| ENRICH _coordinator:hosts ON ip
+| ENRICH _remote:vendors ON os
+| LIMIT 10
+```
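+
+Where both modes are needed, a valid ordering runs the `_remote` enrich first. This sketch assumes the `os` field exists in the source indices rather than being produced by the `hosts` enrichment:
+
+```esql
+FROM my-index-000001,cluster_one:my-index-000001,cluster_two:my-index-000001
+| ENRICH _remote:vendors ON os
+| ENRICH _coordinator:hosts ON ip
+| LIMIT 10
+```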
+
+
+## Excluding clusters or indices from {{esql}} query [ccq-exclude]
+
+To exclude an entire cluster, prefix the cluster alias with a minus sign in the `FROM` command, for example: `-my_cluster:*`:
+
+```esql
+FROM my-index-000001,cluster*:my-index-000001,-cluster_three:*
+| LIMIT 10
+```
+
+To exclude a specific remote index, prefix the index with a minus sign in the `FROM` command, such as `my_cluster:-my_index`:
+
+```esql
+FROM my-index-000001,cluster*:my-index-*,cluster_three:-my-index-000001
+| LIMIT 10
+```
+
+
+## Skipping problematic remote clusters [ccq-skip-unavailable-clusters]
+
+The behavior of {{ccs}} for {{esql}} when there are problems connecting to, or running a query on, remote clusters differs between versions.
+
+::::{tab-set}
+
+:::{tab-item} 9.1
+Remote clusters are configured with the `skip_unavailable: true` setting by default. With this setting, clusters are marked as `skipped` or `partial` rather than causing queries to fail in the following scenarios:
+
+* The remote cluster is disconnected from the querying cluster, either before or during the query execution.
+* The remote cluster does not have the requested index, or it is not accessible due to security settings.
+* An error happened while processing the query on the remote cluster.
+
+The `partial` status means the remote query either has errors or was interrupted by an explicit user action, but some data may be returned.
+
+Queries will still fail when `skip_unavailable` is set to `true` if none of the specified indices exist. For example, the following queries will fail:
+
+```esql
+FROM cluster_one:missing-index | LIMIT 10
+FROM cluster_one:missing-index* | LIMIT 10
+FROM cluster_one:missing-index*,cluster_two:missing-index | LIMIT 10
+```
+:::
+
+:::{tab-item} 9.0
+If a remote cluster disconnects from the querying cluster, {{ccs}} for {{esql}} will set it to `skipped`
+and continue the query with other clusters, unless the remote cluster's `skip_unavailable` setting is set to `false`,
+in which case the query will fail.
+:::
+
+::::
+
+## Query across clusters during an upgrade [ccq-during-upgrade]
+
+You can still search a remote cluster while performing a rolling upgrade on the local cluster. However, the local coordinating node’s "upgrade from" and "upgrade to" version must be compatible with the remote cluster’s gateway node.
+
+::::{warning}
+Running multiple versions of {{es}} in the same cluster beyond the duration of an upgrade is not supported.
+::::
+
+
+For more information about upgrades, see [Upgrading {{es}}](docs-content://deploy-manage/upgrade/deployment-or-cluster.md).

+ 6 - 86
docs/reference/query-languages/esql/esql-examples.md

@@ -1,91 +1,11 @@
 ---
-navigation_title: "Examples"
+navigation_title: "Tutorials"
 ---
 
-# {{esql}} examples [esql-examples]
+# {{esql}} tutorials [esql-examples]
 
-## Aggregating and enriching windows event logs
+Use these hands-on tutorials to explore practical use cases with {{esql}}:
 
-```esql
-FROM logs-*
-| WHERE event.code IS NOT NULL
-| STATS event_code_count = COUNT(event.code) BY event.code,host.name
-| ENRICH win_events ON event.code WITH event_description
-| WHERE event_description IS NOT NULL and host.name IS NOT NULL
-| RENAME event_description AS event.description
-| SORT event_code_count DESC
-| KEEP event_code_count,event.code,host.name,event.description
-```
-
-* It starts by querying logs from indices that match the pattern "logs-*".
-* Filters events where the "event.code" field is not null.
-* Aggregates the count of events by "event.code" and "host.name."
-* Enriches the events with additional information using the "EVENT_DESCRIPTION" field.
-* Filters out events where "EVENT_DESCRIPTION" or "host.name" is null.
-* Renames "EVENT_DESCRIPTION" as "event.description."
-* Sorts the result by "event_code_count" in descending order.
-* Keeps only selected fields: "event_code_count," "event.code," "host.name," and "event.description."
-
-
-## Summing outbound traffic from a process `curl.exe`
-
-```esql
-FROM logs-endpoint
-| WHERE process.name == "curl.exe"
-| STATS bytes = SUM(destination.bytes) BY destination.address
-| EVAL kb =  bytes/1024
-| SORT kb DESC
-| LIMIT 10
-| KEEP kb,destination.address
-```
-
-* Queries logs from the "logs-endpoint" source.
-* Filters events where the "process.name" field is "curl.exe."
-* Calculates the sum of bytes sent to destination addresses and converts it to kilobytes (KB).
-* Sorts the results by "kb" (kilobytes) in descending order.
-* Limits the output to the top 10 results.
-* Keeps only the "kb" and "destination.address" fields.
-
-
-
-## Manipulating DNS logs to find a high number of unique dns queries per registered domain
-
-```esql
-FROM logs-*
-| GROK dns.question.name "%{DATA}\\.%{GREEDYDATA:dns.question.registered_domain:string}"
-| STATS unique_queries = COUNT_DISTINCT(dns.question.name) BY dns.question.registered_domain, process.name
-| WHERE unique_queries > 10
-| SORT unique_queries DESC
-| RENAME unique_queries AS `Unique Queries`, dns.question.registered_domain AS `Registered Domain`, process.name AS `Process`
-```
-
-* Queries logs from indices matching "logs-*."
-* Uses the "grok" pattern to extract the registered domain from the "dns.question.name" field.
-* Calculates the count of unique DNS queries per registered domain and process name.
-* Filters results where "unique_queries" are greater than 10.
-* Sorts the results by "unique_queries" in descending order.
-* Renames fields for clarity: "unique_queries" to "Unique Queries," "dns.question.registered_domain" to "Registered Domain," and "process.name" to "Process."
-
-
-
-## Identifying high-numbers of outbound user connections
-
-```esql
-FROM logs-*
-| WHERE NOT CIDR_MATCH(destination.ip, "10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")
-| STATS destcount = COUNT(destination.ip) BY user.name, host.name
-| ENRICH ldap_lookup_new ON user.name
-| WHERE group.name IS NOT NULL
-| EVAL follow_up = CASE(destcount >= 100, "true","false")
-| SORT destcount DESC
-| KEEP destcount, host.name, user.name, group.name, follow_up
-```
-
-* Queries logs from indices matching "logs-*."
-* Filters out events where the destination IP address falls within private IP address ranges (e.g., 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16).
-* Calculates the count of unique destination IPs by "user.name" and "host.name."
-* Enriches the "user.name" field with LDAP group information.
-* Filters out results where "group.name" is not null.
-* Uses a "CASE" statement to create a "follow_up" field, setting it to "true" when "destcount" is greater than or equal to 100 and "false" otherwise.
-* Sorts the results by "destcount" in descending order.
-* Keeps selected fields: "destcount," "host.name," "user.name," "group.name," and "follow_up."
+- [](esql-getting-started.md): Learn the basic syntax of the language.
+- [Search and filter with {{esql}}](esql-search-tutorial.md): Learn how to use {{esql}} to search and filter data.
+- [Threat hunting with {{esql}}](docs-content://solutions/security/esql-for-security/esql-threat-hunting-tutorial.md): Learn how to use {{esql}} for advanced threat hunting techniques and security analysis.

+ 424 - 0
docs/reference/query-languages/esql/esql-getting-started.md

@@ -0,0 +1,424 @@
+---
+applies_to:
+  stack: ga
+  serverless: ga
+navigation_title: Get started
+---
+
+# Get started with {{esql}} queries [esql-getting-started]
+
+This hands-on guide covers the basics of using {{esql}} to query and aggregate your data.
+
+::::{tip}
+This getting started guide is also available as an [interactive Python notebook](https://github.com/elastic/elasticsearch-labs/blob/main/notebooks/esql/esql-getting-started.ipynb) in the `elasticsearch-labs` GitHub repository.
+::::
+
+## Prerequisites [esql-getting-started-prerequisites]
+
+To follow along with the queries in this guide, you can either set up your own deployment, or use Elastic’s public {{esql}} demo environment.
+
+:::::::{tab-set}
+
+::::::{tab-item} Own deployment
+First ingest some sample data. In {{kib}}, open the main menu and select **Dev Tools**. Run the following two requests:
+
+```console
+PUT sample_data
+{
+  "mappings": {
+    "properties": {
+      "client_ip": {
+        "type": "ip"
+      },
+      "message": {
+        "type": "keyword"
+      }
+    }
+  }
+}
+
+PUT sample_data/_bulk
+{"index": {}}
+{"@timestamp": "2023-10-23T12:15:03.360Z", "client_ip": "172.21.2.162", "message": "Connected to 10.1.0.3", "event_duration": 3450233}
+{"index": {}}
+{"@timestamp": "2023-10-23T12:27:28.948Z", "client_ip": "172.21.2.113", "message": "Connected to 10.1.0.2", "event_duration": 2764889}
+{"index": {}}
+{"@timestamp": "2023-10-23T13:33:34.937Z", "client_ip": "172.21.0.5", "message": "Disconnected", "event_duration": 1232382}
+{"index": {}}
+{"@timestamp": "2023-10-23T13:51:54.732Z", "client_ip": "172.21.3.15", "message": "Connection error", "event_duration": 725448}
+{"index": {}}
+{"@timestamp": "2023-10-23T13:52:55.015Z", "client_ip": "172.21.3.15", "message": "Connection error", "event_duration": 8268153}
+{"index": {}}
+{"@timestamp": "2023-10-23T13:53:55.832Z", "client_ip": "172.21.3.15", "message": "Connection error", "event_duration": 5033755}
+{"index": {}}
+{"@timestamp": "2023-10-23T13:55:01.543Z", "client_ip": "172.21.3.15", "message": "Connected to 10.1.0.1", "event_duration": 1756467}
+```
+::::::
+
+::::::{tab-item} Demo environment
+The data set used in this guide has been preloaded into the Elastic {{esql}} public demo environment. Visit [ela.st/ql](https://ela.st/ql) to start using it.
+::::::
+
+:::::::
+
+## Run an {{esql}} query [esql-getting-started-running-queries]
+
+In {{kib}}, you can use Console or Discover to run {{esql}} queries:
+
+:::::::{tab-set}
+
+::::::{tab-item} Console
+To get started with {{esql}} in Console, open the main menu and select **Dev Tools**.
+
+The general structure of an [{{esql}} query API](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-esql) request is:
+
+```txt
+POST /_query?format=txt
+{
+  "query": """
+
+  """
+}
+```
+
+Enter the actual {{esql}} query between the two sets of triple quotes. For example:
+
+```txt
+POST /_query?format=txt
+{
+  "query": """
+FROM kibana_sample_data_logs
+  """
+}
+```
+
+::::::
+
+::::::{tab-item} Discover
+To get started with {{esql}} in Discover, open the main menu and select **Discover**. Next, select **Try ES|QL** from the application menu bar.
+
+Adjust the time filter so it includes the timestamps in the sample data (October 23rd, 2023).
+
+After switching to {{esql}} mode, the query bar shows a sample query. You can replace this query with the queries in this getting started guide.
+
+You can adjust the editor’s height by dragging its bottom border to your liking.
+::::::
+
+:::::::
+
+## Your first {{esql}} query [esql-getting-started-first-query]
+
+Each {{esql}} query starts with a [source command](commands/source-commands.md). A source command produces a table, typically with data from {{es}}.
+
+:::{image} ../images/elasticsearch-reference-source-command.svg
+:alt: A source command producing a table from {{es}}
+:::
+
+The [`FROM`](commands/from.md) source command returns a table with documents from a data stream, index, or alias. Each row in the resulting table represents a document. This query returns up to 1000 documents from the `sample_data` index:
+
+```esql
+FROM sample_data
+```
+
+Each column corresponds to a field, and can be accessed by the name of that field.
+
+::::{tip}
+{{esql}} keywords are case-insensitive. The following query is identical to the previous one:
+
+```esql
+from sample_data
+```
+
+::::
+
+
+
+## Processing commands [esql-getting-started-limit]
+
+A source command can be followed by one or more [processing commands](commands/processing-commands.md), separated by a pipe character: `|`. Processing commands change an input table by adding, removing, or changing rows and columns. Processing commands can perform filtering, projection, aggregation, and more.
+
+:::{image} ../images/elasticsearch-reference-esql-limit.png
+:alt: A processing command changing an input table
+:width: 500px
+:::
+
+For example, you can use the [`LIMIT`](commands/limit.md) command to limit the number of rows that are returned, up to a maximum of 10,000 rows:
+
+```esql
+FROM sample_data
+| LIMIT 3
+```
+
+::::{tip}
+For readability, you can put each command on a separate line. However, you don’t have to. The following query is identical to the previous one:
+
+```esql
+FROM sample_data | LIMIT 3
+```
+
+::::
+
+
+
+### Sort a table [esql-getting-started-sort]
+
+:::{image} ../images/elasticsearch-reference-esql-sort.png
+:alt: A processing command sorting an input table
+:width: 500px
+:::
+
+Another processing command is the [`SORT`](commands/sort.md) command. By default, the rows returned by `FROM` don’t have a defined sort order. Use the `SORT` command to sort rows on one or more columns:
+
+```esql
+FROM sample_data
+| SORT @timestamp DESC
+```
+
+
+### Query the data [esql-getting-started-where]
+
+Use the [`WHERE`](commands/where.md) command to query the data. For example, to find all events with a duration longer than 5ms:
+
+```esql
+FROM sample_data
+| WHERE event_duration > 5000000
+```
+
+`WHERE` supports several [operators](functions-operators/operators.md). For example, you can use [`LIKE`](functions-operators/operators.md#esql-like) to run a wildcard query against the `message` column:
+
+```esql
+FROM sample_data
+| WHERE message LIKE "Connected*"
+```
+
+
+### More processing commands [esql-getting-started-more-commands]
+
+There are many other processing commands, like [`KEEP`](commands/keep.md) and [`DROP`](commands/drop.md) to keep or drop columns, [`ENRICH`](commands/enrich.md) to enrich a table with data from indices in {{es}}, and [`DISSECT`](commands/dissect.md) and [`GROK`](commands/grok.md) to process data. Refer to [Processing commands](commands/processing-commands.md) for an overview of all processing commands.
+
+
+## Chain processing commands [esql-getting-started-chaining]
+
+You can chain processing commands, separated by a pipe character: `|`. Each processing command works on the output table of the previous command. The result of a query is the table produced by the final processing command.
+
+:::{image} ../images/elasticsearch-reference-esql-sort-limit.png
+:alt: Processing commands can be chained
+:::
+
+The following example first sorts the table on `@timestamp`, and next limits the result set to 3 rows:
+
+```esql
+FROM sample_data
+| SORT @timestamp DESC
+| LIMIT 3
+```
+
+::::{note}
+The order of processing commands is important. Limiting the result set to 3 rows before sorting would sort only those 3 arbitrary rows, most likely returning a different result than this example, where the sort comes before the limit.
+::::
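+
+For comparison, applying the limit first sorts only 3 arbitrary rows:
+
+```esql
+FROM sample_data
+| LIMIT 3
+| SORT @timestamp DESC
+```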
+
+
+
+## Compute values [esql-getting-started-eval]
+
+Use the [`EVAL`](commands/eval.md) command to append columns with calculated values to a table. For example, the following query appends a `duration_ms` column whose values are computed by dividing `event_duration` by 1,000,000, converting nanoseconds to milliseconds.
+
+```esql
+FROM sample_data
+| EVAL duration_ms = event_duration/1000000.0
+```
+
+`EVAL` supports several [functions](commands/eval.md). For example, to round a number to the closest number with the specified number of digits, use the [`ROUND`](functions-operators/math-functions.md#esql-round) function:
+
+```esql
+FROM sample_data
+| EVAL duration_ms = ROUND(event_duration/1000000.0, 1)
+```
+
+
+## Calculate statistics [esql-getting-started-stats]
+
+You can use {{esql}} not only to query your data, but also to aggregate it. Use the [`STATS`](commands/stats-by.md) command to calculate statistics. For example, the median duration:
+
+```esql
+FROM sample_data
+| STATS median_duration = MEDIAN(event_duration)
+```
+
+You can calculate multiple stats with one command:
+
+```esql
+FROM sample_data
+| STATS median_duration = MEDIAN(event_duration), max_duration = MAX(event_duration)
+```
+
+Use `BY` to group calculated stats by one or more columns. For example, to calculate the median duration per client IP:
+
+```esql
+FROM sample_data
+| STATS median_duration = MEDIAN(event_duration) BY client_ip
+```
+
+
+## Access columns [esql-getting-started-access-columns]
+
+You can access columns by their name. If a name contains special characters, [it needs to be quoted](esql-syntax.md#esql-identifiers) with backticks (`` ` ``).
+
+Assigning an explicit name to a column created by `EVAL` or `STATS` is optional. If you don’t provide a name, the new column name is equal to the function expression. For example:
+
+```esql
+FROM sample_data
+| EVAL event_duration/1000000.0
+```
+
+In this query, `EVAL` adds a new column named `event_duration/1000000.0`. Because its name contains special characters, to access this column, quote it with backticks:
+
+```esql
+FROM sample_data
+| EVAL event_duration/1000000.0
+| STATS MEDIAN(`event_duration/1000000.0`)
+```
+
+
+## Create a histogram [esql-getting-started-histogram]
+
+To track statistics over time, {{esql}} enables you to create histograms using the [`BUCKET`](functions-operators/grouping-functions.md#esql-bucket) function. `BUCKET` creates human-friendly bucket sizes and returns, for each row, the bucket that the row falls into.
+
+Combine `BUCKET` with [`STATS`](commands/stats-by.md) to create a histogram. For example, to count the number of events per hour:
+
+```esql
+FROM sample_data
+| STATS c = COUNT(*) BY bucket = BUCKET(@timestamp, 24, "2023-10-23T00:00:00Z", "2023-10-23T23:59:59Z")
+```
+
+Or the median duration per hour:
+
+```esql
+FROM sample_data
+| KEEP @timestamp, event_duration
+| STATS median_duration = MEDIAN(event_duration) BY bucket = BUCKET(@timestamp, 24, "2023-10-23T00:00:00Z", "2023-10-23T23:59:59Z")
+```
+
+
+## Enrich data [esql-getting-started-enrich]
+
+{{esql}} enables you to [enrich](esql-enrich-data.md) a table with data from indices in {{es}}, using the [`ENRICH`](commands/enrich.md) command.
+
+:::{image} ../images/elasticsearch-reference-esql-enrich.png
+:alt: esql enrich
+:::
+
+Before you can use `ENRICH`, you first need to [create](esql-enrich-data.md#esql-create-enrich-policy) and [execute](esql-enrich-data.md#esql-execute-enrich-policy) an [enrich policy](esql-enrich-data.md#esql-enrich-policy).
+
+:::::::{tab-set}
+
+::::::{tab-item} Own deployment
+The following requests create and execute a policy called `clientip_policy`. The policy links an IP address to an environment ("Development", "QA", or "Production"):
+
+```console
+PUT clientips
+{
+  "mappings": {
+    "properties": {
+      "client_ip": {
+        "type": "keyword"
+      },
+      "env": {
+        "type": "keyword"
+      }
+    }
+  }
+}
+
+PUT clientips/_bulk
+{ "index" : {}}
+{ "client_ip": "172.21.0.5", "env": "Development" }
+{ "index" : {}}
+{ "client_ip": "172.21.2.113", "env": "QA" }
+{ "index" : {}}
+{ "client_ip": "172.21.2.162", "env": "QA" }
+{ "index" : {}}
+{ "client_ip": "172.21.3.15", "env": "Production" }
+{ "index" : {}}
+{ "client_ip": "172.21.3.16", "env": "Production" }
+
+PUT /_enrich/policy/clientip_policy
+{
+  "match": {
+    "indices": "clientips",
+    "match_field": "client_ip",
+    "enrich_fields": ["env"]
+  }
+}
+
+PUT /_enrich/policy/clientip_policy/_execute?wait_for_completion=false
+```
+::::::
+
+::::::{tab-item} Demo environment
+On the demo environment at [ela.st/ql](https://ela.st/ql/), an enrich policy called `clientip_policy` has already been created and executed. The policy links an IP address to an environment ("Development", "QA", or "Production").
+::::::
+
+:::::::
+
+After creating and executing a policy, you can use it with the `ENRICH` command:
+
+```esql
+FROM sample_data
+| KEEP @timestamp, client_ip, event_duration
+| EVAL client_ip = TO_STRING(client_ip)
+| ENRICH clientip_policy ON client_ip WITH env
+```
+
+You can use the new `env` column that’s added by the `ENRICH` command in subsequent commands. For example, to calculate the median duration per environment:
+
+```esql
+FROM sample_data
+| KEEP @timestamp, client_ip, event_duration
+| EVAL client_ip = TO_STRING(client_ip)
+| ENRICH clientip_policy ON client_ip WITH env
+| STATS median_duration = MEDIAN(event_duration) BY env
+```
+
+For more about data enrichment with {{esql}}, refer to [Data enrichment](esql-enrich-data.md).
+
+
+## Process data [esql-getting-started-process-data]
+
+Your data may contain unstructured strings that you want to [structure](esql-process-data-with-dissect-grok.md) to make it easier to analyze the data. For example, the sample data contains log messages like:
+
+```txt
+"Connected to 10.1.0.3"
+```
+
+By extracting the IP address from these messages, you can determine which IP has accepted the most client connections.
+
+To structure unstructured strings at query time, you can use the {{esql}} [`DISSECT`](commands/dissect.md) and [`GROK`](commands/grok.md) commands. `DISSECT` works by breaking up a string using a delimiter-based pattern. `GROK` works similarly, but uses regular expressions. This makes `GROK` more powerful, but generally also slower.
+
+In this case, no regular expressions are needed, as the `message` is straightforward: "Connected to ", followed by the server IP. To match this string, you can use the following `DISSECT` command:
+
+```esql
+FROM sample_data
+| DISSECT message "Connected to %{server_ip}"
+```
+
+This adds a `server_ip` column to those rows that have a `message` that matches this pattern. For other rows, the value of `server_ip` is `null`.
+
+You can use the new `server_ip` column that’s added by the `DISSECT` command in subsequent commands. For example, to determine how many connections each server has accepted:
+
+```esql
+FROM sample_data
+| WHERE STARTS_WITH(message, "Connected to")
+| DISSECT message "Connected to %{server_ip}"
+| STATS COUNT(*) BY server_ip
+```
+
+For more about data processing with {{esql}}, refer to [Data processing with DISSECT and GROK](esql-process-data-with-dissect-grok.md).
+
+
+## Learn more [esql-getting-learn-more]
+
+- Explore the zero-setup, live [{{esql}} demo environment](http://esql.demo.elastic.co/).
+- Follow along with our hands-on tutorials:
+  - [Search and filter with {{esql}}](docs-content://solutions/search/esql-search-tutorial.md): A hands-on tutorial that shows you how to use {{esql}} to search and filter data.
+  - [Threat hunting with {{esql}}](docs-content://solutions/security/esql-for-security/esql-threat-hunting-tutorial.md): A hands-on tutorial that shows you how to use {{esql}} for advanced threat hunting techniques and security analysis.

+ 159 - 0
docs/reference/query-languages/esql/esql-multi-index.md

@@ -0,0 +1,159 @@
+---
+navigation_title: Query multiple indices
+mapped_pages:
+  - https://www.elastic.co/guide/en/elasticsearch/reference/current/esql-multi-index.html
+applies_to:
+  stack: ga
+  serverless: ga
+products:
+  - id: elasticsearch
+---
+
+# Use ES|QL to query multiple indices [esql-multi-index]
+
+With {{esql}}, you can execute a single query across multiple indices, data streams, or aliases. To do so, use comma-separated lists, wildcards, or date math. The following example uses a comma-separated list and a wildcard:
+
+```esql
+FROM employees-00001,other-employees-*
+```
+
+Use the format `<remote_cluster_name>:<target>` to [query data streams and indices on remote clusters](esql-cross-clusters.md):
+
+```esql
+FROM cluster_one:employees-00001,cluster_two:other-employees-*
+```
+
+
+## Field type mismatches [esql-multi-index-invalid-mapping]
+
+When querying multiple indices, data streams, or aliases, you might find that the same field is mapped to multiple different types. For example, consider the two indices with the following field mappings:
+
+**index: events_ip**
+
+```
+{
+  "mappings": {
+    "properties": {
+      "@timestamp":     { "type": "date" },
+      "client_ip":      { "type": "ip" },
+      "event_duration": { "type": "long" },
+      "message":        { "type": "keyword" }
+    }
+  }
+}
+```
+
+**index: events_keyword**
+
+```
+{
+  "mappings": {
+    "properties": {
+      "@timestamp":     { "type": "date" },
+      "client_ip":      { "type": "keyword" },
+      "event_duration": { "type": "long" },
+      "message":        { "type": "keyword" }
+    }
+  }
+}
+```
+
+When you query each of these individually with a simple query like `FROM events_ip`, the results are provided with type-specific columns:
+
+```esql
+FROM events_ip
+| SORT @timestamp DESC
+```
+
+| @timestamp:date | client_ip:ip | event_duration:long | message:keyword |
+| --- | --- | --- | --- |
+| 2023-10-23T13:55:01.543Z | 172.21.3.15 | 1756467 | Connected to 10.1.0.1 |
+| 2023-10-23T13:53:55.832Z | 172.21.3.15 | 5033755 | Connection error |
+| 2023-10-23T13:52:55.015Z | 172.21.3.15 | 8268153 | Connection error |
+
+Note how the `client_ip` column is correctly identified as type `ip`, and all values are displayed. However, if instead the query sources two conflicting indices with `FROM events_*`, the type of the `client_ip` column cannot be determined and is reported as `unsupported` with all values returned as `null`.
+
+$$$query-unsupported$$$
+
+```esql
+FROM events_*
+| SORT @timestamp DESC
+```
+
+| @timestamp:date | client_ip:unsupported | event_duration:long | message:keyword |
+| --- | --- | --- | --- |
+| 2023-10-23T13:55:01.543Z | null | 1756467 | Connected to 10.1.0.1 |
+| 2023-10-23T13:53:55.832Z | null | 5033755 | Connection error |
+| 2023-10-23T13:52:55.015Z | null | 8268153 | Connection error |
+| 2023-10-23T13:51:54.732Z | null | 725448 | Connection error |
+| 2023-10-23T13:33:34.937Z | null | 1232382 | Disconnected |
+| 2023-10-23T12:27:28.948Z | null | 2764889 | Connected to 10.1.0.2 |
+| 2023-10-23T12:15:03.360Z | null | 3450233 | Connected to 10.1.0.3 |
+
+In addition, if the query refers to this unsupported field directly, the query fails:
+
+```esql
+FROM events_*
+| SORT client_ip DESC
+```
+
+```bash
+Cannot use field [client_ip] due to ambiguities being mapped as
+[2] incompatible types:
+    [ip] in [events_ip],
+    [keyword] in [events_keyword]
+```
+
+
+## Union types [esql-multi-index-union-types]
+
+::::{warning}
+This functionality is in technical preview and may be changed or removed in a future release. Elastic will work to fix any issues, but features in technical preview are not subject to the support SLA of official GA features.
+::::
+
+
+{{esql}} has a way to handle [field type mismatches](#esql-multi-index-invalid-mapping). When the same field is mapped to multiple types in multiple indices, the type of the field is understood to be a *union* of the various types in the index mappings. As seen in the preceding examples, this *union type* cannot be used in the results, and cannot be referred to by the query, except in `KEEP`, `DROP`, or when it's passed to a type conversion function that accepts all the types in the *union* and converts the field to a single type. {{esql}} offers a suite of [type conversion functions](functions-operators/type-conversion-functions.md) to achieve this.
+
+In the above examples, the query can use a command like `EVAL client_ip = TO_IP(client_ip)` to resolve the union of `ip` and `keyword` to just `ip`. You can also use the type-conversion syntax `EVAL client_ip = client_ip::IP`. Alternatively, the query could use [`TO_STRING`](functions-operators/type-conversion-functions.md#esql-to_string) to convert all supported types into `KEYWORD`.
+
+For example, the [query](#query-unsupported) that returned `client_ip:unsupported` with `null` values can be improved using the `TO_IP` function or the equivalent `field::ip` syntax. These changes also resolve the error message. As long as the only reference to the original field is to pass it to a conversion function that resolves the type ambiguity, no error results.
+
+```esql
+FROM events_*
+| EVAL client_ip = TO_IP(client_ip)
+| KEEP @timestamp, client_ip, event_duration, message
+| SORT @timestamp DESC
+```
+
+| @timestamp:date | client_ip:ip | event_duration:long | message:keyword |
+| --- | --- | --- | --- |
+| 2023-10-23T13:55:01.543Z | 172.21.3.15 | 1756467 | Connected to 10.1.0.1 |
+| 2023-10-23T13:53:55.832Z | 172.21.3.15 | 5033755 | Connection error |
+| 2023-10-23T13:52:55.015Z | 172.21.3.15 | 8268153 | Connection error |
+| 2023-10-23T13:51:54.732Z | 172.21.3.15 | 725448 | Connection error |
+| 2023-10-23T13:33:34.937Z | 172.21.0.5 | 1232382 | Disconnected |
+| 2023-10-23T12:27:28.948Z | 172.21.2.113 | 2764889 | Connected to 10.1.0.2 |
+| 2023-10-23T12:15:03.360Z | 172.21.2.162 | 3450233 | Connected to 10.1.0.3 |
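+
+The same result can be written with the cast syntax instead of `TO_IP`:
+
+```esql
+FROM events_*
+| EVAL client_ip = client_ip::IP
+| KEEP @timestamp, client_ip, event_duration, message
+| SORT @timestamp DESC
+```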
+
+
+## Index metadata [esql-multi-index-index-metadata]
+
+It can be helpful to know the particular index from which each row is sourced. To get this information, use the [`METADATA`](esql-metadata-fields.md) option on the [`FROM`](commands/from.md) command.
+
+```esql
+FROM events_* METADATA _index
+| EVAL client_ip = TO_IP(client_ip)
+| KEEP _index, @timestamp, client_ip, event_duration, message
+| SORT @timestamp DESC
+```
+
+| _index:keyword | @timestamp:date | client_ip:ip | event_duration:long | message:keyword |
+| --- | --- | --- | --- | --- |
+| events_ip | 2023-10-23T13:55:01.543Z | 172.21.3.15 | 1756467 | Connected to 10.1.0.1 |
+| events_ip | 2023-10-23T13:53:55.832Z | 172.21.3.15 | 5033755 | Connection error |
+| events_ip | 2023-10-23T13:52:55.015Z | 172.21.3.15 | 8268153 | Connection error |
+| events_keyword | 2023-10-23T13:51:54.732Z | 172.21.3.15 | 725448 | Connection error |
+| events_keyword | 2023-10-23T13:33:34.937Z | 172.21.0.5 | 1232382 | Disconnected |
+| events_keyword | 2023-10-23T12:27:28.948Z | 172.21.2.113 | 2764889 | Connected to 10.1.0.2 |
+| events_keyword | 2023-10-23T12:15:03.360Z | 172.21.2.162 | 3450233 | Connected to 10.1.0.3 |
+

+ 13 - 0
docs/reference/query-languages/esql/esql-multi.md

@@ -0,0 +1,13 @@
+---
+applies_to:
+  stack: ga
+  serverless: ga
+navigation_title: Query multiple sources
+---
+
+# Query multiple indices or clusters with {{esql}}
+
+{{esql}} allows you to query across multiple indices or clusters. Learn more in the following sections:
+
+* [Query multiple indices](esql-multi-index.md)
+* [Query across clusters](esql-cross-clusters.md)

+ 353 - 0
docs/reference/query-languages/esql/esql-rest.md

@@ -0,0 +1,353 @@
+---
+navigation_title: "REST API"
+mapped_pages:
+  - https://www.elastic.co/guide/en/elasticsearch/reference/current/esql-rest.html
+applies_to:
+  stack: ga
+  serverless: ga
+products:
+  - id: elasticsearch
+---
+
+# Use the {{esql}} REST API [esql-rest]
+
+::::{tip}
+The [Search and filter with {{esql}}](docs-content://solutions/search/esql-search-tutorial.md) tutorial provides a hands-on introduction to the {{esql}} `_query` API.
+::::
+
+## Overview [esql-rest-overview]
+
+The [`_query` API](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-esql) accepts an {{esql}} query string in the `query` parameter, runs it, and returns the results. For example:
+
+```console
+POST /_query?format=txt
+{
+  "query": "FROM library | KEEP author, name, page_count, release_date | SORT page_count DESC | LIMIT 5"
+}
+```
+
+Which returns:
+
+```text
+     author      |        name        |  page_count   | release_date
+-----------------+--------------------+---------------+------------------------
+Peter F. Hamilton|Pandora's Star      |768            |2004-03-02T00:00:00.000Z
+Vernor Vinge     |A Fire Upon the Deep|613            |1992-06-01T00:00:00.000Z
+Frank Herbert    |Dune                |604            |1965-06-01T00:00:00.000Z
+Alastair Reynolds|Revelation Space    |585            |2000-03-15T00:00:00.000Z
+James S.A. Corey |Leviathan Wakes     |561            |2011-06-02T00:00:00.000Z
+```
+
+
+### Run the {{esql}} query API in Console [esql-kibana-console]
+
+We recommend using [Console](docs-content://explore-analyze/query-filter/tools/console.md) to run the {{esql}} query API, because of its rich autocomplete features.
+
+When creating the query, using triple quotes (`"""`) allows you to use special characters like quotes (`"`) without having to escape them. They also make it easier to write multi-line requests.
+
+```console
+POST /_query?format=txt
+{
+  "query": """
+    FROM library
+    | KEEP author, name, page_count, release_date
+    | SORT page_count DESC
+    | LIMIT 5
+  """
+}
+```
+
+### Response formats [esql-rest-format]
+
+{{esql}} can return data in the following human-readable and binary formats. You can set the format by specifying the `format` parameter in the URL, or by setting the `Accept` or `Content-Type` HTTP header.
+
+For example:
+
+```console
+POST /_query?format=yaml
+```
+
+::::{note}
+The URL parameter takes precedence over the HTTP headers. If neither is specified, the response is returned in the same format as the request.
+::::
+
+#### Structured formats
+
+Complete responses with metadata. Useful for automatic parsing.
+
+| `format` | HTTP header | Description |
+| --- | --- | --- |
+| `json` | `application/json` | [JSON](https://www.json.org/) (JavaScript Object Notation) human-readable format |
+| `yaml` | `application/yaml` | [YAML](https://en.wikipedia.org/wiki/YAML) (YAML Ain’t Markup Language) human-readable format |
+
+#### Tabular formats
+
+Query results only, without metadata. Useful for quick and manual data previews.
+
+| `format` | HTTP header | Description |
+| --- | --- | --- |
+| `csv` | `text/csv` | [Comma-separated values](https://en.wikipedia.org/wiki/Comma-separated_values) |
+| `tsv` | `text/tab-separated-values` | [Tab-separated values](https://en.wikipedia.org/wiki/Tab-separated_values) |
+| `txt` | `text/plain` | CLI-like representation |
+
+::::{tip}
+The `csv` format accepts a `delimiter` URL query parameter, which specifies the character used to separate the CSV values. It defaults to comma (`,`) and cannot be a double quote (`"`), carriage return (`\r`), or newline (`\n`). Tab (`\t`) is also not allowed; use the `tsv` format instead.
+::::
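+
+For example, to return semicolon-separated values from the `library` index used elsewhere on this page:
+
+```console
+POST /_query?format=csv&delimiter=;
+{
+  "query": """
+    FROM library
+    | KEEP author, name
+    | SORT page_count DESC
+    | LIMIT 5
+  """
+}
+```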
+
+#### Binary formats
+
+Compact binary encoding. To be used by applications.
+
+| `format` | HTTP header | Description |
+| --- | --- | --- |
+| `cbor` | `application/cbor` | [Concise Binary Object Representation](https://cbor.io/) |
+| `smile` | `application/smile` | [Smile](https://en.wikipedia.org/wiki/Smile_(data_interchange_format)) binary data format similar to CBOR |
+| `arrow` | `application/vnd.apache.arrow.stream` | **Experimental.** [Apache Arrow](https://arrow.apache.org/) dataframes, [IPC streaming format](https://arrow.apache.org/docs/format/Columnar.html#ipc-streaming-format) |
+
+
+### Filtering using {{es}} Query DSL [esql-rest-filtering]
+
+Specify a Query DSL query in the `filter` parameter to filter the set of documents that an {{esql}} query runs on.
+
+```console
+POST /_query?format=txt
+{
+  "query": """
+    FROM library
+    | KEEP author, name, page_count, release_date
+    | SORT page_count DESC
+    | LIMIT 5
+  """,
+  "filter": {
+    "range": {
+      "page_count": {
+        "gte": 100,
+        "lte": 200
+      }
+    }
+  }
+}
+```
+
+Which returns:
+
+```text
+    author     |                name                |  page_count   | release_date
+---------------+------------------------------------+---------------+------------------------
+Douglas Adams  |The Hitchhiker's Guide to the Galaxy|180            |1979-10-12T00:00:00.000Z
+```
+
+
+### Columnar results [esql-rest-columnar]
+
+By default, {{esql}} returns results as rows. For example, `FROM` returns each individual document as one row. For the `json`, `yaml`, `cbor` and `smile` [formats](#esql-rest-format), {{esql}} can return the results in a columnar fashion where one row represents all the values of a certain column in the results.
+
+```console
+POST /_query?format=json
+{
+  "query": """
+    FROM library
+    | KEEP author, name, page_count, release_date
+    | SORT page_count DESC
+    | LIMIT 5
+  """,
+  "columnar": true
+}
+```
+
+Which returns:
+
+```console-result
+{
+  "took": 28,
+  "is_partial": false,
+  "columns": [
+    {"name": "author", "type": "text"},
+    {"name": "name", "type": "text"},
+    {"name": "page_count", "type": "integer"},
+    {"name": "release_date", "type": "date"}
+  ],
+  "values": [
+    ["Peter F. Hamilton", "Vernor Vinge", "Frank Herbert", "Alastair Reynolds", "James S.A. Corey"],
+    ["Pandora's Star", "A Fire Upon the Deep", "Dune", "Revelation Space", "Leviathan Wakes"],
+    [768, 613, 604, 585, 561],
+    ["2004-03-02T00:00:00.000Z", "1992-06-01T00:00:00.000Z", "1965-06-01T00:00:00.000Z", "2000-03-15T00:00:00.000Z", "2011-06-02T00:00:00.000Z"]
+  ]
+}
+```
+
+
+### Returning localized results [esql-locale-param]
+
+Use the `locale` parameter in the request body to return results (especially dates) formatted per the conventions of the locale. If `locale` is not specified, it defaults to `en-US` (English). Refer to [JDK Supported Locales](https://www.oracle.com/java/technologies/javase/jdk17-suported-locales.html).
+
+Syntax: the `locale` parameter accepts language tags in the (case-insensitive) format `xy` and `xy-XY`.
+
+For example, to return a month name in French:
+
+```console
+POST /_query
+{
+  "locale": "fr-FR",
+  "query": """
+          ROW birth_date_string = "2023-01-15T00:00:00.000Z"
+          | EVAL birth_date = date_parse(birth_date_string)
+          | EVAL month_of_birth = DATE_FORMAT("MMMM",birth_date)
+          | LIMIT 5
+   """
+}
+```
+
+
+### Passing parameters to a query [esql-rest-params]
+
+Values, for example for a condition, can be passed to a query "inline", by integrating the value in the query string itself:
+
+```console
+POST /_query
+{
+  "query": """
+    FROM library
+    | EVAL year = DATE_EXTRACT("year", release_date)
+    | WHERE page_count > 300 AND author == "Frank Herbert"
+    | STATS count = COUNT(*) by year
+    | WHERE count > 0
+    | LIMIT 5
+  """
+}
+```
+
+To guard against injection attacks, extract the values into a separate list of parameters. Use question mark placeholders (`?`) in the query string for each of the parameters:
+
+```console
+POST /_query
+{
+  "query": """
+    FROM library
+    | EVAL year = DATE_EXTRACT("year", release_date)
+    | WHERE page_count > ? AND author == ?
+    | STATS count = COUNT(*) by year
+    | WHERE count > ?
+    | LIMIT 5
+  """,
+  "params": [300, "Frank Herbert", 0]
+}
+```
+
+The parameters can be named parameters or positional parameters.
+
+Named parameters use question mark placeholders (`?`) followed by a parameter name.
+
+```console
+POST /_query
+{
+  "query": """
+    FROM library
+    | EVAL year = DATE_EXTRACT("year", release_date)
+    | WHERE page_count > ?page_count AND author == ?author
+    | STATS count = COUNT(*) by year
+    | WHERE count > ?count
+    | LIMIT 5
+  """,
+  "params": [{"page_count" : 300}, {"author" : "Frank Herbert"}, {"count" : 0}]
+}
+```
+
+Positional parameters use question mark placeholders (`?`) followed by an integer.
+
+```console
+POST /_query
+{
+  "query": """
+    FROM library
+    | EVAL year = DATE_EXTRACT("year", release_date)
+    | WHERE page_count > ?1 AND author == ?2
+    | STATS count = COUNT(*) by year
+    | WHERE count > ?3
+    | LIMIT 5
+  """,
+  "params": [300, "Frank Herbert", 0]
+}
+```
+
+
+### Running an async {{esql}} query [esql-rest-async-query]
+
+The [{{esql}} async query API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-esql-async-query) lets you asynchronously execute a query request, monitor its progress, and retrieve results when they become available.
+
+{{esql}} queries typically execute quickly. However, queries across large data sets or frozen data can take some time. To avoid long waits, run an async {{esql}} query.
+
+A query initiated through the async query API may or may not return results within the timeout. The `wait_for_completion_timeout` property determines how long to wait for the results. If the results are not available by this time, a [query id](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-esql-async-query#esql-async-query-api-response-body-query-id) is returned, which you can use later to retrieve the results. For example:
+
+```console
+POST /_query/async
+{
+  "query": """
+    FROM library
+    | EVAL year = DATE_TRUNC(1 YEARS, release_date)
+    | STATS MAX(page_count) BY year
+    | SORT year
+    | LIMIT 5
+  """,
+  "wait_for_completion_timeout": "2s"
+}
+```
+
+If the results are not available within the given timeout period, 2 seconds in this case, no results are returned but rather a response that includes:
+
+* A query ID
+* An `is_running` value of *true*, indicating the query is ongoing
+
+The query continues to run in the background without blocking other requests.
+
+```console-result
+{
+  "id": "FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE=",
+  "is_running": true
+}
+```
+
+To check the progress of an async query, use the [{{esql}} async query get API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-esql-async-query-get) with the query ID. Specify how long you’d like to wait for complete results in the `wait_for_completion_timeout` parameter.
+
+```console
+GET /_query/async/FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE=?wait_for_completion_timeout=30s
+```
+
+If the response’s `is_running` value is `false`, the query has finished and the results are returned, along with the `took` time for the query.
+
+```console-result
+{
+  "is_running": false,
+  "took": 48,
+  "columns": ...
+}
+```
+
+To stop a running async query and return the results computed so far, use the [async stop API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-esql-async-query-stop) with the query ID.
+
+```console
+POST /_query/async/FmNJRUZ1YWZCU3dHY1BIOUhaenVSRkEaaXFlZ3h4c1RTWFNocDdnY2FSaERnUTozNDE=/stop
+```
+The query is stopped and the response contains the results computed so far. The response format is the same as that of the `get` API.
+
+```console-result
+{
+  "is_running": false,
+  "took": 48,
+  "is_partial": true,
+  "columns": ...
+}
+```
+This API can be used to retrieve results even if the query has already completed, as long as it's within the `keep_alive` window.
+The `is_partial` field indicates result completeness. A value of `true` means the results are potentially incomplete.
+
+Use the [{{esql}} async query delete API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-esql-async-query-delete) to delete an async query before the `keep_alive` period ends. If the query is still running, {{es}} cancels it.
+
+```console
+DELETE /_query/async/FmdMX2pIang3UWhLRU5QS0lqdlppYncaMUpYQ05oSkpTc3kwZ21EdC1tbFJXQToxOTI=
+```
+
+::::{note}
+You will also receive the async ID and running status in the `X-Elasticsearch-Async-Id` and `X-Elasticsearch-Async-Is-Running` HTTP headers of the response, respectively.
+This is useful if you use a tabular text format like `txt`, `csv`, or `tsv`, because those fields aren't included in the response body for these formats.
+::::
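+
+For example, the following runs an async query whose results are returned as CSV. With this format, the query ID and running status are only available through those headers (a sketch that reuses the `library` index from the earlier examples):
+
+```console
+POST /_query/async?format=csv
+{
+  "query": """
+    FROM library
+    | STATS count = COUNT(*)
+  """,
+  "wait_for_completion_timeout": "2s"
+}
+```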

+ 475 - 0
docs/reference/query-languages/esql/esql-search-tutorial.md

@@ -0,0 +1,475 @@
+---
+applies_to:
+  stack: preview 9.0, ga 9.1
+  serverless: ga
+navigation_title: Search and filter with ES|QL
+---
+
+# Search and filter with {{esql}}
+
+This is a hands-on introduction to the basics of full-text search and semantic search, using {{esql}}.
+
+In this scenario, we're implementing search for a cooking blog. The blog contains recipes with various attributes including textual content, categorical data, and numerical ratings.
+
+## Requirements
+
+You need a running {{es}} cluster, together with {{kib}} to use the Dev Tools API Console. Refer to [choose your deployment type](docs-content://deploy-manage/deploy.md#choosing-your-deployment-type) for deployment options.
+
+Want to get started quickly? Run the following command in your terminal to set up a [single-node local cluster in Docker](docs-content://solutions/search/run-elasticsearch-locally.md):
+
+```sh
+curl -fsSL https://elastic.co/start-local | sh
+```
+
+## Running {{esql}} queries
+
+In this tutorial, {{esql}} examples are displayed in the following format:
+
+```esql
+FROM cooking_blog
+| WHERE description:"fluffy pancakes"
+| LIMIT 1000
+```
+
+If you want to run these queries in the [Dev Tools Console](docs-content://explore-analyze/query-filter/languages/esql-rest.md#esql-kibana-console), you need to use the following syntax:
+
+```console
+POST /_query?format=txt
+{
+  "query": """
+    FROM cooking_blog 
+    | WHERE description:"fluffy pancakes"  
+    | LIMIT 1000 
+  """
+}
+```
+
+If you'd prefer to use your favorite programming language, refer to [Client libraries](docs-content://solutions/search/site-or-app/clients.md) for a list of official and community-supported clients.
+
+## Step 1: Create an index
+
+Create the `cooking_blog` index to get started:
+
+```console
+PUT /cooking_blog
+```
+
+Now define the mappings for the index:
+
+```console
+PUT /cooking_blog/_mapping
+{
+  "properties": {
+    "title": {
+      "type": "text",
+      "analyzer": "standard", <1>
+      "fields": { <2>
+        "keyword": {
+          "type": "keyword",
+          "ignore_above": 256 <3>
+        }
+      }
+    },
+    "description": {
+      "type": "text",
+      "fields": {
+        "keyword": {
+          "type": "keyword"
+        }
+      }
+    },
+    "author": {
+      "type": "text",
+      "fields": {
+        "keyword": {
+          "type": "keyword"
+        }
+      }
+    },
+    "date": {
+      "type": "date",
+      "format": "yyyy-MM-dd"
+    },
+    "category": {
+      "type": "text",
+      "fields": {
+        "keyword": {
+          "type": "keyword"
+        }
+      }
+    },
+    "tags": {
+      "type": "text",
+      "fields": {
+        "keyword": {
+          "type": "keyword"
+        }
+      }
+    },
+    "rating": {
+      "type": "float"
+    }
+  }
+}
+```
+
+1. `analyzer`: Used for text analysis. If you don't specify it, the `standard` analyzer is used by default for `text` fields. It’s included here for demonstration purposes. To know more about analyzers, refer to [Anatomy of an analyzer](docs-content://manage-data/data-store/text-analysis/anatomy-of-an-analyzer.md).
+2. `ignore_above`: Prevents indexing values longer than 256 characters in the `keyword` field. This is the default value and it’s included here for demonstration purposes. It helps to save disk space and avoid potential issues with Lucene’s term byte-length limit. For more information, refer [ignore_above parameter](/reference/elasticsearch/mapping-reference/ignore-above.md).
+3. `description`: A field declared with both `text` and `keyword` [data types](/reference/elasticsearch/mapping-reference/field-data-types.md). Such fields are called  [Multi-fields](/reference/elasticsearch/mapping-reference/multi-fields.md). This enables both full-text search and exact matching/filtering on the same field. If you use [dynamic mapping](docs-content://manage-data/data-store/mapping/dynamic-field-mapping.md), these multi-fields will be created automatically. Other fields in the mapping like `author`, `category`, `tags` are also declared as multi-fields.
+
+::::{tip}
+Full-text search is powered by [text analysis](docs-content://solutions/search/full-text/text-analysis-during-search.md). Text analysis normalizes and standardizes text data so it can be efficiently stored in an inverted index and searched in near real-time. Analysis happens at both [index and search time](docs-content://manage-data/data-store/text-analysis/index-search-analysis.md). This tutorial won't cover analysis in detail, but it's important to understand how text is processed to create effective search queries.
+::::
+
+## Step 2: Add sample blog posts to your index [full-text-filter-tutorial-index-data]
+
+Next, you’ll need to index some example blog posts using the [Bulk API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-bulk). Note that `text` fields are analyzed and multi-fields are generated at index time.
+
+```console
+POST /cooking_blog/_bulk?refresh=wait_for
+{"index":{"_id":"1"}}
+{"title":"Perfect Pancakes: A Fluffy Breakfast Delight","description":"Learn the secrets to making the fluffiest pancakes, so amazing you won't believe your tastebuds. This recipe uses buttermilk and a special folding technique to create light, airy pancakes that are perfect for lazy Sunday mornings.","author":"Maria Rodriguez","date":"2023-05-01","category":"Breakfast","tags":["pancakes","breakfast","easy recipes"],"rating":4.8}
+{"index":{"_id":"2"}}
+{"title":"Spicy Thai Green Curry: A Vegetarian Adventure","description":"Dive into the flavors of Thailand with this vibrant green curry. Packed with vegetables and aromatic herbs, this dish is both healthy and satisfying. Don't worry about the heat - you can easily adjust the spice level to your liking.","author":"Liam Chen","date":"2023-05-05","category":"Main Course","tags":["thai","vegetarian","curry","spicy"],"rating":4.6}
+{"index":{"_id":"3"}}
+{"title":"Classic Beef Stroganoff: A Creamy Comfort Food","description":"Indulge in this rich and creamy beef stroganoff. Tender strips of beef in a savory mushroom sauce, served over a bed of egg noodles. It's the ultimate comfort food for chilly evenings.","author":"Emma Watson","date":"2023-05-10","category":"Main Course","tags":["beef","pasta","comfort food"],"rating":4.7}
+{"index":{"_id":"4"}}
+{"title":"Vegan Chocolate Avocado Mousse","description":"Discover the magic of avocado in this rich, vegan chocolate mousse. Creamy, indulgent, and secretly healthy, it's the perfect guilt-free dessert for chocolate lovers.","author":"Alex Green","date":"2023-05-15","category":"Dessert","tags":["vegan","chocolate","avocado","healthy dessert"],"rating":4.5}
+{"index":{"_id":"5"}}
+{"title":"Crispy Oven-Fried Chicken","description":"Get that perfect crunch without the deep fryer! This oven-fried chicken recipe delivers crispy, juicy results every time. A healthier take on the classic comfort food.","author":"Maria Rodriguez","date":"2023-05-20","category":"Main Course","tags":["chicken","oven-fried","healthy"],"rating":4.9}
+```
+
+## Step 3: Basic search operations
+
+Full-text search involves executing text-based queries across one or more document fields. In this section, you'll start with simple text matching and build up to understanding how search results are ranked.
+
+{{esql}} provides multiple functions for full-text search, including `MATCH`, `MATCH_PHRASE`, and `QSTR`. For basic text matching, you can use either:
+
+1. Full [match function](/reference/query-languages/esql/functions-operators/search-functions.md#esql-match) syntax: `match(field, "search terms")`
+2. Compact syntax using the [match operator `:`](/reference/query-languages/esql/functions-operators/operators.md#esql-match-operator): `field:"search terms"`
+
+Both are equivalent for basic matching and can be used interchangeably. The compact syntax is more concise, while the function syntax allows for more configuration options. We use the compact syntax in most examples for brevity.
+
+Refer to the [`MATCH` function](/reference/query-languages/esql/functions-operators/search-functions.md#esql-match) reference docs for advanced parameters available with the function syntax.
+
+### Perform your first search query
+
+Let's start with the simplest possible search - looking for documents that contain specific words:
+
+```esql
+FROM cooking_blog
+| WHERE description:"fluffy pancakes"
+| LIMIT 1000
+```
+
+This query searches the `description` field for documents containing either "fluffy" OR "pancakes" (or both). By default, {{esql}} uses OR logic between search terms, so it matches documents that contain any of the specified words.
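+
+The same query can also be written with the full function syntax, which behaves identically for this basic case:
+
+```esql
+FROM cooking_blog
+| WHERE match(description, "fluffy pancakes")
+| LIMIT 1000
+```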
+
+### Control which fields appear in results
+
+You can specify the exact fields to include in your results using the `KEEP` command:
+
+```esql
+FROM cooking_blog
+| WHERE description:"fluffy pancakes"
+| KEEP title, description, rating
+| LIMIT 1000
+```
+
+This helps reduce the amount of data returned and focuses on the information you need.
+
+### Understand relevance scoring
+
+Search results can be ranked based on how well they match your query. To calculate and use relevance scores, you need to explicitly request the `_score` metadata:
+
+```esql
+FROM cooking_blog METADATA _score
+| WHERE description:"fluffy pancakes"
+| KEEP title, description, _score
+| SORT _score DESC
+| LIMIT 1000
+```
+
+Notice two important things:
+1. `METADATA _score` tells {{esql}} to include relevance scores in the results
+2. `SORT _score DESC` orders results by relevance (highest scores first)
+
+If you don't include `METADATA _score` in your query, you won't see relevance scores in your results. This means you won't be able to sort by relevance or filter based on relevance scores.
+
+Without explicit sorting, results aren't ordered by relevance even when scores are calculated. If you want the most relevant results first, you must sort by `_score` explicitly, using `SORT _score DESC` (or `SORT _score ASC` for the least relevant first).
+
+:::{tip}
+When you include `METADATA _score`, search functions included in `WHERE` conditions contribute to the relevance score. Filtering operations (like range conditions and exact matches) don't affect the score.
+:::
+
+### Find exact matches
+
+Sometimes you need exact matches rather than full-text search. Use the `.keyword` field for case-sensitive exact matching:
+
+```esql
+FROM cooking_blog
+| WHERE category.keyword == "Breakfast"  // Exact match (case-sensitive)
+| KEEP title, category, rating
+| SORT rating DESC
+| LIMIT 1000
+```
+
+This is fundamentally different from full-text search - it's a binary yes/no filter that doesn't affect relevance scoring.
+
+## Step 4: Search precision control
+
+Now that you understand basic searching, explore how to control the precision of your text matches.
+
+### Require all search terms (AND logic)
+
+By default, searches with match use OR logic between terms. To require ALL terms to match, use the function syntax with the `operator` parameter to specify AND logic:
+
+```esql
+FROM cooking_blog
+| WHERE match(description, "fluffy pancakes", {"operator": "AND"})
+| LIMIT 1000
+```
+
+This stricter search returns *zero hits* on our sample data, as no document contains both "fluffy" and "pancakes" in the description.
+
+:::{note}
+The `MATCH` function with AND logic doesn't require terms to be adjacent or in order. It only requires that all terms appear somewhere in the field. Use `MATCH_PHRASE` to [search for exact phrases](#search-for-exact-phrases).
+:::
+
+### Set a minimum number of terms to match
+
+Sometimes requiring all terms is too strict, but the default OR behavior is too lenient. You can specify a minimum number of terms that must match:
+
+```esql
+FROM cooking_blog
+| WHERE match(title, "fluffy pancakes breakfast", {"minimum_should_match": 2})
+| LIMIT 1000
+```
+
+This query searches the `title` field and requires at least 2 of the 3 terms to match: "fluffy", "pancakes", or "breakfast".
+
+### Search for exact phrases
+
+When you need to find documents containing an exact sequence of words, use the `MATCH_PHRASE` function:
+
+```esql
+FROM cooking_blog
+| WHERE MATCH_PHRASE(description, "rich and creamy")
+| KEEP title, description
+| LIMIT 1000
+```
+
+This query only matches documents where the words "rich and creamy" appear exactly in that order in the description field.
+
+## Step 5: Semantic search and hybrid search
+
+### Index semantic content
+
+{{es}} allows you to semantically search for documents based on the meaning of the text, rather than just the presence of specific keywords. This is useful when you want to find documents that are conceptually similar to a given query, even if they don't contain the exact search terms.
+
+ES|QL supports semantic search when your mappings include fields of the [`semantic_text`](/reference/elasticsearch/mapping-reference/semantic-text.md) type. This example mapping update adds a new field called `semantic_description` with the type `semantic_text`:
+
+```console
+PUT /cooking_blog/_mapping
+{
+  "properties": {
+    "semantic_description": {
+      "type": "semantic_text"
+    }
+  }
+}
+```
+
+Next, index a document with content into the new field:
+
+```console
+POST /cooking_blog/_doc
+{
+  "title": "Mediterranean Quinoa Bowl",
+  "semantic_description": "A protein-rich bowl with quinoa, chickpeas, fresh vegetables, and herbs. This nutritious Mediterranean-inspired dish is easy to prepare and perfect for a quick, healthy dinner.",
+  "author": "Jamie Oliver",
+  "date": "2023-06-01",
+  "category": "Main Course",
+  "tags": ["vegetarian", "healthy", "mediterranean", "quinoa"],
+  "rating": 4.7
+}
+```
+
+### Perform semantic search
+
+Once the document has been processed by the underlying model running on the inference endpoint, you can perform semantic searches. Here's an example natural language query against the `semantic_description` field:
+
+```esql
+FROM cooking_blog
+| WHERE semantic_description:"What are some easy to prepare but nutritious plant-based meals?"
+| LIMIT 5 
+```
+
+:::{tip}
+If you'd like to test out the semantic search workflow against a large dataset, follow the [semantic-search-tutorial](docs-content://solutions/search/semantic-search/semantic-search-semantic-text.md).
+:::
+
+### Perform hybrid search
+
+You can combine full-text and semantic queries. In this example we combine full-text and semantic search with custom weights:
+
+```esql
+FROM cooking_blog METADATA _score
+| WHERE match(semantic_description, "easy to prepare vegetarian meals", { "boost": 0.75 })
+    OR match(tags, "vegetarian", { "boost": 0.25 })
+| SORT _score DESC
+| LIMIT 5
+```
+
+This query searches the `semantic_description` field for documents that are semantically similar to "easy to prepare vegetarian meals" with a higher weight, while also matching the `tags` field for "vegetarian" with a lower weight. The results are sorted by relevance score.
+
+Learn how to combine these with complex criteria in [Step 8](#step-8-complex-search-solutions).
+
+## Step 6: Advanced search features
+
+Once you're comfortable with basic search precision, use the following advanced features for powerful search capabilities.
+
+### Use query string for complex patterns
+
+The `QSTR` function enables powerful search patterns using a compact query language. It's ideal for when you need wildcards, fuzzy matching, and boolean logic in a single expression:
+
+```esql
+FROM cooking_blog
+| WHERE QSTR(description, "fluffy AND pancak* OR (creamy -vegan)")
+| KEEP title, description
+| LIMIT 1000
+```
+
+Query string syntax lets you:
+- Use boolean operators: `AND`, `OR`, `-` (NOT)
+- Apply wildcards: `pancak*` matches "pancake" and "pancakes"
+- Enable fuzzy matching: `pancake~1` for typo tolerance
+- Group terms: `(thai AND curry) OR pasta`
+- Search exact phrases: `"fluffy pancakes"`
+- Search across fields: `QSTR("title,description", "pancake OR (creamy AND rich)")`
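+
+As an illustration, several of these capabilities can be combined in a single expression (a sketch against the sample blog data; the fuzzy term `pancake~1` also matches close misspellings):
+
+```esql
+FROM cooking_blog
+| WHERE QSTR("title,description", "pancake~1 OR (thai AND curry)")
+| KEEP title, description
+| LIMIT 1000
+```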
+
+### Search across multiple fields
+
+When users enter a search query, they often don't know (or care) whether their search terms appear in a specific field. You can search across multiple fields simultaneously:
+
+```esql
+FROM cooking_blog
+| WHERE title:"vegetarian curry" OR description:"vegetarian curry" OR tags:"vegetarian curry"
+| LIMIT 1000
+```
+
+This query searches for "vegetarian curry" across the title, description, and tags fields. Each field is treated with equal importance.
+
+### Weight different fields
+
+In many cases, matches in certain fields (like the title) might be more relevant than others. You can adjust the importance of each field using boost scoring:
+
+```esql
+FROM cooking_blog METADATA _score
+| WHERE match(title, "vegetarian curry", {"boost": 2.0})  // Title matches are twice as important
+    OR match(description, "vegetarian curry")
+    OR match(tags, "vegetarian curry")
+| KEEP title, description, tags, _score
+| SORT _score DESC
+| LIMIT 1000
+```
+
+## Step 7: Filtering and exact matching
+
+Filtering allows you to narrow down your search results based on exact criteria. Unlike full-text searches, filters are binary (yes/no) and do not affect the relevance score. Filters execute faster than queries because excluded results don't need to be scored.
+
+### Basic filtering by category
+
+```esql
+FROM cooking_blog
+| WHERE category.keyword == "Breakfast"  // Exact match using keyword field
+| KEEP title, author, rating, tags
+| SORT rating DESC
+| LIMIT 1000
+```
+
+### Date range filtering
+
+Often users want to find content published within a specific time frame:
+
+```esql
+FROM cooking_blog
+| WHERE date >= "2023-05-01" AND date <= "2023-05-31"  // Inclusive date range filter
+| KEEP title, author, date, rating
+| LIMIT 1000
+```
+
+### Numerical range filtering
+
+Filter by ratings or other numerical values:
+
+```esql
+FROM cooking_blog
+| WHERE rating >= 4.5  // Only highly-rated recipes
+| KEEP title, author, rating, tags
+| SORT rating DESC
+| LIMIT 1000
+```
+
+### Exact author matching
+
+Find recipes by a specific author:
+
+```esql
+FROM cooking_blog
+| WHERE author.keyword == "Maria Rodriguez"  // Exact match on author
+| KEEP title, author, rating, tags
+| SORT rating DESC
+| LIMIT 1000
+```
+
+## Step 8: Complex search solutions
+
+Real-world search often requires combining multiple types of criteria. This section shows how to build sophisticated search experiences.
+
+### Combine filters with full-text search
+
+Mix filters, full-text search, and custom scoring in a single query:
+
+```esql
+FROM cooking_blog METADATA _score
+| WHERE rating >= 4.5  // Numerical filter
+    AND NOT category.keyword == "Dessert"  // Exclusion filter
+    AND (title:"curry spicy" OR description:"curry spicy")  // Full-text search in multiple fields
+| SORT _score DESC
+| KEEP title, author, rating, tags, description
+| LIMIT 1000
+```
+
+### Advanced relevance scoring
+
+For complex relevance scoring with combined criteria, you can use the `EVAL` command to calculate custom scores:
+
+```esql
+FROM cooking_blog METADATA _score
+| WHERE NOT category.keyword == "Dessert"
+| EVAL tags_concat = MV_CONCAT(tags.keyword, ",")  // Convert multi-value field to string
+| WHERE tags_concat LIKE "*vegetarian*" AND rating >= 4.5  // Wildcard pattern matching
+| WHERE match(title, "curry spicy", {"boost": 2.0}) OR match(description, "curry spicy")
+| EVAL category_boost = CASE(category.keyword == "Main Course", 1.0, 0.0)  // Conditional boost
+| EVAL date_boost = CASE(DATE_DIFF("month", date, NOW()) <= 1, 0.5, 0.0)  // Boost recent content
+| EVAL custom_score = _score + category_boost + date_boost  // Combine scores
+| WHERE custom_score > 0  // Filter based on custom score
+| SORT custom_score DESC
+| LIMIT 1000
+```
+
+## Learn more
+
+### Documentation
+
+This tutorial introduced the basics of search and filtering in {{esql}}. Building a real-world search experience requires understanding many more advanced concepts and techniques. Here are some resources once you're ready to dive deeper:
+
+- [Search with {{esql}}](docs-content://solutions/search/esql-for-search.md): Learn about all the search capabilities available in {{esql}}.
+- [{{esql}} search functions](/reference/query-languages/esql/functions-operators/search-functions.md): Explore the full list of search functions available in {{esql}}.
+- [Semantic search](docs-content://solutions/search/semantic-search.md): Understand your various options for semantic search in Elasticsearch.
+  - [The `semantic_text` workflow](docs-content://solutions/search/semantic-search.md#_semantic_text_workflow): Learn how to use the `semantic_text` field type for semantic search. This is the recommended approach for most users looking to perform semantic search in {{es}}, because it abstracts away the complexity of setting up inference endpoints and models.
+
+### Related blog posts
+
+- [{{esql}}, you know for Search](https://www.elastic.co/search-labs/blog/esql-introducing-scoring-semantic-search): Introducing scoring and semantic search
+- [Introducing full text filtering in {{esql}}](https://www.elastic.co/search-labs/blog/filtering-in-esql-full-text-search-match-qstr): Overview of {{esql}}'s text filtering capabilities

+ 57 - 0
docs/reference/query-languages/esql/esql-task-management.md

@@ -0,0 +1,57 @@
+---
+navigation_title: List running queries
+mapped_pages:
+  - https://www.elastic.co/guide/en/elasticsearch/reference/current/esql-task-management.html
+applies_to:
+  stack: ga
+  serverless: ga
+products:
+  - id: elasticsearch
+---
+
+# Find long-running {{esql}} queries [esql-task-management]
+
+You can list running {{esql}} queries with the [task management APIs](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-tasks):
+
+$$$esql-task-management-get-all$$$
+
+```console
+GET /_tasks?pretty&detailed&group_by=parents&human&actions=*data/read/esql
+```
+
+Which returns a list of statuses like this:
+
+```js
+{
+  "node" : "2j8UKw1bRO283PMwDugNNg",
+  "id" : 5326,
+  "type" : "transport",
+  "action" : "indices:data/read/esql",
+  "description" : "FROM test | STATS MAX(d) by a, b",  <1>
+  "start_time" : "2023-07-31T15:46:32.328Z",
+  "start_time_in_millis" : 1690818392328,
+  "running_time" : "41.7ms",                           <2>
+  "running_time_in_nanos" : 41770830,
+  "cancellable" : true,
+  "cancelled" : false,
+  "headers" : { }
+}
+```
+
+1. The user-submitted query.
+2. The time the query has been running.
+
+
+You can use this to find long-running queries and, if you need to, cancel them with the [task cancellation API](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-tasks#task-cancellation):
+
+$$$esql-task-management-cancelEsqlQueryRequestTests$$$
+
+```console
+POST _tasks/2j8UKw1bRO283PMwDugNNg:5326/_cancel
+```
+
+It may take a few seconds for the query to be stopped.
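+
+To verify the cancellation, you can rerun the same task listing as above; once the query has stopped, it no longer appears in the list (while cancellation is in progress, its status shows `"cancelled" : true`):
+
+```console
+GET /_tasks?pretty&detailed&group_by=parents&human&actions=*data/read/esql
+```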
+

+ 3 - 2
docs/reference/query-languages/esql/esql-troubleshooting.md

@@ -4,6 +4,7 @@ navigation_title: "Troubleshooting"
 
 # Troubleshooting {{esql}} [esql-troubleshooting]
 
-This section provides some useful resource for troubleshooting {{esql}}
+This section provides some useful resources for troubleshooting {{esql}} issues:
 
-* [Query log](esql-query-log.md): Learn how to log {{esql}} queries
+- [Query log](esql-query-log.md): Learn how to log {{esql}} queries.
+- [Task management API](esql-task-management.md): Learn how to diagnose issues like long-running queries.

BIN
docs/reference/query-languages/images/elasticsearch-reference-esql-enrich.png


BIN
docs/reference/query-languages/images/elasticsearch-reference-esql-limit.png


BIN
docs/reference/query-languages/images/elasticsearch-reference-esql-sort-limit.png


BIN
docs/reference/query-languages/images/elasticsearch-reference-esql-sort.png


+ 109 - 0
docs/reference/query-languages/images/elasticsearch-reference-source-command.svg

@@ -0,0 +1,109 @@
+<svg width="389" height="144" viewBox="0 0 389 144" fill="none" xmlns="http://www.w3.org/2000/svg">
+<g clip-path="url(#clip0_1_517)">
+<rect x="239" width="150" height="144" rx="8" fill="white"/>
+<g clip-path="url(#clip1_1_517)">
+<rect width="150" height="36" transform="translate(239)" fill="white" fill-opacity="0.01"/>
+<mask id="path-3-inside-1_1_517" fill="white">
+<path d="M239 0H289V36H239V0Z"/>
+</mask>
+<path d="M239 0H289V36H239V0Z" fill="white" fill-opacity="0.01"/>
+<path d="M239 0V-1H237V0H239ZM239 1H289V-1H239V1ZM241 36V0H237V36H241Z" fill="black" mask="url(#path-3-inside-1_1_517)"/>
+<mask id="path-5-inside-2_1_517" fill="white">
+<path d="M289 0H339V36H289V0Z"/>
+</mask>
+<path d="M289 0H339V36H289V0Z" fill="white" fill-opacity="0.01"/>
+<path d="M289 0V-1H288V0H289ZM289 1H339V-1H289V1ZM290 36V0H288V36H290Z" fill="black" mask="url(#path-5-inside-2_1_517)"/>
+<mask id="path-7-inside-3_1_517" fill="white">
+<path d="M339 0H389V36H339V0Z"/>
+</mask>
+<path d="M339 0H389V36H339V0Z" fill="white" fill-opacity="0.01"/>
+<path d="M339 0V-1H338V0H339ZM339 1H389V-1H339V1ZM340 36V0H338V36H340Z" fill="black" mask="url(#path-7-inside-3_1_517)"/>
+</g>
+<g clip-path="url(#clip2_1_517)">
+<rect width="150" height="36" transform="translate(239 36)" fill="white" fill-opacity="0.01"/>
+<mask id="path-9-inside-4_1_517" fill="white">
+<path d="M239 36H289V72H239V36Z"/>
+</mask>
+<path d="M239 36H289V72H239V36Z" fill="white" fill-opacity="0.01"/>
+<path d="M239 36V35H237V36H239ZM239 37H289V35H239V37ZM241 72V36H237V72H241Z" fill="black" mask="url(#path-9-inside-4_1_517)"/>
+<mask id="path-11-inside-5_1_517" fill="white">
+<path d="M289 36H339V72H289V36Z"/>
+</mask>
+<path d="M289 36H339V72H289V36Z" fill="white" fill-opacity="0.01"/>
+<path d="M289 36V35H288V36H289ZM289 37H339V35H289V37ZM290 72V36H288V72H290Z" fill="black" mask="url(#path-11-inside-5_1_517)"/>
+<mask id="path-13-inside-6_1_517" fill="white">
+<path d="M339 36H389V72H339V36Z"/>
+</mask>
+<path d="M339 36H389V72H339V36Z" fill="white" fill-opacity="0.01"/>
+<path d="M339 36V35H338V36H339ZM339 37H389V35H339V37ZM340 72V36H338V72H340Z" fill="black" mask="url(#path-13-inside-6_1_517)"/>
+</g>
+<g clip-path="url(#clip3_1_517)">
+<rect width="150" height="36" transform="translate(239 72)" fill="white" fill-opacity="0.01"/>
+<mask id="path-15-inside-7_1_517" fill="white">
+<path d="M239 72H289V108H239V72Z"/>
+</mask>
+<path d="M239 72H289V108H239V72Z" fill="white" fill-opacity="0.01"/>
+<path d="M239 72V71H237V72H239ZM239 73H289V71H239V73ZM241 108V72H237V108H241Z" fill="black" mask="url(#path-15-inside-7_1_517)"/>
+<mask id="path-17-inside-8_1_517" fill="white">
+<path d="M289 72H339V108H289V72Z"/>
+</mask>
+<path d="M289 72H339V108H289V72Z" fill="white" fill-opacity="0.01"/>
+<path d="M289 72V71H288V72H289ZM289 73H339V71H289V73ZM290 108V72H288V108H290Z" fill="black" mask="url(#path-17-inside-8_1_517)"/>
+<mask id="path-19-inside-9_1_517" fill="white">
+<path d="M339 72H389V108H339V72Z"/>
+</mask>
+<path d="M339 72H389V108H339V72Z" fill="white" fill-opacity="0.01"/>
+<path d="M339 72V71H338V72H339ZM339 73H389V71H339V73ZM340 108V72H338V108H340Z" fill="black" mask="url(#path-19-inside-9_1_517)"/>
+</g>
+<g clip-path="url(#clip4_1_517)">
+<rect width="150" height="36" transform="translate(239 108)" fill="white" fill-opacity="0.01"/>
+<mask id="path-21-inside-10_1_517" fill="white">
+<path d="M239 108H289V144H239V108Z"/>
+</mask>
+<path d="M239 108H289V144H239V108Z" fill="white" fill-opacity="0.01"/>
+<path d="M239 108V107H237V108H239ZM239 109H289V107H239V109ZM241 144V108H237V144H241Z" fill="black" mask="url(#path-21-inside-10_1_517)"/>
+<mask id="path-23-inside-11_1_517" fill="white">
+<path d="M289 108H339V144H289V108Z"/>
+</mask>
+<path d="M289 108H339V144H289V108Z" fill="white" fill-opacity="0.01"/>
+<path d="M289 108V107H288V108H289ZM289 109H339V107H289V109ZM290 144V108H288V144H290Z" fill="black" mask="url(#path-23-inside-11_1_517)"/>
+<mask id="path-25-inside-12_1_517" fill="white">
+<path d="M339 108H389V144H339V108Z"/>
+</mask>
+<path d="M339 108H389V144H339V108Z" fill="white" fill-opacity="0.01"/>
+<path d="M339 108V107H338V108H339ZM339 109H389V107H339V109ZM340 144V108H338V144H340Z" fill="black" mask="url(#path-25-inside-12_1_517)"/>
+</g>
+</g>
+<rect x="240.5" y="1.5" width="147" height="141" rx="6.5" stroke="#484848" stroke-width="3"/>
+<path fill-rule="evenodd" clip-rule="evenodd" d="M2.5625 72.001C2.5625 75.5475 3.05962 78.9685 3.90525 82.251H53.8125C59.4731 82.251 64.0625 77.6616 64.0625 72.001C64.0625 66.3379 59.4731 61.751 53.8125 61.751H3.90525C3.05962 65.031 2.5625 68.4545 2.5625 72.001Z" fill="#343741"/>
+<mask id="mask0_1_517" style="mask-type:luminance" maskUnits="userSpaceOnUse" x="6" y="31" width="70" height="24">
+<path fill-rule="evenodd" clip-rule="evenodd" d="M6.77472 31.0013H75.5399V54.0638H6.77472V31.0013Z" fill="white"/>
+</mask>
+<g mask="url(#mask0_1_517)">
+<path fill-rule="evenodd" clip-rule="evenodd" d="M71.5547 50.6326C72.9872 49.3129 74.3197 47.8958 75.542 46.3763C68.0262 37.0103 56.5026 31.0013 43.562 31.0013C27.3644 31.0013 13.4244 40.4236 6.77472 54.0638H62.8089C66.053 54.0638 69.1716 52.8312 71.5547 50.6326Z" fill="#FEC514"/>
+</g>
+<mask id="mask1_1_517" style="mask-type:luminance" maskUnits="userSpaceOnUse" x="6" y="89" width="70" height="24">
+<path fill-rule="evenodd" clip-rule="evenodd" d="M6.77448 89.9385H75.5399V113H6.77448V89.9385Z" fill="white"/>
+</mask>
+<g mask="url(#mask1_1_517)">
+<path fill-rule="evenodd" clip-rule="evenodd" d="M62.8087 89.9385H6.77448C13.4267 103.576 27.3642 113.001 43.5617 113.001C56.5023 113.001 68.0259 106.989 75.5417 97.626C74.3194 96.1039 72.9869 94.6868 71.5545 93.3671C69.1714 91.166 66.0528 89.9385 62.8087 89.9385Z" fill="#00BFB3"/>
+</g>
+<path d="M220.233 74.2348C221.744 73.5536 222.416 71.7771 221.735 70.2667L210.635 45.6538C209.954 44.1435 208.177 43.4713 206.667 44.1524C205.156 44.8335 204.484 46.6101 205.165 48.1205L215.032 69.9986L193.154 79.8651C191.643 80.5463 190.971 82.3229 191.652 83.8332C192.334 85.3436 194.11 86.0158 195.62 85.3347L220.233 74.2348ZM103.202 74.2485C134.387 60.6052 181.566 60.5433 217.938 74.3059L220.062 68.6942C182.434 54.4566 133.613 54.3948 100.798 68.7515L103.202 74.2485Z" fill="#484848"/>
+<defs>
+<clipPath id="clip0_1_517">
+<rect x="239" width="150" height="144" rx="8" fill="white"/>
+</clipPath>
+<clipPath id="clip1_1_517">
+<rect width="150" height="36" fill="white" transform="translate(239)"/>
+</clipPath>
+<clipPath id="clip2_1_517">
+<rect width="150" height="36" fill="white" transform="translate(239 36)"/>
+</clipPath>
+<clipPath id="clip3_1_517">
+<rect width="150" height="36" fill="white" transform="translate(239 72)"/>
+</clipPath>
+<clipPath id="clip4_1_517">
+<rect width="150" height="36" fill="white" transform="translate(239 108)"/>
+</clipPath>
+</defs>
+</svg>

+ 10 - 2
docs/reference/query-languages/toc.yml

@@ -85,6 +85,8 @@ toc:
       - file: query-dsl/regexp-syntax.md
   - file: esql.md
     children:
+      - file: esql/esql-getting-started.md
+      - file: esql/esql-rest.md
       - file: esql/esql-syntax-reference.md
         children:
           - file: esql/esql-syntax.md
@@ -129,6 +131,10 @@ toc:
               - file: esql/functions-operators/type-conversion-functions.md
               - file: esql/functions-operators/mv-functions.md
               - file: esql/functions-operators/operators.md
+      - file: esql/esql-multi.md
+        children:
+          - file: esql/esql-multi-index.md
+          - file: esql/esql-cross-clusters.md
       - file: esql/esql-advanced.md
         children:
           - file: esql/esql-process-data-with-dissect-grok.md
@@ -140,12 +146,14 @@ toc:
           - file: esql/esql-time-spans.md
           - file: esql/esql-metadata-fields.md
           - file: esql/esql-multivalued-fields.md
-
-      - file: esql/limitations.md
       - file: esql/esql-examples.md
+        children:
+          - file: esql/esql-search-tutorial.md
       - file: esql/esql-troubleshooting.md
         children:
           - file: esql/esql-query-log.md
+          - file: esql/esql-task-management.md
+      - file: esql/limitations.md
   - file: sql.md
     children:
       - file: sql/sql-spec.md