[[esql-cross-clusters]]
=== Using {esql} across clusters
++++
<titleabbrev>Using {esql} across clusters</titleabbrev>
++++
[partintro]
[NOTE]
====
For {ccs-cap} with {esql} on version 8.16 or later, remote clusters must also be on version 8.16 or later.
====
With {esql}, you can execute a single query across multiple clusters.
[discrete]
[[esql-ccs-prerequisites]]
==== Prerequisites
include::{es-ref-dir}/search/search-your-data/search-across-clusters.asciidoc[tag=ccs-prereqs]
include::{es-ref-dir}/search/search-your-data/search-across-clusters.asciidoc[tag=ccs-gateway-seed-nodes]
include::{es-ref-dir}/search/search-your-data/search-across-clusters.asciidoc[tag=ccs-proxy-mode]
[discrete]
[[esql-ccs-security-model]]
==== Security model
{es} supports two security models for cross-cluster search (CCS):

* <<esql-ccs-security-model-certificate,TLS certificate authentication>>
* <<esql-ccs-security-model-api-key,API key authentication>>
[TIP]
====
To check which security model is being used to connect your clusters, run `GET _remote/info`.
If you're using the API key authentication method, you'll see the `"cluster_credentials"` key in the response.
====
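
For example, you can run the following request and check each connected remote's entry in the response for a `cluster_credentials` field, which is present only for API key based connections:

[source,console]
----
GET _remote/info
----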
[discrete]
[[esql-ccs-security-model-certificate]]
===== TLS certificate authentication
TLS certificate authentication secures remote clusters with mutual TLS.
This could be the preferred model when a single administrator has full control over both clusters.
We generally recommend that roles and their privileges be identical in both clusters.
Refer to <<remote-clusters-cert,Add remote clusters using TLS certificate authentication>> for prerequisites and detailed setup instructions.
[discrete]
[[esql-ccs-security-model-api-key]]
===== API key authentication
The following information pertains to using {esql} across clusters with the <<remote-clusters-api-key,*API key based security model*>>. You'll need to follow the steps on that page for the *full setup instructions*. This page only contains additional information specific to {esql}.
API key based cross-cluster search (CCS) enables more granular control over allowed actions between clusters.
This may be the preferred model when you have different administrators for different clusters and want more control over who can access what data. In this model, cluster administrators must explicitly define the access given to clusters and users.
You will need to:

* Create an API key on the *remote cluster* using the <<security-api-create-cross-cluster-api-key,Create cross-cluster API key>> API or the {kibana-ref}/api-keys.html[Kibana API keys UI]; see the sketch after this list.
* Add the API key to the keystore on the *local cluster*, as part of the steps in <<remote-clusters-security-api-key-local-actions,configuring the local cluster>>. All cross-cluster requests from the local cluster are bound by the API key's privileges.
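
For example, a cross-cluster API key created on the remote cluster might look like the following sketch. The key name and the `logs-*` pattern are illustrative; the `access` block should cover the indices you later list under `remote_indices`:

[source,console]
----
// The key name and index pattern below are illustrative
POST /_security/cross_cluster/api_key
{
  "name": "esql-ccs-key",
  "access": {
    "search": [
      {
        "names": [ "logs-*" ]
      }
    ]
  }
}
----
// TEST[skip: requires a remote cluster setup]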
Using {esql} with the API key based security model requires some additional permissions that may not be needed when using the traditional query DSL based search.
The following example API call creates a role that can query remote indices using {esql} when using the API key based security model.
The final privilege, `remote_cluster`, is required to allow remote enrich operations.
[source,console]
----
POST /_security/role/remote1
{
"cluster": ["cross_cluster_search"], <1>
"indices": [
{
"names" : [""], <2>
"privileges": ["read"]
}
],
"remote_indices": [ <3>
{
"names": [ "logs-*" ],
"privileges": [ "read","read_cross_cluster" ], <4>
"clusters" : ["my_remote_cluster"] <5>
}
],
"remote_cluster": [ <6>
{
"privileges": [
"monitor_enrich"
],
"clusters": [
"my_remote_cluster"
]
}
]
}
----
<1> The `cross_cluster_search` cluster privilege is required for the _local_ cluster.
<2> Typically, users will have permissions to read both local and remote indices. However, for cases where the role
is intended to ONLY search the remote cluster, the `read` privilege is still required for the local cluster.
To provide read access to the local cluster while disallowing reads of any index in it, the `names`
field may contain an empty string.
<3> The indices allowed read access to the remote cluster. The configured
<<security-api-create-cross-cluster-api-key,cross-cluster API key>> must also allow this index to be read.
<4> The `read_cross_cluster` privilege is always required when using {esql} across clusters with the API key based
security model.
<5> The remote clusters to which these privileges apply.
This remote cluster must be configured with a <<security-api-create-cross-cluster-api-key,cross-cluster API key>>
and connected to the remote cluster before the remote index can be queried.
Verify the connection using the <<cluster-remote-info,remote cluster info>> API.
<6> Required to allow remote enrichment. Without this, the user cannot read from the `.enrich` indices on the
remote cluster. The `remote_cluster` security privilege was introduced in version *8.15.0*.
You will then need a user or API key with the permissions you created above. The following example API call creates
a user with the `remote1` role.
[source,console]
----
POST /_security/user/remote_user
{
"password" : "",
"roles" : [ "remote1" ]
}
----
Remember that all cross-cluster requests from the local cluster are bound by the cross cluster API key’s privileges,
which are controlled by the remote cluster's administrator.
[TIP]
====
Cross-cluster API keys created in versions prior to 8.15.0 will need to be replaced or updated to add the new permissions
required for {esql} with ENRICH.
====
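
One way to bring an existing key up to date is the update cross-cluster API key API. A sketch, using a placeholder key ID and re-applying the desired `access` definition:

[source,console]
----
// REPLACE_WITH_KEY_ID is a placeholder for the ID of your existing key
PUT /_security/cross_cluster/api_key/REPLACE_WITH_KEY_ID
{
  "access": {
    "search": [
      {
        "names": [ "logs-*" ]
      }
    ]
  }
}
----
// TEST[skip: requires an existing cross-cluster API key]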
[discrete]
[[ccq-remote-cluster-setup]]
==== Remote cluster setup
Once the security model is configured, you can add remote clusters.
include::{es-ref-dir}/search/search-your-data/search-across-clusters.asciidoc[tag=ccs-remote-cluster-setup]
<1> Since `skip_unavailable` was not set on `cluster_three`, it uses
the default of `true`. See the <<ccq-skip-unavailable-clusters,Optional remote clusters>>
section for details.
[discrete]
[[ccq-from]]
==== Query across multiple clusters
In the `FROM` command, specify data streams and indices on remote clusters
using the format `<remote_cluster_name>:<target>`. For instance, the following
{esql} request queries the `my-index-000001` index on a single remote cluster
named `cluster_one`:
[source,esql]
----
FROM cluster_one:my-index-000001
| LIMIT 10
----
Similarly, this {esql} request queries the `my-index-000001` index from
three clusters:
* The local ("querying") cluster
* Two remote clusters, `cluster_one` and `cluster_two`
[source,esql]
----
FROM my-index-000001,cluster_one:my-index-000001,cluster_two:my-index-000001
| LIMIT 10
----
Likewise, this {esql} request queries the `my-index-000001` index from all
remote clusters (`cluster_one`, `cluster_two`, and `cluster_three`):
[source,esql]
----
FROM *:my-index-000001
| LIMIT 10
----
[discrete]
[[ccq-cluster-details]]
==== Cross-cluster metadata
Using the `"include_ccs_metadata": true` option, users can request that
ES|QL {ccs} responses include metadata about the search on each cluster (when the response format is JSON).
Here we show an example using the async search endpoint. {ccs-cap} metadata is also present in the synchronous
search endpoint response when requested.
[source,console]
----
POST /_query/async?format=json
{
"query": """
FROM my-index-000001,cluster_one:my-index-000001,cluster_two:my-index*
| STATS COUNT(http.response.status_code) BY user.id
| LIMIT 2
""",
"include_ccs_metadata": true
}
----
// TEST[setup:my_index]
// TEST[s/cluster_one:my-index-000001,cluster_two:my-index//]
Which returns:
[source,console-result]
----
{
"is_running": false,
"took": 42, <1>
"is_partial": false, <7>
"columns" : [
{
"name" : "COUNT(http.response.status_code)",
"type" : "long"
},
{
"name" : "user.id",
"type" : "keyword"
}
],
"values" : [
[4, "elkbee"],
[1, "kimchy"]
],
"_clusters": { <2>
"total": 3,
"successful": 3,
"running": 0,
"skipped": 0,
"partial": 0,
"failed": 0,
"details": { <3>
"(local)": { <4>
"status": "successful",
"indices": "blogs",
"took": 41, <5>
"_shards": { <6>
"total": 13,
"successful": 13,
"skipped": 0,
"failed": 0
}
},
"cluster_one": {
"status": "successful",
"indices": "cluster_one:my-index-000001",
"took": 38,
"_shards": {
"total": 4,
"successful": 4,
"skipped": 0,
"failed": 0
}
},
"cluster_two": {
"status": "successful",
"indices": "cluster_two:my-index*",
"took": 40,
"_shards": {
"total": 18,
"successful": 18,
"skipped": 1,
"failed": 0
}
}
}
}
}
----
// TEST[skip: cross-cluster testing env not set up]
<1> How long the entire search (across all clusters) took, in milliseconds.
<2> This section of counters shows all possible cluster search states and how many cluster
searches are currently in that state. The clusters can have one of the following statuses: *running*,
*successful* (searches on all shards were successful), *skipped* (the search
failed on a cluster marked with `skip_unavailable`=`true`), *failed* (the search
failed on a cluster marked with `skip_unavailable`=`false`) or *partial* (the search was
<<esql-async-query-stop-api,stopped>> before finishing or has partially failed).
<3> The `_clusters/details` section shows metadata about the search on each cluster.
<4> If you included indices from the local cluster you sent the request to in your {ccs},
it is identified as "(local)".
<5> How long (in milliseconds) the search took on each cluster. This can be useful to determine
which clusters have slower response times than others.
<6> The shard details for the search on that cluster, including a count of shards that were
skipped due to the can-match phase results. Shards are skipped when they cannot have any matching data
and therefore are not included in the full ES|QL query.
<7> The `is_partial` field is set to `true` if the search has partial results for any reason,
for example if it was interrupted before finishing using the <<esql-async-query-stop-api,async query stop API>>,
or one of the remotes or shards failed.
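
As noted above, the same option applies to the synchronous search endpoint; a minimal sketch against the local `my-index-000001` index, mirroring the async example:

[source,console]
----
POST /_query?format=json
{
  "query": """
    FROM my-index-000001
    | LIMIT 2
  """,
  "include_ccs_metadata": true
}
----
// TEST[setup:my_index]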
The cross-cluster metadata can be used to determine whether any data came back from a cluster.
For instance, in the query below, the wildcard expression for `cluster_two` did not resolve
to a concrete index (or indices). The cluster is, therefore, marked as `skipped` and the total
number of shards searched is set to zero.
[source,console]
----
POST /_query/async?format=json
{
"query": """
FROM cluster_one:my-index*,cluster_two:logs*
| STATS COUNT(http.response.status_code) BY user.id
| LIMIT 2
""",
"include_ccs_metadata": true
}
----
// TEST[continued]
// TEST[s/cluster_one:my-index\*,cluster_two:logs\*/my-index-000001/]
Which returns:
[source,console-result]
----
{
"is_running": false,
"took": 55,
"is_partial": false,
"columns": [
... // not shown
],
"values": [
... // not shown
],
"_clusters": {
"total": 2,
"successful": 2,
"running": 0,
"skipped": 0,
"partial": 0,
"failed": 0,
"details": {
"cluster_one": {
"status": "successful",
"indices": "cluster_one:my-index*",
"took": 38,
"_shards": {
"total": 4,
"successful": 4,
"skipped": 0,
"failed": 0
}
},
"cluster_two": {
"status": "skipped", <1>
"indices": "cluster_two:logs*",
"took": 0,
"_shards": {
"total": 0, <2>
"successful": 0,
"skipped": 0,
"failed": 0
}
}
}
}
}
----
// TEST[skip: cross-cluster testing env not set up]
<1> This cluster is marked as `skipped`, since there were no matching indices on that cluster.
<2> Indicates that no shards were searched (due to not having any matching indices).
[discrete]
[[ccq-enrich]]
==== Enrich across clusters
Enrich in {esql} across clusters operates similarly to <<esql-enrich,local enrich>>.
If the enrich policy and its enrich indices are consistent across all clusters, simply
write the enrich command as you would without remote clusters. In this default mode,
{esql} can execute the enrich command on either the local cluster or the remote
clusters, aiming to minimize computation or inter-cluster data transfer. Ensuring that
the policy exists with consistent data on both the local cluster and the remote
clusters is critical for ES|QL to produce a consistent query result.
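
Creating and executing the policy identically on each cluster keeps results consistent. A sketch, assuming a `hosts-metadata` source index with `ip`, `host_name`, and `os` fields (all names here are illustrative):

[source,console]
----
// "hosts-metadata" and its fields are illustrative; run this on every cluster
PUT /_enrich/policy/hosts
{
  "match": {
    "indices": "hosts-metadata",
    "match_field": "ip",
    "enrich_fields": [ "host_name", "os" ]
  }
}

POST /_enrich/policy/hosts/_execute
----
// TEST[skip: requires the hosts-metadata index on every cluster]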
[TIP]
====
Enrich in {esql} across clusters using the API key based security model was introduced in version *8.15.0*.
Cross-cluster API keys created in versions prior to 8.15.0 will need to be replaced or updated to use the new required permissions.
Refer to the example in the <> section.
====
In the following example, the enrich operation with the `hosts` policy can be executed on
either the local cluster or the remote cluster `cluster_one`.
[source,esql]
----
FROM my-index-000001,cluster_one:my-index-000001
| ENRICH hosts ON ip
| LIMIT 10
----
An {esql} query that targets remote clusters only can still execute the enrich on
the local cluster. This means the below query requires the `hosts` enrich
policy to exist on the local cluster as well.
[source,esql]
----
FROM cluster_one:my-index-000001,cluster_two:my-index-000001
| LIMIT 10
| ENRICH hosts ON ip
----
[discrete]
[[esql-enrich-coordinator]]
===== Enrich with coordinator mode
{esql} provides the enrich `_coordinator` mode to force {esql} to execute the enrich
command on the local cluster. Use this mode when the enrich policy is
not available on the remote clusters, or when maintaining consistent enrich indices
across clusters is challenging.
[source,esql]
----
FROM my-index-000001,cluster_one:my-index-000001
| ENRICH _coordinator:hosts ON ip
| SORT host_name
| LIMIT 10
----
[IMPORTANT]
====
Enrich with the `_coordinator` mode usually increases inter-cluster data transfer and
workload on the local cluster.
====
[discrete]
[[esql-enrich-remote]]
===== Enrich with remote mode
{esql} also provides the enrich `_remote` mode to force {esql} to execute the enrich
command independently on each remote cluster where the target indices reside.
This mode is useful when each cluster maintains its own enrich data, such as detailed
host information for each region, while the target (main) indices contain
log events from those hosts.
In the below example, the `hosts` enrich policy is required to exist on all
three clusters: the local (querying) cluster (as local indices are included),
and the remote clusters `cluster_one` and `cluster_two`.
[source,esql]
----
FROM my-index-000001,cluster_one:my-index-000001,cluster_two:my-index-000001
| ENRICH _remote:hosts ON ip
| SORT host_name
| LIMIT 10
----
A `_remote` enrich cannot be executed after a <<esql-stats-by,STATS>>
command. The following example would result in an error:
[source,esql]
----
FROM my-index-000001,cluster_one:my-index-000001,cluster_two:my-index-000001
| STATS COUNT(*) BY ip
| ENRICH _remote:hosts ON ip
| SORT host_name
| LIMIT 10
----
[discrete]
[[esql-multi-enrich]]
===== Multiple enrich commands
You can include multiple enrich commands in the same query with different
modes. {esql} will attempt to execute them accordingly. For example, this
query performs two enrich operations, first with the `hosts` policy on any cluster
and then with the `vendors` policy on the local cluster.
[source,esql]
----
FROM my-index-000001,cluster_one:my-index-000001,cluster_two:my-index-000001
| ENRICH hosts ON ip
| ENRICH _coordinator:vendors ON os
| LIMIT 10
----
A `_remote` enrich command can't be executed after a `_coordinator` enrich
command. The following example would result in an error.
[source,esql]
----
FROM my-index-000001,cluster_one:my-index-000001,cluster_two:my-index-000001
| ENRICH _coordinator:hosts ON ip
| ENRICH _remote:vendors ON os
| LIMIT 10
----
[discrete]
[[ccq-exclude]]
==== Excluding clusters or indices from an {esql} query
To exclude an entire cluster, prefix the cluster alias with a minus sign in
the `FROM` command, for example: `-my_cluster:*`:
[source,esql]
----
FROM my-index-000001,cluster*:my-index-000001,-cluster_three:*
| LIMIT 10
----
To exclude a specific remote index, prefix the index with a minus sign in
the `FROM` command, such as `my_cluster:-my_index`:
[source,esql]
----
FROM my-index-000001,cluster*:my-index-*,cluster_three:-my-index-000001
| LIMIT 10
----
[discrete]
[[ccq-skip-unavailable-clusters]]
==== Optional remote clusters
If a remote cluster is configured with `skip_unavailable: true` (the default setting), the query will not fail.
Instead, the cluster is set to `skipped` or `partial` status if:

* The remote cluster is disconnected from the querying cluster, either before or during the query.
* The remote cluster does not have the requested index.
* An error happened while processing the query on the remote cluster.

The `partial` status is used if the remote query was partially successful and some data was returned.
This does not apply, however, when the remote cluster is missing an index and it is the only index in the query,
or when all the indices in the query are missing. For example, the following queries will fail:
[source,esql]
----
FROM cluster_one:missing-index | LIMIT 10
FROM cluster_one:missing-index* | LIMIT 10
FROM cluster_one:missing-index*,cluster_two:missing-index | LIMIT 10
----
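
Conversely, to make a remote cluster required, so that any failure on it fails the whole query, set `skip_unavailable` to `false` for that cluster. A sketch for a remote with the alias `cluster_one` (illustrative):

[source,console]
----
// "cluster_one" is an illustrative remote cluster alias
PUT /_cluster/settings
{
  "persistent": {
    "cluster.remote.cluster_one.skip_unavailable": false
  }
}
----
// TEST[skip: requires a configured remote cluster]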
[discrete]
[[ccq-during-upgrade]]
==== Query across clusters during an upgrade
include::{es-ref-dir}/search/search-your-data/search-across-clusters.asciidoc[tag=ccs-during-upgrade]