@@ -137,6 +137,26 @@ However, it comes with an additional overhead of more frequent cancellation
checks that can be noticeable on large fast running search queries. Changing this
setting only affects the searches that start after the change is made.
+[float]
+[[search-concurrency-and-parallelism]]
+== Search concurrency and parallelism
+
+By default, Elasticsearch doesn't reject any search requests based on the number
+of shards the request hits. While Elasticsearch optimizes search execution on
+the coordinating node, a large number of shards can have a significant impact on
+CPU and memory. It is usually a better idea to organize data in such a way that
+there are fewer, larger shards. If you would like to configure a soft limit, you
+can update the `action.search.shard_count.limit` cluster setting to reject
+search requests that hit too many shards.
+
+The request parameter `max_concurrent_shard_requests` can be used to control the
+maximum number of concurrent shard requests the search API will execute for the
+request. This parameter should be used to protect a single request from
+overloading a cluster (e.g., by default a request will hit all indices in a
+cluster, which could cause shard request rejections if the number of shards per
+node is high). The default value of this parameter is based on the number of
+data nodes in the cluster, but is at most `256`.
+
--
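
A minimal sketch of configuring the soft limit mentioned in the added section; the limit value `1000` is illustrative, not a recommendation:

[source,console]
--------------------------------------------------
PUT /_cluster/settings
{
  "transient": {
    "action.search.shard_count.limit": 1000
  }
}
--------------------------------------------------

With this setting in place, a search request that would hit more than 1000 shards is rejected rather than executed.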
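
A usage sketch for the `max_concurrent_shard_requests` request parameter described above; the value `3` is illustrative, and the match-all query body stands in for any search:

[source,console]
--------------------------------------------------
GET /_search?max_concurrent_shard_requests=3
{
  "query": {
    "match_all": {}
  }
}
--------------------------------------------------

Here the coordinating node fans out at most 3 shard-level requests for this search at a time, trading some latency for reduced cluster load.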
include::search/search.asciidoc[]