
Set shard count limit to unlimited (#24012)

Now that we have incremental reduce functions for topN and aggregations,
we can set the default for `action.search.shard_count.limit` to unlimited.
This still allows users to restrict this setting, while by default we execute
across all shards matching the search request's index pattern.
Simon Willnauer 8 years ago
parent
commit
040b86a76b

+ 1 - 1
core/src/main/java/org/elasticsearch/action/search/TransportSearchAction.java

@@ -60,7 +60,7 @@ public class TransportSearchAction extends HandledTransportAction<SearchRequest,
 
     /** The maximum number of shards for a single search request. */
     public static final Setting<Long> SHARD_COUNT_LIMIT_SETTING = Setting.longSetting(
-            "action.search.shard_count.limit", 1000L, 1L, Property.Dynamic, Property.NodeScope);
+            "action.search.shard_count.limit", Long.MAX_VALUE, 1L, Property.Dynamic, Property.NodeScope);
 
     private final ClusterService clusterService;
     private final SearchTransportService searchTransportService;
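The changed line above uses `Setting.longSetting(key, defaultValue, minValue, ...)`, which falls back to the default when the setting is absent and rejects values below the minimum (here `1L`). As a minimal standalone sketch of those semantics — this is an illustrative class, not Elasticsearch's actual `Setting` implementation:

```java
// Sketch of a long-valued setting with a default and a lower bound,
// mirroring the semantics of action.search.shard_count.limit.
public class LongSettingSketch {
    private final String key;
    private final long defaultValue;
    private final long minValue;

    public LongSettingSketch(String key, long defaultValue, long minValue) {
        this.key = key;
        this.defaultValue = defaultValue;
        this.minValue = minValue;
    }

    // Parse a raw value, falling back to the default when absent,
    // and reject values below the configured minimum.
    public long get(String rawValue) {
        long value = (rawValue == null) ? defaultValue : Long.parseLong(rawValue);
        if (value < minValue) {
            throw new IllegalArgumentException(
                "value [" + value + "] for setting [" + key + "] must be >= " + minValue);
        }
        return value;
    }

    public static void main(String[] args) {
        LongSettingSketch limit = new LongSettingSketch(
            "action.search.shard_count.limit", Long.MAX_VALUE, 1L);
        // With the new default, the limit is effectively unlimited.
        System.out.println(limit.get(null));
        // Users can still opt back into a restriction, e.g. the old default.
        System.out.println(limit.get("1000"));
    }
}
```

With `Long.MAX_VALUE` as the default, no realistic shard count can exceed the limit, while the minimum of `1L` still guards against nonsensical values like `0`.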

+ 7 - 6
docs/reference/search/search.asciidoc

@@ -60,9 +60,10 @@ GET /_search?q=tag:wow
 // CONSOLE
 // TEST[setup:twitter]
 
-By default elasticsearch rejects search requests that would query more than
-1000 shards. The reason is that such large numbers of shards make the job of
-the coordinating node very CPU and memory intensive. It is usually a better
-idea to organize data in such a way that there are fewer larger shards. In
-case you would like to bypass this limit, which is discouraged, you can update
-the `action.search.shard_count.limit` cluster setting to a greater value.
+By default elasticsearch doesn't reject any search requests based on the number
+of shards the request hits. While elasticsearch will optimize the search execution
+on the coordinating node a large number of shards can have a significant impact
+CPU and memory wise. It is usually a better idea to organize data in such a way
+that there are fewer larger shards. In case you would like to configure a soft
+limit, you can update the `action.search.shard_count.limit` cluster setting in order
+to reject search requests that hit too many shards.
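After this change, a deployment that wants the old behavior back can set the soft limit explicitly via the cluster settings API, e.g. restoring the previous default of 1000 (shown in the docs' console style):

```
PUT /_cluster/settings
{
  "transient": {
    "action.search.shard_count.limit": 1000
  }
}
```

Because the setting is declared `Property.Dynamic`, it can be updated at runtime without a node restart.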