[[size-your-shards]]
== Size your shards

To protect against hardware failure and increase capacity, {es} stores copies of
an index's data across multiple shards on multiple nodes. The number and size of
these shards can have a significant impact on your cluster's health. One common
problem is _oversharding_, a situation in which a cluster with a large number of
shards becomes unstable.

[discrete]
[[create-a-sharding-strategy]]
=== Create a sharding strategy

The best way to prevent oversharding and other shard-related issues is to
create a sharding strategy. A sharding strategy helps you determine and
maintain the optimal number of shards for your cluster while limiting the size
of those shards.

Unfortunately, there is no one-size-fits-all sharding strategy. A strategy that
works in one environment may not scale in another. A good sharding strategy must
account for your infrastructure, use case, and performance expectations.

The best way to create a sharding strategy is to benchmark your production data
on production hardware using the same queries and indexing loads you'd see in
production. For our recommended methodology, watch the
https://www.elastic.co/elasticon/conf/2016/sf/quantitative-cluster-sizing[quantitative
cluster sizing video]. As you test different shard configurations, use {kib}'s
{kibana-ref}/elasticsearch-metrics.html[{es} monitoring tools] to track your
cluster's stability and performance.

The following sections provide some reminders and guidelines you should consider
when designing your sharding strategy. If your cluster has shard-related
problems, see <<fix-an-oversharded-cluster>>.

[discrete]
[[shard-sizing-considerations]]
=== Sizing considerations

Keep the following things in mind when building your sharding strategy.

[discrete]
[[single-thread-per-shard]]
==== Searches run on a single thread per shard

Most searches hit multiple shards. Each shard runs the search on a single
CPU thread. While a shard can run multiple concurrent searches, searches across
a large number of shards can deplete a node's <<modules-threadpool,search
thread pool>>. This can result in low throughput and slow search speeds.

[discrete]
[[each-shard-has-overhead]]
==== Each shard has overhead

Every shard uses memory and CPU resources. In most cases, a small
set of large shards uses fewer resources than many small shards.

Segments play a big role in a shard's resource usage. Most shards contain
several segments, which store the shard's index data. {es} keeps segment
metadata in JVM heap memory so it can be quickly retrieved for searches. As a
shard grows, its segments are <<index-modules-merge,merged>> into fewer, larger
segments. This decreases the number of segments, which means less metadata is
kept in heap memory.
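If you want a quick view of this overhead, one option is the <<cat-shards,cat
shards API>>. For example, a request along the following lines lists each
shard's segment count, sorted so the most segment-heavy shards appear first:

[source,console]
----
GET _cat/shards?v=true&h=index,shard,prirep,segments.count&s=segments.count:desc
----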
[discrete]
[[shard-auto-balance]]
==== {es} automatically balances shards within a data tier

A cluster's nodes are grouped into <<data-tiers,data tiers>>. Within each tier,
{es} attempts to spread an index's shards across as many nodes as possible. When
you add a new node or a node fails, {es} automatically rebalances the index's
shards across the tier's remaining nodes.

[discrete]
[[shard-size-best-practices]]
=== Best practices

Where applicable, use the following best practices as starting points for your
sharding strategy.

[discrete]
[[delete-indices-not-documents]]
==== Delete indices, not documents

Deleted documents aren't immediately removed from {es}'s file system.
Instead, {es} marks the document as deleted on each related shard. The marked
document will continue to use resources until it's removed during a periodic
<<index-modules-merge,segment merge>>.

When possible, delete entire indices instead. {es} can immediately remove
deleted indices directly from the file system and free up resources.

[discrete]
[[use-ds-ilm-for-time-series]]
==== Use data streams and {ilm-init} for time series data

<<data-streams,Data streams>> let you store time series data across multiple,
time-based backing indices. You can use <<index-lifecycle-management,{ilm}
({ilm-init})>> to automatically manage these backing indices.

[role="screenshot"]
image:images/ilm/index-lifecycle-policies.png[]

One advantage of this setup is
<<getting-started-index-lifecycle-management,automatic rollover>>, which creates
a new write index when the current one meets a defined `max_primary_shard_size`,
`max_age`, `max_docs`, or `max_size` threshold. When an index is no longer
needed, you can use {ilm-init} to automatically delete it and free up resources.

{ilm-init} also makes it easy to change your sharding strategy over time:

* *Want to decrease the shard count for new indices?* +
Change the <<index-number-of-shards,`index.number_of_shards`>> setting in the
data stream's <<data-streams-change-mappings-and-settings,matching index
template>>.

* *Want larger shards?* +
Increase your {ilm-init} policy's <<ilm-rollover,rollover threshold>>.

* *Need indices that span shorter intervals?* +
Offset the increased shard count by deleting older indices sooner. You can do
this by lowering the `min_age` threshold for your policy's
<<ilm-index-lifecycle,delete phase>>.

Every new backing index is an opportunity to further tune your strategy.

[discrete]
[[shard-size-recommendation]]
==== Aim for shard sizes between 10GB and 50GB

Large shards may make a cluster less likely to recover from failure. When a node
fails, {es} rebalances the node's shards across the data tier's remaining nodes.
Large shards can be harder to move across a network and may tax node resources.

While not a hard limit, shards between 10GB and 50GB tend to work well. You may
be able to use larger shards depending on your network and use case.

If you use {ilm-init}, set the <<ilm-rollover,rollover action>>'s
`max_primary_shard_size` threshold to `50gb` to avoid shards larger than 50GB.
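For example, an {ilm-init} policy along the following lines rolls over once the
largest primary shard reaches 50GB and removes backing indices 30 days after
rollover. The policy name and the 30-day retention are placeholders for
illustration; choose values that match your own retention requirements.

[source,console]
----
PUT _ilm/policy/my-sharding-policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_primary_shard_size": "50gb"
          }
        }
      },
      "delete": {
        "min_age": "30d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}
----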
To see the current size of your shards, use the <<cat-shards,cat shards API>>.

[source,console]
----
GET _cat/shards?v=true&h=index,prirep,shard,store&s=prirep,store&bytes=gb
----
// TEST[setup:my_index]

In the response, the `store` value shows the size of each shard on disk.

[source,txt]
----
index                                 prirep shard store
.ds-my-data-stream-2099.05.06-000001  p      0      50gb
...
----
// TESTRESPONSE[non_json]
// TESTRESPONSE[s/\.ds-my-data-stream-2099\.05\.06-000001/my-index-000001/]
// TESTRESPONSE[s/50gb/.*/]

[discrete]
[[shard-count-recommendation]]
==== Aim for 20 shards or fewer per GB of heap memory

The number of shards a node can hold is proportional to the node's
heap memory. For example, a node with 30GB of heap memory should
have at most 600 shards. The further below this limit you can keep your nodes,
the better. If you find your nodes exceeding 20 shards per GB,
consider adding another node.

To check the current size of each node's heap, use the <<cat-nodes,cat nodes
API>>.

[source,console]
----
GET _cat/nodes?v=true&h=heap.current
----
// TEST[setup:my_index]

You can use the <<cat-shards,cat shards API>> to check the number of shards per
node.

[source,console]
----
GET _cat/shards?v=true
----
// TEST[setup:my_index]

[discrete]
[[avoid-node-hotspots]]
==== Avoid node hotspots

If too many shards are allocated to a specific node, the node can become a
hotspot. For example, if a single node contains too many shards for an index
with a high indexing volume, the node is likely to have issues.

To prevent hotspots, use the
<<total-shards-per-node,`index.routing.allocation.total_shards_per_node`>> index
setting to explicitly limit the number of shards on a single node. You can
configure `index.routing.allocation.total_shards_per_node` using the
<<indices-update-settings,update index settings API>>.

[source,console]
--------------------------------------------------
PUT my-index-000001/_settings
{
  "index" : {
    "routing.allocation.total_shards_per_node" : 5
  }
}
--------------------------------------------------
// TEST[setup:my_index]

[discrete]
[[fix-an-oversharded-cluster]]
=== Fix an oversharded cluster

If your cluster is experiencing stability issues due to oversharded indices,
you can use one or more of the following methods to fix them.

[discrete]
[[create-indices-that-cover-longer-time-periods]]
==== Create indices that cover longer time periods

If you use {ilm-init} and your retention policy allows it, avoid using a
`max_age` threshold for the rollover action. Instead, use
`max_primary_shard_size` to avoid creating empty indices or many small shards.

If your retention policy requires a `max_age` threshold, increase it to create
indices that cover longer time intervals. For example, instead of creating daily
indices, you can create indices on a weekly or monthly basis.

[discrete]
[[delete-empty-indices]]
==== Delete empty or unneeded indices

If you're using {ilm-init} and roll over indices based on a `max_age` threshold,
you can inadvertently create indices with no documents. These empty indices
provide no benefit but still consume resources.
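One way to spot these indices across the whole cluster is to list every index
sorted by document count, for example with the <<cat-indices,cat indices API>>.
The `docs.count` and `store.size` columns used here are standard cat indices
columns:

[source,console]
----
GET _cat/indices?v=true&h=index,docs.count,store.size&s=docs.count:asc
----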
To check the document count of a specific index, use the <<cat-count,cat count
API>>.

[source,console]
----
GET _cat/count/my-index-000001?v=true
----
// TEST[setup:my_index]

Once you have a list of empty indices, you can delete them using the
<<indices-delete-index,delete index API>>. You can also delete any other
unneeded indices.

[source,console]
----
DELETE my-index-*
----
// TEST[setup:my_index]

[discrete]
[[force-merge-during-off-peak-hours]]
==== Force merge during off-peak hours

If you no longer write to an index, you can use the <<indices-forcemerge,force
merge API>> to <<index-modules-merge,merge>> smaller segments into larger ones.
This can reduce shard overhead and improve search speeds. However, force merges
are resource-intensive. If possible, run the force merge during off-peak hours.

[source,console]
----
POST my-index-000001/_forcemerge
----
// TEST[setup:my_index]

[discrete]
[[shrink-existing-index-to-fewer-shards]]
==== Shrink an existing index to fewer shards

If you no longer write to an index, you can use the
<<indices-shrink-index,shrink index API>> to reduce its shard count.

[source,console]
----
POST my-index-000001/_shrink/my-shrunken-index-000001
----
// TEST[s/^/PUT my-index-000001\n{"settings":{"index.number_of_shards":2,"blocks.write":true}}\n/]

{ilm-init} also has a <<ilm-shrink,shrink action>> for indices in the
warm phase.

[discrete]
[[combine-smaller-indices]]
==== Combine smaller indices

You can also use the <<docs-reindex,reindex API>> to combine indices
with similar mappings into a single large index. For time series data, you could
reindex indices for short time periods into a new index covering a
longer period. For example, you could reindex daily indices from October with a
shared index pattern, such as `my-index-2099.10.11`, into a monthly
`my-index-2099.10` index. After the reindex, delete the smaller indices.

[source,console]
----
POST _reindex
{
  "source": {
    "index": "my-index-2099.10.*"
  },
  "dest": {
    "index": "my-index-2099.10"
  }
}
----
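Before deleting the smaller indices, you may want to confirm that the combined
index contains the expected number of documents. One quick check is the
<<cat-count,cat count API>> on the new index from the example above:

[source,console]
----
GET _cat/count/my-index-2099.10?v=true
----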