[[size-your-shards]]
== Size your shards

Each index in {es} is divided into one or more shards, each of which may be
replicated across multiple nodes to protect against hardware failures. If you
are using <<data-streams>> then each data stream is backed by a sequence of
indices. There is a limit to the amount of data you can store on a single node
so you can increase the capacity of your cluster by adding nodes and increasing
the number of indices and shards to match. However, each index and shard has
some overhead and if you divide your data across too many shards then the
overhead can become overwhelming. A cluster with too many indices or shards is
said to suffer from _oversharding_. An oversharded cluster will be less
efficient at responding to searches and in extreme cases it may even become
unstable.

[discrete]
[[create-a-sharding-strategy]]
=== Create a sharding strategy

The best way to prevent oversharding and other shard-related issues is to
create a sharding strategy. A sharding strategy helps you determine and
maintain the optimal number of shards for your cluster while limiting the size
of those shards.

Unfortunately, there is no one-size-fits-all sharding strategy. A strategy that
works in one environment may not scale in another. A good sharding strategy
must account for your infrastructure, use case, and performance expectations.

The best way to create a sharding strategy is to benchmark your production data
on production hardware using the same queries and indexing loads you'd see in
production. For our recommended methodology, watch the
https://www.elastic.co/elasticon/conf/2016/sf/quantitative-cluster-sizing[quantitative
cluster sizing video]. As you test different shard configurations, use {kib}'s
{kibana-ref}/elasticsearch-metrics.html[{es} monitoring tools] to track your
cluster's stability and performance.

The following sections provide some reminders and guidelines you should
consider when designing your sharding strategy. If your cluster is already
oversharded, see <<reduce-cluster-shard-count>>.

[discrete]
[[shard-sizing-considerations]]
=== Sizing considerations

Keep the following things in mind when building your sharding strategy.

[discrete]
[[single-thread-per-shard]]
==== Searches run on a single thread per shard

Most searches hit multiple shards. Each shard runs the search on a single
CPU thread. While a shard can run multiple concurrent searches, searches across a
large number of shards can deplete a node's <<modules-threadpool,search
thread pool>>. This can result in low throughput and slow search speeds.
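
If you suspect that searches are queuing up or being rejected, you can check the
search thread pool with the cat thread pool API. The following query is only an
illustration; adjust the columns to your needs.

[source,console]
----
GET _cat/thread_pool/search?v=true&h=node_name,name,active,queue,rejected
----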

[discrete]
[[each-shard-has-overhead]]
==== Each index, shard, segment and field has overhead

Every index and every shard requires some memory and CPU resources. In most
cases, a small set of large shards uses fewer resources than many small shards.

Segments play a big role in a shard's resource usage. Most shards contain
several segments, which store the shard's index data. {es} keeps some segment
metadata in heap memory so it can be quickly retrieved for searches. As a shard
grows, its segments are <<index-modules-merge,merged>> into fewer, larger
segments. This decreases the number of segments, which means less metadata is
kept in heap memory.

Every mapped field also carries some overhead in terms of memory usage and disk
space. By default {es} will automatically create a mapping for every field in
every document it indexes, but you can switch off this behaviour to
<<explicit-mapping,take control of your mappings>>.

Moreover, every segment requires a small amount of heap memory for each mapped
field. This per-segment-per-field heap overhead includes a copy of the field
name, encoded using ISO-8859-1 if applicable or UTF-16 otherwise. Usually this
is not noticeable, but you may need to account for this overhead if your shards
have high segment counts and the corresponding mappings contain high field
counts and/or very long field names.
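
To get a sense of how many segments your shards currently contain, you can use
the cat segments API. The index name in this example is illustrative.

[source,console]
----
GET _cat/segments/my-index-000001?v=true
----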

[discrete]
[[shard-auto-balance]]
==== {es} automatically balances shards within a data tier

A cluster's nodes are grouped into <<data-tiers,data tiers>>. Within each tier,
{es} attempts to spread an index's shards across as many nodes as possible. When
you add a new node or a node fails, {es} automatically rebalances the index's
shards across the tier's remaining nodes.

[discrete]
[[shard-size-best-practices]]
=== Best practices

Where applicable, use the following best practices as starting points for your
sharding strategy.

[discrete]
[[delete-indices-not-documents]]
==== Delete indices, not documents

Deleted documents aren't immediately removed from {es}'s file system.
Instead, {es} marks the document as deleted on each related shard. The marked
document will continue to use resources until it's removed during a periodic
<<index-modules-merge,segment merge>>.

When possible, delete entire indices instead. {es} can immediately remove
deleted indices directly from the file system and free up resources.

[discrete]
[[use-ds-ilm-for-time-series]]
==== Use data streams and {ilm-init} for time series data

<<data-streams,Data streams>> let you store time series data across multiple,
time-based backing indices. You can use <<index-lifecycle-management,{ilm}
({ilm-init})>> to automatically manage these backing indices.

One advantage of this setup is
<<getting-started-index-lifecycle-management,automatic rollover>>, which creates
a new write index when the current one meets a defined `max_primary_shard_size`,
`max_age`, `max_docs`, or `max_size` threshold. When an index is no longer
needed, you can use {ilm-init} to automatically delete it and free up resources.
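
For example, an {ilm-init} policy along the following lines rolls over to a new
backing index when either threshold is met and deletes old indices later. The
policy name and threshold values are illustrative, not recommendations; choose
values that match your own sizing and retention requirements.

[source,console]
----
PUT _ilm/policy/my-timeseries-policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_primary_shard_size": "50gb",
            "max_age": "30d"
          }
        }
      },
      "delete": {
        "min_age": "90d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}
----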

{ilm-init} also makes it easy to change your sharding strategy over time:

* *Want to decrease the shard count for new indices?* +
Change the <<index-number-of-shards,`index.number_of_shards`>> setting in the
data stream's <<data-streams-change-mappings-and-settings,matching index
template>>, as shown in the example after this list.

* *Want larger shards or fewer backing indices?* +
Increase your {ilm-init} policy's <<ilm-rollover,rollover threshold>>.

* *Need indices that span shorter intervals?* +
Offset the increased shard count by deleting older indices sooner. You can do
this by lowering the `min_age` threshold for your policy's
<<ilm-index-lifecycle,delete phase>>.

Every new backing index is an opportunity to further tune your strategy.
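
For example, to lower the shard count of future backing indices as described in
the first item above, you might update the data stream's matching index
template along these lines. The template name, index pattern, and shard count
are illustrative only.

[source,console]
----
PUT _index_template/my-data-stream-template
{
  "index_patterns": ["my-data-stream*"],
  "data_stream": {},
  "template": {
    "settings": {
      "index.number_of_shards": 1
    }
  }
}
----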

[discrete]
[[shard-size-recommendation]]
==== Aim for shard sizes between 10GB and 50GB

Larger shards take longer to recover after a failure. When a node fails, {es}
rebalances the node's shards across the data tier's remaining nodes. This
recovery process typically involves copying the shard contents across the
network, so a 100GB shard will take twice as long to recover as a 50GB shard.
In contrast, small shards carry proportionally more overhead and are less
efficient to search. Searching fifty 1GB shards will take substantially more
resources than searching a single 50GB shard containing the same data.

There are no hard limits on shard size, but experience shows that shards
between 10GB and 50GB typically work well for logs and time series data. You
may be able to use larger shards depending on your network and use case.
Smaller shards may be appropriate for
{enterprise-search-ref}/index.html[Enterprise Search] and similar use cases.

If you use {ilm-init}, set the <<ilm-rollover,rollover action>>'s
`max_primary_shard_size` threshold to `50gb` to avoid shards larger than 50GB.

To see the current size of your shards, use the <<cat-shards,cat shards API>>.

[source,console]
----
GET _cat/shards?v=true&h=index,prirep,shard,store&s=prirep,store&bytes=gb
----
// TEST[setup:my_index]

In the response, the `store` value shows the size of each shard, and `prirep`
indicates whether the shard is a primary (`p`) or a replica (`r`).

[source,txt]
----
index                                 prirep shard store
.ds-my-data-stream-2099.05.06-000001  p      0     50gb
...
----
// TESTRESPONSE[non_json]
// TESTRESPONSE[s/\.ds-my-data-stream-2099\.05\.06-000001/my-index-000001/]
// TESTRESPONSE[s/50gb/.*/]

[discrete]
[[shard-count-recommendation]]
==== Master-eligible nodes should have at least 1GB of heap per 3000 indices

The number of indices a master node can manage is proportional to its heap
size. The exact amount of heap memory needed for each index depends on various
factors such as the size of the mapping and the number of shards per index.

As a general rule of thumb, you should have fewer than 3000 indices per GB of
heap on master nodes. For example, if your cluster has dedicated master nodes
with 4GB of heap each then you should have fewer than 12000 indices. If your
master nodes are not dedicated master nodes then the same sizing guidance
applies: you should reserve at least 1GB of heap on each master-eligible node
for every 3000 indices in your cluster.

Note that this rule defines the absolute maximum number of indices that a
master node can manage, but does not guarantee the performance of searches or
indexing involving this many indices. You must also ensure that your data nodes
have adequate resources for your workload and that your overall sharding
strategy meets all your performance requirements. See also
<<single-thread-per-shard>> and <<each-shard-has-overhead>>.
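
To see how many indices your cluster currently contains, one option is to
filter the <<cluster-stats,cluster stats API>> response, for example:

[source,console]
----
GET _cluster/stats?filter_path=indices.count
----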

To check the configured size of each node's heap, use the <<cat-nodes,cat nodes
API>>.

[source,console]
----
GET _cat/nodes?v=true&h=heap.max
----
// TEST[setup:my_index]

You can use the <<cat-shards,cat shards API>> to check the number of shards per
node.

[source,console]
----
GET _cat/shards?v=true
----
// TEST[setup:my_index]

[discrete]
[[field-count-recommendation]]
==== Data nodes should have at least 1kB of heap per field per index, plus overheads

The exact resource usage of each mapped field depends on its type, but a rule
of thumb is to allow for approximately 1kB of heap overhead per mapped field
per index held by each data node. In a running cluster, you can also consult the
<<cluster-nodes-stats,Nodes stats API>>'s `mappings` indices statistic, which
reports the number of field mappings and an estimation of their heap overhead.
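
For example, you can filter the nodes stats response down to just the mapping
statistics. The exact response fields may differ between versions, so treat
this query as a sketch.

[source,console]
----
GET _nodes/stats?filter_path=nodes.*.indices.mappings&human=true
----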

Additionally, you must allow enough heap for {es}'s
baseline usage as well as your workload such as indexing, searches and
aggregations. 0.5GB of extra heap will suffice for many reasonable workloads,
and you may need even less if your workload is very light while heavy workloads
may require more.

For example, if a data node holds shards from 1000 indices, each containing
4000 mapped fields, then you should allow approximately 1000 × 4000 × 1kB = 4GB
of heap for the fields and another 0.5GB of heap for its workload and other
overheads, and therefore this node will need a heap size of at least 4.5GB.

Note that this rule defines the absolute maximum number of indices that a data
node can manage, but does not guarantee the performance of searches or indexing
involving this many indices. You must also ensure that your data nodes have
adequate resources for your workload and that your overall sharding strategy
meets all your performance requirements. See also <<single-thread-per-shard>>
and <<each-shard-has-overhead>>.

[discrete]
[[avoid-node-hotspots]]
==== Avoid node hotspots

If too many shards are allocated to a specific node, the node can become a
hotspot. For example, if a single node contains too many shards for an index
with a high indexing volume, the node is likely to have issues.

To prevent hotspots, use the
<<total-shards-per-node,`index.routing.allocation.total_shards_per_node`>> index
setting to explicitly limit the number of shards on a single node. You can
configure `index.routing.allocation.total_shards_per_node` using the
<<indices-update-settings,update index settings API>>.

[source,console]
--------------------------------------------------
PUT my-index-000001/_settings
{
  "index" : {
    "routing.allocation.total_shards_per_node" : 5
  }
}
--------------------------------------------------
// TEST[setup:my_index]

[discrete]
[[avoid-unnecessary-fields]]
==== Avoid unnecessary mapped fields

By default {es} <<dynamic-mapping,automatically creates a mapping>> for every
field in every document it indexes. Every mapped field corresponds to some data
structures on disk which are needed for efficient search, retrieval, and
aggregations on this field. Details about each mapped field are also held in
memory. In many cases this overhead is unnecessary because a field is not used
in any searches or aggregations. Use <<explicit-mapping>> instead of dynamic
mapping to avoid creating fields that are never used. If a collection of fields
are typically used together, consider using <<copy-to>> to consolidate them at
index time. If a field is only rarely used, it may be better to make it a
<<runtime,Runtime field>> instead.
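
As a sketch of what explicit mappings can look like, the following request
creates an index with dynamic mapping disabled and only the fields you actually
query mapped. The index name and fields are hypothetical.

[source,console]
----
PUT my-explicit-index
{
  "mappings": {
    "dynamic": false,
    "properties": {
      "@timestamp": { "type": "date" },
      "message":    { "type": "text" }
    }
  }
}
----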

You can get information about which fields are being used with the
<<field-usage-stats>> API, and you can analyze the disk usage of mapped fields
using the <<indices-disk-usage>> API. Note however that unnecessary mapped
fields also carry some memory overhead as well as their disk usage.
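
For example, both APIs can be run against a specific index. The disk usage API
analyzes the index on demand and is itself resource-intensive, which is why it
requires the opt-in parameter shown below. The index name is illustrative.

[source,console]
----
GET my-index-000001/_field_usage_stats

POST my-index-000001/_disk_usage?run_expensive_tasks=true
----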

[discrete]
[[reduce-cluster-shard-count]]
=== Reduce a cluster's shard count

If your cluster is already oversharded, you can use one or more of the following
methods to reduce its shard count.

[discrete]
[[create-indices-that-cover-longer-time-periods]]
==== Create indices that cover longer time periods

If you use {ilm-init} and your retention policy allows it, avoid using a
`max_age` threshold for the rollover action. Instead, use
`max_primary_shard_size` to avoid creating empty indices or many small shards.

If your retention policy requires a `max_age` threshold, increase it to create
indices that cover longer time intervals. For example, instead of creating daily
indices, you can create indices on a weekly or monthly basis.

[discrete]
[[delete-empty-indices]]
==== Delete empty or unneeded indices

If you're using {ilm-init} and roll over indices based on a `max_age` threshold,
you can inadvertently create indices with no documents. These empty indices
provide no benefit but still consume resources.

You can find these empty indices using the <<cat-count,cat count API>>.

[source,console]
----
GET _cat/count/my-index-000001?v=true
----
// TEST[setup:my_index]

Once you have a list of empty indices, you can delete them using the
<<indices-delete-index,delete index API>>. You can also delete any other
unneeded indices.

[source,console]
----
DELETE my-index-000001
----
// TEST[setup:my_index]

[discrete]
[[force-merge-during-off-peak-hours]]
==== Force merge during off-peak hours

If you no longer write to an index, you can use the <<indices-forcemerge,force
merge API>> to <<index-modules-merge,merge>> smaller segments into larger ones.
This can reduce shard overhead and improve search speeds. However, force merges
are resource-intensive. If possible, run the force merge during off-peak hours.

[source,console]
----
POST my-index-000001/_forcemerge
----
// TEST[setup:my_index]

[discrete]
[[shrink-existing-index-to-fewer-shards]]
==== Shrink an existing index to fewer shards

If you no longer write to an index, you can use the
<<indices-shrink-index,shrink index API>> to reduce its shard count.
{ilm-init} also has a <<ilm-shrink,shrink action>> for indices in the
warm phase.
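
As a rough sketch, shrinking an index by hand involves blocking writes on the
source index and then calling the shrink API with a lower shard count. The
target index name and shard count below are illustrative; see the
<<indices-shrink-index,shrink index API>> documentation for the full list of
prerequisites, such as having a copy of every shard on the same node.

[source,console]
----
PUT my-index-000001/_settings
{
  "settings": {
    "index.blocks.write": true
  }
}

POST my-index-000001/_shrink/my-shrunken-index-000001
{
  "settings": {
    "index.number_of_shards": 1
  }
}
----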

[discrete]
[[combine-smaller-indices]]
==== Combine smaller indices

You can also use the <<docs-reindex,reindex API>> to combine indices
with similar mappings into a single large index. For time series data, you could
reindex indices for short time periods into a new index covering a
longer period. For example, you could reindex daily indices from October with a
shared index pattern, such as `my-index-2099.10.11`, into a monthly
`my-index-2099.10` index. After the reindex, delete the smaller indices.

[source,console]
----
POST _reindex
{
  "source": {
    "index": "my-index-2099.10.*"
  },
  "dest": {
    "index": "my-index-2099.10"
  }
}
----

[discrete]
[[troubleshoot-shard-related-errors]]
=== Troubleshoot shard-related errors

Here's how to resolve common shard-related errors.

[discrete]
==== this action would add [x] total shards, but this cluster currently has [y]/[z] maximum shards open;

The <<cluster-max-shards-per-node,`cluster.max_shards_per_node`>> cluster
setting limits the maximum number of open shards for a cluster. This error
indicates an action would exceed this limit.

If you're confident your changes won't destabilize the cluster, you can
temporarily increase the limit using the <<cluster-update-settings,cluster
update settings API>> and retry the action.

[source,console]
----
PUT _cluster/settings
{
  "persistent" : {
    "cluster.max_shards_per_node": 1200
  }
}
----

This increase should only be temporary. As a long-term solution, we recommend
you add nodes to the oversharded data tier or
<<reduce-cluster-shard-count,reduce your cluster's shard count>>. To get a
cluster's current shard count after making changes, use the
<<cluster-stats,cluster stats API>>.

[source,console]
----
GET _cluster/stats?filter_path=indices.shards.total
----

When a long-term solution is in place, we recommend you reset the
`cluster.max_shards_per_node` limit.

[source,console]
----
PUT _cluster/settings
{
  "persistent" : {
    "cluster.max_shards_per_node": null
  }
}
----