[[size-your-shards]]
== Size your shards

Each index in {es} is divided into one or more shards, each of which may be
replicated across multiple nodes to protect against hardware failures. If you
are using <<data-streams>> then each data stream is backed by a sequence of
indices. There is a limit to the amount of data you can store on a single node
so you can increase the capacity of your cluster by adding nodes and increasing
the number of indices and shards to match. However, each index and shard has
some overhead and if you divide your data across too many shards then the
overhead can become overwhelming. A cluster with too many indices or shards is
said to suffer from _oversharding_. An oversharded cluster will be less
efficient at responding to searches and in extreme cases it may even become
unstable.

[discrete]
[[create-a-sharding-strategy]]
=== Create a sharding strategy

The best way to prevent oversharding and other shard-related issues is to
create a sharding strategy. A sharding strategy helps you determine and
maintain the optimal number of shards for your cluster while limiting the size
of those shards.

Unfortunately, there is no one-size-fits-all sharding strategy. A strategy that
works in one environment may not scale in another. A good sharding strategy
must account for your infrastructure, use case, and performance expectations.

The best way to create a sharding strategy is to benchmark your production data
on production hardware using the same queries and indexing loads you'd see in
production. For our recommended methodology, watch the
https://www.elastic.co/elasticon/conf/2016/sf/quantitative-cluster-sizing[quantitative
cluster sizing video]. As you test different shard configurations, use {kib}'s
{kibana-ref}/elasticsearch-metrics.html[{es} monitoring tools] to track your
cluster's stability and performance.

The following sections provide some reminders and guidelines you should
consider when designing your sharding strategy. If your cluster is already
oversharded, see <<reduce-cluster-shard-count>>.

[discrete]
[[shard-sizing-considerations]]
=== Sizing considerations

Keep the following things in mind when building your sharding strategy.

[discrete]
[[single-thread-per-shard]]
==== Searches run on a single thread per shard

Most searches hit multiple shards. Each shard runs the search on a single
CPU thread. While a shard can run multiple concurrent searches, searches across a
large number of shards can deplete a node's <<modules-threadpool,search
thread pool>>. This can result in low throughput and slow search speeds.
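
If you suspect the search thread pool is being depleted, one quick way to check
is the <<cat-thread-pool,cat thread pool API>>. The following request is an
illustrative sketch; a steadily growing `queue` or a nonzero `rejected` count
during searches that fan out across many shards suggests the pool is saturated.

[source,console]
----
GET _cat/thread_pool/search?v=true&h=node_name,name,active,queue,rejected,completed
----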

[discrete]
[[each-shard-has-overhead]]
==== Each index, shard, segment and field has overhead

Every index and every shard requires some memory and CPU resources. In most
cases, a small set of large shards uses fewer resources than many small shards.

Segments play a big role in a shard's resource usage. Most shards contain
several segments, which store its index data. {es} keeps some segment metadata
in heap memory so it can be quickly retrieved for searches. As a shard grows,
its segments are <<index-modules-merge,merged>> into fewer, larger segments.
This decreases the number of segments, which means less metadata is kept in
heap memory.

Every mapped field also carries some overhead in terms of memory usage and disk
space. By default {es} will automatically create a mapping for every field in
every document it indexes, but you can switch off this behaviour to
<<explicit-mapping,take control of your mappings>>.

Moreover every segment requires a small amount of heap memory for each mapped
field. This per-segment-per-field heap overhead includes a copy of the field
name, encoded using ISO-8859-1 if applicable or UTF-16 otherwise. Usually this
is not noticeable, but you may need to account for this overhead if your shards
have high segment counts and the corresponding mappings contain high field
counts and/or very long field names.
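
To see how this plays out on a real index, you can list each shard's segments
with the <<cat-segments,cat segments API>>. This is an illustrative sketch that
uses the placeholder index name `my-index-000001`.

[source,console]
----
GET _cat/segments/my-index-000001?v=true&h=index,shard,segment,size
----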

[discrete]
[[shard-auto-balance]]
==== {es} automatically balances shards within a data tier

A cluster's nodes are grouped into <<data-tiers,data tiers>>. Within each tier,
{es} attempts to spread an index's shards across as many nodes as possible. When
you add a new node or a node fails, {es} automatically rebalances the index's
shards across the tier's remaining nodes.
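
To see how shards and disk usage are currently spread across your nodes, you can
use the <<cat-allocation,cat allocation API>>, as in this illustrative request.

[source,console]
----
GET _cat/allocation?v=true&h=node,shards,disk.percent
----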

[discrete]
[[shard-size-best-practices]]
=== Best practices

Where applicable, use the following best practices as starting points for your
sharding strategy.

[discrete]
[[delete-indices-not-documents]]
==== Delete indices, not documents

Deleted documents aren't immediately removed from {es}'s file system.
Instead, {es} marks the document as deleted on each related shard. The marked
document will continue to use resources until it's removed during a periodic
<<index-modules-merge,segment merge>>.

When possible, delete entire indices instead. {es} can immediately remove
deleted indices directly from the file system and free up resources.

[discrete]
[[use-ds-ilm-for-time-series]]
==== Use data streams and {ilm-init} for time series data

<<data-streams,Data streams>> let you store time series data across multiple,
time-based backing indices. You can use <<index-lifecycle-management,{ilm}
({ilm-init})>> to automatically manage these backing indices.

One advantage of this setup is
<<getting-started-index-lifecycle-management,automatic rollover>>, which creates
a new write index when the current one meets a defined `max_primary_shard_size`,
`max_age`, `max_docs`, or `max_size` threshold. When an index is no longer
needed, you can use {ilm-init} to automatically delete it and free up resources.

{ilm-init} also makes it easy to change your sharding strategy over time:

* *Want to decrease the shard count for new indices?* +
Change the <<index-number-of-shards,`index.number_of_shards`>> setting in the
data stream's <<data-streams-change-mappings-and-settings,matching index
template>>, as sketched below.

* *Want larger shards or fewer backing indices?* +
Increase your {ilm-init} policy's <<ilm-rollover,rollover threshold>>.

* *Need indices that span shorter intervals?* +
Offset the increased shard count by deleting older indices sooner. You can do
this by lowering the `min_age` threshold for your policy's
<<ilm-index-lifecycle,delete phase>>.

Every new backing index is an opportunity to further tune your strategy.
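
As a sketch of the first option above, you could lower the shard count for
future backing indices by updating the data stream's index template. The
template name and index pattern here are placeholders; adjust them to match
your own data stream.

[source,console]
----
PUT _index_template/my-data-stream-template
{
  "index_patterns": ["my-data-stream*"],
  "data_stream": {},
  "template": {
    "settings": {
      "index.number_of_shards": 1
    }
  }
}
----

The new setting only affects backing indices created after the template is
updated; existing backing indices keep their current shard count.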

[discrete]
[[shard-size-recommendation]]
==== Aim for shard sizes between 10GB and 50GB

Larger shards take longer to recover after a failure. When a node fails, {es}
rebalances the node's shards across the data tier's remaining nodes. This
recovery process typically involves copying the shard contents across the
network, so a 100GB shard will take twice as long to recover as a 50GB shard.
In contrast, small shards carry proportionally more overhead and are less
efficient to search. Searching fifty 1GB shards will take substantially more
resources than searching a single 50GB shard containing the same data.

There are no hard limits on shard size, but experience shows that shards
between 10GB and 50GB typically work well for logs and time series data. You
may be able to use larger shards depending on your network and use case.
Smaller shards may be appropriate for
{enterprise-search-ref}/index.html[Enterprise Search] and similar use cases.

If you use {ilm-init}, set the <<ilm-rollover,rollover action>>'s
`max_primary_shard_size` threshold to `50gb` to avoid shards larger than 50GB.
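
For example, a minimal {ilm-init} policy along these lines rolls over the write
index once its primary shards reach 50GB. The policy name is a placeholder, and
a real policy would typically include additional phases and actions.

[source,console]
----
PUT _ilm/policy/my-logs-policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_primary_shard_size": "50gb"
          }
        }
      }
    }
  }
}
----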

To see the current size of your shards, use the <<cat-shards,cat shards API>>.

[source,console]
----
GET _cat/shards?v=true&h=index,prirep,shard,store&s=prirep,store&bytes=gb
----
// TEST[setup:my_index]

The `store` value shows the size of each shard, and the `prirep` column
identifies each shard as a primary (`p`) or replica (`r`).

[source,txt]
----
index                                 prirep shard store
.ds-my-data-stream-2099.05.06-000001  p      0     50gb
...
----
// TESTRESPONSE[non_json]
// TESTRESPONSE[s/\.ds-my-data-stream-2099\.05\.06-000001/my-index-000001/]
// TESTRESPONSE[s/50gb/.*/]

[discrete]
[[shard-count-recommendation]]
==== Master-eligible nodes should have at least 1GB of heap per 3000 indices

The number of indices a master node can manage is proportional to its heap
size. The exact amount of heap memory needed for each index depends on various
factors such as the size of the mapping and the number of shards per index.

As a general rule of thumb, you should have fewer than 3000 indices per GB of
heap on master nodes. For example, if your cluster has dedicated master nodes
with 4GB of heap each then you should have fewer than 12000 indices. If your
master nodes are not dedicated master nodes then the same sizing guidance
applies: you should reserve at least 1GB of heap on each master-eligible node
for every 3000 indices in your cluster.
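
To check how many indices your cluster currently has, and therefore how much
master heap this rule implies, you can use the <<cluster-stats,cluster stats
API>>, as in this illustrative request.

[source,console]
----
GET _cluster/stats?filter_path=indices.count
----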

Note that this rule defines the absolute maximum number of indices that a
master node can manage, but does not guarantee the performance of searches or
indexing involving this many indices. You must also ensure that your data nodes
have adequate resources for your workload and that your overall sharding
strategy meets all your performance requirements. See also
<<single-thread-per-shard>> and <<each-shard-has-overhead>>.

To check the configured size of each node's heap, use the <<cat-nodes,cat nodes
API>>.

[source,console]
----
GET _cat/nodes?v=true&h=heap.max
----
// TEST[setup:my_index]

You can use the <<cat-shards,cat shards API>> to check the number of shards per
node.

[source,console]
----
GET _cat/shards?v=true
----
// TEST[setup:my_index]

[discrete]
[[field-count-recommendation]]
==== Allow enough heap for field mappers and overheads

Mapped fields consume some heap memory on each node, and require extra
heap on data nodes.

Ensure each node has enough heap for mappings, and also allow
extra space for overheads associated with its workload. The following sections
show how to determine these heap requirements.

[discrete]
===== Mapping metadata in the cluster state

Each node in the cluster has a copy of the <<cluster-state-api-desc,cluster state>>.
The cluster state includes information about the field mappings for
each index. This information has heap overhead. You can use the
<<cluster-stats,Cluster stats API>> to get the heap overhead of the total size of
all mappings after deduplication and compression.

[source,console]
----
GET _cluster/stats?human&filter_path=indices.mappings.total_deduplicated_mapping_size*
----
// TEST[setup:node]

This will show you information like in this example output:

[source,console-result]
----
{
  "indices": {
    "mappings": {
      "total_deduplicated_mapping_size": "1gb",
      "total_deduplicated_mapping_size_in_bytes": 1073741824
    }
  }
}
----
// TESTRESPONSE[s/"total_deduplicated_mapping_size": "1gb"/"total_deduplicated_mapping_size": $body.$_path/]
// TESTRESPONSE[s/"total_deduplicated_mapping_size_in_bytes": 1073741824/"total_deduplicated_mapping_size_in_bytes": $body.$_path/]

[discrete]
===== Retrieving heap size and field mapper overheads

You can use the <<cluster-nodes-stats,Nodes stats API>> to get two relevant metrics
for each node:

* The size of the heap on each node.

* Any additional estimated heap overhead for the fields per node. This is specific to
data nodes, where apart from the cluster state field information mentioned above,
there is additional heap overhead for each mapped field of an index held by the data
node. For nodes which are not data nodes, this field may be zero.

[source,console]
----
GET _nodes/stats?human&filter_path=nodes.*.name,nodes.*.indices.mappings.total_estimated_overhead*,nodes.*.jvm.mem.heap_max*
----
// TEST[setup:node]

For each node, this will show you information like in this example output:

[source,console-result]
----
{
  "nodes": {
    "USpTGYaBSIKbgSUJR2Z9lg": {
      "name": "node-0",
      "indices": {
        "mappings": {
          "total_estimated_overhead": "1gb",
          "total_estimated_overhead_in_bytes": 1073741824
        }
      },
      "jvm": {
        "mem": {
          "heap_max": "4gb",
          "heap_max_in_bytes": 4294967296
        }
      }
    }
  }
}
----
// TESTRESPONSE[s/"USpTGYaBSIKbgSUJR2Z9lg"/\$node_name/]
// TESTRESPONSE[s/"name": "node-0"/"name": $body.$_path/]
// TESTRESPONSE[s/"total_estimated_overhead": "1gb"/"total_estimated_overhead": $body.$_path/]
// TESTRESPONSE[s/"total_estimated_overhead_in_bytes": 1073741824/"total_estimated_overhead_in_bytes": $body.$_path/]
// TESTRESPONSE[s/"heap_max": "4gb"/"heap_max": $body.$_path/]
// TESTRESPONSE[s/"heap_max_in_bytes": 4294967296/"heap_max_in_bytes": $body.$_path/]

[discrete]
===== Consider additional heap overheads

Apart from the two field overhead metrics above, you must additionally allow
enough heap for {es}'s baseline usage as well as your workload such as indexing,
searches and aggregations. 0.5GB of extra heap will suffice for many reasonable
workloads. You may need even less if your workload is very light, while heavy
workloads may require more.

[discrete]
===== Example

As an example, consider the outputs above for a data node. The heap of the node
will need at least:

* 1GB for the cluster state field information.

* 1GB for the additional estimated heap overhead for the fields of the data node.

* 0.5GB of extra heap for other overheads.

Since the node in the example has a 4GB maximum heap size, it has enough
headroom for the total required heap of 2.5GB.

If a node's maximum heap size is not sufficient, consider
<<avoid-unnecessary-fields,avoiding unnecessary fields>>,
scaling up the cluster, or redistributing index shards.

Note that the above rules do not necessarily guarantee the performance of
searches or indexing involving a very high number of indices. You must also
ensure that your data nodes have adequate resources for your workload and
that your overall sharding strategy meets all your performance requirements.
See also <<single-thread-per-shard>> and <<each-shard-has-overhead>>.

[discrete]
[[avoid-node-hotspots]]
==== Avoid node hotspots

If too many shards are allocated to a specific node, the node can become a
hotspot. For example, if a single node contains too many shards for an index
with a high indexing volume, the node is likely to have issues.

To prevent hotspots, use the
<<total-shards-per-node,`index.routing.allocation.total_shards_per_node`>> index
setting to explicitly limit the number of shards on a single node. You can
configure `index.routing.allocation.total_shards_per_node` using the
<<indices-update-settings,update index settings API>>.

[source,console]
--------------------------------------------------
PUT my-index-000001/_settings
{
  "index" : {
    "routing.allocation.total_shards_per_node" : 5
  }
}
--------------------------------------------------
// TEST[setup:my_index]

[discrete]
[[avoid-unnecessary-fields]]
==== Avoid unnecessary mapped fields

By default {es} <<dynamic-mapping,automatically creates a mapping>> for every
field in every document it indexes. Every mapped field corresponds to some data
structures on disk which are needed for efficient search, retrieval, and
aggregations on this field. Details about each mapped field are also held in
memory. In many cases this overhead is unnecessary because a field is not used
in any searches or aggregations. Use <<explicit-mapping>> instead of dynamic
mapping to avoid creating fields that are never used. If a collection of fields
are typically used together, consider using <<copy-to>> to consolidate them at
index time. If a field is only rarely used, it may be better to make it a
<<runtime,Runtime field>> instead.

You can get information about which fields are being used with the
<<field-usage-stats>> API, and you can analyze the disk usage of mapped fields
using the <<indices-disk-usage>> API. Note however that unnecessary mapped
fields also carry some memory overhead as well as their disk usage.
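
For instance, the following illustrative requests report field usage and
per-field disk usage for the placeholder index `my-index-000001`. The disk usage
analysis requires the `run_expensive_tasks` flag because it is resource-intensive.

[source,console]
----
GET my-index-000001/_field_usage_stats

POST my-index-000001/_disk_usage?run_expensive_tasks=true
----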

[discrete]
[[reduce-cluster-shard-count]]
=== Reduce a cluster's shard count

If your cluster is already oversharded, you can use one or more of the following
methods to reduce its shard count.

[discrete]
[[create-indices-that-cover-longer-time-periods]]
==== Create indices that cover longer time periods

If you use {ilm-init} and your retention policy allows it, avoid using a
`max_age` threshold for the rollover action. Instead, use
`max_primary_shard_size` to avoid creating empty indices or many small shards.

If your retention policy requires a `max_age` threshold, increase it to create
indices that cover longer time intervals. For example, instead of creating daily
indices, you can create indices on a weekly or monthly basis.

[discrete]
[[delete-empty-indices]]
==== Delete empty or unneeded indices

If you're using {ilm-init} and roll over indices based on a `max_age` threshold,
you can inadvertently create indices with no documents. These empty indices
provide no benefit but still consume resources.

You can find these empty indices using the <<cat-count,cat count API>>.

[source,console]
----
GET _cat/count/my-index-000001?v=true
----
// TEST[setup:my_index]

Once you have a list of empty indices, you can delete them using the
<<indices-delete-index,delete index API>>. You can also delete any other
unneeded indices.

[source,console]
----
DELETE my-index-000001
----
// TEST[setup:my_index]

[discrete]
[[force-merge-during-off-peak-hours]]
==== Force merge during off-peak hours

If you no longer write to an index, you can use the <<indices-forcemerge,force
merge API>> to <<index-modules-merge,merge>> smaller segments into larger ones.
This can reduce shard overhead and improve search speeds. However, force merges
are resource-intensive. If possible, run the force merge during off-peak hours.

[source,console]
----
POST my-index-000001/_forcemerge
----
// TEST[setup:my_index]

[discrete]
[[shrink-existing-index-to-fewer-shards]]
==== Shrink an existing index to fewer shards

If you no longer write to an index, you can use the
<<indices-shrink-index,shrink index API>> to reduce its shard count.
{ilm-init} also has a <<ilm-shrink,shrink action>> for indices in the
warm phase.
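
As an illustrative sketch, shrinking requires the index to be read-only with a
copy of every shard on a single node first. The node name and target index name
below are placeholders, and the target shard count must be a factor of the
source index's shard count.

[source,console]
----
PUT my-index-000001/_settings
{
  "settings": {
    "index.routing.allocation.require._name": "shrink-node-name",
    "index.blocks.write": true
  }
}

POST my-index-000001/_shrink/my-shrunken-index-000001
{
  "settings": {
    "index.number_of_shards": 1
  }
}
----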

[discrete]
[[combine-smaller-indices]]
==== Combine smaller indices

You can also use the <<docs-reindex,reindex API>> to combine indices
with similar mappings into a single large index. For time series data, you could
reindex indices for short time periods into a new index covering a
longer period. For example, you could reindex daily indices from October with a
shared index pattern, such as `my-index-2099.10.11`, into a monthly
`my-index-2099.10` index. After the reindex, delete the smaller indices.

[source,console]
----
POST _reindex
{
  "source": {
    "index": "my-index-2099.10.*"
  },
  "dest": {
    "index": "my-index-2099.10"
  }
}
----

[discrete]
[[troubleshoot-shard-related-errors]]
=== Troubleshoot shard-related errors

Here's how to resolve common shard-related errors.

[discrete]
==== this action would add [x] total shards, but this cluster currently has [y]/[z] maximum shards open;

The <<cluster-max-shards-per-node,`cluster.max_shards_per_node`>> cluster
setting limits the maximum number of open shards for a cluster. This error
indicates an action would exceed this limit.

If you're confident your changes won't destabilize the cluster, you can
temporarily increase the limit using the <<cluster-update-settings,cluster
update settings API>> and retry the action.

[source,console]
----
PUT _cluster/settings
{
  "persistent" : {
    "cluster.max_shards_per_node": 1200
  }
}
----

This increase should only be temporary. As a long-term solution, we recommend
you add nodes to the oversharded data tier or
<<reduce-cluster-shard-count,reduce your cluster's shard count>>. To get a
cluster's current shard count after making changes, use the
<<cluster-stats,cluster stats API>>.

[source,console]
----
GET _cluster/stats?filter_path=indices.shards.total
----

When a long-term solution is in place, we recommend you reset the
`cluster.max_shards_per_node` limit.

[source,console]
----
PUT _cluster/settings
{
  "persistent" : {
    "cluster.max_shards_per_node": null
  }
}
----