
[[size-your-shards]]
== How to size your shards
++++
<titleabbrev>Size your shards</titleabbrev>
++++

To protect against hardware failure and increase capacity, {es} stores copies of
an index's data across multiple shards on multiple nodes. The number and size of
these shards can have a significant impact on your cluster's health. One common
problem is _oversharding_, a situation in which a cluster with a large number of
shards becomes unstable.

[discrete]
[[create-a-sharding-strategy]]
=== Create a sharding strategy

The best way to prevent oversharding and other shard-related issues
is to create a sharding strategy. A sharding strategy helps you determine and
maintain the optimal number of shards for your cluster while limiting the size
of those shards.

Unfortunately, there is no one-size-fits-all sharding strategy. A strategy that
works in one environment may not scale in another. A good sharding strategy must
account for your infrastructure, use case, and performance expectations.

The best way to create a sharding strategy is to benchmark your production data
on production hardware using the same queries and indexing loads you'd see in
production. For our recommended methodology, watch the
https://www.elastic.co/elasticon/conf/2016/sf/quantitative-cluster-sizing[quantitative
cluster sizing video]. As you test different shard configurations, use {kib}'s
{kibana-ref}/elasticsearch-metrics.html[{es} monitoring tools] to track your
cluster's stability and performance.

The following sections provide some reminders and guidelines you should consider
when designing your sharding strategy. If your cluster has shard-related
problems, see <<fix-an-oversharded-cluster>>.

[discrete]
[[shard-sizing-considerations]]
=== Sizing considerations

Keep the following things in mind when building your sharding strategy.

[discrete]
[[single-thread-per-shard]]
==== Searches run on a single thread per shard

Most searches hit multiple shards. Each shard runs the search on a single
CPU thread. While a shard can run multiple concurrent searches, searches across a
large number of shards can deplete a node's <<modules-threadpool,search
thread pool>>. This can result in low throughput and slow search speeds.

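
One way to check for a depleted search thread pool is the
<<cat-thread-pool,cat thread pool API>>. For example, the following request
lists each node's active, queued, and rejected search threads. A growing
`rejected` count suggests searches are hitting more shards than the pool can
service.

[source,console]
----
GET /_cat/thread_pool/search?v=true&h=node_name,name,active,queue,rejected
----
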
[discrete]
[[each-shard-has-overhead]]
==== Each shard has overhead

Every shard uses memory and CPU resources. In most cases, a small
set of large shards uses fewer resources than many small shards.

Segments play a big role in a shard's resource usage. Most shards contain
several segments, which store its index data. {es} keeps segment metadata in
JVM heap memory so it can be quickly retrieved for searches. As a
shard grows, its segments are <<index-modules-merge,merged>> into fewer, larger
segments. This decreases the number of segments, which means less metadata is
kept in heap memory.

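
To see this overhead in practice, you can list a shard's segments and their
heap memory footprint with the <<cat-segments,cat segments API>>. For example,
for the `my-index-000001` index used elsewhere on this page:

[source,console]
----
GET /_cat/segments/my-index-000001?v=true&h=index,shard,segment,docs.count,size,size.memory
----
// TEST[setup:my_index]
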
[discrete]
[[shard-auto-balance]]
==== {es} automatically balances shards within a data tier

A cluster's nodes are grouped into <<data-tiers,data tiers>>. Within each tier,
{es} attempts to spread an index's shards across as many nodes as possible. When
you add a new node or a node fails, {es} automatically rebalances the index's
shards across the tier's remaining nodes.

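
To see how shards and disk usage are currently spread across your nodes, you
can use the <<cat-allocation,cat allocation API>>:

[source,console]
----
GET /_cat/allocation?v=true&h=node,shards,disk.percent
----
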
[discrete]
[[shard-size-best-practices]]
=== Best practices

Where applicable, use the following best practices as starting points for your
sharding strategy.

[discrete]
[[delete-indices-not-documents]]
==== Delete indices, not documents

Deleted documents aren't immediately removed from {es}'s file system.
Instead, {es} marks the document as deleted on each related shard. The marked
document will continue to use resources until it's removed during a periodic
<<index-modules-merge,segment merge>>.

When possible, delete entire indices instead. {es} can immediately remove
deleted indices directly from the file system and free up resources.

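
To gauge how many deleted documents an index is still carrying, you can check
the `docs.deleted` column of the <<cat-indices,cat indices API>>:

[source,console]
----
GET /_cat/indices/my-index-000001?v=true&h=index,docs.count,docs.deleted
----
// TEST[setup:my_index]
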
[discrete]
[[use-ds-ilm-for-time-series]]
==== Use data streams and {ilm-init} for time series data

<<data-streams,Data streams>> let you store time series data across multiple,
time-based backing indices. You can use <<index-lifecycle-management,{ilm}
({ilm-init})>> to automatically manage these backing indices.

[role="screenshot"]
image::images/ilm/index-lifecycle-policies.png[]

One advantage of this setup is
<<getting-started-index-lifecycle-management,automatic rollover>>, which creates
a new write index when the current one meets a defined `max_primary_shard_size`,
`max_age`, `max_docs`, or `max_size` threshold. When an index is no longer
needed, you can use {ilm-init} to automatically delete it and free up resources.

{ilm-init} also makes it easy to change your sharding strategy over time:

* *Want to decrease the shard count for new indices?* +
Change the <<index-number-of-shards,`index.number_of_shards`>> setting in the
data stream's <<data-streams-change-mappings-and-settings,matching index
template>> (see the sketch below).

* *Want larger shards?* +
Increase your {ilm-init} policy's <<ilm-rollover,rollover threshold>>.

* *Need indices that span shorter intervals?* +
Offset the increased shard count by deleting older indices sooner. You can do
this by lowering the `min_age` threshold for your policy's
<<ilm-index-lifecycle,delete phase>>.

Every new backing index is an opportunity to further tune your strategy.

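
As a minimal sketch of the first option, the following index template lowers
the shard count for future backing indices of a hypothetical `my-data-stream`.
The template name and index pattern are placeholders; adapt them to your own
data stream.

[source,console]
----
PUT /_index_template/my-data-stream-template
{
  "index_patterns": ["my-data-stream*"],
  "data_stream": {},
  "template": {
    "settings": {
      "index.number_of_shards": 1
    }
  }
}
----
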
[discrete]
[[shard-size-recommendation]]
==== Aim for shard sizes between 10GB and 50GB

Shards larger than 50GB may make a cluster less likely to recover from failure.
When a node fails, {es} rebalances the node's shards across the data tier's
remaining nodes. Larger shards can be harder to move across a network and may
tax node resources.

If you use {ilm-init}, set the <<ilm-rollover,rollover action>>'s
`max_primary_shard_size` threshold to `50gb` to avoid shards larger than 50GB.

To see the current size of your shards, use the <<cat-shards,cat shards API>>.

[source,console]
----
GET _cat/shards?v=true&h=index,prirep,shard,store&s=prirep,store&bytes=gb
----
// TEST[setup:my_index]

In the response, the `store` value shows the size of each shard.

[source,txt]
----
index                                 prirep shard store
.ds-my-data-stream-2099.05.06-000001  p      0     50gb
...
----
// TESTRESPONSE[non_json]
// TESTRESPONSE[s/\.ds-my-data-stream-2099\.05\.06-000001/my-index-000001/]
// TESTRESPONSE[s/50gb/.*/]

[discrete]
[[shard-count-recommendation]]
==== Aim for 20 shards or fewer per GB of heap memory

The number of shards a node can hold is proportional to the node's
heap memory. For example, a node with 30GB of heap memory should
have at most 600 shards. The further below this limit you can keep your nodes,
the better. If you find your nodes exceeding 20 shards per GB,
consider adding another node.

To check the current size of each node's heap, use the <<cat-nodes,cat nodes
API>>.

[source,console]
----
GET _cat/nodes?v=true&h=heap.current
----
// TEST[setup:my_index]

You can use the <<cat-shards,cat shards API>> to check the number of shards per
node.

[source,console]
----
GET _cat/shards?v=true
----
// TEST[setup:my_index]

[discrete]
[[avoid-node-hotspots]]
==== Avoid node hotspots

If too many shards are allocated to a specific node, the node can become a
hotspot. For example, if a single node contains too many shards for an index
with a high indexing volume, the node is likely to have issues.

To prevent hotspots, use the
<<total-shards-per-node,`index.routing.allocation.total_shards_per_node`>> index
setting to explicitly limit the number of shards on a single node. You can
configure `index.routing.allocation.total_shards_per_node` using the
<<indices-update-settings,update index settings API>>.

[source,console]
--------------------------------------------------
PUT /my-index-000001/_settings
{
  "index" : {
    "routing.allocation.total_shards_per_node" : 5
  }
}
--------------------------------------------------
// TEST[setup:my_index]

[discrete]
[[fix-an-oversharded-cluster]]
=== Fix an oversharded cluster

If your cluster is experiencing stability issues due to oversharded indices,
you can use one or more of the following methods to fix them.

[discrete]
[[create-indices-that-cover-longer-time-periods]]
==== Create indices that cover longer time periods

If you use {ilm-init} and your retention policy allows it, avoid using a
`max_age` threshold for the rollover action. Instead, use
`max_primary_shard_size` to avoid creating empty indices or many small shards.

If your retention policy requires a `max_age` threshold, increase it to create
indices that cover longer time intervals. For example, instead of creating daily
indices, you can create indices on a weekly or monthly basis.

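
For example, a hot-phase rollover that relies on `max_primary_shard_size`
alone might be sketched like this. The policy name `my-lifecycle-policy` is a
placeholder:

[source,console]
----
PUT /_ilm/policy/my-lifecycle-policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_primary_shard_size": "50gb"
          }
        }
      }
    }
  }
}
----
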
[discrete]
[[delete-empty-indices]]
==== Delete empty or unneeded indices

If you're using {ilm-init} and roll over indices based on a `max_age` threshold,
you can inadvertently create indices with no documents. These empty indices
provide no benefit but still consume resources.

You can find these empty indices using the <<cat-count,cat count API>>.

[source,console]
----
GET /_cat/count/my-index-000001?v=true
----
// TEST[setup:my_index]

Once you have a list of empty indices, you can delete them using the
<<indices-delete-index,delete index API>>. You can also delete any other
unneeded indices.

[source,console]
----
DELETE /my-index-*
----
// TEST[setup:my_index]

[discrete]
[[force-merge-during-off-peak-hours]]
==== Force merge during off-peak hours

If you no longer write to an index, you can use the <<indices-forcemerge,force
merge API>> to <<index-modules-merge,merge>> smaller segments into larger ones.
This can reduce shard overhead and improve search speeds. However, force merges
are resource-intensive. If possible, run the force merge during off-peak hours.

[source,console]
----
POST /my-index-000001/_forcemerge
----
// TEST[setup:my_index]

[discrete]
[[shrink-existing-index-to-fewer-shards]]
==== Shrink an existing index to fewer shards

If you no longer write to an index, you can use the
<<indices-shrink-index,shrink index API>> to reduce its shard count.

[source,console]
----
POST /my-index-000001/_shrink/my-shrunken-index-000001
----
// TEST[s/^/PUT my-index-000001\n{"settings":{"index.number_of_shards":2,"blocks.write":true}}\n/]

{ilm-init} also has a <<ilm-shrink,shrink action>> for indices in the
warm phase.

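
For example, a policy's warm phase might shrink each rolled-over index to a
single shard. This is a sketch with a placeholder policy name and `min_age`:

[source,console]
----
PUT /_ilm/policy/my-shrink-policy
{
  "policy": {
    "phases": {
      "warm": {
        "min_age": "30d",
        "actions": {
          "shrink": {
            "number_of_shards": 1
          }
        }
      }
    }
  }
}
----
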
[discrete]
[[combine-smaller-indices]]
==== Combine smaller indices

You can also use the <<docs-reindex,reindex API>> to combine indices
with similar mappings into a single large index. For time series data, you could
reindex indices for short time periods into a new index covering a
longer period. For example, you could reindex daily indices from October with a
shared index pattern, such as `my-index-2099.10.11`, into a monthly
`my-index-2099.10` index. After the reindex, delete the smaller indices.

[source,console]
----
POST /_reindex
{
  "source": {
    "index": "my-index-2099.10.*"
  },
  "dest": {
    "index": "my-index-2099.10"
  }
}
----