[[index-modules-merge]]
== Merge

experimental[All of the settings exposed in the `merge` module are expert only and may be removed in the future]

A shard in elasticsearch is a Lucene index, and a Lucene index is broken
down into segments. Segments are internal storage elements in the index
where the index data is stored, and are immutable up to delete markers.
Segments are periodically merged into larger segments to keep the
index size at bay and expunge deletes.

The merge policy merges segments of approximately equal size, subject to an
allowed number of segments per tier. It is able to merge non-adjacent
segments, and separates how many segments are merged at once from how many
segments are allowed per tier. It also does not over-merge (i.e., cascade
merges).

[float]
[[merge-settings]]
=== Merge policy settings

All merge policy settings are _dynamic_ and can be updated on a live index.
The merge policy has the following settings:

`index.merge.policy.expunge_deletes_allowed`::

    When expungeDeletes is called, we only merge away a segment if its delete
    percentage is over this threshold. Default is `10`.

`index.merge.policy.floor_segment`::

    Segments smaller than this are "rounded up" to this size, i.e. treated as
    equal (floor) size for merge selection. This is to prevent frequent
    flushing of tiny segments, thus preventing a long tail in the index.
    Default is `2mb`.

`index.merge.policy.max_merge_at_once`::

    Maximum number of segments to be merged at a time during "normal" merging.
    Default is `10`.

`index.merge.policy.max_merge_at_once_explicit`::

    Maximum number of segments to be merged at a time during optimize or
    expungeDeletes. Default is `30`.

`index.merge.policy.max_merged_segment`::

    Maximum sized segment to produce during normal merging (not explicit
    optimize). This setting is approximate: the estimate of the merged segment
    size is made by summing sizes of to-be-merged segments (compensating for
    percent deleted docs). Default is `5gb`.

`index.merge.policy.segments_per_tier`::

    Sets the allowed number of segments per tier. Smaller values mean more
    merging but fewer segments. Default is `10`. Note, this value needs to be
    >= `max_merge_at_once`, otherwise you'll force too many merges to occur.

`index.merge.policy.reclaim_deletes_weight`::

    Controls how aggressively merges that reclaim more deletions are favored.
    Higher values favor selecting merges that reclaim deletions. A value of
    `0.0` means deletions don't impact merge selection. Defaults to `2.0`.
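As a sketch of how these settings might be applied, the following example uses
the update index settings API to adjust a few of them on a live index. The
index name `my_index` and the values shown are illustrative placeholders, not
recommendations:

[source,sh]
--------------------------------------------------
# Illustrative values on a hypothetical index; segments_per_tier is kept
# >= max_merge_at_once, as required above.
curl -XPUT 'http://localhost:9200/my_index/_settings' -d '
{
    "index.merge.policy.max_merged_segment": "10gb",
    "index.merge.policy.max_merge_at_once": 5,
    "index.merge.policy.segments_per_tier": 5
}'
--------------------------------------------------

Because the settings are dynamic, the new values take effect without closing
or reindexing the index.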
For normal merging, the policy first computes a "budget" of how many segments
are allowed to be in the index. If the index is over-budget, then the policy
sorts segments by decreasing size (proportionally considering percent
deletes), and then finds the least-cost merge. Merge cost is measured by a
combination of the "skew" of the merge (size of the largest segment divided by
the smallest segment), total merge size, and percent deletes reclaimed, so
that merges with lower skew, smaller size, and more reclaimed deletes are
favored.

If a merge will produce a segment that's larger than `max_merged_segment`,
then the policy will merge fewer segments (down to 1 at once, if that one has
deletions) to keep the segment size under budget.

Note, this can mean that for large shards that hold many gigabytes of data,
the default `max_merged_segment` (`5gb`) can leave many segments in the index,
causing searches to be slower. Use the indices segments API to see the
segments that an index has, and possibly either increase `max_merged_segment`
or issue an optimize call for the index, ideally at a low-traffic time (see
the example at the end of this page).

[float]
[[merge-scheduling]]
=== Merge scheduling

The merge scheduler (ConcurrentMergeScheduler) controls the execution of merge
operations once they are needed (according to the merge policy). Merges run in
separate threads, and when the maximum number of threads is reached, further
merges will wait until a merge thread becomes available.

The merge scheduler supports the following _dynamic_ settings:

`index.merge.scheduler.max_thread_count`::

    The maximum number of threads that may be merging at once. Defaults to
    `Math.max(1, Math.min(4, Runtime.getRuntime().availableProcessors() / 2))`,
    which works well for a good solid-state disk (SSD). If your index is on
    spinning platter drives instead, decrease this to `1`.

`index.merge.scheduler.auto_throttle`::

    If this is true (the default), then the merge scheduler will rate-limit IO
    (writes) for merges to an adaptive value depending on how many merges are
    requested over time. An application with a low indexing rate that
    unluckily suddenly requires a large merge will see that merge aggressively
    throttled, while an application doing heavy indexing will see the throttle
    move higher to allow merges to keep up with ongoing indexing.
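The sketch below shows how these scheduler settings might be updated
dynamically, again on a hypothetical index named `my_index`; setting
`max_thread_count` to `1` reflects the spinning-disk advice above:

[source,sh]
--------------------------------------------------
# Hypothetical example: limit merging to a single thread for an index
# stored on spinning disks, leaving IO auto-throttling enabled.
curl -XPUT 'http://localhost:9200/my_index/_settings' -d '
{
    "index.merge.scheduler.max_thread_count": 1,
    "index.merge.scheduler.auto_throttle": true
}'
--------------------------------------------------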
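Finally, returning to the note above about `max_merged_segment`: the sketch
below shows one way to inspect an index's segments and then trigger an
explicit merge with the optimize API. The index name `my_index` and the
`max_num_segments` value are placeholders:

[source,sh]
--------------------------------------------------
# Inspect the segments of a hypothetical index.
curl -XGET 'http://localhost:9200/my_index/_segments?pretty'

# If the index holds many segments, force an explicit merge down to a
# chosen number of segments (placeholder value), ideally at a
# low-traffic time.
curl -XPOST 'http://localhost:9200/my_index/_optimize?max_num_segments=5'
--------------------------------------------------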