[[tune-for-search-speed]]
== Tune for search speed

[float]
=== Give memory to the filesystem cache

Elasticsearch heavily relies on the filesystem cache in order to make search
fast. In general, you should make sure that at least half the available memory
goes to the filesystem cache so that Elasticsearch can keep hot regions of the
index in physical memory.

[float]
=== Use faster hardware

If your search is I/O bound, you should investigate giving more memory to the
filesystem cache (see above) or buying faster drives. In particular SSD drives
are known to perform better than spinning disks. Always use local storage;
remote filesystems such as `NFS` or `SMB` should be avoided. Also beware of
virtualized storage such as Amazon's `Elastic Block Storage`. Virtualized
storage works very well with Elasticsearch, and it is appealing since it is so
fast and simple to set up, but it is also unfortunately inherently slower on an
ongoing basis than dedicated local storage. If you put an index on `EBS`, be
sure to use provisioned IOPS, otherwise operations could quickly be throttled.

If your search is CPU-bound, you should investigate buying faster CPUs.

[float]
=== Document modeling

Documents should be modeled so that search-time operations are as cheap as possible.

In particular, joins should be avoided. <<nested,`nested`>> can make queries
several times slower and <<mapping-parent-field,parent-child>> relations can make
queries hundreds of times slower. So if the same questions can be answered without
joins by denormalizing documents, significant speedups can be expected.
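
For instance, rather than indexing products and their sellers as separate
documents tied together by a parent-child relation, the seller attributes that
are needed at search time could be denormalized onto each product document; the
`seller_name` and `seller_country` fields below are hypothetical:

[source,js]
--------------------------------------------------
PUT index/type/1
{
  "designation": "spoon",
  "price": 13,
  "seller_name": "ACME",
  "seller_country": "FR"
}
--------------------------------------------------

Queries and aggregations on the seller attributes then run against plain fields
of the product document instead of requiring a join at search time.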

[float]
=== Pre-index data

You should leverage patterns in your queries to optimize the way data is indexed.
For instance, if all your documents have a `price` field and most queries run
<<search-aggregations-bucket-range-aggregation,`range`>> aggregations on a fixed
list of ranges, you could make this aggregation faster by pre-indexing the ranges
into the index and using a <<search-aggregations-bucket-terms-aggregation,`terms`>>
aggregation.

For instance, if documents look like:

[source,js]
--------------------------------------------------
PUT index/type/1
{
  "designation": "spoon",
  "price": 13
}
--------------------------------------------------
// CONSOLE

and search requests look like:

[source,js]
--------------------------------------------------
GET index/_search
{
  "aggs": {
    "price_ranges": {
      "range": {
        "field": "price",
        "ranges": [
          { "to": 10 },
          { "from": 10, "to": 100 },
          { "from": 100 }
        ]
      }
    }
  }
}
--------------------------------------------------
// CONSOLE
// TEST[continued]

Then documents could be enriched by a `price_range` field at index time, which
should be mapped as a <<keyword,`keyword`>>:

[source,js]
--------------------------------------------------
PUT index
{
  "mappings": {
    "type": {
      "properties": {
        "price_range": {
          "type": "keyword"
        }
      }
    }
  }
}

PUT index/type/1
{
  "designation": "spoon",
  "price": 13,
  "price_range": "10-100"
}
--------------------------------------------------
// CONSOLE

And then search requests could aggregate this new field rather than running a
`range` aggregation on the `price` field.

[source,js]
--------------------------------------------------
GET index/_search
{
  "aggs": {
    "price_ranges": {
      "terms": {
        "field": "price_range"
      }
    }
  }
}
--------------------------------------------------
// CONSOLE
// TEST[continued]

[float]
=== Mappings

The fact that some data is numeric does not mean it should always be mapped as a
<<number,numeric field>>. Typically, fields storing identifiers such as an `ISBN`
or any number identifying a record from another database might benefit from
being mapped as <<keyword,`keyword`>> rather than `integer` or `long`.
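
For instance, a minimal sketch of such a mapping, assuming a hypothetical `isbn`
field that is only ever used for exact lookups:

[source,js]
--------------------------------------------------
PUT index
{
  "mappings": {
    "type": {
      "properties": {
        "isbn": {
          "type": "keyword"
        }
      }
    }
  }
}
--------------------------------------------------

`keyword` fields are generally faster for exact term lookups, while numeric
fields are optimized for range queries, which an identifier never needs.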

[float]
=== Avoid scripts

In general, scripts should be avoided. If they are absolutely needed, you
should prefer the `painless` and `expressions` engines.
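
If a script cannot be avoided, explicitly picking the `painless` engine might
look like the sketch below, which scores matches by their `price` field; note
that the key holding the script body (`inline` here) varies across Elasticsearch
versions:

[source,js]
--------------------------------------------------
GET index/_search
{
  "query": {
    "function_score": {
      "query": {
        "match": { "designation": "spoon" }
      },
      "script_score": {
        "script": {
          "lang": "painless",
          "inline": "_score * doc['price'].value"
        }
      }
    }
  }
}
--------------------------------------------------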

[float]
=== Search rounded dates

Queries on date fields that use `now` are typically not cacheable since the
range that is being matched changes all the time. However, switching to a
rounded date is often acceptable in terms of user experience, and has the
benefit of making better use of the query cache.

For instance the following query:

[source,js]
--------------------------------------------------
PUT index/type/1
{
  "my_date": "2016-05-11T16:30:55.328Z"
}

GET index/_search
{
  "query": {
    "constant_score": {
      "filter": {
        "range": {
          "my_date": {
            "gte": "now-1h",
            "lte": "now"
          }
        }
      }
    }
  }
}
--------------------------------------------------
// CONSOLE

could be replaced with the following query:

[source,js]
--------------------------------------------------
GET index/_search
{
  "query": {
    "constant_score": {
      "filter": {
        "range": {
          "my_date": {
            "gte": "now-1h/m",
            "lte": "now/m"
          }
        }
      }
    }
  }
}
--------------------------------------------------
// CONSOLE
// TEST[continued]

In that case we rounded to the minute, so if the current time is `16:31:29`,
the range query will match all documents whose `my_date` field has a value
between `15:31:00` and `16:31:59`. And if several users run a query that
contains this range in the same minute, the query cache could help speed things
up a bit. The longer the interval that is used for rounding, the more the query
cache can help, but beware that too aggressive rounding might also hurt user
experience.

NOTE: It might be tempting to split ranges into a large cacheable part and
smaller parts that are not cacheable in order to be able to leverage the query
cache, as shown below:

[source,js]
--------------------------------------------------
GET index/_search
{
  "query": {
    "constant_score": {
      "filter": {
        "bool": {
          "should": [
            {
              "range": {
                "my_date": {
                  "gte": "now-1h",
                  "lte": "now-1h/m"
                }
              }
            },
            {
              "range": {
                "my_date": {
                  "gt": "now-1h/m",
                  "lt": "now/m"
                }
              }
            },
            {
              "range": {
                "my_date": {
                  "gte": "now/m",
                  "lte": "now"
                }
              }
            }
          ]
        }
      }
    }
  }
}
--------------------------------------------------
// CONSOLE
// TEST[continued]

However, this practice might make the query run slower in some cases since the
overhead introduced by the `bool` query may defeat the savings from better
leveraging the query cache.

[float]
=== Force-merge read-only indices

Indices that are read-only would benefit from being
<<indices-forcemerge,merged down to a single segment>>. This is typically the
case with time-based indices: only the index for the current time frame is
getting new documents while older indices are read-only.

IMPORTANT: Don't force-merge indices that are still being written to -- leave
merging to the background merge process.
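
For instance, once an index is no longer written to, the merge could be
triggered with a single call like the sketch below; the `logs-2016-05-10`
index name is hypothetical:

[source,js]
--------------------------------------------------
POST logs-2016-05-10/_forcemerge?max_num_segments=1
--------------------------------------------------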

[float]
=== Warm up global ordinals

Global ordinals are a data structure that is used in order to run
<<search-aggregations-bucket-terms-aggregation,`terms`>> aggregations on
<<keyword,`keyword`>> fields. They are loaded lazily in memory because
Elasticsearch does not know which fields will be used in `terms` aggregations
and which fields won't. You can tell Elasticsearch to load global ordinals
eagerly at refresh-time by configuring mappings as described below:

[source,js]
--------------------------------------------------
PUT index
{
  "mappings": {
    "type": {
      "properties": {
        "foo": {
          "type": "keyword",
          "eager_global_ordinals": true
        }
      }
    }
  }
}
--------------------------------------------------
// CONSOLE

[float]
=== Warm up the filesystem cache

If the machine running Elasticsearch is restarted, the filesystem cache will be
empty, so it will take some time before the operating system loads hot regions
of the index into memory so that search operations are fast. You can explicitly
tell the operating system which files should be loaded into memory eagerly
depending on the file extension using the <<file-system,`index.store.preload`>>
setting.
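
For instance, a minimal sketch that preloads the file extensions used by norms
(`nvd`) and doc values (`dvd`); see the <<file-system,store documentation>> for
the extensions that make sense for your workload:

[source,js]
--------------------------------------------------
PUT index
{
  "settings": {
    "index.store.preload": ["nvd", "dvd"]
  }
}
--------------------------------------------------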

WARNING: Loading data into the filesystem cache eagerly on too many indices or
too many files will make search _slower_ if the filesystem cache is not large
enough to hold all the data. Use with caution.