[[tune-for-search-speed]]
== Tune for search speed

[float]
=== Give memory to the filesystem cache

Elasticsearch heavily relies on the filesystem cache in order to make search
fast. In general, you should make sure that at least half the available memory
goes to the filesystem cache so that Elasticsearch can keep hot regions of the
index in physical memory.
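
For illustration only: on a hypothetical machine with 64 GB of RAM, one way to
follow this guideline is to cap the JVM heap well below the total RAM (for
example in `jvm.options`) so that the remainder stays available to the operating
system's page cache. The values below are an assumption for that hypothetical
machine, not a universal recommendation.

[source,txt]
----
# jvm.options -- illustrative heap cap for a 64 GB machine,
# leaving the rest of the RAM to the filesystem cache
-Xms26g
-Xmx26g
----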

[float]
=== Use faster hardware

If your search is I/O-bound, you should investigate giving more memory to the
filesystem cache (see above) or buying faster drives. In particular, SSD drives
are known to perform better than spinning disks. Always use local storage;
remote filesystems such as `NFS` or `SMB` should be avoided. Also beware of
virtualized storage such as Amazon's `Elastic Block Storage`. Virtualized
storage works well with Elasticsearch and is appealing because it is fast and
simple to set up, but it is also unfortunately inherently slower on an ongoing
basis than dedicated local storage. If you put an index on `EBS`, be sure to use
provisioned IOPS; otherwise operations could quickly be throttled.

If your search is CPU-bound, you should investigate buying faster CPUs.

[float]
=== Document modeling

Documents should be modeled so that search-time operations are as cheap as possible.

In particular, joins should be avoided. <<nested,`nested`>> can make queries
several times slower and <<parent-join,parent-child>> relations can make
queries hundreds of times slower. So if the same questions can be answered without
joins by denormalizing documents, significant speedups can be expected.
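
For instance, instead of indexing comments as children of blog posts and joining
at search time, each comment can repeat the post fields it is usually filtered
on. The index and field names below are purely illustrative and assume default
dynamic mappings (which create a `post_author.keyword` subfield):

[source,console]
----
PUT blog_comments/_doc/1
{
  "comment": "Very helpful, thanks!",
  "post_title": "Tune for search speed",
  "post_author": "kimchy"
}

GET blog_comments/_search
{
  "query": {
    "bool": {
      "must": { "match": { "comment": "helpful" } },
      "filter": { "term": { "post_author.keyword": "kimchy" } }
    }
  }
}
----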

[float]
=== Search as few fields as possible

The more fields a <<query-dsl-query-string-query,`query_string`>> or
<<query-dsl-multi-match-query,`multi_match`>> query targets, the slower it is.
A common technique to improve search speed over multiple fields is to copy
their values into a single field at index time, and then use this field at
search time. This can be automated with the <<copy-to,`copy_to`>> directive of
mappings without having to change the source of documents. Here is an example
of an index containing movies that optimizes queries that search over both the
name and the plot of the movie by indexing both values into the `name_and_plot`
field.

[source,console]
--------------------------------------------------
PUT movies
{
  "mappings": {
    "properties": {
      "name_and_plot": {
        "type": "text"
      },
      "name": {
        "type": "text",
        "copy_to": "name_and_plot"
      },
      "plot": {
        "type": "text",
        "copy_to": "name_and_plot"
      }
    }
  }
}
--------------------------------------------------
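
Queries can then target the single combined field instead of both source fields.
A minimal sketch, assuming the `movies` index above (the query text is
illustrative):

[source,console]
----
GET movies/_search
{
  "query": {
    "match": {
      "name_and_plot": "prison escape"
    }
  }
}
----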

[float]
=== Pre-index data

You should leverage patterns in your queries to optimize the way data is indexed.
For instance, if all your documents have a `price` field and most queries run
<<search-aggregations-bucket-range-aggregation,`range`>> aggregations on a fixed
list of ranges, you could make this aggregation faster by pre-indexing the ranges
into the index and using a <<search-aggregations-bucket-terms-aggregation,`terms`>>
aggregation.

For instance, if documents look like:

[source,console]
--------------------------------------------------
PUT index/_doc/1
{
  "designation": "spoon",
  "price": 13
}
--------------------------------------------------

and search requests look like:

[source,console]
--------------------------------------------------
GET index/_search
{
  "aggs": {
    "price_ranges": {
      "range": {
        "field": "price",
        "ranges": [
          { "to": 10 },
          { "from": 10, "to": 100 },
          { "from": 100 }
        ]
      }
    }
  }
}
--------------------------------------------------
// TEST[continued]

Then documents could be enriched with a `price_range` field at index time, which
should be mapped as a <<keyword,`keyword`>>:

[source,console]
--------------------------------------------------
PUT index
{
  "mappings": {
    "properties": {
      "price_range": {
        "type": "keyword"
      }
    }
  }
}

PUT index/_doc/1
{
  "designation": "spoon",
  "price": 13,
  "price_range": "10-100"
}
--------------------------------------------------

And then search requests could aggregate this new field rather than running a
`range` aggregation on the `price` field.

[source,console]
--------------------------------------------------
GET index/_search
{
  "aggs": {
    "price_ranges": {
      "terms": {
        "field": "price_range"
      }
    }
  }
}
--------------------------------------------------
// TEST[continued]

[float]
[[map-ids-as-keyword]]
=== Consider mapping identifiers as `keyword`

include::../mapping/types/numeric.asciidoc[tag=map-ids-as-keyword]

[float]
=== Avoid scripts

If possible, avoid using <<modules-scripting,scripts>> or
<<request-body-search-script-fields,scripted fields>> in searches. Because
scripts can't make use of index structures, using scripts in search queries can
result in slower search speeds.

If you often use scripts to transform indexed data, you can speed up search by
making these changes during ingest instead. However, that often means slower
index speeds.

.*Example*
[%collapsible]
====
An index, `my_test_scores`, contains two `long` fields:

* `math_score`
* `verbal_score`

When running searches, users often use a script to sort results by the sum of
these two fields' values.

[source,console]
----
GET /my_test_scores/_search
{
  "query": {
    "term": {
      "grad_year": "2020"
    }
  },
  "sort": [
    {
      "_script": {
        "type": "number",
        "script": {
          "source": "doc['math_score'].value + doc['verbal_score'].value"
        },
        "order": "desc"
      }
    }
  ]
}
----
// TEST[s/^/PUT my_test_scores\n/]

To speed up search, you can perform this calculation during ingest and index the
sum to a field instead.

First, <<indices-put-mapping,add a new field>>, `total_score`, to the index. The
`total_score` field will contain the sum of the `math_score` and `verbal_score`
field values.

[source,console]
----
PUT /my_test_scores/_mapping
{
  "properties": {
    "total_score": {
      "type": "long"
    }
  }
}
----
// TEST[continued]

Next, use an <<ingest,ingest pipeline>> containing the
<<script-processor,`script`>> processor to calculate the sum of `math_score` and
`verbal_score` and index it in the `total_score` field.

[source,console]
----
PUT _ingest/pipeline/my_test_scores_pipeline
{
  "description": "Calculates the total test score",
  "processors": [
    {
      "script": {
        "source": "ctx.total_score = (ctx.math_score + ctx.verbal_score)"
      }
    }
  ]
}
----
// TEST[continued]

To update existing data, use this pipeline to <<docs-reindex,reindex>> any
documents from `my_test_scores` to a new index, `my_test_scores_2`.

[source,console]
----
POST /_reindex
{
  "source": {
    "index": "my_test_scores"
  },
  "dest": {
    "index": "my_test_scores_2",
    "pipeline": "my_test_scores_pipeline"
  }
}
----
// TEST[continued]

Continue using the pipeline to index any new documents to `my_test_scores_2`.

[source,console]
----
POST /my_test_scores_2/_doc/?pipeline=my_test_scores_pipeline
{
  "student": "kimchy",
  "grad_year": "2020",
  "math_score": 800,
  "verbal_score": 800
}
----
// TEST[continued]

These changes may slow indexing but allow for faster searches. Users can now
sort searches made on `my_test_scores_2` using the `total_score` field instead
of using a script.

[source,console]
----
GET /my_test_scores_2/_search
{
  "query": {
    "term": {
      "grad_year": "2020"
    }
  },
  "sort": [
    {
      "total_score": {
        "order": "desc"
      }
    }
  ]
}
----
// TEST[continued]

////
[source,console]
----
DELETE /_ingest/pipeline/my_test_scores_pipeline
----
// TEST[continued]

[source,console-result]
----
{
  "acknowledged": true
}
----
////
====

We recommend testing and benchmarking any indexing changes before deploying them
in production.

[float]
=== Search rounded dates

Queries on date fields that use `now` are typically not cacheable since the
range that is being matched changes all the time. However, switching to a
rounded date is often acceptable in terms of user experience, and has the
benefit of making better use of the query cache.

For instance, the query below:

[source,console]
--------------------------------------------------
PUT index/_doc/1
{
  "my_date": "2016-05-11T16:30:55.328Z"
}

GET index/_search
{
  "query": {
    "constant_score": {
      "filter": {
        "range": {
          "my_date": {
            "gte": "now-1h",
            "lte": "now"
          }
        }
      }
    }
  }
}
--------------------------------------------------

could be replaced with the following query:

[source,console]
--------------------------------------------------
GET index/_search
{
  "query": {
    "constant_score": {
      "filter": {
        "range": {
          "my_date": {
            "gte": "now-1h/m",
            "lte": "now/m"
          }
        }
      }
    }
  }
}
--------------------------------------------------
// TEST[continued]

In that case we rounded to the minute, so if the current time is `16:31:29`,
the range query will match every document whose `my_date` field value is
between `15:31:00` and `16:31:59`. And if several users run a query that
contains this range in the same minute, the query cache could help speed things
up a bit. The longer the interval that is used for rounding, the more the query
cache can help, but beware that too aggressive rounding might also hurt user
experience.

NOTE: It might be tempting to split ranges into a large cacheable part and
smaller non-cacheable parts in order to be able to leverage the query cache,
as shown below:

[source,console]
--------------------------------------------------
GET index/_search
{
  "query": {
    "constant_score": {
      "filter": {
        "bool": {
          "should": [
            {
              "range": {
                "my_date": {
                  "gte": "now-1h",
                  "lte": "now-1h/m"
                }
              }
            },
            {
              "range": {
                "my_date": {
                  "gt": "now-1h/m",
                  "lt": "now/m"
                }
              }
            },
            {
              "range": {
                "my_date": {
                  "gte": "now/m",
                  "lte": "now"
                }
              }
            }
          ]
        }
      }
    }
  }
}
--------------------------------------------------
// TEST[continued]

However, this practice might make the query run slower in some cases since the
overhead introduced by the `bool` query may defeat the savings from better
leveraging the query cache.

[float]
=== Force-merge read-only indices

Indices that are read-only may benefit from being <<indices-forcemerge,merged
down to a single segment>>. This is typically the case with time-based indices:
only the index for the current time frame is getting new documents while older
indices are read-only. Shards that have been force-merged into a single segment
can use simpler and more efficient data structures to perform searches.

IMPORTANT: Do not force-merge indices to which you are still writing, or to
which you will write again in the future. Instead, rely on the automatic
background merge process to perform merges as needed to keep the index running
smoothly. If you continue to write to a force-merged index then its performance
may become much worse.
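
For example, a read-only time-based index can be merged down to a single segment
with the <<indices-forcemerge,force merge API>>. The index name here is
illustrative:

[source,console]
----
POST /logs-2023-09/_forcemerge?max_num_segments=1
----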

[float]
=== Warm up global ordinals

Global ordinals are a data structure that is used to run
<<search-aggregations-bucket-terms-aggregation,`terms`>> aggregations on
<<keyword,`keyword`>> fields. They are loaded lazily in memory because
Elasticsearch does not know which fields will be used in `terms` aggregations
and which fields won't. You can tell Elasticsearch to load global ordinals
eagerly when starting or refreshing a shard by configuring mappings as
described below:

[source,console]
--------------------------------------------------
PUT index
{
  "mappings": {
    "properties": {
      "foo": {
        "type": "keyword",
        "eager_global_ordinals": true
      }
    }
  }
}
--------------------------------------------------

[float]
=== Warm up the filesystem cache

If the machine running Elasticsearch is restarted, the filesystem cache will be
empty, so it will take some time before the operating system loads hot regions
of the index into memory so that search operations are fast. You can explicitly
tell the operating system which files should be loaded into memory eagerly,
depending on their file extension, by using the
<<preload-data-to-file-system-cache,`index.store.preload`>> setting.

WARNING: Loading data into the filesystem cache eagerly on too many indices or
too many files will make search _slower_ if the filesystem cache is not large
enough to hold all the data. Use with caution.
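
The setting is applied at index creation time. A minimal sketch, where the index
name and the list of file extensions to preload are illustrative:

[source,console]
----
PUT warm_index
{
  "settings": {
    "index.store.preload": ["nvd", "dvd"]
  }
}
----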

[float]
=== Use index sorting to speed up conjunctions

<<index-modules-index-sorting,Index sorting>> can be useful in order to make
conjunctions faster at the cost of slightly slower indexing. Read more about it
in the <<index-modules-index-sorting-conjunctions,index sorting documentation>>.
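
Index sorting is configured when the index is created. A minimal sketch, with a
hypothetical `timestamp` field as the sort key:

[source,console]
----
PUT events
{
  "settings": {
    "index": {
      "sort.field": "timestamp",
      "sort.order": "desc"
    }
  },
  "mappings": {
    "properties": {
      "timestamp": {
        "type": "date"
      }
    }
  }
}
----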

[float]
[[preference-cache-optimization]]
=== Use `preference` to optimize cache utilization

There are multiple caches that can help with search performance, such as the
https://en.wikipedia.org/wiki/Page_cache[filesystem cache], the
<<shard-request-cache,request cache>> or the <<query-cache,query cache>>. Yet
all these caches are maintained at the node level, meaning that if you run the
same request twice in a row, have 1 <<glossary-replica-shard,replica>> or more,
and use https://en.wikipedia.org/wiki/Round-robin_DNS[round-robin], the default
routing algorithm, then those two requests will go to different shard copies,
preventing node-level caches from helping.

Since it is common for users of a search application to run similar requests
one after another, for instance in order to analyze a narrower subset of the
index, using a preference value that identifies the current user or session
could help optimize usage of the caches.
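
For instance, a value identifying the current session (the value below is
hypothetical) can be passed as the `preference` query string parameter so that
requests from the same session keep hitting the same shard copies:

[source,console]
----
GET index/_search?preference=user_123_session_abc
{
  "query": {
    "match": {
      "body": "elasticsearch"
    }
  }
}
----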

[float]
=== Replicas might help with throughput, but not always

In addition to improving resiliency, replicas can help improve throughput. For
instance, if you have a single-shard index and three nodes, you will need to
set the number of replicas to 2 in order to have 3 copies of your shard in
total so that all nodes are utilized.

Now imagine that you have a two-shard index and two nodes. In one case, the
number of replicas is 0, meaning that each node holds a single shard. In the
second case the number of replicas is 1, meaning that each node has two shards.
Which setup is going to perform best in terms of search performance? Usually,
the setup that has fewer shards per node in total will perform better. The
reason for that is that it gives a greater share of the available filesystem
cache to each shard, and the filesystem cache is probably Elasticsearch's
number 1 performance factor. At the same time, beware that a setup that does
not have replicas is subject to failure in case of a single node failure, so
there is a trade-off between throughput and availability.

So what is the right number of replicas? If you have a cluster that has
`num_nodes` nodes, `num_primaries` primary shards _in total_ and if you want to
be able to cope with `max_failures` node failures at once at most, then the
right number of replicas for you is
`max(max_failures, ceil(num_nodes / num_primaries) - 1)`.
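
For example, with `num_nodes = 3`, `num_primaries = 2` and a tolerance of
`max_failures = 1`, this gives `max(1, ceil(3 / 2) - 1) = max(1, 1) = 1`, so one
replica per primary shard.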

[float]
=== Tune your queries with the Profile API

You can also analyze how expensive each component of your queries and
aggregations is by using the {ref}/search-profile.html[Profile API]. This might
allow you to tune your queries to be less expensive, resulting in better
performance and reduced load. Also note that Profile API payloads can be
easily visualized for better readability in the
{kibana-ref}/xpack-profiler.html[Search Profiler], which is a Kibana dev tools
UI available in all X-Pack licenses, including the free X-Pack Basic license.

Some caveats to the Profile API are that:

- the Profile API as a debugging tool adds significant overhead to search execution and can also have a very verbose output
- given the added overhead, the resulting took times are not reliable indicators of actual took time, but can be used comparatively between clauses for relative timing differences
- the Profile API is best for exploring possible reasons behind the most costly clauses of a query but isn't intended for accurately measuring absolute timings of each clause
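
Profiling is enabled by setting `profile` to `true` in the search request body.
A minimal sketch with an illustrative index and query:

[source,console]
----
GET index/_search
{
  "profile": true,
  "query": {
    "match": {
      "body": "elasticsearch"
    }
  }
}
----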

[float]
[[faster-phrase-queries]]
=== Faster phrase queries with `index_phrases`

The <<text,`text`>> field has an <<index-phrases,`index_phrases`>> option that
indexes 2-shingles and is automatically leveraged by query parsers to run phrase
queries that don't have a slop. If your use-case involves running lots of phrase
queries, this can speed up queries significantly.
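
A minimal mapping sketch that enables the option (the index and field names are
illustrative):

[source,console]
----
PUT articles
{
  "mappings": {
    "properties": {
      "body": {
        "type": "text",
        "index_phrases": true
      }
    }
  }
}
----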

[float]
[[faster-prefix-queries]]
=== Faster prefix queries with `index_prefixes`

The <<text,`text`>> field has an <<index-prefixes,`index_prefixes`>> option that
indexes prefixes of all terms and is automatically leveraged by query parsers to
run prefix queries. If your use-case involves running lots of prefix queries,
this can speed up queries significantly.
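
Similarly, a minimal mapping sketch that enables prefix indexing with the
default settings (the empty object keeps the default `min_chars` and
`max_chars`; the index and field names are illustrative):

[source,console]
----
PUT articles_with_prefixes
{
  "mappings": {
    "properties": {
      "body": {
        "type": "text",
        "index_prefixes": {}
      }
    }
  }
}
----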

[float]
[[faster-filtering-with-constant-keyword]]
=== Use <<constant-keyword,`constant_keyword`>> to speed up filtering

There is a general rule that the cost of a filter is mostly a function of the
number of matched documents. Imagine that you have an index containing cycles.
There are a large number of bicycles and many searches perform a filter on
`cycle_type: bicycle`. This very common filter is unfortunately also very costly
since it matches most documents. There is a simple way to avoid running this
filter: move bicycles to their own index and filter bicycles by searching this
index instead of adding a filter to the query.

Unfortunately this can make client-side logic tricky, which is where
`constant_keyword` helps. By mapping `cycle_type` as a `constant_keyword` with
value `bicycle` on the index that contains bicycles, clients can keep running
the exact same queries as they used to run on the monolithic index and
Elasticsearch will do the right thing on the bicycles index by ignoring filters
on `cycle_type` if the value is `bicycle` and returning no hits otherwise.

Here is what mappings could look like:

[source,console]
--------------------------------------------------
PUT bicycles
{
  "mappings": {
    "properties": {
      "cycle_type": {
        "type": "constant_keyword",
        "value": "bicycle"
      },
      "name": {
        "type": "text"
      }
    }
  }
}

PUT other_cycles
{
  "mappings": {
    "properties": {
      "cycle_type": {
        "type": "keyword"
      },
      "name": {
        "type": "text"
      }
    }
  }
}
--------------------------------------------------

We are splitting our index in two: one that will contain only bicycles, and
another one that contains other cycles: unicycles, tricycles, etc. Then at
search time, we need to search both indices, but we don't need to modify
queries.

[source,console]
--------------------------------------------------
GET bicycles,other_cycles/_search
{
  "query": {
    "bool": {
      "must": {
        "match": {
          "description": "dutch"
        }
      },
      "filter": {
        "term": {
          "cycle_type": "bicycle"
        }
      }
    }
  }
}
--------------------------------------------------
// TEST[continued]

On the `bicycles` index, Elasticsearch will simply ignore the `cycle_type`
filter and rewrite the search request to the one below:

[source,console]
--------------------------------------------------
GET bicycles,other_cycles/_search
{
  "query": {
    "match": {
      "description": "dutch"
    }
  }
}
--------------------------------------------------
// TEST[continued]

On the `other_cycles` index, Elasticsearch will quickly figure out that
`bicycle` doesn't exist in the terms dictionary of the `cycle_type` field and
return a search response with no hits.

This is a powerful way of making queries cheaper by putting common values in a
dedicated index. This idea can also be combined across multiple fields: for
instance, if you track the color of each cycle and your `bicycles` index ends up
having a majority of black bikes, you could split it into `bicycles-black`
and `bicycles-other-colors` indices.

The `constant_keyword` field is not strictly required for this optimization: it
is also possible to update the client-side logic in order to route queries to
the relevant indices based on filters. However, `constant_keyword` does this
transparently and allows you to decouple search requests from the index topology
in exchange for very little overhead.