
[[tune-knn-search]]
== Tune approximate kNN search

{es} supports <<approximate-knn, approximate k-nearest neighbor search>> for
efficiently finding the _k_ nearest vectors to a query vector. Since
approximate kNN search works differently from other queries, there are special
considerations around its performance.

Many of these recommendations help improve search speed. With approximate kNN,
the indexing algorithm runs searches under the hood to create the vector index
structures. So these same recommendations also help with indexing speed.

[discrete]
=== Reduce vector memory footprint

The default <<dense-vector-element-type,`element_type`>> is `float`, but
vectors can be automatically quantized at index time through
<<dense-vector-quantization,`quantization`>>. Quantization reduces the
required memory by 4x at the cost of some precision in the vectors. For
`float` vectors with `dim` greater than or equal to `384`, using a
<<dense-vector-quantization,`quantized`>> index is highly recommended.
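For example, a mapping along the following lines enables scalar quantization through the `int8_hnsw` index type (the index name `my-index` and field name `my_vector` are placeholders; this assumes an {es} version that supports `int8_hnsw`):

[source,console]
----
PUT my-index
{
  "mappings": {
    "properties": {
      "my_vector": {
        "type": "dense_vector",
        "dims": 384,
        "index": true,
        "index_options": {
          "type": "int8_hnsw"
        }
      }
    }
  }
}
----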
[discrete]
=== Reduce vector dimensionality

The speed of kNN search scales linearly with the number of vector dimensions,
because each similarity computation considers each element in the two vectors.
Whenever possible, it's better to use vectors with a lower dimension. Some
embedding models come in different "sizes", with both lower and higher
dimensional options available. You could also experiment with dimensionality
reduction techniques like PCA. When experimenting with different approaches,
it's important to measure the impact on relevance to ensure the search
quality is still acceptable.
[discrete]
=== Exclude vector fields from `_source`

{es} stores the original JSON document that was passed at index time in the
<<mapping-source-field, `_source` field>>. By default, each hit in the search
results contains the full document `_source`. When the documents contain
high-dimensional `dense_vector` fields, the `_source` can be quite large and
expensive to load, which can significantly slow down kNN search.

You can disable storing `dense_vector` fields in the `_source` through the
<<include-exclude, `excludes`>> mapping parameter. This prevents loading and
returning large vectors during search, and also cuts down on the index size.
Vectors that have been omitted from `_source` can still be used in kNN search,
since it relies on separate data structures to perform the search. Before
using the <<include-exclude, `excludes`>> parameter, make sure to review the
downsides of omitting fields from `_source`.

Another option is to use <<synthetic-source,synthetic `_source`>> if all
your index fields support it.
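As a sketch, a mapping that excludes a vector field from `_source` could look like this (index and field names are placeholders):

[source,console]
----
PUT my-index
{
  "mappings": {
    "_source": {
      "excludes": ["my_vector"]
    },
    "properties": {
      "my_vector": {
        "type": "dense_vector",
        "dims": 384,
        "index": true
      }
    }
  }
}
----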
[discrete]
=== Ensure data nodes have enough memory

{es} uses the https://arxiv.org/abs/1603.09320[HNSW] algorithm for approximate
kNN search. HNSW is a graph-based algorithm which only works efficiently when
most vector data is held in memory. You should ensure that data nodes have at
least enough RAM to hold the vector data and index structures. To check the
size of the vector data, you can use the <<indices-disk-usage>> API.

As a loose rule of thumb, and assuming the default HNSW options, the bytes used
will be `num_vectors * 4 * (num_dimensions + 12)`. When using the `byte`
<<dense-vector-element-type,`element_type`>>, the space required will be closer
to `num_vectors * (num_dimensions + 12)`. Note that the required RAM is for the
filesystem cache, which is separate from the Java heap.
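To inspect the per-field disk usage, including vector data structures, you can run the disk usage API against your index (index name is a placeholder; the `run_expensive_tasks` flag is required because the analysis is resource-intensive):

[source,console]
----
POST my-index/_disk_usage?run_expensive_tasks=true
----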
The data nodes should also leave a buffer for other ways that RAM is needed.
For example, your index might also include text fields and numerics, which also
benefit from using the filesystem cache. It's recommended to run benchmarks with
your specific dataset to ensure there's a sufficient amount of memory to give
good search performance.

You can find https://elasticsearch-benchmarks.elastic.co/#tracks/so_vector[here]
and https://elasticsearch-benchmarks.elastic.co/#tracks/dense_vector[here] some examples
of datasets and configurations that we use for our nightly benchmarks.
[discrete]
include::search-speed.asciidoc[tag=warm-fs-cache]

The following file extensions are used for the approximate kNN search:

* `vec` and `veq` for vector values
* `vex` for HNSW graph
* `vem`, `vemf`, and `vemq` for metadata
[discrete]
=== Reduce the number of index segments

{es} shards are composed of segments, which are internal storage elements in
the index. For approximate kNN search, {es} stores the vector values of
each segment as a separate HNSW graph, so kNN search must check each segment.
The recent parallelization of kNN search made it much faster to search across
multiple segments, but kNN search can still be up to several times
faster if there are fewer segments. By default, {es} periodically
merges smaller segments into larger ones through a background
<<index-modules-merge, merge process>>. If this isn't sufficient, you can take
explicit steps to reduce the number of index segments.
[discrete]
==== Force merge to one segment

The <<indices-forcemerge,force merge>> operation forces an index merge. If you
force merge to one segment, the kNN search only needs to check a single,
all-inclusive HNSW graph. Force merging `dense_vector` fields is an expensive
operation that can take significant time to complete.

include::{es-repo-dir}/indices/forcemerge.asciidoc[tag=force-merge-read-only-warn]
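A force merge down to a single segment looks like the following (index name is a placeholder):

[source,console]
----
POST my-index/_forcemerge?max_num_segments=1
----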
[discrete]
==== Create large segments during bulk indexing

A common pattern is to first perform an initial bulk upload, then make an
index available for searches. Instead of force merging, you can adjust the
index settings to encourage {es} to create larger initial segments:

* Ensure there are no searches during the bulk upload and disable
<<index-refresh-interval-setting,`index.refresh_interval`>> by setting it to
`-1`. This prevents refresh operations and avoids creating extra segments.
* Give {es} a large indexing buffer so it can accept more documents before
flushing. By default, <<indexing-buffer,`indices.memory.index_buffer_size`>>
is set to 10% of the heap size. With a substantial heap size like 32GB, this
is often enough. To allow the full indexing buffer to be used, you should also
increase the <<index-modules-translog,`index.translog.flush_threshold_size`>> limit.
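The settings above can be applied together before the bulk upload; the index name and the `2gb` threshold below are illustrative values, not recommendations:

[source,console]
----
PUT my-index/_settings
{
  "index.refresh_interval": "-1",
  "index.translog.flush_threshold_size": "2gb"
}
----

Once the bulk upload is complete, remember to restore `index.refresh_interval` so the index becomes searchable again.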
[discrete]
=== Avoid heavy indexing during searches

Actively indexing documents can have a negative impact on approximate kNN
search performance, since indexing threads steal compute resources from
search. When indexing and searching at the same time, {es} also refreshes
frequently, which creates several small segments. This also hurts search
performance, since approximate kNN search is slower when there are more
segments.

When possible, it's best to avoid heavy indexing during approximate kNN
search. If you need to reindex all the data, perhaps because the vector
embedding model changed, then it's better to reindex the new documents into a
separate index rather than update them in-place. This helps avoid the slowdown
mentioned above, and prevents expensive merge operations due to frequent
document updates.
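Reindexing into a fresh index can be done with the reindex API; the source and destination index names here are placeholders:

[source,console]
----
POST _reindex
{
  "source": {
    "index": "my-index"
  },
  "dest": {
    "index": "my-index-v2"
  }
}
----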
[discrete]
include::search-speed.asciidoc[tag=readahead]