[DOCS] Adds more detail on disk usage of kNN quantized vectors (#105724) (#105743)

Co-authored-by: Liam Thompson <32779855+leemthompo@users.noreply.github.com>
Co-authored-by: Benjamin Trent <ben.w.trent@gmail.com>
(cherry picked from commit 6073e748a3f5d7b77385a1b5326e54f3c4b5caf7)
István Zoltán Szabó 1 year ago
parent
commit
6c72ceb6c9
1 changed file with 3 additions and 1 deletion
      docs/reference/how-to/knn-search.asciidoc


@@ -17,7 +17,9 @@ The default <<dense-vector-element-type,`element_type`>> is `float`. But this
 can be automatically quantized during index time through
 <<dense-vector-quantization,`quantization`>>. Quantization will reduce the
 required memory by 4x, but it will also reduce the precision of the vectors and
-increase disk usage for the field (by up to 25%).
+increase disk usage for the field (by up to 25%). The increased disk usage is a
+result of {es} storing both the quantized and the unquantized vectors.
+For example, when quantizing 40GB of floating-point vectors, an extra 10GB of data is stored for the quantized vectors. The total disk usage amounts to 50GB, but the memory usage for fast search is reduced to 10GB.
 
 For `float` vectors with `dim` greater than or equal to `384`, using a
 <<dense-vector-quantization,`quantized`>> index is highly recommended.
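The disk and memory figures in the added paragraph follow from simple arithmetic: a `float` vector uses 4 bytes per dimension, while its int8-quantized copy uses 1 byte per dimension, so the quantized copy adds 25% to disk usage but is the only copy needed in memory for fast search. A minimal sketch of that arithmetic (the helper name and the GB-based units are illustrative, not part of {es}):

```python
def quantization_footprint(raw_float_gb: float) -> tuple[float, float]:
    """Illustrative estimate for int8 quantization of float32 vectors.

    Assumes float32 = 4 bytes/dimension and int8 = 1 byte/dimension,
    so the quantized copy is 1/4 the size of the raw vectors.
    Returns (total_disk_gb, search_memory_gb).
    """
    quantized_gb = raw_float_gb / 4      # int8 copy: 25% of the raw size
    total_disk_gb = raw_float_gb + quantized_gb  # both copies are kept on disk
    return total_disk_gb, quantized_gb   # only the quantized copy is held in memory

# Matches the example in the paragraph above: 40GB of raw vectors
# yields 50GB total on disk and 10GB in memory for fast search.
disk_gb, memory_gb = quantization_footprint(40)
```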