
[[index-modules-fielddata]]
== Field data

The field data cache is used mainly when sorting on or faceting on a
field. It loads all the field values to memory in order to provide fast
document-based access to those values. The field data cache can be
expensive to build for a field, so it's recommended to have enough memory
to allocate it, and to keep it loaded.

The amount of memory used for the field data cache can be controlled using
`indices.fielddata.cache.size`. Note: reloading field data which does not
fit into your cache will be expensive and perform poorly.

[cols="<,<",options="header",]
|=======================================================================
|Setting |Description
|`indices.fielddata.cache.size` |The max size of the field data cache,
eg `30%` of node heap space, or an absolute value, eg `12GB`. Defaults
to unbounded.

|`indices.fielddata.cache.expire` |A time based setting that expires
field data after a certain time of inactivity. Defaults to `-1`. For
example, can be set to `5m` for a 5 minute expiry.
|=======================================================================
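
For example, to bound the cache to a share of the heap, the size can be set in
`elasticsearch.yml`. This is a minimal sketch; the `30%` value is illustrative:

[source,yaml]
--------------------------------------------------
# cap the field data cache at 30% of the node's heap (illustrative value)
indices.fielddata.cache.size: 30%
--------------------------------------------------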

[float]
[[fielddata-circuit-breaker]]
=== Field data circuit breaker

The field data circuit breaker allows Elasticsearch to estimate the amount of
memory a field will require in order to be loaded into memory. It can then
prevent the field data from being loaded by raising an exception. By default
the limit is configured to 60% of the maximum JVM heap. It can be configured
with the following parameters:

[cols="<,<",options="header",]
|=======================================================================
|Setting |Description
|`indices.fielddata.breaker.limit` |Maximum size of estimated field data
to allow loading. Defaults to 60% of the maximum JVM heap.

|`indices.fielddata.breaker.overhead` |A constant that all field data
estimations are multiplied with to determine a final estimation. Defaults to
`1.03`.
|=======================================================================

Both the `indices.fielddata.breaker.limit` and
`indices.fielddata.breaker.overhead` settings can be changed dynamically
using the cluster update settings API.
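
For example, the limit could be raised at runtime with the cluster update
settings API. This is a sketch; the `70%` value is illustrative:

[source,js]
--------------------------------------------------
# persistently raise the field data breaker limit (illustrative value)
curl -XPUT 'localhost:9200/_cluster/settings' -d '{
    "persistent": {
        "indices.fielddata.breaker.limit": "70%"
    }
}'
--------------------------------------------------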

[float]
[[fielddata-monitoring]]
=== Monitoring field data

You can monitor memory usage for field data as well as the field data circuit
breaker using the <<cluster-nodes-stats,Nodes Stats API>>.
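
For example, per-field field data memory usage could be retrieved for every
node like this (a sketch; `field1` and `field2` are illustrative field names):

[source,js]
--------------------------------------------------
# report field data memory usage on each node, broken down by field
curl -XGET 'localhost:9200/_nodes/stats/indices/fielddata?fields=field1,field2'
--------------------------------------------------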

[[fielddata-formats]]
== Field data formats

The field data format controls how field data should be stored.

Depending on the field type, there might be several field data types
available. In particular, string and numeric types support the `doc_values`
format which allows for computing the field data data-structures at indexing
time and storing them on disk. Although it will make the index larger and may
be slightly slower, this implementation will be more near-realtime-friendly
and will require much less memory from the JVM than other implementations.

Here is an example of how to configure the `tag` field to use the `fst` field
data format.

[source,js]
--------------------------------------------------
{
    "tag": {
        "type": "string",
        "fielddata": {
            "format": "fst"
        }
    }
}
--------------------------------------------------

It is possible to change the field data format (and the field data settings
in general) on a live index by using the update mapping API. When doing so,
field data which had already been loaded for existing segments will remain
alive while new segments will use the new field data configuration. Thanks to
the background merging process, all segments will eventually use the new
field data format.
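
As a sketch, switching the `tag` field above to the `fst` format on a live
index could look like the following; the index and type names `my_index` and
`my_type` are illustrative:

[source,js]
--------------------------------------------------
# update the mapping of a live index; only the fielddata settings change
curl -XPUT 'localhost:9200/my_index/_mapping/my_type' -d '{
    "my_type": {
        "properties": {
            "tag": {
                "type": "string",
                "fielddata": {
                    "format": "fst"
                }
            }
        }
    }
}'
--------------------------------------------------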

[float]
==== String field data types

`paged_bytes` (default)::
Stores unique terms sequentially in a large buffer and maps documents to
the indices of the terms they contain in this large buffer.

`fst`::
Stores terms in a FST. Slower to build than `paged_bytes` but can help lower
memory usage if many terms share common prefixes and/or suffixes.

`doc_values`::
Computes and stores field data data-structures on disk at indexing time.
Lowers memory usage but only works on non-analyzed strings (`index`: `no` or
`not_analyzed`) and doesn't support filtering.
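
For example, a `not_analyzed` string field configured to use the `doc_values`
format might be mapped like this (a sketch; the `tag` field name is
illustrative):

[source,js]
--------------------------------------------------
{
    "tag": {
        "type": "string",
        "index": "not_analyzed",
        "fielddata": {
            "format": "doc_values"
        }
    }
}
--------------------------------------------------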

[float]
==== Numeric field data types

`array` (default)::
Stores field values in memory using arrays.

`doc_values`::
Computes and stores field data data-structures on disk at indexing time.
Doesn't support filtering.

[float]
==== Geo point field data types

`array` (default)::
Stores latitudes and longitudes in arrays.

`doc_values`::
Computes and stores field data data-structures on disk at indexing time.

[float]
==== Global ordinals

added[1.2.0]

Global ordinals are a data-structure on top of field data that maintains an
incremental numbering for all the terms in field data in lexicographic order:
each term has a unique number, and the number of term 'A' is lower than the
number of term 'B' if 'A' sorts before 'B'. Global ordinals are only supported
on string fields.

Field data on string fields also has ordinals, which is a unique numbering for
all terms in a particular segment and field. Global ordinals just build on top
of this, by providing a mapping between the segment ordinals and the global
ordinals, the latter being unique across the entire shard.

Global ordinals can be beneficial to search features that already use segment
ordinals, such as the terms aggregator, by improving execution time. Often
these search features need to merge the per-segment ordinal results into a
cross-segment terms result. With global ordinals this mapping happens at field
data load time instead of during each query execution. Search features then
only need to resolve the actual term when building the (shard) response;
during the execution itself there is no need to use the actual terms at all,
since the unique numbering that global ordinals provide is sufficient, which
improves the execution time.

Global ordinals for a specified field are tied to all the segments of a shard
(Lucene index), whereas field data for a specific field is tied to a single
segment. For this reason global ordinals need to be rebuilt in their entirety
once new segments become visible. This one-time cost would happen anyway
without global ordinals, but then it would happen for each search execution
instead!

The loading time of global ordinals depends on the number of terms in a field,
but in general it is low, since the source field data has already been loaded.
The memory overhead of global ordinals is small because they are very
efficiently compressed. Eager loading of global ordinals can move the loading
time from the first search request to the refresh itself.

[float]
=== Fielddata loading

By default, field data is loaded lazily, i.e. the first time that a query
which requires it is executed. However, this can make the first requests that
follow a merge operation quite slow, since fielddata loading is a heavy
operation.

It is possible to force field data to be loaded and cached eagerly through the
`loading` setting of fielddata:

[source,js]
--------------------------------------------------
{
    "category": {
        "type": "string",
        "fielddata": {
            "loading": "eager"
        }
    }
}
--------------------------------------------------

Global ordinals can also be eagerly loaded:

[source,js]
--------------------------------------------------
{
    "category": {
        "type": "string",
        "fielddata": {
            "loading": "eager_global_ordinals"
        }
    }
}
--------------------------------------------------

With the above setting both field data and global ordinals for a specific field
are eagerly loaded.

[float]
==== Disabling field data loading

Field data can take a lot of RAM so it makes sense to disable field data
loading on the fields that don't need field data, for example those that are
used for full-text search only. In order to disable field data loading, just
change the field data format to `disabled`. When disabled, all requests that
will try to load field data, e.g. when they include aggregations and/or sorting,
will return an error.

[source,js]
--------------------------------------------------
{
    "text": {
        "type": "string",
        "fielddata": {
            "format": "disabled"
        }
    }
}
--------------------------------------------------

The `disabled` format is supported by all field types.

[float]
[[field-data-filtering]]
=== Filtering fielddata

It is possible to control which field values are loaded into memory,
which is particularly useful for string fields. When specifying the
<<mapping-core-types,mapping>> for a field, you
can also specify a fielddata filter.

Fielddata filters can be changed using the
<<indices-put-mapping,PUT mapping>>
API. After changing the filters, use the
<<indices-clearcache,Clear Cache>> API
to reload the fielddata using the new filters.
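
For example, after updating the filters with the PUT mapping API, the field
data for the index could be cleared like this. This is a sketch which assumes
the Clear Cache API's `fielddata` parameter to limit the clear to field data;
`my_index` is an illustrative index name:

[source,js]
--------------------------------------------------
# drop the cached field data so it is reloaded with the new filters
curl -XPOST 'localhost:9200/my_index/_cache/clear?fielddata=true'
--------------------------------------------------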

[float]
==== Filtering by frequency

The frequency filter allows you to only load terms whose frequency falls
between a `min` and `max` value, which can be expressed as an absolute
number or as a percentage (eg `0.01` is `1%`). Frequency is calculated
*per segment*. Percentages are based on the number of docs which have a
value for the field, as opposed to all docs in the segment.

Small segments can be excluded completely by specifying the minimum
number of docs that the segment should contain with `min_segment_size`:

[source,js]
--------------------------------------------------
{
    "tag": {
        "type": "string",
        "fielddata": {
            "filter": {
                "frequency": {
                    "min": 0.001,
                    "max": 0.1,
                    "min_segment_size": 500
                }
            }
        }
    }
}
--------------------------------------------------

[float]
==== Filtering by regex

Terms can also be filtered by regular expression - only values which
match the regular expression are loaded. Note: the regular expression is
applied to each term in the field, not to the whole field value. For
instance, to only load hashtags from a tweet, we can use a regular
expression which matches terms beginning with `#`:

[source,js]
--------------------------------------------------
{
    "tweet": {
        "type": "string",
        "analyzer": "whitespace",
        "fielddata": {
            "filter": {
                "regex": {
                    "pattern": "^#.*"
                }
            }
        }
    }
}
--------------------------------------------------

[float]
==== Combining filters

The `frequency` and `regex` filters can be combined:

[source,js]
--------------------------------------------------
{
    "tweet": {
        "type": "string",
        "analyzer": "whitespace",
        "fielddata": {
            "filter": {
                "regex": {
                    "pattern": "^#.*"
                },
                "frequency": {
                    "min": 0.001,
                    "max": 0.1,
                    "min_segment_size": 500
                }
            }
        }
    }
}
--------------------------------------------------