
[[search-aggregations-bucket-terms-aggregation]]
=== Terms Aggregation

A multi-bucket value source based aggregation where buckets are dynamically built - one per unique value.

//////////////////////////

[source,js]
--------------------------------------------------
PUT /products
{
  "mappings": {
    "properties": {
      "genre": {
        "type": "keyword"
      },
      "product": {
        "type": "keyword"
      }
    }
  }
}

POST /products/_bulk?refresh
{"index":{"_id":0}}
{"genre": "rock", "product": "Product A"}
{"index":{"_id":1}}
{"genre": "rock"}
{"index":{"_id":2}}
{"genre": "rock"}
{"index":{"_id":3}}
{"genre": "jazz", "product": "Product Z"}
{"index":{"_id":4}}
{"genre": "jazz"}
{"index":{"_id":5}}
{"genre": "electronic"}
{"index":{"_id":6}}
{"genre": "electronic"}
{"index":{"_id":7}}
{"genre": "electronic"}
{"index":{"_id":8}}
{"genre": "electronic"}
{"index":{"_id":9}}
{"genre": "electronic"}
{"index":{"_id":10}}
{"genre": "electronic"}
--------------------------------------------------
// NOTCONSOLE
// TESTSETUP

//////////////////////////

Example:

[source,console,id=terms-aggregation-example]
--------------------------------------------------
GET /_search
{
  "aggs" : {
    "genres" : {
      "terms" : { "field" : "genre" } <1>
    }
  }
}
--------------------------------------------------
// TEST[s/_search/_search\?filter_path=aggregations/]
<1> The `terms` aggregation should be run on a field of type `keyword` or any other data type suitable for bucket aggregations. To use it with `text` fields you will need to enable
<<fielddata, fielddata>>.

Response:

[source,console-result]
--------------------------------------------------
{
  ...
  "aggregations" : {
    "genres" : {
      "doc_count_error_upper_bound": 0, <1>
      "sum_other_doc_count": 0, <2>
      "buckets" : [ <3>
        {
          "key" : "electronic",
          "doc_count" : 6
        },
        {
          "key" : "rock",
          "doc_count" : 3
        },
        {
          "key" : "jazz",
          "doc_count" : 2
        }
      ]
    }
  }
}
--------------------------------------------------
// TESTRESPONSE[s/\.\.\.//]
<1> an upper bound of the error on the document counts for each term, see <<search-aggregations-bucket-terms-aggregation-approximate-counts,below>>
<2> when there are lots of unique terms, Elasticsearch only returns the top terms; this number is the sum of the document counts for all buckets that are not part of the response
<3> the list of the top buckets, the meaning of `top` being defined by the <<search-aggregations-bucket-terms-aggregation-order,order>>

By default, the `terms` aggregation will return the buckets for the top ten terms ordered by the `doc_count`. One can
change this default behaviour by setting the `size` parameter.

[[search-aggregations-bucket-terms-aggregation-size]]
==== Size

The `size` parameter can be set to define how many term buckets should be returned out of the overall terms list. By
default, the node coordinating the search process will request each shard to provide its own top `size` term buckets
and once all shards respond, it will reduce the results to the final list that will then be returned to the client.
This means that if the number of unique terms is greater than `size`, the returned list is slightly off and not accurate
(it could be that the term counts are slightly off and it could even be that a term that should have been in the top
size buckets was not returned).

NOTE: If you want to retrieve **all** terms or all combinations of terms in a nested `terms` aggregation
you should use the <<search-aggregations-bucket-composite-aggregation,Composite>> aggregation which
allows you to paginate over all possible terms rather than setting a size greater than the cardinality of the field in the
`terms` aggregation. The `terms` aggregation is meant to return the `top` terms and does not allow pagination.

[[search-aggregations-bucket-terms-aggregation-approximate-counts]]
==== Document counts are approximate

As described above, the document counts (and the results of any sub aggregations) in the terms aggregation are not always
accurate. This is because each shard provides its own view of what the ordered list of terms should be and these are
combined to give a final view. Consider the following scenario:

A request is made to obtain the top 5 terms in the field product, ordered by descending document count from an index with
3 shards. In this case each shard is asked to give its top 5 terms.

[source,console,id=terms-aggregation-doc-counts-example]
--------------------------------------------------
GET /_search
{
  "aggs" : {
    "products" : {
      "terms" : {
        "field" : "product",
        "size" : 5
      }
    }
  }
}
--------------------------------------------------
// TEST[s/_search/_search\?filter_path=aggregations/]

The terms for each of the three shards are shown below with their
respective document counts in brackets:

[width="100%",cols="^2,^2,^2,^2",options="header"]
|=========================================================
|    | Shard A        | Shard B        | Shard C
| 1  | Product A (25) | Product A (30) | Product A (45)
| 2  | Product B (18) | Product B (25) | Product C (44)
| 3  | Product C (6)  | Product F (17) | Product Z (36)
| 4  | Product D (3)  | Product Z (16) | Product G (30)
| 5  | Product E (2)  | Product G (15) | Product E (29)
| 6  | Product F (2)  | Product H (14) | Product H (28)
| 7  | Product G (2)  | Product I (10) | Product Q (2)
| 8  | Product H (2)  | Product Q (6)  | Product D (1)
| 9  | Product I (1)  | Product J (6)  |
| 10 | Product J (1)  | Product C (4)  |
|=========================================================

The shards will return their top 5 terms so the results from the shards will be:

[width="100%",cols="^2,^2,^2,^2",options="header"]
|=========================================================
|    | Shard A        | Shard B        | Shard C
| 1  | Product A (25) | Product A (30) | Product A (45)
| 2  | Product B (18) | Product B (25) | Product C (44)
| 3  | Product C (6)  | Product F (17) | Product Z (36)
| 4  | Product D (3)  | Product Z (16) | Product G (30)
| 5  | Product E (2)  | Product G (15) | Product E (29)
|=========================================================

Taking the top 5 results from each of the shards (as requested) and combining them to make a final top 5 list produces
the following:

[width="40%",cols="^2,^2"]
|=========================================================
| 1 | Product A (100)
| 2 | Product Z (52)
| 3 | Product C (50)
| 4 | Product G (45)
| 5 | Product B (43)
|=========================================================

Because Product A was returned from all shards we know that its document count value is accurate. Product C was only
returned by shards A and C so its document count is shown as 50 but this is not an accurate count. Product C exists on
shard B, but its count of 4 was not high enough to put Product C into the top 5 list for that shard. Product Z was also
returned only by 2 shards but the third shard does not contain the term. There is no way of knowing, at the point of
combining the results to produce the final list of terms, that there is an error in the document count for Product C and
not for Product Z. Product H has a document count of 44 across all 3 shards but was not included in the final list of
terms because it did not make it into the top five terms on any of the shards.

==== Shard Size

The higher the requested `size` is, the more accurate the results will be, but also, the more expensive it will be to
compute the final results (both due to bigger priority queues that are managed on a shard level and due to bigger data
transfers between the nodes and the client).

The `shard_size` parameter can be used to minimize the extra work that comes with a bigger requested `size`. When defined,
it will determine how many terms the coordinating node will request from each shard. Once all the shards have responded, the
coordinating node will then reduce them to a final result which will be based on the `size` parameter - this way,
one can increase the accuracy of the returned terms and avoid the overhead of streaming a big list of buckets back to
the client.

NOTE: `shard_size` cannot be smaller than `size` (as it doesn't make much sense). When it is, Elasticsearch will
override it and reset it to be equal to `size`.

The default `shard_size` is `(size * 1.5 + 10)`.
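
For example, a minimal sketch of raising `shard_size` explicitly for the top-5 product request above (the value `25` is purely illustrative, not a recommendation):

[source,console]
--------------------------------------------------
GET /_search
{
  "aggs" : {
    "products" : {
      "terms" : {
        "field" : "product",
        "size" : 5,
        "shard_size" : 25 <1>
      }
    }
  }
}
--------------------------------------------------
<1> Each shard contributes its top 25 terms to the reduce phase; the final response still contains only the top 5 buckets, but their counts are more likely to be accurate.
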

==== Calculating Document Count Error

There are two error values which can be shown on the terms aggregation. The first gives a value for the aggregation as
a whole which represents the maximum potential document count for a term which did not make it into the final list of
terms. This is calculated as the sum of the document count from the last term returned from each shard. For the example
given above the value would be 46 (2 + 15 + 29). This means that in the worst case scenario a term which was not returned
could have the 4th highest document count.

[source,console-result]
--------------------------------------------------
{
  ...
  "aggregations" : {
    "products" : {
      "doc_count_error_upper_bound" : 46,
      "sum_other_doc_count" : 79,
      "buckets" : [
        {
          "key" : "Product A",
          "doc_count" : 100
        },
        {
          "key" : "Product Z",
          "doc_count" : 52
        }
        ...
      ]
    }
  }
}
--------------------------------------------------
// TESTRESPONSE[s/\.\.\.//]
// TESTRESPONSE[s/: (\-)?[0-9]+/: $body.$_path/]

==== Per bucket document count error

The second error value can be enabled by setting the `show_term_doc_count_error` parameter to true:

[source,console,id=terms-aggregation-doc-count-error-example]
--------------------------------------------------
GET /_search
{
  "aggs" : {
    "products" : {
      "terms" : {
        "field" : "product",
        "size" : 5,
        "show_term_doc_count_error": true
      }
    }
  }
}
--------------------------------------------------
// TEST[s/_search/_search\?filter_path=aggregations/]

This shows an error value for each term returned by the aggregation which represents the 'worst case' error in the document count
and can be useful when deciding on a value for the `shard_size` parameter. This is calculated by summing the document counts for
the last term returned by all shards which did not return the term. In the example above the error in the document count for Product C
would be 15 as Shard B was the only shard not to return the term and the document count of the last term it did return was 15.
The actual document count of Product C was 54 so the document count was only actually off by 4 even though the worst case was that
it would be off by 15. Product A, however, has an error of 0 for its document count; since every shard returned it, we can be confident
that the count returned is accurate.

[source,console-result]
--------------------------------------------------
{
  ...
  "aggregations" : {
    "products" : {
      "doc_count_error_upper_bound" : 46,
      "sum_other_doc_count" : 79,
      "buckets" : [
        {
          "key" : "Product A",
          "doc_count" : 100,
          "doc_count_error_upper_bound" : 0
        },
        {
          "key" : "Product Z",
          "doc_count" : 52,
          "doc_count_error_upper_bound" : 2
        }
        ...
      ]
    }
  }
}
--------------------------------------------------
// TESTRESPONSE[s/\.\.\.//]
// TESTRESPONSE[s/: (\-)?[0-9]+/: $body.$_path/]

These errors can only be calculated in this way when the terms are ordered by descending document count. When the aggregation is
ordered by the terms values themselves (either ascending or descending) there is no error in the document count since if a shard
does not return a particular term which appears in the results from another shard, it must not have that term in its index. When the
aggregation is either sorted by a sub aggregation or in order of ascending document count, the error in the document counts cannot be
determined and is given a value of -1 to indicate this.

[[search-aggregations-bucket-terms-aggregation-order]]
==== Order

The order of the buckets can be customized by setting the `order` parameter. By default, the buckets are ordered by
their `doc_count` descending. It is possible to change this behaviour as documented below:

WARNING: Sorting by ascending `_count` or by sub aggregation is discouraged as it increases the
<<search-aggregations-bucket-terms-aggregation-approximate-counts,error>> on document counts.
It is fine when a single shard is queried, or when the field that is being aggregated was used
as a routing key at index time: in these cases results will be accurate since shards have disjoint
values. Otherwise, errors are unbounded. One particular case that could still be useful
is sorting by <<search-aggregations-metrics-min-aggregation,`min`>> or
<<search-aggregations-metrics-max-aggregation,`max`>> aggregation: counts will not be accurate
but at least the top buckets will be correctly picked.

Ordering the buckets by their doc `_count` in an ascending manner:

[source,console,id=terms-aggregation-count-example]
--------------------------------------------------
GET /_search
{
  "aggs" : {
    "genres" : {
      "terms" : {
        "field" : "genre",
        "order" : { "_count" : "asc" }
      }
    }
  }
}
--------------------------------------------------

Ordering the buckets alphabetically by their terms in an ascending manner:

[source,console,id=terms-aggregation-asc-example]
--------------------------------------------------
GET /_search
{
  "aggs" : {
    "genres" : {
      "terms" : {
        "field" : "genre",
        "order" : { "_key" : "asc" }
      }
    }
  }
}
--------------------------------------------------

deprecated[6.0.0, Use `_key` instead of `_term` to order buckets by their term]

Ordering the buckets by single value metrics sub-aggregation (identified by the aggregation name):

[source,console,id=terms-aggregation-subaggregation-example]
--------------------------------------------------
GET /_search
{
  "aggs" : {
    "genres" : {
      "terms" : {
        "field" : "genre",
        "order" : { "max_play_count" : "desc" }
      },
      "aggs" : {
        "max_play_count" : { "max" : { "field" : "play_count" } }
      }
    }
  }
}
--------------------------------------------------

Ordering the buckets by multi value metrics sub-aggregation (identified by the aggregation name):

[source,console,id=terms-aggregation-multivalue-subaggregation-example]
--------------------------------------------------
GET /_search
{
  "aggs" : {
    "genres" : {
      "terms" : {
        "field" : "genre",
        "order" : { "playback_stats.max" : "desc" }
      },
      "aggs" : {
        "playback_stats" : { "stats" : { "field" : "play_count" } }
      }
    }
  }
}
--------------------------------------------------

[NOTE]
.Pipeline aggs cannot be used for sorting
=======================================
<<search-aggregations-pipeline,Pipeline aggregations>> are run during the
reduce phase after all other aggregations have already completed. For this
reason, they cannot be used for ordering.
=======================================

It is also possible to order the buckets based on a "deeper" aggregation in the hierarchy. This is supported as long
as the aggregation path is made up of single-bucket aggregations, where the last aggregation in the path may either be a single-bucket
one or a metrics one. If it's a single-bucket type, the order will be defined by the number of docs in the bucket (i.e. `doc_count`);
in case it's a metrics one, the same rules as above apply (where the path must indicate the metric name to sort by in case of
a multi-value metrics aggregation, and in case of a single-value metrics aggregation the sort will be applied on that value).

The path must be defined in the following form:

// https://en.wikipedia.org/wiki/Extended_Backus%E2%80%93Naur_Form
[source,ebnf]
--------------------------------------------------
AGG_SEPARATOR    = '>' ;
METRIC_SEPARATOR = '.' ;
AGG_NAME         = <the name of the aggregation> ;
METRIC           = <the name of the metric (in case of multi-value metrics aggregation)> ;
PATH             = <AGG_NAME> [ <AGG_SEPARATOR>, <AGG_NAME> ]* [ <METRIC_SEPARATOR>, <METRIC> ] ;
--------------------------------------------------

[source,console,id=terms-aggregation-hierarchy-example]
--------------------------------------------------
GET /_search
{
  "aggs" : {
    "countries" : {
      "terms" : {
        "field" : "artist.country",
        "order" : { "rock>playback_stats.avg" : "desc" }
      },
      "aggs" : {
        "rock" : {
          "filter" : { "term" : { "genre" : "rock" }},
          "aggs" : {
            "playback_stats" : { "stats" : { "field" : "play_count" }}
          }
        }
      }
    }
  }
}
--------------------------------------------------

The above will sort the artist's countries buckets based on the average play count among the rock songs.

Multiple criteria can be used to order the buckets by providing an array of order criteria such as the following:

[source,console,id=terms-aggregation-multicriteria-example]
--------------------------------------------------
GET /_search
{
  "aggs" : {
    "countries" : {
      "terms" : {
        "field" : "artist.country",
        "order" : [ { "rock>playback_stats.avg" : "desc" }, { "_count" : "desc" } ]
      },
      "aggs" : {
        "rock" : {
          "filter" : { "term" : { "genre" : "rock" }},
          "aggs" : {
            "playback_stats" : { "stats" : { "field" : "play_count" }}
          }
        }
      }
    }
  }
}
--------------------------------------------------

The above will sort the artist's countries buckets based on the average play count among the rock songs and then by
their `doc_count` in descending order.

NOTE: In the event that two buckets share the same values for all order criteria the bucket's term value is used as a
tie-breaker in ascending alphabetical order to prevent non-deterministic ordering of buckets.

==== Minimum document count

It is possible to only return terms that match more than a configured number of hits using the `min_doc_count` option:

[source,console,id=terms-aggregation-min-doc-count-example]
--------------------------------------------------
GET /_search
{
  "aggs" : {
    "tags" : {
      "terms" : {
        "field" : "tags",
        "min_doc_count": 10
      }
    }
  }
}
--------------------------------------------------

The above aggregation would only return tags which have been found in 10 hits or more. Default value is `1`.

Terms are collected and ordered on a shard level and merged with the terms collected from other shards in a second step. However, the shard does not have the information about the global document count available. The decision whether a term is added to a candidate list depends only on the order computed on the shard using local shard frequencies. The `min_doc_count` criterion is only applied after merging local terms statistics of all shards. In a way the decision to add the term as a candidate is made without being very _certain_ about whether the term will actually reach the required `min_doc_count`. This might cause many (globally) high-frequency terms to be missing in the final result if low-frequency terms populated the candidate lists. To avoid this, the `shard_size` parameter can be increased to allow more candidate terms on the shards. However, this increases memory consumption and network traffic.

`shard_min_doc_count` parameter

The parameter `shard_min_doc_count` regulates the _certainty_ a shard has about whether the term should actually be added to the candidate list or not with respect to the `min_doc_count`. Terms will only be considered if their local shard frequency within the set is higher than the `shard_min_doc_count`. If your dictionary contains many low-frequency terms and you are not interested in those (for example misspellings), then you can set the `shard_min_doc_count` parameter to filter out candidate terms on a shard level that will with a reasonable certainty not reach the required `min_doc_count` even after merging the local counts. `shard_min_doc_count` is set to `0` by default and has no effect unless you explicitly set it.
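
As a rough sketch (reusing the `tags` field from the example above; the thresholds are illustrative only), a request combining both options might look like this:

[source,console]
--------------------------------------------------
GET /_search
{
  "aggs" : {
    "tags" : {
      "terms" : {
        "field" : "tags",
        "min_doc_count": 10,
        "shard_min_doc_count": 2 <1>
      }
    }
  }
}
--------------------------------------------------
<1> Terms whose local, per-shard count does not reach this threshold are dropped from that shard's candidate list before the shard results are merged.
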

NOTE: Setting `min_doc_count`=`0` will also return buckets for terms that didn't match any hit. However, some of
the returned terms which have a document count of zero might only belong to deleted documents or documents
from other types, so there is no guarantee that a `match_all` query would find a positive document count for
those terms.

WARNING: When NOT sorting on `doc_count` descending, high values of `min_doc_count` may return a number of buckets
which is less than `size` because not enough data was gathered from the shards. Missing buckets can be
brought back by increasing `shard_size`.
Setting `shard_min_doc_count` too high will cause terms to be filtered out on a shard level. This value should be set much lower than `min_doc_count/#shards`.

[[search-aggregations-bucket-terms-aggregation-script]]
==== Script

Generating the terms using a script:

[source,console,id=terms-aggregation-script-example]
--------------------------------------------------
GET /_search
{
  "aggs" : {
    "genres" : {
      "terms" : {
        "script" : {
          "source": "doc['genre'].value",
          "lang": "painless"
        }
      }
    }
  }
}
--------------------------------------------------

This will interpret the `script` parameter as an `inline` script with the default script language and no script parameters. To use a stored script use the following syntax:

//////////////////////////

[source,console,id=terms-aggregation-stored-example]
--------------------------------------------------
POST /_scripts/my_script
{
  "script": {
    "lang": "painless",
    "source": "doc[params.field].value"
  }
}
--------------------------------------------------

//////////////////////////

[source,console]
--------------------------------------------------
GET /_search
{
  "aggs" : {
    "genres" : {
      "terms" : {
        "script" : {
          "id": "my_script",
          "params": {
            "field": "genre"
          }
        }
      }
    }
  }
}
--------------------------------------------------
// TEST[continued]

==== Value Script

Using a value script, the values of the field can be transformed before they are used to build the buckets:

[source,console,id=terms-aggregation-value-script-example]
--------------------------------------------------
GET /_search
{
  "aggs" : {
    "genres" : {
      "terms" : {
        "field" : "genre",
        "script" : {
          "source" : "'Genre: ' + _value",
          "lang" : "painless"
        }
      }
    }
  }
}
--------------------------------------------------

==== Filtering Values

It is possible to filter the values for which buckets will be created. This can be done using the `include` and
`exclude` parameters which are based on regular expression strings or arrays of exact values. Additionally,
`include` clauses can filter using `partition` expressions.

===== Filtering Values with regular expressions

[source,console,id=terms-aggregation-regex-example]
--------------------------------------------------
GET /_search
{
  "aggs" : {
    "tags" : {
      "terms" : {
        "field" : "tags",
        "include" : ".*sport.*",
        "exclude" : "water_.*"
      }
    }
  }
}
--------------------------------------------------

In the above example, buckets will be created for all the tags that have the word `sport` in them, except those starting
with `water_` (so the tag `water_sports` will not be aggregated). The `include` regular expression will determine what
values are "allowed" to be aggregated, while the `exclude` determines the values that should not be aggregated. When
both are defined, the `exclude` has precedence, meaning that the `include` is evaluated first and only then the `exclude`.

The syntax is the same as <<regexp-syntax,regexp queries>>.

===== Filtering Values with exact values

For matching based on exact values the `include` and `exclude` parameters can simply take an array of
strings that represent the terms as they are found in the index:

[source,console,id=terms-aggregation-exact-example]
--------------------------------------------------
GET /_search
{
  "aggs" : {
    "JapaneseCars" : {
      "terms" : {
        "field" : "make",
        "include" : ["mazda", "honda"]
      }
    },
    "ActiveCarManufacturers" : {
      "terms" : {
        "field" : "make",
        "exclude" : ["rover", "jensen"]
      }
    }
  }
}
--------------------------------------------------

===== Filtering Values with partitions

Sometimes there are too many unique terms to process in a single request/response pair so
it can be useful to break the analysis up into multiple requests.
This can be achieved by grouping the field's values into a number of partitions at query-time and processing
only one partition in each request.
Consider this request which is looking for accounts that have not logged any access recently:

[source,console,id=terms-aggregation-partitions-example]
--------------------------------------------------
GET /_search
{
  "size": 0,
  "aggs": {
    "expired_sessions": {
      "terms": {
        "field": "account_id",
        "include": {
          "partition": 0,
          "num_partitions": 20
        },
        "size": 10000,
        "order": {
          "last_access": "asc"
        }
      },
      "aggs": {
        "last_access": {
          "max": {
            "field": "access_date"
          }
        }
      }
    }
  }
}
--------------------------------------------------

This request is finding the last logged access date for a subset of customer accounts because we
might want to expire some customer accounts who haven't been seen for a long while.

The `num_partitions` setting has requested that the unique account_ids are organized evenly into twenty
partitions (0 to 19), and the `partition` setting in this request filters to only consider account_ids falling
into partition 0. Subsequent requests should ask for partitions 1 then 2, etc., to complete the expired-account analysis.

Note that the `size` setting for the number of results returned needs to be tuned with the `num_partitions`.

For this particular account-expiration example the process for balancing values for `size` and `num_partitions` would be as follows:

1. Use the `cardinality` aggregation to estimate the total number of unique account_id values (see the sketch after this list)
2. Pick a value for `num_partitions` to break the number from 1) up into more manageable chunks
3. Pick a `size` value for the number of responses we want from each partition
4. Run a test request

If we have a circuit-breaker error we are trying to do too much in one request and must increase `num_partitions`.
If the request was successful but the last account ID in the date-sorted test response was still an account we might want to
expire then we may be missing accounts of interest and have set our numbers too low. We must either

* increase the `size` parameter to return more results per partition (could be heavy on memory) or
* increase the `num_partitions` to consider fewer accounts per request (could increase overall processing time as we need to make more requests)

Ultimately this is a balancing act between managing the Elasticsearch resources required to process a single request and the volume
of requests that the client application must issue to complete a task.
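
For step 1, a minimal sketch of such a cardinality estimate could look like the following (`unique_accounts` is just an illustrative aggregation name; `account_id` is the field used in the partitioned request above):

[source,console]
--------------------------------------------------
GET /_search
{
  "size": 0,
  "aggs": {
    "unique_accounts": {
      "cardinality": {
        "field": "account_id"
      }
    }
  }
}
--------------------------------------------------

The approximate distinct count it returns can then be divided into a manageable number of partitions when picking `num_partitions`.
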

==== Multi-field terms aggregation

The `terms` aggregation does not support collecting terms from multiple fields
in the same document. The reason is that the `terms` agg doesn't collect the
string term values themselves, but rather uses
<<search-aggregations-bucket-terms-aggregation-execution-hint,global ordinals>>
to produce a list of all of the unique values in the field. Using global ordinals
results in an important performance boost which would not be possible across
multiple fields.

There are two approaches that you can use to perform a `terms` agg across
multiple fields:

<<search-aggregations-bucket-terms-aggregation-script,Script>>::

Use a script to retrieve terms from multiple fields. This disables the global
ordinals optimization and will be slower than collecting terms from a single
field, but it gives you the flexibility to implement this option at search
time.

<<copy-to,`copy_to` field>>::

If you know ahead of time that you want to collect the terms from two or more
fields, then use `copy_to` in your mapping to create a new dedicated field at
index time which contains the values from both fields. You can aggregate on
this single field, which will benefit from the global ordinals optimization.
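
A rough sketch of the `copy_to` approach (the index and field names `music`, `first_genre`, `second_genre`, and `all_genres` are made up for illustration):

[source,console]
--------------------------------------------------
PUT /music
{
  "mappings": {
    "properties": {
      "first_genre":  { "type": "keyword", "copy_to": "all_genres" },
      "second_genre": { "type": "keyword", "copy_to": "all_genres" },
      "all_genres":   { "type": "keyword" } <1>
    }
  }
}

GET /music/_search
{
  "size": 0,
  "aggs": {
    "genres": {
      "terms": { "field": "all_genres" } <2>
    }
  }
}
--------------------------------------------------
<1> The dedicated field receives the values of both source fields at index time.
<2> Aggregating on the single combined field keeps the global ordinals optimization.
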

[[search-aggregations-bucket-terms-aggregation-collect]]
==== Collect mode

Deferring calculation of child aggregations

For fields with many unique terms and a small number of required results it can be more efficient to delay the calculation
of child aggregations until the top parent-level aggs have been pruned. Ordinarily, all branches of the aggregation tree
are expanded in one depth-first pass and only then any pruning occurs.
In some scenarios this can be very wasteful and can hit memory constraints.
An example problem scenario is querying a movie database for the 10 most popular actors and their 5 most common co-stars:

[source,console,id=terms-aggregation-collect-mode-example]
--------------------------------------------------
GET /_search
{
  "aggs" : {
    "actors" : {
      "terms" : {
        "field" : "actors",
        "size" : 10
      },
      "aggs" : {
        "costars" : {
          "terms" : {
            "field" : "actors",
            "size" : 5
          }
        }
      }
    }
  }
}
--------------------------------------------------

Even though the number of actors may be comparatively small and we want only 50 result buckets there is a combinatorial explosion of buckets
during calculation - a single actor can produce n² buckets where n is the number of actors. The sane option would be to first determine
the 10 most popular actors and only then examine the top co-stars for these 10 actors. This alternative strategy is what we call the `breadth_first` collection
mode as opposed to the `depth_first` mode.

NOTE: `breadth_first` is the default mode for fields with a cardinality bigger than the requested size or when the cardinality is unknown (numeric fields or scripts for instance).

It is possible to override the default heuristic and to provide a collect mode directly in the request:

[source,console,id=terms-aggregation-breadth-first-example]
--------------------------------------------------
GET /_search
{
  "aggs" : {
    "actors" : {
      "terms" : {
        "field" : "actors",
        "size" : 10,
        "collect_mode" : "breadth_first" <1>
      },
      "aggs" : {
        "costars" : {
          "terms" : {
            "field" : "actors",
            "size" : 5
          }
        }
      }
    }
  }
}
--------------------------------------------------
<1> the possible values are `breadth_first` and `depth_first`

When using `breadth_first` mode the set of documents that fall into the uppermost buckets is
cached for subsequent replay, so there is a memory overhead in doing this which is linear with the number of matching documents.
Note that the `order` parameter can still be used to refer to data from a child aggregation when using the `breadth_first` setting - the parent
aggregation understands that this child aggregation will need to be called first before any of the other child aggregations.

WARNING: Nested aggregations such as `top_hits` which require access to score information under an aggregation that uses the `breadth_first`
collection mode need to replay the query on the second pass but only for the documents belonging to the top buckets.

[[search-aggregations-bucket-terms-aggregation-execution-hint]]
==== Execution hint

There are different mechanisms by which terms aggregations can be executed:

- by using field values directly in order to aggregate data per-bucket (`map`)
- by using global ordinals of the field and allocating one bucket per global ordinal (`global_ordinals`)

Elasticsearch tries to have sensible defaults so this is something that generally doesn't need to be configured.

`global_ordinals` is the default option for `keyword` fields; it uses global ordinals to allocate buckets dynamically,
so memory usage is linear to the number of values of the documents that are part of the aggregation scope.

`map` should only be considered when very few documents match a query. Otherwise the ordinals-based execution mode
is significantly faster. By default, `map` is only used when running an aggregation on scripts, since they don't have
ordinals.

[source,console,id=terms-aggregation-execution-hint-example]
--------------------------------------------------
GET /_search
{
  "aggs" : {
    "tags" : {
      "terms" : {
        "field" : "tags",
        "execution_hint": "map" <1>
      }
    }
  }
}
--------------------------------------------------
<1> The possible values are `map`, `global_ordinals`

Please note that Elasticsearch will ignore this execution hint if it is not applicable and that there is no backward compatibility guarantee on these hints.

==== Missing value

The `missing` parameter defines how documents that are missing a value should be treated.
By default they will be ignored but it is also possible to treat them as if they
had a value.

[source,console,id=terms-aggregation-missing-example]
--------------------------------------------------
GET /_search
{
  "aggs" : {
    "tags" : {
      "terms" : {
        "field" : "tags",
        "missing": "N/A" <1>
      }
    }
  }
}
--------------------------------------------------
<1> Documents without a value in the `tags` field will fall into the same bucket as documents that have the value `N/A`.

==== Mixing field types

WARNING: When aggregating on multiple indices the type of the aggregated field may not be the same in all indices.
Some types are compatible with each other (`integer` and `long` or `float` and `double`) but when the types are a mix
of decimal and non-decimal number the terms aggregation will promote the non-decimal numbers to decimal numbers.
This can result in a loss of precision in the bucket values.
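
As a rough sketch of the situation this warning describes (the `sales-v1`/`sales-v2` indices and the `price` field are invented for illustration), two indices might map the same field with different numeric types and then be aggregated together:

[source,console]
--------------------------------------------------
PUT /sales-v1
{
  "mappings": { "properties": { "price": { "type": "long" } } }
}

PUT /sales-v2
{
  "mappings": { "properties": { "price": { "type": "double" } } }
}

GET /sales-v1,sales-v2/_search
{
  "size": 0,
  "aggs": {
    "prices": {
      "terms": { "field": "price" } <1>
    }
  }
}
--------------------------------------------------
<1> Because the indices mix a non-decimal (`long`) and a decimal (`double`) type, the `long` values are promoted to `double`, so very large integer values may lose precision in the bucket keys.
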