
[[search-aggregations-pipeline]]
== Pipeline Aggregations

Pipeline aggregations work on the outputs produced from other aggregations rather than from document sets, adding
information to the output tree. There are many different types of pipeline aggregation, each computing different information from
other aggregations, but these types can be broken down into two families:

_Parent_::
A family of pipeline aggregations that is provided with the output of its parent aggregation and is able
to compute new buckets or new aggregations to add to existing buckets.

_Sibling_::
Pipeline aggregations that are provided with the output of a sibling aggregation and are able to compute a
new aggregation which will be at the same level as the sibling aggregation.

Pipeline aggregations can reference the aggregations they need to perform their computation by using the `buckets_path`
parameter to indicate the paths to the required metrics. The syntax for defining these paths can be found in the
<<buckets-path-syntax, `buckets_path` Syntax>> section below.

Pipeline aggregations cannot have sub-aggregations, but depending on the type they can reference another pipeline in the `buckets_path`,
allowing pipeline aggregations to be chained. For example, you can chain together two derivatives to calculate the second derivative
(i.e. a derivative of a derivative).

NOTE: Because pipeline aggregations only add to the output, when chaining pipeline aggregations the output of each pipeline aggregation
will be included in the final output.
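The chaining described above can be sketched as follows. This is an illustrative request (the `timestamp` and `lemmings` field names are hypothetical) in which `the_2nd_deriv` points its `buckets_path` at the first derivative rather than at a metric:

[source,console]
--------------------------------------------------
POST /_search
{
  "aggs": {
    "my_date_histo": {
      "date_histogram": {
        "field": "timestamp",
        "calendar_interval": "day"
      },
      "aggs": {
        "the_sum": {
          "sum": { "field": "lemmings" }
        },
        "the_deriv": {
          "derivative": { "buckets_path": "the_sum" }
        },
        "the_2nd_deriv": {
          "derivative": { "buckets_path": "the_deriv" }
        }
      }
    }
  }
}
--------------------------------------------------

Because each pipeline aggregation adds its result to the output, each daily bucket in the response would carry the sum, the first derivative, and the second derivative (where enough prior buckets exist to compute them).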
[[buckets-path-syntax]]
[float]
=== `buckets_path` Syntax

Most pipeline aggregations require another aggregation as their input. The input aggregation is defined via the `buckets_path`
parameter, which follows a specific format:

// https://en.wikipedia.org/wiki/Extended_Backus%E2%80%93Naur_Form
[source,ebnf]
--------------------------------------------------
AGG_SEPARATOR       =  `>` ;
METRIC_SEPARATOR    =  `.` ;
AGG_NAME            =  <the name of the aggregation> ;
METRIC              =  <the name of the metric (in case of multi-value metrics aggregation)> ;
MULTIBUCKET_KEY     =  `[<KEY_NAME>]`
PATH                =  <AGG_NAME><MULTIBUCKET_KEY>? (<AGG_SEPARATOR>, <AGG_NAME> )* ( <METRIC_SEPARATOR>, <METRIC> ) ;
--------------------------------------------------

For example, the path `"my_bucket>my_stats.avg"` points to the `avg` value in the `"my_stats"` metric, which is
contained in the `"my_bucket"` bucket aggregation.

Paths are relative from the position of the pipeline aggregation; they are not absolute paths, and the path cannot go back "up" the
aggregation tree. For example, this derivative is embedded inside a date_histogram and refers to a "sibling"
metric `"the_sum"`:
[source,console,id=buckets-path-example]
--------------------------------------------------
POST /_search
{
  "aggs": {
    "my_date_histo": {
      "date_histogram": {
        "field": "timestamp",
        "calendar_interval": "day"
      },
      "aggs": {
        "the_sum": {
          "sum": { "field": "lemmings" } <1>
        },
        "the_deriv": {
          "derivative": { "buckets_path": "the_sum" } <2>
        }
      }
    }
  }
}
--------------------------------------------------

<1> The metric is called `"the_sum"`
<2> The `buckets_path` refers to the metric via a relative path `"the_sum"`
`buckets_path` is also used for Sibling pipeline aggregations, where the aggregation is "next" to a series of buckets
instead of embedded "inside" them. For example, the `max_bucket` aggregation uses the `buckets_path` to specify
a metric embedded inside a sibling aggregation:

[source,console,id=buckets-path-sibling-example]
--------------------------------------------------
POST /_search
{
  "aggs": {
    "sales_per_month": {
      "date_histogram": {
        "field": "date",
        "calendar_interval": "month"
      },
      "aggs": {
        "sales": {
          "sum": {
            "field": "price"
          }
        }
      }
    },
    "max_monthly_sales": {
      "max_bucket": {
        "buckets_path": "sales_per_month>sales" <1>
      }
    }
  }
}
--------------------------------------------------
// TEST[setup:sales]

<1> `buckets_path` instructs this `max_bucket` aggregation that we want the maximum value of the `sales` aggregation in the
`sales_per_month` date histogram.
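For context, the response from a sibling pipeline aggregation such as `max_bucket` appears at the same level as the aggregation it reads from, reporting both the value and the key(s) of the bucket(s) that produced it. An abbreviated, illustrative response fragment (the numbers and dates are made up) might look like:

[source,js]
--------------------------------------------------
{
  "aggregations": {
    "sales_per_month": {
      "buckets": [ ... ]
    },
    "max_monthly_sales": {
      "keys": ["2015/01/01 00:00:00"],
      "value": 550.0
    }
  }
}
--------------------------------------------------
// NOTCONSOLE

Note that `keys` is an array, since more than one bucket can tie for the maximum value.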
If a Sibling pipeline agg references a multi-bucket aggregation, such as a `terms` agg, it also has the option to
select specific keys from the multi-bucket. For example, a `bucket_script` could select two specific buckets (via
their bucket keys) to perform the calculation:

[source,console,id=buckets-path-specific-bucket-example]
--------------------------------------------------
POST /_search
{
  "aggs": {
    "sales_per_month": {
      "date_histogram": {
        "field": "date",
        "calendar_interval": "month"
      },
      "aggs": {
        "sale_type": {
          "terms": {
            "field": "type"
          },
          "aggs": {
            "sales": {
              "sum": {
                "field": "price"
              }
            }
          }
        },
        "hat_vs_bag_ratio": {
          "bucket_script": {
            "buckets_path": {
              "hats": "sale_type['hat']>sales", <1>
              "bags": "sale_type['bag']>sales" <1>
            },
            "script": "params.hats / params.bags"
          }
        }
      }
    }
  }
}
--------------------------------------------------
// TEST[setup:sales]

<1> `buckets_path` selects the hats and bags buckets (via `['hat']`/`['bag']`) to use in the script specifically,
instead of fetching all the buckets from the `sale_type` aggregation.
[float]
=== Special Paths

Instead of pathing to a metric, `buckets_path` can use a special `"_count"` path. This instructs
the pipeline aggregation to use the document count as its input. For example, a derivative can be calculated
on the document count of each bucket, instead of a specific metric:

[source,console,id=buckets-path-count-example]
--------------------------------------------------
POST /_search
{
  "aggs": {
    "my_date_histo": {
      "date_histogram": {
        "field": "timestamp",
        "calendar_interval": "day"
      },
      "aggs": {
        "the_deriv": {
          "derivative": { "buckets_path": "_count" } <1>
        }
      }
    }
  }
}
--------------------------------------------------

<1> By using `_count` instead of a metric name, we can calculate the derivative of document counts in the histogram
The `buckets_path` can also use `"_bucket_count"` and path to a multi-bucket aggregation to use the number of buckets
returned by that aggregation in the pipeline aggregation instead of a metric. For example, a `bucket_selector` can be
used here to filter out buckets which contain no buckets for an inner terms aggregation:

[source,console,id=buckets-path-bucket-count-example]
--------------------------------------------------
POST /sales/_search
{
  "size": 0,
  "aggs": {
    "histo": {
      "date_histogram": {
        "field": "date",
        "calendar_interval": "day"
      },
      "aggs": {
        "categories": {
          "terms": {
            "field": "category"
          }
        },
        "min_bucket_selector": {
          "bucket_selector": {
            "buckets_path": {
              "count": "categories._bucket_count" <1>
            },
            "script": {
              "source": "params.count != 0"
            }
          }
        }
      }
    }
  }
}
--------------------------------------------------
// TEST[setup:sales]

<1> By using `_bucket_count` instead of a metric name, we can filter out `histo` buckets that contain no buckets
for the `categories` aggregation
[[dots-in-agg-names]]
[float]
=== Dealing with dots in agg names

An alternate syntax is supported to cope with aggregations or metrics which
have dots in the name, such as the ++99.9++th
<<search-aggregations-metrics-percentile-aggregation,percentile>>. This metric
may be referred to as:

[source,js]
---------------
"buckets_path": "my_percentile[99.9]"
---------------
// NOTCONSOLE
[[gap-policy]]
[float]
=== Dealing with gaps in the data

Data in the real world is often noisy and sometimes contains *gaps* -- places where data simply doesn't exist. This can
occur for a variety of reasons, the most common being:

* Documents falling into a bucket do not contain a required field
* There are no documents matching the query for one or more buckets
* The metric being calculated is unable to generate a value, likely because another dependent bucket is missing a value.
Some pipeline aggregations have specific requirements that must be met (e.g. a derivative cannot calculate a metric for the
first value because there is no previous value, the HoltWinters moving average needs "warmup" data to begin calculating, etc.)

Gap policies are a mechanism to inform the pipeline aggregation about the desired behavior when "gappy" or missing
data is encountered. All pipeline aggregations accept the `gap_policy` parameter. There are currently two gap policies
to choose from:

_skip_::
This option treats missing data as if the bucket does not exist. It will skip the bucket and continue
calculating using the next available value.

_insert_zeros_::
This option will replace missing values with a zero (`0`) and pipeline aggregation computation will
proceed as normal.
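As a minimal sketch (reusing the hypothetical `timestamp` and `lemmings` fields from the earlier examples), the gap policy is set directly on the pipeline aggregation itself; here the derivative treats any bucket with a missing `the_sum` value as `0` instead of skipping it:

[source,console]
--------------------------------------------------
POST /_search
{
  "aggs": {
    "my_date_histo": {
      "date_histogram": {
        "field": "timestamp",
        "calendar_interval": "day"
      },
      "aggs": {
        "the_sum": {
          "sum": { "field": "lemmings" }
        },
        "the_deriv": {
          "derivative": {
            "buckets_path": "the_sum",
            "gap_policy": "insert_zeros"
          }
        }
      }
    }
  }
}
--------------------------------------------------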
include::pipeline/avg-bucket-aggregation.asciidoc[]

include::pipeline/derivative-aggregation.asciidoc[]

include::pipeline/max-bucket-aggregation.asciidoc[]

include::pipeline/min-bucket-aggregation.asciidoc[]

include::pipeline/sum-bucket-aggregation.asciidoc[]

include::pipeline/stats-bucket-aggregation.asciidoc[]

include::pipeline/extended-stats-bucket-aggregation.asciidoc[]

include::pipeline/percentiles-bucket-aggregation.asciidoc[]

include::pipeline/movavg-aggregation.asciidoc[]

include::pipeline/movfn-aggregation.asciidoc[]

include::pipeline/cumulative-sum-aggregation.asciidoc[]

include::pipeline/cumulative-cardinality-aggregation.asciidoc[]

include::pipeline/bucket-script-aggregation.asciidoc[]

include::pipeline/bucket-selector-aggregation.asciidoc[]

include::pipeline/bucket-sort-aggregation.asciidoc[]

include::pipeline/serial-diff-aggregation.asciidoc[]