[[search-aggregations-metrics-percentile-aggregation]]
=== Percentiles aggregation
++++
<titleabbrev>Percentiles</titleabbrev>
++++

A `multi-value` metrics aggregation that calculates one or more percentiles
over numeric values extracted from the aggregated documents. These values can be
extracted from specific numeric or <<histogram,histogram fields>> in the documents.

Percentiles show the point at which a certain percentage of observed values
occur. For example, the 95th percentile is the value which is greater than 95%
of the observed values.

Percentiles are often used to find outliers. In normal distributions, the
0.13th and 99.87th percentiles represent three standard deviations from the
mean. Any data which falls outside three standard deviations is often considered
an anomaly.

When a range of percentiles is retrieved, they can be used to estimate the
data distribution and determine if the data is skewed, bimodal, etc.

Assume your data consists of website load times. The average and median
load times are not overly useful to an administrator. The max may be interesting,
but it can be easily skewed by a single slow response.

Let's look at a range of percentiles representing load time:

[source,console]
--------------------------------------------------
GET latency/_search
{
  "size": 0,
  "aggs": {
    "load_time_outlier": {
      "percentiles": {
        "field": "load_time" <1>
      }
    }
  }
}
--------------------------------------------------
// TEST[setup:latency]
<1> The field `load_time` must be a numeric field

By default, the `percentile` metric will generate a range of
percentiles: `[ 1, 5, 25, 50, 75, 95, 99 ]`. The response will look like this:

[source,console-result]
--------------------------------------------------
{
  ...
  "aggregations": {
    "load_time_outlier": {
      "values": {
        "1.0": 5.0,
        "5.0": 25.0,
        "25.0": 165.0,
        "50.0": 445.0,
        "75.0": 725.0,
        "95.0": 945.0,
        "99.0": 985.0
      }
    }
  }
}
--------------------------------------------------
// TESTRESPONSE[s/\.\.\./"took": $body.took,"timed_out": false,"_shards": $body._shards,"hits": $body.hits,/]

As you can see, the aggregation will return a calculated value for each percentile
in the default range. If we assume response times are in milliseconds, it is
immediately obvious that the webpage normally loads in 5-725ms, but occasionally
spikes to 945-985ms.

Often, administrators are only interested in outliers -- the extreme percentiles.
We can specify just the percents we are interested in (requested percentiles
must be a value between 0-100 inclusive):

[source,console]
--------------------------------------------------
GET latency/_search
{
  "size": 0,
  "aggs": {
    "load_time_outlier": {
      "percentiles": {
        "field": "load_time",
        "percents": [ 95, 99, 99.9 ] <1>
      }
    }
  }
}
--------------------------------------------------
// TEST[setup:latency]
<1> Use the `percents` parameter to specify particular percentiles to calculate

==== Keyed Response

By default, the `keyed` flag is set to `true`, which associates a unique string key with each bucket and returns the ranges as a hash rather than an array. Setting the `keyed` flag to `false` will disable this behavior:

[source,console]
--------------------------------------------------
GET latency/_search
{
  "size": 0,
  "aggs": {
    "load_time_outlier": {
      "percentiles": {
        "field": "load_time",
        "keyed": false
      }
    }
  }
}
--------------------------------------------------
// TEST[setup:latency]

Response:

[source,console-result]
--------------------------------------------------
{
  ...
  "aggregations": {
    "load_time_outlier": {
      "values": [
        {
          "key": 1.0,
          "value": 5.0
        },
        {
          "key": 5.0,
          "value": 25.0
        },
        {
          "key": 25.0,
          "value": 165.0
        },
        {
          "key": 50.0,
          "value": 445.0
        },
        {
          "key": 75.0,
          "value": 725.0
        },
        {
          "key": 95.0,
          "value": 945.0
        },
        {
          "key": 99.0,
          "value": 985.0
        }
      ]
    }
  }
}
--------------------------------------------------
// TESTRESPONSE[s/\.\.\./"took": $body.took,"timed_out": false,"_shards": $body._shards,"hits": $body.hits,/]

==== Script

If you need to run the aggregation against values that aren't indexed, use
a <<runtime,runtime field>>. For example, if your load times
are in milliseconds but you want percentiles calculated in seconds:

[source,console]
----
GET latency/_search
{
  "size": 0,
  "runtime_mappings": {
    "load_time.seconds": {
      "type": "long",
      "script": {
        "source": "emit(doc['load_time'].value / params.timeUnit)",
        "params": {
          "timeUnit": 1000
        }
      }
    }
  },
  "aggs": {
    "load_time_outlier": {
      "percentiles": {
        "field": "load_time.seconds"
      }
    }
  }
}
----
// TEST[setup:latency]
// TEST[s/_search/_search?filter_path=aggregations/]
// TEST[s/"timeUnit": 1000/"timeUnit": 10/]

////
[source,console-result]
----
{
  "aggregations": {
    "load_time_outlier": {
      "values": {
        "1.0": 0.5,
        "5.0": 2.5,
        "25.0": 16.5,
        "50.0": 44.5,
        "75.0": 72.5,
        "95.0": 94.5,
        "99.0": 98.5
      }
    }
  }
}
----
////

[[search-aggregations-metrics-percentile-aggregation-approximation]]
==== Percentiles are (usually) approximate

There are many different algorithms to calculate percentiles. The naive
implementation simply stores all the values in a sorted array. To find the 50th
percentile, you simply find the value that is at `my_array[count(my_array) * 0.5]`.

Clearly, the naive implementation does not scale -- the sorted array grows
linearly with the number of values in your dataset. To calculate percentiles
across potentially billions of values in an Elasticsearch cluster, _approximate_
percentiles are calculated.

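For reference, here is a minimal sketch of the naive approach described above, written in plain Java outside of Elasticsearch (the class and method names are made up for illustration; this is *not* how the aggregation is implemented). It keeps and sorts a full copy of every observed value, which is exactly the part that does not scale:

[source,java]
----
import java.util.Arrays;

// Illustrative only: the naive exact method described above,
// not the TDigest/HDR approximations Elasticsearch actually uses.
class NaivePercentile {

    // Returns the value at percentile p (0-100) by sorting a copy of all
    // observations: O(n log n) time and O(n) memory per calculation.
    static double percentile(double[] values, double p) {
        double[] sorted = values.clone();
        Arrays.sort(sorted);
        int index = (int) Math.ceil((p / 100.0) * sorted.length) - 1;
        return sorted[Math.max(index, 0)];
    }

    public static void main(String[] args) {
        double[] loadTimes = { 5, 25, 165, 445, 725, 945, 985 };
        System.out.println(percentile(loadTimes, 50)); // 445.0
        System.out.println(percentile(loadTimes, 99)); // 985.0
    }
}
----
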
The algorithm used by the `percentile` metric is called TDigest (introduced by
Ted Dunning in
https://github.com/tdunning/t-digest/blob/master/docs/t-digest-paper/histo.pdf[Computing Accurate Quantiles using T-Digests]).

When using this metric, there are a few guidelines to keep in mind:

- Accuracy is proportional to `q(1-q)`. This means that extreme percentiles (e.g. 99%)
are more accurate than less extreme percentiles, such as the median.
- For small sets of values, percentiles are highly accurate (and potentially
100% accurate if the data is small enough).
- As the quantity of values in a bucket grows, the algorithm begins to approximate
the percentiles. It is effectively trading accuracy for memory savings. The
exact level of inaccuracy is difficult to generalize, since it depends on your
data distribution and the volume of data being aggregated.

The following chart shows the relative error on a uniform distribution depending
on the number of collected values and the requested percentile:

image:images/percentiles_error.png[]

It shows how precision is better for extreme percentiles. The reason why error diminishes
for large numbers of values is that the law of large numbers makes the distribution of
values more and more uniform and the t-digest tree can do a better job at summarizing
it. It would not be the case on more skewed distributions.

[WARNING]
====
Percentile aggregations are also
{wikipedia}/Nondeterministic_algorithm[non-deterministic].
This means you can get slightly different results using the same data.
====

[[search-aggregations-metrics-percentile-aggregation-compression]]
==== Compression

Approximate algorithms must balance memory utilization with estimation accuracy.
This balance can be controlled using a `compression` parameter:

[source,console]
--------------------------------------------------
GET latency/_search
{
  "size": 0,
  "aggs": {
    "load_time_outlier": {
      "percentiles": {
        "field": "load_time",
        "tdigest": {
          "compression": 200 <1>
        }
      }
    }
  }
}
--------------------------------------------------
// TEST[setup:latency]
<1> Compression controls memory usage and approximation error

// tag::t-digest[]
The TDigest algorithm uses a number of "nodes" to approximate percentiles -- the
more nodes available, the higher the accuracy (and the larger the memory footprint), proportional
to the volume of data. The `compression` parameter limits the maximum number of
nodes to `20 * compression`.

Therefore, by increasing the compression value, you can increase the accuracy of
your percentiles at the cost of more memory. Larger compression values also
make the algorithm slower since the underlying tree data structure grows in size,
resulting in more expensive operations. The default compression value is
`100`.

A "node" uses roughly 32 bytes of memory, so under worst-case scenarios (a large amount
of data which arrives sorted and in order) the default settings will produce a
TDigest roughly 64KB in size. In practice data tends to be more random and
the TDigest will use less memory.
// end::t-digest[]

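To make the worst case above concrete: with the default `compression` of `100`, the digest is capped at `20 * 100 = 2,000` nodes, and at roughly 32 bytes per node that is about `2,000 * 32 = 64,000` bytes -- the ~64KB figure mentioned above. Doubling `compression` to `200` roughly doubles both the node limit and that worst-case footprint.
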
==== HDR Histogram

NOTE: This setting exposes the internal implementation of HDR Histogram and the syntax may change in the future.

https://github.com/HdrHistogram/HdrHistogram[HDR Histogram] (High Dynamic Range Histogram) is an alternative implementation
that can be useful when calculating percentiles for latency measurements as it can be faster than the t-digest implementation
with the trade-off of a larger memory footprint. This implementation maintains a fixed worst-case percentage error (specified
as a number of significant digits). This means that if data is recorded with values from 1 microsecond up to 1 hour
(3,600,000,000 microseconds) in a histogram set to 3 significant digits, it will maintain a value resolution of 1 microsecond
for values up to 1 millisecond and 3.6 seconds (or better) for the maximum tracked value (1 hour).

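To make that resolution trade-off concrete, here is a small standalone sketch using the HdrHistogram Java library linked above directly (illustrative only -- when using the aggregation you never call the library yourself, you only set `number_of_significant_value_digits` as in the request below; the sample values and class name are made up):

[source,java]
----
import org.HdrHistogram.Histogram;

public class HdrExample {
    public static void main(String[] args) {
        // Track values from 1 microsecond up to 1 hour with 3 significant
        // digits, mirroring the example above.
        Histogram histogram = new Histogram(3_600_000_000L, 3);

        // Record some latencies, in microseconds.
        histogram.recordValue(150);        // 150 µs
        histogram.recordValue(45_000);     // 45 ms
        histogram.recordValue(2_000_000);  // 2 s

        // Recorded values are bucketed to ~3 significant digits, so each
        // reported percentile is within roughly 0.1% of the true value.
        System.out.println(histogram.getValueAtPercentile(50.0));
        System.out.println(histogram.getValueAtPercentile(99.0));
    }
}
----
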
The HDR Histogram can be used by specifying the `hdr` parameter in the request:

[source,console]
--------------------------------------------------
GET latency/_search
{
  "size": 0,
  "aggs": {
    "load_time_outlier": {
      "percentiles": {
        "field": "load_time",
        "percents": [ 95, 99, 99.9 ],
        "hdr": { <1>
          "number_of_significant_value_digits": 3 <2>
        }
      }
    }
  }
}
--------------------------------------------------
// TEST[setup:latency]
<1> `hdr` object indicates that HDR Histogram should be used to calculate the percentiles and specific settings for this algorithm can be specified inside the object
<2> `number_of_significant_value_digits` specifies the resolution of values for the histogram in number of significant digits

The HDR Histogram only supports positive values and will error if it is passed a negative value. It is also not a good idea to use
the HDR Histogram if the range of values is unknown, as this could lead to high memory usage.

==== Missing value

The `missing` parameter defines how documents that are missing a value should be treated.
By default they will be ignored but it is also possible to treat them as if they
had a value.

[source,console]
--------------------------------------------------
GET latency/_search
{
  "size": 0,
  "aggs": {
    "grade_percentiles": {
      "percentiles": {
        "field": "grade",
        "missing": 10 <1>
      }
    }
  }
}
--------------------------------------------------
// TEST[setup:latency]
<1> Documents without a value in the `grade` field will fall into the same bucket as documents that have the value `10`.