[[search-aggregations-metrics-percentile-aggregation]]
=== Percentiles Aggregation

A `multi-value` metrics aggregation that calculates one or more percentiles
over numeric values extracted from the aggregated documents. These values
can be extracted either from specific numeric fields in the documents, or
be generated by a provided script.

Percentiles show the point at which a certain percentage of observed values
occur. For example, the 95th percentile is the value which is greater than 95%
of the observed values.

Percentiles are often used to find outliers. In normal distributions, the
0.13th and 99.87th percentiles represent three standard deviations from the
mean. Any data which falls outside three standard deviations is often considered
an anomaly.

When a range of percentiles is retrieved, the results can be used to estimate the
data distribution and determine if the data is skewed, bimodal, etc.

Assume your data consists of website load times. The average and median
load times are not overly useful to an administrator. The max may be interesting,
but it can be easily skewed by a single slow response.

Let's look at a range of percentiles representing load time:

[source,js]
--------------------------------------------------
{
    "aggs" : {
        "load_time_outlier" : {
            "percentiles" : {
                "field" : "load_time" <1>
            }
        }
    }
}
--------------------------------------------------
<1> The field `load_time` must be a numeric field

By default, the `percentile` metric will generate a range of
percentiles: `[ 1, 5, 25, 50, 75, 95, 99 ]`. The response will look like this:

[source,js]
--------------------------------------------------
{
    ...

    "aggregations": {
        "load_time_outlier": {
            "values" : {
                "1.0": 15,
                "5.0": 20,
                "25.0": 23,
                "50.0": 25,
                "75.0": 29,
                "95.0": 60,
                "99.0": 150
            }
        }
    }
}
--------------------------------------------------

As you can see, the aggregation will return a calculated value for each percentile
in the default range. If we assume response times are in milliseconds, it is
immediately obvious that the webpage normally loads in 15-30ms, but occasionally
spikes to 60-150ms.

Often, administrators are only interested in outliers -- the extreme percentiles.
We can specify just the percents we are interested in (requested percentiles
must be values between 0 and 100, inclusive):
[source,js]
--------------------------------------------------
{
    "aggs" : {
        "load_time_outlier" : {
            "percentiles" : {
                "field" : "load_time",
                "percents" : [95, 99, 99.9] <1>
            }
        }
    }
}
--------------------------------------------------
<1> Use the `percents` parameter to specify particular percentiles to calculate
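
With only those three percents requested, the response contains just those keys. A sketch
of what such a response might look like (the numeric values below are purely illustrative,
not real output):

[source,js]
--------------------------------------------------
{
    ...

    "aggregations": {
        "load_time_outlier": {
            "values" : {
                "95.0": 60,
                "99.0": 150,
                "99.9": 200
            }
        }
    }
}
--------------------------------------------------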

==== Script

The percentile metric supports scripting. For example, if our load times
are in milliseconds but we want percentiles calculated in seconds, we could use
a script to convert them on-the-fly:

[source,js]
--------------------------------------------------
{
    "aggs" : {
        "load_time_outlier" : {
            "percentiles" : {
                "script" : "doc['load_time'].value / timeUnit", <1>
                "params" : {
                    "timeUnit" : 1000 <2>
                }
            }
        }
    }
}
--------------------------------------------------
<1> The `field` parameter is replaced with a `script` parameter, which uses the
script to generate the values on which the percentiles are calculated
<2> Scripting supports parameterized input just like any other script

TIP: The `script` parameter expects an inline script. Use `script_id` for indexed scripts and `script_file` for scripts in the `config/scripts/` directory.
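
For example, a sketch of the same aggregation using a file script (the script name
`load_time_seconds` is hypothetical; the corresponding file would live in the
`config/scripts/` directory and contain the conversion logic shown above):

[source,js]
--------------------------------------------------
{
    "aggs" : {
        "load_time_outlier" : {
            "percentiles" : {
                "script_file" : "load_time_seconds",
                "params" : {
                    "timeUnit" : 1000
                }
            }
        }
    }
}
--------------------------------------------------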

[[search-aggregations-metrics-percentile-aggregation-approximation]]
==== Percentiles are (usually) approximate

There are many different algorithms to calculate percentiles. The naive
implementation simply stores all the values in a sorted array. To find the 50th
percentile, you simply find the value that is at `my_array[count(my_array) * 0.5]`.

Clearly, the naive implementation does not scale -- the sorted array grows
linearly with the number of values in your dataset. To calculate percentiles
across potentially billions of values in an Elasticsearch cluster, _approximate_
percentiles are calculated.
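
To make the naive approach concrete, here is a minimal sketch (illustration only; this is
not how Elasticsearch computes percentiles, and the sample data is arbitrary):

[source,java]
--------------------------------------------------
import java.util.Arrays;

public class NaivePercentile {

    // Naive approach: keep every value, sort the array, then index into it.
    // Memory grows linearly with the number of values, which is exactly why
    // Elasticsearch uses an approximate algorithm instead.
    static double percentile(double[] values, double percent) {
        double[] sorted = values.clone();
        Arrays.sort(sorted);
        int index = (int) Math.ceil((percent / 100.0) * sorted.length) - 1;
        return sorted[Math.max(index, 0)];
    }

    public static void main(String[] args) {
        double[] loadTimes = {15, 20, 23, 25, 29, 60, 150}; // arbitrary sample data
        System.out.println(percentile(loadTimes, 50.0));    // prints 25.0
        System.out.println(percentile(loadTimes, 99.0));    // prints 150.0
    }
}
--------------------------------------------------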

The algorithm used by the `percentile` metric is called TDigest (introduced by
Ted Dunning in
https://github.com/tdunning/t-digest/blob/master/docs/t-digest-paper/histo.pdf[Computing Accurate Quantiles using T-Digests]).

When using this metric, there are a few guidelines to keep in mind:

- Accuracy is proportional to `q(1-q)`. This means that extreme percentiles (e.g. 99%)
are more accurate than less extreme percentiles, such as the median (see the short
worked example after this list).
- For small sets of values, percentiles are highly accurate (and potentially
100% accurate if the data is small enough).
- As the quantity of values in a bucket grows, the algorithm begins to approximate
the percentiles. It is effectively trading accuracy for memory savings. The
exact level of inaccuracy is difficult to generalize, since it depends on your
data distribution and the volume of data being aggregated.
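
Since `q(1-q)` is largest at the median and shrinks toward the tails, the error bound is
roughly 25x tighter at the 99th percentile than at the 50th. A small sketch of that
relationship (illustrative arithmetic only):

[source,java]
--------------------------------------------------
public class AccuracyBound {
    public static void main(String[] args) {
        // Relative accuracy is proportional to q(1-q), so it is worst at the
        // median (q = 0.5) and improves toward the extreme percentiles.
        double[] quantiles = {0.50, 0.75, 0.95, 0.99, 0.999};
        for (double q : quantiles) {
            System.out.printf("q=%.3f  q(1-q)=%.5f%n", q, q * (1 - q));
        }
        // q=0.500  q(1-q)=0.25000   <- worst case
        // q=0.990  q(1-q)=0.00990   <- roughly 25x smaller than at the median
    }
}
--------------------------------------------------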

The following chart shows the relative error on a uniform distribution depending
on the number of collected values and the requested percentile:

image:images/percentiles_error.png[]

It shows how precision is better for extreme percentiles. The reason the error diminishes
for large numbers of values is that the law of large numbers makes the distribution of
values more and more uniform, so the t-digest tree can do a better job of summarizing
it. This would not be the case on more skewed distributions.

[[search-aggregations-metrics-percentile-aggregation-compression]]
==== Compression

experimental[The `compression` parameter is specific to the current internal implementation of percentiles, and may change in the future]

Approximate algorithms must balance memory utilization with estimation accuracy.
This balance can be controlled using a `compression` parameter:

[source,js]
--------------------------------------------------
{
    "aggs" : {
        "load_time_outlier" : {
            "percentiles" : {
                "field" : "load_time",
                "compression" : 200 <1>
            }
        }
    }
}
--------------------------------------------------
<1> Compression controls memory usage and approximation error

The TDigest algorithm uses a number of "nodes" to approximate percentiles -- the
more nodes available, the higher the accuracy (and the larger the memory footprint)
proportional to the volume of data. The `compression` parameter limits the maximum
number of nodes to `20 * compression`.

Therefore, by increasing the compression value, you can increase the accuracy of
your percentiles at the cost of more memory. Larger compression values also
make the algorithm slower since the underlying tree data structure grows in size,
resulting in more expensive operations. The default compression value is
`100`.

A "node" uses roughly 32 bytes of memory, so under worst-case scenarios (large amount
of data which arrives sorted and in-order) the default settings will produce a
TDigest roughly 64KB in size. In practice data tends to be more random and
the TDigest will use less memory.
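
A quick back-of-the-envelope sketch of that worst case, using the figures above (at most
`20 * compression` nodes, roughly 32 bytes per node; both are implementation details that
may change):

[source,java]
--------------------------------------------------
public class TDigestMemoryEstimate {

    // Worst-case TDigest size using the limits described above:
    // at most 20 * compression nodes, each node roughly 32 bytes.
    static long worstCaseBytes(int compression) {
        long maxNodes = 20L * compression;
        return maxNodes * 32;
    }

    public static void main(String[] args) {
        System.out.println(worstCaseBytes(100)); // default: 2,000 nodes -> 64,000 bytes (~64KB)
        System.out.println(worstCaseBytes(200)); // 4,000 nodes -> 128,000 bytes (~128KB)
    }
}
--------------------------------------------------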