[[search-aggregations-metrics-percentile-rank-aggregation]]
=== Percentile Ranks Aggregation

A `multi-value` metrics aggregation that calculates one or more percentile ranks
over numeric values extracted from the aggregated documents. These values can be
generated by a provided script or extracted from specific numeric or
<<histogram,histogram fields>> in the documents.

[NOTE]
==================================================
Please see <<search-aggregations-metrics-percentile-aggregation-approximation>>
and <<search-aggregations-metrics-percentile-aggregation-compression>> for advice
regarding approximation and memory use of the percentile ranks aggregation.
==================================================

Percentile ranks show the percentage of observed values which are below a certain
value. For example, if a value is greater than or equal to 95% of the observed values
it is said to be at the 95th percentile rank.
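The definition can be illustrated with a short exact computation. This is only a sketch of the definition above, not the approximate t-digest algorithm Elasticsearch actually uses, and the `load_times` sample data is made up:

```python
def percentile_rank(values, x):
    """Exact percentile rank: the percentage of observed values <= x."""
    below = sum(1 for v in values if v <= x)
    return 100.0 * below / len(values)

# Hypothetical page-load times in milliseconds.
load_times = [210, 350, 400, 480, 520, 550, 610, 640, 700, 900]
print(percentile_rank(load_times, 500))  # -> 40.0
print(percentile_rank(load_times, 600))  # -> 60.0
```

On real data the aggregation returns an approximation of this quantity, which is why responses can contain values like `55.00000000000001`.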
Assume your data consists of website load times. You may have a service agreement that
95% of page loads complete within 500ms and 99% of page loads complete within 600ms.

Let's look at a range of percentiles representing load time:

[source,console]
--------------------------------------------------
GET latency/_search
{
  "size": 0,
  "aggs": {
    "load_time_ranks": {
      "percentile_ranks": {
        "field": "load_time", <1>
        "values": [ 500, 600 ]
      }
    }
  }
}
--------------------------------------------------
// TEST[setup:latency]
<1> The field `load_time` must be a numeric field.
The response will look like this:

[source,console-result]
--------------------------------------------------
{
  ...
  "aggregations": {
    "load_time_ranks": {
      "values": {
        "500.0": 55.00000000000001,
        "600.0": 64.0
      }
    }
  }
}
--------------------------------------------------
// TESTRESPONSE[s/\.\.\./"took": $body.took,"timed_out": false,"_shards": $body._shards,"hits": $body.hits,/]

From this information you can determine you are hitting the 99% load time target but not quite
hitting the 95% load time target.
==== Keyed Response

By default the `keyed` flag is set to `true`, which associates a unique string key with each
bucket and returns the ranges as a hash rather than an array. Setting the `keyed` flag to
`false` will disable this behavior:

[source,console]
--------------------------------------------------
GET latency/_search
{
  "size": 0,
  "aggs": {
    "load_time_ranks": {
      "percentile_ranks": {
        "field": "load_time",
        "values": [ 500, 600 ],
        "keyed": false
      }
    }
  }
}
--------------------------------------------------
// TEST[setup:latency]
Response:

[source,console-result]
--------------------------------------------------
{
  ...
  "aggregations": {
    "load_time_ranks": {
      "values": [
        {
          "key": 500.0,
          "value": 55.00000000000001
        },
        {
          "key": 600.0,
          "value": 64.0
        }
      ]
    }
  }
}
--------------------------------------------------
// TESTRESPONSE[s/\.\.\./"took": $body.took,"timed_out": false,"_shards": $body._shards,"hits": $body.hits,/]
==== Script

The percentile rank metric supports scripting. For example, if our load times
are in milliseconds but we want to specify values in seconds, we could use
a script to convert them on-the-fly:

[source,console]
--------------------------------------------------
GET latency/_search
{
  "size": 0,
  "aggs": {
    "load_time_ranks": {
      "percentile_ranks": {
        "values": [ 500, 600 ],
        "script": {
          "lang": "painless",
          "source": "doc['load_time'].value / params.timeUnit", <1>
          "params": {
            "timeUnit": 1000 <2>
          }
        }
      }
    }
  }
}
--------------------------------------------------
// TEST[setup:latency]
<1> The `field` parameter is replaced with a `script` parameter, which uses the
script to generate the values on which the percentile ranks are calculated.
<2> Scripting supports parameterized input just like any other script.

This will interpret the `script` parameter as an `inline` script with the `painless` script
language and no script parameters. To use a stored script use the following syntax:
[source,console]
--------------------------------------------------
GET latency/_search
{
  "size": 0,
  "aggs": {
    "load_time_ranks": {
      "percentile_ranks": {
        "values": [ 500, 600 ],
        "script": {
          "id": "my_script",
          "params": {
            "field": "load_time"
          }
        }
      }
    }
  }
}
--------------------------------------------------
// TEST[setup:latency,stored_example_script]
==== HDR Histogram

NOTE: This setting exposes the internal implementation of HDR Histogram and the syntax may change in the future.

https://github.com/HdrHistogram/HdrHistogram[HDR Histogram] (High Dynamic Range Histogram) is an alternative implementation
that can be useful when calculating percentile ranks for latency measurements, as it can be faster than the t-digest implementation,
with the trade-off of a larger memory footprint. This implementation maintains a fixed worst-case percentage error (specified as a
number of significant digits). This means that if data is recorded with values from 1 microsecond up to 1 hour (3,600,000,000
microseconds) in a histogram set to 3 significant digits, it will maintain a value resolution of 1 microsecond for values up to
1 millisecond and 3.6 seconds (or better) for the maximum tracked value (1 hour).
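The resolution figures above can be sanity-checked with a little arithmetic: with `n` significant digits, the worst-case bucket width at a value `v` is roughly `v / 10^n`. This is an illustrative approximation of the guarantee, not HdrHistogram's exact power-of-two bucketing:

```python
def hdr_resolution(value, significant_digits=3):
    """Approximate worst-case value resolution at `value` for a histogram
    configured with `significant_digits` significant digits."""
    return value / 10 ** significant_digits

# Values recorded in microseconds, 3 significant digits:
print(hdr_resolution(1_000))          # -> 1.0 (1 microsecond at 1 millisecond)
print(hdr_resolution(3_600_000_000))  # -> 3600000.0 (3.6 seconds at 1 hour)
```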
The HDR Histogram can be used by specifying the `hdr` parameter in the request:

[source,console]
--------------------------------------------------
GET latency/_search
{
  "size": 0,
  "aggs": {
    "load_time_ranks": {
      "percentile_ranks": {
        "field": "load_time",
        "values": [ 500, 600 ],
        "hdr": { <1>
          "number_of_significant_value_digits": 3 <2>
        }
      }
    }
  }
}
--------------------------------------------------
// TEST[setup:latency]
<1> The `hdr` object indicates that HDR Histogram should be used to calculate the percentiles, and specific settings for this algorithm can be specified inside the object.
<2> `number_of_significant_value_digits` specifies the resolution of values for the histogram in number of significant digits.

The HDRHistogram only supports positive values and will error if it is passed a negative value. It is also not a good idea to use
the HDRHistogram if the range of values is unknown, as this could lead to high memory usage.
==== Missing value

The `missing` parameter defines how documents that are missing a value should be treated.
By default they will be ignored, but it is also possible to treat them as if they
had a value.

[source,console]
--------------------------------------------------
GET latency/_search
{
  "size": 0,
  "aggs": {
    "load_time_ranks": {
      "percentile_ranks": {
        "field": "load_time",
        "values": [ 500, 600 ],
        "missing": 10 <1>
      }
    }
  }
}
--------------------------------------------------
// TEST[setup:latency]
<1> Documents without a value in the `load_time` field will fall into the same bucket as documents that have the value `10`.
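The effect of `missing` can be mimicked outside Elasticsearch by substituting the default before ranking. This sketch uses made-up data, with `None` standing in for a document that has no `load_time` value, and an exact-rank helper rather than the approximate algorithm:

```python
def percentile_rank(values, x):
    """Exact percentile rank: the percentage of observed values <= x."""
    return 100.0 * sum(1 for v in values if v <= x) / len(values)

load_times = [320, 450, None, 480, 520, None, 610]
# Equivalent of "missing": 10 -- absent values are treated as 10.
filled = [v if v is not None else 10 for v in load_times]

print(round(percentile_rank(filled, 500), 2))  # -> 71.43
```

Without the substitution the two `None` documents would simply be ignored, so the same query would rank `500` against five values instead of seven.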