
[[search-aggregations-bucket-geohashgrid-aggregation]]
=== GeoHash grid Aggregation

A multi-bucket aggregation that works on `geo_point` fields and groups points into buckets that represent cells in a grid. The resulting grid can be sparse and only contain cells that have matching data. Each cell is labelled using a http://en.wikipedia.org/wiki/Geohash[geohash], which is of user-definable precision.

* High-precision geohashes have a long string length and represent cells that cover only a small area.
* Low-precision geohashes have a short string length and represent cells that each cover a large area.

Geohashes used in this aggregation can have a precision between 1 and 12.

WARNING: The highest-precision geohash of length 12 produces cells that cover less than a square metre of land, so high-precision requests can be very costly in terms of RAM and result sizes. Please see the example below on how to first filter the aggregation to a smaller geographic area before requesting high levels of detail.

The specified field must be of type `geo_point` (which can only be set explicitly in the mappings), and it can also hold an array of `geo_point` values, in which case all points will be taken into account during aggregation.
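
As a sketch of what such a mapping might look like (the index layout and the `location` field name here are illustrative, not taken from this page, and the exact shape of the `mappings` section varies between Elasticsearch versions):

[source,js]
--------------------------------------------------
{
    "mappings" : {
        "properties" : {
            "location" : {
                "type" : "geo_point"
            }
        }
    }
}
--------------------------------------------------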
==== Simple low-precision request

[source,js]
--------------------------------------------------
{
    "aggregations" : {
        "myLarge-GrainGeoHashGrid" : {
            "geohash_grid" : {
                "field" : "location",
                "precision" : 3
            }
        }
    }
}
--------------------------------------------------
Response:

[source,js]
--------------------------------------------------
{
    "aggregations": {
        "myLarge-GrainGeoHashGrid": {
            "buckets": [
                {
                    "key": "svz",
                    "doc_count": 10964
                },
                {
                    "key": "sv8",
                    "doc_count": 3198
                }
            ]
        }
    }
}
--------------------------------------------------
==== High-precision requests

When requesting detailed buckets (typically for displaying a "zoomed in" map), a filter like <<query-dsl-geo-bounding-box-query,geo_bounding_box>> should be applied to narrow the subject area; otherwise, potentially millions of buckets will be created and returned.

[source,js]
--------------------------------------------------
{
    "aggregations" : {
        "zoomedInView" : {
            "filter" : {
                "geo_bounding_box" : {
                    "location" : {
                        "top_left" : "51.73, 0.9",
                        "bottom_right" : "51.55, 1.1"
                    }
                }
            },
            "aggregations" : {
                "zoom1" : {
                    "geohash_grid" : {
                        "field" : "location",
                        "precision" : 8
                    }
                }
            }
        }
    }
}
--------------------------------------------------
==== Cell dimensions at the equator

The table below shows the metric dimensions for cells covered by various string lengths of geohash. Cell dimensions vary with latitude, so the table is for the worst-case scenario at the equator.

[horizontal]
*GeoHash length*:: *Area width x height*
1:: 5,009.4km x 4,992.6km
2:: 1,252.3km x 624.1km
3:: 156.5km x 156km
4:: 39.1km x 19.5km
5:: 4.9km x 4.9km
6:: 1.2km x 609.4m
7:: 152.9m x 152.4m
8:: 38.2m x 19m
9:: 4.8m x 4.8m
10:: 1.2m x 59.5cm
11:: 14.9cm x 14.9cm
12:: 3.7cm x 1.9cm
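
The halving pattern in the table follows from how a geohash is constructed: each bit bisects the longitude or latitude range in turn, and every five bits become one base-32 character, so each extra character refines the parent cell. A minimal encoder sketch in Python (illustrative only, not part of Elasticsearch):

```python
# Minimal geohash encoder: alternately bisect the longitude and
# latitude ranges, then pack each run of five bits into one
# base-32 character. Encoding starts with a longitude bit.
BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"

def geohash_encode(lat, lon, length):
    lat_lo, lat_hi = -90.0, 90.0
    lon_lo, lon_hi = -180.0, 180.0
    bits = []
    is_lon = True
    while len(bits) < length * 5:
        if is_lon:
            mid = (lon_lo + lon_hi) / 2
            if lon >= mid:
                bits.append(1)
                lon_lo = mid
            else:
                bits.append(0)
                lon_hi = mid
        else:
            mid = (lat_lo + lat_hi) / 2
            if lat >= mid:
                bits.append(1)
                lat_lo = mid
            else:
                bits.append(0)
                lat_hi = mid
        is_lon = not is_lon
    return "".join(
        BASE32[int("".join(map(str, bits[i:i + 5])), 2)]
        for i in range(0, len(bits), 5)
    )

# A longer hash refines the same cell; every prefix names an enclosing cell.
print(geohash_encode(57.64911, 10.40744, 5))  # -> u4pru
```

Truncating a hash therefore moves up the table: `u4pruydqqvj` (length 11, centimetre scale) lies inside cell `u4pru` (length 5, roughly 4.9km x 4.9km).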
==== Options

[horizontal]
field:: Mandatory. The name of the field indexed with GeoPoints.

precision:: Optional. The string length of the geohashes used to define cells/buckets in the results. Defaults to 5.

size:: Optional. The maximum number of geohash buckets to return (defaults to 10,000). When results are trimmed, buckets are prioritised based on the volume of documents they contain. A value of `0` will return all buckets that contain a hit; use with caution, as this could consume a lot of CPU and network bandwidth if there are many buckets.

shard_size:: Optional. To allow for more accurate counting of the top cells returned in the final result, the aggregation defaults to returning `max(10,(size x number-of-shards))` buckets from each shard. If this heuristic is undesirable, the number considered from each shard can be overridden using this parameter. A value of `0` makes the shard size unlimited.
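
The default `shard_size` heuristic above is simple enough to sketch directly; the function name below is ours, not an Elasticsearch API:

```python
# Sketch of the documented default: unless overridden, each shard
# returns max(10, size * number_of_shards) of its top buckets.
def default_shard_size(size, number_of_shards):
    return max(10, size * number_of_shards)

# e.g. the default size of 10,000 on a 5-shard index asks each shard
# for its top 50,000 buckets
print(default_shard_size(10000, 5))  # -> 50000
```

Over-requesting per shard like this reduces the chance that a cell which is globally in the top `size` is missed because it narrowly fell outside one shard's local top list.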