[[search-aggregations-bucket-datehistogram-aggregation]]
=== Date Histogram Aggregation

This multi-bucket aggregation is similar to the normal
<<search-aggregations-bucket-histogram-aggregation,histogram>>, but it can
only be used with date values. Because dates are represented internally in
Elasticsearch as long values, it is possible, but not as accurate, to use the
normal `histogram` on dates as well. The main difference in the two APIs is
that here the interval can be specified using date/time expressions. Time-based
data requires special support because time-based intervals are not always a
fixed length.

==== Setting intervals

There seems to be no limit to the creativity we humans apply to setting our
clocks and calendars. We've invented leap years and leap seconds, standard and
daylight savings times, and timezone offsets of 30 or 45 minutes rather than a
full hour. While these creations help keep us in sync with the cosmos and our
environment, they can make specifying time intervals accurately a real challenge.
The only universal truth our researchers have yet to disprove is that a
millisecond is always the same duration, and a second is always 1000 milliseconds.
Beyond that, things get complicated.

Generally speaking, when you specify a single time unit, such as 1 hour or 1 day, you
are working with a _calendar interval_, but multiples, such as 6 hours or 3 days, are
_fixed-length intervals_.

For example, a specification of 1 day (1d) from now is a calendar interval that
means "at this exact time tomorrow" no matter the length of the day. A change to or from
daylight savings time that results in a 23 or 25 hour day is compensated for and the
specification of "this exact time tomorrow" is maintained. But if you specify 2 or
more days, each day must be of the same fixed duration (24 hours). In this case, if
the specified interval includes the change to or from daylight savings time, the
interval will end an hour sooner or later than you expect.

There are similar differences to consider when you specify single versus multiple
minutes or hours. Multiple time periods longer than a day are not supported.
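
For instance, a request along these lines (a minimal sketch that borrows the `sales`
example index used later on this page; the aggregation name is arbitrary) buckets
documents into fixed two-day intervals, while changing `2d` to `1d` would switch to a
calendar interval that follows the local start of each day:

[source,js]
--------------------------------------------------
POST /sales/_search?size=0
{
    "aggs" : {
        "by_two_days" : {
            "date_histogram" : {
                "field" : "date",
                "interval" : "2d"
            }
        }
    }
}
--------------------------------------------------
// CONSOLE
// TEST[setup:sales]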

Here are the valid time specifications and their meanings:

milliseconds (ms) ::
Fixed length interval; supports multiples.

seconds (s) ::
1000 milliseconds; fixed length interval (except for the last second of a
minute that contains a leap-second, which is 2000ms long); supports multiples.

minutes (m) ::
All minutes begin at 00 seconds.

* One minute (1m) is the interval between 00 seconds of the first minute and 00
seconds of the following minute in the specified timezone, compensating for any
intervening leap seconds, so that the number of minutes and seconds past the
hour is the same at the start and end.
* Multiple minutes (__n__m) are intervals of exactly 60x1000=60,000 milliseconds
each.

hours (h) ::
All hours begin at 00 minutes and 00 seconds.

* One hour (1h) is the interval between 00:00 minutes of the first hour and 00:00
minutes of the following hour in the specified timezone, compensating for any
intervening leap seconds, so that the number of minutes and seconds past the hour
is the same at the start and end.
* Multiple hours (__n__h) are intervals of exactly 60x60x1000=3,600,000 milliseconds
each.

days (d) ::
All days begin at the earliest possible time, which is usually 00:00:00
(midnight).

* One day (1d) is the interval between the start of the day and the start of
the following day in the specified timezone, compensating for any intervening
time changes.
* Multiple days (__n__d) are intervals of exactly 24x60x60x1000=86,400,000
milliseconds each.

weeks (w) ::

* One week (1w) is the interval between the start day_of_week:hour:minute:second
and the same day of the week and time of the following week in the specified
timezone.
* Multiple weeks (__n__w) are not supported.

months (M) ::

* One month (1M) is the interval between the start day of the month and time of
day and the same day of the month and time of the following month in the specified
timezone, so that the day of the month and time of day are the same at the start
and end.
* Multiple months (__n__M) are not supported.

quarters (q) ::

* One quarter (1q) is the interval between the start day of the month and
time of day and the same day of the month and time of day three months later,
so that the day of the month and time of day are the same at the start and end. +
* Multiple quarters (__n__q) are not supported.

years (y) ::

* One year (1y) is the interval between the start day of the month and time of
day and the same day of the month and time of day the following year in the
specified timezone, so that the date and time are the same at the start and end. +
* Multiple years (__n__y) are not supported.

NOTE: In all cases, when the specified end time does not exist, the actual end
time is the closest available time after the specified end.

Widely distributed applications must also consider vagaries such as countries that
start and stop daylight savings time at 12:01 A.M., so end up with one minute of
Sunday followed by an additional 59 minutes of Saturday once a year, and countries
that decide to move across the international date line. Situations like
that can make irregular timezone offsets seem easy.

As always, rigorous testing, especially around time-change events, will ensure
that your time interval specification is what you intend it to be.

WARNING: To avoid unexpected results, all connected servers and clients must
sync to a reliable network time service.

==== Examples

Requesting bucket intervals of a month.

[source,js]
--------------------------------------------------
POST /sales/_search?size=0
{
    "aggs" : {
        "sales_over_time" : {
            "date_histogram" : {
                "field" : "date",
                "interval" : "month"
            }
        }
    }
}
--------------------------------------------------
// CONSOLE
// TEST[setup:sales]

You can also specify time values using abbreviations supported by
<<time-units,time units>> parsing.
Note that fractional time values are not supported, but you can address this by
shifting to another time unit (e.g., `1.5h` could instead be specified as `90m`).

[source,js]
--------------------------------------------------
POST /sales/_search?size=0
{
    "aggs" : {
        "sales_over_time" : {
            "date_histogram" : {
                "field" : "date",
                "interval" : "90m"
            }
        }
    }
}
--------------------------------------------------
// CONSOLE
// TEST[setup:sales]

===== Keys

Internally, a date is represented as a 64 bit number representing a timestamp
in milliseconds-since-the-epoch (01/01/1970 midnight UTC). These timestamps are
returned as the ++key++ name of the bucket. The `key_as_string` is the same
timestamp converted to a formatted date string using the `format` parameter
specification:

TIP: If you don't specify `format`, the first date
<<mapping-date-format,format>> specified in the field mapping is used.

[source,js]
--------------------------------------------------
POST /sales/_search?size=0
{
    "aggs" : {
        "sales_over_time" : {
            "date_histogram" : {
                "field" : "date",
                "interval" : "1M",
                "format" : "yyyy-MM-dd" <1>
            }
        }
    }
}
--------------------------------------------------
// CONSOLE
// TEST[setup:sales]

<1> Supports expressive date <<date-format-pattern,format pattern>>

Response:

[source,js]
--------------------------------------------------
{
    ...
    "aggregations": {
        "sales_over_time": {
            "buckets": [
                {
                    "key_as_string": "2015-01-01",
                    "key": 1420070400000,
                    "doc_count": 3
                },
                {
                    "key_as_string": "2015-02-01",
                    "key": 1422748800000,
                    "doc_count": 2
                },
                {
                    "key_as_string": "2015-03-01",
                    "key": 1425168000000,
                    "doc_count": 2
                }
            ]
        }
    }
}
--------------------------------------------------
// TESTRESPONSE[s/\.\.\./"took": $body.took,"timed_out": false,"_shards": $body._shards,"hits": $body.hits,/]

===== Timezone

Date-times are stored in Elasticsearch in UTC. By default, all bucketing and
rounding is also done in UTC. Use the `time_zone` parameter to indicate
that bucketing should use a different timezone.

You can specify timezones as either an ISO 8601 UTC offset (e.g. `+01:00` or
`-08:00`) or as a timezone ID as specified in the IANA timezone database,
such as `America/Los_Angeles`.

Consider the following example:

[source,js]
---------------------------------
PUT my_index/_doc/1?refresh
{
  "date": "2015-10-01T00:30:00Z"
}

PUT my_index/_doc/2?refresh
{
  "date": "2015-10-01T01:30:00Z"
}

GET my_index/_search?size=0
{
  "aggs": {
    "by_day": {
      "date_histogram": {
        "field": "date",
        "interval": "day"
      }
    }
  }
}
---------------------------------
// CONSOLE

If you don't specify a timezone, UTC is used. This would result in both of these
documents being placed into the same day bucket, which starts at midnight UTC
on 1 October 2015:

[source,js]
---------------------------------
{
  ...
  "aggregations": {
    "by_day": {
      "buckets": [
        {
          "key_as_string": "2015-10-01T00:00:00.000Z",
          "key": 1443657600000,
          "doc_count": 2
        }
      ]
    }
  }
}
---------------------------------
// TESTRESPONSE[s/\.\.\./"took": $body.took,"timed_out": false,"_shards": $body._shards,"hits": $body.hits,/]

If you specify a `time_zone` of `-01:00`, midnight in that timezone is one hour
before midnight UTC:

[source,js]
---------------------------------
GET my_index/_search?size=0
{
  "aggs": {
    "by_day": {
      "date_histogram": {
        "field": "date",
        "interval": "day",
        "time_zone": "-01:00"
      }
    }
  }
}
---------------------------------
// CONSOLE
// TEST[continued]

Now the first document falls into the bucket for 30 September 2015, while the
second document falls into the bucket for 1 October 2015:

[source,js]
---------------------------------
{
  ...
  "aggregations": {
    "by_day": {
      "buckets": [
        {
          "key_as_string": "2015-09-30T00:00:00.000-01:00", <1>
          "key": 1443574800000,
          "doc_count": 1
        },
        {
          "key_as_string": "2015-10-01T00:00:00.000-01:00", <1>
          "key": 1443661200000,
          "doc_count": 1
        }
      ]
    }
  }
}
---------------------------------
// TESTRESPONSE[s/\.\.\./"took": $body.took,"timed_out": false,"_shards": $body._shards,"hits": $body.hits,/]

<1> The `key_as_string` value represents midnight on each day
in the specified timezone.

WARNING: When using time zones that follow DST (daylight savings time) changes,
buckets close to the moment when those changes happen can have slightly different
sizes than you would expect from the used `interval`.
For example, consider a DST start in the `CET` time zone: on 27 March 2016 at 2am,
clocks were turned forward 1 hour to 3am local time. If you use `day` as `interval`,
the bucket covering that day will only hold data for 23 hours instead of the usual
24 hours for other buckets. The same is true for shorter intervals, like 12h,
where you'll have only an 11h bucket on the morning of 27 March when the DST shift
happens.

===== Offset

Use the `offset` parameter to change the start value of each bucket by the
specified positive (`+`) or negative offset (`-`) duration, such as `1h` for
an hour, or `1d` for a day. See <<time-units>> for more possible time
duration options.

For example, when using an interval of `day`, each bucket runs from midnight
to midnight. Setting the `offset` parameter to `+6h` changes each bucket
to run from 6am to 6am:

[source,js]
-----------------------------
PUT my_index/_doc/1?refresh
{
  "date": "2015-10-01T05:30:00Z"
}

PUT my_index/_doc/2?refresh
{
  "date": "2015-10-01T06:30:00Z"
}

GET my_index/_search?size=0
{
  "aggs": {
    "by_day": {
      "date_histogram": {
        "field": "date",
        "interval": "day",
        "offset": "+6h"
      }
    }
  }
}
-----------------------------
// CONSOLE

Instead of a single bucket starting at midnight, the above request groups the
documents into buckets starting at 6am:

[source,js]
-----------------------------
{
  ...
  "aggregations": {
    "by_day": {
      "buckets": [
        {
          "key_as_string": "2015-09-30T06:00:00.000Z",
          "key": 1443592800000,
          "doc_count": 1
        },
        {
          "key_as_string": "2015-10-01T06:00:00.000Z",
          "key": 1443679200000,
          "doc_count": 1
        }
      ]
    }
  }
}
-----------------------------
// TESTRESPONSE[s/\.\.\./"took": $body.took,"timed_out": false,"_shards": $body._shards,"hits": $body.hits,/]

NOTE: The start `offset` of each bucket is calculated after `time_zone`
adjustments have been made.
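
As a sketch of how those two settings interact (reusing the `my_index` documents
indexed above; the particular zone and offset are illustrative), the following request
first shifts bucketing into the `-01:00` timezone and then applies the `+6h` offset,
so each bucket runs from 06:00 to 06:00 in that timezone:

[source,js]
-----------------------------
GET my_index/_search?size=0
{
  "aggs": {
    "by_day": {
      "date_histogram": {
        "field": "date",
        "interval": "day",
        "offset": "+6h",
        "time_zone": "-01:00"
      }
    }
  }
}
-----------------------------
// CONSOLE
// TEST[continued]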

===== Keyed Response

Setting the `keyed` flag to `true` associates a unique string key with each
bucket and returns the ranges as a hash rather than an array:

[source,js]
--------------------------------------------------
POST /sales/_search?size=0
{
    "aggs" : {
        "sales_over_time" : {
            "date_histogram" : {
                "field" : "date",
                "interval" : "1M",
                "format" : "yyyy-MM-dd",
                "keyed": true
            }
        }
    }
}
--------------------------------------------------
// CONSOLE
// TEST[setup:sales]

Response:

[source,js]
--------------------------------------------------
{
    ...
    "aggregations": {
        "sales_over_time": {
            "buckets": {
                "2015-01-01": {
                    "key_as_string": "2015-01-01",
                    "key": 1420070400000,
                    "doc_count": 3
                },
                "2015-02-01": {
                    "key_as_string": "2015-02-01",
                    "key": 1422748800000,
                    "doc_count": 2
                },
                "2015-03-01": {
                    "key_as_string": "2015-03-01",
                    "key": 1425168000000,
                    "doc_count": 2
                }
            }
        }
    }
}
--------------------------------------------------
// TESTRESPONSE[s/\.\.\./"took": $body.took,"timed_out": false,"_shards": $body._shards,"hits": $body.hits,/]

===== Scripts

As with the normal <<search-aggregations-bucket-histogram-aggregation,histogram>>,
both document-level scripts and value-level scripts are supported. You can control
the order of the returned buckets using the `order` settings and filter the returned
buckets based on a `min_doc_count` setting (by default all buckets between the first
bucket that matches documents and the last one are returned). This histogram also
supports the `extended_bounds` setting, which enables extending the bounds of the
histogram beyond the data itself. For more information, see
<<search-aggregations-bucket-histogram-aggregation-extended-bounds,`Extended Bounds`>>.
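
For example, a request along these lines (a minimal sketch against the `sales` example
index; the bounds dates and the `min_doc_count` of 0 are illustrative choices) keeps
empty monthly buckets and extends the histogram across all of 2015, even where no
documents fall:

[source,js]
--------------------------------------------------
POST /sales/_search?size=0
{
    "aggs" : {
        "sales_over_time" : {
            "date_histogram" : {
                "field" : "date",
                "interval" : "1M",
                "format" : "yyyy-MM-dd",
                "min_doc_count" : 0,
                "extended_bounds" : {
                    "min" : "2015-01-01",
                    "max" : "2015-12-31"
                }
            }
        }
    }
}
--------------------------------------------------
// CONSOLE
// TEST[setup:sales]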

===== Missing value

The `missing` parameter defines how to treat documents that are missing a value.
By default, they are ignored, but it is also possible to treat them as if they
have a value.

[source,js]
--------------------------------------------------
POST /sales/_search?size=0
{
    "aggs" : {
        "sale_date" : {
            "date_histogram" : {
                "field" : "date",
                "interval": "year",
                "missing": "2000/01/01" <1>
            }
        }
    }
}
--------------------------------------------------
// CONSOLE
// TEST[setup:sales]

<1> Documents without a value in the `date` field will fall into the
same bucket as documents that have the value `2000-01-01`.

===== Order

By default the returned buckets are sorted by their `key` ascending, but you can
control the order using the `order` setting. This setting supports the same `order`
functionality as
<<search-aggregations-bucket-terms-aggregation-order,`Terms Aggregation`>>.

deprecated[6.0.0, Use `_key` instead of `_time` to order buckets by their dates/keys]
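
For example, a request like the following sketch (against the `sales` example index)
returns the monthly buckets most recent first by ordering on the bucket key in
descending order:

[source,js]
--------------------------------------------------
POST /sales/_search?size=0
{
    "aggs" : {
        "sales_over_time" : {
            "date_histogram" : {
                "field" : "date",
                "interval" : "1M",
                "order" : { "_key" : "desc" }
            }
        }
    }
}
--------------------------------------------------
// CONSOLE
// TEST[setup:sales]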

===== Using a script to aggregate by day of the week

When you need to aggregate the results by day of the week, use a script that
returns the day of the week:

[source,js]
--------------------------------------------------
POST /sales/_search?size=0
{
    "aggs": {
        "dayOfWeek": {
            "terms": {
                "script": {
                    "lang": "painless",
                    "source": "doc['date'].value.dayOfWeekEnum.value"
                }
            }
        }
    }
}
--------------------------------------------------
// CONSOLE
// TEST[setup:sales]

Response:

[source,js]
--------------------------------------------------
{
    ...
    "aggregations": {
        "dayOfWeek": {
            "doc_count_error_upper_bound": 0,
            "sum_other_doc_count": 0,
            "buckets": [
                {
                    "key": "7",
                    "doc_count": 4
                },
                {
                    "key": "4",
                    "doc_count": 3
                }
            ]
        }
    }
}
--------------------------------------------------
// TESTRESPONSE[s/\.\.\./"took": $body.took,"timed_out": false,"_shards": $body._shards,"hits": $body.hits,/]

The response will contain all the buckets having the relative day of
the week as key: 1 for Monday, 2 for Tuesday... 7 for Sunday.