[role="xpack"]
[testenv="basic"]
[[rollup-getting-started]]
== Getting Started

experimental[]

To use the Rollup feature, you need to create one or more "Rollup Jobs". These jobs run continuously in the background
and roll up the index or indices that you specify, placing the rolled documents in a secondary index (also of your choosing).

Imagine you have a series of daily indices that hold sensor data (`sensor-2017-01-01`, `sensor-2017-01-02`, etc). A sample document might
look like this:

[source,js]
--------------------------------------------------
{
  "timestamp": 1516729294000,
  "temperature": 200,
  "voltage": 5.2,
  "node": "a"
}
--------------------------------------------------
// NOTCONSOLE

[float]
=== Creating a Rollup Job

We'd like to roll up these documents into hourly summaries, which will allow us to generate reports and dashboards with any time interval
of one hour or greater. A rollup job might look like this:

[source,js]
--------------------------------------------------
PUT _xpack/rollup/job/sensor
{
  "index_pattern": "sensor-*",
  "rollup_index": "sensor_rollup",
  "cron": "*/30 * * * * ?",
  "page_size": 1000,
  "groups": {
    "date_histogram": {
      "field": "timestamp",
      "interval": "60m"
    },
    "terms": {
      "fields": ["node"]
    }
  },
  "metrics": [
    {
      "field": "temperature",
      "metrics": ["min", "max", "sum"]
    },
    {
      "field": "voltage",
      "metrics": ["avg"]
    }
  ]
}
--------------------------------------------------
// CONSOLE
// TEST[setup:sensor_index]

We give the job the ID of "sensor" (in the url: `PUT _xpack/rollup/job/sensor`), and tell it to roll up the index pattern `"sensor-*"`.
This job will find and roll up any index that matches that pattern. Rollup summaries are then stored in the `sensor_rollup` index.

The `cron` parameter controls when and how often the job activates. When a rollup job's cron schedule triggers, it will begin rolling up
from where it left off after the last activation. So if you configure the cron to run every 30 seconds, the job will process the last 30
seconds' worth of data that was indexed into the `sensor-*` indices.

If instead the cron was configured to run once a day at midnight, the job would process the last 24 hours' worth of data. The choice is
largely a matter of preference, based on how "realtime" you want the rollups, and whether you wish to process continuously or move the
work to off-peak hours.
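
For example, a six-field, Quartz-style cron expression (the format used above, where `?` means "no specific value") that triggers once a
day at midnight might look like this:

[source,js]
--------------------------------------------------
"cron": "0 0 0 * * ?"
--------------------------------------------------
// NOTCONSOLE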

Next, we define a set of `groups` and `metrics`. The metrics are fairly straightforward: we want to save the min/max/sum of the `temperature`
field, and the average of the `voltage` field.

The groups are a little more interesting. Essentially, we are defining the dimensions that we wish to pivot on at a later date when
querying the data. The grouping in this job allows us to use `date_histogram` aggregations on the `timestamp` field, rolled up at hourly intervals.
It also allows us to run `terms` aggregations on the `node` field.

.Date histogram interval vs cron schedule
**********************************
You'll note that the job's cron is configured to run every 30 seconds, but the `date_histogram` is configured to
roll up at 60 minute intervals. How do these relate?

The `date_histogram` controls the granularity of the saved data. Data will be rolled up into hourly intervals, and you will be unable
to query with finer granularity. The cron simply controls when the process looks for new data to roll up. Every 30 seconds it will check
whether there is a new hour's worth of data and roll it up. If not, the job goes back to sleep.

Often it doesn't make sense to define such a small cron (30s) on a large interval (1h), because the majority of the activations will
simply go back to sleep. But there's nothing wrong with it either; the job will do the right thing.
**********************************

For more details about the job syntax, see <<rollup-job-config>>.

After you execute the above command and create the job, you'll receive the following response:

[source,js]
----
{
  "acknowledged": true
}
----
// TESTRESPONSE

[float]
=== Starting the job

After the job is created, it will be sitting in an inactive state. Jobs need to be started before they begin processing data (this allows
you to stop them later as a way to temporarily pause, without deleting the configuration).

To start the job, execute this command:

[source,js]
--------------------------------------------------
POST _xpack/rollup/job/sensor/_start
--------------------------------------------------
// CONSOLE
// TEST[setup:sensor_rollup_job]
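
If you later want to pause the job without deleting its configuration, there is a corresponding stop endpoint. A sketch, assuming the
stop API mirrors the start call:

[source,js]
--------------------------------------------------
POST _xpack/rollup/job/sensor/_stop
--------------------------------------------------
// NOTCONSOLE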

[float]
=== Searching the Rolled results

After the job has run and processed some data, we can use the <<rollup-search>> endpoint to do some searching. The Rollup feature is designed
so that you can use the same Query DSL syntax that you are accustomed to... it just happens to run on the rolled up data instead.

For example, take this query:

[source,js]
--------------------------------------------------
GET /sensor_rollup/_rollup_search
{
  "size": 0,
  "aggregations": {
    "max_temperature": {
      "max": {
        "field": "temperature"
      }
    }
  }
}
--------------------------------------------------
// CONSOLE
// TEST[setup:sensor_prefab_data]

It's a simple aggregation that calculates the maximum of the `temperature` field. But you'll notice that it is being sent to the `sensor_rollup`
index instead of the raw `sensor-*` indices. And you'll also notice that it is using the `_rollup_search` endpoint. Otherwise the syntax
is exactly as you'd expect.

If you were to execute that query, you'd receive a result that looks like a normal aggregation response:

[source,js]
----
{
  "took" : 102,
  "timed_out" : false,
  "terminated_early" : false,
  "_shards" : ... ,
  "hits" : {
    "total" : 0,
    "max_score" : 0.0,
    "hits" : [ ]
  },
  "aggregations" : {
    "max_temperature" : {
      "value" : 202.0
    }
  }
}
----
// TESTRESPONSE[s/"took" : 102/"took" : $body.$_path/]
// TESTRESPONSE[s/"_shards" : \.\.\. /"_shards" : $body.$_path/]

The only notable difference is that Rollup search results have zero `hits`, because we aren't really searching the original, live data any
more. Otherwise the syntax is identical.

There are a few interesting takeaways here. First, even though the data was rolled up with hourly intervals and partitioned by
node name, the query we ran just calculates the max temperature across all documents. The `groups` that were configured in the job
are not mandatory elements of a query; they are just extra dimensions you can partition on. Second, the request and response syntax
is nearly identical to normal Query DSL, making it easy to integrate into dashboards and applications.

Finally, we can use the grouping fields we defined to construct a more complicated query:

[source,js]
--------------------------------------------------
GET /sensor_rollup/_rollup_search
{
  "size": 0,
  "aggregations": {
    "timeline": {
      "date_histogram": {
        "field": "timestamp",
        "interval": "7d"
      },
      "aggs": {
        "nodes": {
          "terms": {
            "field": "node"
          },
          "aggs": {
            "max_temperature": {
              "max": {
                "field": "temperature"
              }
            },
            "avg_voltage": {
              "avg": {
                "field": "voltage"
              }
            }
          }
        }
      }
    }
  }
}
--------------------------------------------------
// CONSOLE
// TEST[setup:sensor_prefab_data]

Which returns a corresponding response:

[source,js]
----
{
  "took" : 93,
  "timed_out" : false,
  "terminated_early" : false,
  "_shards" : ... ,
  "hits" : {
    "total" : 0,
    "max_score" : 0.0,
    "hits" : [ ]
  },
  "aggregations" : {
    "timeline" : {
      "meta" : { },
      "buckets" : [
        {
          "key_as_string" : "2018-01-18T00:00:00.000Z",
          "key" : 1516233600000,
          "doc_count" : 6,
          "nodes" : {
            "doc_count_error_upper_bound" : 0,
            "sum_other_doc_count" : 0,
            "buckets" : [
              {
                "key" : "a",
                "doc_count" : 2,
                "max_temperature" : {
                  "value" : 202.0
                },
                "avg_voltage" : {
                  "value" : 5.1499998569488525
                }
              },
              {
                "key" : "b",
                "doc_count" : 2,
                "max_temperature" : {
                  "value" : 201.0
                },
                "avg_voltage" : {
                  "value" : 5.700000047683716
                }
              },
              {
                "key" : "c",
                "doc_count" : 2,
                "max_temperature" : {
                  "value" : 202.0
                },
                "avg_voltage" : {
                  "value" : 4.099999904632568
                }
              }
            ]
          }
        }
      ]
    }
  }
}
----
// TESTRESPONSE[s/"took" : 93/"took" : $body.$_path/]
// TESTRESPONSE[s/"_shards" : \.\.\. /"_shards" : $body.$_path/]

In addition to being more complicated (a date histogram and a terms aggregation, plus an additional average metric), you'll notice
that the `date_histogram` uses a `7d` interval instead of `60m`.

[float]
=== Conclusion

This quickstart should have provided a concise overview of the core functionality that Rollup exposes. There are more tips and things
to consider when setting up Rollups, which you can find throughout the rest of this section. You may also explore the <<rollup-api-quickref,REST API>>
for an overview of what is available.