[[search-request-scroll]]
=== Scroll

While a `search` request returns a single ``page'' of results, the `scroll`
API can be used to retrieve large numbers of results (or even all results)
from a single search request, in much the same way as you would use a cursor
on a traditional database.

Scrolling is not intended for real-time user requests, but rather for
processing large amounts of data, e.g. in order to reindex the contents of one
index into a new index with a different configuration.

.Client support for scrolling and reindexing
*********************************************

Some of the officially supported clients provide helpers to assist with
scrolled searches and reindexing of documents from one index to another:

Perl::

    See https://metacpan.org/pod/Search::Elasticsearch::Client::5_0::Bulk[Search::Elasticsearch::Client::5_0::Bulk]
    and https://metacpan.org/pod/Search::Elasticsearch::Client::5_0::Scroll[Search::Elasticsearch::Client::5_0::Scroll]

Python::

    See http://elasticsearch-py.readthedocs.org/en/master/helpers.html[elasticsearch.helpers.*]

*********************************************
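
For example, a minimal sketch using the Python client's `scan` helper (the
`twitter` index, the query, and the default local connection are assumptions
for illustration):

[source,python]
--------------------------------------------------
from elasticsearch import Elasticsearch
from elasticsearch.helpers import scan

es = Elasticsearch()  # assumes a node reachable at localhost:9200

# scan() drives the scroll API for you: it issues the initial search,
# keeps fetching batches until the hits array comes back empty, and
# clears the scroll when iteration finishes.
for hit in scan(es, index="twitter",
                query={"query": {"match": {"title": "elasticsearch"}}}):
    print(hit["_id"])
--------------------------------------------------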

NOTE: The results that are returned from a scroll request reflect the state of
the index at the time that the initial `search` request was made, like a
snapshot in time. Subsequent changes to documents (index, update or delete)
will only affect later search requests.

In order to use scrolling, the initial search request should specify the
`scroll` parameter in the query string, which tells Elasticsearch how long it
should keep the ``search context'' alive (see <<scroll-search-context>>), e.g. `?scroll=1m`.

[source,js]
--------------------------------------------------
POST /twitter/_search?scroll=1m
{
    "size": 100,
    "query": {
        "match" : {
            "title" : "elasticsearch"
        }
    }
}
--------------------------------------------------
// CONSOLE
// TEST[setup:twitter]

The result from the above request includes a `_scroll_id`, which should
be passed to the `scroll` API in order to retrieve the next batch of
results.

[source,js]
--------------------------------------------------
POST <1> /_search/scroll <2>
{
    "scroll" : "1m", <3>
    "scroll_id" : "DXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAD4WYm9laVYtZndUQlNsdDcwakFMNjU1QQ==" <4>
}
--------------------------------------------------
// CONSOLE
// TEST[continued s/DXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAD4WYm9laVYtZndUQlNsdDcwakFMNjU1QQ==/$body._scroll_id/]

<1> `GET` or `POST` can be used.
<2> The URL should not include the `index` name -- this
is specified in the original `search` request instead.
<3> The `scroll` parameter tells Elasticsearch to keep the search context open
for another `1m`.
<4> The `scroll_id` parameter holds the ID returned by the previous request.

The `size` parameter allows you to configure the maximum number of hits to be
returned with each batch of results. Each call to the `scroll` API returns the
next batch of results until there are no more results left to return, i.e. the
`hits` array is empty.

IMPORTANT: The initial search request and each subsequent scroll request each
return a `_scroll_id`. While the `_scroll_id` may change between requests, it doesn't
always change -- in any case, only the most recently received `_scroll_id` should be used.
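
To make the loop concrete, here is a minimal sketch of manual scrolling with
the Python client (the index name and query are placeholders): it stops when
`hits` comes back empty and always forwards the most recently received
`_scroll_id`:

[source,python]
--------------------------------------------------
from elasticsearch import Elasticsearch

es = Elasticsearch()  # assumes a node reachable at localhost:9200

# The initial search opens the scroll and returns the first batch.
resp = es.search(index="twitter", scroll="1m",
                 body={"size": 100,
                       "query": {"match": {"title": "elasticsearch"}}})
scroll_id = resp["_scroll_id"]

while resp["hits"]["hits"]:  # an empty hits array means we are done
    for hit in resp["hits"]["hits"]:
        print(hit["_id"])
    # Always pass the most recently received _scroll_id.
    resp = es.scroll(scroll_id=scroll_id, scroll="1m")
    scroll_id = resp["_scroll_id"]

es.clear_scroll(scroll_id=scroll_id)  # free the search context early
--------------------------------------------------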

NOTE: If the request specifies aggregations, only the initial search response
will contain the aggregations results.

NOTE: Scroll requests have optimizations that make them faster when the sort
order is `_doc`. If you want to iterate over all documents regardless of the
order, this is the most efficient option:

[source,js]
--------------------------------------------------
GET /_search?scroll=1m
{
    "sort": [
        "_doc"
    ]
}
--------------------------------------------------
// CONSOLE
// TEST[setup:twitter]

[[scroll-search-context]]
==== Keeping the search context alive

The `scroll` parameter (passed to the `search` request and to every `scroll`
request) tells Elasticsearch how long it should keep the search context alive.
Its value (e.g. `1m`, see <<time-units>>) does not need to be long enough to
process all data -- it just needs to be long enough to process the previous
batch of results. Each `scroll` request (with the `scroll` parameter) sets a
new expiry time. If a `scroll` request doesn't pass in the `scroll`
parameter, then the search context will be freed as part of _that_ `scroll`
request.

Normally, the background merge process optimizes the
index by merging together smaller segments to create new bigger segments, at
which time the smaller segments are deleted. This process continues during
scrolling, but an open search context prevents the old segments from being
deleted while they are still in use. This is how Elasticsearch is able to
return the results of the initial search request, regardless of subsequent
changes to documents.

TIP: Keeping older segments alive means that more file handles are needed.
Ensure that you have configured your nodes to have ample free file handles.
See <<file-descriptors>>.

NOTE: To prevent issues caused by having too many scrolls open, the
user is not allowed to open scrolls past a certain limit. By default, the
maximum number of open scrolls is 500. This limit can be updated with the
`search.max_open_scroll_context` cluster setting.
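
If you need a higher limit, a minimal sketch of updating the setting through
the Python client (the value `1000` is only an example):

[source,python]
--------------------------------------------------
from elasticsearch import Elasticsearch

es = Elasticsearch()  # assumes a node reachable at localhost:9200

# Raise the cluster-wide cap on open scroll contexts (example value).
es.cluster.put_settings(body={
    "persistent": {"search.max_open_scroll_context": 1000}
})
--------------------------------------------------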

You can check how many search contexts are open with the
<<cluster-nodes-stats,nodes stats API>>:

[source,js]
---------------------------------------
GET /_nodes/stats/indices/search
---------------------------------------
// CONSOLE

==== Clear scroll API

Search contexts are automatically removed when the `scroll` timeout has been
exceeded. However, keeping scrolls open has a cost, as discussed in the
<<scroll-search-context,previous section>>, so scrolls should be explicitly
cleared as soon as they are no longer needed, using the
`clear-scroll` API:

[source,js]
---------------------------------------
DELETE /_search/scroll
{
    "scroll_id" : "DXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAD4WYm9laVYtZndUQlNsdDcwakFMNjU1QQ=="
}
---------------------------------------
// CONSOLE
// TEST[catch:missing]

Multiple scroll IDs can be passed as an array:

[source,js]
---------------------------------------
DELETE /_search/scroll
{
    "scroll_id" : [
        "DXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAD4WYm9laVYtZndUQlNsdDcwakFMNjU1QQ==",
        "DnF1ZXJ5VGhlbkZldGNoBQAAAAAAAAABFmtSWWRRWUJrU2o2ZExpSGJCVmQxYUEAAAAAAAAAAxZrUllkUVlCa1NqNmRMaUhiQlZkMWFBAAAAAAAAAAIWa1JZZFFZQmtTajZkTGlIYkJWZDFhQQAAAAAAAAAFFmtSWWRRWUJrU2o2ZExpSGJCVmQxYUEAAAAAAAAABBZrUllkUVlCa1NqNmRMaUhiQlZkMWFB"
    ]
}
---------------------------------------
// CONSOLE
// TEST[catch:missing]

All search contexts can be cleared with the `_all` parameter:

[source,js]
---------------------------------------
DELETE /_search/scroll/_all
---------------------------------------
// CONSOLE

The `scroll_id` can also be passed as a query string parameter or in the request body.
Multiple scroll IDs can be passed as comma-separated values:

[source,js]
---------------------------------------
DELETE /_search/scroll/DXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAD4WYm9laVYtZndUQlNsdDcwakFMNjU1QQ==,DnF1ZXJ5VGhlbkZldGNoBQAAAAAAAAABFmtSWWRRWUJrU2o2ZExpSGJCVmQxYUEAAAAAAAAAAxZrUllkUVlCa1NqNmRMaUhiQlZkMWFBAAAAAAAAAAIWa1JZZFFZQmtTajZkTGlIYkJWZDFhQQAAAAAAAAAFFmtSWWRRWUJrU2o2ZExpSGJCVmQxYUEAAAAAAAAABBZrUllkUVlCa1NqNmRMaUhiQlZkMWFB
---------------------------------------
// CONSOLE
// TEST[catch:missing]

[[sliced-scroll]]
==== Sliced Scroll

For scroll queries that return a lot of documents it is possible to split the scroll into multiple slices which
can be consumed independently:

[source,js]
--------------------------------------------------
GET /twitter/_search?scroll=1m
{
    "slice": {
        "id": 0, <1>
        "max": 2 <2>
    },
    "query": {
        "match" : {
            "title" : "elasticsearch"
        }
    }
}
GET /twitter/_search?scroll=1m
{
    "slice": {
        "id": 1,
        "max": 2
    },
    "query": {
        "match" : {
            "title" : "elasticsearch"
        }
    }
}
--------------------------------------------------
// CONSOLE
// TEST[setup:big_twitter]

<1> The id of the slice
<2> The maximum number of slices

The result from the first request returned documents that belong to the first slice (id: 0) and the result from the
second request returned documents that belong to the second slice. Since the maximum number of slices is set to 2,
the union of the results of the two requests is equivalent to the results of a scroll query without slicing.

By default the splitting is done on the shards first and then locally on each shard using the `_id` field
with the following formula:

`slice(doc) = floorMod(hashCode(doc._id), max)`

For instance, if the number of shards is equal to 2 and the user requested 4 slices, then slices 0 and 2 are assigned
to the first shard and slices 1 and 3 are assigned to the second shard.

Each scroll is independent and can be processed in parallel like any scroll request.
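
As a hedged illustration of parallel consumption (the slice count, index name,
and query are assumptions), each slice can be driven by its own `scan`
iterator in a separate worker thread:

[source,python]
--------------------------------------------------
from concurrent.futures import ThreadPoolExecutor

from elasticsearch import Elasticsearch
from elasticsearch.helpers import scan

es = Elasticsearch()  # assumes a node reachable at localhost:9200
MAX_SLICES = 2        # must match "max" in every slice of the same scroll

def consume_slice(slice_id):
    # Each worker scrolls one independent slice of the same query.
    body = {
        "slice": {"id": slice_id, "max": MAX_SLICES},
        "query": {"match": {"title": "elasticsearch"}},
    }
    return sum(1 for _ in scan(es, index="twitter", query=body))

with ThreadPoolExecutor(max_workers=MAX_SLICES) as pool:
    counts = list(pool.map(consume_slice, range(MAX_SLICES)))
print(counts)  # per-slice document counts; their sum covers the whole query
--------------------------------------------------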

NOTE: If the number of slices is bigger than the number of shards, the slice filter is very slow on the first calls:
it has a complexity of O(N) and a memory cost equal to N bits per slice, where N is the total number of documents in the shard.
After a few calls the filter should be cached and subsequent calls should be faster, but you should limit the number of
sliced queries you perform in parallel to avoid memory explosion.

To avoid this cost entirely it is possible to use the `doc_values` of another field to do the slicing,
but the user must ensure that the field has the following properties:

* The field is numeric.
* `doc_values` are enabled on that field.
* Every document should contain a single value. If a document has multiple values for the specified field, the first value is used.
* The value for each document should be set once when the document is created and never updated. This ensures that each
slice gets deterministic results.
* The cardinality of the field should be high. This ensures that each slice gets approximately the same number of documents.

[source,js]
--------------------------------------------------
GET /twitter/_search?scroll=1m
{
    "slice": {
        "field": "date",
        "id": 0,
        "max": 10
    },
    "query": {
        "match" : {
            "title" : "elasticsearch"
        }
    }
}
--------------------------------------------------
// CONSOLE
// TEST[setup:big_twitter]

For append-only time-based indices, the `timestamp` field can be used safely.

NOTE: By default the maximum number of slices allowed per scroll is limited to 1024.
You can update the `index.max_slices_per_scroll` index setting to bypass this limit.
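
If you need more slices, a minimal sketch of raising the limit with the Python
client (the index name and the value `2048` are only examples):

[source,python]
--------------------------------------------------
from elasticsearch import Elasticsearch

es = Elasticsearch()  # assumes a node reachable at localhost:9200

# Raise the per-scroll slice cap for one index (example value).
es.indices.put_settings(index="twitter",
                        body={"index.max_slices_per_scroll": 2048})
--------------------------------------------------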