[role="xpack"]
[testenv="basic"]
[[sql-rest]]
== SQL REST API

* <<sql-rest-overview>>
* <<sql-rest-format>>
* <<sql-pagination>>
* <<sql-rest-filtering>>
* <<sql-rest-columnar>>
* <<sql-rest-params>>
* <<sql-runtime-fields>>
* <<sql-async>>

[[sql-rest-overview]]
=== Overview

The <<sql-search-api,SQL search API>> accepts SQL in a JSON document, executes
it, and returns the results. For example:

[source,console]
--------------------------------------------------
POST /_sql?format=txt
{
  "query": "SELECT * FROM library ORDER BY page_count DESC LIMIT 5"
}
--------------------------------------------------
// TEST[setup:library]

Which returns:

[source,text]
--------------------------------------------------
     author      |        name        |  page_count   |      release_date
-----------------+--------------------+---------------+------------------------
Peter F. Hamilton|Pandora's Star      |768            |2004-03-02T00:00:00.000Z
Vernor Vinge     |A Fire Upon the Deep|613            |1992-06-01T00:00:00.000Z
Frank Herbert    |Dune                |604            |1965-06-01T00:00:00.000Z
Alastair Reynolds|Revelation Space    |585            |2000-03-15T00:00:00.000Z
James S.A. Corey |Leviathan Wakes     |561            |2011-06-02T00:00:00.000Z
--------------------------------------------------
// TESTRESPONSE[s/\|/\\|/ s/\+/\\+/]
// TESTRESPONSE[non_json]

[[sql-kibana-console]]
[TIP]
.Using Kibana Console
====
If you are using {kibana-ref}/console-kibana.html[Kibana Console]
(which is highly recommended), take advantage of the
triple quotes `"""` when creating the query. This not only automatically escapes double
quotes (`"`) inside the query string but also supports multi-line queries, as shown below:

image:images/sql/rest/console-triple-quotes.png[]
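
For example, the overview query above can be written across several lines
between the triple quotes (an illustrative snippet using the same `library`
index as the other examples):

[source,console]
--------------------------------------------------
POST /_sql?format=txt
{
  "query": """
    SELECT * FROM library
    ORDER BY page_count DESC
    LIMIT 5
  """
}
--------------------------------------------------
// TEST[setup:library]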
====

[[sql-rest-format]]
=== Response Data Formats

While the textual format is nice for humans, computers prefer something
more structured.

{es-sql} can return the data in the following formats, which can be set
either through the `format` URL query parameter or through the `Accept` HTTP
header (an example using the header follows the table):

NOTE: The URL parameter takes precedence over the `Accept` HTTP header.
If neither is specified then the response is returned in the same format as the request.
[cols="^m,^4m,^8"]
|===
s|format
s|`Accept` HTTP header
s|Description

3+h| Human Readable

|csv
|text/csv
|{wikipedia}/Comma-separated_values[Comma-separated values]

|json
|application/json
|https://www.json.org/[JSON] (JavaScript Object Notation) human-readable format

|tsv
|text/tab-separated-values
|{wikipedia}/Tab-separated_values[Tab-separated values]

|txt
|text/plain
|CLI-like representation

|yaml
|application/yaml
|{wikipedia}/YAML[YAML] (YAML Ain't Markup Language) human-readable format

3+h| Binary Formats

|cbor
|application/cbor
|https://cbor.io/[Concise Binary Object Representation]

|smile
|application/smile
|{wikipedia}/Smile_(data_interchange_format)[Smile] binary data format similar to CBOR

|===
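
As an illustration of the header-based approach, the same query can be sent
with any HTTP client. The following is a minimal curl sketch (not one of the
tested examples), assuming {es} is listening on `localhost:9200` with security
disabled:

[source,sh]
--------------------------------------------------
# Request CSV output via the Accept header instead of the format URL parameter.
curl -X POST "localhost:9200/_sql" \
  -H "Content-Type: application/json" \
  -H "Accept: text/csv" \
  -d '{"query": "SELECT * FROM library ORDER BY page_count DESC LIMIT 5"}'
--------------------------------------------------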

The `CSV` format accepts a formatting URL query attribute, `delimiter`, which indicates which character should be used to separate the CSV
values. It defaults to comma (`,`) and cannot take any of the following values: double quote (`"`), carriage-return (`\r`) and new-line (`\n`).
The tab (`\t`) cannot be used either; use the `tsv` format instead.

Here are some examples for the human readable formats:

==== CSV

[source,console]
--------------------------------------------------
POST /_sql?format=csv
{
  "query": "SELECT * FROM library ORDER BY page_count DESC",
  "fetch_size": 5
}
--------------------------------------------------
// TEST[setup:library]

which returns:

[source,text]
--------------------------------------------------
author,name,page_count,release_date
Peter F. Hamilton,Pandora's Star,768,2004-03-02T00:00:00.000Z
Vernor Vinge,A Fire Upon the Deep,613,1992-06-01T00:00:00.000Z
Frank Herbert,Dune,604,1965-06-01T00:00:00.000Z
Alastair Reynolds,Revelation Space,585,2000-03-15T00:00:00.000Z
James S.A. Corey,Leviathan Wakes,561,2011-06-02T00:00:00.000Z
--------------------------------------------------
// TESTRESPONSE[non_json]

or:

[source,console]
--------------------------------------------------
POST /_sql?format=csv&delimiter=%3b
{
  "query": "SELECT * FROM library ORDER BY page_count DESC",
  "fetch_size": 5
}
--------------------------------------------------
// TEST[setup:library]

which returns:

[source,text]
--------------------------------------------------
author;name;page_count;release_date
Peter F. Hamilton;Pandora's Star;768;2004-03-02T00:00:00.000Z
Vernor Vinge;A Fire Upon the Deep;613;1992-06-01T00:00:00.000Z
Frank Herbert;Dune;604;1965-06-01T00:00:00.000Z
Alastair Reynolds;Revelation Space;585;2000-03-15T00:00:00.000Z
James S.A. Corey;Leviathan Wakes;561;2011-06-02T00:00:00.000Z
--------------------------------------------------
// TESTRESPONSE[non_json]

==== JSON

[source,console]
--------------------------------------------------
POST /_sql?format=json
{
  "query": "SELECT * FROM library ORDER BY page_count DESC",
  "fetch_size": 5
}
--------------------------------------------------
// TEST[setup:library]

Which returns:

[source,console-result]
--------------------------------------------------
{
  "columns": [
    {"name": "author", "type": "text"},
    {"name": "name", "type": "text"},
    {"name": "page_count", "type": "short"},
    {"name": "release_date", "type": "datetime"}
  ],
  "rows": [
    ["Peter F. Hamilton", "Pandora's Star", 768, "2004-03-02T00:00:00.000Z"],
    ["Vernor Vinge", "A Fire Upon the Deep", 613, "1992-06-01T00:00:00.000Z"],
    ["Frank Herbert", "Dune", 604, "1965-06-01T00:00:00.000Z"],
    ["Alastair Reynolds", "Revelation Space", 585, "2000-03-15T00:00:00.000Z"],
    ["James S.A. Corey", "Leviathan Wakes", 561, "2011-06-02T00:00:00.000Z"]
  ],
  "cursor": "sDXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAAEWWWdrRlVfSS1TbDYtcW9lc1FJNmlYdw==:BAFmBmF1dGhvcgFmBG5hbWUBZgpwYWdlX2NvdW50AWYMcmVsZWFzZV9kYXRl+v///w8="
}
--------------------------------------------------
// TESTRESPONSE[s/sDXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAAEWWWdrRlVfSS1TbDYtcW9lc1FJNmlYdw==:BAFmBmF1dGhvcgFmBG5hbWUBZgpwYWdlX2NvdW50AWYMcmVsZWFzZV9kYXRl\+v\/\/\/w8=/$body.cursor/]

==== TSV

[source,console]
--------------------------------------------------
POST /_sql?format=tsv
{
  "query": "SELECT * FROM library ORDER BY page_count DESC",
  "fetch_size": 5
}
--------------------------------------------------
// TEST[setup:library]

Which returns:

[source,text]
--------------------------------------------------
author name page_count release_date
Peter F. Hamilton Pandora's Star 768 2004-03-02T00:00:00.000Z
Vernor Vinge A Fire Upon the Deep 613 1992-06-01T00:00:00.000Z
Frank Herbert Dune 604 1965-06-01T00:00:00.000Z
Alastair Reynolds Revelation Space 585 2000-03-15T00:00:00.000Z
James S.A. Corey Leviathan Wakes 561 2011-06-02T00:00:00.000Z
--------------------------------------------------
// TESTRESPONSE[s/\t/ /]
// TESTRESPONSE[non_json]

==== TXT

[source,console]
--------------------------------------------------
POST /_sql?format=txt
{
  "query": "SELECT * FROM library ORDER BY page_count DESC",
  "fetch_size": 5
}
--------------------------------------------------
// TEST[setup:library]

Which returns:

[source,text]
--------------------------------------------------
     author      |        name        |  page_count   |      release_date
-----------------+--------------------+---------------+------------------------
Peter F. Hamilton|Pandora's Star      |768            |2004-03-02T00:00:00.000Z
Vernor Vinge     |A Fire Upon the Deep|613            |1992-06-01T00:00:00.000Z
Frank Herbert    |Dune                |604            |1965-06-01T00:00:00.000Z
Alastair Reynolds|Revelation Space    |585            |2000-03-15T00:00:00.000Z
James S.A. Corey |Leviathan Wakes     |561            |2011-06-02T00:00:00.000Z
--------------------------------------------------
// TESTRESPONSE[s/\|/\\|/ s/\+/\\+/]
// TESTRESPONSE[non_json]

==== YAML

[source,console]
--------------------------------------------------
POST /_sql?format=yaml
{
  "query": "SELECT * FROM library ORDER BY page_count DESC",
  "fetch_size": 5
}
--------------------------------------------------
// TEST[setup:library]

Which returns:

[source,yaml]
--------------------------------------------------
columns:
- name: "author"
  type: "text"
- name: "name"
  type: "text"
- name: "page_count"
  type: "short"
- name: "release_date"
  type: "datetime"
rows:
- - "Peter F. Hamilton"
  - "Pandora's Star"
  - 768
  - "2004-03-02T00:00:00.000Z"
- - "Vernor Vinge"
  - "A Fire Upon the Deep"
  - 613
  - "1992-06-01T00:00:00.000Z"
- - "Frank Herbert"
  - "Dune"
  - 604
  - "1965-06-01T00:00:00.000Z"
- - "Alastair Reynolds"
  - "Revelation Space"
  - 585
  - "2000-03-15T00:00:00.000Z"
- - "James S.A. Corey"
  - "Leviathan Wakes"
  - 561
  - "2011-06-02T00:00:00.000Z"
cursor: "sDXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAAEWWWdrRlVfSS1TbDYtcW9lc1FJNmlYdw==:BAFmBmF1dGhvcgFmBG5hbWUBZgpwYWdlX2NvdW50AWYMcmVsZWFzZV9kYXRl+v///w8="
--------------------------------------------------
// TESTRESPONSE[s/sDXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAAEWWWdrRlVfSS1TbDYtcW9lc1FJNmlYdw==:BAFmBmF1dGhvcgFmBG5hbWUBZgpwYWdlX2NvdW50AWYMcmVsZWFzZV9kYXRl\+v\/\/\/w8=/$body.cursor/]

[[sql-pagination]]
=== Paginating through a large response

Using the example from the <<sql-rest-format,previous section>>, one can
continue to the next page by sending back the cursor field. In the case of CSV, TSV and TXT
formats, the cursor is returned in the `Cursor` HTTP header.
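
For example, with curl you can inspect that header directly. This is a minimal
sketch (not a tested example), assuming {es} on `localhost:9200` with security
disabled:

[source,sh]
--------------------------------------------------
# -i prints the response headers; for format=txt the next-page cursor
# is returned in the "Cursor" header rather than in the body.
curl -i -X POST "localhost:9200/_sql?format=txt" \
  -H "Content-Type: application/json" \
  -d '{"query": "SELECT * FROM library ORDER BY page_count DESC", "fetch_size": 5}'
--------------------------------------------------

For the JSON format, the cursor is part of the response body, and the next page
is requested by posting it back: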

[source,console]
--------------------------------------------------
POST /_sql?format=json
{
  "cursor": "sDXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAAEWYUpOYklQMHhRUEtld3RsNnFtYU1hQQ==:BAFmBGRhdGUBZgVsaWtlcwFzB21lc3NhZ2UBZgR1c2Vy9f///w8="
}
--------------------------------------------------
// TEST[continued]
// TEST[s/sDXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAAEWYUpOYklQMHhRUEtld3RsNnFtYU1hQQ==:BAFmBGRhdGUBZgVsaWtlcwFzB21lc3NhZ2UBZgR1c2Vy9f\/\/\/w8=/$body.cursor/]

Which looks like:

[source,console-result]
--------------------------------------------------
{
  "rows" : [
    ["Dan Simmons", "Hyperion", 482, "1989-05-26T00:00:00.000Z"],
    ["Iain M. Banks", "Consider Phlebas", 471, "1987-04-23T00:00:00.000Z"],
    ["Neal Stephenson", "Snow Crash", 470, "1992-06-01T00:00:00.000Z"],
    ["Frank Herbert", "God Emperor of Dune", 454, "1981-05-28T00:00:00.000Z"],
    ["Frank Herbert", "Children of Dune", 408, "1976-04-21T00:00:00.000Z"]
  ],
  "cursor" : "sDXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAAEWODRMaXBUaVlRN21iTlRyWHZWYUdrdw==:BAFmBmF1dGhvcgFmBG5hbWUBZgpwYWdlX2NvdW50AWYMcmVsZWFzZV9kYXRl9f///w8="
}
--------------------------------------------------
// TESTRESPONSE[s/sDXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAAEWODRMaXBUaVlRN21iTlRyWHZWYUdrdw==:BAFmBmF1dGhvcgFmBG5hbWUBZgpwYWdlX2NvdW50AWYMcmVsZWFzZV9kYXRl9f\/\/\/w8=/$body.cursor/]

Note that the `columns` object is only part of the first page.

You've reached the last page when there is no `cursor` returned
in the results. Like Elasticsearch's <<scroll-search-results,scroll>>,
SQL may keep state in Elasticsearch to support the cursor. Unlike
scroll, receiving the last page is enough to guarantee that the
Elasticsearch state is cleared.
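
Put together, pagination amounts to a simple loop: request a page, then keep
resubmitting the returned `cursor` until a response comes back without one.
A minimal shell sketch (not a tested example), assuming {es} on
`localhost:9200` with security disabled and `jq` installed:

[source,sh]
--------------------------------------------------
# Fetch the first page, then keep sending the cursor back until the
# response no longer contains one (i.e. the last page was reached).
RESPONSE=$(curl -s -X POST "localhost:9200/_sql?format=json" \
  -H "Content-Type: application/json" \
  -d '{"query": "SELECT * FROM library ORDER BY page_count DESC", "fetch_size": 5}')
echo "$RESPONSE"

CURSOR=$(echo "$RESPONSE" | jq -r '.cursor // empty')
while [ -n "$CURSOR" ]; do
  RESPONSE=$(curl -s -X POST "localhost:9200/_sql?format=json" \
    -H "Content-Type: application/json" \
    -d "{\"cursor\": \"$CURSOR\"}")
  echo "$RESPONSE"
  CURSOR=$(echo "$RESPONSE" | jq -r '.cursor // empty')
done
--------------------------------------------------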

To clear the state earlier, use the <<clear-sql-cursor-api,clear cursor API>>:

[source,console]
--------------------------------------------------
POST /_sql/close
{
  "cursor": "sDXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAAEWYUpOYklQMHhRUEtld3RsNnFtYU1hQQ==:BAFmBGRhdGUBZgVsaWtlcwFzB21lc3NhZ2UBZgR1c2Vy9f///w8="
}
--------------------------------------------------
// TEST[continued]
// TEST[s/sDXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAAEWYUpOYklQMHhRUEtld3RsNnFtYU1hQQ==:BAFmBGRhdGUBZgVsaWtlcwFzB21lc3NhZ2UBZgR1c2Vy9f\/\/\/w8=/$body.cursor/]

Which will return:

[source,console-result]
--------------------------------------------------
{
  "succeeded" : true
}
--------------------------------------------------

[[sql-rest-filtering]]
=== Filtering using {es} Query DSL

One can filter the results that SQL will run on using standard
{es} Query DSL by specifying the query in the `filter`
parameter.

[source,console]
--------------------------------------------------
POST /_sql?format=txt
{
  "query": "SELECT * FROM library ORDER BY page_count DESC",
  "filter": {
    "range": {
      "page_count": {
        "gte" : 100,
        "lte" : 200
      }
    }
  },
  "fetch_size": 5
}
--------------------------------------------------
// TEST[setup:library]

Which returns:

[source,text]
--------------------------------------------------
    author     |                name                |  page_count   |      release_date
---------------+------------------------------------+---------------+------------------------
Douglas Adams  |The Hitchhiker's Guide to the Galaxy|180            |1979-10-12T00:00:00.000Z
--------------------------------------------------
// TESTRESPONSE[s/\|/\\|/ s/\+/\\+/]
// TESTRESPONSE[non_json]

[TIP]
=================
A useful and less obvious usage for standard Query DSL filtering is to search documents by a specific <<search-routing, routing key>>.
Because {es-sql} does not support a `routing` parameter, one can specify a <<mapping-routing-field, `terms` filter for the `_routing` field>> instead:

[source,console]
--------------------------------------------------
POST /_sql?format=txt
{
  "query": "SELECT * FROM library",
  "filter": {
    "terms": {
      "_routing": ["abc"]
    }
  }
}
--------------------------------------------------
// TEST[setup:library]
=================

[[sql-rest-columnar]]
=== Columnar results

The most common way of displaying the results of an SQL query is with each
individual record/document represented as one line/row. For certain formats, {es-sql} can return the results
in a columnar fashion: one row represents all the values of a certain column from the current page of results.
The following formats can be returned in columnar orientation: `json`, `yaml`, `cbor` and `smile`.

[source,console]
--------------------------------------------------
POST /_sql?format=json
{
  "query": "SELECT * FROM library ORDER BY page_count DESC",
  "fetch_size": 5,
  "columnar": true
}
--------------------------------------------------
// TEST[setup:library]

Which returns:

[source,console-result]
--------------------------------------------------
{
  "columns": [
    {"name": "author", "type": "text"},
    {"name": "name", "type": "text"},
    {"name": "page_count", "type": "short"},
    {"name": "release_date", "type": "datetime"}
  ],
  "values": [
    ["Peter F. Hamilton", "Vernor Vinge", "Frank Herbert", "Alastair Reynolds", "James S.A. Corey"],
    ["Pandora's Star", "A Fire Upon the Deep", "Dune", "Revelation Space", "Leviathan Wakes"],
    [768, 613, 604, 585, 561],
    ["2004-03-02T00:00:00.000Z", "1992-06-01T00:00:00.000Z", "1965-06-01T00:00:00.000Z", "2000-03-15T00:00:00.000Z", "2011-06-02T00:00:00.000Z"]
  ],
  "cursor": "sDXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAAEWWWdrRlVfSS1TbDYtcW9lc1FJNmlYdw==:BAFmBmF1dGhvcgFmBG5hbWUBZgpwYWdlX2NvdW50AWYMcmVsZWFzZV9kYXRl+v///w8="
}
--------------------------------------------------
// TESTRESPONSE[s/sDXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAAEWWWdrRlVfSS1TbDYtcW9lc1FJNmlYdw==:BAFmBmF1dGhvcgFmBG5hbWUBZgpwYWdlX2NvdW50AWYMcmVsZWFzZV9kYXRl\+v\/\/\/w8=/$body.cursor/]

Any subsequent calls using a `cursor` still have to contain the `columnar` parameter to preserve the orientation,
meaning the initial query will not _remember_ the columnar option.

[source,console]
--------------------------------------------------
POST /_sql?format=json
{
  "cursor": "sDXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAAEWWWdrRlVfSS1TbDYtcW9lc1FJNmlYdw==:BAFmBmF1dGhvcgFmBG5hbWUBZgpwYWdlX2NvdW50AWYMcmVsZWFzZV9kYXRl+v///w8=",
  "columnar": true
}
--------------------------------------------------
// TEST[continued]
// TEST[s/sDXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAAEWWWdrRlVfSS1TbDYtcW9lc1FJNmlYdw==:BAFmBmF1dGhvcgFmBG5hbWUBZgpwYWdlX2NvdW50AWYMcmVsZWFzZV9kYXRl\+v\/\/\/w8=/$body.cursor/]

Which looks like:

[source,console-result]
--------------------------------------------------
{
  "values": [
    ["Dan Simmons", "Iain M. Banks", "Neal Stephenson", "Frank Herbert", "Frank Herbert"],
    ["Hyperion", "Consider Phlebas", "Snow Crash", "God Emperor of Dune", "Children of Dune"],
    [482, 471, 470, 454, 408],
    ["1989-05-26T00:00:00.000Z", "1987-04-23T00:00:00.000Z", "1992-06-01T00:00:00.000Z", "1981-05-28T00:00:00.000Z", "1976-04-21T00:00:00.000Z"]
  ],
  "cursor": "46ToAwFzQERYRjFaWEo1UVc1a1JtVjBZMmdCQUFBQUFBQUFBQUVXWjBaNlFXbzNOV0pVY21Wa1NUZDJhV2t3V2xwblp3PT3/////DwQBZgZhdXRob3IBBHRleHQAAAFmBG5hbWUBBHRleHQAAAFmCnBhZ2VfY291bnQBBGxvbmcBAAFmDHJlbGVhc2VfZGF0ZQEIZGF0ZXRpbWUBAAEP"
}
--------------------------------------------------
// TESTRESPONSE[s/46ToAwFzQERYRjFaWEo1UVc1a1JtVjBZMmdCQUFBQUFBQUFBQUVXWjBaNlFXbzNOV0pVY21Wa1NUZDJhV2t3V2xwblp3PT3\/\/\/\/\/DwQBZgZhdXRob3IBBHRleHQAAAFmBG5hbWUBBHRleHQAAAFmCnBhZ2VfY291bnQBBGxvbmcBAAFmDHJlbGVhc2VfZGF0ZQEIZGF0ZXRpbWUBAAEP/$body.cursor/]

[[sql-rest-params]]
=== Passing parameters to a query

Values used in a query condition or in a `HAVING` clause, for example, can be passed "inline",
by integrating the value in the query string itself:

[source,console]
--------------------------------------------------
POST /_sql?format=txt
{
  "query": "SELECT YEAR(release_date) AS year FROM library WHERE page_count > 300 AND author = 'Frank Herbert' GROUP BY year HAVING COUNT(*) > 0"
}
--------------------------------------------------
// TEST[setup:library]

or they can be extracted into a separate list of parameters and referenced with question mark placeholders (`?`) in the query string:

[source,console]
--------------------------------------------------
POST /_sql?format=txt
{
  "query": "SELECT YEAR(release_date) AS year FROM library WHERE page_count > ? AND author = ? GROUP BY year HAVING COUNT(*) > ?",
  "params": [300, "Frank Herbert", 0]
}
--------------------------------------------------
// TEST[setup:library]

[IMPORTANT]
The recommended way of passing values to a query is with question mark placeholders, to guard against SQL injection attempts.

[[sql-runtime-fields]]
=== Use runtime fields

Use the `runtime_mappings` parameter to extract and create <<runtime,runtime
fields>>, or columns, from existing ones during a search.

The following search creates a `release_day_of_week` runtime field from
`release_date` and returns it in the response.

[source,console]
----
POST _sql?format=txt
{
  "runtime_mappings": {
    "release_day_of_week": {
      "type": "keyword",
      "script": """
        emit(doc['release_date'].value.dayOfWeekEnum.toString())
      """
    }
  },
  "query": """
    SELECT * FROM library WHERE page_count > 300 AND author = 'Frank Herbert'
  """
}
----
// TEST[setup:library]

The API returns:

[source,txt]
----
    author     |     name      |  page_count   |      release_date      |release_day_of_week
---------------+---------------+---------------+------------------------+-------------------
Frank Herbert  |Dune           |604            |1965-06-01T00:00:00.000Z|TUESDAY
----
// TESTRESPONSE[non_json]

[[sql-async]]
=== Run an async SQL search

By default, SQL searches are synchronous. They wait for complete results before
returning a response. However, results can take longer for searches across large
data sets or <<data-tiers,frozen data>>.

To avoid long waits, run an async SQL search. Set `wait_for_completion_timeout`
to a duration you’d like to wait for synchronous results.

[source,console]
----
POST _sql?format=json
{
  "wait_for_completion_timeout": "2s",
  "query": "SELECT * FROM library ORDER BY page_count DESC",
  "fetch_size": 5
}
----
// TEST[skip:waiting on https://github.com/elastic/elasticsearch/issues/75069]
// TEST[setup:library]
// TEST[s/"wait_for_completion_timeout": "2s"/"wait_for_completion_timeout": "0"/]

If the search doesn’t finish within this period, the search becomes async. The
API returns:

* An `id` for the search.
* An `is_partial` value of `true`, indicating the search results are incomplete.
* An `is_running` value of `true`, indicating the search is still running in the
background.

For CSV, TSV, and TXT responses, the API returns these values in the respective
`Async-ID`, `Async-partial`, and `Async-running` HTTP headers instead.
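
As a rough illustration, those headers can be inspected with curl. This is a
sketch only (not a tested example), assuming {es} on `localhost:9200` with
security disabled:

[source,sh]
----
# -i prints the response headers; for format=txt an async search reports
# Async-ID, Async-partial and Async-running as headers instead of body fields.
curl -i -X POST "localhost:9200/_sql?format=txt" \
  -H "Content-Type: application/json" \
  -d '{"wait_for_completion_timeout": "2s", "query": "SELECT * FROM library ORDER BY page_count DESC", "fetch_size": 5}'
----

For the `json` format used above, the same information appears in the response
body: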

[source,console-result]
----
{
  "id": "FnR0TDhyWUVmUmVtWXRWZER4MXZiNFEad2F5UDk2ZVdTVHV1S0xDUy00SklUdzozMTU=",
  "is_partial": true,
  "is_running": true,
  "rows": [ ]
}
----
// TESTRESPONSE[skip:waiting on https://github.com/elastic/elasticsearch/issues/75069]
// TESTRESPONSE[s/FnR0TDhyWUVmUmVtWXRWZER4MXZiNFEad2F5UDk2ZVdTVHV1S0xDUy00SklUdzozMTU=/$body.id/]
// TESTRESPONSE[s/"is_partial": true/"is_partial": $body.is_partial/]
// TESTRESPONSE[s/"is_running": true/"is_running": $body.is_running/]

To check the progress of an async search, use the search ID with the
<<get-async-sql-search-status-api,get async SQL search status API>>.

[source,console]
----
GET _sql/async/status/FnR0TDhyWUVmUmVtWXRWZER4MXZiNFEad2F5UDk2ZVdTVHV1S0xDUy00SklUdzozMTU=
----
// TEST[skip: no access to search ID]

If `is_running` and `is_partial` are `false`, the async search has finished with
complete results.

[source,console-result]
----
{
  "id": "FnR0TDhyWUVmUmVtWXRWZER4MXZiNFEad2F5UDk2ZVdTVHV1S0xDUy00SklUdzozMTU=",
  "is_running": false,
  "is_partial": false,
  "expiration_time_in_millis": 1611690295000,
  "completion_status": 200
}
----
// TESTRESPONSE[skip:waiting on https://github.com/elastic/elasticsearch/issues/75069]
// TESTRESPONSE[s/FnR0TDhyWUVmUmVtWXRWZER4MXZiNFEad2F5UDk2ZVdTVHV1S0xDUy00SklUdzozMTU=/$body.id/]
// TESTRESPONSE[s/"expiration_time_in_millis": 1611690295000/"expiration_time_in_millis": $body.expiration_time_in_millis/]

To get the results, use the search ID with the <<get-async-sql-search-api,get
async SQL search API>>. If the search is still running, specify how long you’d
like to wait using `wait_for_completion_timeout`. You can also specify the
response `format`.

[source,console]
----
GET _sql/async/FnR0TDhyWUVmUmVtWXRWZER4MXZiNFEad2F5UDk2ZVdTVHV1S0xDUy00SklUdzozMTU=?wait_for_completion_timeout=2s&format=json
----
// TEST[skip: no access to search ID]

[discrete]
[[sql-async-retention]]
==== Change the search retention period

By default, {es} stores async SQL searches for five days. After this period,
{es} deletes the search and its results, even if the search is still running. To
change this retention period, use the `keep_alive` parameter.

[source,console]
----
POST _sql?format=json
{
  "keep_alive": "2d",
  "wait_for_completion_timeout": "2s",
  "query": "SELECT * FROM library ORDER BY page_count DESC",
  "fetch_size": 5
}
----
// TEST[skip:waiting on https://github.com/elastic/elasticsearch/issues/75069]
// TEST[setup:library]

You can use the get async SQL search API's `keep_alive` parameter to later
change the retention period. The new period starts after the request runs.

[source,console]
----
GET _sql/async/FmdMX2pIang3UWhLRU5QS0lqdlppYncaMUpYQ05oSkpTc3kwZ21EdC1tbFJXQToxOTI=?keep_alive=5d&wait_for_completion_timeout=2s&format=json
----
// TEST[skip: no access to search ID]

Use the <<delete-async-sql-search-api,delete async SQL search API>> to delete an
async search before the `keep_alive` period ends. If the search is still
running, {es} cancels it.

[source,console]
----
DELETE _sql/async/delete/FmdMX2pIang3UWhLRU5QS0lqdlppYncaMUpYQ05oSkpTc3kwZ21EdC1tbFJXQToxOTI=
----
// TEST[skip: no access to search ID]

[discrete]
[[sql-store-searches]]
==== Store synchronous SQL searches

By default, {es} only stores async SQL searches. To save a synchronous search,
specify `wait_for_completion_timeout` and set `keep_on_completion` to `true`.

[source,console]
----
POST _sql?format=json
{
  "keep_on_completion": true,
  "wait_for_completion_timeout": "2s",
  "query": "SELECT * FROM library ORDER BY page_count DESC",
  "fetch_size": 5
}
----
// TEST[skip:waiting on https://github.com/elastic/elasticsearch/issues/75069]
// TEST[setup:library]

If `is_partial` and `is_running` are `false`, the search was synchronous and
returned complete results.

[source,console-result]
----
{
  "id": "Fnc5UllQdUVWU0NxRFNMbWxNYXplaFEaMUpYQ05oSkpTc3kwZ21EdC1tbFJXQTo0NzA=",
  "is_partial": false,
  "is_running": false,
  "rows": ...,
  "columns": ...,
  "cursor": ...
}
----
// TESTRESPONSE[skip:waiting on https://github.com/elastic/elasticsearch/issues/75069]
// TESTRESPONSE[s/Fnc5UllQdUVWU0NxRFNMbWxNYXplaFEaMUpYQ05oSkpTc3kwZ21EdC1tbFJXQTo0NzA=/$body.id/]
// TESTRESPONSE[s/"rows": \.\.\./"rows": $body.rows/]
// TESTRESPONSE[s/"columns": \.\.\./"columns": $body.columns/]
// TESTRESPONSE[s/"cursor": \.\.\./"cursor": $body.cursor/]

You can get the same results later using the search ID with the
<<get-async-sql-search-api,get async SQL search API>>.

Saved synchronous searches are still subject to the `keep_alive` retention
period. When this period ends, {es} deletes the search results. You can also
delete saved searches using the <<delete-async-sql-search-api,delete async SQL
search API>>.