[[ingest-attachment]]
=== Ingest Attachment Processor Plugin

The ingest attachment plugin lets Elasticsearch extract file attachments in common formats (such as PPT, XLS, and PDF) by
using the Apache text extraction library https://tika.apache.org/[Tika].

You can use the ingest attachment plugin as a replacement for the mapper attachment plugin.

The source field must be a base64 encoded binary. If you do not want to incur
the overhead of converting back and forth between base64, you can use the CBOR
format instead of JSON and specify the field as a bytes array instead of a string
representation. The processor will then skip the base64 decoding.

:plugin_name: ingest-attachment
include::install_remove.asciidoc[]

[[using-ingest-attachment]]
==== Using the Attachment Processor in a Pipeline

[[ingest-attachment-options]]
.Attachment options
[options="header"]
|======
| Name                  | Required | Default        | Description
| `field`               | yes      | -              | The field to get the base64 encoded data from
| `target_field`        | no       | attachment     | The field that will hold the attachment information
| `indexed_chars`       | no       | 100000         | The number of chars to use for extraction, to prevent huge fields. Use `-1` for no limit.
| `indexed_chars_field` | no       | `null`         | Field name from which you can overwrite the number of chars being used for extraction. See `indexed_chars`.
| `properties`          | no       | all properties | Array of properties to select to be stored. Can be `content`, `title`, `name`, `author`, `keywords`, `date`, `content_type`, `content_length`, `language`
| `ignore_missing`      | no       | `false`        | If `true` and `field` does not exist, the processor quietly exits without modifying the document
| `resource_name`       | no       |                | Field containing the name of the resource to decode. If specified, the processor passes this resource name to the underlying Tika library to enable https://tika.apache.org/1.24.1/detection.html#Resource_Name_Based_Detection[Resource Name Based Detection].
|======

[discrete]
[[ingest-attachment-json-ex]]
==== Example

If attaching files to JSON documents, you must first encode the file as a base64
string. On Unix-like systems, you can do this using a `base64` command:

[source,shell]
----
base64 -in myfile.rtf
----

The command returns the base64-encoded string for the file. The following base64
string is for an `.rtf` file containing the text `Lorem ipsum dolor sit amet`:
`e1xydGYxXGFuc2kNCkxvcmVtIGlwc3VtIGRvbG9yIHNpdCBhbWV0DQpccGFyIH0=`.

Use an attachment processor to decode the string and extract the file's
properties:

[source,console]
----
PUT _ingest/pipeline/attachment
{
  "description" : "Extract attachment information",
  "processors" : [
    {
      "attachment" : {
        "field" : "data"
      }
    }
  ]
}
PUT my-index-000001/_doc/my_id?pipeline=attachment
{
  "data": "e1xydGYxXGFuc2kNCkxvcmVtIGlwc3VtIGRvbG9yIHNpdCBhbWV0DQpccGFyIH0="
}
GET my-index-000001/_doc/my_id
----

The document's `attachment` object contains extracted properties for the file:

[source,console-result]
----
{
  "found": true,
  "_index": "my-index-000001",
  "_id": "my_id",
  "_version": 1,
  "_seq_no": 22,
  "_primary_term": 1,
  "_source": {
    "data": "e1xydGYxXGFuc2kNCkxvcmVtIGlwc3VtIGRvbG9yIHNpdCBhbWV0DQpccGFyIH0=",
    "attachment": {
      "content_type": "application/rtf",
      "language": "ro",
      "content": "Lorem ipsum dolor sit amet",
      "content_length": 28
    }
  }
}
----
// TESTRESPONSE[s/"_seq_no": \d+/"_seq_no" : $body._seq_no/ s/"_primary_term" : 1/"_primary_term" : $body._primary_term/]
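
If you build the indexing request in code rather than from the shell, the same encoding step can
be done programmatically. The following sketch is an illustration only, assuming a local cluster
at `localhost:9200`, the `attachment` pipeline created above, and the `requests` library; it
base64 encodes a local file and indexes it through the pipeline:

[source,python]
----
import base64
import requests

# Read the file and base64 encode its bytes for the JSON `data` field.
with open('myfile.rtf', 'rb') as f:
  encoded = base64.b64encode(f.read()).decode('ascii')

# Index the document through the attachment pipeline (hypothetical local cluster).
requests.put(
  'http://localhost:9200/my-index-000001/_doc/my_id?pipeline=attachment',
  json={'data': encoded}
)
----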

To extract only certain `attachment` fields, specify the `properties` array:

[source,console]
----
PUT _ingest/pipeline/attachment
{
  "description" : "Extract attachment information",
  "processors" : [
    {
      "attachment" : {
        "field" : "data",
        "properties": [ "content", "title" ]
      }
    }
  ]
}
----

NOTE: Extracting contents from binary data is a resource-intensive operation. It is highly
recommended to run pipelines using this processor on a dedicated ingest node.

[[ingest-attachment-cbor]]
==== Use the attachment processor with CBOR

To avoid the overhead of base64 encoding and decoding with JSON, you can instead pass CBOR data to
the attachment processor. For example, the following request creates the
`cbor-attachment` pipeline, which uses the attachment processor.

[source,console]
----
PUT _ingest/pipeline/cbor-attachment
{
  "description" : "Extract attachment information",
  "processors" : [
    {
      "attachment" : {
        "field" : "data"
      }
    }
  ]
}
----

The following Python script passes CBOR data to an HTTP indexing request that
includes the `cbor-attachment` pipeline. The HTTP request headers use a
`content-type` of `application/cbor`.

NOTE: Not all {es} clients support custom HTTP request headers.

[source,python]
----
import cbor2
import requests

file = 'my-file'
headers = {'content-type': 'application/cbor'}

with open(file, 'rb') as f:
  # CBOR can carry the raw file bytes directly, so no base64 encoding is needed.
  doc = {
    'data': f.read()
  }
  # Index the document through the cbor-attachment pipeline.
  requests.put(
    'http://localhost:9200/my-index-000001/_doc/my_id?pipeline=cbor-attachment',
    data=cbor2.dumps(doc),
    headers=headers
  )
----
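
Once indexed, you can fetch the document as usual to confirm that the extracted properties are
present. A minimal check, assuming the same local cluster and document ID as above, might look
like this:

[source,python]
----
import requests

# Retrieve the indexed document and print the extracted attachment properties.
resp = requests.get('http://localhost:9200/my-index-000001/_doc/my_id')
print(resp.json()['_source']['attachment'])
----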

[[ingest-attachment-extracted-chars]]
==== Limit the number of extracted chars

To prevent extracting too many chars and overloading the node memory, the number of chars used for
extraction is limited by default to `100000`. You can change this value by setting `indexed_chars`.
Use `-1` for no limit, but make sure that your node has enough heap memory to extract the content of
very big documents.

You can also define this limit per document by naming, in the `indexed_chars_field` setting, a field
from which to read the limit. If the document has that field, its value overrides the `indexed_chars`
setting.

For example:

[source,console]
--------------------------------------------------
PUT _ingest/pipeline/attachment
{
  "description" : "Extract attachment information",
  "processors" : [
    {
      "attachment" : {
        "field" : "data",
        "indexed_chars" : 11,
        "indexed_chars_field" : "max_size"
      }
    }
  ]
}
PUT my-index-000001/_doc/my_id?pipeline=attachment
{
  "data": "e1xydGYxXGFuc2kNCkxvcmVtIGlwc3VtIGRvbG9yIHNpdCBhbWV0DQpccGFyIH0="
}
GET my-index-000001/_doc/my_id
--------------------------------------------------

Returns this:

[source,console-result]
--------------------------------------------------
{
  "found": true,
  "_index": "my-index-000001",
  "_id": "my_id",
  "_version": 1,
  "_seq_no": 35,
  "_primary_term": 1,
  "_source": {
    "data": "e1xydGYxXGFuc2kNCkxvcmVtIGlwc3VtIGRvbG9yIHNpdCBhbWV0DQpccGFyIH0=",
    "attachment": {
      "content_type": "application/rtf",
      "language": "sl",
      "content": "Lorem ipsum",
      "content_length": 11
    }
  }
}
--------------------------------------------------
// TESTRESPONSE[s/"_seq_no": \d+/"_seq_no" : $body._seq_no/ s/"_primary_term" : 1/"_primary_term" : $body._primary_term/]

[source,console]
--------------------------------------------------
PUT _ingest/pipeline/attachment
{
  "description" : "Extract attachment information",
  "processors" : [
    {
      "attachment" : {
        "field" : "data",
        "indexed_chars" : 11,
        "indexed_chars_field" : "max_size"
      }
    }
  ]
}
PUT my-index-000001/_doc/my_id_2?pipeline=attachment
{
  "data": "e1xydGYxXGFuc2kNCkxvcmVtIGlwc3VtIGRvbG9yIHNpdCBhbWV0DQpccGFyIH0=",
  "max_size": 5
}
GET my-index-000001/_doc/my_id_2
--------------------------------------------------

Returns this:

[source,console-result]
--------------------------------------------------
{
  "found": true,
  "_index": "my-index-000001",
  "_id": "my_id_2",
  "_version": 1,
  "_seq_no": 40,
  "_primary_term": 1,
  "_source": {
    "data": "e1xydGYxXGFuc2kNCkxvcmVtIGlwc3VtIGRvbG9yIHNpdCBhbWV0DQpccGFyIH0=",
    "max_size": 5,
    "attachment": {
      "content_type": "application/rtf",
      "language": "ro",
      "content": "Lorem",
      "content_length": 5
    }
  }
}
--------------------------------------------------
// TESTRESPONSE[s/"_seq_no": \d+/"_seq_no" : $body._seq_no/ s/"_primary_term" : 1/"_primary_term" : $body._primary_term/]

[[ingest-attachment-with-arrays]]
==== Using the Attachment Processor with arrays

To use the attachment processor within an array of attachments, the
{ref}/foreach-processor.html[foreach processor] is required. This
enables the attachment processor to be run on the individual elements
of the array.

For example, given the following source:

[source,js]
--------------------------------------------------
{
  "attachments" : [
    {
      "filename" : "ipsum.txt",
      "data" : "dGhpcyBpcwpqdXN0IHNvbWUgdGV4dAo="
    },
    {
      "filename" : "test.txt",
      "data" : "VGhpcyBpcyBhIHRlc3QK"
    }
  ]
}
--------------------------------------------------
// NOTCONSOLE

In this case, we want to process the `data` field in each element
of the `attachments` field and insert
the properties into the document, so the following `foreach`
processor is used:

[source,console]
--------------------------------------------------
PUT _ingest/pipeline/attachment
{
  "description" : "Extract attachment information from arrays",
  "processors" : [
    {
      "foreach": {
        "field": "attachments",
        "processor": {
          "attachment": {
            "target_field": "_ingest._value.attachment",
            "field": "_ingest._value.data"
          }
        }
      }
    }
  ]
}
PUT my-index-000001/_doc/my_id?pipeline=attachment
{
  "attachments" : [
    {
      "filename" : "ipsum.txt",
      "data" : "dGhpcyBpcwpqdXN0IHNvbWUgdGV4dAo="
    },
    {
      "filename" : "test.txt",
      "data" : "VGhpcyBpcyBhIHRlc3QK"
    }
  ]
}
GET my-index-000001/_doc/my_id
--------------------------------------------------

Returns this:

[source,console-result]
--------------------------------------------------
{
  "_index" : "my-index-000001",
  "_id" : "my_id",
  "_version" : 1,
  "_seq_no" : 50,
  "_primary_term" : 1,
  "found" : true,
  "_source" : {
    "attachments" : [
      {
        "filename" : "ipsum.txt",
        "data" : "dGhpcyBpcwpqdXN0IHNvbWUgdGV4dAo=",
        "attachment" : {
          "content_type" : "text/plain; charset=ISO-8859-1",
          "language" : "en",
          "content" : "this is\njust some text",
          "content_length" : 24
        }
      },
      {
        "filename" : "test.txt",
        "data" : "VGhpcyBpcyBhIHRlc3QK",
        "attachment" : {
          "content_type" : "text/plain; charset=ISO-8859-1",
          "language" : "en",
          "content" : "This is a test",
          "content_length" : 16
        }
      }
    ]
  }
}
--------------------------------------------------
// TESTRESPONSE[s/"_seq_no" : \d+/"_seq_no" : $body._seq_no/ s/"_primary_term" : 1/"_primary_term" : $body._primary_term/]

Note that the `target_field` needs to be set; otherwise the
default value, the top-level field `attachment`, is used. In that case the
properties on this top-level field will contain the value of the
first attachment only. However, by setting
`target_field` to a value under `_ingest._value`, the processor
associates the properties with the correct attachment.