[[ingest-attachment]]
=== Ingest Attachment Processor Plugin

The ingest attachment plugin lets Elasticsearch extract file attachments in common formats (such as PPT, XLS, and PDF) by
using the Apache text extraction library http://lucene.apache.org/tika/[Tika].

You can use the ingest attachment plugin as a replacement for the mapper attachment plugin.

The source field must be a base64 encoded binary. If you do not want to incur
the overhead of converting to and from base64, you can use the CBOR
format instead of JSON and specify the field as a bytes array instead of a string
representation. The processor will then skip the base64 decoding (a client-side
sketch of both options follows the first example below).

:plugin_name: ingest-attachment
include::install_remove.asciidoc[]

[[using-ingest-attachment]]
==== Using the Attachment Processor in a Pipeline

[[ingest-attachment-options]]
.Attachment options
[options="header"]
|======
| Name                  | Required | Default        | Description
| `field`               | yes      | -              | The field to get the base64 encoded data from
| `target_field`        | no       | attachment     | The field that will hold the attachment information
| `indexed_chars`       | no       | 100000         | The number of chars being used for extraction to prevent huge fields. Use `-1` for no limit.
| `indexed_chars_field` | no       | `null`         | Field name from which you can overwrite the number of chars being used for extraction. See `indexed_chars`.
| `properties`          | no       | all properties | Array of properties to select to be stored. Can be `content`, `title`, `name`, `author`, `keywords`, `date`, `content_type`, `content_length`, `language`
| `ignore_missing`      | no       | `false`        | If `true` and `field` does not exist, the processor quietly exits without modifying the document (see the sketch after this table)
|======
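
If documents may legitimately arrive without the source field, `ignore_missing` keeps the
pipeline from failing on them. Below is a minimal client-side sketch of trying this out with
the simulate pipeline API; it assumes Python with the third-party `requests` package and a
cluster reachable at `http://localhost:9200`, none of which are part of this plugin:

[source,python]
--------------------------------------------------
# Hedged sketch: exercise `ignore_missing` through the simulate pipeline API.
# Assumes the `requests` package and a local, unsecured cluster (placeholders).
import requests

ES = "http://localhost:9200"  # placeholder cluster address

body = {
    "pipeline": {
        "processors": [
            {"attachment": {"field": "data", "ignore_missing": True}}
        ]
    },
    "docs": [
        # No "data" field here: with ignore_missing=true the processor
        # leaves the document untouched instead of raising an error.
        {"_source": {"filename": "no-attachment.txt"}}
    ]
}

response = requests.post(f"{ES}/_ingest/pipeline/_simulate", json=body)
print(response.json())
--------------------------------------------------

With `ignore_missing` left at its default of `false`, the same simulated document would
instead return an error for that processor.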

For example, this:

[source,js]
--------------------------------------------------
PUT _ingest/pipeline/attachment
{
  "description" : "Extract attachment information",
  "processors" : [
    {
      "attachment" : {
        "field" : "data"
      }
    }
  ]
}
PUT my_index/_doc/my_id?pipeline=attachment
{
  "data": "e1xydGYxXGFuc2kNCkxvcmVtIGlwc3VtIGRvbG9yIHNpdCBhbWV0DQpccGFyIH0="
}
GET my_index/_doc/my_id
--------------------------------------------------
// CONSOLE

Returns this:

[source,js]
--------------------------------------------------
{
  "found": true,
  "_index": "my_index",
  "_type": "_doc",
  "_id": "my_id",
  "_version": 1,
  "_seq_no": 22,
  "_primary_term": 1,
  "_source": {
    "data": "e1xydGYxXGFuc2kNCkxvcmVtIGlwc3VtIGRvbG9yIHNpdCBhbWV0DQpccGFyIH0=",
    "attachment": {
      "content_type": "application/rtf",
      "language": "ro",
      "content": "Lorem ipsum dolor sit amet",
      "content_length": 28
    }
  }
}
--------------------------------------------------
// TESTRESPONSE[s/"_seq_no": \d+/"_seq_no": $body._seq_no/ s/"_primary_term": 1/"_primary_term": $body._primary_term/]
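
The `data` value in the example above is simply a base64-encoded RTF snippet. As noted at
the top of this page, a client sending JSON must base64 encode the binary itself, while a
CBOR request body can carry the raw bytes and skip that step. Here is a minimal,
illustrative sketch of both options; it assumes Python with the third-party `requests` and
`cbor2` packages, and the file path, index name, and cluster address are placeholders:

[source,python]
--------------------------------------------------
# Hedged sketch: index a local file through the attachment pipeline.
# Assumes the `requests` package (and `cbor2` for the CBOR variant).
import base64

import cbor2     # only needed for the CBOR variant below
import requests

ES = "http://localhost:9200"          # placeholder cluster address

with open("lorem.rtf", "rb") as f:    # placeholder file
    raw = f.read()

# JSON variant: the binary must be sent as a base64 string.
doc = {"data": base64.b64encode(raw).decode("ascii")}
requests.put(f"{ES}/my_index/_doc/my_id?pipeline=attachment", json=doc)

# CBOR variant: the field is sent as a raw byte string, so the processor
# can skip the base64 decoding step.
payload = cbor2.dumps({"data": raw})
requests.put(
    f"{ES}/my_index/_doc/my_id_cbor?pipeline=attachment",
    data=payload,
    headers={"Content-Type": "application/cbor"},
)
--------------------------------------------------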

To specify only some fields to be extracted:

[source,js]
--------------------------------------------------
PUT _ingest/pipeline/attachment
{
  "description" : "Extract attachment information",
  "processors" : [
    {
      "attachment" : {
        "field" : "data",
        "properties": [ "content", "title" ]
      }
    }
  ]
}
--------------------------------------------------
// CONSOLE

NOTE: Extracting contents from binary data is a resource-intensive operation. It is
highly recommended to run pipelines using this processor on a dedicated ingest node.

[[ingest-attachment-extracted-chars]]
==== Limit the number of extracted chars

To prevent extracting too many chars and overloading the node's memory, the number of chars used for extraction
is limited by default to `100000`. You can change this value by setting `indexed_chars`. Use `-1` for no limit, but
make sure your node has enough heap to extract the content of very big documents.

You can also define this limit per document by reading it from a field of the document itself. If the document
has that field, its value overrides the `indexed_chars` setting. To enable this, define the `indexed_chars_field`
setting.

For example:

[source,js]
--------------------------------------------------
PUT _ingest/pipeline/attachment
{
  "description" : "Extract attachment information",
  "processors" : [
    {
      "attachment" : {
        "field" : "data",
        "indexed_chars" : 11,
        "indexed_chars_field" : "max_size"
      }
    }
  ]
}
PUT my_index/_doc/my_id?pipeline=attachment
{
  "data": "e1xydGYxXGFuc2kNCkxvcmVtIGlwc3VtIGRvbG9yIHNpdCBhbWV0DQpccGFyIH0="
}
GET my_index/_doc/my_id
--------------------------------------------------
// CONSOLE

Returns this:

[source,js]
--------------------------------------------------
{
  "found": true,
  "_index": "my_index",
  "_type": "_doc",
  "_id": "my_id",
  "_version": 1,
  "_seq_no": 35,
  "_primary_term": 1,
  "_source": {
    "data": "e1xydGYxXGFuc2kNCkxvcmVtIGlwc3VtIGRvbG9yIHNpdCBhbWV0DQpccGFyIH0=",
    "attachment": {
      "content_type": "application/rtf",
      "language": "sl",
      "content": "Lorem ipsum",
      "content_length": 11
    }
  }
}
--------------------------------------------------
// TESTRESPONSE[s/"_seq_no": \d+/"_seq_no": $body._seq_no/ s/"_primary_term": 1/"_primary_term": $body._primary_term/]

If the document itself contains a value in the field configured as `indexed_chars_field` (`max_size` here), that
value overrides the pipeline's `indexed_chars` setting for that document:

[source,js]
--------------------------------------------------
PUT _ingest/pipeline/attachment
{
  "description" : "Extract attachment information",
  "processors" : [
    {
      "attachment" : {
        "field" : "data",
        "indexed_chars" : 11,
        "indexed_chars_field" : "max_size"
      }
    }
  ]
}
PUT my_index/_doc/my_id_2?pipeline=attachment
{
  "data": "e1xydGYxXGFuc2kNCkxvcmVtIGlwc3VtIGRvbG9yIHNpdCBhbWV0DQpccGFyIH0=",
  "max_size": 5
}
GET my_index/_doc/my_id_2
--------------------------------------------------
// CONSOLE

Returns this:

[source,js]
--------------------------------------------------
{
  "found": true,
  "_index": "my_index",
  "_type": "_doc",
  "_id": "my_id_2",
  "_version": 1,
  "_seq_no": 40,
  "_primary_term": 1,
  "_source": {
    "data": "e1xydGYxXGFuc2kNCkxvcmVtIGlwc3VtIGRvbG9yIHNpdCBhbWV0DQpccGFyIH0=",
    "max_size": 5,
    "attachment": {
      "content_type": "application/rtf",
      "language": "ro",
      "content": "Lorem",
      "content_length": 5
    }
  }
}
--------------------------------------------------
// TESTRESPONSE[s/"_seq_no": \d+/"_seq_no": $body._seq_no/ s/"_primary_term": 1/"_primary_term": $body._primary_term/]

[[ingest-attachment-with-arrays]]
==== Using the Attachment Processor with arrays

To use the attachment processor within an array of attachments, the
{ref}/foreach-processor.html[foreach processor] is required. This
enables the attachment processor to be run on the individual elements
of the array.

For example, given the following source:

[source,js]
--------------------------------------------------
{
  "attachments" : [
    {
      "filename" : "ipsum.txt",
      "data" : "dGhpcyBpcwpqdXN0IHNvbWUgdGV4dAo="
    },
    {
      "filename" : "test.txt",
      "data" : "VGhpcyBpcyBhIHRlc3QK"
    }
  ]
}
--------------------------------------------------
// NOTCONSOLE

In this case, we want to process the `data` field in each element
of the `attachments` field and insert the extracted properties into each
element, so the following `foreach` processor is used:

[source,js]
--------------------------------------------------
PUT _ingest/pipeline/attachment
{
  "description" : "Extract attachment information from arrays",
  "processors" : [
    {
      "foreach": {
        "field": "attachments",
        "processor": {
          "attachment": {
            "target_field": "_ingest._value.attachment",
            "field": "_ingest._value.data"
          }
        }
      }
    }
  ]
}
PUT my_index/_doc/my_id?pipeline=attachment
{
  "attachments" : [
    {
      "filename" : "ipsum.txt",
      "data" : "dGhpcyBpcwpqdXN0IHNvbWUgdGV4dAo="
    },
    {
      "filename" : "test.txt",
      "data" : "VGhpcyBpcyBhIHRlc3QK"
    }
  ]
}
GET my_index/_doc/my_id
--------------------------------------------------
// CONSOLE

Returns this:

[source,js]
--------------------------------------------------
{
  "_index" : "my_index",
  "_type" : "_doc",
  "_id" : "my_id",
  "_version" : 1,
  "_seq_no" : 50,
  "_primary_term" : 1,
  "found" : true,
  "_source" : {
    "attachments" : [
      {
        "filename" : "ipsum.txt",
        "data" : "dGhpcyBpcwpqdXN0IHNvbWUgdGV4dAo=",
        "attachment" : {
          "content_type" : "text/plain; charset=ISO-8859-1",
          "language" : "en",
          "content" : "this is\njust some text",
          "content_length" : 24
        }
      },
      {
        "filename" : "test.txt",
        "data" : "VGhpcyBpcyBhIHRlc3QK",
        "attachment" : {
          "content_type" : "text/plain; charset=ISO-8859-1",
          "language" : "en",
          "content" : "This is a test",
          "content_length" : 16
        }
      }
    ]
  }
}
--------------------------------------------------
// TESTRESPONSE[s/"_seq_no" : \d+/"_seq_no" : $body._seq_no/ s/"_primary_term" : 1/"_primary_term" : $body._primary_term/]

Note that the `target_field` needs to be set; otherwise the
default value is used, which is the top-level field `attachment`. The
properties on that top-level field would then contain the value of the
first attachment only. By pointing `target_field` at a value under
`_ingest._value`, the properties are correctly associated with their
respective attachments.