
[[ingest-attachment]]
=== Ingest Attachment Processor Plugin

The ingest attachment plugin lets Elasticsearch extract file attachments in common
formats (such as PPT, XLS, and PDF) by using the Apache text extraction library
http://lucene.apache.org/tika/[Tika].

You can use the ingest attachment plugin as a replacement for the mapper attachment plugin.
The source field must be a base64 encoded binary. If you do not want to incur
the overhead of converting back and forth between base64, you can use the CBOR
format instead of JSON and specify the field as a bytes array instead of a
string representation. The processor then skips base64 decoding.
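Producing the base64 payload is a one-liner in most languages. The following sketch shows how the `data` value used in the example below could be generated in Python; the raw RTF bytes are inlined here, but in practice they would come from reading the attachment file:

```python
import base64

# Raw bytes of the attachment. In a real application this would be
# the result of open("some-file.rtf", "rb").read(); the RTF content
# here matches the example document used later in this page.
raw = b"{\\rtf1\\ansi\r\nLorem ipsum dolor sit amet\r\n\\par }"

# Base64-encode the bytes and decode to an ASCII string, which is the
# form the attachment processor expects in the source field.
encoded = base64.b64encode(raw).decode("ascii")

# Document body to index through the pipeline.
doc = {"data": encoded}
```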
[[ingest-attachment-install]]
[float]
==== Installation

This plugin can be installed using the plugin manager:

[source,sh]
----------------------------------------------------------------
sudo bin/elasticsearch-plugin install ingest-attachment
----------------------------------------------------------------
// NOTCONSOLE

The plugin must be installed on every node in the cluster, and each node must
be restarted after installation.
[[ingest-attachment-remove]]
[float]
==== Removal

The plugin can be removed with the following command:

[source,sh]
----------------------------------------------------------------
sudo bin/elasticsearch-plugin remove ingest-attachment
----------------------------------------------------------------
// NOTCONSOLE

The node must be stopped before removing the plugin.
[[using-ingest-attachment]]
==== Using the Attachment Processor in a Pipeline

[[ingest-attachment-options]]
.Attachment options
[options="header"]
|======
| Name            | Required | Default    | Description
| `field`         | yes      | -          | The field to get the base64 encoded data from
| `target_field`  | no       | attachment | The field that will hold the attachment information
| `indexed_chars` | no       | 100000     | The number of chars used for extraction, to prevent huge fields. Use `-1` for no limit.
| `properties`    | no       | all        | Array of properties to select and store. Can be `content`, `title`, `name`, `author`, `keywords`, `date`, `content_type`, `content_length`, `language`
|======
For example, this:

[source,js]
--------------------------------------------------
PUT _ingest/pipeline/attachment
{
  "description" : "Extract attachment information",
  "processors" : [
    {
      "attachment" : {
        "field" : "data"
      }
    }
  ]
}
PUT my_index/my_type/my_id?pipeline=attachment
{
  "data": "e1xydGYxXGFuc2kNCkxvcmVtIGlwc3VtIGRvbG9yIHNpdCBhbWV0DQpccGFyIH0="
}
GET my_index/my_type/my_id
--------------------------------------------------
// CONSOLE
Returns this:

[source,js]
--------------------------------------------------
{
  "found": true,
  "_index": "my_index",
  "_type": "my_type",
  "_id": "my_id",
  "_version": 1,
  "_source": {
    "data": "e1xydGYxXGFuc2kNCkxvcmVtIGlwc3VtIGRvbG9yIHNpdCBhbWV0DQpccGFyIH0=",
    "attachment": {
      "content_type": "application/rtf",
      "language": "ro",
      "content": "Lorem ipsum dolor sit amet",
      "content_length": 28
    }
  }
}
--------------------------------------------------
// TESTRESPONSE
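The optional parameters from the options table combine in the same processor definition. A minimal sketch, built here as a plain Python dict (the pipeline name, target field, and property selection are illustrative, not from this page's example):

```python
import json

# Illustrative pipeline body combining the optional parameters:
# a custom target_field, a cap on extracted characters, and a
# restricted set of extracted properties.
pipeline = {
    "description": "Extract a limited set of attachment properties",
    "processors": [
        {
            "attachment": {
                "field": "data",
                "target_field": "file_info",   # default is "attachment"
                "indexed_chars": 50000,        # default is 100000; -1 for no limit
                "properties": ["content", "title", "content_type"],
            }
        }
    ],
}

# Serialized body for a hypothetical `PUT _ingest/pipeline/attachment_limited`.
body = json.dumps(pipeline)
```

With `properties` restricted this way, the resulting `file_info` object would carry only the listed keys instead of every property Tika extracts.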
NOTE: Extracting contents from binary data is a resource intensive operation. It
is highly recommended to run pipelines using this processor on dedicated ingest
nodes.