[[analysis-custom-analyzer]]
=== Custom Analyzer

When the built-in analyzers do not fulfill your needs, you can create a
`custom` analyzer which uses the appropriate combination of:

* zero or more <<analysis-charfilters, character filters>>
* a <<analysis-tokenizers,tokenizer>>
* zero or more <<analysis-tokenfilters,token filters>>.

[float]
=== Configuration

The `custom` analyzer accepts the following parameters:

[horizontal]
`tokenizer`::

    A built-in or customised <<analysis-tokenizers,tokenizer>>.
    (Required)

`char_filter`::

    An optional array of built-in or customised
    <<analysis-charfilters, character filters>>.

`filter`::

    An optional array of built-in or customised
    <<analysis-tokenfilters, token filters>>.

`position_increment_gap`::

    When indexing an array of text values, Elasticsearch inserts a fake "gap"
    between the last term of one value and the first term of the next value to
    ensure that a phrase query doesn't match two terms from different array
    elements. Defaults to `100`. See <<position-increment-gap>> for more.
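
The gap can be set on the analyzer itself. The following is a minimal sketch;
the index name, analyzer name, and the gap value of `500` are illustrative
only:

[source,js]
--------------------------------
PUT my_gap_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_gapped_analyzer": {
          "type": "custom",
          "tokenizer": "standard",
          "position_increment_gap": 500 <1>
        }
      }
    }
  }
}
--------------------------------
// CONSOLE
<1> A phrase query on a field using this analyzer would need a correspondingly
large `slop` to match terms from two different array elements.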

[float]
=== Example configuration

Here is an example that combines the following:

Character Filter::
* <<analysis-htmlstrip-charfilter,HTML Strip Character Filter>>

Tokenizer::
* <<analysis-standard-tokenizer,Standard Tokenizer>>

Token Filters::
* <<analysis-lowercase-tokenfilter,Lowercase Token Filter>>
* <<analysis-asciifolding-tokenfilter,ASCII-Folding Token Filter>>

[source,js]
--------------------------------
PUT my_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_custom_analyzer": {
          "type": "custom",
          "tokenizer": "standard",
          "char_filter": [
            "html_strip"
          ],
          "filter": [
            "lowercase",
            "asciifolding"
          ]
        }
      }
    }
  }
}

GET _cluster/health?wait_for_status=yellow

POST my_index/_analyze
{
  "analyzer": "my_custom_analyzer",
  "text": "Is this <b>déjà vu</b>?"
}
--------------------------------
// CONSOLE

/////////////////////

[source,js]
----------------------------
{
  "tokens": [
    {
      "token": "is",
      "start_offset": 0,
      "end_offset": 2,
      "type": "<ALPHANUM>",
      "position": 0
    },
    {
      "token": "this",
      "start_offset": 3,
      "end_offset": 7,
      "type": "<ALPHANUM>",
      "position": 1
    },
    {
      "token": "deja",
      "start_offset": 11,
      "end_offset": 15,
      "type": "<ALPHANUM>",
      "position": 2
    },
    {
      "token": "vu",
      "start_offset": 16,
      "end_offset": 22,
      "type": "<ALPHANUM>",
      "position": 3
    }
  ]
}
----------------------------
// TESTRESPONSE

/////////////////////

The above example produces the following terms:

[source,text]
---------------------------
[ is, this, deja, vu ]
---------------------------
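
To use the custom analyzer, reference it from a field mapping. The following
is a minimal sketch, assuming a single mapping type and a `text` field type;
the index, type, and field names below are illustrative only:

[source,js]
--------------------------------
PUT my_index2
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_custom_analyzer": {
          "type": "custom",
          "tokenizer": "standard",
          "char_filter": [ "html_strip" ],
          "filter": [ "lowercase", "asciifolding" ]
        }
      }
    }
  },
  "mappings": {
    "my_type": {
      "properties": {
        "my_text": {
          "type": "text",
          "analyzer": "my_custom_analyzer" <1>
        }
      }
    }
  }
}
--------------------------------
// CONSOLE
<1> Unless a separate `search_analyzer` is specified, `my_custom_analyzer` is
used at both index and search time for the `my_text` field.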

The previous example used tokenizer, token filters, and character filters with
their default configurations, but it is possible to create configured versions
of each and to use them in a custom analyzer.

Here is a more complicated example that combines the following:

Character Filter::
* <<analysis-mapping-charfilter,Mapping Character Filter>>, configured to replace `:)` with `_happy_` and `:(` with `_sad_`

Tokenizer::
* <<analysis-pattern-tokenizer,Pattern Tokenizer>>, configured to split on punctuation characters

Token Filters::
* <<analysis-lowercase-tokenfilter,Lowercase Token Filter>>
* <<analysis-stop-tokenfilter,Stop Token Filter>>, configured to use the pre-defined list of English stop words

Here is an example:

[source,js]
--------------------------------------------------
PUT my_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_custom_analyzer": {
          "type": "custom",
          "char_filter": [
            "emoticons" <1>
          ],
          "tokenizer": "punctuation", <1>
          "filter": [
            "lowercase",
            "english_stop" <1>
          ]
        }
      },
      "tokenizer": {
        "punctuation": { <1>
          "type": "pattern",
          "pattern": "[ .,!?]"
        }
      },
      "char_filter": {
        "emoticons": { <1>
          "type": "mapping",
          "mappings": [
            ":) => _happy_",
            ":( => _sad_"
          ]
        }
      },
      "filter": {
        "english_stop": { <1>
          "type": "stop",
          "stopwords": "_english_"
        }
      }
    }
  }
}

GET _cluster/health?wait_for_status=yellow

POST my_index/_analyze
{
  "analyzer": "my_custom_analyzer",
  "text": "I'm a :) person, and you?"
}
--------------------------------------------------
// CONSOLE
<1> The `emoticons` character filter, `punctuation` tokenizer and
`english_stop` token filter are custom implementations which are defined
in the same index settings.

/////////////////////

[source,js]
----------------------------
{
  "tokens": [
    {
      "token": "i'm",
      "start_offset": 0,
      "end_offset": 3,
      "type": "word",
      "position": 0
    },
    {
      "token": "_happy_",
      "start_offset": 6,
      "end_offset": 8,
      "type": "word",
      "position": 2
    },
    {
      "token": "person",
      "start_offset": 9,
      "end_offset": 15,
      "type": "word",
      "position": 3
    },
    {
      "token": "you",
      "start_offset": 21,
      "end_offset": 24,
      "type": "word",
      "position": 5
    }
  ]
}
----------------------------
// TESTRESPONSE

/////////////////////

The above example produces the following terms:

[source,text]
---------------------------
[ i'm, _happy_, person, you ]
---------------------------
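
Because the analyzer also runs on query strings by default, the emoticon
mapping applies at search time as well. The following is a minimal sketch,
assuming the `my_text` field has been mapped to use this `my_custom_analyzer`
in the same way as the mapping sketch shown earlier; the document and query
below are illustrative only:

[source,js]
--------------------------------
PUT my_index/my_type/1?refresh
{
  "my_text": "What a :) day!"
}

GET my_index/_search
{
  "query": {
    "match": {
      "my_text": ":)" <1>
    }
  }
}
--------------------------------
// CONSOLE
<1> The query string is analyzed with `my_custom_analyzer`, so `:)` becomes
the term `_happy_` and matches the indexed document.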