[[analysis-custom-analyzer]]
=== Custom Analyzer

When the built-in analyzers do not fulfill your needs, you can create a
`custom` analyzer which uses the appropriate combination of:

* zero or more <<analysis-charfilters, character filters>>
* a <<analysis-tokenizers,tokenizer>>
* zero or more <<analysis-tokenfilters,token filters>>.

[float]
=== Configuration

The `custom` analyzer accepts the following parameters:

[horizontal]
`tokenizer`::

    A built-in or customised <<analysis-tokenizers,tokenizer>>.
    (Required)

`char_filter`::

    An optional array of built-in or customised
    <<analysis-charfilters, character filters>>.

`filter`::

    An optional array of built-in or customised
    <<analysis-tokenfilters, token filters>>.

`position_increment_gap`::

    When indexing an array of text values, Elasticsearch inserts a fake "gap"
    between the last term of one value and the first term of the next value to
    ensure that a phrase query doesn't match two terms from different array
    elements. Defaults to `100`. See <<position-increment-gap>> for more.

[float]
=== Example configuration

Here is an example that combines the following:

Character Filter::
* <<analysis-htmlstrip-charfilter,HTML Strip Character Filter>>

Tokenizer::
* <<analysis-standard-tokenizer,Standard Tokenizer>>

Token Filters::
* <<analysis-lowercase-tokenfilter,Lowercase Token Filter>>
* <<analysis-asciifolding-tokenfilter,ASCII-Folding Token Filter>>

[source,js]
--------------------------------
PUT my_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_custom_analyzer": {
          "type": "custom", <1>
          "tokenizer": "standard",
          "char_filter": [
            "html_strip"
          ],
          "filter": [
            "lowercase",
            "asciifolding"
          ]
        }
      }
    }
  }
}

POST my_index/_analyze
{
  "analyzer": "my_custom_analyzer",
  "text": "Is this <b>déjà vu</b>?"
}
--------------------------------
// CONSOLE
<1> Setting `type` to `custom` tells Elasticsearch that we are defining a custom analyzer.
    Compare this to how <<configuring-analyzers,built-in analyzers can be configured>>:
    `type` will be set to the name of the built-in analyzer, like
    <<analysis-standard-analyzer,`standard`>> or <<analysis-simple-analyzer,`simple`>>.
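
For instance, a configured version of the built-in `standard` analyzer might
look like the following sketch (the `std_english` name is illustrative; see
<<configuring-analyzers>> for the full details):

[source,js]
--------------------------------
PUT std_example_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "std_english": {
          "type": "standard",
          "stopwords": "_english_"
        }
      }
    }
  }
}
--------------------------------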

/////////////////////

[source,console-result]
----------------------------
{
  "tokens": [
    {
      "token": "is",
      "start_offset": 0,
      "end_offset": 2,
      "type": "<ALPHANUM>",
      "position": 0
    },
    {
      "token": "this",
      "start_offset": 3,
      "end_offset": 7,
      "type": "<ALPHANUM>",
      "position": 1
    },
    {
      "token": "deja",
      "start_offset": 11,
      "end_offset": 15,
      "type": "<ALPHANUM>",
      "position": 2
    },
    {
      "token": "vu",
      "start_offset": 16,
      "end_offset": 22,
      "type": "<ALPHANUM>",
      "position": 3
    }
  ]
}
----------------------------

/////////////////////

The custom analyzer above produces the following terms:

[source,text]
---------------------------
[ is, this, deja, vu ]
---------------------------

The previous example used a tokenizer, token filters, and character filters with
their default configurations, but it is possible to create configured versions
of each and to use them in a custom analyzer.

Here is a more complicated example that combines the following:

Character Filter::
* <<analysis-mapping-charfilter,Mapping Character Filter>>, configured to replace `:)` with `_happy_` and `:(` with `_sad_`

Tokenizer::
* <<analysis-pattern-tokenizer,Pattern Tokenizer>>, configured to split on punctuation characters

Token Filters::
* <<analysis-lowercase-tokenfilter,Lowercase Token Filter>>
* <<analysis-stop-tokenfilter,Stop Token Filter>>, configured to use the pre-defined list of English stop words

Here is an example:

[source,js]
--------------------------------------------------
PUT my_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_custom_analyzer": { <1>
          "type": "custom",
          "char_filter": [
            "emoticons"
          ],
          "tokenizer": "punctuation",
          "filter": [
            "lowercase",
            "english_stop"
          ]
        }
      },
      "tokenizer": {
        "punctuation": { <2>
          "type": "pattern",
          "pattern": "[ .,!?]"
        }
      },
      "char_filter": {
        "emoticons": { <3>
          "type": "mapping",
          "mappings": [
            ":) => _happy_",
            ":( => _sad_"
          ]
        }
      },
      "filter": {
        "english_stop": { <4>
          "type": "stop",
          "stopwords": "_english_"
        }
      }
    }
  }
}

POST my_index/_analyze
{
  "analyzer": "my_custom_analyzer",
  "text": "I'm a :) person, and you?"
}
--------------------------------------------------
// CONSOLE
<1> Defines a custom analyzer for the index, `my_custom_analyzer`. This
    analyzer uses a custom tokenizer, character filter, and token filter that
    are defined later in the request.
<2> Defines the custom `punctuation` tokenizer.
<3> Defines the custom `emoticons` character filter.
<4> Defines the custom `english_stop` token filter.

/////////////////////

[source,console-result]
----------------------------
{
  "tokens": [
    {
      "token": "i'm",
      "start_offset": 0,
      "end_offset": 3,
      "type": "word",
      "position": 0
    },
    {
      "token": "_happy_",
      "start_offset": 6,
      "end_offset": 8,
      "type": "word",
      "position": 2
    },
    {
      "token": "person",
      "start_offset": 9,
      "end_offset": 15,
      "type": "word",
      "position": 3
    },
    {
      "token": "you",
      "start_offset": 21,
      "end_offset": 24,
      "type": "word",
      "position": 5
    }
  ]
}
----------------------------

/////////////////////

The above example produces the following terms:

[source,text]
---------------------------
[ i'm, _happy_, person, you ]
---------------------------
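
To put a custom analyzer to work at index time, reference it from a field
mapping. The sketch below (field name and mapping layout are illustrative, and
the mapping structure may differ on older Elasticsearch versions that still use
mapping types) applies `my_custom_analyzer` from the first example to a `text`
field:

[source,js]
--------------------------------
PUT my_mapped_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_custom_analyzer": {
          "type": "custom",
          "tokenizer": "standard",
          "char_filter": [ "html_strip" ],
          "filter": [ "lowercase", "asciifolding" ]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "title": {
        "type": "text",
        "analyzer": "my_custom_analyzer"
      }
    }
  }
}
--------------------------------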