[[analysis-edgengram-tokenizer]]
=== Edge n-gram tokenizer
++++
<titleabbrev>Edge n-gram</titleabbrev>
++++

The `edge_ngram` tokenizer first breaks text down into words whenever it
encounters one of a list of specified characters, then it emits
https://en.wikipedia.org/wiki/N-gram[N-grams] of each word where the start of
the N-gram is anchored to the beginning of the word.

Edge N-grams are useful for _search-as-you-type_ queries.

TIP: When you need _search-as-you-type_ for text which has a widely known
order, such as movie or song titles, the
<<completion-suggester,completion suggester>> is a much more efficient
choice than edge N-grams. Edge N-grams have the advantage when trying to
autocomplete words that can appear in any order.

[float]
=== Example output

With the default settings, the `edge_ngram` tokenizer treats the initial text as a
single token and produces N-grams with minimum length `1` and maximum length
`2`:

[source,console]
---------------------------
POST _analyze
{
  "tokenizer": "edge_ngram",
  "text": "Quick Fox"
}
---------------------------

/////////////////////

[source,console-result]
----------------------------
{
  "tokens": [
    {
      "token": "Q",
      "start_offset": 0,
      "end_offset": 1,
      "type": "word",
      "position": 0
    },
    {
      "token": "Qu",
      "start_offset": 0,
      "end_offset": 2,
      "type": "word",
      "position": 1
    }
  ]
}
----------------------------

/////////////////////

The above sentence would produce the following terms:

[source,text]
---------------------------
[ Q, Qu ]
---------------------------

NOTE: These default gram lengths are almost entirely useless. You need to
configure the `edge_ngram` tokenizer before using it.

[float]
=== Configuration

The `edge_ngram` tokenizer accepts the following parameters:

`min_gram`::
    Minimum length of characters in a gram. Defaults to `1`.

`max_gram`::
+
--
    Maximum length of characters in a gram. Defaults to `2`.
    See <<max-gram-limits>>.
--

`token_chars`::

    Character classes that should be included in a token. Elasticsearch
    will split on characters that don't belong to the classes specified.
    Defaults to `[]` (keep all characters).
+
Character classes may be any of the following:
+
* `letter` --      for example `a`, `b`, `ï` or `京`
* `digit` --       for example `3` or `7`
* `whitespace` --  for example `" "` or `"\n"`
* `punctuation` -- for example `!` or `"`
* `symbol` --      for example `$` or `√`
* `custom` --      custom characters which need to be set using the
`custom_token_chars` setting.

`custom_token_chars`::

    Custom characters that should be treated as part of a token. For example,
    setting this to `+-_` will make the tokenizer treat the plus, minus and
    underscore sign as part of a token.

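For example, the following request sketches a tokenizer that treats `+`, `-`
and `_` as token characters by combining the `custom` character class with
`custom_token_chars`. The index, analyzer, tokenizer names, and sample text
here are illustrative only:

[source,console]
----------------------------
PUT my_custom_chars_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "tokenizer": "my_tokenizer"
        }
      },
      "tokenizer": {
        "my_tokenizer": {
          "type": "edge_ngram",
          "min_gram": 2,
          "max_gram": 10,
          "token_chars": [
            "letter",
            "digit",
            "custom"
          ],
          "custom_token_chars": "+-_"
        }
      }
    }
  }
}

POST my_custom_chars_index/_analyze
{
  "analyzer": "my_analyzer",
  "text": "c++ rocks"
}
----------------------------

With this configuration, `c++` is kept as a single word before the edge
n-grams are generated, so the analysis should produce grams such as `c+` and
`c++` rather than splitting the word at the plus signs.
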
[float]
[[max-gram-limits]]
=== Limitations of the `max_gram` parameter

The `edge_ngram` tokenizer's `max_gram` value limits the character length of
tokens. When the `edge_ngram` tokenizer is used with an index analyzer, this
means search terms longer than the `max_gram` length may not match any indexed
terms.

For example, if the `max_gram` is `3`, searches for `apple` won't match the
indexed term `app`.

To account for this, you can use the
<<analysis-truncate-tokenfilter,`truncate`>> token filter with a search analyzer
to shorten search terms to the `max_gram` character length. However, this could
return irrelevant results.

For example, if the `max_gram` is `3` and search terms are truncated to three
characters, the search term `apple` is shortened to `app`. This means searches
for `apple` return any indexed terms matching `app`, such as `apply`, `snapped`,
and `apple`.

We recommend testing both approaches to see which best fits your
use case and desired search experience.

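As a rough sketch of the `truncate` approach, the search analyzer below caps
search terms at the same length as the index-time `max_gram` of `3`. The
index, analyzer, filter, and tokenizer names are illustrative, not a fixed
recipe:

[source,console]
----------------------------
PUT my_truncate_index
{
  "settings": {
    "analysis": {
      "filter": {
        "truncate_3_chars": {
          "type": "truncate",
          "length": 3
        }
      },
      "analyzer": {
        "index_3_grams": {
          "tokenizer": "gram_3_tokenizer",
          "filter": [ "lowercase" ]
        },
        "search_3_chars": {
          "tokenizer": "lowercase",
          "filter": [ "truncate_3_chars" ]
        }
      },
      "tokenizer": {
        "gram_3_tokenizer": {
          "type": "edge_ngram",
          "min_gram": 1,
          "max_gram": 3
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "title": {
        "type": "text",
        "analyzer": "index_3_grams",
        "search_analyzer": "search_3_chars"
      }
    }
  }
}
----------------------------

With this mapping, a search for `apple` on the `title` field is analyzed down
to `app`, so it can match the short indexed grams, at the cost of also matching
other terms whose first three characters are `app`, such as `apply`.
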
[float]
=== Example configuration

In this example, we configure the `edge_ngram` tokenizer to treat letters and
digits as tokens, and to produce grams with minimum length `2` and maximum
length `10`:

[source,console]
----------------------------
PUT my_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "tokenizer": "my_tokenizer"
        }
      },
      "tokenizer": {
        "my_tokenizer": {
          "type": "edge_ngram",
          "min_gram": 2,
          "max_gram": 10,
          "token_chars": [
            "letter",
            "digit"
          ]
        }
      }
    }
  }
}

POST my_index/_analyze
{
  "analyzer": "my_analyzer",
  "text": "2 Quick Foxes."
}
----------------------------

/////////////////////

[source,console-result]
----------------------------
{
  "tokens": [
    {
      "token": "Qu",
      "start_offset": 2,
      "end_offset": 4,
      "type": "word",
      "position": 0
    },
    {
      "token": "Qui",
      "start_offset": 2,
      "end_offset": 5,
      "type": "word",
      "position": 1
    },
    {
      "token": "Quic",
      "start_offset": 2,
      "end_offset": 6,
      "type": "word",
      "position": 2
    },
    {
      "token": "Quick",
      "start_offset": 2,
      "end_offset": 7,
      "type": "word",
      "position": 3
    },
    {
      "token": "Fo",
      "start_offset": 8,
      "end_offset": 10,
      "type": "word",
      "position": 4
    },
    {
      "token": "Fox",
      "start_offset": 8,
      "end_offset": 11,
      "type": "word",
      "position": 5
    },
    {
      "token": "Foxe",
      "start_offset": 8,
      "end_offset": 12,
      "type": "word",
      "position": 6
    },
    {
      "token": "Foxes",
      "start_offset": 8,
      "end_offset": 13,
      "type": "word",
      "position": 7
    }
  ]
}
----------------------------

/////////////////////

The above example produces the following terms:

[source,text]
---------------------------
[ Qu, Qui, Quic, Quick, Fo, Fox, Foxe, Foxes ]
---------------------------

Usually we recommend using the same `analyzer` at index time and at search
time. In the case of the `edge_ngram` tokenizer, the advice is different. It
only makes sense to use the `edge_ngram` tokenizer at index time, to ensure
that partial words are available for matching in the index. At search time,
just search for the terms the user has typed in, for instance: `Quick Fo`.

Below is an example of how to set up a field for _search-as-you-type_.

Note that the `max_gram` value for the index analyzer is `10`, which limits
indexed terms to 10 characters. Search terms are not truncated, meaning that
search terms longer than 10 characters may not match any indexed terms.

[source,console]
-----------------------------------
PUT my_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "autocomplete": {
          "tokenizer": "autocomplete",
          "filter": [
            "lowercase"
          ]
        },
        "autocomplete_search": {
          "tokenizer": "lowercase"
        }
      },
      "tokenizer": {
        "autocomplete": {
          "type": "edge_ngram",
          "min_gram": 2,
          "max_gram": 10,
          "token_chars": [
            "letter"
          ]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "title": {
        "type": "text",
        "analyzer": "autocomplete",
        "search_analyzer": "autocomplete_search"
      }
    }
  }
}

PUT my_index/_doc/1
{
  "title": "Quick Foxes" <1>
}

POST my_index/_refresh

GET my_index/_search
{
  "query": {
    "match": {
      "title": {
        "query": "Quick Fo", <2>
        "operator": "and"
      }
    }
  }
}
-----------------------------------

<1> The `autocomplete` analyzer indexes the terms `[qu, qui, quic, quick, fo, fox, foxe, foxes]`.
<2> The `autocomplete_search` analyzer searches for the terms `[quick, fo]`, both of which appear in the index.

/////////////////////

[source,console-result]
----------------------------
{
  "took": $body.took,
  "timed_out": false,
  "_shards": {
    "total": 1,
    "successful": 1,
    "skipped" : 0,
    "failed": 0
  },
  "hits": {
    "total" : {
      "value": 1,
      "relation": "eq"
    },
    "max_score": 0.5753642,
    "hits": [
      {
        "_index": "my_index",
        "_id": "1",
        "_score": 0.5753642,
        "_source": {
          "title": "Quick Foxes"
        }
      }
    ]
  }
}
----------------------------
// TESTRESPONSE[s/"took".*/"took": "$body.took",/]

/////////////////////

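To double-check which terms the `autocomplete` analyzer actually produces for a
given title, you can run it through the `_analyze` API on the index defined
above. This is only a verification sketch; the sample text is illustrative:

[source,console]
-----------------------------------
POST my_index/_analyze
{
  "analyzer": "autocomplete",
  "text": "Quick Foxes"
}
-----------------------------------

The returned tokens should correspond to the lowercased edge n-grams listed in
the first callout above, for example `qu`, `qui`, and `quick`.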