[[analysis-pattern-tokenizer]]
=== Pattern tokenizer
++++
<titleabbrev>Pattern</titleabbrev>
++++

The `pattern` tokenizer uses a regular expression either to split text into
terms whenever it matches a word separator, or to capture matching text as
terms.

The default pattern is `\W+`, which splits text whenever it encounters
non-word characters.

[WARNING]
.Beware of Pathological Regular Expressions
========================================

The pattern tokenizer uses
https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html[Java Regular Expressions].

A badly written regular expression could run very slowly or even throw a
StackOverflowError and cause the node it is running on to exit suddenly.

Read more about https://www.regular-expressions.info/catastrophic.html[pathological regular expressions and how to avoid them].
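
As a quick illustration (a generic regex example, not specific to
Elasticsearch): nested quantifiers are a classic source of catastrophic
backtracking. A pattern such as the following forces the engine to try
exponentially many ways to partition the input when matched against a long
run of `a` characters followed by a `b`:

[source,text]
---------------------------
(a+)+$
---------------------------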

========================================

[discrete]
=== Example output

[source,console]
---------------------------
POST _analyze
{
  "tokenizer": "pattern",
  "text": "The foo_bar_size's default is 5."
}
---------------------------

/////////////////////

[source,console-result]
----------------------------
{
  "tokens": [
    {
      "token": "The",
      "start_offset": 0,
      "end_offset": 3,
      "type": "word",
      "position": 0
    },
    {
      "token": "foo_bar_size",
      "start_offset": 4,
      "end_offset": 16,
      "type": "word",
      "position": 1
    },
    {
      "token": "s",
      "start_offset": 17,
      "end_offset": 18,
      "type": "word",
      "position": 2
    },
    {
      "token": "default",
      "start_offset": 19,
      "end_offset": 26,
      "type": "word",
      "position": 3
    },
    {
      "token": "is",
      "start_offset": 27,
      "end_offset": 29,
      "type": "word",
      "position": 4
    },
    {
      "token": "5",
      "start_offset": 30,
      "end_offset": 31,
      "type": "word",
      "position": 5
    }
  ]
}
----------------------------

/////////////////////

The above sentence would produce the following terms:

[source,text]
---------------------------
[ The, foo_bar_size, s, default, is, 5 ]
---------------------------

Note that `_` counts as a word character, so `foo_bar_size` survives as a
single term, while the apostrophe in `foo_bar_size's` is a non-word character
and splits the trailing `s` into its own term.

[discrete]
=== Configuration

The `pattern` tokenizer accepts the following parameters:

[horizontal]
`pattern`::

    A https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html[Java regular expression], defaults to `\W+`.

`flags`::

    Java regular expression https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html#field.summary[flags].
    Flags should be pipe-separated, e.g. `"CASE_INSENSITIVE|COMMENTS"`. See the
    sketch after this list for an example.

`group`::

    Which capture group to extract as tokens. Defaults to `-1` (split).
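
None of the examples below exercise `flags`, so here is a minimal,
hypothetical sketch (the index name `my-index-000002` and the pattern are
illustrative only): a tokenizer that splits on the word `and` in any letter
case.

[source,console]
----------------------------
PUT my-index-000002
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "tokenizer": "my_tokenizer"
        }
      },
      "tokenizer": {
        "my_tokenizer": {
          "type": "pattern",
          "pattern": "\\s+and\\s+", <1>
          "flags": "CASE_INSENSITIVE" <2>
        }
      }
    }
  }
}

POST my-index-000002/_analyze
{
  "analyzer": "my_analyzer",
  "text": "apples AND oranges and pears"
}
----------------------------
<1> Split wherever the word `and` appears surrounded by whitespace.
<2> The flag makes the pattern also match `AND`, `And`, and so on.

This should produce the following terms:

[source,text]
---------------------------
[ apples, oranges, pears ]
---------------------------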

[discrete]
=== Example configuration

In this example, we configure the `pattern` tokenizer to break text into
tokens when it encounters commas:

[source,console]
----------------------------
PUT my-index-000001
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "tokenizer": "my_tokenizer"
        }
      },
      "tokenizer": {
        "my_tokenizer": {
          "type": "pattern",
          "pattern": ","
        }
      }
    }
  }
}

POST my-index-000001/_analyze
{
  "analyzer": "my_analyzer",
  "text": "comma,separated,values"
}
----------------------------

/////////////////////

[source,console-result]
----------------------------
{
  "tokens": [
    {
      "token": "comma",
      "start_offset": 0,
      "end_offset": 5,
      "type": "word",
      "position": 0
    },
    {
      "token": "separated",
      "start_offset": 6,
      "end_offset": 15,
      "type": "word",
      "position": 1
    },
    {
      "token": "values",
      "start_offset": 16,
      "end_offset": 22,
      "type": "word",
      "position": 2
    }
  ]
}
----------------------------

/////////////////////

The above example produces the following terms:

[source,text]
---------------------------
[ comma, separated, values ]
---------------------------

In the next example, we configure the `pattern` tokenizer to capture values
enclosed in double quotes (ignoring embedded escaped quotes `\"`). The regex
itself looks like this:

    "((?:\\"|[^"]|\\")+)"

And reads as follows:

* A literal `"`
* Start capturing:
** A literal `\"` OR any character except `"`
** Repeat until no more characters match
* A literal closing `"`

When the pattern is specified in JSON, the `"` and `\` characters need to be
escaped, so the pattern ends up looking like:

    \"((?:\\\\\"|[^\"]|\\\\\")+)\"

[source,console]
----------------------------
PUT my-index-000001
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "tokenizer": "my_tokenizer"
        }
      },
      "tokenizer": {
        "my_tokenizer": {
          "type": "pattern",
          "pattern": "\"((?:\\\\\"|[^\"]|\\\\\")+)\"",
          "group": 1
        }
      }
    }
  }
}

POST my-index-000001/_analyze
{
  "analyzer": "my_analyzer",
  "text": "\"value\", \"value with embedded \\\" quote\""
}
----------------------------

/////////////////////

[source,console-result]
----------------------------
{
  "tokens": [
    {
      "token": "value",
      "start_offset": 1,
      "end_offset": 6,
      "type": "word",
      "position": 0
    },
    {
      "token": "value with embedded \\\" quote",
      "start_offset": 10,
      "end_offset": 38,
      "type": "word",
      "position": 1
    }
  ]
}
----------------------------

/////////////////////

The above example produces the following two terms:

[source,text]
---------------------------
[ value, value with embedded \" quote ]
---------------------------
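
Note that `group` follows standard Java regex group numbering, where group
`0` is the entire match. Configuring the tokenizer above with `"group": 0`
instead of `1` would therefore be expected to keep the surrounding quotes in
each term (a sketch of the expected output, not one of the examples above):

[source,text]
---------------------------
[ "value", "value with embedded \" quote" ]
---------------------------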