
[[analysis-pattern-tokenizer]]
=== Pattern Tokenizer

The `pattern` tokenizer uses a regular expression to either split text into
terms whenever it matches a word separator, or to capture matching text as
terms.

The default pattern is `\W+`, which splits text whenever it encounters
non-word characters.

[WARNING]
.Beware of Pathological Regular Expressions
========================================

The pattern tokenizer uses
http://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html[Java Regular Expressions].

A badly written regular expression could run very slowly or even throw a
StackOverflowError and cause the node it is running on to exit suddenly.

Read more about http://www.regular-expressions.info/catastrophic.html[pathological regular expressions and how to avoid them].
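
For instance, a pattern that nests quantifiers, like the classic example
below, can trigger catastrophic backtracking on input that almost matches
(this pattern is purely illustrative, not one you would use as a tokenizer):

[source,text]
---------------------------
(x+x+)+y
---------------------------

Against a long run of `x` characters with no trailing `y`, the matcher must
try an exponential number of ways to divide the `x` characters between the two
`x+` groups before it can fail.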

========================================

[float]
=== Example output

[source,js]
---------------------------
POST _analyze
{
  "tokenizer": "pattern",
  "text": "The foo_bar_size's default is 5."
}
---------------------------
// CONSOLE

/////////////////////

[source,js]
----------------------------
{
  "tokens": [
    {
      "token": "The",
      "start_offset": 0,
      "end_offset": 3,
      "type": "word",
      "position": 0
    },
    {
      "token": "foo_bar_size",
      "start_offset": 4,
      "end_offset": 16,
      "type": "word",
      "position": 1
    },
    {
      "token": "s",
      "start_offset": 17,
      "end_offset": 18,
      "type": "word",
      "position": 2
    },
    {
      "token": "default",
      "start_offset": 19,
      "end_offset": 26,
      "type": "word",
      "position": 3
    },
    {
      "token": "is",
      "start_offset": 27,
      "end_offset": 29,
      "type": "word",
      "position": 4
    },
    {
      "token": "5",
      "start_offset": 30,
      "end_offset": 31,
      "type": "word",
      "position": 5
    }
  ]
}
----------------------------
// TESTRESPONSE

/////////////////////

The above sentence would produce the following terms:

[source,text]
---------------------------
[ The, foo_bar_size, s, default, is, 5 ]
---------------------------

[float]
=== Configuration

The `pattern` tokenizer accepts the following parameters:

[horizontal]
`pattern`::

    A http://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html[Java regular expression], defaults to `\W+`.

`flags`::

    Java regular expression http://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html#field.summary[flags].
    Flags should be pipe-separated, e.g. `"CASE_INSENSITIVE|COMMENTS"`. See the
    sketch after this list for an example.

`group`::

    Which capture group to extract as tokens. Defaults to `-1` (split).
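
As a rough sketch of how `flags` might be configured (the index and tokenizer
names below are illustrative only), the following tokenizer splits
case-insensitively on the literal string `sep`, with `COMMENTS` enabled so the
pattern itself can carry an inline comment:

[source,js]
----------------------------
PUT my_flags_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "tokenizer": "my_tokenizer"
        }
      },
      "tokenizer": {
        "my_tokenizer": {
          "type": "pattern",
          "pattern": "sep   # whitespace and this comment are ignored",
          "flags": "CASE_INSENSITIVE|COMMENTS"
        }
      }
    }
  }
}

POST my_flags_index/_analyze
{
  "analyzer": "my_analyzer",
  "text": "oneSEPtwoSepthree"
}
----------------------------
// CONSOLE

With both flags set, this text would be split into `[ one, two, three ]`,
since `sep` matches regardless of case.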

[float]
=== Example configuration

In this example, we configure the `pattern` tokenizer to break text into
tokens when it encounters commas:

[source,js]
----------------------------
PUT my_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "tokenizer": "my_tokenizer"
        }
      },
      "tokenizer": {
        "my_tokenizer": {
          "type": "pattern",
          "pattern": ","
        }
      }
    }
  }
}

POST my_index/_analyze
{
  "analyzer": "my_analyzer",
  "text": "comma,separated,values"
}
----------------------------
// CONSOLE

/////////////////////

[source,js]
----------------------------
{
  "tokens": [
    {
      "token": "comma",
      "start_offset": 0,
      "end_offset": 5,
      "type": "word",
      "position": 0
    },
    {
      "token": "separated",
      "start_offset": 6,
      "end_offset": 15,
      "type": "word",
      "position": 1
    },
    {
      "token": "values",
      "start_offset": 16,
      "end_offset": 22,
      "type": "word",
      "position": 2
    }
  ]
}
----------------------------
// TESTRESPONSE

/////////////////////

The above example produces the following terms:

[source,text]
---------------------------
[ comma, separated, values ]
---------------------------

In the next example, we configure the `pattern` tokenizer to capture values
enclosed in double quotes (ignoring embedded escaped quotes `\"`). The regex
itself looks like this:

    "((?:\\"|[^"]|\\")+)"

And reads as follows:

* A literal `"`
* Start capturing:
** A literal `\"` OR any character except `"`
** Repeat until no more characters match
* A literal closing `"`

When the pattern is specified in JSON, the `"` and `\` characters need to be
escaped, so the pattern ends up looking like:

    \"((?:\\\\\"|[^\"]|\\\\\")+)\"

[source,js]
----------------------------
PUT my_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "tokenizer": "my_tokenizer"
        }
      },
      "tokenizer": {
        "my_tokenizer": {
          "type": "pattern",
          "pattern": "\"((?:\\\\\"|[^\"]|\\\\\")+)\"",
          "group": 1
        }
      }
    }
  }
}

POST my_index/_analyze
{
  "analyzer": "my_analyzer",
  "text": "\"value\", \"value with embedded \\\" quote\""
}
----------------------------
// CONSOLE

/////////////////////

[source,js]
----------------------------
{
  "tokens": [
    {
      "token": "value",
      "start_offset": 1,
      "end_offset": 6,
      "type": "word",
      "position": 0
    },
    {
      "token": "value with embedded \\\" quote",
      "start_offset": 10,
      "end_offset": 38,
      "type": "word",
      "position": 1
    }
  ]
}
----------------------------
// TESTRESPONSE

/////////////////////

The above example produces the following two terms:

[source,text]
---------------------------
[ value, value with embedded \" quote ]
---------------------------
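
If you wanted the tokens to keep their surrounding quotes, you could instead
set `group` to `0`, which extracts the entire match. A minimal sketch, using a
simplified pattern and assuming a version of Elasticsearch that accepts inline
tokenizer definitions in the `_analyze` request:

[source,js]
----------------------------
POST _analyze
{
  "tokenizer": {
    "type": "pattern",
    "pattern": "\"([^\"]+)\"",
    "group": 0
  },
  "text": "\"one\" \"two\""
}
----------------------------
// CONSOLE

Here the tokens would be `"one"` and `"two"`, quotes included.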