[[analysis-pattern-tokenizer]]
=== Pattern Tokenizer

The `pattern` tokenizer uses a regular expression to either split text into
terms whenever it matches a word separator, or to capture matching text as
terms.

The default pattern is `\W+`, which splits text whenever it encounters
non-word characters.

[WARNING]
.Beware of Pathological Regular Expressions
========================================

The pattern tokenizer uses
http://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html[Java Regular Expressions].

A badly written regular expression could run very slowly or even throw a
StackOverflowError and cause the node it is running on to exit suddenly.

Read more about http://www.regular-expressions.info/catastrophic.html[pathological regular expressions and how to avoid them].
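
For example, a nested quantifier like the following (a purely illustrative
pattern, not used elsewhere on this page) is a classic source of catastrophic
backtracking:

[source,text]
---------------------------
(x+x+)+y
---------------------------

Matching it against a long run of `x` characters with no trailing `y` forces
the engine to try an exponentially growing number of ways to divide the input
between the two inner `x+` groups before the match can fail.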
========================================

[float]
=== Example output

[source,console]
---------------------------
POST _analyze
{
  "tokenizer": "pattern",
  "text": "The foo_bar_size's default is 5."
}
---------------------------

/////////////////////

[source,console-result]
----------------------------
{
  "tokens": [
    {
      "token": "The",
      "start_offset": 0,
      "end_offset": 3,
      "type": "word",
      "position": 0
    },
    {
      "token": "foo_bar_size",
      "start_offset": 4,
      "end_offset": 16,
      "type": "word",
      "position": 1
    },
    {
      "token": "s",
      "start_offset": 17,
      "end_offset": 18,
      "type": "word",
      "position": 2
    },
    {
      "token": "default",
      "start_offset": 19,
      "end_offset": 26,
      "type": "word",
      "position": 3
    },
    {
      "token": "is",
      "start_offset": 27,
      "end_offset": 29,
      "type": "word",
      "position": 4
    },
    {
      "token": "5",
      "start_offset": 30,
      "end_offset": 31,
      "type": "word",
      "position": 5
    }
  ]
}
----------------------------

/////////////////////

The above sentence would produce the following terms:

[source,text]
---------------------------
[ The, foo_bar_size, s, default, is, 5 ]
---------------------------

[float]
=== Configuration

The `pattern` tokenizer accepts the following parameters:

[horizontal]
`pattern`::

    A http://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html[Java regular expression], defaults to `\W+`.
`flags`::

    Java regular expression http://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html#field.summary[flags].
    Flags should be pipe-separated, e.g. `"CASE_INSENSITIVE|COMMENTS"`.
    See the final example below for a sketch of how a flag can be applied.
`group`::

    Which capture group to extract as tokens. Defaults to `-1` (split).

[float]
=== Example configuration

In this example, we configure the `pattern` tokenizer to break text into
tokens when it encounters commas:

[source,console]
----------------------------
PUT my_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "tokenizer": "my_tokenizer"
        }
      },
      "tokenizer": {
        "my_tokenizer": {
          "type": "pattern",
          "pattern": ","
        }
      }
    }
  }
}

POST my_index/_analyze
{
  "analyzer": "my_analyzer",
  "text": "comma,separated,values"
}
----------------------------

/////////////////////

[source,console-result]
----------------------------
{
  "tokens": [
    {
      "token": "comma",
      "start_offset": 0,
      "end_offset": 5,
      "type": "word",
      "position": 0
    },
    {
      "token": "separated",
      "start_offset": 6,
      "end_offset": 15,
      "type": "word",
      "position": 1
    },
    {
      "token": "values",
      "start_offset": 16,
      "end_offset": 22,
      "type": "word",
      "position": 2
    }
  ]
}
----------------------------

/////////////////////

The above example produces the following terms:

[source,text]
---------------------------
[ comma, separated, values ]
---------------------------
In the next example, we configure the `pattern` tokenizer to capture values
enclosed in double quotes (ignoring embedded escaped quotes `\"`). The regex
itself looks like this:

    "((?:\\"|[^"]|\\")+)"

And reads as follows:

* A literal `"`
* Start capturing:
** A literal `\"` OR any character except `"`
** Repeat until no more characters match
* A literal closing `"`

When the pattern is specified in JSON, the `"` and `\` characters need to be
escaped, so the pattern ends up looking like:

    \"((?:\\\\\"|[^\"]|\\\\\")+)\"
[source,console]
----------------------------
PUT my_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "tokenizer": "my_tokenizer"
        }
      },
      "tokenizer": {
        "my_tokenizer": {
          "type": "pattern",
          "pattern": "\"((?:\\\\\"|[^\"]|\\\\\")+)\"",
          "group": 1
        }
      }
    }
  }
}

POST my_index/_analyze
{
  "analyzer": "my_analyzer",
  "text": "\"value\", \"value with embedded \\\" quote\""
}
----------------------------

/////////////////////

[source,console-result]
----------------------------
{
  "tokens": [
    {
      "token": "value",
      "start_offset": 1,
      "end_offset": 6,
      "type": "word",
      "position": 0
    },
    {
      "token": "value with embedded \\\" quote",
      "start_offset": 10,
      "end_offset": 38,
      "type": "word",
      "position": 1
    }
  ]
}
----------------------------

/////////////////////

The above example produces the following two terms:

[source,text]
---------------------------
[ value, value with embedded \" quote ]
---------------------------
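
The `flags` parameter is not used in the examples above. As a minimal sketch
(the pattern, sample text, and index, analyzer, and tokenizer names here are
purely illustrative), a case-insensitive split on the word "and" might be
configured like this:

[source,console]
----------------------------
PUT my_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "tokenizer": "my_tokenizer"
        }
      },
      "tokenizer": {
        "my_tokenizer": {
          "type": "pattern",
          "pattern": "\\s+and\\s+",
          "flags": "CASE_INSENSITIVE"
        }
      }
    }
  }
}

POST my_index/_analyze
{
  "analyzer": "my_analyzer",
  "text": "ham AND eggs and toast"
}
----------------------------

With `CASE_INSENSITIVE` set, both `AND` and `and` act as separators, so the
request above should produce the following terms:

[source,text]
---------------------------
[ ham, eggs, toast ]
---------------------------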