
[[analysis-pattern-analyzer]]
=== Pattern Analyzer

The `pattern` analyzer uses a regular expression to split the text into
terms. The regular expression should match the *token separators*, not the
tokens themselves. The regular expression defaults to `\W+` (i.e. all
non-word characters).
[float]
=== Definition

It consists of:

Tokenizer::
* <<analysis-pattern-tokenizer,Pattern Tokenizer>>

Token Filters::
* <<analysis-lowercase-tokenfilter,Lower Case Token Filter>>
* <<analysis-stop-tokenfilter,Stop Token Filter>> (disabled by default)
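
If the built-in parameters below are not flexible enough, the same behaviour
can be approximated as a `custom` analyzer assembled from the components
listed above, which can then be extended with further token filters. The
following is a sketch, not part of the reference; `rebuilt_pattern`,
`split_on_non_word`, and `my_index` are illustrative names:

[source,js]
----------------------------
PUT my_index
{
  "settings": {
    "analysis": {
      "tokenizer": {
        "split_on_non_word": {
          "type": "pattern",
          "pattern": "\\W+"
        }
      },
      "analyzer": {
        "rebuilt_pattern": {
          "type": "custom",
          "tokenizer": "split_on_non_word",
          "filter": [
            "lowercase"
          ]
        }
      }
    }
  }
}
----------------------------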
[float]
=== Example output

[source,js]
---------------------------
POST _analyze
{
  "analyzer": "pattern",
  "text": "The 2 QUICK Brown-Foxes jumped over the lazy dog's bone."
}
---------------------------
// CONSOLE
/////////////////////

[source,js]
----------------------------
{
  "tokens": [
    {
      "token": "the",
      "start_offset": 0,
      "end_offset": 3,
      "type": "word",
      "position": 0
    },
    {
      "token": "2",
      "start_offset": 4,
      "end_offset": 5,
      "type": "word",
      "position": 1
    },
    {
      "token": "quick",
      "start_offset": 6,
      "end_offset": 11,
      "type": "word",
      "position": 2
    },
    {
      "token": "brown",
      "start_offset": 12,
      "end_offset": 17,
      "type": "word",
      "position": 3
    },
    {
      "token": "foxes",
      "start_offset": 18,
      "end_offset": 23,
      "type": "word",
      "position": 4
    },
    {
      "token": "jumped",
      "start_offset": 24,
      "end_offset": 30,
      "type": "word",
      "position": 5
    },
    {
      "token": "over",
      "start_offset": 31,
      "end_offset": 35,
      "type": "word",
      "position": 6
    },
    {
      "token": "the",
      "start_offset": 36,
      "end_offset": 39,
      "type": "word",
      "position": 7
    },
    {
      "token": "lazy",
      "start_offset": 40,
      "end_offset": 44,
      "type": "word",
      "position": 8
    },
    {
      "token": "dog",
      "start_offset": 45,
      "end_offset": 48,
      "type": "word",
      "position": 9
    },
    {
      "token": "s",
      "start_offset": 49,
      "end_offset": 50,
      "type": "word",
      "position": 10
    },
    {
      "token": "bone",
      "start_offset": 51,
      "end_offset": 55,
      "type": "word",
      "position": 11
    }
  ]
}
----------------------------
// TESTRESPONSE

/////////////////////
The above sentence would produce the following terms:

[source,text]
---------------------------
[ the, 2, quick, brown, foxes, jumped, over, the, lazy, dog, s, bone ]
---------------------------
[float]
=== Configuration

The `pattern` analyzer accepts the following parameters:

[horizontal]
`pattern`::

    A http://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html[Java regular expression], defaults to `\W+`.

`flags`::

    Java regular expression http://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html#field.summary[flags].
    Flags should be pipe-separated, e.g. `"CASE_INSENSITIVE|COMMENTS"`.

`lowercase`::

    Whether terms should be lowercased. Defaults to `true`.

`max_token_length`::

    The maximum token length. If a token exceeds this length, it is split at
    `max_token_length` intervals. Defaults to `255`.

`stopwords`::

    A pre-defined stop words list like `_english_` or an array containing a
    list of stop words. Defaults to `\_none_`.

`stopwords_path`::

    The path to a file containing stop words.

See the <<analysis-stop-tokenfilter,Stop Token Filter>> for more information
about stop word configuration.
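
For instance, a `pattern` analyzer that keeps the default separator but also
removes English stop words and caps the token length might be configured as
follows. This is a sketch combining the parameters above; `my_pattern_analyzer`
and `my_index` are illustrative names:

[source,js]
----------------------------
PUT my_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_pattern_analyzer": {
          "type": "pattern",
          "pattern": "\\W+",
          "lowercase": true,
          "max_token_length": 255,
          "stopwords": "_english_"
        }
      }
    }
  }
}
----------------------------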
[float]
=== Example configuration

In this example, we configure the `pattern` analyzer to split email addresses
on non-word characters or on underscores (`\W|_`), and to lower-case the result:

[source,js]
----------------------------
PUT my_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_email_analyzer": {
          "type": "pattern",
          "pattern": "\\W|_", <1>
          "lowercase": true
        }
      }
    }
  }
}

POST my_index/_analyze
{
  "analyzer": "my_email_analyzer",
  "text": "John_Smith@foo-bar.com"
}
----------------------------
// CONSOLE

<1> The backslashes in the pattern need to be escaped when specifying the
    pattern as a JSON string.
/////////////////////

[source,js]
----------------------------
{
  "tokens": [
    {
      "token": "john",
      "start_offset": 0,
      "end_offset": 4,
      "type": "word",
      "position": 0
    },
    {
      "token": "smith",
      "start_offset": 5,
      "end_offset": 10,
      "type": "word",
      "position": 1
    },
    {
      "token": "foo",
      "start_offset": 11,
      "end_offset": 14,
      "type": "word",
      "position": 2
    },
    {
      "token": "bar",
      "start_offset": 15,
      "end_offset": 18,
      "type": "word",
      "position": 3
    },
    {
      "token": "com",
      "start_offset": 19,
      "end_offset": 22,
      "type": "word",
      "position": 4
    }
  ]
}
----------------------------
// TESTRESPONSE

/////////////////////
The above example produces the following terms:

[source,text]
---------------------------
[ john, smith, foo, bar, com ]
---------------------------
[float]
==== CamelCase tokenizer

The following more complicated example splits CamelCase text into tokens:

[source,js]
--------------------------------------------------
PUT my_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "camel": {
          "type": "pattern",
          "pattern": "([^\\p{L}\\d]+)|(?<=\\D)(?=\\d)|(?<=\\d)(?=\\D)|(?<=[\\p{L}&&[^\\p{Lu}]])(?=\\p{Lu})|(?<=\\p{Lu})(?=\\p{Lu}[\\p{L}&&[^\\p{Lu}]])"
        }
      }
    }
  }
}

GET my_index/_analyze
{
  "analyzer": "camel",
  "text": "MooseX::FTPClass2_beta"
}
--------------------------------------------------
// CONSOLE
/////////////////////

[source,js]
----------------------------
{
  "tokens": [
    {
      "token": "moose",
      "start_offset": 0,
      "end_offset": 5,
      "type": "word",
      "position": 0
    },
    {
      "token": "x",
      "start_offset": 5,
      "end_offset": 6,
      "type": "word",
      "position": 1
    },
    {
      "token": "ftp",
      "start_offset": 8,
      "end_offset": 11,
      "type": "word",
      "position": 2
    },
    {
      "token": "class",
      "start_offset": 11,
      "end_offset": 16,
      "type": "word",
      "position": 3
    },
    {
      "token": "2",
      "start_offset": 16,
      "end_offset": 17,
      "type": "word",
      "position": 4
    },
    {
      "token": "beta",
      "start_offset": 18,
      "end_offset": 22,
      "type": "word",
      "position": 5
    }
  ]
}
----------------------------
// TESTRESPONSE

/////////////////////
The above example produces the following terms:

[source,text]
---------------------------
[ moose, x, ftp, class, 2, beta ]
---------------------------
The regex above is easier to understand as:

[source,js]
--------------------------------------------------

  ([^\p{L}\d]+)                 # swallow non-letters and numbers,
| (?<=\D)(?=\d)                 # or non-number followed by number,
| (?<=\d)(?=\D)                 # or number followed by non-number,
| (?<=[ \p{L} && [^\p{Lu}]])    # or lower case
  (?=\p{Lu})                    #   followed by upper case,
| (?<=\p{Lu})                   # or upper case
  (?=\p{Lu}                     #   followed by upper case
    [\p{L}&&[^\p{Lu}]]          #   then lower case
  )

--------------------------------------------------
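
Note that Java only ignores whitespace and `#` comments in a pattern like the
one above when the `COMMENTS` flag is enabled (see the `flags` parameter
above); the single-line pattern used in the `camel` analyzer works without it.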