
[[analysis-pattern-analyzer]]
=== Pattern Analyzer

The `pattern` analyzer uses a regular expression to split the text into terms.
The regular expression should match the *token separators*, not the tokens
themselves. The regular expression defaults to `\W+` (i.e. all non-word characters).

[float]
=== Definition

It consists of:

Tokenizer::
* <<analysis-pattern-tokenizer,Pattern Tokenizer>>

Token Filters::
* <<analysis-lowercase-tokenfilter,Lower Case Token Filter>>
* <<analysis-stop-tokenfilter,Stop Token Filter>> (disabled by default)
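
If you need to customize the `pattern` analyzer beyond the configuration
parameters described below, it can be recreated as a `custom` analyzer from
these building blocks. A minimal sketch (the index name `pattern_example` and
the analyzer and tokenizer names are illustrative):

[source,js]
----------------------------
PUT pattern_example
{
  "settings": {
    "analysis": {
      "tokenizer": {
        "split_on_non_word": {
          "type": "pattern",
          "pattern": "\\W+" <1>
        }
      },
      "analyzer": {
        "rebuilt_pattern": {
          "tokenizer": "split_on_non_word",
          "filter": [
            "lowercase" <2>
          ]
        }
      }
    }
  }
}
----------------------------
// CONSOLE

<1> The default pattern, with the backslash escaped for JSON.
<2> Add the `stop` token filter here as well if you need stop words.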

[float]
=== Example output

[source,js]
---------------------------
POST _analyze
{
  "analyzer": "pattern",
  "text": "The 2 QUICK Brown-Foxes jumped over the lazy dog's bone."
}
---------------------------
// CONSOLE

/////////////////////

[source,js]
----------------------------
{
  "tokens": [
    {
      "token": "the",
      "start_offset": 0,
      "end_offset": 3,
      "type": "word",
      "position": 0
    },
    {
      "token": "2",
      "start_offset": 4,
      "end_offset": 5,
      "type": "word",
      "position": 1
    },
    {
      "token": "quick",
      "start_offset": 6,
      "end_offset": 11,
      "type": "word",
      "position": 2
    },
    {
      "token": "brown",
      "start_offset": 12,
      "end_offset": 17,
      "type": "word",
      "position": 3
    },
    {
      "token": "foxes",
      "start_offset": 18,
      "end_offset": 23,
      "type": "word",
      "position": 4
    },
    {
      "token": "jumped",
      "start_offset": 24,
      "end_offset": 30,
      "type": "word",
      "position": 5
    },
    {
      "token": "over",
      "start_offset": 31,
      "end_offset": 35,
      "type": "word",
      "position": 6
    },
    {
      "token": "the",
      "start_offset": 36,
      "end_offset": 39,
      "type": "word",
      "position": 7
    },
    {
      "token": "lazy",
      "start_offset": 40,
      "end_offset": 44,
      "type": "word",
      "position": 8
    },
    {
      "token": "dog",
      "start_offset": 45,
      "end_offset": 48,
      "type": "word",
      "position": 9
    },
    {
      "token": "s",
      "start_offset": 49,
      "end_offset": 50,
      "type": "word",
      "position": 10
    },
    {
      "token": "bone",
      "start_offset": 51,
      "end_offset": 55,
      "type": "word",
      "position": 11
    }
  ]
}
----------------------------
// TESTRESPONSE

/////////////////////

The above sentence would produce the following terms:

[source,text]
---------------------------
[ the, 2, quick, brown, foxes, jumped, over, the, lazy, dog, s, bone ]
---------------------------

[float]
=== Configuration

The `pattern` analyzer accepts the following parameters:

[horizontal]
`pattern`::

    A http://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html[Java regular expression], defaults to `\W+`.

`flags`::

    Java regular expression http://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html#field.summary[flags].
    Flags should be pipe-separated, e.g. `"CASE_INSENSITIVE|COMMENTS"`.

`lowercase`::

    Whether terms should be lowercased. Defaults to `true`.

`max_token_length`::

    The maximum token length. If a token exceeds this length, it is split at
    `max_token_length` intervals. Defaults to `255`.

`stopwords`::

    A pre-defined stop words list like `_english_`, or an array containing a
    list of stop words. Defaults to `_none_`.

`stopwords_path`::

    The path to a file containing stop words.

See the <<analysis-stop-tokenfilter,Stop Token Filter>> for more information
about stop word configuration.
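
As a quick illustration of these parameters, the following sketch keeps the
default pattern but enables the pre-defined English stop words list (the index
and analyzer names are illustrative):

[source,js]
----------------------------
PUT my_stop_example
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_stopped_pattern": {
          "type": "pattern",
          "stopwords": "_english_"
        }
      }
    }
  }
}
----------------------------
// CONSOLE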

[float]
=== Example configuration

In this example, we configure the `pattern` analyzer to split email addresses
on non-word characters or on underscores (`\W|_`), and to lower-case the result:

[source,js]
----------------------------
PUT my_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_email_analyzer": {
          "type": "pattern",
          "pattern": "\\W|_", <1>
          "lowercase": true
        }
      }
    }
  }
}

GET _cluster/health?wait_for_status=yellow

POST my_index/_analyze
{
  "analyzer": "my_email_analyzer",
  "text": "John_Smith@foo-bar.com"
}
----------------------------
// CONSOLE

<1> The backslashes in the pattern need to be escaped when specifying the
pattern as a JSON string.

/////////////////////

[source,js]
----------------------------
{
  "tokens": [
    {
      "token": "john",
      "start_offset": 0,
      "end_offset": 4,
      "type": "word",
      "position": 0
    },
    {
      "token": "smith",
      "start_offset": 5,
      "end_offset": 10,
      "type": "word",
      "position": 1
    },
    {
      "token": "foo",
      "start_offset": 11,
      "end_offset": 14,
      "type": "word",
      "position": 2
    },
    {
      "token": "bar",
      "start_offset": 15,
      "end_offset": 18,
      "type": "word",
      "position": 3
    },
    {
      "token": "com",
      "start_offset": 19,
      "end_offset": 22,
      "type": "word",
      "position": 4
    }
  ]
}
----------------------------
// TESTRESPONSE

/////////////////////

The above example produces the following terms:

[source,text]
---------------------------
[ john, smith, foo, bar, com ]
---------------------------

[float]
==== CamelCase tokenizer

The following more complicated example splits CamelCase text into tokens:

[source,js]
--------------------------------------------------
PUT my_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "camel": {
          "type": "pattern",
          "pattern": "([^\\p{L}\\d]+)|(?<=\\D)(?=\\d)|(?<=\\d)(?=\\D)|(?<=[\\p{L}&&[^\\p{Lu}]])(?=\\p{Lu})|(?<=\\p{Lu})(?=\\p{Lu}[\\p{L}&&[^\\p{Lu}]])"
        }
      }
    }
  }
}

GET _cluster/health?wait_for_status=yellow

GET my_index/_analyze
{
  "analyzer": "camel",
  "text": "MooseX::FTPClass2_beta"
}
--------------------------------------------------
// CONSOLE

/////////////////////

[source,js]
----------------------------
{
  "tokens": [
    {
      "token": "moose",
      "start_offset": 0,
      "end_offset": 5,
      "type": "word",
      "position": 0
    },
    {
      "token": "x",
      "start_offset": 5,
      "end_offset": 6,
      "type": "word",
      "position": 1
    },
    {
      "token": "ftp",
      "start_offset": 8,
      "end_offset": 11,
      "type": "word",
      "position": 2
    },
    {
      "token": "class",
      "start_offset": 11,
      "end_offset": 16,
      "type": "word",
      "position": 3
    },
    {
      "token": "2",
      "start_offset": 16,
      "end_offset": 17,
      "type": "word",
      "position": 4
    },
    {
      "token": "beta",
      "start_offset": 18,
      "end_offset": 22,
      "type": "word",
      "position": 5
    }
  ]
}
----------------------------
// TESTRESPONSE

/////////////////////

The above example produces the following terms:

[source,text]
---------------------------
[ moose, x, ftp, class, 2, beta ]
---------------------------

The regex above is easier to understand as:

[source,js]
--------------------------------------------------
  ([^\p{L}\d]+)                 # swallow non-letters and non-numbers,
| (?<=\D)(?=\d)                 # or non-number followed by number,
| (?<=\d)(?=\D)                 # or number followed by non-number,
| (?<=[ \p{L} && [^\p{Lu}]])    # or lower case
  (?=\p{Lu})                    #   followed by upper case,
| (?<=\p{Lu})                   # or upper case
  (?=\p{Lu}                     #   followed by upper case
    [\p{L}&&[^\p{Lu}]]          #   then lower case
  )
--------------------------------------------------
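
Because the `flags` parameter accepts `COMMENTS`, which makes the Java regex
engine ignore whitespace in the pattern, a spaced-out version of the pattern
can be supplied directly. A sketch of this variant (the index and analyzer
names are illustrative; the `#` comments are omitted because a JSON string
cannot span lines):

[source,js]
--------------------------------------------------
PUT camel_example
{
  "settings": {
    "analysis": {
      "analyzer": {
        "camel_readable": {
          "type": "pattern",
          "flags": "COMMENTS",
          "pattern": "([^\\p{L}\\d]+) | (?<=\\D)(?=\\d) | (?<=\\d)(?=\\D) | (?<=[\\p{L}&&[^\\p{Lu}]])(?=\\p{Lu}) | (?<=\\p{Lu})(?=\\p{Lu}[\\p{L}&&[^\\p{Lu}]])"
        }
      }
    }
  }
}
--------------------------------------------------
// CONSOLE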