[[analysis-pathhierarchy-tokenizer]]
=== Path hierarchy tokenizer
++++
<titleabbrev>Path hierarchy</titleabbrev>
++++

The `path_hierarchy` tokenizer takes a hierarchical value like a filesystem
path, splits on the path separator, and emits a term for each component in the
tree. The `path_hierarchy` tokenizer uses Lucene's
https://lucene.apache.org/core/{lucene_version_path}/analysis/common/org/apache/lucene/analysis/path/PathHierarchyTokenizer.html[PathHierarchyTokenizer]
underneath.
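The cumulative-prefix behavior can be sketched in a few lines of plain Python. This is a simplified model of the default forward mode only, and the `path_hierarchy_tokens` helper is a hypothetical name, not part of any Elasticsearch client:

```python
def path_hierarchy_tokens(text, delimiter="/"):
    """Emit one cumulative-prefix token per level of the hierarchy,
    roughly as the path_hierarchy tokenizer does in forward mode."""
    parts = text.split(delimiter)
    # Each token joins the first i+1 components; the empty component
    # produced by a leading delimiter keeps the "/" prefix on each token.
    tokens = [delimiter.join(parts[: i + 1]) for i in range(len(parts))]
    return [t for t in tokens if t]  # drop the empty leading token

print(path_hierarchy_tokens("/one/two/three"))
# ['/one', '/one/two', '/one/two/three']
```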

[discrete]
=== Example output

[source,console]
---------------------------
POST _analyze
{
  "tokenizer": "path_hierarchy",
  "text": "/one/two/three"
}
---------------------------

/////////////////////

[source,console-result]
----------------------------
{
  "tokens": [
    {
      "token": "/one",
      "start_offset": 0,
      "end_offset": 4,
      "type": "word",
      "position": 0
    },
    {
      "token": "/one/two",
      "start_offset": 0,
      "end_offset": 8,
      "type": "word",
      "position": 0
    },
    {
      "token": "/one/two/three",
      "start_offset": 0,
      "end_offset": 14,
      "type": "word",
      "position": 0
    }
  ]
}
----------------------------

/////////////////////

The above text would produce the following terms:

[source,text]
---------------------------
[ /one, /one/two, /one/two/three ]
---------------------------

[discrete]
=== Configuration

The `path_hierarchy` tokenizer accepts the following parameters:

[horizontal]
`delimiter`::
The character to use as the path separator. Defaults to `/`.

`replacement`::
An optional replacement character to use for the delimiter.
Defaults to the `delimiter`.

`buffer_size`::
The number of characters read into the term buffer in a single pass.
Defaults to `1024`. The term buffer will grow by this size until all the
text has been consumed. It is advisable not to change this setting.

`reverse`::
If `true`, uses Lucene's
http://lucene.apache.org/core/{lucene_version_path}/analysis/common/org/apache/lucene/analysis/path/ReversePathHierarchyTokenizer.html[ReversePathHierarchyTokenizer],
which is suitable for domain-like hierarchies. Defaults to `false`.

`skip`::
The number of initial tokens to skip. Defaults to `0`.

[discrete]
=== Example configuration

In this example, we configure the `path_hierarchy` tokenizer to split on `-`
characters, and to replace them with `/`. The first two tokens are skipped:

[source,console]
----------------------------
PUT my-index-000001
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "tokenizer": "my_tokenizer"
        }
      },
      "tokenizer": {
        "my_tokenizer": {
          "type": "path_hierarchy",
          "delimiter": "-",
          "replacement": "/",
          "skip": 2
        }
      }
    }
  }
}

POST my-index-000001/_analyze
{
  "analyzer": "my_analyzer",
  "text": "one-two-three-four-five"
}
----------------------------

/////////////////////

[source,console-result]
----------------------------
{
  "tokens": [
    {
      "token": "/three",
      "start_offset": 7,
      "end_offset": 13,
      "type": "word",
      "position": 0
    },
    {
      "token": "/three/four",
      "start_offset": 7,
      "end_offset": 18,
      "type": "word",
      "position": 0
    },
    {
      "token": "/three/four/five",
      "start_offset": 7,
      "end_offset": 23,
      "type": "word",
      "position": 0
    }
  ]
}
----------------------------

/////////////////////

The above example produces the following terms:

[source,text]
---------------------------
[ /three, /three/four, /three/four/five ]
---------------------------

If we were to set `reverse` to `true`, it would produce the following:

[source,text]
---------------------------
[ one/two/three/, two/three/, three/ ]
---------------------------
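The combined effect of `delimiter`, `replacement`, `skip`, and `reverse` on the outputs above can be reproduced with a small Python model. This is a rough sketch under simplifying assumptions (it ignores `buffer_size` and edge cases such as trailing delimiters), and the function name is hypothetical:

```python
def path_hierarchy_tokens(text, delimiter="/", replacement=None,
                          skip=0, reverse=False):
    """Rough model of the path_hierarchy tokenizer's parameters."""
    replacement = delimiter if replacement is None else replacement
    parts = text.split(delimiter)
    if reverse:
        # Reverse mode skips the *last* `skip` levels and emits
        # delimiter-terminated suffixes of what remains.
        kept = parts[: len(parts) - skip]
        return [replacement.join(kept[i:]) + replacement
                for i in range(len(kept))]
    # Forward mode skips the first `skip` levels; skipped levels leave
    # a leading replacement character on every token.
    kept = parts[skip:]
    prefix = replacement if skip else ""
    tokens = [prefix + replacement.join(kept[: i + 1])
              for i in range(len(kept))]
    return [t for t in tokens if t]

print(path_hierarchy_tokens("one-two-three-four-five",
                            delimiter="-", replacement="/", skip=2))
# ['/three', '/three/four', '/three/four/five']
print(path_hierarchy_tokens("one-two-three-four-five",
                            delimiter="-", replacement="/", skip=2,
                            reverse=True))
# ['one/two/three/', 'two/three/', 'three/']
```

Both calls reproduce the term lists shown above for the forward and reverse cases.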

[discrete]
[[analysis-pathhierarchy-tokenizer-detailed-examples]]
=== Detailed examples

A common use case for the `path_hierarchy` tokenizer is filtering results by
file paths. If you index a file path along with the data, using the
`path_hierarchy` tokenizer to analyze the path lets you filter the results
by different parts of the file path string.

This example configures an index with two custom analyzers and applies
those analyzers to multifields of the `file_path` text field that will
store filenames. One of the two analyzers uses reverse tokenization.
Some sample documents are then indexed to represent some file paths
for photos inside photo folders of two different users.

[source,console]
--------------------------------------------------
PUT file-path-test
{
  "settings": {
    "analysis": {
      "analyzer": {
        "custom_path_tree": {
          "tokenizer": "custom_hierarchy"
        },
        "custom_path_tree_reversed": {
          "tokenizer": "custom_hierarchy_reversed"
        }
      },
      "tokenizer": {
        "custom_hierarchy": {
          "type": "path_hierarchy",
          "delimiter": "/"
        },
        "custom_hierarchy_reversed": {
          "type": "path_hierarchy",
          "delimiter": "/",
          "reverse": "true"
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "file_path": {
        "type": "text",
        "fields": {
          "tree": {
            "type": "text",
            "analyzer": "custom_path_tree"
          },
          "tree_reversed": {
            "type": "text",
            "analyzer": "custom_path_tree_reversed"
          }
        }
      }
    }
  }
}

POST file-path-test/_doc/1
{
  "file_path": "/User/alice/photos/2017/05/16/my_photo1.jpg"
}

POST file-path-test/_doc/2
{
  "file_path": "/User/alice/photos/2017/05/16/my_photo2.jpg"
}

POST file-path-test/_doc/3
{
  "file_path": "/User/alice/photos/2017/05/16/my_photo3.jpg"
}

POST file-path-test/_doc/4
{
  "file_path": "/User/alice/photos/2017/05/15/my_photo1.jpg"
}

POST file-path-test/_doc/5
{
  "file_path": "/User/bob/photos/2017/05/16/my_photo1.jpg"
}
--------------------------------------------------

A search for a particular file path string against the text field matches all
the example documents, with Bob's documents ranking highest: `bob` is also one
of the terms created by the standard analyzer, which boosts the relevance of
Bob's documents.

[source,console]
--------------------------------------------------
GET file-path-test/_search
{
  "query": {
    "match": {
      "file_path": "/User/bob/photos/2017/05"
    }
  }
}
--------------------------------------------------
// TEST[continued]

It's simple to match or filter documents with file paths that exist within a
particular directory using the `file_path.tree` field.

[source,console]
--------------------------------------------------
GET file-path-test/_search
{
  "query": {
    "term": {
      "file_path.tree": "/User/alice/photos/2017/05/16"
    }
  }
}
--------------------------------------------------
// TEST[continued]
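A quick way to see why this exact `term` match works: the directory being filtered on is itself one of the indexed tokens. A minimal sketch of that idea (the `hierarchy_tokens` helper is hypothetical and only models the forward tokenizer for paths with a leading delimiter):

```python
def hierarchy_tokens(text, delimiter="/"):
    """Cumulative path prefixes, as the path_hierarchy tokenizer
    emits them for a path that starts with the delimiter."""
    parts = text.split(delimiter)
    # Start at 1 to skip the empty component before the leading "/".
    return [delimiter.join(parts[: i + 1]) for i in range(1, len(parts))]

tokens = hierarchy_tokens("/User/alice/photos/2017/05/16/my_photo1.jpg")
# The directory is one of the indexed terms, so an exact term
# query on file_path.tree matches the document.
print("/User/alice/photos/2017/05/16" in tokens)  # True
```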

With the `reverse` parameter for this tokenizer, it's also possible to match
from the other end of the file path, such as individual file names or a
deep-level subdirectory. The following example shows a search for all files
named `my_photo1.jpg` within any directory via the `file_path.tree_reversed`
field configured to use the `reverse` parameter in the mapping.

[source,console]
--------------------------------------------------
GET file-path-test/_search
{
  "query": {
    "term": {
      "file_path.tree_reversed": {
        "value": "my_photo1.jpg"
      }
    }
  }
}
--------------------------------------------------
// TEST[continued]

Viewing the tokens generated with both forward and reverse is instructive
in showing the tokens created for the same file path value.

[source,console]
--------------------------------------------------
POST file-path-test/_analyze
{
  "analyzer": "custom_path_tree",
  "text": "/User/alice/photos/2017/05/16/my_photo1.jpg"
}

POST file-path-test/_analyze
{
  "analyzer": "custom_path_tree_reversed",
  "text": "/User/alice/photos/2017/05/16/my_photo1.jpg"
}
--------------------------------------------------
// TEST[continued]

It's also useful to be able to filter on file paths when combined with other
types of searches, such as this example looking for any file paths containing
`16` that also must be in Alice's photo directory.

[source,console]
--------------------------------------------------
GET file-path-test/_search
{
  "query": {
    "bool": {
      "must": {
        "match": { "file_path": "16" }
      },
      "filter": {
        "term": { "file_path.tree": "/User/alice" }
      }
    }
  }
}
--------------------------------------------------
// TEST[continued]