[[indices-analyze]]
=== Analyze API
++++
<titleabbrev>Analyze</titleabbrev>
++++

Performs <<analysis,analysis>> on a text string
and returns the resulting tokens.

[source,console]
--------------------------------------------------
GET /_analyze
{
  "analyzer" : "standard",
  "text" : "Quick Brown Foxes!"
}
--------------------------------------------------
[[analyze-api-request]]
==== {api-request-title}

`GET /_analyze`

`POST /_analyze`

`GET /<index>/_analyze`

`POST /<index>/_analyze`

[[analyze-api-path-params]]
==== {api-path-parms-title}

`<index>`::
+
--
(Optional, string)
Index used to derive the analyzer.

If specified,
the `analyzer` or `<field>` parameter overrides this value.

If no analyzer or field is specified,
the analyze API uses the default analyzer for the index.

If no index is specified
or the index does not have a default analyzer,
the analyze API uses the <<analysis-standard-analyzer,standard analyzer>>.
--

[[analyze-api-query-params]]
==== {api-query-parms-title}

`analyzer`::
+
--
(Optional, string)
The name of the analyzer that should be applied to the provided `text`. This could be a
<<analysis-analyzers,built-in analyzer>>, or an analyzer that's been configured in the index.

If this parameter is not specified,
the analyze API uses the analyzer defined in the field's mapping.

If no field is specified,
the analyze API uses the default analyzer for the index.

If no index is specified,
or the index does not have a default analyzer,
the analyze API uses the <<analysis-standard-analyzer,standard analyzer>>.
--

`attributes`::
(Optional, array of strings)
Array of token attributes used to filter the output of the `explain` parameter.

`char_filter`::
(Optional, array of strings)
Array of character filters used to preprocess characters before the tokenizer.
See <<analysis-charfilters>> for a list of character filters.

`explain`::
(Optional, boolean)
If `true`, the response includes token attributes and additional details.
Defaults to `false`.
experimental:[The format of the additional detail information is labelled as experimental in Lucene and it may change in the future.]

`field`::
+
--
(Optional, string)
Field used to derive the analyzer.
To use this parameter,
you must specify an index.

If specified,
the `analyzer` parameter overrides this value.

If no field is specified,
the analyze API uses the default analyzer for the index.

If no index is specified
or the index does not have a default analyzer,
the analyze API uses the <<analysis-standard-analyzer,standard analyzer>>.
--

`filter`::
(Optional, array of strings)
Array of token filters to apply after the tokenizer.
See <<analysis-tokenfilters>> for a list of token filters.

`normalizer`::
(Optional, string)
Normalizer to use to convert text into a single token.
See <<analysis-normalizers>> for a list of normalizers.

`text`::
(Required, string or array of strings)
Text to analyze.
If an array of strings is provided, it is analyzed as a multi-value field.

`tokenizer`::
(Optional, string)
Tokenizer to use to convert text into tokens.
See <<analysis-tokenizers>> for a list of tokenizers.

[[analyze-api-example]]
==== {api-examples-title}

[[analyze-api-no-index-ex]]
===== No index specified

You can apply any of the built-in analyzers to the text string without
specifying an index.

[source,console]
--------------------------------------------------
GET /_analyze
{
  "analyzer" : "standard",
  "text" : "this is a test"
}
--------------------------------------------------
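
The <<analysis-standard-analyzer,standard analyzer>> splits the text on word
boundaries and lowercases each token, so the request returns one token per word:

[source,console-result]
--------------------------------------------------
{
  "tokens" : [
    {
      "token" : "this",
      "start_offset" : 0,
      "end_offset" : 4,
      "type" : "<ALPHANUM>",
      "position" : 0
    },
    {
      "token" : "is",
      "start_offset" : 5,
      "end_offset" : 7,
      "type" : "<ALPHANUM>",
      "position" : 1
    },
    {
      "token" : "a",
      "start_offset" : 8,
      "end_offset" : 9,
      "type" : "<ALPHANUM>",
      "position" : 2
    },
    {
      "token" : "test",
      "start_offset" : 10,
      "end_offset" : 14,
      "type" : "<ALPHANUM>",
      "position" : 3
    }
  ]
}
--------------------------------------------------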

[[analyze-api-text-array-ex]]
===== Array of text strings

If the `text` parameter is provided as an array of strings, it is analyzed as a multi-value field.

[source,console]
--------------------------------------------------
GET /_analyze
{
  "analyzer" : "standard",
  "text" : ["this is a test", "the second text"]
}
--------------------------------------------------

[[analyze-api-custom-analyzer-ex]]
===== Custom analyzer

You can use the analyze API to test a custom transient analyzer built from
tokenizers, token filters, and char filters. Token filters use the `filter`
parameter:

[source,console]
--------------------------------------------------
GET /_analyze
{
  "tokenizer" : "keyword",
  "filter" : ["lowercase"],
  "text" : "this is a test"
}
--------------------------------------------------
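
Because the `keyword` tokenizer emits the entire input as a single token, the
response should look something like this:

[source,console-result]
--------------------------------------------------
{
  "tokens" : [
    {
      "token" : "this is a test",
      "start_offset" : 0,
      "end_offset" : 14,
      "type" : "word",
      "position" : 0
    }
  ]
}
--------------------------------------------------

Character filters are added with the `char_filter` parameter: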

[source,console]
--------------------------------------------------
GET /_analyze
{
  "tokenizer" : "keyword",
  "filter" : ["lowercase"],
  "char_filter" : ["html_strip"],
  "text" : "this is a <b>test</b>"
}
--------------------------------------------------

Custom tokenizers, token filters, and character filters can be specified in the request body as follows:

[source,console]
--------------------------------------------------
GET /_analyze
{
  "tokenizer" : "whitespace",
  "filter" : ["lowercase", {"type": "stop", "stopwords": ["a", "is", "this"]}],
  "text" : "this is a test"
}
--------------------------------------------------
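
Here the custom `stop` filter removes the tokens `a`, `is`, and `this`, so the
response should contain just the one remaining token. Note that the removed
tokens still count toward the position of the token that survives:

[source,console-result]
--------------------------------------------------
{
  "tokens" : [
    {
      "token" : "test",
      "start_offset" : 10,
      "end_offset" : 14,
      "type" : "word",
      "position" : 3
    }
  ]
}
--------------------------------------------------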

[[analyze-api-specific-index-ex]]
===== Specific index

You can also run the analyze API against a specific index:

[source,console]
--------------------------------------------------
GET /analyze_sample/_analyze
{
  "text" : "this is a test"
}
--------------------------------------------------
// TEST[setup:analyze_sample]

The above will run an analysis on the "this is a test" text, using the
default index analyzer associated with the `analyze_sample` index. An
`analyzer` can also be provided to use a different analyzer:

[source,console]
--------------------------------------------------
GET /analyze_sample/_analyze
{
  "analyzer" : "whitespace",
  "text" : "this is a test"
}
--------------------------------------------------
// TEST[setup:analyze_sample]
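
The `whitespace` analyzer splits on whitespace only and does not lowercase, so
for this sample text the response should again contain one token per word:

[source,console-result]
--------------------------------------------------
{
  "tokens" : [
    {
      "token" : "this",
      "start_offset" : 0,
      "end_offset" : 4,
      "type" : "word",
      "position" : 0
    },
    {
      "token" : "is",
      "start_offset" : 5,
      "end_offset" : 7,
      "type" : "word",
      "position" : 1
    },
    {
      "token" : "a",
      "start_offset" : 8,
      "end_offset" : 9,
      "type" : "word",
      "position" : 2
    },
    {
      "token" : "test",
      "start_offset" : 10,
      "end_offset" : 14,
      "type" : "word",
      "position" : 3
    }
  ]
}
--------------------------------------------------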

[[analyze-api-field-ex]]
===== Derive analyzer from a field mapping

The analyzer can be derived based on a field mapping, for example:

[source,console]
--------------------------------------------------
GET /analyze_sample/_analyze
{
  "field" : "obj1.field1",
  "text" : "this is a test"
}
--------------------------------------------------
// TEST[setup:analyze_sample]

This will cause the analysis to happen based on the analyzer configured in the
mapping for `obj1.field1` (and if not, the default index analyzer).

[[analyze-api-normalizer-ex]]
===== Normalizer

A `normalizer` can be provided for a keyword field with a normalizer associated with the `analyze_sample` index.

[source,console]
--------------------------------------------------
GET /analyze_sample/_analyze
{
  "normalizer" : "my_normalizer",
  "text" : "BaR"
}
--------------------------------------------------
// TEST[setup:analyze_sample]

Or you can build a custom transient normalizer out of token filters and char filters:

[source,console]
--------------------------------------------------
GET /_analyze
{
  "filter" : ["lowercase"],
  "text" : "BaR"
}
--------------------------------------------------
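
Since a normalizer treats the whole input as a single token, the response
should look something like this:

[source,console-result]
--------------------------------------------------
{
  "tokens" : [
    {
      "token" : "bar",
      "start_offset" : 0,
      "end_offset" : 3,
      "type" : "word",
      "position" : 0
    }
  ]
}
--------------------------------------------------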

[[explain-analyze-api]]
===== Explain analyze

If you want to get more advanced details, set `explain` to `true` (defaults to `false`). It will output all token attributes for each token.
You can filter the token attributes you want to output by setting the `attributes` option.

NOTE: The format of the additional detail information is labelled as experimental in Lucene and it may change in the future.

[source,console]
--------------------------------------------------
GET /_analyze
{
  "tokenizer" : "standard",
  "filter" : ["snowball"],
  "text" : "detailed output",
  "explain" : true,
  "attributes" : ["keyword"] <1>
}
--------------------------------------------------
<1> Set "keyword" to output only the "keyword" attribute.

The request returns the following result:

[source,console-result]
--------------------------------------------------
{
  "detail" : {
    "custom_analyzer" : true,
    "charfilters" : [ ],
    "tokenizer" : {
      "name" : "standard",
      "tokens" : [ {
        "token" : "detailed",
        "start_offset" : 0,
        "end_offset" : 8,
        "type" : "<ALPHANUM>",
        "position" : 0
      }, {
        "token" : "output",
        "start_offset" : 9,
        "end_offset" : 15,
        "type" : "<ALPHANUM>",
        "position" : 1
      } ]
    },
    "tokenfilters" : [ {
      "name" : "snowball",
      "tokens" : [ {
        "token" : "detail",
        "start_offset" : 0,
        "end_offset" : 8,
        "type" : "<ALPHANUM>",
        "position" : 0,
        "keyword" : false <1>
      }, {
        "token" : "output",
        "start_offset" : 9,
        "end_offset" : 15,
        "type" : "<ALPHANUM>",
        "position" : 1,
        "keyword" : false <1>
      } ]
    } ]
  }
}
--------------------------------------------------
<1> Only the "keyword" attribute is returned, since "attributes" was specified in the request.

[[tokens-limit-settings]]
===== Setting a token limit

Generating an excessive amount of tokens may cause a node to run out of memory.
The following setting allows you to limit the number of tokens that can be produced:

`index.analyze.max_token_count`::
The maximum number of tokens that can be produced using the `_analyze` API.
The default value is `10000`. If more tokens than this limit are generated,
an error is thrown. The `_analyze` endpoint without a specified index
always uses `10000` as the limit. This setting allows you to control
the limit for a specific index:

[source,console]
--------------------------------------------------
PUT /analyze_sample
{
  "settings" : {
    "index.analyze.max_token_count" : 20000
  }
}
--------------------------------------------------

[source,console]
--------------------------------------------------
GET /analyze_sample/_analyze
{
  "text" : "this is a test"
}
--------------------------------------------------
// TEST[setup:analyze_sample]