
[[search-aggregations-bucket-categorize-text-aggregation]]
=== Categorize text aggregation
++++
<titleabbrev>Categorize text</titleabbrev>
++++

experimental::[]

A multi-bucket aggregation that groups semi-structured text into buckets. Each `text` field is re-analyzed
using a custom analyzer. The resulting tokens are then categorized, creating buckets of similarly formatted
text values. This aggregation works best with machine-generated text like system logs. Only the first 100 analyzed
tokens are used to categorize the text.

NOTE: If you have considerable memory allocated to your JVM but are receiving circuit breaker exceptions from this
aggregation, you may be attempting to categorize text that is poorly formatted for categorization. Consider
adding `categorization_filters` or running under <<search-aggregations-bucket-sampler-aggregation,sampler>>,
<<search-aggregations-bucket-diversified-sampler-aggregation,diversified sampler>>, or
<<search-aggregations-random-sampler-aggregation,random sampler>> to explore the created categories.

[[bucket-categorize-text-agg-syntax]]
==== Parameters

`field`::
(Required, string)
The semi-structured text field to categorize.

`max_unique_tokens`::
(Optional, integer, default: `50`)
The maximum number of unique tokens at any position up to `max_matched_tokens`.
Must be larger than 1. Smaller values use less memory and create fewer categories.
Larger values use more memory and create narrower categories.
Max allowed value is `100`.

`max_matched_tokens`::
(Optional, integer, default: `5`)
The maximum number of token positions to match on before attempting to merge categories.
Larger values use more memory and create narrower categories.
Max allowed value is `100`.
+
Example: a `max_matched_tokens` of 2 would disallow merging of the categories
[`foo` `bar` `baz`] and
[`foo` `baz` `bozo`],
as the first 2 tokens of each category are required to match.
+
NOTE: Once `max_unique_tokens` is reached at a given position, a new `*` token is
added and all new tokens at that position are matched by the `*` token.

`similarity_threshold`::
(Optional, integer, default: `50`)
The minimum percentage of tokens that must match for text to be added to the
category bucket.
Must be between 1 and 100. Larger values increase memory usage and create
narrower categories.

`categorization_filters`::
(Optional, array of strings)
This property expects an array of regular expressions. The expressions
are used to filter out matching sequences from the categorization field values.
You can use this functionality to fine tune the categorization by excluding
sequences from consideration when categories are defined. For example, you can
exclude SQL statements that appear in your log files. This
property cannot be used at the same time as `categorization_analyzer`. If you
only want to define simple regular expression filters that are applied prior to
tokenization, setting this property is the easiest method. If you also want to
customize the tokenizer or post-tokenization filtering, use the
`categorization_analyzer` property instead and include the filters as
`pattern_replace` character filters.

`categorization_analyzer`::
(Optional, object or string)
The categorization analyzer specifies how the text is analyzed and tokenized before
being categorized. The syntax is very similar to that used to define the `analyzer` in the
<<indices-analyze,Analyze endpoint>>. This
property cannot be used at the same time as `categorization_filters`.
+
The `categorization_analyzer` field can be specified either as a string or as an
object. If it is a string it must refer to a
<<analysis-analyzers,built-in analyzer>> or one added by another plugin. If it
is an object it has the following properties (a sketch follows the property list):
+
.Properties of `categorization_analyzer`
[%collapsible%open]
=====
`char_filter`::::
(array of strings or objects)
include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=char-filter]

`tokenizer`::::
(string or object)
include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=tokenizer]

`filter`::::
(array of strings or objects)
include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=filter]
=====
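+
For instance, here is a minimal sketch (not one of this page's original
examples) of `categorization_analyzer` supplied as an object. It uses a
`pattern_replace` character filter roughly equivalent to the
`categorization_filters` examples below, plus an explicit `whitespace`
tokenizer:
+
[source,console]
--------------------------------------------------
POST log-messages/_search?filter_path=aggregations
{
  "aggs": {
    "categories": {
      "categorize_text": {
        "field": "message",
        "categorization_analyzer": {
          "char_filter": [
            {
              "type": "pattern_replace",
              "pattern": "\\w+\\_\\d{3}"
            }
          ],
          "tokenizer": "whitespace"
        }
      }
    }
  }
}
--------------------------------------------------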

`shard_size`::
(Optional, integer)
The number of categorization buckets to return from each shard before merging
all the results.

`size`::
(Optional, integer, default: `10`)
The number of buckets to return.

`min_doc_count`::
(Optional, integer)
The minimum number of documents a bucket must match for it to be returned in
the results.

`shard_min_doc_count`::
(Optional, integer)
The minimum number of documents a bucket must match on a shard for it to be
returned before merging.
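
As an illustrative sketch only (these parameter values are not taken from the
examples below), the bucket-control parameters combine like this:

[source,console]
--------------------------------------------------
POST log-messages/_search?filter_path=aggregations
{
  "aggs": {
    "categories": {
      "categorize_text": {
        "field": "message",
        "size": 5,          <1>
        "min_doc_count": 2, <2>
        "shard_size": 25    <3>
      }
    }
  }
}
--------------------------------------------------
<1> Return at most 5 category buckets.
<2> Omit categories matching fewer than 2 documents.
<3> Each shard contributes up to 25 candidate categories before the results are merged.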

==== Basic use

WARNING: Re-analyzing _large_ result sets will require a lot of time and memory. This aggregation should be
used in conjunction with <<async-search, Async search>>. Additionally, you may consider
using the aggregation as a child of either the <<search-aggregations-bucket-sampler-aggregation,sampler>> or
<<search-aggregations-bucket-diversified-sampler-aggregation,diversified sampler>> aggregation,
as sketched below. This will typically improve speed and memory use.
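
For example, a minimal sketch (assuming the same `log-messages` data as the
examples below) of running the aggregation under a `sampler` aggregation to
bound how many documents are re-analyzed:

[source,console]
--------------------------------------------------
POST log-messages/_search?filter_path=aggregations
{
  "aggs": {
    "sample": {
      "sampler": {
        "shard_size": 100 <1>
      },
      "aggs": {
        "categories": {
          "categorize_text": {
            "field": "message"
          }
        }
      }
    }
  }
}
--------------------------------------------------
<1> Categorize only the top-scoring 100 documents per shard, which bounds time and memory use.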

A basic example:

[source,console]
--------------------------------------------------
POST log-messages/_search?filter_path=aggregations
{
  "aggs": {
    "categories": {
      "categorize_text": {
        "field": "message"
      }
    }
  }
}
--------------------------------------------------
// TEST[setup:categorize_text]

Response:

[source,console-result]
--------------------------------------------------
{
  "aggregations" : {
    "categories" : {
      "buckets" : [
        {
          "doc_count" : 3,
          "key" : "Node shutting down"
        },
        {
          "doc_count" : 1,
          "key" : "Node starting up"
        },
        {
          "doc_count" : 1,
          "key" : "User foo_325 logging on"
        },
        {
          "doc_count" : 1,
          "key" : "User foo_864 logged off"
        }
      ]
    }
  }
}
--------------------------------------------------

Here is an example using `categorization_filters`:

[source,console]
--------------------------------------------------
POST log-messages/_search?filter_path=aggregations
{
  "aggs": {
    "categories": {
      "categorize_text": {
        "field": "message",
        "categorization_filters": ["\\w+\\_\\d{3}"] <1>
      }
    }
  }
}
--------------------------------------------------
// TEST[setup:categorize_text]
<1> The filters to apply to the analyzed tokens. They filter
out tokens like `bar_123`.

Note how the `foo_<number>` tokens are not part of the
category results:

[source,console-result]
--------------------------------------------------
{
  "aggregations" : {
    "categories" : {
      "buckets" : [
        {
          "doc_count" : 3,
          "key" : "Node shutting down"
        },
        {
          "doc_count" : 1,
          "key" : "Node starting up"
        },
        {
          "doc_count" : 1,
          "key" : "User logged off"
        },
        {
          "doc_count" : 1,
          "key" : "User logging on"
        }
      ]
    }
  }
}
--------------------------------------------------

Here is an example using `categorization_filters` together with the
category-matching parameters.
The default analyzer is a whitespace analyzer with a custom token filter
that filters out tokens starting with a number.
However, some tokens may be known to be highly variable (formatted usernames,
email addresses, and so on). In that case, it is good to supply custom
`categorization_filters` to filter out those tokens and produce better
categories. These filters also reduce memory usage, as fewer tokens are held
in memory for the categories.

[source,console]
--------------------------------------------------
POST log-messages/_search?filter_path=aggregations
{
  "aggs": {
    "categories": {
      "categorize_text": {
        "field": "message",
        "categorization_filters": ["\\w+\\_\\d{3}"], <1>
        "max_matched_tokens": 2, <2>
        "similarity_threshold": 30 <3>
      }
    }
  }
}
--------------------------------------------------
// TEST[setup:categorize_text]
<1> The filters to apply to the analyzed tokens. They filter
out tokens like `bar_123`.
<2> Require the first 2 tokens to match before log categories attempt to merge.
<3> Require 30% of the tokens to match before expanding a log category
to add a new log entry.

The resulting categories are now broad, matching the first token
and merging the log groups:

[source,console-result]
--------------------------------------------------
{
  "aggregations" : {
    "categories" : {
      "buckets" : [
        {
          "doc_count" : 4,
          "key" : "Node *"
        },
        {
          "doc_count" : 2,
          "key" : "User *"
        }
      ]
    }
  }
}
--------------------------------------------------

This aggregation can have both sub-aggregations and itself be a sub-aggregation.
This allows gathering the top daily categories and the top sample document for
each category, as below.

[source,console]
--------------------------------------------------
POST log-messages/_search?filter_path=aggregations
{
  "aggs": {
    "daily": {
      "date_histogram": {
        "field": "time",
        "fixed_interval": "1d"
      },
      "aggs": {
        "categories": {
          "categorize_text": {
            "field": "message",
            "categorization_filters": ["\\w+\\_\\d{3}"]
          },
          "aggs": {
            "hit": {
              "top_hits": {
                "size": 1,
                "sort": ["time"],
                "_source": "message"
              }
            }
          }
        }
      }
    }
  }
}
--------------------------------------------------
// TEST[setup:categorize_text]

[source,console-result]
--------------------------------------------------
{
  "aggregations" : {
    "daily" : {
      "buckets" : [
        {
          "key_as_string" : "2016-02-07T00:00:00.000Z",
          "key" : 1454803200000,
          "doc_count" : 3,
          "categories" : {
            "buckets" : [
              {
                "doc_count" : 2,
                "key" : "Node shutting down",
                "hit" : {
                  "hits" : {
                    "total" : {
                      "value" : 2,
                      "relation" : "eq"
                    },
                    "max_score" : null,
                    "hits" : [
                      {
                        "_index" : "log-messages",
                        "_id" : "1",
                        "_score" : null,
                        "_source" : {
                          "message" : "2016-02-07T00:00:00+0000 Node 3 shutting down"
                        },
                        "sort" : [
                          1454803260000
                        ]
                      }
                    ]
                  }
                }
              },
              {
                "doc_count" : 1,
                "key" : "Node starting up",
                "hit" : {
                  "hits" : {
                    "total" : {
                      "value" : 1,
                      "relation" : "eq"
                    },
                    "max_score" : null,
                    "hits" : [
                      {
                        "_index" : "log-messages",
                        "_id" : "2",
                        "_score" : null,
                        "_source" : {
                          "message" : "2016-02-07T00:00:00+0000 Node 5 starting up"
                        },
                        "sort" : [
                          1454803320000
                        ]
                      }
                    ]
                  }
                }
              }
            ]
          }
        },
        {
          "key_as_string" : "2016-02-08T00:00:00.000Z",
          "key" : 1454889600000,
          "doc_count" : 3,
          "categories" : {
            "buckets" : [
              {
                "doc_count" : 1,
                "key" : "Node shutting down",
                "hit" : {
                  "hits" : {
                    "total" : {
                      "value" : 1,
                      "relation" : "eq"
                    },
                    "max_score" : null,
                    "hits" : [
                      {
                        "_index" : "log-messages",
                        "_id" : "4",
                        "_score" : null,
                        "_source" : {
                          "message" : "2016-02-08T00:00:00+0000 Node 5 shutting down"
                        },
                        "sort" : [
                          1454889660000
                        ]
                      }
                    ]
                  }
                }
              },
              {
                "doc_count" : 1,
                "key" : "User logged off",
                "hit" : {
                  "hits" : {
                    "total" : {
                      "value" : 1,
                      "relation" : "eq"
                    },
                    "max_score" : null,
                    "hits" : [
                      {
                        "_index" : "log-messages",
                        "_id" : "6",
                        "_score" : null,
                        "_source" : {
                          "message" : "2016-02-08T00:00:00+0000 User foo_864 logged off"
                        },
                        "sort" : [
                          1454889840000
                        ]
                      }
                    ]
                  }
                }
              },
              {
                "doc_count" : 1,
                "key" : "User logging on",
                "hit" : {
                  "hits" : {
                    "total" : {
                      "value" : 1,
                      "relation" : "eq"
                    },
                    "max_score" : null,
                    "hits" : [
                      {
                        "_index" : "log-messages",
                        "_id" : "5",
                        "_score" : null,
                        "_source" : {
                          "message" : "2016-02-08T00:00:00+0000 User foo_325 logging on"
                        },
                        "sort" : [
                          1454889720000
                        ]
                      }
                    ]
                  }
                }
              }
            ]
          }
        }
      ]
    }
  }
}
--------------------------------------------------