[[search-aggregations-bucket-categorize-text-aggregation]]
=== Categorize text aggregation
++++
<titleabbrev>Categorize text</titleabbrev>
++++

experimental::[]

A multi-bucket aggregation that groups semi-structured text into buckets. Each `text` field is re-analyzed
using a custom analyzer. The resulting tokens are then categorized, creating buckets of similarly formatted
text values. This aggregation works best with machine-generated text like system logs. Only the first 100 analyzed
tokens are used to categorize the text.

NOTE: If you have considerable memory allocated to your JVM but are receiving circuit breaker exceptions from this
aggregation, you may be attempting to categorize text that is poorly formatted for categorization. Consider
adding `categorization_filters` or running under <<search-aggregations-bucket-sampler-aggregation,sampler>>,
<<search-aggregations-bucket-diversified-sampler-aggregation,diversified sampler>>, or
<<search-aggregations-random-sampler-aggregation,random sampler>> to explore the created categories.

NOTE: The algorithm used for categorization was completely changed in version 8.3.0. As a result this aggregation
will not work in a mixed version cluster where some nodes are on version 8.3.0 or higher and others are
on a version older than 8.3.0. Upgrade all nodes in your cluster to the same version if you experience
an error related to this change.

[[bucket-categorize-text-agg-syntax]]
==== Parameters

`categorization_analyzer`::
(Optional, object or string)
The categorization analyzer specifies how the text is analyzed and tokenized before
being categorized. The syntax is very similar to that used to define the `analyzer` in the
<<indices-analyze,Analyze endpoint>>. This property cannot be used at the same time as
`categorization_filters`. An example that specifies `categorization_analyzer` as an object
follows this parameter list.
+
The `categorization_analyzer` field can be specified either as a string or as an
object. If it is a string it must refer to a
<<analysis-analyzers,built-in analyzer>> or one added by another plugin. If it
is an object it has the following properties:
+
.Properties of `categorization_analyzer`
[%collapsible%open]
=====
`char_filter`::::
(array of strings or objects)
include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=char-filter]
`tokenizer`::::
(string or object)
include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=tokenizer]
`filter`::::
(array of strings or objects)
include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=filter]
=====

`categorization_filters`::
(Optional, array of strings)
This property expects an array of regular expressions. The expressions
are used to filter out matching sequences from the categorization field values.
You can use this functionality to fine-tune the categorization by excluding
sequences from consideration when categories are defined. For example, you can
exclude SQL statements that appear in your log files. This
property cannot be used at the same time as `categorization_analyzer`. If you
only want to define simple regular expression filters that are applied prior to
tokenization, setting this property is the easiest method. If you also want to
customize the tokenizer or post-tokenization filtering, use the
`categorization_analyzer` property instead and include the filters as
`pattern_replace` character filters, as in the example that follows this parameter list.

`field`::
(Required, string)
The semi-structured text field to categorize.

`max_matched_tokens`::
(Optional, integer)
This parameter does nothing now, but is permitted for compatibility with the original
pre-8.3.0 implementation.

`max_unique_tokens`::
(Optional, integer)
This parameter does nothing now, but is permitted for compatibility with the original
pre-8.3.0 implementation.

`min_doc_count`::
(Optional, integer)
The minimum number of documents in a bucket for it to be returned in the results.

`shard_min_doc_count`::
(Optional, integer)
The minimum number of documents in a bucket for it to be returned from the shard before
merging.

`shard_size`::
(Optional, integer)
The number of categorization buckets to return from each shard before merging
all the results.

`similarity_threshold`::
(Optional, integer, default: `70`)
The minimum percentage of token weight that must match for text to be added to the
category bucket.
Must be between 1 and 100. The larger the value, the narrower the categories.
Larger values also increase memory usage.

`size`::
(Optional, integer, default: `10`)
The number of buckets to return.
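
As referenced in the `categorization_analyzer` and `categorization_filters` descriptions above, the
following is a minimal sketch of specifying `categorization_analyzer` as an object. It reuses the
`ml_standard` tokenizer and `first_line_with_letters` character filter that the default analyzer is
described as using later on this page, and adds a `pattern_replace` character filter equivalent to the
`categorization_filters` examples below. The index and field names are purely illustrative.

[source,console]
--------------------------------------------------
POST log-messages/_search?filter_path=aggregations
{
  "aggs": {
    "categories": {
      "categorize_text": {
        "field": "message",
        "categorization_analyzer": {
          "char_filter": [
            "first_line_with_letters",
            {
              "type": "pattern_replace",
              "pattern": "\\w+\\_\\d{3}",
              "replacement": ""
            }
          ],
          "tokenizer": "ml_standard",
          "filter": [ "lowercase" ]
        }
      }
    }
  }
}
--------------------------------------------------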

[[bucket-categorize-text-agg-response]]
==== Response body

`key`::
(string)
Consists of the tokens (extracted by the `categorization_analyzer`)
that are common to all values of the input field included in the category.

`doc_count`::
(integer)
Number of documents matching the category.

`max_matching_length`::
(integer)
Categories from short messages containing few tokens may also match
categories containing many tokens derived from much longer messages.
`max_matching_length` is an indication of the maximum length of messages
that should be considered to belong to the category. When searching for
messages that match the category, any messages longer than
`max_matching_length` should be excluded. Use this field to prevent a
search for members of a category of short messages from matching much longer
ones.

`regex`::
(string)
A regular expression that will match all values of the input field included
in the category. It is possible that the `regex` does not incorporate every
term in `key`, if ordering varies between the values included in the
category. However, in simple cases the `regex` will be the ordered terms
concatenated into a regular expression that allows for arbitrary sections
in between them. It is not recommended to use the `regex` as the primary
mechanism for searching for the original documents that were categorized.
Searching with a regular expression is very slow. Instead, the terms in the
`key` field should be used to search for matching documents, as a terms
search can use the inverted index and hence be much faster (see the example
below). However, there may be situations where it is useful to use the
`regex` field to test whether a small set of messages that have not been
indexed match the category, or to confirm that the terms in the `key` occur
in the correct order in all the matched documents.
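
For example, a minimal sketch of finding the documents behind a category by searching the original
field for the category's `key` terms. The index name, field name, and category key are taken from the
examples later on this page and are purely illustrative:

[source,console]
--------------------------------------------------
GET log-messages/_search
{
  "query": {
    "match": {
      "message": {
        "query": "Node shutting down",
        "operator": "and"
      }
    }
  }
}
--------------------------------------------------

If a category of short messages must not match much longer messages, the hits can additionally be
filtered on message length against the category's `max_matching_length`.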

==== Basic use

WARNING: Re-analyzing _large_ result sets will require a lot of time and memory. This aggregation should be
used in conjunction with <<async-search, Async search>>. Additionally, you may consider
using the aggregation as a child of either the <<search-aggregations-bucket-sampler-aggregation,sampler>> or
<<search-aggregations-bucket-diversified-sampler-aggregation,diversified sampler>> aggregation,
as sketched below. This will typically improve speed and memory use.
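
As an illustration of that advice, here is a minimal sketch that runs `categorize_text` as a child of a
`sampler` aggregation, so only a limited number of top-scoring documents per shard are re-analyzed. The
`shard_size` of 500 is an arbitrary illustrative value, and the index and field names match the examples
below.

[source,console]
--------------------------------------------------
POST log-messages/_search?filter_path=aggregations
{
  "aggs": {
    "sample": {
      "sampler": {
        "shard_size": 500
      },
      "aggs": {
        "categories": {
          "categorize_text": {
            "field": "message"
          }
        }
      }
    }
  }
}
--------------------------------------------------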

Example:

[source,console]
--------------------------------------------------
POST log-messages/_search?filter_path=aggregations
{
  "aggs": {
    "categories": {
      "categorize_text": {
        "field": "message"
      }
    }
  }
}
--------------------------------------------------
// TEST[setup:categorize_text]

Response:

[source,console-result]
--------------------------------------------------
{
  "aggregations" : {
    "categories" : {
      "buckets" : [
        {
          "doc_count" : 3,
          "key" : "Node shutting down",
          "regex" : ".*?Node.+?shutting.+?down.*?",
          "max_matching_length" : 49
        },
        {
          "doc_count" : 1,
          "key" : "Node starting up",
          "regex" : ".*?Node.+?starting.+?up.*?",
          "max_matching_length" : 47
        },
        {
          "doc_count" : 1,
          "key" : "User foo_325 logging on",
          "regex" : ".*?User.+?foo_325.+?logging.+?on.*?",
          "max_matching_length" : 52
        },
        {
          "doc_count" : 1,
          "key" : "User foo_864 logged off",
          "regex" : ".*?User.+?foo_864.+?logged.+?off.*?",
          "max_matching_length" : 52
        }
      ]
    }
  }
}
--------------------------------------------------

Here is an example using `categorization_filters`:

[source,console]
--------------------------------------------------
POST log-messages/_search?filter_path=aggregations
{
  "aggs": {
    "categories": {
      "categorize_text": {
        "field": "message",
        "categorization_filters": ["\\w+\\_\\d{3}"] <1>
      }
    }
  }
}
--------------------------------------------------
// TEST[setup:categorize_text]
<1> The filters to apply to the analyzed tokens. It filters
out tokens like `bar_123`.

Note how the `foo_<number>` tokens are not part of the
category results:

[source,console-result]
--------------------------------------------------
{
  "aggregations" : {
    "categories" : {
      "buckets" : [
        {
          "doc_count" : 3,
          "key" : "Node shutting down",
          "regex" : ".*?Node.+?shutting.+?down.*?",
          "max_matching_length" : 49
        },
        {
          "doc_count" : 1,
          "key" : "Node starting up",
          "regex" : ".*?Node.+?starting.+?up.*?",
          "max_matching_length" : 47
        },
        {
          "doc_count" : 1,
          "key" : "User logged off",
          "regex" : ".*?User.+?logged.+?off.*?",
          "max_matching_length" : 52
        },
        {
          "doc_count" : 1,
          "key" : "User logging on",
          "regex" : ".*?User.+?logging.+?on.*?",
          "max_matching_length" : 52
        }
      ]
    }
  }
}
--------------------------------------------------

Here is an example using `categorization_filters` together with a custom `similarity_threshold`.

The default analyzer uses the `ml_standard` tokenizer, which is similar to a whitespace tokenizer
but filters out tokens that could be interpreted as hexadecimal numbers. The default analyzer
also uses the `first_line_with_letters` character filter, so that only the first meaningful line
of multi-line messages is considered.

However, it may be that a token is a known highly-variable token (formatted usernames, emails, etc.). In that case, it is good to supply
custom `categorization_filters` to filter out those tokens for better categories. These filters may also reduce memory usage as fewer
tokens are held in memory for the categories. (If there are sufficient examples of different usernames, emails, etc., then
categories will form that naturally discard them as variables, but for small input data where only one example exists this won't
happen.)

[source,console]
--------------------------------------------------
POST log-messages/_search?filter_path=aggregations
{
  "aggs": {
    "categories": {
      "categorize_text": {
        "field": "message",
        "categorization_filters": ["\\w+\\_\\d{3}"], <1>
        "similarity_threshold": 11 <2>
      }
    }
  }
}
--------------------------------------------------
// TEST[setup:categorize_text]
<1> The filters to apply to the analyzed tokens. It filters
out tokens like `bar_123`.
<2> Require 11% of token weight to match before adding a message to an
existing category rather than creating a new one.

The resulting categories are now very broad, merging the log groups.
(A `similarity_threshold` of 11% is generally too low. Settings over
50% are usually better.)

[source,console-result]
--------------------------------------------------
{
  "aggregations" : {
    "categories" : {
      "buckets" : [
        {
          "doc_count" : 4,
          "key" : "Node",
          "regex" : ".*?Node.*?",
          "max_matching_length" : 49
        },
        {
          "doc_count" : 2,
          "key" : "User",
          "regex" : ".*?User.*?",
          "max_matching_length" : 52
        }
      ]
    }
  }
}
--------------------------------------------------

This aggregation can have both sub-aggregations and itself be a sub-aggregation. This allows gathering the top daily categories and the
top sample document for each category, as shown below.

[source,console]
--------------------------------------------------
POST log-messages/_search?filter_path=aggregations
{
  "aggs": {
    "daily": {
      "date_histogram": {
        "field": "time",
        "fixed_interval": "1d"
      },
      "aggs": {
        "categories": {
          "categorize_text": {
            "field": "message",
            "categorization_filters": ["\\w+\\_\\d{3}"]
          },
          "aggs": {
            "hit": {
              "top_hits": {
                "size": 1,
                "sort": ["time"],
                "_source": "message"
              }
            }
          }
        }
      }
    }
  }
}
--------------------------------------------------
// TEST[setup:categorize_text]

[source,console-result]
--------------------------------------------------
{
  "aggregations" : {
    "daily" : {
      "buckets" : [
        {
          "key_as_string" : "2016-02-07T00:00:00.000Z",
          "key" : 1454803200000,
          "doc_count" : 3,
          "categories" : {
            "buckets" : [
              {
                "doc_count" : 2,
                "key" : "Node shutting down",
                "regex" : ".*?Node.+?shutting.+?down.*?",
                "max_matching_length" : 49,
                "hit" : {
                  "hits" : {
                    "total" : {
                      "value" : 2,
                      "relation" : "eq"
                    },
                    "max_score" : null,
                    "hits" : [
                      {
                        "_index" : "log-messages",
                        "_id" : "1",
                        "_score" : null,
                        "_source" : {
                          "message" : "2016-02-07T00:00:00+0000 Node 3 shutting down"
                        },
                        "sort" : [
                          1454803260000
                        ]
                      }
                    ]
                  }
                }
              },
              {
                "doc_count" : 1,
                "key" : "Node starting up",
                "regex" : ".*?Node.+?starting.+?up.*?",
                "max_matching_length" : 47,
                "hit" : {
                  "hits" : {
                    "total" : {
                      "value" : 1,
                      "relation" : "eq"
                    },
                    "max_score" : null,
                    "hits" : [
                      {
                        "_index" : "log-messages",
                        "_id" : "2",
                        "_score" : null,
                        "_source" : {
                          "message" : "2016-02-07T00:00:00+0000 Node 5 starting up"
                        },
                        "sort" : [
                          1454803320000
                        ]
                      }
                    ]
                  }
                }
              }
            ]
          }
        },
        {
          "key_as_string" : "2016-02-08T00:00:00.000Z",
          "key" : 1454889600000,
          "doc_count" : 3,
          "categories" : {
            "buckets" : [
              {
                "doc_count" : 1,
                "key" : "Node shutting down",
                "regex" : ".*?Node.+?shutting.+?down.*?",
                "max_matching_length" : 49,
                "hit" : {
                  "hits" : {
                    "total" : {
                      "value" : 1,
                      "relation" : "eq"
                    },
                    "max_score" : null,
                    "hits" : [
                      {
                        "_index" : "log-messages",
                        "_id" : "4",
                        "_score" : null,
                        "_source" : {
                          "message" : "2016-02-08T00:00:00+0000 Node 5 shutting down"
                        },
                        "sort" : [
                          1454889660000
                        ]
                      }
                    ]
                  }
                }
              },
              {
                "doc_count" : 1,
                "key" : "User logged off",
                "regex" : ".*?User.+?logged.+?off.*?",
                "max_matching_length" : 52,
                "hit" : {
                  "hits" : {
                    "total" : {
                      "value" : 1,
                      "relation" : "eq"
                    },
                    "max_score" : null,
                    "hits" : [
                      {
                        "_index" : "log-messages",
                        "_id" : "6",
                        "_score" : null,
                        "_source" : {
                          "message" : "2016-02-08T00:00:00+0000 User foo_864 logged off"
                        },
                        "sort" : [
                          1454889840000
                        ]
                      }
                    ]
                  }
                }
              },
              {
                "doc_count" : 1,
                "key" : "User logging on",
                "regex" : ".*?User.+?logging.+?on.*?",
                "max_matching_length" : 52,
                "hit" : {
                  "hits" : {
                    "total" : {
                      "value" : 1,
                      "relation" : "eq"
                    },
                    "max_score" : null,
                    "hits" : [
                      {
                        "_index" : "log-messages",
                        "_id" : "5",
                        "_score" : null,
                        "_source" : {
                          "message" : "2016-02-08T00:00:00+0000 User foo_325 logging on"
                        },
                        "sort" : [
                          1454889720000
                        ]
                      }
                    ]
                  }
                }
              }
            ]
          }
        }
      ]
    }
  }
}
--------------------------------------------------