[[search-profile]]
=== Profile API

WARNING: The Profile API is a debugging tool and adds significant overhead to search execution.

Provides detailed timing information about the execution of individual
components in a search request.

[[search-profile-api-desc]]
==== {api-description-title}
The Profile API gives the user insight into how search requests are executed at
a low level so that the user can understand why certain requests are slow, and
take steps to improve them. Note that the Profile API,
<<profile-limitations, amongst other things>>, doesn't measure network latency,
time spent in the search fetch phase, time the request spends in queues, or
time spent merging shard responses on the coordinating node.

The output from the Profile API is *very* verbose, especially for complicated
requests executed across many shards. Pretty-printing the response is
recommended to help understand the output.
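
For example, appending the standard `?pretty` query string parameter (a
convenience flag available on any {es} API) re-formats the JSON response for
readability:

[source,console]
--------------------------------------------------
GET /twitter/_search?pretty
{
  "profile": true,
  "query" : {
    "match" : { "message" : "some number" }
  }
}
--------------------------------------------------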
[[search-profile-api-example]]
==== {api-examples-title}

Any `_search` request can be profiled by adding a top-level `profile` parameter:

[source,console]
--------------------------------------------------
GET /twitter/_search
{
  "profile": true,<1>
  "query" : {
    "match" : { "message" : "some number" }
  }
}
--------------------------------------------------
// TEST[setup:twitter]

<1> Setting the top-level `profile` parameter to `true` will enable profiling
for the search.

The API returns the following result:
[source,console-result]
--------------------------------------------------
{
  "took": 25,
  "timed_out": false,
  "_shards": {
    "total": 1,
    "successful": 1,
    "skipped": 0,
    "failed": 0
  },
  "hits": {
    "total": {
      "value": 4,
      "relation": "eq"
    },
    "max_score": 0.5093388,
    "hits": [...] <1>
  },
  "profile": {
    "shards": [
      {
        "id": "[2aE02wS1R8q_QFnYu6vDVQ][twitter][0]",
        "searches": [
          {
            "query": [
              {
                "type": "BooleanQuery",
                "description": "message:some message:number",
                "time_in_nanos": 1873811,
                "breakdown": {
                  "score": 51306,
                  "score_count": 4,
                  "build_scorer": 2935582,
                  "build_scorer_count": 1,
                  "match": 0,
                  "match_count": 0,
                  "create_weight": 919297,
                  "create_weight_count": 1,
                  "next_doc": 53876,
                  "next_doc_count": 5,
                  "advance": 0,
                  "advance_count": 0,
                  "compute_max_score": 0,
                  "compute_max_score_count": 0,
                  "shallow_advance": 0,
                  "shallow_advance_count": 0,
                  "set_min_competitive_score": 0,
                  "set_min_competitive_score_count": 0
                },
                "children": [
                  {
                    "type": "TermQuery",
                    "description": "message:some",
                    "time_in_nanos": 391943,
                    "breakdown": {
                      "score": 28776,
                      "score_count": 4,
                      "build_scorer": 784451,
                      "build_scorer_count": 1,
                      "match": 0,
                      "match_count": 0,
                      "create_weight": 1669564,
                      "create_weight_count": 1,
                      "next_doc": 10111,
                      "next_doc_count": 5,
                      "advance": 0,
                      "advance_count": 0,
                      "compute_max_score": 0,
                      "compute_max_score_count": 0,
                      "shallow_advance": 0,
                      "shallow_advance_count": 0,
                      "set_min_competitive_score": 0,
                      "set_min_competitive_score_count": 0
                    }
                  },
                  {
                    "type": "TermQuery",
                    "description": "message:number",
                    "time_in_nanos": 210682,
                    "breakdown": {
                      "score": 4552,
                      "score_count": 4,
                      "build_scorer": 42602,
                      "build_scorer_count": 1,
                      "match": 0,
                      "match_count": 0,
                      "create_weight": 89323,
                      "create_weight_count": 1,
                      "next_doc": 2852,
                      "next_doc_count": 5,
                      "advance": 0,
                      "advance_count": 0,
                      "compute_max_score": 0,
                      "compute_max_score_count": 0,
                      "shallow_advance": 0,
                      "shallow_advance_count": 0,
                      "set_min_competitive_score": 0,
                      "set_min_competitive_score_count": 0
                    }
                  }
                ]
              }
            ],
            "rewrite_time": 51443,
            "collector": [
              {
                "name": "SimpleTopScoreDocCollector",
                "reason": "search_top_hits",
                "time_in_nanos": 32273
              }
            ]
          }
        ],
        "aggregations": []
      }
    ]
  }
}
--------------------------------------------------
// TESTRESPONSE[s/"took": 25/"took": $body.took/]
// TESTRESPONSE[s/"hits": \[...\]/"hits": $body.$_path/]
// TESTRESPONSE[s/(?<=[" ])\d+(\.\d+)?/$body.$_path/]
// TESTRESPONSE[s/\[2aE02wS1R8q_QFnYu6vDVQ\]\[twitter\]\[0\]/$body.$_path/]

<1> Search results are returned, but were omitted here for brevity.
Even for a simple query, the response is relatively complicated. Let's break it
down piece-by-piece before moving to more complex examples.

The overall structure of the profile response is as follows:

[source,console-result]
--------------------------------------------------
{
  "profile": {
    "shards": [
      {
        "id": "[2aE02wS1R8q_QFnYu6vDVQ][twitter][0]", <1>
        "searches": [
          {
            "query": [...], <2>
            "rewrite_time": 51443, <3>
            "collector": [...] <4>
          }
        ],
        "aggregations": [...] <5>
      }
    ]
  }
}
--------------------------------------------------
// TESTRESPONSE[s/"profile": /"took": $body.took, "timed_out": $body.timed_out, "_shards": $body._shards, "hits": $body.hits, "profile": /]
// TESTRESPONSE[s/(?<=[" ])\d+(\.\d+)?/$body.$_path/]
// TESTRESPONSE[s/\[2aE02wS1R8q_QFnYu6vDVQ\]\[twitter\]\[0\]/$body.$_path/]
// TESTRESPONSE[s/"query": \[...\]/"query": $body.$_path/]
// TESTRESPONSE[s/"collector": \[...\]/"collector": $body.$_path/]
// TESTRESPONSE[s/"aggregations": \[...\]/"aggregations": []/]

<1> A profile is returned for each shard that participated in the response, and
is identified by a unique ID.
<2> Each profile contains a section which holds details about the query
execution.
<3> Each profile has a single time representing the cumulative rewrite time.
<4> Each profile also contains a section about the Lucene Collectors which run
the search.
<5> Each profile contains a section which holds the details about the
aggregation execution.

Because a search request may be executed against one or more shards in an index,
and a search may cover one or more indices, the top level element in the profile
response is an array of `shard` objects. Each shard object lists its `id` which
uniquely identifies the shard. The ID's format is
`[nodeID][indexName][shardID]`.

The profile itself may consist of one or more "searches", where a search is a
query executed against the underlying Lucene index. Most search requests
submitted by the user will only execute a single `search` against the Lucene
index. But occasionally multiple searches will be executed, such as when the
request includes a global aggregation (which needs to execute a secondary
"match_all" query for the global context).

Inside each `search` object there will be two arrays of profiled information:
a `query` array and a `collector` array. Alongside the `search` object is an
`aggregations` object that contains the profile information for the
aggregations. In the future, more sections may be added, such as `suggest`,
`highlight`, etc.

There will also be a `rewrite` metric showing the total time spent rewriting the
query (in nanoseconds).

NOTE: As with other statistics APIs, the Profile API supports human-readable output. This can be turned on by adding
`?human=true` to the query string. In this case, the output contains an additional `time` field containing rounded,
human-readable timing information (e.g. `"time": "391.9ms"`, `"time": "123.3micros"`).
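
For instance, re-running the earlier profiled search with the `human` flag
enabled adds the rounded `time` fields alongside `time_in_nanos`:

[source,console]
--------------------------------------------------
GET /twitter/_search?human=true
{
  "profile": true,
  "query" : {
    "match" : { "message" : "some number" }
  }
}
--------------------------------------------------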

[[profiling-queries]]
==== Profiling Queries

[NOTE]
=======================================
The details provided by the Profile API directly expose Lucene class names and concepts, which means
that complete interpretation of the results requires fairly advanced knowledge of Lucene. This
page attempts to give a crash-course in how Lucene executes queries so that you can use the Profile API to successfully
diagnose and debug queries, but it is only an overview. For complete understanding, please refer
to Lucene's documentation and, in places, the code.

With that said, a complete understanding is often not required to fix a slow query. It is usually
sufficient to see that a particular component of a query is slow, without necessarily understanding why
the `advance` phase of that query is the cause, for example.
=======================================

[[query-section]]
===== `query` Section

The `query` section contains detailed timing of the query tree executed by
Lucene on a particular shard. The overall structure of this query tree will
resemble your original Elasticsearch query, but may be slightly (or sometimes
very) different. It will also use similar but not always identical naming.
Using our previous `match` query example, let's analyze the `query` section:

[source,console-result]
--------------------------------------------------
"query": [
  {
    "type": "BooleanQuery",
    "description": "message:some message:number",
    "time_in_nanos": 1873811,
    "breakdown": {...}, <1>
    "children": [
      {
        "type": "TermQuery",
        "description": "message:some",
        "time_in_nanos": 391943,
        "breakdown": {...}
      },
      {
        "type": "TermQuery",
        "description": "message:number",
        "time_in_nanos": 210682,
        "breakdown": {...}
      }
    ]
  }
]
--------------------------------------------------
// TESTRESPONSE[s/^/{\n"took": $body.took,\n"timed_out": $body.timed_out,\n"_shards": $body._shards,\n"hits": $body.hits,\n"profile": {\n"shards": [ {\n"id": "$body.$_path",\n"searches": [{\n/]
// TESTRESPONSE[s/]$/],"rewrite_time": $body.$_path, "collector": $body.$_path}], "aggregations": []}]}}/]
// TESTRESPONSE[s/(?<=[" ])\d+(\.\d+)?/$body.$_path/]
// TESTRESPONSE[s/"breakdown": \{...\}/"breakdown": $body.$_path/]

<1> The breakdown timings are omitted for simplicity.

Based on the profile structure, we can see that our `match` query was rewritten
by Lucene into a BooleanQuery with two clauses (both holding a TermQuery). The
`type` field displays the Lucene class name, and often aligns with the
equivalent name in Elasticsearch. The `description` field displays the Lucene
explanation text for the query, and is made available to help differentiate
between parts of your query (e.g. both `message:some` and `message:number` are
TermQuerys and would appear identical otherwise).

The `time_in_nanos` field shows that this query took ~1.8ms for the entire
BooleanQuery to execute. The recorded time is inclusive of all children.

The `breakdown` field will give detailed stats about how the time was spent;
we'll look at that in a moment. Finally, the `children` array lists any
sub-queries that may be present. Because we searched for two values ("some
number"), our BooleanQuery holds two child TermQueries. They have identical
information (type, time, breakdown, etc). Children are allowed to have their
own children.
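
To see nested children in practice, you can profile a `bool` query that
contains another `bool`. A minimal sketch against the same `twitter` index
(the exact profiled tree may differ, depending on how Lucene rewrites and
simplifies the query):

[source,console]
--------------------------------------------------
GET /twitter/_search
{
  "profile": true,
  "query": {
    "bool": {
      "must": [
        { "match": { "message": "some" } },
        {
          "bool": {
            "should": [
              { "term": { "user": "test" } },
              { "term": { "user": "elastic" } }
            ]
          }
        }
      ]
    }
  }
}
--------------------------------------------------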

===== Timing Breakdown

The `breakdown` component lists detailed timing statistics about low-level
Lucene execution:

[source,console-result]
--------------------------------------------------
"breakdown": {
  "score": 51306,
  "score_count": 4,
  "build_scorer": 2935582,
  "build_scorer_count": 1,
  "match": 0,
  "match_count": 0,
  "create_weight": 919297,
  "create_weight_count": 1,
  "next_doc": 53876,
  "next_doc_count": 5,
  "advance": 0,
  "advance_count": 0,
  "compute_max_score": 0,
  "compute_max_score_count": 0,
  "shallow_advance": 0,
  "shallow_advance_count": 0,
  "set_min_competitive_score": 0,
  "set_min_competitive_score_count": 0
}
--------------------------------------------------
// TESTRESPONSE[s/^/{\n"took": $body.took,\n"timed_out": $body.timed_out,\n"_shards": $body._shards,\n"hits": $body.hits,\n"profile": {\n"shards": [ {\n"id": "$body.$_path",\n"searches": [{\n"query": [{\n"type": "BooleanQuery",\n"description": "message:some message:number",\n"time_in_nanos": $body.$_path,/]
// TESTRESPONSE[s/}$/},\n"children": $body.$_path}],\n"rewrite_time": $body.$_path, "collector": $body.$_path}], "aggregations": []}]}}/]
// TESTRESPONSE[s/(?<=[" ])\d+(\.\d+)?/$body.$_path/]

Timings are listed in wall-clock nanoseconds and are not normalized at all. All
caveats about the overall `time_in_nanos` apply here. The intention of the
breakdown is to give you a feel for A) what machinery in Lucene is actually
eating time, and B) the magnitude of differences in times between the various
components. Like the overall time, the breakdown is inclusive of all children
times.

The meanings of the stats are as follows:

[float]
===== All parameters:

[horizontal]
`create_weight`::
A Query in Lucene must be capable of reuse across multiple IndexSearchers (think of it as the engine that
executes a search against a specific Lucene Index). This puts Lucene in a tricky spot, since many queries
need to accumulate temporary state/statistics associated with the index they are being used against, but the
Query contract mandates that it must be immutable.
{empty} +
{empty} +
To get around this, Lucene asks each query to generate a Weight object which acts as a temporary context
object to hold state associated with this particular (IndexSearcher, Query) tuple. The `create_weight` metric
shows how long this process takes.

`build_scorer`::
This parameter shows how long it takes to build a Scorer for the query. A Scorer is the mechanism that
iterates over matching documents and generates a score per-document (e.g. how well does "foo" match the document?).
Note, this records the time required to generate the Scorer object, not the time spent actually scoring the documents. Some
queries have faster or slower initialization of the Scorer, depending on optimizations, complexity, etc.
{empty} +
{empty} +
This may also show timing associated with caching, if enabled and/or applicable for the query.

`next_doc`::
The Lucene method `next_doc` returns the Doc ID of the next document matching the query. This statistic shows
the time it takes to determine which document is the next match, a process that varies considerably depending
on the nature of the query. Next_doc is a specialized form of advance() which is more convenient for many
queries in Lucene. It is equivalent to advance(docId() + 1).

`advance`::
`advance` is the "lower level" version of next_doc: it serves the same purpose of finding the next matching
doc, but requires the calling query to perform extra tasks such as identifying and moving past skips, etc.
However, not all queries can use next_doc, so `advance` is also timed for those queries.
{empty} +
{empty} +
Conjunctions (e.g. `must` clauses in a boolean) are typical consumers of `advance`.

`match`::
Some queries, such as phrase queries, match documents using a "two-phase" process. First, the document is
"approximately" matched, and if it matches approximately, it is checked a second time with a more rigorous
(and expensive) process. The second phase verification is what the `match` statistic measures.
{empty} +
{empty} +
For example, a phrase query first checks a document approximately by ensuring all terms in the phrase are
present in the doc. If all the terms are present, it then executes the second phase verification to ensure
the terms are in-order to form the phrase, which is relatively more expensive than just checking for presence
of the terms.
{empty} +
{empty} +
Because this two-phase process is only used by a handful of queries, the `match` statistic is often zero.

`score`::
This records the time taken to score a particular document via its Scorer.

`*_count`::
Records the number of invocations of the particular method. For example, `"next_doc_count": 2`
means the `nextDoc()` method was called on two different documents. This can be used to help judge
how selective queries are, by comparing counts between different query components.

[[collectors-section]]
===== `collectors` Section

The Collectors portion of the response shows high-level execution details.
Lucene works by defining a "Collector" which is responsible for coordinating the
traversal, scoring, and collection of matching documents. Collectors are also
how a single query can record aggregation results, execute unscoped "global"
queries, execute post-query filters, etc.

Looking at the previous example:

[source,console-result]
--------------------------------------------------
"collector": [
  {
    "name": "SimpleTopScoreDocCollector",
    "reason": "search_top_hits",
    "time_in_nanos": 32273
  }
]
--------------------------------------------------
// TESTRESPONSE[s/^/{\n"took": $body.took,\n"timed_out": $body.timed_out,\n"_shards": $body._shards,\n"hits": $body.hits,\n"profile": {\n"shards": [ {\n"id": "$body.$_path",\n"searches": [{\n"query": $body.$_path,\n"rewrite_time": $body.$_path,/]
// TESTRESPONSE[s/]$/]}], "aggregations": []}]}}/]
// TESTRESPONSE[s/(?<=[" ])\d+(\.\d+)?/$body.$_path/]

We see a single collector named `SimpleTopScoreDocCollector`.
`SimpleTopScoreDocCollector` is the default "scoring and
sorting" `Collector` used by {es}. The `reason` field attempts to give a plain
English description of the class name. The `time_in_nanos` is similar to the
time in the Query tree: a wall-clock time inclusive of all children. Similarly,
`children` lists all sub-collectors. Note that {es} wraps the top-level collector
in a `CancellableCollector`, which is used to detect if the current search was
cancelled and to stop collecting documents as soon as that occurs.

It should be noted that Collector times are **independent** from the Query
times. They are calculated, combined, and normalized independently! Due to the
nature of Lucene's execution, it is impossible to "merge" the times from the
Collectors into the Query section, so they are displayed in separate portions.

For reference, the various collector reasons are:

[horizontal]
`search_sorted`::
A collector that scores and sorts documents. This is the most common collector and will be seen in most
simple searches.

`search_count`::
A collector that only counts the number of documents that match the query, but does not fetch the source.
This is seen when `size: 0` is specified (see the example after this list).

`search_terminate_after_count`::
A collector that terminates search execution after `n` matching documents have been found. This is seen
when the `terminate_after` query parameter has been specified.

`search_min_score`::
A collector that only returns matching documents that have a score greater than `n`. This is seen when
the top-level parameter `min_score` has been specified.

`search_multi`::
A collector that wraps several other collectors. This is seen when combinations of search, aggregations,
global aggs, and post_filters are combined in a single search.

`search_timeout`::
A collector that halts execution after a specified period of time. This is seen when a `timeout` top-level
parameter has been specified.

`aggregation`::
A collector that Elasticsearch uses to run aggregations against the query scope. A single `aggregation`
collector is used to collect documents for *all* aggregations, so you will see a list of aggregations
in the name rather than a single one.

`global_aggregation`::
A collector that executes an aggregation against the global query scope, rather than the specified query.
Because the global scope is necessarily different from the executed query, it must execute its own
match_all query (which you will see added to the Query section) to collect your entire dataset.
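
As an example of the `search_count` case referenced above, profiling a request
with `"size": 0` should show a counting collector in place of
`SimpleTopScoreDocCollector` in the `collector` section. A sketch against the
same `twitter` index (the exact collector name may vary by version):

[source,console]
--------------------------------------------------
GET /twitter/_search
{
  "profile": true,
  "size": 0,
  "query": {
    "match": { "message": "some number" }
  }
}
--------------------------------------------------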

[[rewrite-section]]
===== `rewrite` Section

All queries in Lucene undergo a "rewriting" process. A query (and its
sub-queries) may be rewritten one or more times, and the process continues until
the query stops changing. This process allows Lucene to perform optimizations,
such as removing redundant clauses or replacing one query with a more efficient
execution path. For example, a Boolean -> Boolean -> TermQuery can be
rewritten to a TermQuery, because all the Booleans are unnecessary in this case.

The rewriting process is complex and difficult to display, since queries can
change drastically. Rather than showing the intermediate results, the total
rewrite time is simply displayed as a value (in nanoseconds). This value is
cumulative and contains the total time for all queries being rewritten.
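
As a small illustration, profiling a `bool` query that wraps a single `term`
clause would typically show the rewritten, simpler query in the `query` section,
with the time spent collapsing the wrapper reflected in `rewrite_time`. A sketch;
the exact rewritten form depends on the Lucene version:

[source,console]
--------------------------------------------------
GET /twitter/_search
{
  "profile": true,
  "query": {
    "bool": {
      "must": [
        { "term": { "user": { "value": "test" } } }
      ]
    }
  }
}
--------------------------------------------------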

===== A more complex example

To demonstrate a slightly more complex query and the associated results, we can
profile the following query:

[source,console]
--------------------------------------------------
GET /twitter/_search
{
  "profile": true,
  "query": {
    "term": {
      "user": {
        "value": "test"
      }
    }
  },
  "aggs": {
    "my_scoped_agg": {
      "terms": {
        "field": "likes"
      }
    },
    "my_global_agg": {
      "global": {},
      "aggs": {
        "my_level_agg": {
          "terms": {
            "field": "likes"
          }
        }
      }
    }
  },
  "post_filter": {
    "match": {
      "message": "some"
    }
  }
}
--------------------------------------------------
// TEST[s/_search/_search\?filter_path=profile.shards.id,profile.shards.searches,profile.shards.aggregations/]
// TEST[continued]

This example has:

- A query
- A scoped aggregation
- A global aggregation
- A post_filter

The API returns the following result:

[source,console-result]
--------------------------------------------------
{
  ...
  "profile": {
    "shards": [
      {
        "id": "[P6-vulHtQRWuD4YnubWb7A][test][0]",
        "searches": [
          {
            "query": [
              {
                "type": "TermQuery",
                "description": "message:some",
                "time_in_nanos": 409456,
                "breakdown": {
                  "score": 0,
                  "build_scorer_count": 1,
                  "match_count": 0,
                  "create_weight": 31584,
                  "next_doc": 0,
                  "match": 0,
                  "create_weight_count": 1,
                  "next_doc_count": 2,
                  "score_count": 1,
                  "build_scorer": 377872,
                  "advance": 0,
                  "advance_count": 0,
                  "compute_max_score": 0,
                  "compute_max_score_count": 0,
                  "shallow_advance": 0,
                  "shallow_advance_count": 0,
                  "set_min_competitive_score": 0,
                  "set_min_competitive_score_count": 0
                }
              },
              {
                "type": "TermQuery",
                "description": "user:test",
                "time_in_nanos": 303702,
                "breakdown": {
                  "score": 0,
                  "build_scorer_count": 1,
                  "match_count": 0,
                  "create_weight": 185215,
                  "next_doc": 5936,
                  "match": 0,
                  "create_weight_count": 1,
                  "next_doc_count": 2,
                  "score_count": 1,
                  "build_scorer": 112551,
                  "advance": 0,
                  "advance_count": 0,
                  "compute_max_score": 0,
                  "compute_max_score_count": 0,
                  "shallow_advance": 0,
                  "shallow_advance_count": 0,
                  "set_min_competitive_score": 0,
                  "set_min_competitive_score_count": 0
                }
              }
            ],
            "rewrite_time": 7208,
            "collector": [
              {
                "name": "MultiCollector",
                "reason": "search_multi",
                "time_in_nanos": 1820,
                "children": [
                  {
                    "name": "FilteredCollector",
                    "reason": "search_post_filter",
                    "time_in_nanos": 7735,
                    "children": [
                      {
                        "name": "SimpleTopScoreDocCollector",
                        "reason": "search_top_hits",
                        "time_in_nanos": 1328
                      }
                    ]
                  },
                  {
                    "name": "MultiBucketCollector: [[my_scoped_agg, my_global_agg]]",
                    "reason": "aggregation",
                    "time_in_nanos": 8273
                  }
                ]
              }
            ]
          }
        ],
        "aggregations": [...] <1>
      }
    ]
  }
}
--------------------------------------------------
// TESTRESPONSE[s/"aggregations": \[\.\.\.\]/"aggregations": $body.$_path/]
// TESTRESPONSE[s/\.\.\.//]
// TESTRESPONSE[s/(?<=[" ])\d+(\.\d+)?/$body.$_path/]
// TESTRESPONSE[s/"id": "\[P6-vulHtQRWuD4YnubWb7A\]\[test\]\[0\]"/"id": $body.profile.shards.0.id/]

<1> The `"aggregations"` portion has been omitted because it will be covered in
the next section.

As you can see, the output is significantly more verbose than before. All the
major portions of the query are represented:

1. The `TermQuery` with the description `user:test` represents the main `term` query.
2. The `TermQuery` with the description `message:some` represents the `post_filter` query.

The Collector tree is fairly straightforward, showing how a single
MultiCollector wraps a FilteredCollector (which executes the post_filter and in
turn wraps the normal scoring SimpleTopScoreDocCollector) as well as a
MultiBucketCollector that runs all scoped aggregations.

===== Understanding MultiTermQuery output

A special note needs to be made about the `MultiTermQuery` class of queries.
This includes wildcards, regex, and fuzzy queries. These queries emit very
verbose responses, and are not overly structured.

Essentially, these queries rewrite themselves on a per-segment basis. If you
imagine the wildcard query `b*`, it technically can match any token that begins
with the letter "b". It would be impossible to enumerate all possible
combinations, so Lucene rewrites the query in context of the segment being
evaluated, e.g., one segment may contain the tokens `[bar, baz]`, so the query
rewrites to a BooleanQuery combination of "bar" and "baz". Another segment may
only have the token `[bakery]`, so the query rewrites to a single TermQuery for
"bakery".

Due to this dynamic, per-segment rewriting, the clean tree structure becomes
distorted and no longer follows a clean "lineage" showing how one query rewrites
into the next. At present, all we can do is apologize, and suggest you
collapse the details for that query's children if it is too confusing. Luckily,
all the timing statistics are correct, just not the physical layout in the
response, so it is sufficient to just analyze the top-level MultiTermQuery and
ignore its children if you find the details too tricky to interpret.

Hopefully this will be fixed in future iterations, but it is a tricky problem to
solve and is still in progress. :)
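
To observe this per-segment rewriting yourself, you can profile a wildcard query
and inspect the children it produces. A sketch, again assuming the `twitter`
index from the earlier examples:

[source,console]
--------------------------------------------------
GET /twitter/_search
{
  "profile": true,
  "query": {
    "wildcard": {
      "message": {
        "value": "some*"
      }
    }
  }
}
--------------------------------------------------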

[[profiling-aggregations]]
==== Profiling Aggregations

[[agg-section]]
===== `aggregations` Section

The `aggregations` section contains detailed timing of the aggregation tree
executed by a particular shard. The overall structure of this aggregation tree
will resemble your original {es} request. Let's execute the previous query again
and look at the aggregation profile this time:

[source,console]
--------------------------------------------------
GET /twitter/_search
{
  "profile": true,
  "query": {
    "term": {
      "user": {
        "value": "test"
      }
    }
  },
  "aggs": {
    "my_scoped_agg": {
      "terms": {
        "field": "likes"
      }
    },
    "my_global_agg": {
      "global": {},
      "aggs": {
        "my_level_agg": {
          "terms": {
            "field": "likes"
          }
        }
      }
    }
  },
  "post_filter": {
    "match": {
      "message": "some"
    }
  }
}
--------------------------------------------------
// TEST[s/_search/_search\?filter_path=profile.shards.aggregations/]
// TEST[continued]

This yields the following aggregation profile output:

[source,console-result]
--------------------------------------------------
{
  "profile": {
    "shards": [
      {
        "aggregations": [
          {
            "type": "LongTermsAggregator",
            "description": "my_scoped_agg",
            "time_in_nanos": 195386,
            "breakdown": {
              "reduce": 0,
              "build_aggregation": 81171,
              "build_aggregation_count": 1,
              "initialize": 22753,
              "initialize_count": 1,
              "reduce_count": 0,
              "collect": 91456,
              "collect_count": 4
            }
          },
          {
            "type": "GlobalAggregator",
            "description": "my_global_agg",
            "time_in_nanos": 190430,
            "breakdown": {
              "reduce": 0,
              "build_aggregation": 59990,
              "build_aggregation_count": 1,
              "initialize": 29619,
              "initialize_count": 1,
              "reduce_count": 0,
              "collect": 100815,
              "collect_count": 4
            },
            "children": [
              {
                "type": "LongTermsAggregator",
                "description": "my_level_agg",
                "time_in_nanos": 160329,
                "breakdown": {
                  "reduce": 0,
                  "build_aggregation": 55712,
                  "build_aggregation_count": 1,
                  "initialize": 10559,
                  "initialize_count": 1,
                  "reduce_count": 0,
                  "collect": 94052,
                  "collect_count": 4
                }
              }
            ]
          }
        ]
      }
    ]
  }
}
--------------------------------------------------
// TESTRESPONSE[s/\.\.\.//]
// TESTRESPONSE[s/(?<=[" ])\d+(\.\d+)?/$body.$_path/]
// TESTRESPONSE[s/"id": "\[P6-vulHtQRWuD4YnubWb7A\]\[test\]\[0\]"/"id": $body.profile.shards.0.id/]

From the profile structure we can see that the `my_scoped_agg` is internally
being run as a `LongTermsAggregator` (because the field it is aggregating,
`likes`, is a numeric field). At the same level, we see a `GlobalAggregator`
which comes from `my_global_agg`. That aggregation then has a child
`LongTermsAggregator` which comes from the second `terms` aggregation on `likes`.

The `time_in_nanos` field shows the time spent executing each aggregation, and is
inclusive of all children. While the overall time is useful, the `breakdown`
field will give detailed stats about how the time was spent.

===== Timing Breakdown

The `breakdown` component lists detailed timing statistics about low-level
aggregation execution:

[source,js]
--------------------------------------------------
"breakdown": {
  "reduce": 0,
  "reduce_count": 0,
  "build_aggregation": 49765,
  "build_aggregation_count": 300,
  "initialize": 52785,
  "initialize_count": 300,
  "collect": 3155490036,
  "collect_count": 1800
}
--------------------------------------------------
// NOTCONSOLE

Timings are listed in wall-clock nanoseconds and are not normalized at all. All
caveats about the overall `time` apply here. The intention of the breakdown is
to give you a feel for A) what machinery in {es} is actually eating time, and B)
the magnitude of differences in times between the various components. Like the
overall time, the breakdown is inclusive of all children times.

The meanings of the stats are as follows:

[float]
===== All parameters:

[horizontal]
`initialize`::
This times how long it takes to create and initialize the aggregation before starting to collect documents.

`collect`::
This represents the cumulative time spent in the collect phase of the aggregation. This is where matching documents are passed to the aggregation and the state of the aggregator is updated based on the information contained in the documents.

`build_aggregation`::
This represents the time spent creating the shard-level results of the aggregation, ready to pass back to the reducing node after the collection of documents has finished.

`reduce`::
This is not currently used and will always report `0`. Currently aggregation profiling only times the shard-level parts of the aggregation execution. Timing of the reduce phase will be added later.

`*_count`::
Records the number of invocations of the particular method. For example, `"collect_count": 2`
means the `collect()` method was called on two different documents.

[[profiling-considerations]]
===== Profiling Considerations

Like any profiler, the Profile API introduces a non-negligible overhead to
search execution. The act of instrumenting low-level method calls such as
`collect`, `advance`, and `next_doc` can be fairly expensive, since these
methods are called in tight loops. Therefore, profiling should not be enabled
in production settings by default, and should not be compared against
non-profiled query times. Profiling is just a diagnostic tool.

There are also cases where special Lucene optimizations are disabled, since they
are not amenable to profiling. This could cause some queries to report larger
relative times than their non-profiled counterparts, but in general should not
have a drastic effect compared to other components in the profiled query.

[[profile-limitations]]
===== Limitations

- Profiling currently does not measure the search fetch phase nor the network
overhead.
- Profiling also does not account for time spent in the queue, merging shard
responses on the coordinating node, or additional work such as building global
ordinals (an internal data structure used to speed up search).
- Profiling statistics are currently not available for suggestions,
highlighting, or `dfs_query_then_fetch`.
- Profiling of the reduce phase of aggregation is currently not available.
- The Profiler is still highly experimental. It instruments parts
of Lucene that were never designed to be exposed in this manner, and so all
results should be viewed as a best effort to provide detailed diagnostics. We
hope to improve this over time. If you find obviously wrong numbers, strange
query structures, or other bugs, please report them!