[role="xpack"]
[[get-trained-models-stats]]
= Get trained models statistics API
[subs="attributes"]
++++
<titleabbrev>Get trained models stats</titleabbrev>
++++

Retrieves usage information for trained models.

[[ml-get-trained-models-stats-request]]
== {api-request-title}

`GET _ml/trained_models/_stats` +
`GET _ml/trained_models/_all/_stats` +
`GET _ml/trained_models/<model_id>/_stats` +
`GET _ml/trained_models/<model_id>,<model_id_2>/_stats` +
`GET _ml/trained_models/<model_id_pattern*>,<model_id_2>/_stats`

[[ml-get-trained-models-stats-prereq]]
== {api-prereq-title}

Requires the `monitor_ml` cluster privilege. This privilege is included in the
`machine_learning_user` built-in role.

[[ml-get-trained-models-stats-desc]]
== {api-description-title}

You can get usage information for multiple trained models in a single API
request by using a comma-separated list of model IDs or a wildcard expression.
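
For example, the following request (the model IDs shown are illustrative; they
are taken from the sample response later on this page) gets usage information
for one specific model plus every model whose ID matches a wildcard pattern:

[source,console]
--------------------------------------------------
GET _ml/trained_models/flight-delay-prediction-1574775339910,regression-*/_stats
--------------------------------------------------
// TEST[skip:TBD]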

[[ml-get-trained-models-stats-path-params]]
== {api-path-parms-title}

`<model_id>`::
(Optional, string)
include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=model-id-or-alias]

[[ml-get-trained-models-stats-query-params]]
== {api-query-parms-title}

`allow_no_match`::
(Optional, Boolean)
include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=allow-no-match-models]

`from`::
(Optional, integer)
include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=from-models]

`size`::
(Optional, integer)
include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=size-models]
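
For example, the following request (the values are illustrative) skips the
first 10 matching models and returns usage information for the next 25:

[source,console]
--------------------------------------------------
GET _ml/trained_models/_all/_stats?from=10&size=25
--------------------------------------------------
// TEST[skip:TBD]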

[role="child_attributes"]
[[ml-get-trained-models-stats-results]]
== {api-response-body-title}

`count`::
(integer)
The total number of trained model statistics that matched the requested ID
patterns. It can be higher than the number of items in the
`trained_model_stats` array, because the size of the array is restricted by
the supplied `size` parameter.

`trained_model_stats`::
(array)
An array of trained model statistics, which are sorted by the `model_id` value
in ascending order.
+
.Properties of trained model stats
[%collapsible%open]
====
`deployment_stats`:::
(list)
A collection of deployment stats if one of the provided `model_id` values
is deployed.
+
.Properties of deployment stats
[%collapsible%open]
=====
`allocation_status`:::
(object)
The detailed allocation status given the deployment configuration.
+
.Properties of allocation stats
[%collapsible%open]
======
`allocation_count`:::
(integer)
The current number of nodes where the model is allocated.

`cache_size`:::
(<<byte-units,byte value>>)
The inference cache size (in memory outside the JVM heap) per node for the model.

`state`:::
(string)
The detailed allocation state related to the nodes.
+
--
* `starting`: Allocations are being attempted but no node currently has the model allocated.
* `started`: At least one node has the model allocated.
* `fully_allocated`: The deployment is fully allocated and satisfies the `target_allocation_count`.
--

`target_allocation_count`:::
(integer)
The desired number of nodes for model allocation.
======

`error_count`:::
(integer)
The sum of `error_count` for all nodes in the deployment.

`inference_count`:::
(integer)
The sum of `inference_count` for all nodes in the deployment.

`model_id`:::
(string)
include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=model-id]

`nodes`:::
(array of objects)
The deployment stats for each node that currently has the model allocated.
+
.Properties of node stats
[%collapsible%open]
======
`average_inference_time_ms`:::
(double)
The average time for each inference call to complete on this node.
The average is calculated over the lifetime of the deployment.

`average_inference_time_ms_excluding_cache_hits`:::
(double)
The average time to perform inference on the trained model, excluding
occasions where the response comes from the cache. Cached inference calls
return very quickly because the model is not evaluated; by excluding cache
hits, this value is an accurate measure of the average time taken to
evaluate the model.

`average_inference_time_ms_last_minute`:::
(double)
The average time for each inference call to complete on this node
in the last minute.

`error_count`:::
(integer)
The number of errors when evaluating the trained model.

`inference_cache_hit_count`:::
(integer)
The total number of inference calls made against this node for this
model that were served from the inference cache.

`inference_cache_hit_count_last_minute`:::
(integer)
The number of inference calls made against this node for this model
in the last minute that were served from the inference cache.

`inference_count`:::
(integer)
The total number of inference calls made against this node for this model.

`last_access`:::
(long)
The epoch time stamp of the last inference call for the model on this node.

`node`:::
(object)
Information pertaining to the node.
+
.Properties of node
[%collapsible%open]
========
`attributes`:::
(object)
include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=node-attributes]

`ephemeral_id`:::
(string)
include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=node-ephemeral-id]

`id`:::
(string)
include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=node-id]

`name`:::
(string) The node name.

`transport_address`:::
(string)
include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=node-transport-address]
========

`number_of_allocations`:::
(integer)
The number of allocations assigned to this node.

`number_of_pending_requests`:::
(integer)
The number of inference requests queued to be processed.

`peak_throughput_per_minute`:::
(integer)
The peak number of requests processed in a 1 minute period.

`routing_state`:::
(object)
The current routing state, and the reason for it, for this allocation.
+
.Properties of routing_state
[%collapsible%open]
========
`reason`:::
(string)
The reason for the current state. Usually only populated when the
`routing_state` is `failed`.

`routing_state`:::
(string)
The current routing state.
+
--
* `starting`: The model is attempting to allocate on this node; inference calls are not yet accepted.
* `started`: The model is allocated and ready to accept inference requests.
* `stopping`: The model is being deallocated from this node.
* `stopped`: The model is fully deallocated from this node.
* `failed`: The allocation attempt failed; see the `reason` field for the potential cause.
--
========

`rejected_execution_count`:::
(integer)
The number of inference requests that were not processed because the
queue was full.

`start_time`:::
(long)
The epoch timestamp when the allocation started.

`threads_per_allocation`:::
(integer)
The number of threads for each allocation during inference.
This value is limited by the number of hardware threads on the node;
it might therefore differ from the `threads_per_allocation` value in the
<<start-trained-model-deployment>> API.

`timeout_count`:::
(integer)
The number of inference requests that timed out before being processed.

`throughput_last_minute`:::
(integer)
The number of requests processed in the last 1 minute.
======

`number_of_allocations`:::
(integer)
The requested number of allocations for the trained model deployment.

`peak_throughput_per_minute`:::
(integer)
The peak number of requests processed in a 1 minute period for
all nodes in the deployment. This is calculated as the sum of
each node's `peak_throughput_per_minute` value.

`rejected_execution_count`:::
(integer)
The sum of `rejected_execution_count` for all nodes in the deployment.
Individual nodes reject an inference request if the inference queue is full.
The queue size is controlled by the `queue_capacity` setting in the
<<start-trained-model-deployment>> API.

`reason`:::
(string)
The reason for the current deployment state.
Usually only populated when the model is not deployed to a node.

`start_time`:::
(long)
The epoch timestamp when the deployment started.

`state`:::
(string)
The overall state of the deployment. The values may be:
+
--
* `starting`: The deployment has recently started but is not yet usable as the model is not allocated on any nodes.
* `started`: The deployment is usable as at least one node has the model allocated.
* `stopping`: The deployment is preparing to stop and deallocate the model from the relevant nodes.
--

`threads_per_allocation`:::
(integer)
The number of threads per allocation used by the inference process.

`timeout_count`:::
(integer)
The sum of `timeout_count` for all nodes in the deployment.

`queue_capacity`:::
(integer)
The number of inference requests that may be queued before new requests are
rejected.
=====

`inference_stats`:::
(object)
A collection of inference stats fields.
+
.Properties of inference stats
[%collapsible%open]
=====
`missing_all_fields_count`:::
(integer)
The number of inference calls where all the training features for the model
were missing.

`inference_count`:::
(integer)
The total number of times the model has been called for inference.
This is across all inference contexts, including all pipelines.

`cache_miss_count`:::
(integer)
The number of times the model was loaded for inference and was not retrieved
from the cache. If this number is close to the `inference_count`, the cache
is not being used appropriately. This can be solved by increasing the cache
size or its time-to-live (TTL). See <<general-ml-settings>> for the
appropriate settings.

`failure_count`:::
(integer)
The number of failures when using the model for inference.

`timestamp`:::
(<<time-units,time units>>)
The time when the statistics were last updated.
=====

`ingest`:::
(object)
A collection of ingest stats for the model across all nodes. The values are
summations of the individual node statistics. The format matches the `ingest`
section in <<cluster-nodes-stats>>.

`model_id`:::
(string)
include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=model-id]

`model_size_stats`:::
(object)
A collection of model size stats fields.
+
.Properties of model size stats
[%collapsible%open]
=====
`model_size_bytes`:::
(integer)
The size of the model in bytes.

`required_native_memory_bytes`:::
(integer)
The amount of memory required to load the model, in bytes.
=====

`pipeline_count`:::
(integer)
The number of ingest pipelines that currently refer to the model.
====

[[ml-get-trained-models-stats-response-codes]]
== {api-response-codes-title}

`404` (Missing resources)::
If `allow_no_match` is `false`, this code indicates that there are no
resources that match the request or only partial matches for the request.
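
For example, a request such as the following (the model ID pattern is
illustrative) returns a `404` when no models match and `allow_no_match` is
set to `false`:

[source,console]
--------------------------------------------------
GET _ml/trained_models/no-such-model*/_stats?allow_no_match=false
--------------------------------------------------
// TEST[skip:TBD]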

[[ml-get-trained-models-stats-example]]
== {api-examples-title}

The following example gets usage information for all the trained models:

[source,console]
--------------------------------------------------
GET _ml/trained_models/_stats
--------------------------------------------------
// TEST[skip:TBD]

The API returns the following results:

[source,console-result]
----
{
  "count": 2,
  "trained_model_stats": [
    {
      "model_id": "flight-delay-prediction-1574775339910",
      "pipeline_count": 0,
      "inference_stats": {
        "failure_count": 0,
        "inference_count": 4,
        "cache_miss_count": 3,
        "missing_all_fields_count": 0,
        "timestamp": 1592399986979
      }
    },
    {
      "model_id": "regression-job-one-1574775307356",
      "pipeline_count": 1,
      "inference_stats": {
        "failure_count": 0,
        "inference_count": 178,
        "cache_miss_count": 3,
        "missing_all_fields_count": 0,
        "timestamp": 1592399986979
      },
      "ingest": {
        "total": {
          "count": 178,
          "time_in_millis": 8,
          "current": 0,
          "failed": 0
        },
        "pipelines": {
          "flight-delay": {
            "count": 178,
            "time_in_millis": 8,
            "current": 0,
            "failed": 0,
            "processors": [
              {
                "inference": {
                  "type": "inference",
                  "stats": {
                    "count": 178,
                    "time_in_millis": 7,
                    "current": 0,
                    "failed": 0
                  }
                }
              }
            ]
          }
        }
      }
    }
  ]
}
----
// NOTCONSOLE
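
To get usage information for a single model, specify its ID in the path. For
example, the following request uses one of the model IDs from the previous
response; substitute your own model ID:

[source,console]
--------------------------------------------------
GET _ml/trained_models/regression-job-one-1574775307356/_stats
--------------------------------------------------
// TEST[skip:TBD]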