
[role="xpack"]
[testenv="basic"]
[[get-trained-models-stats]]
= Get trained models statistics API
[subs="attributes"]
++++
<titleabbrev>Get trained models stats</titleabbrev>
++++

Retrieves usage information for trained models.
[[ml-get-trained-models-stats-request]]
== {api-request-title}

`GET _ml/trained_models/_stats` +
`GET _ml/trained_models/_all/_stats` +
`GET _ml/trained_models/<model_id>/_stats` +
`GET _ml/trained_models/<model_id>,<model_id_2>/_stats` +
`GET _ml/trained_models/<model_id_pattern*>,<model_id_2>/_stats`

[[ml-get-trained-models-stats-prereq]]
== {api-prereq-title}

Requires the `monitor_ml` cluster privilege. This privilege is included in the
`machine_learning_user` built-in role.

[[ml-get-trained-models-stats-desc]]
== {api-description-title}

You can get usage information for multiple trained models in a single API
request by using a comma-separated list of model IDs or a wildcard expression.
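
For example, the following request combines a wildcard expression with a
specific model ID (both IDs are illustrative):

[source,console]
----
GET _ml/trained_models/regression-*,flight-delay-prediction-1574775339910/_stats
----
// TEST[skip:TBD]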

[[ml-get-trained-models-stats-path-params]]
== {api-path-parms-title}

`<model_id>`::
(Optional, string)
include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=model-id-or-alias]

[[ml-get-trained-models-stats-query-params]]
== {api-query-parms-title}

`allow_no_match`::
(Optional, Boolean)
include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=allow-no-match-models]

`from`::
(Optional, integer)
include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=from-models]

`size`::
(Optional, integer)
include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=size-models]
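
For example, the `from` and `size` parameters can be combined to page through
the statistics ten models at a time (the values are illustrative):

[source,console]
----
GET _ml/trained_models/_all/_stats?from=10&size=10
----
// TEST[skip:TBD]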

[role="child_attributes"]
[[ml-get-trained-models-stats-results]]
== {api-response-body-title}

`count`::
(integer)
The total number of trained model statistics that matched the requested ID
patterns. This can be higher than the number of items in the
`trained_model_stats` array because the size of the array is limited by the
supplied `size` parameter.

`trained_model_stats`::
(array)
An array of trained model statistics, which are sorted by the `model_id` value
in ascending order.
+
.Properties of trained model stats
[%collapsible%open]
====
`model_id`:::
(string)
include::{es-repo-dir}/ml/ml-shared.asciidoc[tag=model-id]

`pipeline_count`:::
(integer)
The number of ingest pipelines that currently refer to the model.

`inference_stats`:::
(object)
A collection of inference stats fields.
+
.Properties of inference stats
[%collapsible%open]
=====
`missing_all_fields_count`::::
(integer)
The number of inference calls where all the training features for the model
were missing.

`inference_count`::::
(integer)
The total number of times the model has been called for inference.
This is across all inference contexts, including all pipelines.

`cache_miss_count`::::
(integer)
The number of times the model was loaded for inference but was not retrieved
from the cache. If this number is close to `inference_count`, the cache is not
being used effectively. This can be solved by increasing the cache size or its
time-to-live (TTL). See <<general-ml-settings>> for the appropriate settings.

`failure_count`::::
(integer)
The number of failures when using the model for inference.

`timestamp`::::
(<<time-units,time units>>)
The time when the statistics were last updated.
=====

`ingest`:::
(object)
A collection of ingest stats for the model across all nodes. The values are
summations of the individual node statistics. The format matches the `ingest`
section in <<cluster-nodes-stats>>.
====
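
If `cache_miss_count` grows in step with `inference_count`, the inference cache
can be enlarged through node-level {ml} settings. A sketch of an
`elasticsearch.yml` fragment follows; the setting names are assumptions here,
so verify them against <<general-ml-settings>>:

[source,yaml]
----
# Static node settings (assumed names; illustrative values).
# A larger model cache and a longer TTL reduce cache misses.
xpack.ml.inference_model.cache_size: 40%
xpack.ml.inference_model.time_to_live: 10m
----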

[[ml-get-trained-models-stats-response-codes]]
== {api-response-codes-title}

`404` (Missing resources)::
If `allow_no_match` is `false`, this code indicates that there are no
resources that match the request or only partial matches for the request.
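
For example, with `allow_no_match` set to `false`, a request for a model that
does not exist returns this `404` (the model ID is illustrative):

[source,console]
----
GET _ml/trained_models/nonexistent-model/_stats?allow_no_match=false
----
// TEST[skip:TBD]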

[[ml-get-trained-models-stats-example]]
== {api-examples-title}

The following example gets usage information for all the trained models:

[source,console]
--------------------------------------------------
GET _ml/trained_models/_stats
--------------------------------------------------
// TEST[skip:TBD]

The API returns the following results:

[source,console-result]
----
{
  "count": 2,
  "trained_model_stats": [
    {
      "model_id": "flight-delay-prediction-1574775339910",
      "pipeline_count": 0,
      "inference_stats": {
        "failure_count": 0,
        "inference_count": 4,
        "cache_miss_count": 3,
        "missing_all_fields_count": 0,
        "timestamp": 1592399986979
      }
    },
    {
      "model_id": "regression-job-one-1574775307356",
      "pipeline_count": 1,
      "inference_stats": {
        "failure_count": 0,
        "inference_count": 178,
        "cache_miss_count": 3,
        "missing_all_fields_count": 0,
        "timestamp": 1592399986979
      },
      "ingest": {
        "total": {
          "count": 178,
          "time_in_millis": 8,
          "current": 0,
          "failed": 0
        },
        "pipelines": {
          "flight-delay": {
            "count": 178,
            "time_in_millis": 8,
            "current": 0,
            "failed": 0,
            "processors": [
              {
                "inference": {
                  "type": "inference",
                  "stats": {
                    "count": 178,
                    "time_in_millis": 7,
                    "current": 0,
                    "failed": 0
                  }
                }
              }
            ]
          }
        }
      }
    }
  ]
}
----
// NOTCONSOLE