
[role="xpack"]
[[es-monitoring-collectors]]
== Collectors

[IMPORTANT]
=========================
{metricbeat} is the recommended method for collecting and shipping monitoring
data to a monitoring cluster.

If you have previously configured legacy collection methods, you should migrate
to using {metricbeat} collection methods. Use either {metricbeat} collection or
legacy collection methods; do not use both.

Learn more about <<configuring-metricbeat>>.
=========================
Collectors, as their name implies, collect things. Each collector runs once per
collection interval to obtain data from the public APIs in {es} and {xpack}
that it chooses to monitor. When data collection is finished, the data is
handed in bulk to the <<es-monitoring-exporters,exporters>> to be sent to the
monitoring clusters. Regardless of the number of exporters, each collector only
runs once per collection interval.

There is only one collector per data type gathered. In other words, each
monitoring document comes from a single collector rather than being merged from
multiple collectors. The {es} {monitor-features} currently have only a few
collectors because the goal is to minimize overlap between them for optimal
performance.

Each collector can create zero or more monitoring documents. For example,
the `index_stats` collector collects all index statistics at the same time to
avoid many unnecessary calls.
[options="header"]
|=======================
| Collector | Data Types | Description

| Cluster Stats | `cluster_stats`
| Gathers details about the cluster state, including parts of the actual cluster
state (for example, `GET /_cluster/state`) and statistics about it (for example,
`GET /_cluster/stats`). This produces a single document type. In versions prior
to X-Pack 5.5, this was actually three separate collectors that resulted in
three separate types: `cluster_stats`, `cluster_state`, and `cluster_info`. In
5.5 and later, all three are combined into `cluster_stats`. This collector runs
only on the _elected_ master node and the data it collects (`cluster_stats`)
largely controls the UI. When this data is not present, it indicates either a
misconfiguration on the elected master node, timeouts related to the collection
of the data, or issues with storing the data. Only a single document is produced
per collection.

| Index Stats | `indices_stats`, `index_stats`
| Gathers details about the indices in the cluster, both in summary and
individually. This creates many documents that represent parts of the index
statistics output (for example, `GET /_stats`). This information only needs to
be collected once, so it is collected on the _elected_ master node. The most
common failure for this collector relates to an extreme number of indices -- and
therefore the time required to gather them -- resulting in timeouts. One summary
`indices_stats` document is produced per collection and one `index_stats`
document is produced per index, per collection.

| Index Recovery | `index_recovery`
| Gathers details about index recovery in the cluster. Index recovery represents
the assignment of _shards_ at the cluster level. If an index is not recovered,
it is not usable. This also corresponds to shard restoration via snapshots. This
information only needs to be collected once, so it is collected on the _elected_
master node. The most common failure for this collector relates to an extreme
number of shards -- and therefore the time required to gather them -- resulting
in timeouts. By default, this creates a single document that contains all
recoveries, which can be quite large, but it gives the most accurate picture of
recovery in the production cluster.

| Shards | `shards`
| Gathers details about all _allocated_ shards for all indices, in particular
including the node to which each shard is allocated. This information only needs
to be collected once, so it is collected on the _elected_ master node. Unlike
most other collectors, this collector uses the local cluster state to get the
routing table, so it avoids network timeout issues. Each shard is represented by
a separate monitoring document.

| Jobs | `job_stats`
| Gathers details about all machine learning job statistics (for example, `GET
/_ml/anomaly_detectors/_stats`). This information only needs to be collected
once, so it is collected on the _elected_ master node. However, for the master
node to be able to perform the collection, the master node must have
`xpack.ml.enabled` set to `true` (the default) and a license level that supports
{ml}.

| Node Stats | `node_stats`
| Gathers details about the running node, such as memory utilization and CPU
usage (for example, `GET /_nodes/_local/stats`). This runs on _every_ node with
{monitor-features} enabled. One common failure is a timeout of the node stats
request caused by too many segment files: the collector spends too much time
waiting for the file system stats to be calculated until it finally times out.
A single `node_stats` document is created per collection. This is collected per
node to help discover issues with nodes communicating with each other, but not
with the monitoring cluster (for example, intermittent network issues or memory
pressure).
|=======================
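
Each collector draws on public {es} APIs, so you can run the same requests
yourself to verify that the underlying data is available. For example, the
request below returns the statistics that the Cluster Stats collector gathers
(run it against the elected master node to see exactly what that collector
sees):

[source,console]
----
GET /_cluster/stats
----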
The {es} {monitor-features} use a single-threaded scheduler to run the
collection of {es} monitoring data by all of the appropriate collectors on each
node. This scheduler is managed locally by each node and its interval is
controlled by the `xpack.monitoring.collection.interval` setting, which
defaults to 10 seconds (`10s`) and can be set at either the node or cluster
level.
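
For example, the interval can be raised to 30 seconds cluster-wide with the
cluster update settings API. This is a sketch; on versions where the setting is
not dynamic, set it in `elasticsearch.yml` on each node instead:

[source,console]
----
PUT _cluster/settings
{
  "persistent": {
    "xpack.monitoring.collection.interval": "30s"
  }
}
----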
Fundamentally, each collector works on the same principle. Per collection
interval, each collector is checked to see whether it should run and then the
appropriate collectors run. The failure of an individual collector does not
impact any other collector.

Once collection has completed, all of the monitoring data is passed to the
exporters to route the monitoring data to the monitoring clusters.
If gaps exist in the monitoring charts in {kib}, it is typically because either
a collector failed or the monitoring cluster did not receive the data (for
example, it was being restarted). In the event that a collector fails, a logged
error should exist on the node that attempted to perform the collection.
NOTE: Collection is currently done serially, rather than in parallel, to avoid
extra overhead on the elected master node. The downside to this approach
is that collectors might observe a different version of the cluster state
within the same collection period. In practice, this does not make a
significant difference and running the collectors in parallel would not
prevent such a possibility.
For more information about the configuration options for the collectors, see
<<monitoring-collection-settings>>.
[discrete]
[[es-monitoring-stack]]
==== Collecting data from across the Elastic Stack

{es} {monitor-features} also receive monitoring data from other parts of the
Elastic Stack. In this way, they serve as an unscheduled monitoring data
collector for the stack.
By default, data collection is disabled. {es} monitoring data is not
collected and all monitoring data from other sources, such as {kib}, Beats, and
Logstash, is ignored. You must set `xpack.monitoring.collection.enabled` to
`true` to enable the collection of monitoring data. See <<monitoring-settings>>.
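
For example, collection can be enabled cluster-wide through the cluster update
settings API. This is a sketch; the setting can also be placed in
`elasticsearch.yml` if you prefer static configuration:

[source,console]
----
PUT _cluster/settings
{
  "persistent": {
    "xpack.monitoring.collection.enabled": true
  }
}
----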
Once data is received, it is forwarded to the exporters
to be routed to the monitoring cluster like all monitoring data.
WARNING: Because this stack-level "collector" lives outside of the collection
interval of {es} {monitor-features}, it is not impacted by the
`xpack.monitoring.collection.interval` setting. Therefore, data is passed to the
exporters whenever it is received. This behavior can result in indices for
{kib}, Logstash, or Beats being created somewhat unexpectedly.
While the monitoring data is collected and processed, some production cluster
metadata is added to incoming documents. This metadata enables {kib} to link the
monitoring data to the appropriate cluster. If this linkage is unimportant to
the infrastructure that you're monitoring, it might be simpler to configure
Logstash and Beats to report monitoring data directly to the monitoring cluster.
This approach also prevents the production cluster from incurring extra overhead
related to monitoring data, which can be very useful when there are a large
number of Logstash nodes or Beats.
For more information about typical monitoring architectures, see
<<how-monitoring-works>>.