
[role="xpack"]
[testenv="basic"]
[[es-monitoring-collectors]]
== Collectors

Collectors, as their name implies, collect things. Each collector runs once per
collection interval to obtain data from the public APIs in {es} and {xpack}
that it chooses to monitor. When data collection is finished, the data is
handed in bulk to the <<es-monitoring-exporters,exporters>> to be sent to the
monitoring clusters. Regardless of the number of exporters, each collector only
runs once per collection interval.

There is only one collector per data type gathered. In other words, each
monitoring document comes from a single collector rather than being merged from
multiple collectors. {monitoring} for {es} deliberately uses only a few
collectors, minimizing overlap between them for optimal performance.

Each collector can create zero or more monitoring documents. For example,
the `index_stats` collector collects all index statistics at the same time to
avoid many unnecessary calls.
[options="header"]
|=======================
| Collector | Data Types | Description

| Cluster Stats | `cluster_stats`
| Gathers details about the cluster state, including parts of the actual
cluster state (for example, `GET /_cluster/state`) and statistics about it (for
example, `GET /_cluster/stats`). This produces a single document type. In
versions prior to X-Pack 5.5, this was actually three separate collectors that
resulted in three separate types: `cluster_stats`, `cluster_state`, and
`cluster_info`. In 5.5 and later, all three are combined into `cluster_stats`.
+
This only runs on the _elected_ master node and the data collected
(`cluster_stats`) largely controls the UI. When this data is not present, it
indicates either a misconfiguration on the elected master node, timeouts related
to the collection of the data, or issues with storing the data. Only a single
document is produced per collection.
| Index Stats | `indices_stats`, `index_stats`
| Gathers details about the indices in the cluster, both in summary and
individually. This creates many documents that represent parts of the index
statistics output (for example, `GET /_stats`).
+
This information only needs to be collected once, so it is collected on the
_elected_ master node. The most common failure for this collector relates to an
extreme number of indices -- and therefore time to gather them -- resulting in
timeouts. One summary `indices_stats` document is produced per collection and
one `index_stats` document is produced per index, per collection.

| Index Recovery | `index_recovery`
| Gathers details about index recovery in the cluster. Index recovery represents
the assignment of _shards_ at the cluster level. If an index is not recovered,
it is not usable. This also corresponds to shard restoration via snapshots.
+
This information only needs to be collected once, so it is collected on the
_elected_ master node. The most common failure for this collector relates to an
extreme number of shards -- and therefore time to gather them -- resulting in
timeouts. This creates a single document that contains all recoveries by
default, which can be quite large, but it gives the most accurate picture of
recovery in the production cluster.
| Shards | `shards`
| Gathers details about all _allocated_ shards for all indices, including, in
particular, the node to which each shard is allocated.
+
This information only needs to be collected once, so it is collected on the
_elected_ master node. Unlike most other collectors, this collector uses the
local cluster state to get the routing table, so it is not subject to network
timeout issues. Each shard is represented by a separate monitoring document.
| Jobs | `job_stats`
| Gathers details about all machine learning job statistics (for example,
`GET /_ml/anomaly_detectors/_stats`).
+
This information only needs to be collected once, so it is collected on the
_elected_ master node. However, for the master node to be able to perform the
collection, the master node must have `xpack.ml.enabled` set to `true` (the
default) and a license level that supports {ml}.
| Node Stats | `node_stats`
| Gathers details about the running node, such as memory utilization and CPU
usage (for example, `GET /_nodes/_local/stats`).
+
This runs on _every_ node with {monitoring} enabled. One common failure is a
timeout of the node stats request caused by too many segment files: the
collector spends so long waiting for the file system stats to be calculated
that the request eventually times out. A single `node_stats` document is
created per collection. This is collected per node to help discover issues with
nodes communicating with each other, but not with the monitoring cluster (for
example, intermittent network issues or memory pressure).
|=======================
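The statistics behind these collectors come from ordinary public APIs, so
comparable data can be inspected by hand. The requests below mirror the
endpoints named in the table; `GET /_recovery` is an assumption for the Index
Recovery collector, which the table does not name explicitly, and the documents
the collectors produce are derived from, not identical to, these raw responses:

[source,console]
----
GET /_cluster/stats
GET /_stats
GET /_recovery
GET /_nodes/_local/stats
----

These correspond roughly to the Cluster Stats, Index Stats, Index Recovery, and
Node Stats collectors, respectively.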
{monitoring} uses a single-threaded scheduler to run the collection of {es}
monitoring data by all of the appropriate collectors on each node. This
scheduler is managed locally by each node and its interval is controlled by
the `xpack.monitoring.collection.interval` setting, which defaults to 10
seconds (`10s`), at either the node or cluster level.
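For example, the interval can be changed per node in `elasticsearch.yml` (a
minimal sketch; the `30s` value is purely illustrative, not a recommendation):

[source,yaml]
----
# elasticsearch.yml
# Collect monitoring data every 30 seconds instead of the default 10s.
xpack.monitoring.collection.interval: 30s
----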
Fundamentally, each collector works on the same principle. Per collection
interval, each collector is checked to see whether it should run and then the
appropriate collectors run. The failure of an individual collector does not
impact any other collector.

Once collection has completed, all of the monitoring data is passed to the
exporters to route the monitoring data to the monitoring clusters.

If gaps exist in the monitoring charts in {kib}, it is typically because either
a collector failed or the monitoring cluster did not receive the data (for
example, it was being restarted). In the event that a collector fails, a logged
error should exist on the node that attempted to perform the collection.
NOTE: Collection is currently done serially, rather than in parallel, to avoid
extra overhead on the elected master node. The downside to this approach
is that collectors might observe a different version of the cluster state
within the same collection period. In practice, this does not make a
significant difference and running the collectors in parallel would not
prevent such a possibility.

For more information about the configuration options for the collectors, see
<<monitoring-collection-settings>>.
[float]
[[es-monitoring-stack]]
=== Collecting data from across the Elastic Stack

{monitoring} in {es} also receives monitoring data from other parts of the
Elastic Stack. In this way, it serves as an unscheduled monitoring data
collector for the stack.
By default, data collection is disabled. {es} monitoring data is not
collected and all monitoring data from other sources such as {kib}, Beats, and
Logstash is ignored. You must set `xpack.monitoring.collection.enabled` to
`true` to enable the collection of monitoring data. See <<monitoring-settings>>.
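In versions where `xpack.monitoring.collection.enabled` is a dynamic cluster
setting, one way to enable collection is through the cluster settings API (a
minimal sketch):

[source,console]
----
PUT /_cluster/settings
{
  "persistent": {
    "xpack.monitoring.collection.enabled": true
  }
}
----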
Once data is received, it is forwarded to the exporters
to be routed to the monitoring cluster like all monitoring data.
WARNING: Because this stack-level "collector" lives outside of the collection
interval of {monitoring} for {es}, it is not impacted by the
`xpack.monitoring.collection.interval` setting. Therefore, data is passed to the
exporters whenever it is received. This behavior can result in indices for
{kib}, Logstash, or Beats being created somewhat unexpectedly.
While the monitoring data is collected and processed, some production cluster
metadata is added to incoming documents. This metadata enables {kib} to link the
monitoring data to the appropriate cluster. If this linkage is unimportant to
the infrastructure that you're monitoring, it might be simpler to configure
Logstash and Beats to report monitoring data directly to the monitoring cluster.
This approach also avoids adding monitoring-related overhead to the production
cluster, which can be very useful when there are a large number of Logstash
nodes or Beats.
For more information about typical monitoring architectures, see
{xpack-ref}/how-monitoring-works.html[How Monitoring Works].