While working on #3315, I noticed that we only include the input / pipeline metrics in log messages, rather than in the metric data streams. We should expose this data as metrics instead, in order to optimize storage costs via TSDB and avoid sparse fields on logs.
This will also help users find these metrics more easily. One example is the monitoring.metrics.libbeat.pipeline.* metrics, but there may be others that would be useful as well.
As part of this, we may also want to remove the "Non-zero metrics in the last 30s" logs from Beats when running under Agent and instead rely on metrics collection.
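For reference, these pipeline metrics currently only appear nested inside the periodic metrics log entries rather than in a metrics data stream. A heavily abridged illustration of that shape is below; the exact field set varies by Beat and version, and the values are made up:

```yaml
# Abridged, illustrative shape of one periodic metrics log entry from a Beat.
# Field names follow the libbeat monitoring snapshot; values are invented.
message: "Non-zero metrics in the last 30s"
monitoring:
  metrics:
    libbeat:
      pipeline:
        events:
          published: 120   # events handed to the output in the interval
          total: 120
        queue:
          acked: 120       # events acknowledged by the output
```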
As part of this, we may also want to remove the "Non-zero metrics in the last 30s" logs from Beats when running under Agent and instead rely on metrics collection.
The nice thing about this is that it keeps the metrics in the diagnostics as part of the logs (with history). So I'd want to keep these log entries, but we could stop shipping them to Fleet. This is something we can do with a drop processor in the monitoring filestream instance.
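A minimal sketch of what that could look like, assuming the standard Beats drop_event processor and that the periodic entries can still be matched on their "Non-zero metrics" message; where exactly this gets wired into the Agent-generated monitoring filestream configuration is left open here:

```yaml
# Sketch only: drop the periodic Beats metrics log entries from the monitoring
# filestream input so they are not shipped to Fleet, while they remain in the
# local log files (and therefore in diagnostics).
processors:
  - drop_event:
      when:
        regexp:
          message: "Non-zero metrics in the last"
```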