MichaelKatsoulis changed the title from "[Metricbeat][Kubernetes] Share namespace/node watchers between state metricsets" to "[Metricbeat][Kubernetes] Refactor watchers used in Kubernetes metricsets for metadata enrichment" on Nov 30, 2023.
This is the kubernetes provider. The provider, like the add_kubernetes_metadata processor, starts watchers, but only 3: one for pod, one for node, and one for namespace. In other words, they are not per metricset.
Background
This issue was first detected when trying to stop node and namespace watchers from starting (issue is this one). At first, it seemed that we only had these watchers because of the add_resource_metadata and hints features, but on closer inspection we were actually starting watchers in many places.
Issue
All state_* metricsets use this function (beats/metricbeat/helper/kubernetes/state_metricset.go, line 46 in d9139c9), except state_container and state_resourcequota. This function adds a metadata enricher to each metricset by calling the NewResourceMetadataEnricher function (beats/metricbeat/helper/kubernetes/state_metricset.go, line 91 in d9139c9).
Also, the pod and node metricsets call NewResourceMetadataEnricher to enrich their events with metadata.
This enricher will create 3 watchers (beats/metricbeat/module/kubernetes/util/kubernetes.go, line 158 in d9139c9):
One for the resource of that metricset, one for node and one for namespace.
Since we can have multiple metricsets enabled, we will also have multiple node and namespace watchers running, one for each metricset.
To solve this, we need to find a way to share these watchers between the metricsets, similar to what we did to fetch metrics from KSM (PR) and Kubelet.
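As an illustration of what sharing could look like, here is a minimal Go sketch of a reference-counted registry that hands out one watcher per resource kind. All names here (Watcher, watcherRegistry, GetOrCreate, Release) are invented for the example and are not the actual beats or elastic-agent-autodiscover APIs.

```go
// Minimal sketch of sharing watchers between metricsets, assuming a
// reference-counted registry keyed by resource kind. All names are
// hypothetical and do not reflect the real beats APIs.
package main

import (
	"fmt"
	"sync"
)

// Watcher stands in for a Kubernetes resource watcher (pod, node, namespace, ...).
type Watcher struct{ kind string }

func (w *Watcher) Start() { fmt.Println("starting watcher for", w.kind) }
func (w *Watcher) Stop()  { fmt.Println("stopping watcher for", w.kind) }

// watcherRegistry hands out one shared watcher per resource kind and counts
// users, so a watcher is stopped only when the last metricset releases it.
type watcherRegistry struct {
	mu       sync.Mutex
	watchers map[string]*Watcher
	refs     map[string]int
}

func newWatcherRegistry() *watcherRegistry {
	return &watcherRegistry{watchers: map[string]*Watcher{}, refs: map[string]int{}}
}

// GetOrCreate returns the existing watcher for kind or starts a new one.
func (r *watcherRegistry) GetOrCreate(kind string) *Watcher {
	r.mu.Lock()
	defer r.mu.Unlock()
	if w, ok := r.watchers[kind]; ok {
		r.refs[kind]++
		return w
	}
	w := &Watcher{kind: kind}
	w.Start()
	r.watchers[kind] = w
	r.refs[kind] = 1
	return w
}

// Release drops one reference and stops the watcher when no metricset uses it.
func (r *watcherRegistry) Release(kind string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	if r.refs[kind] == 0 {
		return
	}
	r.refs[kind]--
	if r.refs[kind] == 0 {
		r.watchers[kind].Stop()
		delete(r.watchers, kind)
		delete(r.refs, kind)
	}
}

func main() {
	registry := newWatcherRegistry()
	// Two metricsets asking for the namespace watcher share a single instance.
	registry.GetOrCreate("namespace")
	registry.GetOrCreate("namespace")
	registry.Release("namespace")
	registry.Release("namespace") // last user gone -> the watcher is stopped
}
```

With something like this in place, the first metricset that needs the node or namespace watcher would start it, every other metricset would reuse it, and it would only be stopped when the last user is done.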
Additionally, from the way we create the resource-specific watcher (beats/metricbeat/module/kubernetes/util/kubernetes.go, line 454 in d9139c9), we can see that when the function is called from the state_node/state_namespace metricset (the resource is node/namespace), we end up creating yet another watcher for node/namespace. We need to, at least, add a condition to stop the duplicated watcher from starting.
On top of this, we always start the three watchers for every resource that NewResourceMetadataEnricher is called for. These watchers are created for metadata enrichment, and for some resources one or more of them is not relevant. For example, the node watcher is not needed when we want to enrich deployments or statefulsets. We need to start only the relevant watchers for each resource (a sketch of such a per-resource mapping follows at the end of this section).
Possibly this can be solved if we share the watchers between metricsets.
Better handling of watcher initialization will lead to fewer Kubernetes API calls and fewer related issues in large-scale clusters.
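To make the "only the relevant watchers" idea concrete, here is a hypothetical per-resource mapping. Only the deployment/statefulset example is taken from this issue; the function name extraWatchersFor and the remaining cases are assumptions for illustration, not the actual rules the refactor would use.

```go
package main

import "fmt"

// extraWatchersFor returns which additional watchers (besides the watcher for
// the resource itself) metadata enrichment would need for a given resource.
// Only the deployment/statefulset rule comes from this issue; the other cases
// are assumptions for illustration.
func extraWatchersFor(resource string) []string {
	switch resource {
	case "deployment", "statefulset":
		// Per the issue: the node watcher is not needed for these resources.
		return []string{"namespace"}
	case "node", "namespace":
		// The resource watcher already covers node/namespace, so avoid the
		// duplicate watcher the issue describes for state_node/state_namespace
		// (assumption about the exact rule).
		return nil
	default:
		// Assumption: namespaced resources such as pods use both node and
		// namespace metadata for enrichment.
		return []string{"node", "namespace"}
	}
}

func main() {
	for _, r := range []string{"pod", "deployment", "node"} {
		fmt.Printf("%s -> resource watcher + %v\n", r, extraWatchersFor(r))
	}
}
```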
Issues
Relates to elastic/elastic-agent#3801.
Current watchers
These are the current watchers when starting Metricbeat with default configurations. For example, like this.
For the state_* metricsets grouped as described above:
For all the other metricsets:
Expected watchers
Watchers needed for each metricset by default (without counting add_resource_metadata.deployment/cronjob):
Checks
PRs
Tasks