[Agent] Implement leader election on k8s for Elastic Agent #24267
Pinging @elastic/integrations (Team:Integrations)
Pinging @elastic/agent (Team:Agent)
@ChrsMark I am trying to understand the benefits of this feature. Without it, users need to deploy a DaemonSet to collect worker node metrics and logs, plus a separate, single-Pod Deployment per cluster to collect cluster-level metrics and master node metrics and logs. So the only con is that extra Deployment. Is that right, or are there additional cons if we don't support this feature?
At first I was going to say this should already be handled by the coordinator with Fleet Server, but that would prevent it from working in standalone mode. Since we need standalone mode to work, I think this is a good proposal and something we should do. Needing a specific Agent running a specific ConfigMap just to get this information makes the configuration more complicated; having this would simplify things greatly.
@exekias Are you able to confirm the benefits of this feature? Is it more than what I stated in the comment above?
Your statement is accurate @mukeshelastic. I would say leader election is not a blocker / must-have. It's still a desirable feature to fit in once we have solved the main parts, as in our experience the extra Deployment needed causes confusion and complicates the architecture.
In Beats we have a leader election feature for k8s (#19731), which makes it possible to avoid deploying a singleton instance of Metricbeat via a k8s Deployment for cluster-wide metrics collection. The implementation is based on client-go's leader election package: https://pkg.go.dev/k8s.io/client-go/tools/leaderelection
It would be nice to support something similar in Elastic Agent too. What first comes to my mind is a separate provider called `kubernetes_leaderelection`, since its implementation is currently not related to the `kubernetes` provider (no resource discovery is required). An example configuration would look like this:
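A minimal sketch of what such a provider configuration might look like in a standalone Agent policy; the `enabled` flag and the lease name value are illustrative assumptions, not a confirmed final schema:

```yaml
# Sketch of a standalone Elastic Agent policy snippet enabling the
# proposed provider. Names follow the proposal in this issue; the
# lease name is only illustrative.
providers:
  kubernetes_leaderelection:
    enabled: true
    # Name of the Lease lock object the Agent Pods will compete for.
    leader_lease: leader-election-elastic-agent
```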
`leader_lease`: this will be the name of the lock lease. One can monitor the status of the lease with `kubectl describe lease leader-election-elastic-agent`. Users can then define inputs that would be enabled only by the leader Pod with:
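As a rough sketch, an input gated on the variable the provider would expose might look like the following; the data stream, metricset, host, and exact condition syntax are assumptions based on standalone Agent policies, not part of this proposal's text:

```yaml
# Sketch: a cluster-level metrics input that only the elected leader runs.
# The condition references the variable the proposed provider would set.
inputs:
  - name: kubernetes-cluster-metrics
    type: kubernetes/metrics
    use_output: default
    streams:
      - data_stream:
          dataset: kubernetes.state_pod
        metricsets:
          - state_pod
        hosts:
          - 'kube-state-metrics:8080'
        condition: ${kubernetes_leaderelection.leader} == true
```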
The input will be enabled only by the Pod that acquires the lock, which will set `kubernetes_leaderelection.leader` to `true`. (The condition could be set at the input level too, I guess, so as to be the same for all state_* data streams.)
With this setup the Managed version could be simplified, since we would only have to deal with a DaemonSet with the same configuration across all Pods.
@blakerouse @ph @ruflin @masci @exekias @mukeshelastic let me know what you think.