Steady memory leak in VPA recommender #6368
Comments
Hey @vkhacharia @DLakin01, thanks for bringing this up! To some extent this behavior is expected, and given only these graphs it is hard to tell whether it is normal or not. Even with memory saver mode enabled, some growth in memory is expected.
So if you're rolling approximately the same number of times per week, your memory is expected to grow for ~2 weeks. If you're adding Containers and don't have memory saver mode enabled, memory will grow with every Container. If all of those parameters are controlled and you still see memory growth, I guess this really is a memory leak that shouldn't happen.
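For reference, one way to check whether memory saver mode is active is to inspect the recommender's container args; the namespace and deployment name below are assumptions based on the default VPA manifests and may differ per installation:

```sh
# Print the args passed to the recommender container and look for --memory-saver=true.
# Namespace and deployment name are assumptions based on the default VPA manifests.
kubectl -n kube-system get deployment vpa-recommender \
  -o jsonpath='{.spec.template.spec.containers[0].args}'
```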
@voelzmo Thanks for the quick response. I wanted to try it now, but noticed that I am on k8s version 1.24, which is compatible with version 0.11 of the VPA recommender. I don't see the memory saver parameter there.
Hey @vkhacharia, thanks for your efforts! VPA 0.11.0 also has the memory saver flag, so you can still turn it on.
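For completeness, here is a minimal sketch of what enabling memory saver mode can look like in the recommender Deployment; the container name and image tag are assumptions taken from the default manifests, not from this issue:

```yaml
# Excerpt of a vpa-recommender Deployment spec (container name and image are
# assumptions); the relevant piece is the --memory-saver arg.
spec:
  template:
    spec:
      containers:
        - name: recommender
          image: registry.k8s.io/autoscaling/vpa-recommender:0.11.0
          args:
            - --memory-saver=true
```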
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/area vertical-pod-autoscaler
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned". In response to this: /close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Which component are you using?:
vertical-pod-autoscaler, recommender only
What version of the component are you using?
0.14.0
What k8s version are you using (kubectl version)?:
1.26
What environment is this in?:
AWS EKS, multiple clusters and accounts, multiple types of applications running on the cluster
What did you expect to happen?:
The VPA recommender should run at more or less the same memory level throughout the lifetime of a particular pod.
What happened instead?:
There is a steady memory leak that is especially visible over a period of days, as seen here in a screen capture of our DataDog:
The upper lines with the steeper slope are from our large multi-tenant clusters, but the smaller clusters also experience the leak, albeit more slowly. If left alone, the memory will reach 200% of requests before the pod gets kicked. The recommender in the largest cluster is tracking 3161 PodStates at the time of creating this issue.
How to reproduce it (as minimally and precisely as possible):
Not sure how reproducible the issue is outside of running VPA in a large cluster with > 3000 pods and waiting several days to see if the memory creeps up.
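As a rough way to watch for the creep without a metrics platform, one could periodically sample the recommender's memory via metrics-server; the namespace and label selector here are assumptions and depend on how the recommender was deployed:

```sh
# Sample the recommender's memory usage every 10 minutes (requires metrics-server).
# Namespace and label selector are assumptions; adjust to your deployment.
while true; do
  date
  kubectl -n kube-system top pod -l app=vpa-recommender
  sleep 600
done
```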
Anything else we need to know?:
We haven't yet created any VPA objects to generate recommendations; we're waiting until a future sprint to begin rolling those out.
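Worth noting in relation to the memory saver discussion above: with that mode enabled, the recommender only tracks pods matched by a VPA object, so memory usage should stay roughly flat until VPAs are rolled out. When that time comes, a minimal recommendation-only VPA looks roughly like this (names are placeholders, not from this issue):

```yaml
# Minimal recommendation-only VPA; metadata and targetRef names are placeholders.
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: example-app
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-app
  updatePolicy:
    updateMode: "Off"
```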