[VPA] Pod scheduled with memory limit above limitrange #3319
Comments
Thanks for reporting! @jbartosik could you take a look? I think you have the most context. @rhysemmas Can you let us know which version of VPA you are using?
Hey @bskiba @jbartosik, thanks for taking a look at this! We're using version 0.8.0.
I took a look at this. I added a test with limit ranges and pods set up as described in this issue, and in it I get capped memory recommendations for the pods which add up to one byte more than the limit range allows. The solution would be to round memory recommendations down to whole bytes (1B).
With minimums we should round up.
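To make the rounding point concrete, here is a minimal Go sketch (not VPA's actual capping code) of proportionally scaling per-container recommendations to fit a pod-level maximum. The numbers are toy values chosen so that nearest-byte rounding lands exactly one byte over the cap, while rounding down stays within it:

```go
package main

import (
	"fmt"
	"math"
)

// capToPodMax proportionally scales per-container memory recommendations so
// that their sum fits under the pod-level LimitRange maximum, then converts
// each scaled value to whole bytes. Rounding to the nearest byte can push the
// per-container sum one byte over the cap; rounding down cannot.
func capToPodMax(recs []float64, podMax float64, roundDown bool) (sum int64) {
	var total float64
	for _, r := range recs {
		total += r
	}
	for _, r := range recs {
		scaled := r * podMax / total
		if roundDown {
			sum += int64(math.Floor(scaled)) // sum stays <= podMax
		} else {
			sum += int64(math.Round(scaled)) // sum can reach podMax+1
		}
	}
	return sum
}

func main() {
	// Two equal containers against an odd pod maximum: each container scales
	// to exactly 50.5 bytes, so nearest rounding yields 51+51 = 102 bytes,
	// one byte over the 101-byte cap, while flooring yields 50+50 = 100.
	recs := []float64{60, 60}
	fmt.Println(capToPodMax(recs, 101, false)) // 102 (over the cap)
	fmt.Println(capToPodMax(recs, 101, true))  // 100 (within the cap)
}
```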
Hello @jbartosik @bskiba, I am facing the same issue again with the latest VPA version: pods fail to schedule because of the following error:
In my VPA configuration, I've given:
and my limit range looks like this:
VPA recommendations are:
It's probably the same issue @rhysemmas pointed out, but it doesn't seem to be fixed.
@jbartosik Can you take a look?
@surajnarwade, can you give me some more details? For example, what limits did VPA set for the pod?
@jbartosik
Hello @jbartosik, I am seeing this issue again. I thought VPA should respect the limit range and assign limits accordingly, right? Here's my error:
Here's the limitrange:
Here's the VPA definition:
Here's the VPA recommendation:
Hi there,
We're seeing an issue with pods that have more than one container for which VPA recommends and updates memory requests/limits. The pods are being scheduled with a total memory limit above the pod memory limit configured in the namespace's limitrange.
We don't see this issue when a pod has only one container; in that case VPA updates the memory limit to be within the limitrange. The issue only seems to occur when there are multiple containers in a pod whose requests/limits VPA updates.
Also, I should mention that we're not configuring any `minAllowed` or `maxAllowed` limits via container policies in the VPA resource policy, so we don't expect there to be any conflicts which would cause VPA to set limits above the limit range. In case it makes a difference, we are setting a request/limit ratio of 1:1 when initially deploying the pod.

When looking at the replicaset when the pod fails to schedule, we see that the pod is scheduled with a total memory limit which is 1 byte over the limit imposed by the limitrange.
E.g.:
- The namespace limitrange has a pod memory limit of 115Gi (== 123480309760 bytes; see the check after this list).
- VPA recommends memory requests for two containers in a pod which together total above the limitrange limit (as expected).
- The pod is updated with requests/limits that attempt to fall in line with the limitrange, but fails to schedule because the total pod memory limit is 1 byte above the limitrange.
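For reference, the 115Gi figure in the first item can be double-checked with Kubernetes' apimachinery resource package (a standalone snippet; it assumes a Go module with k8s.io/apimachinery on the import path):

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// 115Gi = 115 * 1024^3 bytes.
	max := resource.MustParse("115Gi")
	fmt.Println(max.Value()) // 123480309760
}
```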
As the pods are consistently scheduled with only 1 byte over the limitrange, it makes me think there may be an error in the calculation that updates requests/limits for multiple containers in a pod, while trying to keep the total requests of the pod within the limitrange. I'm not sure though, so if I am missing something I'd be really grateful for any help!
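On the 1:1 ratio point above: VPA scales container limits to preserve the original request:limit ratio, so with a 1:1 ratio the limit tracks the capped request exactly. The helper below is a hypothetical sketch of that arithmetic, not VPA's actual code, but it shows how a request that lands one byte high carries the overshoot straight into the limit:

```go
package main

import "fmt"

// scaleLimitForRatio is a hypothetical sketch (not VPA's actual code) of
// ratio-preserving limit scaling: the new limit keeps the container's
// original request:limit ratio.
func scaleLimitForRatio(newRequest, origRequest, origLimit int64) int64 {
	return int64(float64(newRequest) * float64(origLimit) / float64(origRequest))
}

func main() {
	// With a 1:1 request:limit ratio, the limit exactly tracks the capped
	// request, so a request one byte over the cap yields a limit one byte
	// over as well.
	fmt.Println(scaleLimitForRatio(123480309761, 1000, 1000)) // 123480309761
}
```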