Webhook wrongly patches "request" into a shared pool user's manifest #14
Describe the bug
The mutating webhook should only add a CPU limit to a Pod's resources when the Pod asks for shared pool devices; it should not add a CPU request.
Reasoning
Limits are only added to the Pod to make default Kubernetes logic provision CFS quotas for the requested shared CPU slices.
However, also adding a request to the Pod's manifest comes with a major side-effect: Kubernetes does not know that the requesting Pod will not actually consume the Node's Kubelet-allocatable CPU resources, and unnecessarily decreases Node Allocatable by the patched amount.
This effectively decreases the usable capacity of the default pool whenever a Pod using the shared pool is scheduled to the Node.
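To illustrate the intended behavior, the following is a minimal sketch (not the actual CPU-Pooler webhook code) of a mutating webhook JSON patch that adds only the CPU limit for a container requesting shared pool devices. The container index and the millicore amount are assumed inputs here; the real webhook derives them from the Pod spec.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// jsonPatchOp is a single RFC 6902 JSON Patch operation, as returned
// by a mutating admission webhook.
type jsonPatchOp struct {
	Op    string      `json:"op"`
	Path  string      `json:"path"`
	Value interface{} `json:"value"`
}

// buildCPULimitPatch sets only spec.containers[i].resources.limits.cpu.
// Intentionally no patch is added for .../resources/requests/cpu:
// patching a request would make the scheduler and Kubelet subtract it
// from Node Allocatable, shrinking the default pool.
func buildCPULimitPatch(containerIdx int, cpuMilli string) ([]byte, error) {
	patch := []jsonPatchOp{
		{
			Op:    "add",
			Path:  fmt.Sprintf("/spec/containers/%d/resources/limits/cpu", containerIdx),
			Value: cpuMilli,
		},
	}
	return json.Marshal(patch)
}

func main() {
	p, _ := buildCPULimitPatch(0, "200m")
	fmt.Println(string(p))
}
```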
To Reproduce
Steps to reproduce the behavior:
Expected behavior
3. Upon the instantiation of the Pod, the Kubelet-recognized Node Allocatable CPU pool remains the same, and is not decreased by 200m.
Additional info
Long term we should probably set the quotas on our own, to avoid presenting an unrealistic picture through the Kubernetes interfaces. In the short term, setting the limit via K8s is still acceptable, as it has a much smaller effect on the state of the cluster compared to setting the request.
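As a hedged sketch of the "set the quotas on our own" idea: the CFS quota corresponding to a shared pool CPU amount can be computed from millicores, assuming the default 100ms CFS period. This is illustrative only; it is not how CPU-Pooler currently implements it.

```go
package main

import "fmt"

// Default kernel cpu.cfs_period_us value (100ms).
const cfsPeriodUs = 100000

// cfsQuotaUs converts CPU millicores to a cpu.cfs_quota_us value:
// 1000m equals one full core, i.e. one full CFS period.
func cfsQuotaUs(milliCPU int64) int64 {
	return milliCPU * cfsPeriodUs / 1000
}

func main() {
	// 200m of shared pool CPU -> 20000us of quota per 100000us period.
	fmt.Println(cfsQuotaUs(200))
}
```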