
Webhook wrongly patches "request" into a shared pool user's manifest #14

Closed
Levovar opened this issue May 22, 2019 · 0 comments · Fixed by #16
Levovar commented May 22, 2019

Describe the bug
The mutating webhook should only add a limit to a Pod's CPU resource field when the Pod asks for shared pool devices; it should not add a request.

Reasoning
The limit is added to the Pod only so that the default Kubernetes logic provisions CFS quotas for the requested shared CPU slices.
Also adding a request to the Pod manifest, however, has a major side-effect: Kubernetes does not know that the requesting Pod will not actually consume the Node's Kubelet Allocatable CPU, so it unnecessarily decreases Allocatable by the patched amount.
This effectively shrinks the usable capacity of the default pool whenever a Pod using the shared pool is scheduled to the Node.
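
For illustration, a minimal sketch (in Go, not the actual webhook code) of the patch shape this implies for a container asking for shared-pool CPU: add a CPU limit so the Kubelet provisions a CFS quota, and pin the CPU request to "0m" so the scheduler subtracts nothing from Node Allocatable. Simply omitting the request is not enough, because Kubernetes defaults a missing CPU request to the limit. The patch paths and the 200m value are illustrative only.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// patchOp is a single RFC 6902 JSON Patch operation, the format a mutating
// admission webhook returns in its admission response.
type patchOp struct {
	Op    string      `json:"op"`
	Path  string      `json:"path"`
	Value interface{} `json:"value,omitempty"`
}

func main() {
	patch := []patchOp{
		// Add a CPU limit matching the requested shared-pool slice (e.g. 200m),
		// so the default Kubernetes logic provisions a CFS quota for it.
		{Op: "add", Path: "/spec/containers/0/resources/limits/cpu", Value: "200m"},
		// Pin the CPU request to "0m" instead of omitting it: a missing request
		// would be defaulted to the limit, and any non-zero request is
		// subtracted from the Node's Allocatable CPU by the scheduler.
		{Op: "add", Path: "/spec/containers/0/resources/requests/cpu", Value: "0m"},
	}

	out, _ := json.MarshalIndent(patch, "", "  ")
	fmt.Println(string(out))
}
```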

To Reproduce
Steps to reproduce the behavior:

  1. Create shared and default pool on a Node
  2. Decrease Kubelet Node Allocatable CPU capacity to match the size of the default pool
  3. Create a Pod asking for e.g. a 200m slice from the Node's shared pool (see the manifest sketch after the Expected behavior section)

Expected behavior
3. Upon instantiation of the Pod, the Node Allocatable CPU recognized by the Kubelet remains the same and is not decreased by 200m
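
The Pod used in step 3 would look roughly like the sketch below. The shared pool is assumed to be exposed as an extended (device plugin) resource; the nokia.k8s.io/shared resource name is a placeholder, and treating the quantity 200 as 200 millicores' worth of the shared pool is an assumption about the pool's unit convention.

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Hypothetical reproduction Pod: the shared pool is consumed through an
	// extended (device plugin) resource, not through the native "cpu" resource.
	pod := map[string]interface{}{
		"apiVersion": "v1",
		"kind":       "Pod",
		"metadata":   map[string]interface{}{"name": "shared-pool-user"},
		"spec": map[string]interface{}{
			"containers": []interface{}{
				map[string]interface{}{
					"name":  "app",
					"image": "busybox",
					"resources": map[string]interface{}{
						// Placeholder resource name; 200 assumed to mean 200m
						// of the shared pool.
						"requests": map[string]interface{}{"nokia.k8s.io/shared": "200"},
						"limits":   map[string]interface{}{"nokia.k8s.io/shared": "200"},
					},
				},
			},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```

After creating such a Pod, `kubectl describe node <node>` should show the Node's allocated cpu requests unchanged; only the extended-resource accounting should move.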

Additional info
Long term we should probably set the quotas on our own to avoid presenting an unrealistic picture on the Kubernetes interfaces. In the short term, setting the limit via K8s is still okay, as it has a much smaller effect on the state of the cluster compared to setting the request.
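
For background on why the limit alone achieves the throttling we want: the Kubelet turns a CPU limit into a CFS quota of limit-in-cores × period (the default cpu.cfs_period_us is 100000µs), so a 200m limit becomes a 20000µs quota per period. If we ever set the quotas ourselves, the same arithmetic would apply; a minimal sketch:

```go
package main

import "fmt"

// cfsQuotaMicros converts a CPU limit given in millicores into the
// corresponding cpu.cfs_quota_us value for a given CFS period. This mirrors
// the conversion the Kubelet performs for CPU limits.
func cfsQuotaMicros(limitMilliCPU, periodMicros int64) int64 {
	return limitMilliCPU * periodMicros / 1000
}

func main() {
	const defaultPeriodMicros = 100000 // 100ms, the default cpu.cfs_period_us

	// A 200m shared-pool slice -> 20000us of CPU time per 100000us period.
	fmt.Println(cfsQuotaMicros(200, defaultPeriodMicros))
}
```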

balintTobik added a commit to Levovar/CPU-Pooler that referenced this issue Jun 7, 2019
- Issue nokia#10: First check there is no more than 1 shared pool, and if it's correct, start CPU DP server and register resources
- Issue nokia#14: set container's cpu request to "0m" in case of shared pool
balintTobik added a commit to Levovar/CPU-Pooler that referenced this issue Jun 7, 2019
- Issue nokia#10: First check there is no more than 1 shared pool, and if it's correct, start CPU DP server and register resources
- Issue nokia#14: set container's cpu request to "0m" in case of shared pool
balintTobik added a commit to Levovar/CPU-Pooler that referenced this issue Jun 12, 2019
- Issue nokia#10: First check there is no more than 1 shared pool, and if it's correct, start CPU DP server and register resources
- Issue nokia#14: set container's cpu request to "0m" in case of shared pool
balintTobik added a commit to Levovar/CPU-Pooler that referenced this issue Jun 12, 2019
- Issue nokia#10: First check there is no more than 1 shared pool, and if it's correct, start CPU DP server and register resources
- Issue nokia#14: set container's cpu request to "0m" in case of shared pool
TimoLindqvist pushed a commit that referenced this issue Jun 14, 2019
- Issue #10: First check there is no more than 1 shared pool, and if it's correct, start CPU DP server and register resources
- Issue #14: set container's cpu request to "0m" in case of shared pool
nxsre pushed a commit to nxsre/CPU-Pooler that referenced this issue Apr 14, 2024
- Issue nokia#10: First check there is no more than 1 shared pool, and if it's correct, start CPU DP server and register resources
- Issue nokia#14: set container's cpu request to "0m" in case of shared pool