Resync the workload resource values upon LimitRange changes #611
Comments
I prefer option 1. Two thoughts:
+1
Considering that already-created pods will not be affected by LimitRange changes, I prefer to handle this simply unless we get more feedback or requirements:
A special case: when a workload is admitted but the job is still pending unsuspension (or its pods are waiting to be created) and we update the LimitRange, the result might be wrong. I also didn't buy the idea of designing the API for debugging, but it can be a side benefit. Treat this as option 4?
Yes, in general we need to ignore changes in LimitRange after the workload is admitted. We might still get cases where the pods don't match the calculated requests, but we can ignore that for now assuming LimitRanges aren't supposed to change often.
Yes, that is the point of the proposal. The question is how we detect the changes without having to look at the original object (Job, MPIJob). If we have to look at the original object, every job-controller implementation has to implement watching LimitRanges. If, instead, we just store the original requests in the Workload, we only need to do the checks for LimitRanges in one place: the workload-controller. The rest of the controllers just have to do a simple semantic equality check.
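To illustrate, here is a minimal sketch (not Kueue's actual code; the function and variable names are hypothetical) of what that per-job check could reduce to once the originally computed requests are stored in the Workload:

```go
// Sketch only: with the original requests recorded in the Workload, a job
// controller never needs to watch LimitRanges; it only compares the requests
// it derives from the job's pod template against what the Workload stores.
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/equality"
)

// requestsMatch reports whether the requests currently derived from the job's
// pod template are semantically equal to the requests recorded in the Workload
// when it was created. Re-applying LimitRange defaults stays in the
// workload-controller, which is the only place that watches LimitRanges.
func requestsMatch(jobRequests, workloadRequests corev1.ResourceList) bool {
	return equality.Semantic.DeepEqual(jobRequests, workloadRequests)
}
```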
Yes, this is what came to my mind first.
Awesome, we are all in agreement. When you have a chance, PTAL at the WIP #600.
/assign
What would you like to be added:
This is a follow-up of #541, with the purpose of deciding on a way to recompute the resource values of a workload in case the cluster/namespace LimitRanges change while the workload is waiting.
Why is this needed:
Currently I can see three options for this.
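For context, a rough sketch of what such a resync could look like, under stated assumptions: controller-runtime style client wiring, an assumed Kueue Workload import path, and a hypothetical workloadIsAdmitted helper standing in for Kueue's real admission check. On a LimitRange change, only not-yet-admitted Workloads in that namespace are re-enqueued, matching the point in the comments above about ignoring LimitRange changes after admission.

```go
// Sketch only: re-enqueue pending Workloads when a LimitRange changes.
// The Workload import path, the "Admitted" condition name, and the function
// names are assumptions, not Kueue's actual implementation.
package sketch

import (
	"context"

	apimeta "k8s.io/apimachinery/pkg/api/meta"
	"k8s.io/apimachinery/pkg/types"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/reconcile"

	kueue "sigs.k8s.io/kueue/apis/kueue/v1beta1" // assumed API group/version
)

// limitRangeToWorkloads maps a LimitRange event to reconcile requests for
// every Workload in the same namespace that has not been admitted yet.
// Admitted workloads are skipped: their pods were (or will be) created with
// the LimitRange that was in effect at admission time.
func limitRangeToWorkloads(ctx context.Context, c client.Client, obj client.Object) []reconcile.Request {
	var workloads kueue.WorkloadList
	if err := c.List(ctx, &workloads, client.InNamespace(obj.GetNamespace())); err != nil {
		return nil
	}
	var reqs []reconcile.Request
	for _, wl := range workloads.Items {
		if workloadIsAdmitted(&wl) {
			continue
		}
		reqs = append(reqs, reconcile.Request{
			NamespacedName: types.NamespacedName{Namespace: wl.Namespace, Name: wl.Name},
		})
	}
	return reqs
}

// workloadIsAdmitted is a simplified stand-in for Kueue's own admission
// check: it only looks for a condition of type "Admitted" set to True.
func workloadIsAdmitted(wl *kueue.Workload) bool {
	return apimeta.IsStatusConditionTrue(wl.Status.Conditions, "Admitted")
}
```

This keeps the LimitRange watch in a single place (the workload-controller) rather than in every job integration.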
Completion requirements:
This enhancement requires the following artifacts:
The artifacts should be linked in subsequent comments.
cc: @alculquicondor, @mwielgus