scale-down-delay-after-delete parameter doesn't work properly #3568
Comments
Should we be returning
Thanks for pointing this out Marwan! Although I'd stick to using
The bug should be fixed by #3570.
@MaciekPytel - maybe you know the answer to the above question?
@ryaneorth we can prepare the cherry-pick PRs. I'm guessing we'll have one set of patch releases before K8s 1.20 and another around 1.20.
Sounds great, thanks @marwanad. I'm happy to perform the cherry-picks if you'd like - let me know!
@ryaneorth that would be great, thanks!
All of the above cherry-picks are complete. @MaciekPytel - do you have any information as to when the next patches will be released?
@ryaneorth keep an 👀 on #3611 |
I believe this started around the time of the switch to the new scale down processor.
The parameter goes into effect and puts scale-down in cooldown mode based on `lastScaleDownDeleteTime`:
autoscaler/cluster-autoscaler/core/static_autoscaler.go, line 499 in 774390e
However, this is only set when the scale down status result is `ScaleDownNodeDeleted`:
autoscaler/cluster-autoscaler/core/static_autoscaler.go, lines 532 to 535 in 774390e
And it seems that with the switch to async deletions, this is no longer set:
autoscaler/cluster-autoscaler/core/scale_down.go, lines 943 to 964 in 3071905
/kind bug