Autoscaler not scaling down pods when `kind` of the resource is the same between controllers #5977
Comments
Your suspicion seems correct to me (not an expert, I just stumbled over this for other reasons): see `autoscaler/cluster-autoscaler/utils/drain/drain.go`, lines 187 to 197 at commit f9a7c7f.
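To make the suspected behaviour concrete, here is a small stdlib-only Go sketch (not the actual drain.go code; the `OwnerReference` struct and function names are hypothetical) contrasting a kind-only owner match with one that also checks the API group parsed from `apiVersion`:

```go
package main

import (
	"fmt"
	"strings"
)

// OwnerReference mirrors only the fields relevant here (hypothetical
// struct, not the real Kubernetes type).
type OwnerReference struct {
	APIVersion string // e.g. "apps.kruise.io/v1alpha1" or "apps/v1"
	Kind       string // e.g. "DaemonSet"
	Name       string
}

// matchesByKindOnly reproduces the suspected behaviour: any owner with
// Kind "DaemonSet" is treated as if it were a built-in DaemonSet.
func matchesByKindOnly(ref OwnerReference) bool {
	return ref.Kind == "DaemonSet"
}

// matchesBuiltInDaemonSet additionally checks the API group, so a
// Kruise Advanced DaemonSet is not confused with the built-in controller.
func matchesBuiltInDaemonSet(ref OwnerReference) bool {
	group := ""
	if i := strings.Index(ref.APIVersion, "/"); i >= 0 {
		group = ref.APIVersion[:i]
	}
	return ref.Kind == "DaemonSet" && group == "apps"
}

func main() {
	kruise := OwnerReference{APIVersion: "apps.kruise.io/v1alpha1", Kind: "DaemonSet", Name: "example-advanced-daemonset"}
	builtin := OwnerReference{APIVersion: "apps/v1", Kind: "DaemonSet", Name: "node-exporter"}

	fmt.Println(matchesByKindOnly(kruise), matchesBuiltInDaemonSet(kruise))   // true false
	fmt.Println(matchesByKindOnly(builtin), matchesBuiltInDaemonSet(builtin)) // true true
}
```

With the kind-only check, the Kruise-owned pod matches and the autoscaler then goes looking for a built-in DaemonSet that doesn't exist; the group-aware check would reject it up front.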
haha, you need to write custom code to avoid this situation
Maybe the `skipNodesWithCustomControllerPods` option, available in newer versions, would help.
This won't help; the problem isn't with a custom controller, it's with a controller with the same `kind`.
cluster-autoscaler version: 1.25.0
Kubernetes version: v1.25.10
Deployed with kops on AWS

We are using OpenKruise and its Advanced DaemonSet. Autoscaler seems to detect it as a regular `DaemonSet` and tries to find the corresponding `DaemonSet` for the workload, which fails. This prevents scale-down from occurring. I'm not sure whether it's the root cause, but I suspect that Autoscaler doesn't respect the API group. The pods created by an Advanced DaemonSet have the following `ownerReferences`:

The `kind` is `DaemonSet`, but the `apiVersion` is `apps.kruise.io/v1alpha1`. So it might be that the `apiVersion` is being ignored, and autoscaler just looks for a regular `DaemonSet` named `example-advanced-daemonset`, which obviously doesn't exist. Here are the relevant logs from autoscaler:
Can someone advise whether it is expected that the `apiVersion` is ignored or not respected, or am I missing something?