ResourceBinding aggregated status is not updated after unjoining a target cluster #768
Comments
I suppose this could be a bug rather than intended behavior, couldn't it? If so, I'm willing to fix it :)
Yeah, I think it's a bug too. But any idea about how to fix it?
Please see dddddai@981201c: ResourceBindingController should aggregate status whether or not it ensured work successfully. WDYT?
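For readers following along, here is a minimal sketch of the reordering that commit proposes, with stand-in functions rather than the actual Karmada code: aggregate status regardless of whether ensureWork succeeded, then surface the error for requeue.

```go
package main

import (
	"errors"
	"fmt"
)

// errClusterGone simulates ensureWork failing because the target
// cluster has been unjoined.
var errClusterGone = errors.New("target cluster has been removed")

// ensureWork and aggregateStatus are stand-ins for the controller
// steps, not the real Karmada functions.
func ensureWork() error { return errClusterGone }

func aggregateStatus() { fmt.Println("aggregated status updated") }

// syncBinding mirrors the proposed ordering: aggregate status no
// matter whether ensureWork succeeded, then return the error so the
// request is still requeued.
func syncBinding() error {
	err := ensureWork()
	aggregateStatus() // now runs even when the cluster is gone
	return err
}

func main() {
	fmt.Println("sync result:", syncBinding())
}
```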
Is it possible to put the …
Agree. May I commit dddddai@981201c first as a quick fix, and move …
@RainbowMango @XiShanYongYe-Chang Any ideas about this?
Oh, sorry, I missed this issue. I looked at the commit but can't remember why the …
Perhaps it's not worth introducing a controller just to aggregate status yet. Also, I'm wondering: after the cluster has been removed, why does …
/assign @dddddai
@RainbowMango The procedure is as follows: …

As I mentioned above, ensureWork always fails and requeues the request once the target cluster has been removed (see karmada/pkg/controllers/binding/binding_controller.go, lines 95 to 102 at c3cf3a3). So this commit delays the requeue, which gives the controller a chance to aggregate status.
Yes, I'm also thinking about this problem, and it confuses me that if the scheduler cleans up the cluster from the binding … (see karmada/pkg/controllers/binding/binding_controller.go, lines 62 to 68 at c3cf3a3).
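If the cleanup discussed here amounts to dropping the unjoined cluster from the binding's target list, a minimal sketch might look like the following; removeCluster and the plain []string model are hypothetical stand-ins, not the Karmada API.

```go
package main

import "fmt"

// removeCluster is a hypothetical helper: it returns the target-cluster
// list with the unjoined cluster dropped, leaving the input untouched.
func removeCluster(targets []string, unjoined string) []string {
	kept := make([]string, 0, len(targets))
	for _, c := range targets {
		if c != unjoined {
			kept = append(kept, c)
		}
	}
	return kept
}

func main() {
	fmt.Println(removeCluster([]string{"member1", "member2"}, "member1"))
	// Output: [member2]
}
```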
Got it. Thanks.
Yeah. Perhaps it's time to add a schedule condition, I think.
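A hedged sketch of what such a schedule condition could look like, using the standard metav1.Condition helpers from apimachinery; the "Scheduled" type name and the markScheduled helper are assumptions, not existing Karmada identifiers.

```go
package conditions

import (
	"k8s.io/apimachinery/pkg/api/meta"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// markScheduled records a hypothetical "Scheduled" condition on a
// binding's status, so controllers (and users) can tell whether the
// current target clusters reflect a successful scheduling pass.
func markScheduled(conds *[]metav1.Condition, ok bool, reason string) {
	status := metav1.ConditionTrue
	if !ok {
		status = metav1.ConditionFalse
	}
	meta.SetStatusCondition(conds, metav1.Condition{
		Type:   "Scheduled", // hypothetical condition type
		Status: status,
		Reason: reason,
	})
}
```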
What happened:
ResourceBinding aggregated status is not updated after unjoining a target cluster.
What you expected to happen:
ResourceBinding aggregated status (along with the workload status) should be updated after unjoining a target cluster.
How to reproduce it (as minimally and precisely as possible):
1. Set up the environment
2. Unjoin member1
3. Check the ResourceBinding aggregated status and the Deployment status
    root@myserver:~/karmada# kubectl get deploy
    NAME    READY   UP-TO-DATE   AVAILABLE   AGE
    nginx   1/1     1            1           16m
Both ResourceBinding aggregated status and Deployment status show there is still one ready pod.
Anything else we need to know?:
Turns out ResourceBindingController would always fail at ensureWork and requeue the request, since the target cluster has been removed, which leaves it no chance to aggregate status.

Related snippet: karmada/pkg/controllers/binding/binding_controller.go, lines 95 to 102 at c3cf3a3
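To make the control flow concrete, here is a minimal sketch of the failure mode, assuming a controller-runtime reconciler; bindingReconciler, ensureWork, and aggregateStatus are illustrative stand-ins, not the actual Karmada code.

```go
package binding

import (
	"context"
	"errors"

	ctrl "sigs.k8s.io/controller-runtime"
)

// bindingReconciler sketches the failure mode: the early return on the
// ensureWork error means aggregateStatus below it is unreachable once
// the target cluster has been removed.
type bindingReconciler struct{}

func (r *bindingReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	if err := ensureWork(); err != nil {
		// Returning a non-nil error makes controller-runtime requeue
		// the request, so the aggregation step below never runs.
		return ctrl.Result{}, err
	}
	aggregateStatus()
	return ctrl.Result{}, nil
}

// Stand-ins: once the cluster is gone, ensureWork fails on every pass.
func ensureWork() error { return errors.New("target cluster has been removed") }

func aggregateStatus() {}
```

Because the error is returned before aggregation, every requeue repeats the same failure, which would explain the repeated requeueing described above.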
Here is the karmada-controller-manager log.
Environment: