The deletion of federated resources will be blocked forever by not-ready clusters #1254
/assign
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
I have the same issue when deleting a FederatedDaemonSet.
/remove-lifecycle stale
@relaxtheo Could you share the logs?
root@fedhost-master:/home/ubuntu# kubectl get fds --all-namespaces
root@fedhost-master:/home/ubuntu# kubectl delete fds fedds-nano -n fedns-e2
root@fedhost-master:/home/ubuntu# kubectl delete fds fedds-nano -n fedns-e2 --force --grace-period=0
root@fedhost-master:/home/ubuntu# kubectl delete fds fedds-nano -n fedns-e2 --force --grace-period=0 -v 10
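A delete that hangs like this usually means the object still carries a finalizer that the sync controller has not removed, so the API server keeps the resource around with a deletion timestamp set. The following is a rough sketch of what the stuck object's metadata might look like; the finalizer name kubefed.io/sync-controller and the timestamp are assumptions for illustration, not taken from this report:

```yaml
# Hypothetical excerpt of `kubectl get fds fedds-nano -n fedns-e2 -o yaml` output.
apiVersion: types.kubefed.io/v1beta1
kind: FederatedDaemonSet
metadata:
  name: fedds-nano
  namespace: fedns-e2
  deletionTimestamp: "2020-06-01T00:00:00Z"  # set when the delete was requested (illustrative value)
  finalizers:
  - kubefed.io/sync-controller               # assumed finalizer name; deletion stays blocked until it is removed
```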
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-contributor-experience at kubernetes/community.
/remove-lifecycle rotten
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-contributor-experience at kubernetes/community.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-contributor-experience at kubernetes/community.
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community. /close
@k8s-triage-robot: Closing this issue.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/reopen
This issue is still valid in ... Example log where ...
@jonathanbeber: You can't reopen an issue/PR unless you authored it or you are a collaborator.
@RainbowMango I guess you are not working on this one, right? Do you mind if I give it a shot? I'd be interested in contributing.
/reopen
@jimmidyson: Reopened this issue.
/remove-lifecycle rotten
/assign
Thanks @jonathanbeber for picking this up.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
/remove-lifecycle stale
#1499 is ready for review.
What happened:
The deletion of federated resources will be blocked forever by irrelevant/not-ready clusters.
What you expected to happen:
When deleting a federated resource, the sync controller should ignore irrelevant clusters, no matter whether their status is ready or not.
How to reproduce it (as minimally and precisely as possible):
1. Register two member clusters: cluster1, which is ready, and a second cluster that is not ready.
2. Create a federated resource (for example a FederatedDeployment) whose spec.placement.clusters only matches cluster1 (see the sketch below). For now, everything works as expected: the deployment has been propagated only to cluster1.
3. Delete the federated resource. The delete operation will hang there forever.
4. From the log, we can see the sync controller is still trying to reach the not-ready cluster.
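As a sketch of the resource described in step 2 above (the name, namespace, and image are illustrative, and the API group assumes the KubeFed v1beta1 types), a FederatedDeployment whose placement only selects cluster1 might look like this:

```yaml
apiVersion: types.kubefed.io/v1beta1
kind: FederatedDeployment
metadata:
  name: test-deployment      # hypothetical name
  namespace: test-namespace  # hypothetical namespace
spec:
  template:
    metadata:
      labels:
        app: nginx
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx:1.17  # hypothetical image
  placement:
    clusters:
    - name: cluster1  # only cluster1 is selected; the not-ready cluster is irrelevant to this resource
```

Deleting this resource should only require cleanup in cluster1, which is the behavior the issue asks for.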
Anything else we need to know?:
Environment:
- Kubernetes version (use kubectl version):

/kind bug