Pod resource leak when deleting jobs #969
When you delete a job via client-go, set a propagation policy in the delete options if you want the child pods to be deleted as well.
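For example, here is a minimal client-go sketch (namespace, job name, and kubeconfig path are placeholders) that requests background cascading deletion so the job's pods are garbage collected along with the job:

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from a kubeconfig (path is a placeholder).
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)

	// Request background cascading deletion so the Job's pods are
	// removed by the garbage collector along with the Job itself.
	policy := metav1.DeletePropagationBackground
	err = clientset.BatchV1().Jobs("default").Delete(context.TODO(), "pi",
		metav1.DeleteOptions{PropagationPolicy: &policy})
	if err != nil {
		panic(err)
	}
}
```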
But when the job is deleted by Karmada, we don't set the delete options in the ObjectWatcher. The delete options will be empty.
What kind of delete option are we talking about?
Right, I use `kubectl delete job pi`.
If the delete options are empty, the default policy will be Orphan. I found the code here: https://github.com/kubernetes/kubernetes/blob/16227cf09dcb6d1a71733d9fa20335007b0ca3d2/staging/src/k8s.io/apiserver/pkg/registry/generic/registry/store.go#L742
Hey @mrlihanbo, these tips are interesting, but I have not yet figured out why the deletion works normally in the member cluster but has problems in the Karmada control plane.
I have just done testing on my side with the patch here. Hard to say if this is the final solution, but it can explain something. When you delete the job with kubectl, kubectl sets the cascading deletion option explicitly, so the pods are cleaned up. But Karmada leaves the cascading deletion option empty when deleting the job in the member cluster.
Hi @Garrybest, there are three DeletionPropagation policies in Kubernetes: Orphan, Background, and Foreground. When you delete jobs in the member cluster with kubectl, it handles cascading deletion for you, so the child pods are removed as well.
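For quick reference, a small self-contained Go snippet showing the three propagation constants from `k8s.io/apimachinery`; the comments summarize their documented behavior:

```go
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Orphan: the owner is deleted, dependents keep running with their
	// ownerReferences removed (this is what leaks the pods here).
	fmt.Println(metav1.DeletePropagationOrphan)
	// Background: the owner is deleted immediately and the garbage
	// collector removes the dependents afterwards.
	fmt.Println(metav1.DeletePropagationBackground)
	// Foreground: the owner stays (with a deletion timestamp) until all
	// dependents have been deleted.
	fmt.Println(metav1.DeletePropagationForeground)
}
```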
Thank you guys a lot, I think I got it. @RainbowMango @mrlihanbo
It works and does not seem to introduce any incompatibility. I think this patch could be a hot fix. @RainbowMango
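As a rough illustration of what such a fix could look like (this is a hypothetical sketch, not the real Karmada ObjectWatcher code path; the function name and call site are assumptions), the essential change is passing an explicit PropagationPolicy instead of empty DeleteOptions, which otherwise lets Jobs default to Orphan:

```go
package objectwatcher // hypothetical package name for illustration

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
)

// deleteWorkload is a hypothetical helper: it deletes a propagated workload
// from a member cluster while explicitly requesting background cascading
// deletion so the dependents (e.g. a Job's pods) are garbage collected too.
func deleteWorkload(ctx context.Context, client dynamic.Interface,
	gvr schema.GroupVersionResource, namespace, name string) error {
	policy := metav1.DeletePropagationBackground
	return client.Resource(gvr).Namespace(namespace).Delete(ctx, name,
		metav1.DeleteOptions{PropagationPolicy: &policy})
}
```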
/assign
What happened:
Delete a job in the Karmada control plane with `kubectl delete job pi`. You will see that the pods in the member clusters are not deleted. Their ownerRef is removed, although it pointed to the job in the member cluster before.
Note that if you delete the job from the member cluster directly, pod garbage collection works without any problems.
This issue is a little strange; it may have something to do with how Karmada performs the deletion.
What you expected to happen:
Job deletions should not cause any resource leak.
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
Environment: