Using TemporaryError, but handler is not being called again. #358
So this is going to be the problem, when using v1 of the CRDs, of not allowing the kopf object in the status, as the examples in #321 show. IOW,
or:
I can't see the need for using
I was only getting this issue on the one cluster, as the person who installed it there used the v1 versions of the CRDs and not the v1beta1 versions.
@GrahamDumpleton Thanks for reporting this. Indeed, we also had a few little issues on our side with these "structural schemas" of Kubernetes 1.16+ and the v1beta1->v1 transition. I've now added a few special notes to the docs (#364). The whole problem is also addressed in #331 by switching from status to annotations by default (docs). This should keep Kopf running out of the box for all CRs & builtins. There is also a PR #339 to warn if the resulting object does not match the patching intentions, but it is slightly more complicated to implement — we'll try to add it to Kopf 0.28 (with a general topic: stability & recoverability) a bit later. Otherwise, Kubernetes silently ignores the patches and returns HTTP 200 OK even though the patch is not applied — which causes this and similar issues in Kopf.
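For illustration, once the changes from #331 are available, the annotation-based persistence could be enabled with a startup handler roughly like this. This is only a sketch: the setting and class names are assumed from that PR and its docs, and may differ depending on the kopf version.

```python
import kopf


@kopf.on.startup()
def configure(settings: kopf.OperatorSettings, **_):
    # Keep kopf's handling progress and diff-base in annotations rather
    # than in .status, so that v1 CRDs with structural schemas do not
    # prune it away.
    settings.persistence.progress_storage = kopf.AnnotationsProgressStorage()
    settings.persistence.diffbase_storage = kopf.AnnotationsDiffBaseStorage()
```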
I've got another thing for you to think about (not creating a separate issue for it as yet), although it goes a little bit counter to the idea of using annotations to track operator state. But then, I am not seeing this as something you rely on, but as informational only. I would like to see one kopf status field that tracks the overall processing state, but which persists and doesn't go away when the processing of the custom resource is complete, which is usually when the kopf state is removed. Whether anyone relies on this would be optional. The issue is that when doing 'kubectl get crdname', it is hard to have a 'STATUS' field as a printer column which reflects the overall processing state. I would like to be able to say:
When the CR is just created and nothing has been done, it would show an empty value in the 'kubectl get' output. Once kopf/the operator starts processing it, the value could change based on what is happening, e.g., processing, retrying, failed, error. Importantly, when the operator has successfully finished processing and all is good, the 'status.kopf.processingState' wouldn't be removed. Instead, it would be updated with a value of success or complete. This would make it much easier, when using 'kubectl get', to see what is going on without delving into the YAML or using 'describe'. The only other option is to have two state columns: one for the kopf side which tracks something from its state (although that will stop working now if you use annotations), and another displaying something from the user's status output by the operator. Hopefully you understand what I'm suggesting. If you want to compare it to something, imagine the 'STATUS' field of a pod.
First, to note: now, both annotations AND status are used. And in general, I would prefer
Second: this is actually a good feature request! I.e. making the handling progress/state exposable via "printer columns", both individually per-handler and aggregated. This includes persisting the progress after the handling is done, but also implies a few more columns for better presentation. One way, which works right now, would be to
This is what we do now in our apps: report from children pods and other children resources to
I never thought of reporting the handlers' statuses or the overall processing state in the printed columns. I'll definitely think about this now!
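For comparison, here is a rough sketch of an application-owned state field that works today: the handler writes its own status field, and the printer column then points at that field rather than at kopf's internal keys. The group/version/plural, the field name processingState, and do_the_work() are placeholders, not anything from this thread.

```python
import kopf


def do_the_work(spec):
    ...  # placeholder for the operator's actual logic


# Hypothetical group/version/plural, for illustration only.
@kopf.on.create('example.com', 'v1', 'myresources')
def create_fn(spec, patch, **kwargs):
    do_the_work(spec)

    # An application-owned field, separate from kopf's own bookkeeping,
    # so it remains after the handling finishes and can be shown by a
    # printer column pointing at .status.processingState.
    patch.setdefault('status', {})['processingState'] = 'Complete'
```

The CRD's additionalPrinterColumns entry would then use the JSONPath .status.processingState, so 'kubectl get' keeps showing the final state even after kopf's own progress keys are cleaned up.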
Long story short
When raising TemporaryError, the custom resource create handler is not being called after the delay to try again.
Description
Logs show:
but it is never retried.
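The handler follows the usual pattern, roughly like the sketch below (simplified; the resource names and the readiness check are placeholders, not the exact code from the operator):

```python
import kopf


def backend_is_ready(spec):
    ...  # placeholder for the actual readiness check
    return False


# Hypothetical group/version/plural, for illustration only.
@kopf.on.create('example.com', 'v1', 'myresources')
def create_fn(spec, retry, logger, **kwargs):
    if not backend_is_ready(spec):
        # Expect kopf to re-invoke this handler after the delay.
        raise kopf.TemporaryError("Not ready yet, retrying.", delay=30)
    logger.info("Created after %d retries", retry)
    return {'message': 'created'}
```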
What is odd is that it works fine on Minikube, and previously worked fine on a separate production-grade cluster, but now the production-grade cluster no longer works. It also works when running kopf on macOS connected to Minikube, in place of the operator running in the cluster.
Going to keep trying to debug it, but suggestions on how to do that would be welcome.
Environment
kopf==0.26
kubernetes==11.0.0
Kubernetes version: 1.17
Python version: 3.7.7 (in container), 3.7.3 (on macOS)
Using the official python3.7 Docker image from Docker Hub, presumably Debian or Ubuntu based.
Python packages installed