duplicated CRD in dist/install.yaml #3767
Comments
@camilamacedo86 can we continue the discussion here, because the current implementation has a problem with duplication of CRDs. Do you want to always generate the CRDs independently of config/default? |
Hi @lukas016, feel free to check it and provide a solution that works |
Just to add to this, if you use webhooks, the duplicated CRDs are actually different. The first copy of the CRD is generated with:

$(KUSTOMIZE) build config/crd > dist/install.yaml

In this version the conversion webhook still references the unpatched service. The second copy of the CRD is generated with:

$(KUSTOMIZE) build config/default >> dist/install.yaml

So one copy contains the unpatched webhook:

webhook:
  clientConfig:
    service:
      name: webhook-service
      namespace: system
      path: /convert

and the other the patched one, e.g.:

webhook:
  clientConfig:
    service:
      name: blueprint-controller-webhook-service
      namespace: kloudy-system
      path: /convert
|
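The difference comes from the transformations config/default applies on top of config/crd (its kustomization sets a namePrefix and namespace, which rewrite the conversion webhook's service reference). A quick way to see this locally is a throwaway helper target like the sketch below; the crd-diff name is made up, not part of the scaffold, and it assumes the usual scaffolded KUSTOMIZE variable:

.PHONY: crd-diff
crd-diff: manifests kustomize
	# print the conversion webhook clientConfig from each overlay;
	# with webhooks enabled the service name/namespace will differ
	-$(KUSTOMIZE) build config/crd | grep -A 4 'clientConfig:'
	-$(KUSTOMIZE) build config/default | grep -A 4 'clientConfig:'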
Was gonna submit a PR to fix this but looks like someone else is already on it: #3814 |
@antonosmond good point and thanks, I wanted to look at it but I was busy. I wrote one suggestion for you. |
@camilamacedo86 his solution is essentially identical to the deploy target. Do we want to keep them independent? We could use dist/install.yaml as the input for the deploy target and make build-installer a dependency. dist/install.yaml would then always be updated by deploy, or manually via build-installer, which would be helpful because I always forget to call generate for this project, which is a similar problem for me. |
sorry for not looking for existing issues prior to opening PR #3814. I like your idea @lukas016! Something like this, right?

 .PHONY: deploy
-deploy: manifests kustomize ## Deploy controller to the K8s cluster specified in ~/.kube/config.
+deploy: build-installer ## Deploy controller to the K8s cluster specified in ~/.kube/config.
-	cd config/manager && $(KUSTOMIZE) edit set image controller=${IMG}
-	$(KUSTOMIZE) build config/default | $(KUBECTL) apply -f -
+	$(KUBECTL) apply -f dist/install.yaml

 .PHONY: undeploy
-undeploy: kustomize ## Undeploy controller from the K8s cluster specified in ~/.kube/config. Call with ignore-not-found=true to ignore resource not found errors during deletion.
+undeploy: build-installer ## Undeploy controller from the K8s cluster specified in ~/.kube/config. Call with ignore-not-found=true to ignore resource not found errors during deletion.
-	$(KUSTOMIZE) build config/default | $(KUBECTL) delete --ignore-not-found=$(ignore-not-found) -f -
+	$(KUBECTL) delete --ignore-not-found=$(ignore-not-found) -f dist/install.yaml

(can be a follow-up PR, to not increase the size of the created PR) |
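If something like the sketch above were adopted, day-to-day usage would stay the same; the image name below is purely illustrative:

	make build-installer IMG=example.com/blueprint-controller:v0.1.0   # refresh dist/install.yaml
	make deploy IMG=example.com/blueprint-controller:v0.1.0            # apply dist/install.yaml
	make undeploy                                                       # delete what was applied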
Another question related to this topic: should we delete Basically, |
The current solution and this new one share one problem. If you call make undeploy, there is no guaranteed order in which resources are deleted. I very often had problems with leftovers when I forgot to delete the CRs of a CRD before calling undeploy. The result was a stuck deletion process: the Deployment was deleted before the CRs that were marked for deletion, because the CRD was marked for deletion by undeploy, but the CRD deletion got stuck because the CRs still had finalizers from the controller that had already been removed. Do we want to somehow solve this dependency chain? If we do, we will probably need to keep the install/uninstall targets, and ideally the CRDs would not be deleted by undeploy. I don't think it is a critical issue, and other operators have the same problem. |
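One possible direction, purely as a sketch: make undeploy delete the project's CRs first, let the controller finish their finalizers, and only then delete the rest of dist/install.yaml. The memcacheds resource name below is just a placeholder for whatever APIs the project defines:

.PHONY: undeploy
undeploy: build-installer ## Sketch only: delete CRs before tearing down controller and CRDs.
	# 1. delete CRs while the controller is still running so finalizers can complete
	-$(KUBECTL) delete memcacheds --all --all-namespaces --wait=true
	# 2. then remove the controller, RBAC, webhooks and CRDs
	$(KUBECTL) delete --ignore-not-found=$(ignore-not-found) -f dist/install.yaml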
yeah, I face this problem in one of the projects I work on. Before running undeploy/uninstall, I have to force delete all CRs (instances of the CRDs). What we do in that project is use this target (example with a Restore CR):

cleanup:
	$(KUBECTL) delete restore -n $(NAMESPACE) --all --wait=false
	for restore_name in $(shell $(KUBECTL) get restore -n $(NAMESPACE) -o name); do $(KUBECTL) patch "$$restore_name" -n $(NAMESPACE) -p '{"metadata":{"finalizers":null}}' --type=merge; done

prior to running undeploy/uninstall. Maybe this can be automatically generated by kubebuilder? Every time a new api is added ( |
I'd prefer a cleaner way, maybe replace the patch with deleting the specific resources and letting the controller handle the deletion, because I have operators which create things on the host system, and I would like to delete those properly; otherwise I will have leftovers in other places. |
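A hedged sketch of what such a generated cleanup could look like if it deletes CRs and lets the controller run their finalizers instead of patching them away; the example.com group filter is a placeholder for the project's API group, and nothing like this is scaffolded by kubebuilder today:

.PHONY: cleanup
cleanup:
	# delete every CR belonging to this project's CRDs and wait for the
	# controller to process finalizers before undeploy removes anything else
	@for crd in $$($(KUBECTL) get crd -o name | grep 'example.com'); do \
		resource=$$(echo $$crd | cut -d/ -f2 | cut -d. -f1); \
		$(KUBECTL) delete $$resource --all --all-namespaces --wait=true || true; \
	done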
yeah, good point. But even so, I think we could have trouble. In another project I have worked on, the controller also has a CLI. If you run This seems like a hard problem. Maybe for now just add more docs? Like
|
Hi @antonosmond, we merged the PR: https://github.com/kubernetes-sigs/kubebuilder/pull/3814/files Following are some comments inline:
No. This option should still. Regarding the other comments about cleanup/undeploy, if you see that something is required, could you please raise an issue about that topic? It is hard to follow up if we keep many scopes in the same one. Thank you for all the help and understanding. |
from my side I would just check whether this would be interesting as a follow-up? #3767 (comment) |
What broke? What's expected?
Currently, make build-installer generates dist/install.yaml from both config/crd and config/default.
config/default internally uses config/crd too, so the result is two identical definitions of each CRD in dist/install.yaml.
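For reference, a minimal sketch of a deduplicated target, assuming the usual scaffolded variables; the exact recipe that ended up in the scaffold may differ, but the key point is that building only config/default is enough because it already includes config/crd:

.PHONY: build-installer
build-installer: manifests generate kustomize ## Generate a consolidated YAML with CRDs and deployment.
	mkdir -p dist
	cd config/manager && $(KUSTOMIZE) edit set image controller=${IMG}
	$(KUSTOMIZE) build config/default > dist/install.yaml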
Reproducing this issue
KubeBuilder (CLI) Version
master
PROJECT version
3
Plugin versions
No response
Other versions
No response
Extra Labels
No response