Adding CRDs VolumeSnapshotClass, VolumeSnapshotContent, VolumeSnapshot for snapshot.storage.k8s.io/v1 #938
Conversation
Welcome @missingcharacter!
Hi @missingcharacter. Thanks for your PR. I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/ok-to-test Thanks, this is overdue. Our test code attempts to work around the helm chart not including these files by applying the CRDs manually; let me see if that workaround is still needed or whether it now fails: https://github.com/kubernetes-sigs/aws-ebs-csi-driver/blob/master/hack/e2e/run.sh#L156
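A rough sketch of the kind of manual workaround the e2e script performs (the exact commands live in run.sh; the paths below assume a checkout of kubernetes-csi/external-snapshotter, the same repo used in the test results further down):

# Apply the VolumeSnapshot CRDs out-of-band before running the e2e tests
$ git clone https://github.com/kubernetes-csi/external-snapshotter.git
$ kubectl apply -f external-snapshotter/client/config/crd/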
/approve I have one suggestion, and I am not sure whether the tests will pass; if they don't, one additional change will be needed.
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: missingcharacter, wongma7. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
Ah, so for 1.20, kops started installing the snapshot controller and its CRDs by default: https://github.com/kubernetes/kops/blob/864d2b64ccbd98940eba4800e921ea0f63afc925/upup/models/cloudup/resources/addons/snapshot-controller.addons.k8s.io/k8s-1.20.yaml.template
What this means is that we can set enableVolumeSnapshot to false in https://github.com/kubernetes-sigs/aws-ebs-csi-driver/blob/master/hack/values.yaml and I would expect tests to pass. Alternatively, maybe there is some helm logic that can gracefully ignore the error, i.e. even if enableVolumeSnapshot is set to true, ignore the error if the controller already exists. But I don't know if that's possible, and I don't think it's necessary (take your time with this PR, I will probably release a helm chart next week).
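A minimal sketch of that suggested hack/values.yaml tweak (the file carries other values; only the enableVolumeSnapshot line is the point here):

# hack/values.yaml (sketch): rely on the kops-installed snapshot controller and CRDs
enableVolumeSnapshot: false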
I think the reason the tests are failing is because kops installs snapshot-controller v4.0 but we are still using the csi-snapshotter sidecar v3.0. Previously we installed snapshot-controller v3.0 and used csi-snapshotter sidecar v3.0, so there was no incompatibility. I'm just guessing though; I'll try to find time tomorrow to reproduce the test failures. Sorry about the guessing and testing to get the CI to pass, the test logs aren't very helpful for diagnosing the failures.
@wongma7 I can keep trying, more tests pass now 😃
@wongma7 Seems like it works now, but I'm not sure whether the changes I made fit the project's conventions.
@missingcharacter If we are installing the snapshot controller, we definitely want to also install the CRDs, so yes, I want this change; there's just some debate about the overall story and what to name the values, because the helm chart is like an API we can't break. I'm fine with merging this as-is and figuring out the naming before actually releasing it, though. Reference: #942 (comment).
/cc @krmichel I'll give krmichel time to weigh in.
@wongma7: GitHub didn't allow me to request PR reviews from the following users: krmichel. Note that only kubernetes-sigs members and repo collaborators can review this PR, and authors cannot review their own PRs. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@wongma7 I think I have another way to try this in
looks
…t for snapshot.storage.k8s.io/v1 to helm chart
@wongma7 all tests pass now and CRDs are not installed via
@wongma7 this way passes tests as well, but if a new kind is added to the same API version, it will not be updated by the helm chart.
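For what it's worth, a small sketch of the usual workaround for that limitation, since helm upgrade ignores the crds/ directory (the path assumes this repository's chart layout):

# After upgrading the chart, apply any new or updated CRDs manually
$ kubectl apply -f charts/aws-ebs-csi-driver/crds/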
I assume there is no chart for the snapshot controller; if there were one, it would be best to have it as a dependency. I am not sure which project produces the snapshot controller. Is there a predictable place to get the CRDs? If there is, would it be worthwhile to create a make target for updating them by copying new versions into the chart?

Having the CRDs in the crds directory of the chart should install them if they aren't already installed, based on my understanding. Is the purpose of having a template that "copies" them into the main resources of the chart so the chart can skip installing them if the "enable" value is set to false? Letting helm install them from the crds directory wouldn't upgrade the ones that already exist, but it would at least allow new custom resource types to be added with new versions of the chart. Also, since they are in a directory named crds, I think the chart will install them anyway even if the flag is false. If this route is pursued, the directory should have a name that isn't crds.

A potential downside to having them be "regular" resources under the templates directory is that I don't know whether a helm uninstall will remove them. If uninstalling this chart removed CRDs that aren't technically part of it, that would not be great: any data stored in instances of those CRDs would be lost, which I am pretty sure is why helm doesn't delete them. Helm might be smart enough to realize they are CRDs even if they come from the templates directory, but I would at least want to see it tested and be informed how it behaves.
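For context, a rough sketch of the chart layout under discussion (file names follow the PR's Makefile diff further below; per Helm 3's documented behavior, anything under crds/ is applied on first install but never upgraded or deleted by helm):

charts/aws-ebs-csi-driver/
  Chart.yaml
  values.yaml
  crds/
    snapshot-controller.yaml      # VolumeSnapshotClass/VolumeSnapshotContent/VolumeSnapshot CRDs: installed once, never upgraded or removed by helm
  templates/
    snapshot-controller.yaml      # snapshot-controller Deployment, gated by enableVolumeSnapshot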
@krmichel just to be sure I understood, you want me to add test(s) where the chart is uninstalled and then check if the CRDs exist?
I don't think it needs to be an automated test. I just want to make sure that if this chart keeps installing CRDs that don't really belong to it, we don't cause anyone who has installed and then uninstalled this chart to lose data in CRD resources, since normal helm handling of CRDs doesn't delete them. I think it would be sufficient to try these once:
- With a system that has the snapshotter and its CRDs already installed, install and then uninstall this chart and verify that the CRDs were not removed.
- With a system that does not have the snapshotter and its CRDs already installed, install and then uninstall this chart and verify that the CRDs are not removed.
I am also curious, with the manifests being in the crds directory of the chart, whether they will get installed even if the enableVolumeSnapshot value is set to false.
@krmichel the results are below. NOTE: I used a clean installation of docker-desktop.

With a system that has the snapshotter and its CRDs already installed, install and then uninstall this chart and verify that the CRDs were not removed
CRDs are NOT removed after uninstalling the chart.
Before installing anything:
$ kubectl get all --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system pod/coredns-558bd4d5db-9l2cp 1/1 Running 0 10m
kube-system pod/coredns-558bd4d5db-q8r8x 1/1 Running 0 10m
kube-system pod/etcd-docker-desktop 1/1 Running 0 10m
kube-system pod/kube-apiserver-docker-desktop 1/1 Running 0 10m
kube-system pod/kube-controller-manager-docker-desktop 1/1 Running 0 10m
kube-system pod/kube-proxy-ktpd2 1/1 Running 0 10m
kube-system pod/kube-scheduler-docker-desktop 1/1 Running 0 10m
kube-system pod/storage-provisioner 1/1 Running 0 9m42s
kube-system pod/vpnkit-controller 1/1 Running 2 9m42s
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 10m
kube-system service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 10m
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
kube-system daemonset.apps/kube-proxy 1 1 1 1 1 kubernetes.io/os=linux 10m
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
kube-system deployment.apps/coredns 2/2 2 2 10m
NAMESPACE NAME DESIRED CURRENT READY AGE
kube-system replicaset.apps/coredns-558bd4d5db 2 2 2 10m
$ kubectl api-resources | grep 'storage.k8s.io'
csidrivers storage.k8s.io/v1 false CSIDriver
csinodes storage.k8s.io/v1 false CSINode
csistoragecapacities storage.k8s.io/v1beta1 true CSIStorageCapacity
storageclasses sc storage.k8s.io/v1 false StorageClass
volumeattachments storage.k8s.io/v1 false VolumeAttachment

Install snapshot controller:
$ git clone https://github.com/kubernetes-csi/external-snapshotter.git
Cloning into 'external-snapshotter'...
$ cd external-snapshotter/
$ kubectl create -f client/config/crd/
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
$ kubectl api-resources | grep 'storage.k8s.io'
volumesnapshotclasses snapshot.storage.k8s.io/v1 false VolumeSnapshotClass
volumesnapshotcontents snapshot.storage.k8s.io/v1 false VolumeSnapshotContent
volumesnapshots snapshot.storage.k8s.io/v1 true VolumeSnapshot
csidrivers storage.k8s.io/v1 false CSIDriver
csinodes storage.k8s.io/v1 false CSINode
csistoragecapacities storage.k8s.io/v1beta1 true CSIStorageCapacity
storageclasses sc storage.k8s.io/v1 false StorageClass
volumeattachments storage.k8s.io/v1 false VolumeAttachment
$ kubectl create -f deploy/kubernetes/snapshot-controller
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created

Install ebs-csi:
$ cd path/to/aws-ebs-csi-driver/charts/aws-ebs-csi-driver
$ helm install --set enableVolumeSnapshot=false -n kube-system ebs-csi ./
W0622 14:34:14.854779 11594 warnings.go:70] policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
W0622 14:34:14.908261 11594 warnings.go:70] policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
NAME: ebs-csi
LAST DEPLOYED: Tue Jun 22 14:34:14 2021
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
To verify that aws-ebs-csi-driver has started, run:
kubectl get pod -n kube-system -l "app.kubernetes.io/name=aws-ebs-csi-driver,app.kubernetes.io/instance=ebs-csi"
WARNING: The following values have been deprecated in favor of moving them into the controller or node groups. They will be removed in a subsequent release.
affinity:
extraCreateMetadata:
extraVolumeTags:
k8sTagClusterId:
nodeSelector:
podAnnotations:
priorityClassName:
region:
replicaCount:
resources:
tolerations:
topologySpreadConstraints:
volumeAttachLimit:
are moving to
controller:
affinity:
extraCreateMetadata:
extraVolumeTags:
k8sTagClusterId:
nodeSelector:
podAnnotations:
priorityClassName:
region:
replicaCount:
resources:
tolerations:
topologySpreadConstraints:
node:
volumeAttachLimit:

Uninstall ebs-csi helm chart:
$ helm uninstall -n kube-system ebs-csi
W0622 14:35:10.180535 12009 warnings.go:70] policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
release "ebs-csi" uninstalled
$ kubectl api-resources | grep 'storage.k8s.io'
volumesnapshotclasses snapshot.storage.k8s.io/v1 false VolumeSnapshotClass
volumesnapshotcontents snapshot.storage.k8s.io/v1 false VolumeSnapshotContent
volumesnapshots snapshot.storage.k8s.io/v1 true VolumeSnapshot
csidrivers storage.k8s.io/v1 false CSIDriver
csinodes storage.k8s.io/v1 false CSINode
csistoragecapacities storage.k8s.io/v1beta1 true CSIStorageCapacity
storageclasses sc storage.k8s.io/v1 false StorageClass
volumeattachments storage.k8s.io/v1 false VolumeAttachment

With a system that does not have the snapshotter and its CRDs already installed, install and then uninstall this chart and verify that the CRDs are not removed
CRDs are NOT removed after uninstalling the chart.
Before installation:
$ kubectl get all --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system pod/coredns-558bd4d5db-2vqbz 1/1 Running 0 20m
kube-system pod/coredns-558bd4d5db-r97fj 1/1 Running 0 20m
kube-system pod/etcd-docker-desktop 1/1 Running 0 20m
kube-system pod/kube-apiserver-docker-desktop 1/1 Running 0 20m
kube-system pod/kube-controller-manager-docker-desktop 1/1 Running 0 20m
kube-system pod/kube-proxy-96q79 1/1 Running 0 20m
kube-system pod/kube-scheduler-docker-desktop 1/1 Running 0 20m
kube-system pod/storage-provisioner 1/1 Running 0 19m
kube-system pod/vpnkit-controller 1/1 Running 3 19m
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 20m
kube-system service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 20m
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
kube-system daemonset.apps/kube-proxy 1 1 1 1 1 kubernetes.io/os=linux 20m
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
kube-system deployment.apps/coredns 2/2 2 2 20m
NAMESPACE NAME DESIRED CURRENT READY AGE
kube-system replicaset.apps/coredns-558bd4d5db 2 2 2 20m
$ kubectl api-resources | grep 'storage.k8s.io'
csidrivers storage.k8s.io/v1 false CSIDriver
csinodes storage.k8s.io/v1 false CSINode
csistoragecapacities storage.k8s.io/v1beta1 true CSIStorageCapacity
storageclasses sc storage.k8s.io/v1 false StorageClass
volumeattachments storage.k8s.io/v1 false VolumeAttachment

Installation:
$ helm install -n kube-system ebs-csi ./
W0622 13:58:58.893685 4380 warnings.go:70] policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
W0622 13:58:58.947414 4380 warnings.go:70] policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
NAME: ebs-csi
LAST DEPLOYED: Tue Jun 22 13:58:58 2021
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
To verify that aws-ebs-csi-driver has started, run:
kubectl get pod -n kube-system -l "app.kubernetes.io/name=aws-ebs-csi-driver,app.kubernetes.io/instance=ebs-csi"
WARNING: The following values have been deprecated in favor of moving them into the controller or node groups. They will be removed in a subsequent release.
affinity:
extraCreateMetadata:
extraVolumeTags:
k8sTagClusterId:
nodeSelector:
podAnnotations:
priorityClassName:
region:
replicaCount:
resources:
tolerations:
topologySpreadConstraints:
volumeAttachLimit:
are moving to
controller:
affinity:
extraCreateMetadata:
extraVolumeTags:
k8sTagClusterId:
nodeSelector:
podAnnotations:
priorityClassName:
region:
replicaCount:
resources:
tolerations:
topologySpreadConstraints:
node:
volumeAttachLimit:
$ kubectl api-resources | grep 'storage.k8s.io'
volumesnapshotclasses snapshot.storage.k8s.io/v1 false VolumeSnapshotClass
volumesnapshotcontents snapshot.storage.k8s.io/v1 false VolumeSnapshotContent
volumesnapshots snapshot.storage.k8s.io/v1 true VolumeSnapshot
csidrivers storage.k8s.io/v1 false CSIDriver
csinodes storage.k8s.io/v1 false CSINode
csistoragecapacities storage.k8s.io/v1beta1 true CSIStorageCapacity
storageclasses sc storage.k8s.io/v1 false StorageClass
volumeattachments storage.k8s.io/v1 false VolumeAttachment

Uninstall helm chart
As long as it doesn't cause problems, which the testing seems to indicate it shouldn't, I think this is probably fine, other than maybe the use of the crds directory in the chart.
@krmichel should I rename the crds directory?
I would probably rename the directory from crds to something else.
@krmichel renaming the folder might confuse people who are accustomed to relying on helm's special handling of the crds directory, e.g. skipping it with --skip-crds.
I didn't know about the skip-crds thing; in that case, do we really need enableVolumeSnapshot if skip-crds achieves the same thing?
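For reference, a minimal example of the flag in question (release name and namespace are just placeholders):

# Helm 3: install the chart but skip everything under its crds/ directory
$ helm install ebs-csi ./ --namespace kube-system --skip-crds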
Makefile (Outdated)
@@ -175,3 +175,4 @@ generate-kustomize: bin/helm
cd charts/aws-ebs-csi-driver && ../../bin/helm template kustomize . -s templates/serviceaccount-csi-node.yaml > ../../deploy/kubernetes/base/serviceaccount-csi-node.yaml
cd charts/aws-ebs-csi-driver && ../../bin/helm template kustomize . -s templates/serviceaccount-snapshot-controller.yaml > ../../deploy/kubernetes/base/serviceaccount-snapshot-controller.yaml
cd charts/aws-ebs-csi-driver && ../../bin/helm template kustomize . -s templates/snapshot-controller.yaml -f ../../deploy/kubernetes/values/snapshotter.yaml > ../../deploy/kubernetes/base/snapshot_controller.yaml
+ cd charts/aws-ebs-csi-driver && mkdir -p ../../deploy/kubernetes/cluster && cp crds/snapshot-controller.yaml ../../deploy/kubernetes/cluster/crd_snapshotter.yaml
Could you rename the files so they are the same in helm and in kustomize, and to disambiguate snapshot-controller.yaml? For example:
cd charts/aws-ebs-csi-driver && mkdir -p ../../deploy/kubernetes/cluster && cp crds/snapshot-controller-crds.yaml ../../deploy/kubernetes/cluster/snapshot-controller-crds.yaml
Otherwise, at a glance, I got confused about the difference between templates/snapshot-controller.yaml and crds/snapshot-controller.yaml.
Sure, one moment. I could switch enableVolumeSnapshot back to installCRDs in charts/aws-ebs-csi-driver/templates/crds.yml for helm version 2 compatibility if you want, similar to how argocd does it.
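A rough sketch of what that gating could look like (the installCRDs value name and the template contents are assumptions based on the comment above, mirroring argo-cd's pattern):

# charts/aws-ebs-csi-driver/templates/crds.yml (sketch)
{{- if .Values.installCRDs }}
# ... VolumeSnapshotClass / VolumeSnapshotContent / VolumeSnapshot CRD manifests go here ...
{{- end }}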
No, let's leave it as is; I'm tempted to deprecate enableVolumeSnapshot in another PR 😂. We have users who want to control:
- whether to install snapshot controller
- whether to install snapshot sidecar
- whether to install snapshot crds.
IMO --skip-crds satisfies (3). enableVolumeSnapshot is just not a good name anymore.
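Purely as an illustration of how those three controls might eventually be split (all value names below are hypothetical, not part of the current chart):

# Hypothetical values sketch: names are made up for illustration
snapshotController:
  enabled: true        # (1) install the snapshot controller
sidecars:
  snapshotter:
    enabled: true      # (2) install the csi-snapshotter sidecar
# (3) snapshot CRDs: covered by the crds/ directory plus `helm install --skip-crds`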
/lgtm
I think this feature is worth a release on its own :) Thank you!
Is this a bug fix or adding new feature?
New Feature
What is this PR about? / Why do we need it?
Adds snapshot-related CRDs and closes #536
What testing is done?
No
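For reference, an abbreviated sketch of the shape of the CRDs this PR adds (heavily trimmed; the real manifests come from kubernetes-csi/external-snapshotter and carry full schemas):

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: volumesnapshotclasses.snapshot.storage.k8s.io
spec:
  group: snapshot.storage.k8s.io
  scope: Cluster
  names:
    kind: VolumeSnapshotClass
    plural: volumesnapshotclasses
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true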