
Adding CRDs VolumeSnapshotClass, VolumeSnapshotContent, VolumeSnapshot for snapshot.storage.k8s.io/v1 #938

Merged
merged 2 commits into kubernetes-sigs:master from snapshot-crds
Jun 22, 2021

Conversation

missingcharacter
Contributor

Is this a bug fix or adding new feature?
New Feature

What is this PR about? / Why do we need it?
Adds snapshot-related CRDs and closes #536.

What testing is done?
No

@k8s-ci-robot
Contributor

Welcome @missingcharacter!

It looks like this is your first PR to kubernetes-sigs/aws-ebs-csi-driver 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.

You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.

You can also check if kubernetes-sigs/aws-ebs-csi-driver has its own contribution guidelines.

You may want to refer to our testing guide if you run into trouble with your tests not passing.

If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!

Thank you, and welcome to Kubernetes. 😃

@k8s-ci-robot k8s-ci-robot added the needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. label Jun 16, 2021
@k8s-ci-robot
Contributor

Hi @missingcharacter. Thanks for your PR.

I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot requested review from d-nishi and ddebroy June 16, 2021 16:56
@k8s-ci-robot k8s-ci-robot added cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. size/XL Denotes a PR that changes 500-999 lines, ignoring generated files. size/XS Denotes a PR that changes 0-9 lines, ignoring generated files. and removed size/XL Denotes a PR that changes 500-999 lines, ignoring generated files. labels Jun 16, 2021
@wongma7
Contributor

wongma7 commented Jun 16, 2021

/ok-to-test

Thanks, this is overdue. Our test code works around the helm chart not including these files by applying the CRDs manually; let me check whether that is still needed or whether it now fails: https://github.com/kubernetes-sigs/aws-ebs-csi-driver/blob/master/hack/e2e/run.sh#L156

@k8s-ci-robot k8s-ci-robot added ok-to-test Indicates a non-member PR verified by an org member that is safe to test. and removed needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels Jun 16, 2021
@wongma7
Contributor

wongma7 commented Jun 16, 2021

/approve

I have one suggestion, and I am not sure if the tests will pass; if they don't, there needs to be one additional change.

@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: missingcharacter, wongma7

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added approved Indicates a PR has been approved by an approver from all required OWNERS files. size/S Denotes a PR that changes 10-29 lines, ignoring generated files. and removed size/XS Denotes a PR that changes 0-9 lines, ignoring generated files. labels Jun 16, 2021
@wongma7
Contributor

wongma7 commented Jun 16, 2021

Ah, so for 1.20, kops started installing the snapshot controller and its CRDs by default: https://github.com/kubernetes/kops/blob/864d2b64ccbd98940eba4800e921ea0f63afc925/upup/models/cloudup/resources/addons/snapshot-controller.addons.k8s.io/k8s-1.20.yaml.template

Error: rendered manifests contain a resource that already exists. Unable to continue with install: CustomResourceDefinition "volumesnapshotclasses.snapshot.storage.k8s.io" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "aws-ebs-csi-driver"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "kube-system"

What this means is we can set enableVolumeSnapshot to false in https://github.com/kubernetes-sigs/aws-ebs-csi-driver/blob/master/hack/values.yaml, and I would expect the tests to pass.

Alternatively, maybe there is some helm logic that can gracefully ignore the error, i.e., even if enableVolumeSnapshot is set to true, ignore the error if the CRDs already exist. But I don't know if that's possible, and I don't think it's necessary.

(Take your time with this PR; I will probably release a helm chart next week.)
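
A minimal sketch of that hack/values.yaml change, assuming the file is a plain chart-value override used by the e2e scripts:

    # hack/values.yaml (assumed shape)
    # Skip installing the snapshot controller and CRDs, since kops >= 1.20
    # already ships them in the cluster.
    enableVolumeSnapshot: false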

@wongma7
Contributor

wongma7 commented Jun 17, 2021

I think the tests are failing because kops installs snapshot-controller v4.0, but we are still using the csi-snapshotter sidecar at v3.0.

Previously we installed snapshot-controller v3.0 and used csi-snapshotter sidecar v3.0, so there was no incompatibility.

https://github.com/kubernetes-sigs/aws-ebs-csi-driver/blob/master/charts/aws-ebs-csi-driver/values.yaml#L19

I'm just guessing, though; I'll try to find time tomorrow to reproduce the test failures. Sorry about the guess-and-test approach to getting the CI to pass; the test logs aren't very helpful for diagnosing the failures.
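
If that guess holds, the fix would be bumping the csi-snapshotter sidecar image to v4.x to match the controller; a hedged sketch, with the values keys assumed rather than copied from the chart:

    # hypothetical values override; the key layout is an assumption,
    # not the chart's actual values.yaml structure
    snapshotter:
      image:
        repository: k8s.gcr.io/sig-storage/csi-snapshotter
        tag: "v4.0.0"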

@missingcharacter
Contributor Author

@wongma7 I can keep trying, more tests pass now 😃

@missingcharacter
Contributor Author

@wongma7 Seems like it works now, but I'm not sure whether the changes I made fit the project's conventions.

@wongma7
Contributor

wongma7 commented Jun 17, 2021

@missingcharacter If we are installing the snapshot controller, we definitely want to also install the CRDs, so yes, I want this change. There's just some debate about the overall story and what to name the values, because the helm chart is like an API we can't break.

I'm fine with merging this as-is and figuring out the naming before actually releasing it though.

Reference: #942 (comment).

@wongma7
Contributor

wongma7 commented Jun 17, 2021

/cc @krmichel

I'll give time for krmichel to weigh in

@k8s-ci-robot
Contributor

@wongma7: GitHub didn't allow me to request PR reviews from the following users: krmichel.

Note that only kubernetes-sigs members and repo collaborators can review this PR, and authors cannot review their own PRs.

In response to this:

/cc @krmichel

I'll give time for krmichel to weigh in

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@missingcharacter
Contributor Author

@wongma7 I think I have another way to try this in charts/aws-ebs-csi-driver/templates/crds.yml: the CRDs are NOT installed if .Capabilities.APIVersions.Has "snapshot.storage.k8s.io/v1" reports the API as already present.
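
A rough sketch of that gating, assuming the obvious shape (.Capabilities.APIVersions.Has is Helm's built-in capability check; the CRD bodies are elided, and as the follow-up comment notes, the check did not behave as hoped):

    {{- if not (.Capabilities.APIVersions.Has "snapshot.storage.k8s.io/v1") }}
    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: volumesnapshotclasses.snapshot.storage.k8s.io
    # ... full spec and the other two CRDs elided ...
    {{- end }}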

@missingcharacter
Contributor Author

Looks like it does not work the way I thought it did.

(commit pushed: "…t for snapshot.storage.k8s.io/v1 to helm chart")
@missingcharacter
Contributor Author

@wongma7 All tests pass now, and the CRDs are not installed via helm if they are already present.

@missingcharacter
Contributor Author

@wongma7 This way passes the tests as well, but if a new kind were added to the same API version, it would not be updated by the helm chart.

@krmichel
Contributor

I assume there is no chart for the snapshot controller; if there is one, it would be best to have that chart as a dependency. I am not sure which project produces the snapshot controller. Is there a predictable place to get the CRDs? If so, would it be worthwhile to create a make target that updates them by copying new versions into the chart?

Having the CRDs in the crds directory of the chart should, as I understand it, install them if they aren't already installed. Is the purpose of the template that "copies" them into the main resources of the chart to let the chart skip installing them when the "enable" value is set to false? Letting helm install them from the crds directory wouldn't upgrade the ones that already exist, but it would at least allow new custom resource types to be added with new versions of the chart. Also, since they are in a directory named crds, I think the chart will install them anyway even if the flag is false; if this route is pursued, the directory should have a name that isn't crds.

A potential downside to having them be "regular" resources under the templates directory is that I don't know whether a helm uninstall will remove them. If uninstalling this chart removed CRDs that aren't technically part of it, that would not be great: any data stored in instances of those CRDs would be lost, which I am pretty sure is why helm doesn't delete them. Helm might be smart enough to realize they are CRDs even when they come from the templates directory. I would at least want to see it tested and be informed how it behaves.
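
For reference, a sketch of the two placements under discussion, annotated with helm 3's documented handling of each (file names taken from this PR's diff):

    charts/aws-ebs-csi-driver/
      Chart.yaml
      crds/                       # special to helm: applied on `helm install`,
        snapshot-controller.yaml  #   never templated, upgraded, or deleted;
                                  #   skippable with --skip-crds
      templates/                  # rendered and managed like any other resource:
        crds.yml                  #   values/conditionals apply, and uninstall
                                  #   would normally delete them (the behavior
                                  #   being questioned here)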

@missingcharacter
Contributor Author

@krmichel Just to be sure I understood: you want me to add a test where the chart is uninstalled, and we then check whether the CRDs still exist?

@krmichel
Contributor

I don't think it needs to be an automated test. If this chart keeps installing CRDs that don't really belong to it, I just want to make sure we don't cause anyone who installs and then uninstalls the chart to lose data in CRD resources, since helm's normal handling of CRDs doesn't delete them. I think it would be sufficient to try each of these once:

  1. On a system that already has the snapshotter and its CRDs installed, install and then uninstall this chart, and verify that the CRDs were not removed.
  2. On a system that does not have the snapshotter and its CRDs installed, install and then uninstall this chart, and verify that the CRDs are not removed.

I am also curious whether, with the manifests in the crds directory of the chart, they will get installed even when the enableVolumeSnapshot option is disabled.

@missingcharacter
Contributor Author

missingcharacter commented Jun 22, 2021

@krmichel the results are below:

NOTE: For both scenarios I used a clean installation of Docker Desktop for Mac with Kubernetes enabled, and helm version 3.5.4.

Scenario 1: on a system that already has the snapshotter and its CRDs installed, install and then uninstall this chart, and verify that the CRDs were not removed.

Result: the CRDs are NOT removed after uninstalling the chart.

Before installing anything:

$ kubectl get all --all-namespaces
NAMESPACE     NAME                                         READY   STATUS    RESTARTS   AGE
kube-system   pod/coredns-558bd4d5db-9l2cp                 1/1     Running   0          10m
kube-system   pod/coredns-558bd4d5db-q8r8x                 1/1     Running   0          10m
kube-system   pod/etcd-docker-desktop                      1/1     Running   0          10m
kube-system   pod/kube-apiserver-docker-desktop            1/1     Running   0          10m
kube-system   pod/kube-controller-manager-docker-desktop   1/1     Running   0          10m
kube-system   pod/kube-proxy-ktpd2                         1/1     Running   0          10m
kube-system   pod/kube-scheduler-docker-desktop            1/1     Running   0          10m
kube-system   pod/storage-provisioner                      1/1     Running   0          9m42s
kube-system   pod/vpnkit-controller                        1/1     Running   2          9m42s

NAMESPACE     NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
default       service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP                  10m
kube-system   service/kube-dns     ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   10m

NAMESPACE     NAME                        DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-system   daemonset.apps/kube-proxy   1         1         1       1            1           kubernetes.io/os=linux   10m

NAMESPACE     NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/coredns   2/2     2            2           10m

NAMESPACE     NAME                                 DESIRED   CURRENT   READY   AGE
kube-system   replicaset.apps/coredns-558bd4d5db   2         2         2       10m

$ kubectl api-resources  | grep 'storage.k8s.io'
csidrivers                                     storage.k8s.io/v1                      false        CSIDriver
csinodes                                       storage.k8s.io/v1                      false        CSINode
csistoragecapacities                           storage.k8s.io/v1beta1                 true         CSIStorageCapacity
storageclasses                    sc           storage.k8s.io/v1                      false        StorageClass
volumeattachments                              storage.k8s.io/v1                      false        VolumeAttachment

Install snapshot controller

$ git clone https://github.com/kubernetes-csi/external-snapshotter.git
Cloning into 'external-snapshotter'...
$ cd external-snapshotter/

$ kubectl create -f client/config/crd/
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created

$ kubectl api-resources  | grep 'storage.k8s.io'
volumesnapshotclasses                          snapshot.storage.k8s.io/v1             false        VolumeSnapshotClass
volumesnapshotcontents                         snapshot.storage.k8s.io/v1             false        VolumeSnapshotContent
volumesnapshots                                snapshot.storage.k8s.io/v1             true         VolumeSnapshot
csidrivers                                     storage.k8s.io/v1                      false        CSIDriver
csinodes                                       storage.k8s.io/v1                      false        CSINode
csistoragecapacities                           storage.k8s.io/v1beta1                 true         CSIStorageCapacity
storageclasses                    sc           storage.k8s.io/v1                      false        StorageClass
volumeattachments                              storage.k8s.io/v1                      false        VolumeAttachment

$ kubectl create -f deploy/kubernetes/snapshot-controller
serviceaccount/snapshot-controller created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
deployment.apps/snapshot-controller created

Install ebs-csi

$ cd path/to/aws-ebs-csi-driver/charts/aws-ebs-csi-driver
$ helm install --set enableVolumeSnapshot=false -n kube-system ebs-csi ./
W0622 14:34:14.854779   11594 warnings.go:70] policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
W0622 14:34:14.908261   11594 warnings.go:70] policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
NAME: ebs-csi
LAST DEPLOYED: Tue Jun 22 14:34:14 2021
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
To verify that aws-ebs-csi-driver has started, run:

    kubectl get pod -n kube-system -l "app.kubernetes.io/name=aws-ebs-csi-driver,app.kubernetes.io/instance=ebs-csi"


WARNING: The following values have been deprecated in favor of moving them into the controller or node groups.  They will be removed in a subsequent release.

affinity:
extraCreateMetadata:
extraVolumeTags:
k8sTagClusterId:
nodeSelector:
podAnnotations:
priorityClassName:
region:
replicaCount:
resources:
tolerations:
topologySpreadConstraints:
volumeAttachLimit:

are moving to

controller:
  affinity:
  extraCreateMetadata:
  extraVolumeTags:
  k8sTagClusterId:
  nodeSelector:
  podAnnotations:
  priorityClassName:
  region:
  replicaCount:
  resources:
  tolerations:
  topologySpreadConstraints:

node:
  volumeAttachLimit:

Uninstall ebs-csi helm chart

$ helm uninstall -n kube-system ebs-csi
W0622 14:35:10.180535   12009 warnings.go:70] policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
release "ebs-csi" uninstalled

$ kubectl api-resources  | grep 'storage.k8s.io'
volumesnapshotclasses                          snapshot.storage.k8s.io/v1             false        VolumeSnapshotClass
volumesnapshotcontents                         snapshot.storage.k8s.io/v1             false        VolumeSnapshotContent
volumesnapshots                                snapshot.storage.k8s.io/v1             true         VolumeSnapshot
csidrivers                                     storage.k8s.io/v1                      false        CSIDriver
csinodes                                       storage.k8s.io/v1                      false        CSINode
csistoragecapacities                           storage.k8s.io/v1beta1                 true         CSIStorageCapacity
storageclasses                    sc           storage.k8s.io/v1                      false        StorageClass
volumeattachments                              storage.k8s.io/v1                      false        VolumeAttachment

Scenario 2: on a system that does not have the snapshotter and its CRDs installed, install and then uninstall this chart, and verify that the CRDs are not removed.

Result: the CRDs are NOT removed after uninstalling the chart.

Before installation:

$ kubectl get all --all-namespaces
NAMESPACE     NAME                                         READY   STATUS    RESTARTS   AGE
kube-system   pod/coredns-558bd4d5db-2vqbz                 1/1     Running   0          20m
kube-system   pod/coredns-558bd4d5db-r97fj                 1/1     Running   0          20m
kube-system   pod/etcd-docker-desktop                      1/1     Running   0          20m
kube-system   pod/kube-apiserver-docker-desktop            1/1     Running   0          20m
kube-system   pod/kube-controller-manager-docker-desktop   1/1     Running   0          20m
kube-system   pod/kube-proxy-96q79                         1/1     Running   0          20m
kube-system   pod/kube-scheduler-docker-desktop            1/1     Running   0          20m
kube-system   pod/storage-provisioner                      1/1     Running   0          19m
kube-system   pod/vpnkit-controller                        1/1     Running   3          19m

NAMESPACE     NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
default       service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP                  20m
kube-system   service/kube-dns     ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   20m

NAMESPACE     NAME                        DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-system   daemonset.apps/kube-proxy   1         1         1       1            1           kubernetes.io/os=linux   20m

NAMESPACE     NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/coredns   2/2     2            2           20m

NAMESPACE     NAME                                 DESIRED   CURRENT   READY   AGE
kube-system   replicaset.apps/coredns-558bd4d5db   2         2         2       20m

$ kubectl api-resources  | grep 'storage.k8s.io'
csidrivers                                     storage.k8s.io/v1                      false        CSIDriver
csinodes                                       storage.k8s.io/v1                      false        CSINode
csistoragecapacities                           storage.k8s.io/v1beta1                 true         CSIStorageCapacity
storageclasses                    sc           storage.k8s.io/v1                      false        StorageClass
volumeattachments                              storage.k8s.io/v1                      false        VolumeAttachment

Installation

$ helm install -n kube-system ebs-csi ./
W0622 13:58:58.893685    4380 warnings.go:70] policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
W0622 13:58:58.947414    4380 warnings.go:70] policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
NAME: ebs-csi
LAST DEPLOYED: Tue Jun 22 13:58:58 2021
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
To verify that aws-ebs-csi-driver has started, run:

    kubectl get pod -n kube-system -l "app.kubernetes.io/name=aws-ebs-csi-driver,app.kubernetes.io/instance=ebs-csi"


WARNING: The following values have been deprecated in favor of moving them into the controller or node groups.  They will be removed in a subsequent release.

affinity:
extraCreateMetadata:
extraVolumeTags:
k8sTagClusterId:
nodeSelector:
podAnnotations:
priorityClassName:
region:
replicaCount:
resources:
tolerations:
topologySpreadConstraints:
volumeAttachLimit:

are moving to

controller:
  affinity:
  extraCreateMetadata:
  extraVolumeTags:
  k8sTagClusterId:
  nodeSelector:
  podAnnotations:
  priorityClassName:
  region:
  replicaCount:
  resources:
  tolerations:
  topologySpreadConstraints:

node:
  volumeAttachLimit:

$ kubectl api-resources  | grep 'storage.k8s.io'
volumesnapshotclasses                          snapshot.storage.k8s.io/v1             false        VolumeSnapshotClass
volumesnapshotcontents                         snapshot.storage.k8s.io/v1             false        VolumeSnapshotContent
volumesnapshots                                snapshot.storage.k8s.io/v1             true         VolumeSnapshot
csidrivers                                     storage.k8s.io/v1                      false        CSIDriver
csinodes                                       storage.k8s.io/v1                      false        CSINode
csistoragecapacities                           storage.k8s.io/v1beta1                 true         CSIStorageCapacity
storageclasses                    sc           storage.k8s.io/v1                      false        StorageClass
volumeattachments                              storage.k8s.io/v1                      false        VolumeAttachment

Uninstall helm chart

$ helm uninstall -n kube-system ebs-csi
W0622 14:00:51.639004    4727 warnings.go:70] policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
release "ebs-csi" uninstalled

$ kubectl api-resources  | grep 'storage.k8s.io'
volumesnapshotclasses                          snapshot.storage.k8s.io/v1             false        VolumeSnapshotClass
volumesnapshotcontents                         snapshot.storage.k8s.io/v1             false        VolumeSnapshotContent
volumesnapshots                                snapshot.storage.k8s.io/v1             true         VolumeSnapshot
csidrivers                                     storage.k8s.io/v1                      false        CSIDriver
csinodes                                       storage.k8s.io/v1                      false        CSINode
csistoragecapacities                           storage.k8s.io/v1beta1                 true         CSIStorageCapacity
storageclasses                    sc           storage.k8s.io/v1                      false        StorageClass
volumeattachments                              storage.k8s.io/v1                      false        VolumeAttachment

@krmichel
Contributor

As long as it doesn't cause problems, which the testing seems to indicate it shouldn't, I think this is probably fine, other than maybe the use of the crds directory in the chart.

@missingcharacter
Contributor Author

@krmichel Should I rename the crds directory, or move snapshot-controller.yaml to another directory?

@krmichel
Contributor

I would probably rename the directory from crds to files, since crds is a special directory for helm:

https://helm.sh/docs/chart_best_practices/custom_resource_definitions/#method-1-let-helm-do-it-for-you

@missingcharacter
Contributor Author

@krmichel renaming the folder might confuse people who are accustomed to relying on --skip-crds

@wongma7
Contributor

wongma7 commented Jun 22, 2021

I didn't know about the --skip-crds flag; in that case, do we really need enableVolumeSnapshot if --skip-crds achieves the same thing?
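
For context, --skip-crds is a standard helm 3 install/upgrade flag that skips whatever sits in the chart's crds/ directory; a minimal usage sketch (release name and path are illustrative):

    $ helm install ebs-csi ./charts/aws-ebs-csi-driver -n kube-system --skip-crds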

Makefile (outdated)
@@ -175,3 +175,4 @@ generate-kustomize: bin/helm
cd charts/aws-ebs-csi-driver && ../../bin/helm template kustomize . -s templates/serviceaccount-csi-node.yaml > ../../deploy/kubernetes/base/serviceaccount-csi-node.yaml
cd charts/aws-ebs-csi-driver && ../../bin/helm template kustomize . -s templates/serviceaccount-snapshot-controller.yaml > ../../deploy/kubernetes/base/serviceaccount-snapshot-controller.yaml
cd charts/aws-ebs-csi-driver && ../../bin/helm template kustomize . -s templates/snapshot-controller.yaml -f ../../deploy/kubernetes/values/snapshotter.yaml > ../../deploy/kubernetes/base/snapshot_controller.yaml
cd charts/aws-ebs-csi-driver && mkdir -p ../../deploy/kubernetes/cluster && cp crds/snapshot-controller.yaml ../../deploy/kubernetes/cluster/crd_snapshotter.yaml
Contributor

Could you rename the files so they are the same in helm and in kustomize, and to disambiguate snapshot-controller.yaml:

cd charts/aws-ebs-csi-driver && mkdir -p ../../deploy/kubernetes/cluster && cp crds/snapshot-controller-crds.yaml ../../deploy/kubernetes/cluster/snapshot-controller-crds.yaml 

Otherwise, at a glance, I got confused about the difference between templates/snapshot-controller.yaml and crds/snapshot-controller.yaml.

Contributor Author

@missingcharacter commented Jun 22, 2021

Sure, one moment.

I could switch enableVolumeSnapshot back to installCRDs in charts/aws-ebs-csi-driver/templates/crds.yml for helm version 2 compatibility if you want, similar to how Argo CD does it:

[screenshot: Argo CD's installCRDs gating]
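
The referenced pattern is a value-gated wrapper around the CRD manifests, roughly like this (a sketch of the Argo CD-style approach, not this chart's final code):

    {{- if .Values.installCRDs }}
    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    # ... CRD bodies elided ...
    {{- end }}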

Contributor

No, let's leave it as-is; I'm tempted to deprecate enableVolumeSnapshot in another PR 😂. We have users who want to control:

  1. whether to install the snapshot controller
  2. whether to install the snapshot sidecar
  3. whether to install the snapshot CRDs

IMO --skip-crds satisfies 3; enableVolumeSnapshot is just not a good name anymore.
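
Expressed as chart values, with explicitly hypothetical names (the naming is exactly what remains undecided):

    # hypothetical value names, not the chart's actual API
    snapshotController:
      enabled: true      # 1. whether to install the snapshot controller
    sidecars:
      snapshotter:
        enabled: true    # 2. whether to install the csi-snapshotter sidecar
    # 3. snapshot CRDs: skip at install time with `helm install --skip-crds`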

@wongma7
Contributor

wongma7 commented Jun 22, 2021

/lgtm

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Jun 22, 2021
@k8s-ci-robot k8s-ci-robot merged commit 6441ca8 into kubernetes-sigs:master Jun 22, 2021
@missingcharacter missingcharacter deleted the snapshot-crds branch June 23, 2021 04:51
@Xyaren

Xyaren commented Jul 1, 2021

I think this feature is worth a release on its own :) Any chance?

Thank you!

@wongma7
Contributor

wongma7 commented Jul 1, 2021

Yes, it's coming soon. I am releasing the helm chart and 1.1.1 at the same time, so it will take a bit longer than usual.

  1. #958 needs to merge
  2. images need to build and get promoted
  3. then #959 needs to merge

I will re-evaluate enableVolumeSnapshot in a future release; I still need to fix #942.
