Fix the name of the snapshot controller leader election RoleBinding #599
Conversation
Thanks for your pull request. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA). 📝 Please follow instructions at https://git.k8s.io/community/CLA.md#the-contributor-license-agreement to sign the CLA. It may take a couple minutes for the CLA signature to be fully registered; after that, please reply here with a new comment and we'll verify. Thanks.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
Welcome @robbie-demuth!
Hi @robbie-demuth. Thanks for your PR. I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: robbie-demuth. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
```diff
@@ -13,7 +13,7 @@ subjects:
     namespace: kube-system
 roleRef:
   kind: Role
-  name: snapshot-controller-leaderelection
+  name: ebs-snapshot-controller-leaderelection
```
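For reference, a minimal sketch of the RoleBinding this change is assumed to render under the chart's defaults; the ServiceAccount name below is illustrative and not taken from this PR:

```yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: ebs-snapshot-controller-leaderelection
  namespace: kube-system
subjects:
  - kind: ServiceAccount
    name: ebs-snapshot-controller   # assumed name; use the ServiceAccount the chart actually creates
    namespace: kube-system
roleRef:
  kind: Role
  name: ebs-snapshot-controller-leaderelection   # now matches the Role defined by the chart
  apiGroup: rbac.authorization.k8s.io
```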
I mentioned this briefly in the commit message / PR description, but I'm unsure how this repository's snapshot controller was working if the snapshot controller from the external-snapshotter repository was installed into a namespace other than kube-system. In that case, the role previously referenced here would not have existed. Are we sure that this role / role binding are needed?
```diff
@@ -3,7 +3,7 @@
 kind: RoleBinding
 apiVersion: rbac.authorization.k8s.io/v1
 metadata:
-  name: snapshot-controller-leaderelection
+  name: ebs-snapshot-controller-leaderelection
```
Running `make generate-kustomize` as instructed by https://github.com/kubernetes-sigs/aws-ebs-csi-driver#helm-and-manifests resulted in some other changes, but they're not relevant to this PR, so I didn't commit them.
It turns out my company has signed the CNCF CLA, so I might end up closing this PR and creating a new one from a fork from my organization once I get that set up.
```diff
@@ -3,7 +3,7 @@
 kind: RoleBinding
 apiVersion: rbac.authorization.k8s.io/v1
 metadata:
-  name: snapshot-controller-leaderelection
+  name: ebs-snapshot-controller-leaderelection
```
Should I also bump the Helm chart version from 0.6.1 to 0.6.2? Similarly, what's the release process like for the Helm chart? I see it's attached to the GitHub release, but is it released independently as well or do I need to wait for a GitHub release to be cut for this change to be officially available?
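If the version is bumped, the change would be a one-line edit to the chart metadata; a sketch assuming the conventional chart layout (the path and surrounding fields are illustrative):

```yaml
# charts/aws-ebs-csi-driver/Chart.yaml (path assumed)
apiVersion: v1            # keep whatever apiVersion the chart already declares
name: aws-ebs-csi-driver
version: 0.6.2            # bumped from 0.6.1 for this fix
```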
Pull Request Test Coverage Report for Build 1277
💛 - Coveralls
Closing in favor of #601
Is this a bug fix or adding new feature?
Bug fix
What is this PR about? / Why do we need it?
Before, the RoleBinding was named snapshot-controller-leaderelection. It
also incorrectly referenced the snapshot-controller-leaderelection Role
instead of the ebs-snapshot-controller-leaderelection Role defined by
the Helm chart.
By default, the objects for the snapshot controller from
kubernetes-csi/external-snapshotter [1] are installed into the default
namespace. The manifests, however, recommend installing the objects in
the kube-system namespace instead. One of the objects is a RoleBinding
named snapshot-controller-leaderelection. When installing the snapshot
controller from the external-snapshotter repo into the kube-system
namespace, the AWS EBS CSI driver Helm chart fails to install with the
following error because the RoleBindings conflict:
Error: rendered manifests contain a resource that already exists. Unable to continue with install: RoleBinding "snapshot-controller-leaderelection" in namespace "kube-system" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "aws-ebs-csi-driver"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "kube-system"
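The "invalid ownership metadata" part of this error refers to the label and annotations Helm 3 adds to resources it manages; the RoleBinding installed from the external-snapshotter manifests does not carry them, so Helm refuses to adopt it. A sketch of the metadata Helm expects, using the values named in the error:

```yaml
metadata:
  name: snapshot-controller-leaderelection
  namespace: kube-system
  labels:
    app.kubernetes.io/managed-by: Helm
  annotations:
    meta.helm.sh/release-name: aws-ebs-csi-driver
    meta.helm.sh/release-namespace: kube-system
```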
If the snapshot controller from the external-snapshotter repo is
installed into the default namespace, the AWS EBS CSI driver Helm chart
installs (and seems to work) without issue even though the
snapshot-controller-leaderelection Role in the kube-system namespace
does not exist.
[1] https://github.com/kubernetes-csi/external-snapshotter/tree/v3.0.1/deploy/kubernetes/snapshot-controller
What testing is done?
Installed the modified Helm chart and validated that the snapshot controller is able to take volume snapshots, etc.
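For context, a minimal VolumeSnapshot manifest of the kind that could be used for this validation; the snapshot class and PVC names are placeholders, not taken from this PR:

```yaml
apiVersion: snapshot.storage.k8s.io/v1beta1    # external-snapshotter v3.x serves v1beta1
kind: VolumeSnapshot
metadata:
  name: ebs-volume-snapshot
spec:
  volumeSnapshotClassName: csi-aws-vsc         # placeholder VolumeSnapshotClass
  source:
    persistentVolumeClaimName: ebs-claim       # placeholder PVC provisioned by the EBS CSI driver
```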