
Fix the name of the snapshot controller leader election RoleBinding #599

Closed
wants to merge 1 commit into from

Conversation

robbie-demuth
Contributor

Is this a bug fix or adding new feature?

Bug fix

What is this PR about? / Why do we need it?

Before, the RoleBinding was named snapshot-controller-leaderelection. It also incorrectly referenced the snapshot-controller-leaderelection Role instead of the ebs-snapshot-controller-leaderelection Role defined by the Helm chart.

By default, the objects for the snapshot controller from
kubernetes-csi/external-snapshotter [1] are installed into the default
namespace. The manifests, however, recommend installing the objects in
the kube-system namespace instead. One of the objects is a RoleBinding
named snapshot-controller-leaderelection. When installing the snapshot
controller from the external-snapshotter repo into the kube-system
namespace, the AWS EBS CSI driver Helm chart fails to install with the
following error because the RoleBindings conflict:

`Error: rendered manifests contain a resource that already exists. Unable to continue with install: RoleBinding "snapshot-controller-leaderelection" in namespace "kube-system" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "aws-ebs-csi-driver"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "kube-system"`
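For reference, the ownership metadata that Helm 3 checks before adopting an existing resource into a release is sketched below. The values are taken directly from the error message above, not from the chart itself:

```yaml
# Sketch: the ownership metadata Helm 3 requires on a pre-existing resource
# before it will import it into a release. Label and annotation values are
# taken verbatim from the error message above.
metadata:
  labels:
    app.kubernetes.io/managed-by: Helm
  annotations:
    meta.helm.sh/release-name: aws-ebs-csi-driver
    meta.helm.sh/release-namespace: kube-system
```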

If the snapshot controller from the external-snapshotter repo is installed into the default namespace, the AWS EBS CSI driver Helm chart installs (and seems to work) without issue, even though the snapshot-controller-leaderelection Role in the kube-system namespace does not exist.

[1] https://github.com/kubernetes-csi/external-snapshotter/tree/v3.0.1/deploy/kubernetes/snapshot-controller
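With this change, the chart's RoleBinding would render roughly as follows. This is a sketch assembled from the diff context in this PR; the subjects block, and in particular the ServiceAccount name, is an assumption and may differ in the actual chart:

```yaml
# Sketch of the renamed RoleBinding after this change. The roleRef now matches
# the ebs-snapshot-controller-leaderelection Role defined by the Helm chart,
# avoiding the name collision with external-snapshotter's own RoleBinding.
# NOTE: the ServiceAccount name below is an assumption for illustration.
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: ebs-snapshot-controller-leaderelection
  namespace: kube-system
subjects:
  - kind: ServiceAccount
    name: ebs-snapshot-controller   # assumed name; check the chart's ServiceAccount
    namespace: kube-system
roleRef:
  kind: Role
  name: ebs-snapshot-controller-leaderelection
  apiGroup: rbac.authorization.k8s.io
```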

What testing is done?

Installed the modified Helm chart and validated that the snapshot controller is able to take volume snapshots, etc.

@k8s-ci-robot
Contributor

Thanks for your pull request. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).

📝 Please follow instructions at https://git.k8s.io/community/CLA.md#the-contributor-license-agreement to sign the CLA.

It may take a couple minutes for the CLA signature to be fully registered; after that, please reply here with a new comment and we'll verify. Thanks.


  • If you've already signed a CLA, it's possible we don't have your GitHub username or you're using a different email address. Check your existing CLA data and verify that your email is set on your git commits.
  • If you signed the CLA as a corporation, please sign in with your organization's credentials at https://identity.linuxfoundation.org/projects/cncf to be authorized.
  • If you have done the above and are still having issues with the CLA being reported as unsigned, please log a ticket with the Linux Foundation Helpdesk: https://support.linuxfoundation.org/
  • Should you encounter any issues with the Linux Foundation Helpdesk, send a message to the backup e-mail support address at: [email protected]

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@k8s-ci-robot k8s-ci-robot added the cncf-cla: no Indicates the PR's author has not signed the CNCF CLA. label Oct 23, 2020
@k8s-ci-robot
Contributor

Welcome @robbie-demuth!

It looks like this is your first PR to kubernetes-sigs/aws-ebs-csi-driver 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.

You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.

You can also check if kubernetes-sigs/aws-ebs-csi-driver has its own contribution guidelines.

You may want to refer to our testing guide if you run into trouble with your tests not passing.

If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!

Thank you, and welcome to Kubernetes. 😃

@k8s-ci-robot
Contributor

Hi @robbie-demuth. Thanks for your PR.

I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added the needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. label Oct 23, 2020
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: robbie-demuth
To complete the pull request process, please assign bertinatto after the PR has been reviewed.
You can assign the PR to them by writing /assign @bertinatto in a comment when ready.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the size/XS Denotes a PR that changes 0-9 lines, ignoring generated files. label Oct 23, 2020
```diff
@@ -13,7 +13,7 @@ subjects:
     namespace: kube-system
 roleRef:
   kind: Role
-  name: snapshot-controller-leaderelection
+  name: ebs-snapshot-controller-leaderelection
```
Contributor Author


I mentioned this briefly in the commit message / PR description, but I'm unsure how this repository's snapshot controller was working if the snapshot controller from the external-snapshotter repository was installed into a namespace other than kube-system. In that case, the role previously referenced here would not have existed. Are we sure that this role / role binding are needed?

@k8s-ci-robot k8s-ci-robot added cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. and removed cncf-cla: no Indicates the PR's author has not signed the CNCF CLA. labels Oct 23, 2020
```diff
@@ -3,7 +3,7 @@
 kind: RoleBinding
 apiVersion: rbac.authorization.k8s.io/v1
 metadata:
-  name: snapshot-controller-leaderelection
+  name: ebs-snapshot-controller-leaderelection
```
Contributor Author


Running make generate-kustomize as instructed by https://github.com/kubernetes-sigs/aws-ebs-csi-driver#helm-and-manifests resulted in some other changes, but they're not relevant to this PR, so I didn't commit them.

@robbie-demuth
Contributor Author

It turns out my company has signed the CNCF CLA, so I might end up closing this PR and creating a new one from a fork from my organization once I get that set up.

```diff
@@ -3,7 +3,7 @@
 kind: RoleBinding
 apiVersion: rbac.authorization.k8s.io/v1
 metadata:
-  name: snapshot-controller-leaderelection
+  name: ebs-snapshot-controller-leaderelection
```
Contributor Author


Should I also bump the Helm chart version from 0.6.1 to 0.6.2? Similarly, what's the release process like for the Helm chart? I see it's attached to the GitHub release, but is it released independently as well or do I need to wait for a GitHub release to be cut for this change to be officially available?

@coveralls

Pull Request Test Coverage Report for Build 1277

  • 0 of 0 changed or added relevant lines in 0 files are covered.
  • No unchanged relevant lines lost coverage.
  • Overall coverage remained the same at 81.253%

Totals (Coverage Status)
  • Change from base Build 1269: 0.0%
  • Covered Lines: 1595
  • Relevant Lines: 1963

💛 - Coveralls

@robbie-demuth
Contributor Author

Closing in favor of #601
