/kind bug
What happened?
When upgrading from 1.0.0 to 1.1.0, all ClusterRoleBindings and RoleBindings were shifted into the kube-system namespace, while our deployment, which references the upstream overlays, is deployed to a namespace other than kube-system. This led to the relevant RBAC permissions being inaccessible to the ServiceAccounts that rely on them.
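For illustration, a minimal sketch of the resulting mismatch, assuming a deployment namespace of ebs-csi; the binding and ServiceAccount names below are illustrative rather than copied verbatim from the 1.1.0 manifests:

```yaml
# Illustrative sketch only: names and the ebs-csi namespace are assumptions,
# not taken verbatim from the upstream 1.1.0 manifests.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ebs-csi-provisioner-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ebs-external-provisioner-role
subjects:
  - kind: ServiceAccount
    name: ebs-csi-controller-sa
    namespace: kube-system   # the grant targets a ServiceAccount in kube-system...
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ebs-csi-controller-sa
  namespace: ebs-csi         # ...but the ServiceAccount is rendered into our namespace,
                             # so the binding never applies to it
```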
What you expected to happen?
Kustomize should be able to decide which namespace the aws-ebs-csi-driver is deployed to, ensuring resources are not split across multiple namespaces.
How to reproduce it (as minimally and precisely as possible)?
Deploy the aws-ebs-csi-driver to a non-default namespace using a kustomization base like the one sketched below.
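A minimal consumer kustomization sketch, assuming the upstream stable overlay at the v1.1.0 tag and an arbitrary ebs-csi namespace (the exact base and namespace in our setup may differ):

```yaml
# kustomization.yaml (hypothetical): overlay path, ref, and namespace are assumptions.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

# Any namespace other than kube-system triggers the split described above.
namespace: ebs-csi

resources:
  - github.com/kubernetes-sigs/aws-ebs-csi-driver/deploy/kubernetes/overlays/stable?ref=v1.1.0
```

Inspecting the kustomize build output then shows the split described above: the RoleBindings and ClusterRoleBinding subjects reference kube-system while the ServiceAccounts are rendered into ebs-csi.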
Anything else we need to know?:
Environment
Kubernetes version (use kubectl version):
Client Version: v1.21.0
Server Version: v1.20.4-eks-6b7464
Driver version: 1.0.0 upgrading to 1.1.0