
Update from version 0.4 to 0.5 #514

Closed
AhmadMS1988 opened this issue May 28, 2020 · 11 comments
Labels
kind/bug Categorizes issue or PR as related to a bug. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments


AhmadMS1988 commented May 28, 2020

/kind bug

What happened?
When I issued kubectl apply -k "github.com/kubernetes-sigs/aws-ebs-csi-driver/deploy/kubernetes/overlays/stable/?ref=master" I got the following error:

serviceaccount/ebs-csi-controller-sa unchanged
clusterrole.rbac.authorization.k8s.io/ebs-external-attacher-role unchanged
clusterrole.rbac.authorization.k8s.io/ebs-external-provisioner-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/ebs-csi-attacher-binding unchanged
clusterrolebinding.rbac.authorization.k8s.io/ebs-csi-provisioner-binding unchanged
csidriver.storage.k8s.io/ebs.csi.aws.com unchanged
Error from server (Invalid): error when applying patch:
{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"app.kubernetes.io/name\":\"aws-ebs-csi-driver\"},\"name\":\"ebs-csi-controller\",\"namespace\":\"kube-system\"},\"spec\":{\"replicas\":2,\"selector\":{\"matchLabels\":{\"app\":\"ebs-csi-controller\",\"app.kubernetes.io/name\":\"aws-ebs-csi-driver\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"ebs-csi-controller\",\"app.kubernetes.io/name\":\"aws-ebs-csi-driver\"}},\"spec\":{\"containers\":[{\"args\":[\"--endpoint=$(CSI_ENDPOINT)\",\"--logtostderr\",\"--v=5\"],\"env\":[{\"name\":\"CSI_ENDPOINT\",\"value\":\"unix:///var/lib/csi/sockets/pluginproxy/csi.sock\"},{\"name\":\"AWS_ACCESS_KEY_ID\",\"valueFrom\":{\"secretKeyRef\":{\"key\":\"key_id\",\"name\":\"aws-secret\",\"optional\":true}}},{\"name\":\"AWS_SECRET_ACCESS_KEY\",\"valueFrom\":{\"secretKeyRef\":{\"key\":\"access_key\",\"name\":\"aws-secret\",\"optional\":true}}}],\"image\":\"amazon/aws-ebs-csi-driver:v0.5.0\",\"imagePullPolicy\":\"IfNotPresent\",\"livenessProbe\":{\"failureThreshold\":5,\"httpGet\":{\"path\":\"/healthz\",\"port\":\"healthz\"},\"initialDelaySeconds\":10,\"periodSeconds\":10,\"timeoutSeconds\":3},\"name\":\"ebs-plugin\",\"ports\":[{\"containerPort\":9808,\"name\":\"healthz\",\"protocol\":\"TCP\"}],\"volumeMounts\":[{\"mountPath\":\"/var/lib/csi/sockets/pluginproxy/\",\"name\":\"socket-dir\"}]},{\"args\":[\"--csi-address=$(ADDRESS)\",\"--v=5\",\"--feature-gates=Topology=true\",\"--enable-leader-election\",\"--leader-election-type=leases\"],\"env\":[{\"name\":\"ADDRESS\",\"value\":\"/var/lib/csi/sockets/pluginproxy/csi.sock\"}],\"image\":\"quay.io/k8scsi/csi-provisioner:v1.3.0\",\"name\":\"csi-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/var/lib/csi/sockets/pluginproxy/\",\"name\":\"socket-dir\"}]},{\"args\":[\"--csi-address=$(ADDRESS)\",\"--v=5\",\"--leader-election=true\",\"--leader-election-type=leases\"],\"env\":[{\"name\":\"ADDRESS\",\"value\":\"/var/lib/csi/sockets/pluginproxy/csi.sock\"}],\"image\":\"quay.io/k8scsi/csi-attacher:v1.2.0\",\"name\":\"csi-attacher\",\"volumeMounts\":[{\"mountPath\":\"/var/lib/csi/sockets/pluginproxy/\",\"name\":\"socket-dir\"}]},{\"args\":[\"--csi-address=/csi/csi.sock\"],\"image\":\"quay.io/k8scsi/livenessprobe:v1.1.0\",\"name\":\"liveness-probe\",\"volumeMounts\":[{\"mountPath\":\"/csi\",\"name\":\"socket-dir\"}]}],\"nodeSelector\":{\"kubernetes.io/arch\":\"amd64\",\"kubernetes.io/os\":\"linux\"},\"priorityClassName\":\"system-cluster-critical\",\"serviceAccountName\":\"ebs-csi-controller-sa\",\"tolerations\":[{\"operator\":\"Exists\"}],\"volumes\":[{\"emptyDir\":{},\"name\":\"socket-dir\"}]}}}}\n"},"labels":{"app.kubernetes.io/name":"aws-ebs-csi-driver"}},"spec":{"selector":{"matchLabels":{"app.kubernetes.io/name":"aws-ebs-csi-driver"}},"template":{"metadata":{"labels":{"app.kubernetes.io/name":"aws-ebs-csi-driver"}},"spec":{"$setElementOrder/containers":[{"name":"ebs-plugin"},{"name":"csi-provisioner"},{"name":"csi-attacher"},{"name":"liveness-probe"}],"containers":[{"image":"amazon/aws-ebs-csi-driver:v0.5.0","name":"ebs-plugin"},{"image":"quay.io/k8scsi/csi-provisioner:v1.3.0","name":"csi-provisioner"},{"image":"quay.io/k8scsi/csi-attacher:v1.2.0","name":"csi-attacher"},{"image":"quay.io/k8scsi/livenessprobe:v1.1.0","name":"liveness-probe"}],"nodeSelector":{"beta.kubernetes.io/arch":null,"beta.kubernetes.io/os":null,"kubernetes.io/arch":"amd64","kubernetes.io/os":"linux"},"serviceAccount":null
,"tolerations":[{"operator":"Exists"}]}}}}
to:
Resource: "apps/v1, Resource=deployments", GroupVersionKind: "apps/v1, Kind=Deployment"
Name: "ebs-csi-controller", Namespace: "kube-system"
for: "github.com/kubernetes-sigs/aws-ebs-csi-driver/deploy/kubernetes/overlays/stable/?ref=master": Deployment.apps "ebs-csi-controller" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app":"ebs-csi-controller", "app.kubernetes.io/name":"aws-ebs-csi-driver"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable
Error from server (Invalid): error when applying patch:
{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app.kubernetes.io/name\":\"aws-ebs-csi-driver\"},\"name\":\"ebs-csi-node\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"ebs-csi-node\",\"app.kubernetes.io/name\":\"aws-ebs-csi-driver\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"ebs-csi-node\",\"app.kubernetes.io/name\":\"aws-ebs-csi-driver\"}},\"spec\":{\"affinity\":{\"nodeAffinity\":{\"requiredDuringSchedulingIgnoredDuringExecution\":{\"nodeSelectorTerms\":[{\"matchExpressions\":[{\"key\":\"eks.amazonaws.com/compute-type\",\"operator\":\"NotIn\",\"values\":[\"fargate\"]}]}]}}},\"containers\":[{\"args\":[\"node\",\"--endpoint=$(CSI_ENDPOINT)\",\"--logtostderr\",\"--v=5\"],\"env\":[{\"name\":\"CSI_ENDPOINT\",\"value\":\"unix:/csi/csi.sock\"}],\"image\":\"amazon/aws-ebs-csi-driver:v0.5.0\",\"livenessProbe\":{\"failureThreshold\":5,\"httpGet\":{\"path\":\"/healthz\",\"port\":\"healthz\"},\"initialDelaySeconds\":10,\"periodSeconds\":10,\"timeoutSeconds\":3},\"name\":\"ebs-plugin\",\"ports\":[{\"containerPort\":9808,\"name\":\"healthz\",\"protocol\":\"TCP\"}],\"securityContext\":{\"privileged\":true},\"volumeMounts\":[{\"mountPath\":\"/var/lib/kubelet\",\"mountPropagation\":\"Bidirectional\",\"name\":\"kubelet-dir\"},{\"mountPath\":\"/csi\",\"name\":\"plugin-dir\"},{\"mountPath\":\"/dev\",\"name\":\"device-dir\"}]},{\"args\":[\"--csi-address=$(ADDRESS)\",\"--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)\",\"--v=5\"],\"env\":[{\"name\":\"ADDRESS\",\"value\":\"/csi/csi.sock\"},{\"name\":\"DRIVER_REG_SOCK_PATH\",\"value\":\"/var/lib/kubelet/plugins/ebs.csi.aws.com/csi.sock\"}],\"image\":\"quay.io/k8scsi/csi-node-driver-registrar:v1.1.0\",\"lifecycle\":{\"preStop\":{\"exec\":{\"command\":[\"/bin/sh\",\"-c\",\"rm -rf /registration/ebs.csi.aws.com-reg.sock 
/csi/csi.sock\"]}}},\"name\":\"node-driver-registrar\",\"volumeMounts\":[{\"mountPath\":\"/csi\",\"name\":\"plugin-dir\"},{\"mountPath\":\"/registration\",\"name\":\"registration-dir\"}]},{\"args\":[\"--csi-address=/csi/csi.sock\"],\"image\":\"quay.io/k8scsi/livenessprobe:v1.1.0\",\"name\":\"liveness-probe\",\"volumeMounts\":[{\"mountPath\":\"/csi\",\"name\":\"plugin-dir\"}]}],\"hostNetwork\":true,\"nodeSelector\":{\"kubernetes.io/arch\":\"amd64\",\"kubernetes.io/os\":\"linux\"},\"priorityClassName\":\"system-node-critical\",\"tolerations\":[{\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/var/lib/kubelet\",\"type\":\"Directory\"},\"name\":\"kubelet-dir\"},{\"hostPath\":{\"path\":\"/var/lib/kubelet/plugins/ebs.csi.aws.com/\",\"type\":\"DirectoryOrCreate\"},\"name\":\"plugin-dir\"},{\"hostPath\":{\"path\":\"/var/lib/kubelet/plugins_registry/\",\"type\":\"Directory\"},\"name\":\"registration-dir\"},{\"hostPath\":{\"path\":\"/dev\",\"type\":\"Directory\"},\"name\":\"device-dir\"}]}}}}\n"},"labels":{"app.kubernetes.io/name":"aws-ebs-csi-driver"}},"spec":{"selector":{"matchLabels":{"app.kubernetes.io/name":"aws-ebs-csi-driver"}},"template":{"metadata":{"labels":{"app.kubernetes.io/name":"aws-ebs-csi-driver"}},"spec":{"$setElementOrder/containers":[{"name":"ebs-plugin"},{"name":"node-driver-registrar"},{"name":"liveness-probe"}],"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchExpressions":[{"key":"eks.amazonaws.com/compute-type","operator":"NotIn","values":["fargate"]}]}]}}},"containers":[{"args":["node","--endpoint=$(CSI_ENDPOINT)","--logtostderr","--v=5"],"image":"amazon/aws-ebs-csi-driver:v0.5.0","name":"ebs-plugin"},{"image":"quay.io/k8scsi/csi-node-driver-registrar:v1.1.0","name":"node-driver-registrar"},{"image":"quay.io/k8scsi/livenessprobe:v1.1.0","name":"liveness-probe"}],"nodeSelector":{"beta.kubernetes.io/arch":null,"beta.kubernetes.io/os":null,"kubernetes.io/arch":"amd64","kubernetes.io/os":"linux"}}}}}
to:
Resource: "apps/v1, Resource=daemonsets", GroupVersionKind: "apps/v1, Kind=DaemonSet"
Name: "ebs-csi-node", Namespace: "kube-system"
for: "github.com/kubernetes-sigs/aws-ebs-csi-driver/deploy/kubernetes/overlays/stable/?ref=master": DaemonSet.apps "ebs-csi-node" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app":"ebs-csi-node", "app.kubernetes.io/name":"aws-ebs-csi-driver"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable

What you expected to happen?
The upgrade to complete successfully.

How to reproduce it (as minimally and precisely as possible)?
kubectl apply -k "github.com/kubernetes-sigs/aws-ebs-csi-driver/deploy/kubernetes/overlays/stable/?ref=master"
when you have 0.4 installed on EKS v1.16.8-eks-e16311

Environment

  • Kubernetes version (use kubectl version):
    Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.3", GitCommit:"2e7996e3e2712684bc73f0dec0200d64eec7fe40", GitTreeState:"clean", BuildDate:"2020-05-21T14:51:23Z", GoVersion:"go1.14.3", Compiler:"gc", Platform:"darwin/amd64"}
    Server Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.8-eks-e16311", GitCommit:"e163110a04dcb2f39c3325af96d019b4925419eb", GitTreeState:"clean", BuildDate:"2020-03-27T22:37:12Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}

  • Driver version:
    Current: 0.4

@k8s-ci-robot k8s-ci-robot added the kind/bug Categorizes issue or PR as related to a bug. label May 28, 2020

pandvan commented Jun 3, 2020

I'm facing the same issue on an EKS cluster at version 1.15.11 when trying to upgrade the EBS driver from version 0.4 to 0.5.
The only way I've found so far to proceed is to delete the kustomization with kubectl delete -k and then re-apply it.
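A minimal sketch of that delete-and-reapply workaround (assuming the driver was installed from the stable overlay into the default kube-system namespace; adjust the ?ref= to whichever version you actually applied):

# Remove the old kustomization, then re-apply the new one.
kubectl delete -k "github.com/kubernetes-sigs/aws-ebs-csi-driver/deploy/kubernetes/overlays/stable/?ref=v0.4.0"
kubectl apply -k "github.com/kubernetes-sigs/aws-ebs-csi-driver/deploy/kubernetes/overlays/stable/?ref=master"

Note that this removes the running controller and node pods until the re-apply completes.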


TBBle commented Jun 16, 2020

You're actually upgrading past 0.5. The issue is that 46235c5 (part of 0.6) changed the matchLabels in deploy/kubernetes/base/controller.yaml and deploy/kubernetes/base/node.yaml, and you can't modify a live Deployment or DaemonSet that way.

Upgrading to 0.5 would probably work (github.com/kubernetes-sigs/aws-ebs-csi-driver/deploy/kubernetes/overlays/stable/?ref=v0.5.0 I guess? I haven't used kubectl apply -k) but the same problem will appear moving to 0.6.

Delete and reinstall is really your only option. You can probably just delete the Deployment and DaemonSet, though; the rest of the updates should apply over the top okay.

It's slightly odd, because 0.4 didn't have a Deployment. It did have a Daemonset though, so the problem would still occur. I suspect you were on a version between 0.4 and 0.5 already.
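A rough sketch of the targeted delete-and-reapply suggested above (assuming the default install in kube-system and the stable overlay; persistent volumes are untouched, but the controller and node pods are recreated):

kubectl -n kube-system delete deployment ebs-csi-controller
kubectl -n kube-system delete daemonset ebs-csi-node
kubectl apply -k "github.com/kubernetes-sigs/aws-ebs-csi-driver/deploy/kubernetes/overlays/stable/?ref=master"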

@leakingtapan
Contributor

The issue is that 46235c5 (part of 0.6) changed the matchLabels in deploy/kubernetes/base/controller.yaml and deploy/kubernetes/base/node.yaml, and you can't modify a live Deployment or DaemonSet that way.

If there's no strong reason otherwise, we need to fix this by keeping the old matchLabels. @krmichel, was there any reason to change the matchLabels in #475?

@krmichel
Contributor

The match label being added (app.kubernetes.io/name) is a recommended Kubernetes label. It already existed in the resources created by the Helm chart (links below), which we started using to generate the kustomize files. I would have added more of the recommended labels, but I was trying to maintain as much parity as possible with the previous chart and files. A change could certainly be made to leave them out of the generated files.

One reason to keep them is that, starting with Helm 3.2.0, if you add the correct labels/annotations to resources you can get Helm to adopt them, so a user could switch to using the chart if they wanted to. I don't think that adoption would work if the selector labels didn't match those in the chart; it would cause the same issue that is currently being seen.

I guess the real question is whether the owners want app.kubernetes.io/name to be a matchLabel for the kustomize resources. If so, it will be mildly painful whenever it is added. If the owners decide they would like it removed, they can assign this issue to me and I will take care of it.

@leakingtapan do you want to decide what you and the other owners would like to do and let me know?

https://github.com/kubernetes-sigs/aws-ebs-csi-driver/pull/475/files#diff-114310c3d7c89a06be5088b7b4a127c9L12
https://github.com/kubernetes-sigs/aws-ebs-csi-driver/pull/475/files#diff-14c5280c272adbc32e1744a1034e5d58L11
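For anyone checking whether their live objects already carry the new selector label, something like the following (an illustrative check, assuming the default kube-system install) prints the current selectors; if app.kubernetes.io/name: aws-ebs-csi-driver is missing, applying the newer manifests will hit the immutable-selector error shown above:

kubectl -n kube-system get daemonset ebs-csi-node -o jsonpath='{.spec.selector.matchLabels}{"\n"}'
kubectl -n kube-system get deployment ebs-csi-controller -o jsonpath='{.spec.selector.matchLabels}{"\n"}'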

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 24, 2020

TBBle commented Sep 24, 2020

It looks like this problem will still exist when upgrading a 0.5.0 or older installation to 0.6.0 or later, unless the old version is uninstalled (or at least the ebs-csi-node DaemonSet is deleted) before the new version is installed.

It's possible other things in the system will have the same matchLabels changes, so a full uninstall-old/install-new cycle is probably better, being careful to maintain any customisations applied.

@krmichel
Contributor

I think this can be worked around using kubectl replace. I haven't tested this, but it should be something like this for the DaemonSet:
kubectl -n kube-system get daemonset ebs-csi-node -o yaml | sed -E 's|((\s+)app: ebs-csi-node)|\1\n\2app.kubernetes.io/name: aws-ebs-csi-driver|' | kubectl replace -f -
and this for the deployment:
kubectl -n kube-system get deployment ebs-csi-controller -o yaml | sed -E 's|((\s+)app: ebs-csi-controller)|\1\n\2app.kubernetes.io/name: aws-ebs-csi-driver|' | kubectl replace -f -

This should "update" (by replacing) the resources currently in the cluster so they carry the additional label, which should then allow the upgrade.


TBBle commented Sep 26, 2020

I think to change the immutable field, you need kubectl replace --force, which forces a delete-and-create instead of an overwrite. I think that will cause the existing Pods to be deleted, and new Pods created.

At which point you might as well kubectl delete the ebs-csi-node DaemonSet, and then kubectl apply -k the new version anyway.
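For completeness, an untested sketch of the --force variant of the earlier sed pipeline (assumes GNU sed and the default kube-system install; --force deletes and recreates the DaemonSet, so its pods are restarted):

kubectl -n kube-system get daemonset ebs-csi-node -o yaml \
  | sed -E 's|((\s+)app: ebs-csi-node)|\1\n\2app.kubernetes.io/name: aws-ebs-csi-driver|' \
  | kubectl replace --force -f -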

@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Oct 26, 2020
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
