
aws-ebs-csi-driver v1.1.3 is defined in kustomization release-1.1 branch but doesn't exist in registry #993

Closed
RCarretta opened this issue Jul 26, 2021 · 5 comments · Fixed by kubernetes/k8s.io#2394
Labels
kind/bug Categorizes issue or PR as related to a bug.

Comments

@RCarretta

/kind bug

What happened?
aws-ebs-csi-driver was updated to v1.1.3, but that version doesn't exist in the registry. This is causing deployments using the release-1.1 branch kustomization.yaml to fail.

What you expected to happen?
Either aws-ebs-csi-driver:v1.1.3 should be pushed to the registry, or the kustomization.yaml should be reverted to reference v1.1.2.
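In the meantime, affected deployments can pin the driver image back to a published tag with a small local overlay. A rough sketch (assuming kustomize v3+ is installed; the directory name is arbitrary):

$ mkdir ebs-csi-pin && cd ebs-csi-pin
$ kustomize create --resources "github.com/kubernetes-sigs/aws-ebs-csi-driver/deploy/kubernetes/overlays/stable/?ref=release-1.1"
$ kustomize edit set image k8s.gcr.io/provider-aws/aws-ebs-csi-driver:v1.1.2   # pin back to the last tag that is actually published
$ kubectl apply -k .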

How to reproduce it (as minimally and precisely as possible)?
kubectl apply -k "github.com/kubernetes-sigs/aws-ebs-csi-driver/deploy/kubernetes/overlays/stable/?ref=release-1.1"

Anything else we need to know?:
I'm not exactly clear procedurally what happened here, but @vdhanan and @wongma7 committed updates to the kustomization.yaml on July 23. Each bumped the aws-ebs-csi-driver patch version, but v1.1.3 doesn't exist in the registry.
v1.1.2 -> 061fcd8#diff-aed32151527b406d85a51ac1f2dfec15c02812f3b0b907caca8e15c130195135
v1.1.3 -> b104754#diff-aed32151527b406d85a51ac1f2dfec15c02812f3b0b907caca8e15c130195135
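The missing tag can also be confirmed directly against the registry, without a cluster, for example with docker manifest inspect (older Docker clients may need DOCKER_CLI_EXPERIMENTAL=enabled):

$ docker manifest inspect k8s.gcr.io/provider-aws/aws-ebs-csi-driver:v1.1.2   # resolves, the tag is published
$ docker manifest inspect k8s.gcr.io/provider-aws/aws-ebs-csi-driver:v1.1.3   # errors at the time of this report, the tag was never pushed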

There are 6 failing checks in the repo on the v1.1.3 commit.

Environment

  • Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.3", GitCommit:"ca643a4d1f7bfe34773c74f79527be4afd95bf39", GitTreeState:"clean", BuildDate:"2021-07-15T21:04:39Z", GoVersion:"go1.16.6", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.3", GitCommit:"ca643a4d1f7bfe34773c74f79527be4afd95bf39", GitTreeState:"clean", BuildDate:"2021-07-15T20:59:07Z", GoVersion:"go1.16.6", Compiler:"gc", Platform:"linux/amd64"}
  • Driver version: release-1.1
@k8s-ci-robot k8s-ci-robot added the kind/bug Categorizes issue or PR as related to a bug. label Jul 26, 2021
@RCarretta
Author

$ kubectl get pods --all-namespaces | grep ebs-csi
kube-system   ebs-csi-controller-dcdfc8989-grhvw                                   5/6     ImagePullBackOff   0          129m
kube-system   ebs-csi-controller-dcdfc8989-m7bpb                                   5/6     ImagePullBackOff   0          129m
kube-system   ebs-csi-node-2vdmw                                                   2/3     ImagePullBackOff   0          18m
kube-system   ebs-csi-node-8fs26                                                   2/3     ImagePullBackOff   0          125m
kube-system   ebs-csi-node-8rhkg                                                   2/3     ImagePullBackOff   0          126m
kube-system   ebs-csi-node-9vwpt                                                   2/3     ImagePullBackOff   0          124m
kube-system   ebs-csi-node-c79dh                                                   2/3     ImagePullBackOff   0          127m
kube-system   ebs-csi-node-dm4fg                                                   2/3     ImagePullBackOff   0          123m
kube-system   ebs-csi-node-g64g2                                                   2/3     ImagePullBackOff   0          126m
kube-system   ebs-csi-node-jrmtb                                                   2/3     ImagePullBackOff   0          127m
kube-system   ebs-csi-node-lbh4s                                                   2/3     ImagePullBackOff   0          125m
kube-system   ebs-csi-node-mhxdk                                                   2/3     ImagePullBackOff   0          126m
kube-system   ebs-csi-node-t7hld                                                   2/3     ImagePullBackOff   0          127m
$ kubectl describe pod -n kube-system ebs-csi-controller-dcdfc8989-grhvw | grep Image:
    Image:         k8s.gcr.io/provider-aws/aws-ebs-csi-driver:v1.1.3
    Image:         k8s.gcr.io/sig-storage/csi-provisioner:v2.1.1
    Image:         k8s.gcr.io/sig-storage/csi-attacher:v3.1.0
    Image:         k8s.gcr.io/sig-storage/csi-snapshotter:v3.0.3
    Image:         k8s.gcr.io/sig-storage/csi-resizer:v1.0.0
    Image:         k8s.gcr.io/sig-storage/livenessprobe:v2.2.0
$ kubectl describe pod -n kube-system ebs-csi-node-2vdmw | grep Image:
    Image:         k8s.gcr.io/provider-aws/aws-ebs-csi-driver:v1.1.3
    Image:         k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.1.0
    Image:         k8s.gcr.io/sig-storage/livenessprobe:v2.2.0

@therealdwright
Contributor

Looks like an issue with this release

@SaltedEggIndomee

SaltedEggIndomee commented Jul 26, 2021

Maybe update the instructions on the wiki first to use the older version.

I'm surprised that even a simple test for the new release, such as pulling the images, wasn't done when the instructions were updated.
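A rough sketch of such a check (hypothetical, not an existing CI job): render the overlay and verify that every referenced image actually resolves in the registry before documenting the release.

$ kustomize build "github.com/kubernetes-sigs/aws-ebs-csi-driver/deploy/kubernetes/overlays/stable/?ref=release-1.1" \
    | grep -E '^ *(- )?image:' | awk '{print $NF}' | sort -u \
    | xargs -r -n1 docker manifest inspect > /dev/null \
    && echo "all referenced images are pullable" \
    || echo "at least one referenced image is missing"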

@wongma7
Contributor

wongma7 commented Jul 26, 2021

no, there is no test. we're pushing kubernetes/k8s.io#2394.

edit: kubernetes/k8s.io#2394 has merged, the image will be in gcr shortly

@wongma7
Contributor

wongma7 commented Jul 26, 2021

I'll work on updating the ordering of the process so that it's impossible (for real) for this to occur again.

https://github.com/kubernetes-sigs/aws-ebs-csi-driver/blob/master/docs/RELEASE.md

We already fixed this for the helm chart such that it doesn't get released until after the image exists, but the same guarantee doesn't exist for kustomize if the release branch already exists and is already documented in the README. It's an oversight; I'll create a separate issue to track it.

edit: created issue #994
