Re-add handling of rename from official image "k8s.gcr.io/coredns" to "k8s.gcr.io/coredns/coredns" to support deprecated installations #114978
Conversation
This was accidentally replaced in kubernetes@50bea1d. By re-adding it, `kubeadm join` will no longer fail with `failed to pull image k8s.gcr.io/coredns:v1.8.6` for clusters that still have `k8s.gcr.io` explicitly configured as image repository, such as CAPI-managed clusters.
Welcome @AndiDog!
@AndiDog: This issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the `triage/accepted` label and provide further guidance. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Hi @AndiDog. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test`. Once the patch is verified, the new status will be reflected by the `ok-to-test` label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
[APPROVALNOTIFIER] This PR is NOT APPROVED This pull-request has been approved by: AndiDog The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
Cluster got created via CAPI
CAPI has specific handling of the migration that they were working on recently. Please contact the CAPI maintainers about your use case.
```diff
@@ -48,8 +48,9 @@ func GetDNSImage(cfg *kubeadmapi.ClusterConfiguration) string {
 	if cfg.DNS.ImageRepository != "" {
 		dnsImageRepository = cfg.DNS.ImageRepository
 	}
-	// Handle the renaming of the official image from "registry.k8s.io/coredns" to "registry.k8s.io/coredns/coredns
 	if dnsImageRepository == kubeadmapiv1beta3.DefaultImageRepository {
+	// Handle the renaming of the official image from "registry.k8s.io/coredns" to "registry.k8s.io/coredns/coredns"
```
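For context, here is a minimal, self-contained Go sketch of the subpath logic the hunk above touches. The constant and function names are simplified assumptions for illustration, not kubeadm's actual identifiers (the real logic lives in `GetDNSImage`):

```go
package main

import "fmt"

// defaultImageRepository stands in for kubeadmapiv1beta3.DefaultImageRepository
// (an assumption for this sketch).
const defaultImageRepository = "registry.k8s.io"

// dnsImage sketches the rename handling: only the official default repository
// gets the extra "/coredns" subpath introduced by the CoreDNS maintainers;
// any other repository is treated as custom and used as-is.
func dnsImage(imageRepository, tag string) string {
	repo := imageRepository
	if repo == defaultImageRepository {
		repo += "/coredns"
	}
	return fmt.Sprintf("%s/coredns:%s", repo, tag)
}

func main() {
	fmt.Println(dnsImage("registry.k8s.io", "v1.8.6")) // official repo gets the subpath
	fmt.Println(dnsImage("k8s.gcr.io", "v1.8.6"))      // treated as custom, no subpath
}
```

With `k8s.gcr.io` no longer matching the default, the resulting `k8s.gcr.io/coredns:v1.8.6` path is exactly the one the pull error in this thread complains about.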
this is actually not valid. the official image remains with a /coredns subpath. if the user has a custom repo the subpath is always absent.
I don't understand what's not valid here.
```shell
$ docker pull k8s.gcr.io/coredns:v1.8.6
Trying to pull k8s.gcr.io/coredns:v1.8.6...
Error: initializing source docker://k8s.gcr.io/coredns:v1.8.6: reading manifest v1.8.6 in k8s.gcr.io/coredns: manifest unknown: Failed to fetch "v1.8.6" from request "/v2/coredns/manifests/v1.8.6".

$ docker pull k8s.gcr.io/coredns/coredns:v1.8.6
Trying to pull k8s.gcr.io/coredns/coredns:v1.8.6...
Getting image source signatures
Copying blob sha256:88efb86cbcab356a7db0ba9ea09694d394f2e088f3e5119aa331025c1e6cb7fa
[...]

$ docker pull registry.k8s.io/coredns:v1.8.6
Trying to pull registry.k8s.io/coredns:v1.8.6...
Error: initializing source docker://registry.k8s.io/coredns:v1.8.6: reading manifest v1.8.6 in registry.k8s.io/coredns: manifest unknown: Failed to fetch "v1.8.6"

$ docker pull registry.k8s.io/coredns/coredns:v1.8.6
Trying to pull registry.k8s.io/coredns/coredns:v1.8.6...
Getting image source signatures
Copying blob sha256:88efb86cbcab356a7db0ba9ea09694d394f2e088f3e5119aa331025c1e6cb7fa
[...]
```
So `k8s.gcr.io` and `registry.k8s.io` seem to behave the same, and therefore require the same workaround. Before 50bea1d, this exact code location did the rewrite for `k8s.gcr.io`. Or did I get something wrong?
this code:
50bea1d#diff-54f02c1435b3faae5f85c0507e8ccb2d3986f080287d9de55473b6587e126fa9R51
is about appending `/coredns` to `registry.k8s.io`, so that the location ends up as `registry.k8s.io/coredns/coredns:foo`, which is the valid one.
for newer versions of kubeadm we should not track the old registry k8s.gcr.io.
at that point kubeadm considers `k8s.gcr.io` a "custom" registry and the coredns image location is `k8s.gcr.io/coredns:foo`
this is by design and was done for backwards compatibility once coredns maintainers introduced the /coredns subpath.
this should not be done as it will trip kubeadm upgrade. you should remove the explicit setting and let kubeadm swap to registry.k8s.io. this applies to kubeadm standalone, not sure about capi.
Yes, I had dug through all the migration code related to this.

That would mean manual intervention on every managed cluster that has the explicit setting. If Kubernetes allows smooth, automatic upgrades of existing Kubernetes clusters with old versions, users will be more likely to upgrade. Then, with a later major version, Kubernetes could automatically replace any image pulls for the deprecated registry.
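The automatic-replacement idea floated above could look roughly like the following. This is a hypothetical sketch of the suggestion, not existing kubeadm code; `migrateImageRepository` and its behavior are assumptions for illustration:

```go
package main

import "fmt"

// migrateImageRepository sketches the proposal: on upgrade, silently rewrite
// the deprecated default registry to the new one, leaving genuinely custom
// repositories untouched.
func migrateImageRepository(repo string) string {
	if repo == "k8s.gcr.io" {
		return "registry.k8s.io"
	}
	return repo
}

func main() {
	fmt.Println(migrateImageRepository("k8s.gcr.io"))      // rewritten to the new registry
	fmt.Println(migrateImageRepository("example.com/k8s")) // custom repo kept as-is
}
```

The trade-off discussed below is whether such a rewrite belongs in kubeadm at all, since tracking the old registry keeps it alive in code.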
….15 tries to pull the official image incorrectly While kubeadm is buggy (kubernetes/kubernetes#114978) and tries to download the CoreDNS image despite us skipping that addon (kubernetes/kubeadm#2603), we try to use the new official repository already. This fixes `kubeadm join` for new Kubernetes versions and therefore avoids stuck node upgrades.
…s to pull the official image incorrectly While kubeadm is buggy (kubernetes/kubernetes#114978) and tries to download the CoreDNS image despite us skipping that addon (kubernetes/kubeadm#2603), we try to use the new official repository already. This fixes `kubeadm join` for new Kubernetes versions and therefore avoids stuck node upgrades. CAPI propagates this value before attempting the node upgrade.
that is actually a complicated problem and all solutions had trade-offs, IIRC.
kubeadm (no capi) users had the following problem:
we want users to stop using k8s.gcr.io ASAP, which means that the transition had to be immediate and not track the old registry at all. k8s.gcr.io is currently still generating traffic that costs a lot of $$.
Fully agree with all the explanations! I didn't want to immediately suggest a more drastic change as a first-time contributor. Then would it make sense to enforce the migration by replacing the deprecated registry?
this branch now tracks .27. i am generally not in favour of keeping track of the old registry in kubeadm code for .27 or backporting more changes to .26. of course, the other reviewers might have a different opinion.
…s to pull the official image incorrectly (#197) While kubeadm is buggy (kubernetes/kubernetes#114978) and tries to download the CoreDNS image despite us skipping that addon (kubernetes/kubeadm#2603), we try to use the new official repository already. This fixes `kubeadm join` for new Kubernetes versions and therefore avoids stuck node upgrades. CAPI propagates this value before attempting the node upgrade.
kubernetes/kubeadm#2714 (comment)
PR needs rebase. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs. This bot triages PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
What type of PR is this?
/kind bug
/kind regression
What this PR does / why we need it:
This was accidentally replaced in 50bea1d (PR #109938). By re-adding it, `kubeadm join` will no longer fail with `failed to pull image k8s.gcr.io/coredns:v1.8.6` for clusters that still have `k8s.gcr.io` explicitly configured as image repository, such as CAPI-managed clusters.

Example failure that I'm facing:

- `k8s.gcr.io` gets stored in the cluster (`kubectl get configmap -n kube-system kubeadm-config -o yaml | grep imageRepository:` prints the `k8s.gcr.io` value)
- An upgrade of a node to a newer Kubernetes version, which uses `registry.k8s.io`, is performed by CAPI. This means starting a new cloud server as node, and running `kubeadm join` on it.
- `kubeadm join` picks up `ClusterConfiguration.imageRepository=k8s.gcr.io` and then fails with `failed to pull image k8s.gcr.io/coredns:v1.8.6` because the renaming logic for official images is gone.

Which issue(s) this PR fixes:
Did not create an issue
Context:
Special notes for your reviewer:
Maybe @dims who authored the related migration wants to have a look 😉
Does this PR introduce a user-facing change?
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.: