
upgrade plan with a version has stopped working on 1.19.0 #2274

Closed
NeilW opened this issue Sep 1, 2020 · 3 comments · Fixed by kubernetes/kubernetes#94421
Labels
area/upgrades kind/regression Categorizes issue or PR as related to a regression from a prior release. priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release.
Comments

NeilW commented Sep 1, 2020

BUG REPORT

Versions

kubeadm version (use kubeadm version):
1.19.0

Environment:

  • Kubernetes version (use kubectl version):
    1.18.8
  • Cloud provider or hardware configuration:
    Brightbox
  • OS (e.g. from /etc/os-release):
    Ubuntu 20.04.01
  • Kernel (e.g. uname -a):
    Linux srv-ms9ld 5.4.0-42-generic #46-Ubuntu SMP Fri Jul 10 00:24:02 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
  • Others:

What happened?

upgrade plan fails when a version is specified:

$ sudo kubeadm upgrade plan 1.19.0
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.18.8
[upgrade/versions] kubeadm version: v1.19.0
[upgrade/versions] Latest stable version: 1.19.0
[upgrade/versions] Latest version in the v1.18 series: 1.19.0

[upgrade/versions] FATAL: configmaps "kubelet-config-1.19" not found
To see the stack trace of this error execute with --v=5 or higher

What you expected to happen?

upgrade plan should work with a specified version as it does without a version.

How to reproduce it (as minimally and precisely as possible)?

As above.

Anything else we need to know?

upgrade plan without a version picks up the latest version in the 1.18 series correctly and generates a plan as expected.


neolit123 commented Sep 1, 2020

/assign @rosti

this seems to happen around:

I0901 18:29:52.632452 10068 plan.go:88] [upgrade/plan] analysing component config version states
[upgrade/versions] FATAL: configmaps "kubelet-config-1.19" not found

i think the problem is that it should just fetch the existing CM version instead of the one passed to plan?
this seems like a regression around the component config changes.


EDIT:

ok, what seems to happen here is that the user provided argument overrides the ClusterConfiguration from the cluster:
https://github.com/kubernetes/kubernetes/blob/a463b25c9d170b14b5c3183c8604ebb906a97509/cmd/kubeadm/app/cmd/upgrade/common.go#L188

this version reaches all the way to fetching the kubelet-x.yy config map:
https://github.com/kubernetes/kubernetes/blob/f2e3154a140dc192760a5cf8c01f8044e1aa867b/cmd/kubeadm/app/componentconfigs/kubelet.go#L77

i think what should be done instead is to pass the unmodified ClusterConfiguration (from the cluster), or just the k8s version, to the component config handlers. we want to fetch the existing configmaps as they are, so that we print what is "current" in the "plan" output.
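To illustrate the failure mode: kubeadm derives the kubelet ConfigMap name from a Kubernetes version, so fetching with the user-supplied target version instead of the cluster's current version asks for a ConfigMap that does not exist yet. This is a minimal sketch; `kubeletConfigMapName` is a hypothetical simplification of the naming logic in cmd/kubeadm/app/componentconfigs/kubelet.go, not the real function.

```go
package main

import "fmt"

// kubeletConfigMapName sketches how kubeadm builds the per-version
// kubelet ConfigMap name (hypothetical simplification of the real code).
func kubeletConfigMapName(version string) string {
	return fmt.Sprintf("kubelet-config-%s", version)
}

func main() {
	clusterVersion := "1.18" // version currently running in the cluster
	targetVersion := "1.19"  // version passed to `kubeadm upgrade plan`

	// Buggy path: the argument overrides ClusterConfiguration.KubernetesVersion,
	// so plan tries to fetch a ConfigMap that is only created during upgrade.
	fmt.Println(kubeletConfigMapName(targetVersion)) // kubelet-config-1.19 (missing)

	// Suggested path: fetch using the cluster's current version,
	// so plan reports what is actually deployed.
	fmt.Println(kubeletConfigMapName(clusterVersion)) // kubelet-config-1.18 (exists)
}
```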

@neolit123 neolit123 added kind/regression Categorizes issue or PR as related to a regression from a prior release. priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. area/upgrades labels Sep 1, 2020
@neolit123 neolit123 added this to the v1.20 milestone Sep 1, 2020

rosti commented Sep 2, 2020

@neolit123 the problem here is the inconsistency of argument handling in the case of upgrade plan. We have a couple of different cases with respect to what ClusterConfiguration is returned by enforceRequirements:

  1. If there is no argument, the returned ClusterConfiguration.KubernetesVersion is the old (currently installed) one
  2. If there is an overwrite, the returned ClusterConfiguration.KubernetesVersion is the new (argument supplied) one

This bug was not exposed until now, because nobody was looking at ClusterConfiguration.KubernetesVersion. Luckily, the requirement to always supply a version to upgrade apply spares it from the same fate, and there ClusterConfiguration.KubernetesVersion always contains the version to upgrade to.
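The two cases above can be sketched as follows. This is a hedged illustration of the inconsistency, not the real enforceRequirements from cmd/kubeadm/app/cmd/upgrade/common.go; `resolveKubernetesVersion` is a hypothetical helper.

```go
package main

import "fmt"

// resolveKubernetesVersion sketches the inconsistent argument handling:
// the version returned in ClusterConfiguration.KubernetesVersion depends
// on whether `upgrade plan` was given an argument.
func resolveKubernetesVersion(clusterVersion, argVersion string) string {
	if argVersion != "" {
		// Case 2: an explicit argument overwrites the value read
		// from the kubeadm-config ConfigMap.
		return argVersion
	}
	// Case 1: no argument, so the currently installed version is kept.
	return clusterVersion
}

func main() {
	fmt.Println(resolveKubernetesVersion("v1.18.8", ""))        // v1.18.8 (old, installed)
	fmt.Println(resolveKubernetesVersion("v1.18.8", "v1.19.0")) // v1.19.0 (new, from argument)
}
```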

I'll try to make a more "cherry-pick-able" PR and then follow that with a more elaborate thing for 1.20.


neolit123 commented Sep 2, 2020 via email
