[2.12] Add 1.16.14 and 1.16.15 support #6583
Conversation
* Use k8s.gcr.io for kubernetes related images
* Use k8s.gcr.io in inventory sample
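For context, the registry switch in these commits boils down to changing the image repository variable in the group vars. A minimal sketch, assuming the variable is still named `kube_image_repo` as in current Kubespray (the exact file and previous default may differ in the release-2.12 branch):

```yaml
# inventory/sample/group_vars/k8s-cluster/k8s-cluster.yml (illustrative path)
# Pull Kubernetes images from k8s.gcr.io instead of the older
# gcr.io/google-containers location.
kube_image_repo: "k8s.gcr.io"
```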
Hi @linkvt. Thanks for your PR. I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test`. Once the patch is verified, the new status will be reflected by the `ok-to-test` label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/ok-to-test
/hold
@floryut Thanks, I didn't know that! I understand you want to wait for 1.16.15 for the last 2.12.X release, but I think it doesn't hurt to also have 1.16.14 support in the release-2.12 branch.
Well, we can merge it, but I'm not sure the maintainers want to trigger a new tag and everything; that's up to them indeed.
/lgtm
This is a backport PR, therefore we need to merge it first in master, and release-2.13.
1.16.14 hashes are already in master :)
1.16.14 hashes are in master since #6532, but not in release-2.13.
Oh indeed, I was lazy 😄
@linkvt Could you please add 1.16.15 and set it as the default?
@linkvt Are you available? Otherwise we can close this and reopen a PR.
Hi @floryut, sorry for the late reply, I'm unavailable till Sunday. If you think that's too late, feel free to modify this or reopen it. Thanks!
No worries, that can wait for a few days, thank you.
@floryut done, I didn't test it myself but I guess the CI should set up a cluster with v1.16.15.
It does set it up and test it, all good for me, thank you
/assign @Miouge1
/approve
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: linkvt, Miouge1

The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
When will this change result in a new v2.12.X tag?
> When will this change result in a new v2.12.X tag?

@Miouge1
What type of PR is this?
/kind feature
What this PR does / why we need it:
Add hashes for K8s 1.16.14, which fixes a bug where kubeadm sets the livenessProbes to the insecure controller-manager and scheduler ports (10252/10251) instead of the secure ones (10259/10257); a sketch of the probe change is included below.
Also switch to k8s.gcr.io as in the master branch.
Edit: this now also adds the 1.16.15 hashes.
See #6506; this might also be needed for 1.17 and 1.18?
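For context, a minimal sketch of what the probe change looks like in the kubeadm-generated static pod manifest for kube-controller-manager (illustrative values, not taken from this PR; the scheduler fix is analogous with ports 10251/10259):

```yaml
# Before 1.16.14 (buggy): kubeadm points the probe at the insecure HTTP port.
livenessProbe:
  httpGet:
    host: 127.0.0.1
    path: /healthz
    port: 10252    # insecure kube-controller-manager port
    scheme: HTTP

# With 1.16.14+ (fixed): the probe targets the secure HTTPS port.
livenessProbe:
  httpGet:
    host: 127.0.0.1
    path: /healthz
    port: 10257    # secure kube-controller-manager port
    scheme: HTTPS
```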
Which issue(s) this PR fixes:
Only partially solves #6506 for 1.16.x
Fixes #6653
Special notes for your reviewer:
I made the same changes as for the 1.16.13 version in #6477.
I used the `download_hash.py` script from the master branch. Besides that, the 1.16.14 image seems to be available only in k8s.gcr.io: https://console.cloud.google.com/gcr/images/google-containers/GLOBAL/kube-apiserver?tag=v1.16.14&gcrImageListsize=30
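For reference, the hashes produced by `download_hash.py` end up as version-keyed entries in the download role defaults. A minimal, hypothetical sketch with placeholder values, assuming the variable layout matches current master (file paths and variable names may differ in the release-2.12 branch):

```yaml
# roles/download/defaults/main.yml (illustrative path)
kubeadm_checksums:
  amd64:
    v1.16.15: <sha256 printed by download_hash.py>
    v1.16.14: <sha256 printed by download_hash.py>
kubelet_checksums:
  amd64:
    v1.16.15: <sha256 printed by download_hash.py>
    v1.16.14: <sha256 printed by download_hash.py>

# The new default version requested above would be bumped alongside (illustrative):
kube_version: v1.16.15
```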
Will test this in our DEV cluster on the next working day, but the changes should work in theory as they are the same as in the referenced PR for 1.16.13.
Does this PR introduce a user-facing change?: