
Release 2.17-rc #58

Open · wants to merge 945 commits into base: master
Conversation

@bodanc commented Mar 29, 2022

What type of PR is this?

Uncomment only one /kind <> line, press Enter to put it on a new line, and remove the leading whitespace from that line:

/kind api-change
/kind bug
/kind cleanup
/kind design
/kind documentation
/kind failing-test
/kind feature
/kind flake

What this PR does / why we need it:

Which issue(s) this PR fixes:

Fixes #

Special notes for your reviewer:

Does this PR introduce a user-facing change?:


hari-hud and others added 30 commits April 27, 2021 15:41
* calico: upgrade from v3.17.3 to v3.17.4

* calico: upgrade from v3.18.1 to v3.18.2
… version (kubernetes-sigs#7562)

* crio: add supported versions 1.20 and 1.21 and align default with k8s version

* cri-o: drop versions 1.17 and 1.18 from version matrix

* update note on cri-o version alignment
* rename ansible groups to use _ instead of -

k8s-cluster -> k8s_cluster
k8s-node -> k8s_node
calico-rr -> calico_rr
no-floating -> no_floating

Note: kube-node,k8s-cluster groups in upgrade CI
      need clean-up after v2.16 is tagged

* ensure old groups are mapped to the new ones
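For illustration, a minimal Ansible YAML inventory sketch of the renaming described above; the alias mechanism and group layout are assumptions, not taken from the PR:

```yaml
# Renamed groups use underscores; the old dash-named group is kept as an
# alias whose only child is the new group, so inventories that still
# reference the old name keep resolving during the transition.
all:
  children:
    k8s_cluster:
      children:
        k8s_node:
        calico_rr:
    k8s-cluster:          # legacy alias kept so old inventories still resolve
      children:
        k8s_cluster:
```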
* Add image_arch variable when download flannel image

* Fix flannel image tag typo with image arch
… with v3.18.x (see projectcalico/calico#4557).  Specifically, granted watch to custom resources blockaffinities, ipamblocks & ipamhandles (kubernetes-sigs#7575)
…ubernetes-sigs#7570)

follow new naming conventions for gcr's coredns image.
starting from 1.21, kubeadm assumes it to be `coredns/coredns`:
this causes the kubeadm deployment to be unable to pull the image, because `v`
was also added to the image tag, until the role `kubernetes-apps` overrides
it with the old name, which is only compatible with <=1.7.

Backward compatibility with kubeadm <=1.20 is maintained by checking the
kubernetes version and falling back to the old names (`coredns:1.xx`) when
the version is less than 1.21.
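For context, the naming difference amounts to roughly the sketch below; the variable names are illustrative assumptions, not necessarily the ones used by the role:

```yaml
# kubeadm <  1.21 expects: k8s.gcr.io/coredns:1.7.0
# kubeadm >= 1.21 expects: k8s.gcr.io/coredns/coredns:v1.8.0
coredns_image_repo: >-
  {{ 'k8s.gcr.io/coredns/coredns' if kube_version is version('v1.21.0', '>=')
     else 'k8s.gcr.io/coredns' }}
coredns_image_tag: "{{ 'v1.8.0' if kube_version is version('v1.21.0', '>=') else '1.7.0' }}"
```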
* Upgrade cilium roles

* Del old test result

* Add hubble ui examples

* Refactor hubble metrics

* Markdown fix pipeline errors

* yamllint check and fix

* refactor install from kubernetes-sigs#7520

* Docs syntax change (fix)

* Cilium set default 1.8.9

* Update cilium version in Readme
* Add krew support

* Add reset for krew

* Update install krew(local)

* ansible lint

* yamllint

* fix krew default vars

* fix kubectl_localhost mode

* replace include

* fix e206
* Remove the duplicate task in etcd role

* Remove inessential delegate_to
…/net.d/calico-kubeconfig if need be. (kubernetes-sigs#7586)

Since K8s 1.21, the BoundServiceAccountTokenVolume feature gate is in beta and therefore enabled by default (anyone who follows the CSI guidelines has enabled AllAlpha and hit this issue before 1.21).
With this feature, SA tokens are regenerated every hour.
As a consequence for the Calico CNI, the token in /etc/cni/net.d/calico-kubeconfig, copied from /var/run/secrets/kubernetes.io/serviceaccount by the install-cni initContainer, expires after one hour and any pod creation then fails as unauthorized.
Calico pods need to be restarted so that /etc/cni/net.d/calico-kubeconfig is updated with the new SA token.
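A minimal sketch of that kind of remediation, assuming kubectl is available on the first control-plane node (the task wording and group name are illustrative, not the exact change):

```yaml
# Restart calico-node so the install-cni initContainer re-copies a fresh
# ServiceAccount token into /etc/cni/net.d/calico-kubeconfig.
- name: Restart calico-node to refresh the CNI kubeconfig token
  command: kubectl -n kube-system rollout restart daemonset/calico-node
  delegate_to: "{{ groups['kube_control_plane'] | first }}"
  run_once: true
```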
…sigs#7593)

* add initial MetalLB docs

* metallb allow disabling the deployment of the metallb speaker

* calico>=3.18 allow using calico to advertise service loadbalancer IPs

* Document the use of MetalLB and Calico

* clean MetalLB docs
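The combination documented above looks roughly like this group_vars sketch (variable names and address ranges are illustrative assumptions):

```yaml
# Deploy MetalLB for address allocation, but skip its speaker and let
# Calico BGP advertise the service LoadBalancer IPs instead.
metallb_enabled: true
metallb_speaker_enabled: false
metallb_ip_range:
  - "10.5.0.50-10.5.0.99"
calico_advertise_service_loadbalancer_ips:
  - "10.5.0.0/24"
```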
VladMasarik and others added 30 commits September 10, 2021 12:21
…bernetes-sigs#7583)

* Fix: adding new ips with inventory builder (kubernetes-sigs#7577)

* moved config loading logic to after the check of whether the config
  should be loaded, and added a check for whether the config should be
  loaded

* added a check for removing nodes from the config:
  if the user wants to remove a node, we need to load the config

* Fix tox errors
…netes-sigs#7964)

On Debian 11, `ipset` only recommends `iptables`, so on systems where apt is configured with `APT::Install-Recommends "0";` iptables will not be installed automatically.
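A minimal sketch of the workaround, assuming an explicit install task is acceptable (illustrative only, not the exact change):

```yaml
# ipset only "Recommends" iptables on Debian 11, so install it explicitly
# for hosts configured with APT::Install-Recommends "0";
- name: Ensure iptables is installed on Debian 11
  apt:
    name: iptables
    state: present
  when:
    - ansible_os_family == "Debian"
    - ansible_distribution_major_version == "11"
```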
…sigs#7966)

Modify connection_strings_etcd to only return etcd nodes, not master nodes, since including master nodes results in duplicate hosts in the generated Ansible inventory and is unnecessary.
… during control plane upgrade (kubernetes-sigs#7976)

* Add option to kubeadm upgrade command to control certificates renewal during control plane upgrade

* Remove trailing whitespace
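Roughly, the new option maps onto kubeadm's --certificate-renewal flag; the variable name below is an assumption for illustration, not necessarily the one added:

```yaml
# When false, pass --certificate-renewal=false to "kubeadm upgrade apply"
# so control-plane certificates are left untouched during the upgrade.
kubeadm_upgrade_auto_cert_renewal: false
```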
…8014) (kubernetes-sigs#8025)

"allowPrivilegeEscalation: false" blocks deploying metrics-server
on CentOS7. In addition, the original metrics-server manifest doesn't
contain it as [1]. This removes it.

[1]: https://github.com/kubernetes-sigs/metrics-server/blob/527679e5e8a103919c935d0575c20741796bc25d/manifests/base/deployment.yaml
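For reference, the affected container securityContext ends up looking roughly like the fragment below (the removed line is shown commented out; the surrounding fields follow the upstream manifest linked above):

```yaml
# metrics-server container securityContext after the change
securityContext:
  readOnlyRootFilesystem: true
  runAsNonRoot: true
  runAsUser: 1000
  # allowPrivilegeEscalation: false   # removed: blocked deployment on CentOS 7
```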
…sigs#8031)

The addon-resizer container can reduce the cpu and memory resource limits
of the metrics-server container in the pod, and that caused the pod to be
OOMKilled.
In addition, the original metrics-server manifest doesn't contain the
addon-resizer container (see [1]).
So this adds a metrics_server_resizer option to control deployment of the
addon-resizer container; the default value is false to keep it stable for
most environments.

This is a cherry-pick of 8d3961e

[1]: https://github.com/kubernetes-sigs/metrics-server/blob/527679e5e8a103919c935d0575c20741796bc25d/manifests/base/deployment.yaml
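The resulting toggle is a one-line group_vars setting, sketched below using the name given in the commit message:

```yaml
# Disabled by default; set to true to deploy the addon-resizer sidecar
# alongside metrics-server.
metrics_server_resizer: false
```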
)

This is a cherry-pick of 2211504

Signed-off-by: Wang Zhen <[email protected]>

Co-authored-by: Wang Zhen <[email protected]>
…igs#8035)

The typha prometheus settings were in the `volumeMounts` section of the
spec and not in the `env` section. This was causing the deployment to
fail because the API server validated them as VolumeMount entries.

```
failed: [controller-001.a2.da.dev.logdna.net] (item=calico-typha.yml) => {"ansible_loop_var": "item", "changed": false, "item": {"ansible_loop_var": "item", "changed": true, "checksum": "598ac79530749e8e2110793b53fc49ac208e7130", "dest": "/etc/kubernetes/calico-typha.yml", "diff": [], "failed": false, "gid": 0, "group": "root", "invocation": {"module_args": {"_original_basename": "calico-typha.yml.j2", "attributes": null, "backup": false, "checksum": "598ac79530749e8e2110793b53fc49ac208e7130", "content": null, "delimiter": null, "dest": "/etc/kubernetes/calico-typha.yml", "directory_mode": null, "follow": false, "force": true, "group": null, "local_follow": null, "mode": null, "owner": null, "regexp": null, "remote_src": null, "selevel": null, "serole": null, "setype": null, "seuser": null, "src": "/home/core/.ansible/tmp/ansible-tmp-1632349768.56-75434-32452975679246/source", "unsafe_writes": null, "validate": null}}, "item": {"file": "calico-typha.yml", "name": "calico", "type": "typha"}, "md5sum": "53c00ac7f562cf9ecbbfd27899ea066d", "mode": "0644", "owner": "root", "size": 5378, "src": "/home/core/.ansible/tmp/ansible-tmp-1632349768.56-75434-32452975679246/source", "state": "file", "uid": 0}, "msg": "error running kubectl (/opt/bin/kubectl --namespace=kube-system apply --force --filename=/etc/kubernetes/calico-typha.yml) command (rc=1), out='service/calico-typha unchanged\n', err='error: error validating \"/etc/kubernetes/calico-typha.yml\": error validating data: [ValidationError(Deployment.spec.template.spec.containers[0].volumeMounts[2]): unknown field \"value\" in io.k8s.api.core.v1.VolumeMount, ValidationError(Deployment.spec.template.spec.containers[0].volumeMounts[2]): missing required field \"mountPath\" in io.k8s.api.core.v1.VolumeMount, ValidationError(Deployment.spec.template.spec.containers[0].volumeMounts[3]): unknown field \"value\" in io.k8s.api.core.v1.VolumeMount, ValidationError(Deployment.spec.template.spec.containers[0].volumeMounts[3]): missing required field \"mountPath\" in io.k8s.api.core.v1.VolumeMount]; if you choose to ignore these errors, turn validation off with --validate=false\n'"}
```

Co-authored-by: Eric Lake <[email protected]>
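For clarity, the corrected placement looks roughly like the fragment below; the image tag, port, and mount are illustrative, not copied from the template:

```yaml
# calico-typha Deployment fragment: Prometheus settings belong under env,
# not volumeMounts, otherwise the API server rejects them as invalid
# VolumeMount entries.
containers:
  - name: calico-typha
    image: calico/typha:v3.19.2
    env:
      - name: TYPHA_PROMETHEUSMETRICSENABLED
        value: "true"
      - name: TYPHA_PROMETHEUSMETRICSPORT
        value: "9093"
    volumeMounts:
      - name: typha-certs
        mountPath: /typha-certs
        readOnly: true
```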
…-sigs#8037)

The path of the kubeconfig should be configurable, and its default value
is /etc/kubernetes/admin.conf. Most references to the file are configurable,
but some were not. This makes those configurable.
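A minimal sketch of the idea; the second variable name is hypothetical, used only for illustration:

```yaml
# Reference the admin kubeconfig through variables instead of hard-coding
# /etc/kubernetes/admin.conf in individual tasks.
kube_config_dir: /etc/kubernetes
admin_kubeconfig: "{{ kube_config_dir }}/admin.conf"
```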
…-sigs#7717) (kubernetes-sigs#8040)

* check if 'plugins' key exists in calico_cni_config object

* fix whitespace linting error

* fixed when list indentation

Co-authored-by: David Louks <[email protected]>
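The guard amounts to an extra `when` condition, sketched here in an illustrative task (the task and fact names are assumptions):

```yaml
# Only read plugin settings when the parsed CNI config actually has a
# "plugins" list, to avoid failing on minimal or freshly written configs.
- name: Read nodename from the existing Calico CNI config
  set_fact:
    calico_cni_nodename: "{{ calico_cni_config.plugins[0].nodename | default('') }}"
  when:
    - calico_cni_config is defined
    - "'plugins' in calico_cni_config"
```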
…-sigs#8039)

If using a proxy, it is necessary to configure it before running the
"subscription-manager status" command.
This adds that step.
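One way to express that step is sketched below; the variable names are assumptions and the actual change may configure the proxy differently:

```yaml
# Point subscription-manager at the proxy before querying subscription
# status, so the check can reach the Red Hat entitlement servers.
- name: Configure RHSM proxy
  command: >-
    subscription-manager config
    --server.proxy_hostname={{ rh_proxy_host }}
    --server.proxy_port={{ rh_proxy_port }}
  when: rh_proxy_host is defined
```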
…ubernetes-sigs#8042)

* Ensure apparmor is installed (kubernetes-sigs#8011)

Kubespray deployment failed when using the containerd backend on nodes where apparmor was not installed or had previously been removed. This PR ensures apparmor is installed by adding it to the required_pkgs var.

(cherry picked from commit 4bace24)

* Ensure apparmor is installed (kubernetes-sigs#8036)

Kubespray deployment failed when using the containerd backend on nodes where apparmor was not installed or had previously been removed. This PR ensures apparmor is installed by adding it to the required_pkgs var.

(cherry picked from commit af04906)

Co-authored-by: rtsp <[email protected]>
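Conceptually the fix is a one-item addition to the per-distro package list, e.g. (abbreviated and illustrative, not the full list):

```yaml
# Debian/Ubuntu required packages (abbreviated): apparmor added so
# containerd hosts always have it installed.
required_pkgs:
  - python3-apt
  - apparmor
  - apt-transport-https
```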
* Fix-CI: Python was upgraded in CI to 3.10 and pathlib is now included in the Python standard library, making this dependency break the CI (kubernetes-sigs#8153)

* Upgrade ruamel.yaml.clib to work with Python 3.10 (kubernetes-sigs#8034)

ruamel.yaml.clib did not build with the upcoming Python 3.10.

Cf. https://sourceforge.net/p/ruamel-yaml-clib/tickets/5/

ruamel.yaml.clib==0.2.4 fixes the issue. It does not work
with Python 3.7 (cf https://sourceforge.net/p/ruamel-yaml-clib/tickets/6/)
but currently Kubespray requires Python >= 3.9.

Co-authored-by: Cristian Calin <[email protected]>
Co-authored-by: Olivier Lemasle <[email protected]>