Migrate DNS manifests to kubernetes.core.k8s #10701
Conversation
Hi @VannTen. Thanks for your PR. I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Force-pushed from 0191d66 to 27cd111
Force-pushed from 9cb26f4 to 463d321
/ok-to-test
Does this PR need any help from outside contributors?
I'm in the final steps of some needed refactoring to make this easy (converting to kubernetes.core.k8s is not hard; the problem is that the module requires installing the Python k8s client on the hosts where the tasks are executed, and the infrastructure to make that happen is lacking in kubespray).
Those steps are basically:
- refactor the package installation steps in kubernetes/preinstall (structure it more like the download role, where individual entries are included based on groups) -> this is #11131, which should be nearly ready.
- do the same (reusing the same structure and queries) for installing Python wheels in a kubespray venv, and use that (a sketch follows this list).
- adapt this PR to use this.
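As a rough sketch of that second step (the variable names, path, and group used here are illustrative assumptions, not the final implementation), installing the Python kubernetes client into a kubespray-managed virtualenv on the control plane could look like this:

```yaml
# Sketch only: create a virtualenv on the control plane hosts and install the
# Python kubernetes client into it, so kubernetes.core.k8s tasks can run there.
- name: Install Python kubernetes client in a kubespray virtualenv
  ansible.builtin.pip:
    name: kubernetes                                      # client library needed by kubernetes.core.k8s
    virtualenv: "{{ kubespray_virtualenvs_base }}/k8s"    # hypothetical base path variable
    virtualenv_command: "{{ ansible_python.executable }} -m venv"
  when: inventory_hostname in groups['kube_control_plane']
```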
Force-pushed from 463d321 to ae6c9d0
Some of the coredns templates depend on variables which are defined at the `template` task level. The consequence is that using the template in another way (in particular, we want to use the kubernetes.core.k8s template list feature, see the following commits) is difficult. Loop inside the template rather than doing a separate task. This makes the template more self-contained and has the added benefit of deduplicating code.
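For illustration only (the variable and file names are made up for this example, not the actual kubespray ones), moving the loop inside the Jinja template makes it self-contained:

```yaml
{# coredns-svc.yml.j2 (illustrative): the template iterates over its instances itself, #}
{# so it can be rendered without a task-level loop or per-item variables. #}
{% for dns in coredns_instances %}
---
apiVersion: v1
kind: Service
metadata:
  name: {{ dns.name }}
  namespace: kube-system
spec:
  clusterIP: {{ dns.cluster_ip }}
  selector:
    k8s-app: {{ dns.name }}
{% endfor %}
```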
Put intermediate template vars in vars/ rather than in facts.
Kubectl client-side apply breaks coredns on upgrade because the old and the new versions are not merged correctly (see kubernetes/kubernetes#39188 (comment)). TL;DR: the merge key for the ports array is only the port number, not port+protocol. Server-side apply solves this issue, but our custom kube module is not server-side-apply ready. While we could improve our custom kube module, it will be less of a maintenance burden going forward to progressively drop it and switch to kubernetes.core.k8s, which has more features and is maintained upstream. Do that now for the coredns manifests. Add the Python k8s client on the control plane (no need for it elsewhere), and python-venv on distributions which need it to create a virtualenv.
CentOS 7 ships with python2 and does not handle well the infrastructure introduced in 625ef33 (Add install infra for python_pkgs, 2024-05-02). Include a workaround for this. It is kept as a separate commit so it is easily revertible, as CentOS 7 EOL is 30/06/2024 (i.e., in less than two months).
Besides the problem with client-side apply explained in the previous commit, we reduce Ansible overhead by using the template feature of kubernetes.core.k8s, which lets us supply a list of templates directly, applied all at once. This considerably reduces the per-task scheduling overhead.
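A minimal sketch of such a task, assuming illustrative template file names and field manager value (it relies on the `template`, `apply`, and `server_side_apply` options of kubernetes.core.k8s):

```yaml
# Sketch: render several manifests and apply them in one module call, using
# server-side apply so the ports array is merged correctly on upgrades.
- name: Apply CoreDNS manifests
  kubernetes.core.k8s:
    state: present
    apply: true
    server_side_apply:
      field_manager: kubespray        # illustrative field manager name
      force_conflicts: true
    template:
      - coredns-config.yml.j2         # file names are illustrative
      - coredns-deployment.yml.j2
      - coredns-svc.yml.j2
```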
/remove-kind cleanup
Force-pushed from 4cf9c08 to 6913611
So, it looks like python >= 3.8 (or 3.7, I'm not sure) on the host is mostly required for kubernetes.core.k8s (trying with 3.6, I'm getting into an endless battle with pip). @mzaian @floryut @MrFreezeex @yankay thoughts?
Do you want this to be included in the 2.25 release? If the answer is no, I would say yes for sure, since 2.26 will be released after CentOS 7 EOL. If we do want this in 2.25, I guess this could be more nuanced, but IMO it could still be fine to drop CentOS 7, although maybe we want more folks testing this before putting it in a release this close?
PR needs rebase. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
when:
  - to_install | length != 0
ansible.builtin.pip:
  requirements: "{{ kubespray_virtualenvs_base }}/{{ item }}/requirements.txt"
The task pulls and installs Python packages on the fly from the Internet. But how should we design this for an air-gapped environment?
We have put effort into supporting offline installs, and this introduces a new resource to handle. It would be nice if you could provide a solution.
I thought of handling this with extra_args, passing --find-links
with either a server containing the necessary wheels on the local network, or a local folder copied over to the machine.
I think adding the possibility to inject extra_args would be enough to make the first case possible, correct? That + adding instructions in docs/operations/offline-environment.md.
Wdyt?
But first I need to make it work in the common case ^
A wheelhouse also looks like a solution.
The target design should ideally be implementable by only altering the pip extra args, with no other changes (additional preparatory steps to make the resources available in the offline env are OK, in my book).
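Purely as a sketch (the `kubespray_pip_extra_args` variable is hypothetical), that could reduce the air-gapped case to overriding a single variable:

```yaml
# Sketch: pass operator-supplied extra args through to pip. An air-gapped
# environment could set kubespray_pip_extra_args (hypothetical) to something like
# "--no-index --find-links=file:///opt/kubespray-wheelhouse" to use a local wheelhouse.
- name: Install requirements into the kubespray virtualenv
  ansible.builtin.pip:
    requirements: "{{ kubespray_virtualenvs_base }}/{{ item }}/requirements.txt"
    extra_args: "{{ kubespray_pip_extra_args | default('') }}"
```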
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs. This bot triages PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all PRs. This bot triages PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /close
@k8s-triage-robot: Closed this PR. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
What type of PR is this?
/kind cleanup
What this PR does / why we need it:
This converts coredns and nodelocaldns to use kubernetes.core.k8s.
Along the way, it includes some template modifications and some Ansible dark magic in variables so that things work the same way with --tags/--skip-tags.
Server-side apply with force-conflicts seems appropriate for kubespray, as that is what is recommended for controllers. Happy to hear arguments on that, though.
Which issue(s) this PR fixes:
Part of #10696
Should fix #7113
Fix #10860
This also serves as a concrete example of what I'm thinking of for the above issue.
Special notes for your reviewer:
Depends on #11158
Does this PR introduce a user-facing change?: