kind, vgpu: Bump vgpu kind to k8s-1.25 #979
Conversation
Skipping CI for Draft Pull Request.
/test check-up-kind-1.23-sriov — let's see that it didn't affect SR-IOV (since a common file was changed). Note: I didn't try just bumping to 1.25 while keeping the CPU manager configured but unused.
/test check-up-kind-1.23-sriov
It breaks the SR-IOV tests that run on kubevirt.
Do not configure the CPU manager for vgpu, because kind 1.24+ has this CPU manager bug: kubernetes-sigs/kind#2999. Since we don't use the CPU manager on the vgpu lane, we can bump to k8s-1.25 and remove the CPU manager configuration. Rename the lane. Signed-off-by: Or Shoval <[email protected]>
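A minimal sketch of what the vgpu lane's kind cluster config might look like after this change: a k8s-1.25 node image with no CPU manager kubelet settings (the image tag and overall layout here are assumptions for illustration, not the actual kubevirtci manifest):

```yaml
# Hypothetical kind config for the vgpu lane after the bump.
# The cpu-manager-policy kubelet patch was removed because of
# kubernetes-sigs/kind#2999 on k8s-1.24+.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  image: kindest/node:v1.25.3   # assumed tag; the lane pins its own image
- role: worker
  image: kindest/node:v1.25.3
```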
The functions can add extra mounts / CPU manager settings to non-worker nodes, depending on where they are called: if a function is called before the worker snippet in the manifest, it configures the control-plane node; otherwise, the worker node. Rename it to reflect this. Signed-off-by: Or Shoval <[email protected]>
@vladikr @brianmcarey @dhiller @xpivarc this allows bumping the vgpu lane to k8s-1.25
@oshoval: The following test failed, say
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
https://prow.ci.kubevirt.io/view/gs/kubevirt-prow/pr-logs/pull/kubevirt_kubevirtci/979/check-up-kind-1.23-vgpu/1633477265524264960
/lgtm
/lgtm
Please note that the issues with kind 1.24+ could still occur without the CPU manager, see kubernetes-sigs/kind#2999 (comment). On the SR-IOV side, it turned out to be flaky: tests were failing from time to time due to missing permissions on /dev/null.
Interesting
I ran it at least 4 times already and it was stable; I believe that as long as we don't touch /dev/null we are fine. Note that the kind SR-IOV lane was not affected by this PR; it still has the CPU manager and k8s-1.23.
/approve
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: dhiller. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
Btw, the vgpu lane on kubevirtci is optional, which is why this was merged even though the old lane failed. Thanks
- [3e52bb0 kind, vgpu: Bump vgpu kind to k8s-1.25](kubevirt/kubevirtci#979)
- [7e486e5 k3d: Introduce k3d SR-IOV provider](kubevirt/kubevirtci#972)
- [42c3f70 Fix some typos](kubevirt/kubevirtci#971)
- [e37ca14 Remove the centos8 based k8s-1.26 provider](kubevirt/kubevirtci#969)
- [46a9824 Run bazelisk run //robots/cmd/kubevirtci-bumper:kubevirtci-bumper -- -ensure-last-three-minor-of v1 --k8s-provider-dir /home/prow/go/src/github.com/kubevirt/project-infra/../kubevirtci/cluster-provision/k8s](kubevirt/kubevirtci#974)

```release-note
NONE
```

Signed-off-by: kubevirt-bot <[email protected]>
Since we don't use the CPU manager on the vgpu lane, we can bump to k8s-1.25
and remove the CPU manager configuration due to
kubernetes-sigs/kind#2999 (affects k8s-1.24+ with the CPU manager).
Rename the vgpu lane to reflect the k8s version bump.
Kubevirtci infra: kubevirt/project-infra#2653
Kubevirt infra: kubevirt/project-infra#2654