k3d: Introduce k3d SR-IOV provider #972
Conversation
Added info about bumping moving parts.
Thank you!
It would be nice, as a follow-up, to drop the use of Calico, as we do not really need it.
It is a bit problematic: there is an internal flannel that, even if you want flannel, needs to be disabled first and deployed manually (possibly with some changes), because we are using multus and the internal flannel's settings don't go well with our multus. EDIT: managed to advance; not sure everything is perfect yet, though.
Update:
I do not know why Calico would be better.
Dual stack? Single-stack IPv6? Cons of flannel:
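For reference, a minimal sketch (not code from this PR) of how k3s' built-in flannel could be disabled at cluster creation so a CNI of our choice plus multus can be deployed on top; the cluster name here is a made-up placeholder:

```bash
# Hypothetical sketch: turn off the embedded flannel and the built-in
# network policy controller, so an external CNI (Calico, or a manually
# deployed flannel tuned to work with multus) can be installed afterwards.
k3d cluster create sriov \
  --k3s-arg "--flannel-backend=none@server:*" \
  --k3s-arg "--disable-network-policy@server:*"
```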
First pass; in general, can we move the SR-IOV things to cluster-up/cluster/k3d/sriov.sh?
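To illustrate the suggestion, a rough sketch of what grouping the SR-IOV handling into `cluster-up/cluster/k3d/sriov.sh` might look like; the function names and device paths below are assumptions, not code from this PR:

```bash
#!/usr/bin/env bash
# Hypothetical cluster-up/cluster/k3d/sriov.sh sketch: keep all SR-IOV
# handling (discovering PFs, handing them to node containers, releasing
# them on cluster-down) in one place. All names are illustrative only.
set -e

# List PF network devices that expose SR-IOV capability via sysfs.
sriov_find_pfs() {
    for dev in /sys/class/net/*; do
        [ -e "${dev}/device/sriov_totalvfs" ] && basename "${dev}"
    done
}

# Move a PF into a node container's network namespace.
sriov_move_pf_to_node() {
    local node="$1" pf="$2" pid
    pid=$(docker inspect -f '{{.State.Pid}}' "${node}")
    ip link set "${pf}" netns "${pid}"
}

# Gracefully release a PF back to the host (the netns of PID 1) before the
# node is torn down, so the NIC does not disappear for a few minutes
# between cluster-down and cluster-up.
sriov_release_pf_from_node() {
    local node="$1" pf="$2" pid
    pid=$(docker inspect -f '{{.State.Pid}}' "${node}")
    nsenter -t "${pid}" -n ip link set "${pf}" netns 1
}
```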
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: qinqon. The full list of commands accepted by this bot can be found here. The pull request process is described here.
- [7e486e5 k3d: Introduce k3d SR-IOV provider](kubevirt/kubevirtci#972)
- [42c3f70 Fix some typos](kubevirt/kubevirtci#971)
- [e37ca14 Remove the centos8 based k8s-1.26 provider](kubevirt/kubevirtci#969)
- [46a9824 Run bazelisk run //robots/cmd/kubevirtci-bumper:kubevirtci-bumper -- -ensure-last-three-minor-of v1 --k8s-provider-dir /home/prow/go/src/github.com/kubevirt/project-infra/../kubevirtci/cluster-provision/k8s](kubevirt/kubevirtci#974)

```release-note
NONE
```

Signed-off-by: kubevirt-bot <[email protected]>
- [3e52bb0 kind, vgpu: Bump vgpu kind to k8s-1.25](kubevirt/kubevirtci#979)
- [7e486e5 k3d: Introduce k3d SR-IOV provider](kubevirt/kubevirtci#972)
- [42c3f70 Fix some typos](kubevirt/kubevirtci#971)
- [e37ca14 Remove the centos8 based k8s-1.26 provider](kubevirt/kubevirtci#969)
- [46a9824 Run bazelisk run //robots/cmd/kubevirtci-bumper:kubevirtci-bumper -- -ensure-last-three-minor-of v1 --k8s-provider-dir /home/prow/go/src/github.com/kubevirt/project-infra/../kubevirtci/cluster-provision/k8s](kubevirt/kubevirtci#974)

```release-note
NONE
```

Signed-off-by: kubevirt-bot <[email protected]>
What this PR does / why we need it:
Since kubernetes-sigs/kind#2999 blocks us from updating to newer k8s versions using kind (because we use cpu manager), we are introducing k3d.

Current k8s version: v1.25.6+k3s1 (K3D_TAG=v5.4.7)

Changes:
* Support of local multi instances was removed; we are not using it, and it shouldn't affect multi instances on CI once we want to introduce it.
* Added graceful releasing of the SR-IOV NICs. It reduces the downtime between `cluster-down` and `cluster-up` nicely, as the NICs disappear for a few minutes otherwise.
* Only one PF per node is supported; we don't need more for now.
* Use the k3d local registry instead of one of our own.
* The provider is hardcoded with 1 server (control-plane node) and 2 agents (workers). If we need another configuration, it can be done in a follow-up PR; for now there is no reason to support other configs (a rough sketch of such an invocation is shown below).
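As a rough illustration of the topology and registry choices above (the cluster name, registry name, and image tag are assumptions, not the provider's actual invocation):

```bash
# Hypothetical sketch: one server (control-plane) and two agents, a pinned
# k3s image, and a k3d-managed local registry instead of running our own.
k3d cluster create sriov \
  --servers 1 \
  --agents 2 \
  --image rancher/k3s:v1.25.6-k3s1 \
  --registry-create sriov-registry:0.0.0.0:5000
```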
Notes for reviewers:
In order to ease reviews, the first commit is a pure copy of the current kind folders.
The rest are the changes themselves.
Main files are `provider.sh` and `common.sh`; it may be better to look at them as a whole instead of the diff, because there are a lot of changes to them.
Project infra PR:
Tested on kubevirt itself as well:
Action items on follow-up PRs:
Will be done according to severity.
Potential nice-to-have follow-up PRs:
* Use a `cluster.yaml` for the cluster creation (see the sketch below).
* `cluster-up`: present a verbose/debug mode, maybe.
Can be extracted to an issue if needed.
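For the `cluster.yaml` follow-up item above, a minimal sketch of what a declarative k3d config could look like (file contents, names, and counts are assumptions, not anything from this PR):

```bash
# Hypothetical sketch: feed k3d a declarative config instead of hardcoding
# the topology in CLI flags inside the provider scripts.
cat > cluster.yaml <<'EOF'
apiVersion: k3d.io/v1alpha4
kind: Simple
metadata:
  name: sriov
servers: 1
agents: 2
EOF
k3d cluster create --config cluster.yaml
```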