k3d: Duplicate kind folders as is
Signed-off-by: Or Shoval <[email protected]>
oshoval committed Mar 1, 2023
1 parent e1cf770 commit a4f8e8a
Showing 26 changed files with 1,980 additions and 0 deletions.
10 changes: 10 additions & 0 deletions cluster-up/cluster/k3d-1.25-sriov/OWNERS
@@ -0,0 +1,10 @@
filters:
".*":
reviewers:
- qinqon
- oshoval
- phoracek
- ormergi
approvers:
- qinqon
- phoracek
101 changes: 101 additions & 0 deletions cluster-up/cluster/k3d-1.25-sriov/README.md
@@ -0,0 +1,101 @@
# K8S 1.23.13 with SR-IOV in a Kind cluster

Provides a pre-deployed containerized k8s cluster with version 1.23.13 that runs
using [KinD](https://github.com/kubernetes-sigs/kind).
The cluster is completely ephemeral and is recreated on every cluster restart. The KubeVirt containers are built on the
local machine and are then pushed to a registry which is exposed at
`localhost:5000`.
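
For instance, an image built locally can be tagged and pushed to that registry manually (the image name below is illustrative):

```bash
# Illustrative image name; the registry is reachable on the host at localhost:5000.
docker tag my-test-image:devel localhost:5000/my-test-image:devel
docker push localhost:5000/my-test-image:devel
```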

This version also expects SR-IOV-enabled NICs (SR-IOV Physical Functions) to be present on the current host, and will move
the physical interfaces into the `KinD` cluster's worker node(s) so that they can be used through multus and the SR-IOV
components.

This provider also deploys [multus](https://github.com/k8snetworkplumbingwg/multus-cni)
, [sriov-cni](https://github.com/k8snetworkplumbingwg/sriov-cni)
and [sriov-device-plugin](https://github.com/k8snetworkplumbingwg/sriov-network-device-plugin).
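
Once the cluster is up, these components can be exercised with a `NetworkAttachmentDefinition` along the lines of the sketch below; the resource name matches this provider's default (`kubevirt.io/sriov_net`), while the network name itself is illustrative:

```bash
cat <<'EOF' | cluster-up/kubectl.sh apply -f -
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: sriov-net            # illustrative name
  annotations:
    k8s.v1.cni.cncf.io/resourceName: kubevirt.io/sriov_net
spec:
  config: '{
    "cniVersion": "0.3.1",
    "name": "sriov-net",
    "type": "sriov",
    "ipam": {}
  }'
EOF
```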

## Bringing the cluster up

```bash
export KUBEVIRT_PROVIDER=kind-1.23-sriov
export KUBEVIRT_NUM_NODES=3
make cluster-up

$ cluster-up/kubectl.sh get nodes
NAME STATUS ROLES AGE VERSION
sriov-control-plane Ready control-plane,master 20h v1.23.13
sriov-worker Ready worker 20h v1.23.13
sriov-worker2 Ready worker 20h v1.23.13

$ cluster-up/kubectl.sh get pods -n kube-system -l app=multus
NAME READY STATUS RESTARTS AGE
kube-multus-ds-amd64-d45n4 1/1 Running 0 20h
kube-multus-ds-amd64-g26xh 1/1 Running 0 20h
kube-multus-ds-amd64-mfh7c 1/1 Running 0 20h

$ cluster-up/kubectl.sh get pods -n sriov -l app=sriov-cni
NAME READY STATUS RESTARTS AGE
kube-sriov-cni-ds-amd64-fv5cr 1/1 Running 0 20h
kube-sriov-cni-ds-amd64-q95q9 1/1 Running 0 20h

$ cluster-up/kubectl.sh get pods -n sriov -l app=sriovdp
NAME READY STATUS RESTARTS AGE
kube-sriov-device-plugin-amd64-h7h84 1/1 Running 0 20h
kube-sriov-device-plugin-amd64-xrr5z 1/1 Running 0 20h
```

## Bringing the cluster down

```bash
export KUBEVIRT_PROVIDER=kind-1.23-sriov
make cluster-down
```

This destroys the whole cluster and moves the SR-IOV NICs back to the root network namespace.

## Setting a custom kind version

In order to use a custom kind image / kind version, export `KIND_NODE_IMAGE`, `KIND_VERSION` and `KUBECTL_PATH` before
running cluster-up. For example, in order to use kind 0.9.0 (which is based on k8s-1.19.1):

```bash
export KIND_NODE_IMAGE="kindest/node:v1.19.1@sha256:98cf5288864662e37115e362b23e4369c8c4a408f99cbc06e58ac30ddc721600"
export KIND_VERSION="0.9.0"
export KUBECTL_PATH="/usr/bin/kubectl"
```

This allows users to test or use custom images / different kind versions before making them official.
See https://github.com/kubernetes-sigs/kind/releases for details on which node images correspond to each kind version.

## Running multi SR-IOV clusters locally

Kubevirtci SR-IOV provider supports running two clusters side by side, with a few known limitations.

General considerations:

- An SR-IOV PF must be available for each cluster. In order to achieve that, there are two options:

1. Assign just one PF to each worker node of each cluster by using `export PF_COUNT_PER_NODE=1` (this is the default
   value).
2. Blacklist the unused PFs with `export PF_BLACKLIST=<PF names>`, in order to prevent them from being allocated to
   the current cluster. The user can list the PFs that should not be allocated to the current cluster, keeping in mind
   that at least one (or two, in case of migration) must not be listed, so they can be allocated to the current
   cluster. Note: another reason to blacklist a PF is that it has a defect or should be kept for other operations
   (for example, sniffing).

- Clusters should be created one after another, not in parallel (to avoid races over SR-IOV PFs).
- The cluster names must be different. This can be achieved by setting `export CLUSTER_NAME=sriov2` on the 2nd cluster
  (see the sketch after this list). The default `CLUSTER_NAME` is `sriov`. The 2nd cluster's registry is exposed at
  `localhost:5001` automatically once `CLUSTER_NAME` is set to a non-default value.
- Each cluster should be created in its own git clone folder, e.g.:
  `/root/project/kubevirtci1`
  `/root/project/kubevirtci2`
  In order to switch between them, change directory to the relevant folder and set the env variables `KUBECONFIG`
  and `KUBEVIRT_PROVIDER`.
- In case only one PF exists, for example when running on Prow (which assigns only one PF per job in its own DinD),
  Kubevirtci is agnostic and nothing needs to be done, since all the conditions above are met.
- The upper limit on the number of clusters that can run at the same time equals the number of PFs divided by the
  number of PFs per cluster; therefore, if there is only one PF, only one cluster can be created. Locally, the actual
  limit currently supported is two clusters.
- In order to use `make cluster-down`, make sure the right `CLUSTER_NAME` is exported.
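
A minimal sketch of bringing up a second cluster side by side, assuming the clone folders from the example above (the `PF_BLACKLIST` value is a placeholder):

```bash
# Second cluster, run from its own clone; the blacklist value is a placeholder.
cd /root/project/kubevirtci2
export KUBEVIRT_PROVIDER=kind-1.23-sriov
export CLUSTER_NAME=sriov2                              # registry exposed at localhost:5001
export PF_BLACKLIST="<PFs used by the first cluster>"   # optional, see above
make cluster-up

# Tear it down later with the same CLUSTER_NAME exported.
export CLUSTER_NAME=sriov2
make cluster-down
```
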
60 changes: 60 additions & 0 deletions cluster-up/cluster/k3d-1.25-sriov/TROUBLESHOOTING.md
@@ -0,0 +1,60 @@
# How to troubleshoot a failing kind job

If logging and output artifacts are not enough, there is a way to connect to a running CI pod and troubleshoot directly from there.

## Pre-requisites

- A working (enabled) account on the [CI cluster](shift.ovirt.org), specifically enabled for the `kubevirt-prow-jobs` project.
- The [mkpj tool](https://github.com/kubernetes/test-infra/tree/master/prow/cmd/mkpj) installed

## Launching a custom job

Through the `mkpj` tool, it's possible to craft a custom Prow Job that can be executed on the CI cluster.

Just `go get` it by running `go get k8s.io/test-infra/prow/cmd/mkpj`.

Then run the following command from a checkout of the [project-infra repo](https://github.com/kubevirt/project-infra):

```bash
mkpj --pull-number $KUBEVIRTPRNUMBER -job pull-kubevirt-e2e-kind-k8s-sriov-1.17.0 -job-config-path github/ci/prow/files/jobs/kubevirt/kubevirt-presubmits.yaml --config-path github/ci/prow/files/config.yaml > debugkind.yaml
```

You will end up having a ProwJob manifest in the `debugkind.yaml` file.

It's strongly recommended to replace the job's name (`metadata.name`) with something more recognizable, as that will make it easier to find and debug the related pod.

The `$KUBEVIRTPRNUMBER` can be an actual PR on the [kubevirt repo](https://github.com/kubevirt/kubevirt).

In case we just want to debug the cluster provided by the CI, it's recommended to override the entry point, either in the test PR we are instrumenting (a good sample can be found [here](https://github.com/kubevirt/kubevirt/pull/3022)) or directly in the prow job's manifest.

Remember that we want the cluster to be long-lived, so a long sleep must be provided as part of the entry point.
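
For example, the manual edits to `debugkind.yaml` could look roughly like this (the job name and sleep duration are illustrative, and the exact field layout depends on the generated ProwJob manifest):

```bash
# Illustrative edits to the generated ProwJob manifest:
#   metadata:
#     name: debug-sriov-myuser                       # recognizable, hypothetical name
#   spec:
#     pod_spec:
#       containers:
#       - command: ["/bin/bash", "-c", "sleep 9h"]   # keep the pod alive for debugging
vi debugkind.yaml
```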

Make sure you switch to the `kubevirt-prow-jobs` project, and apply the manifest:

```bash
kubectl apply -f debugkind.yaml
```

You will end up with a ProwJob object, and a pod with the same name you gave to the ProwJob.

Once the pod is up & running, connect to it via bash:

```bash
kubectl exec -it debugprowjobpod bash
```

### Logistics

Once you are in the pod, you'll be able to troubleshoot what's happening in the environment where CI runs its tests.

Run the following to bring up a [kind](https://github.com/kubernetes-sigs/kind) cluster with a single-node setup and the SR-IOV operator already set up (if it wasn't already done by the job itself).

```bash
KUBEVIRT_PROVIDER=kind-k8s-sriov-1.17.0 make cluster-up
```

The kubeconfig file will be available under `/root/.kube/kind-config-sriov`.

The `kubectl` binary is already on board and in `$PATH`.

The container acting as a node is the one named `sriov-control-plane`. You can even see what's inside it by running `docker exec -it sriov-control-plane bash`.
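
Putting those pieces together, a first session inside the pod might look like this (paths as noted above):

```bash
export KUBECONFIG=/root/.kube/kind-config-sriov
kubectl get nodes -o wide
# Inspect the container that acts as the control-plane node:
docker exec -it sriov-control-plane bash
```
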
73 changes: 73 additions & 0 deletions cluster-up/cluster/k3d-1.25-sriov/config_sriov_cluster.sh
@@ -0,0 +1,73 @@
#!/bin/bash

[ $(id -u) -ne 0 ] && echo "FATAL: this script requires sudo privileges" >&2 && exit 1

set -xe

PF_COUNT_PER_NODE=${PF_COUNT_PER_NODE:-1}
[ $PF_COUNT_PER_NODE -le 0 ] && echo "FATAL: PF_COUNT_PER_NODE must be a positive integer" >&2 && exit 1

SCRIPT_PATH=$(dirname "$(realpath "$0")")

source ${SCRIPT_PATH}/sriov-node/node.sh
source ${SCRIPT_PATH}/sriov-components/sriov_components.sh

CONFIGURE_VFS_SCRIPT_PATH="$SCRIPT_PATH/sriov-node/configure_vfs.sh"

SRIOV_COMPONENTS_NAMESPACE="sriov"
SRIOV_NODE_LABEL_KEY="sriov_capable"
SRIOV_NODE_LABEL_VALUE="true"
SRIOV_NODE_LABEL="$SRIOV_NODE_LABEL_KEY=$SRIOV_NODE_LABEL_VALUE"
SRIOVDP_RESOURCE_PREFIX="kubevirt.io"
SRIOVDP_RESOURCE_NAME="sriov_net"
VFS_DRIVER="vfio-pci"
VFS_DRIVER_KMODULE="vfio_pci"
VFS_COUNT="6"

function validate_nodes_sriov_allocatable_resource() {
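# Wait until every node labeled as SR-IOV capable reports its VF count as an allocatable resource ($SRIOVDP_RESOURCE_PREFIX/$SRIOVDP_RESOURCE_NAME)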
local -r resource_name="$SRIOVDP_RESOURCE_PREFIX/$SRIOVDP_RESOURCE_NAME"
local -r sriov_nodes=$(_kubectl get nodes -l $SRIOV_NODE_LABEL -o custom-columns=:.metadata.name --no-headers)

local num_vfs
for sriov_node in $sriov_nodes; do
num_vfs=$(node::total_vfs_count "$sriov_node")
sriov_components::wait_allocatable_resource "$sriov_node" "$resource_name" "$num_vfs"
done
}

worker_nodes=($(_kubectl get nodes -l node-role.kubernetes.io/worker -o custom-columns=:.metadata.name --no-headers))
worker_nodes_count=${#worker_nodes[@]}
[ "$worker_nodes_count" -eq 0 ] && echo "FATAL: no worker nodes found" >&2 && exit 1

pfs_names=($(node::discover_host_pfs))
pf_count="${#pfs_names[@]}"
[ "$pf_count" -eq 0 ] && echo "FATAL: Could not find available sriov PF's" >&2 && exit 1

total_pf_required=$((worker_nodes_count*PF_COUNT_PER_NODE))
[ "$pf_count" -lt "$total_pf_required" ] && \
echo "FATAL: there are not enough PF's on the host, try to reduce PF_COUNT_PER_NODE
Worker nodes count: $worker_nodes_count
PF per node count: $PF_COUNT_PER_NODE
Total PF count required: $total_pf_required" >&2 && exit 1

## Move SR-IOV Physical Functions to worker nodes
PFS_IN_USE=""
node::configure_sriov_pfs "${worker_nodes[*]}" "${pfs_names[*]}" "$PF_COUNT_PER_NODE" "PFS_IN_USE"

## Create VFs and configure their drivers on each SR-IOV node
node::configure_sriov_vfs "${worker_nodes[*]}" "$VFS_DRIVER" "$VFS_DRIVER_KMODULE" "$VFS_COUNT"

## Deploy Multus and SRIOV components
sriov_components::deploy_multus
sriov_components::deploy \
"$PFS_IN_USE" \
"$VFS_DRIVER" \
"$SRIOVDP_RESOURCE_PREFIX" "$SRIOVDP_RESOURCE_NAME" \
"$SRIOV_NODE_LABEL_KEY" "$SRIOV_NODE_LABEL_VALUE"

# Verify that each sriov capable node has sriov VFs allocatable resource
validate_nodes_sriov_allocatable_resource
sriov_components::wait_pods_ready

_kubectl get nodes
_kubectl get pods -n $SRIOV_COMPONENTS_NAMESPACE
47 changes: 47 additions & 0 deletions cluster-up/cluster/k3d-1.25-sriov/conformance.json
@@ -0,0 +1,47 @@
{
"Description": "DEFAULT",
"UUID": "",
"Version": "v0.56.9",
"ResultsDir": "/tmp/sonobuoy/results",
"Resources": null,
"Filters": {
"Namespaces": ".*",
"LabelSelector": ""
},
"Limits": {
"PodLogs": {
"Namespaces": "kube-system",
"SonobuoyNamespace": true,
"FieldSelectors": [],
"LabelSelector": "",
"Previous": false,
"SinceSeconds": null,
"SinceTime": null,
"Timestamps": false,
"TailLines": null,
"LimitBytes": null
}
},
"QPS": 30,
"Burst": 50,
"Server": {
"bindaddress": "0.0.0.0",
"bindport": 8080,
"advertiseaddress": "",
"timeoutseconds": 21600
},
"Plugins": null,
"PluginSearchPath": [
"./plugins.d",
"/etc/sonobuoy/plugins.d",
"~/sonobuoy/plugins.d"
],
"Namespace": "sonobuoy",
"WorkerImage": "sonobuoy/sonobuoy:v0.56.9",
"ImagePullPolicy": "IfNotPresent",
"ImagePullSecrets": "",
"AggregatorPermissions": "clusterAdmin",
"ServiceAccountName": "sonobuoy-serviceaccount",
"ProgressUpdatesPort": "8099",
"SecurityContextMode": "nonroot"
}
69 changes: 69 additions & 0 deletions cluster-up/cluster/k3d-1.25-sriov/provider.sh
@@ -0,0 +1,69 @@
#!/usr/bin/env bash

set -e

DEFAULT_CLUSTER_NAME="sriov"
DEFAULT_HOST_PORT=5000
ALTERNATE_HOST_PORT=5001
export CLUSTER_NAME=${CLUSTER_NAME:-$DEFAULT_CLUSTER_NAME}

if [ $CLUSTER_NAME == $DEFAULT_CLUSTER_NAME ]; then
export HOST_PORT=$DEFAULT_HOST_PORT
else
export HOST_PORT=$ALTERNATE_HOST_PORT
fi

function set_kind_params() {
export KIND_VERSION="${KIND_VERSION:-0.17.0}"
export KIND_NODE_IMAGE="${KIND_NODE_IMAGE:-quay.io/kubevirtci/kindest-node:v1.23.13@sha256:ef453bb7c79f0e3caba88d2067d4196f427794086a7d0df8df4f019d5e336b61}"
export KUBECTL_PATH="${KUBECTL_PATH:-/bin/kubectl}"
}

function print_sriov_data() {
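# Print each non control-plane node's VFs and PF PCI addresses, for easier debugging from the logs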
nodes=$(_kubectl get nodes -o=custom-columns=:.metadata.name | awk NF)
for node in $nodes; do
if [[ ! "$node" =~ .*"control-plane".* ]]; then
echo "Node: $node"
echo "VFs:"
${CRI_BIN} exec $node bash -c "ls -l /sys/class/net/*/device/virtfn*"
echo "PFs PCI Addresses:"
${CRI_BIN} exec $node bash -c "grep PCI_SLOT_NAME /sys/class/net/*/device/uevent"
fi
done
}

function configure_registry_proxy() {
[ "$CI" != "true" ] && return

echo "Configuring cluster nodes to work with CI mirror-proxy..."

local -r ci_proxy_hostname="docker-mirror-proxy.kubevirt-prow.svc"
local -r kind_binary_path="${KUBEVIRTCI_CONFIG_PATH}/$KUBEVIRT_PROVIDER/.kind"
local -r configure_registry_proxy_script="${KUBEVIRTCI_PATH}/cluster/kind/configure-registry-proxy.sh"

KIND_BIN="$kind_binary_path" PROXY_HOSTNAME="$ci_proxy_hostname" $configure_registry_proxy_script
}

function up() {
# print hardware info for easier debugging based on logs
echo 'Available NICs'
${CRI_BIN} run --rm --cap-add=SYS_RAWIO quay.io/phoracek/lspci@sha256:0f3cacf7098202ef284308c64e3fc0ba441871a846022bb87d65ff130c79adb1 sh -c "lspci | egrep -i 'network|ethernet'"
echo ""

cp $KIND_MANIFESTS_DIR/kind.yaml ${KUBEVIRTCI_CONFIG_PATH}/$KUBEVIRT_PROVIDER/kind.yaml
kind_up

configure_registry_proxy

# remove the rancher.io kind default storageClass
_kubectl delete sc standard

${KUBEVIRTCI_PATH}/cluster/$KUBEVIRT_PROVIDER/config_sriov_cluster.sh

print_sriov_data
echo "$KUBEVIRT_PROVIDER cluster '$CLUSTER_NAME' is ready"
}

set_kind_params

source ${KUBEVIRTCI_PATH}/cluster/kind/common.sh
@@ -0,0 +1,27 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: sriov
resources:
- sriov-ns.yaml
- sriov-cni-daemonset.yaml
- sriovdp-daemonset.yaml
- sriovdp-config.yaml
patchesJson6902:
- target:
group: apps
version: v1
kind: DaemonSet
name: kube-sriov-cni-ds-amd64
path: patch-node-selector.yaml
- target:
group: apps
version: v1
kind: DaemonSet
name: kube-sriov-device-plugin-amd64
path: patch-node-selector.yaml
- target:
group: apps
version: v1
kind: DaemonSet
name: kube-sriov-device-plugin-amd64
path: patch-sriovdp-resource-prefix.yaml