[release-4.13] Backport of [sriov] NUMA ExcludeTopology test cases #1629
Conversation
[sriov] NUMA ExcludeTopology test cases
Force-pushed from d5d0d3a to 72f670f
/retest

2 similar comments

/retest

/retest
SR-IOV Network Operator dependency bumped with commands:

```
go mod edit -replace \
  github.com/k8snetworkplumbingwg/sriov-network-operator=\
github.com/openshift/[email protected]
go mod tidy
go mod vendor
```

Signed-off-by: Andrea Panattoni <[email protected]>
The test cases use a set of SriovNetworkNodePolicies that target at least two NICs placed on two different NUMA nodes. By toggling the `excludeTopology` field, it is possible to create workload pods that use a single NUMA node or span multiple NUMA nodes. A sketch of one such policy follows.

Signed-off-by: Andrea Panattoni <[email protected]>
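For context, a minimal Go sketch of such a policy, assuming a hypothetical PF name `ens1f0` attached to NUMA node 0 and hypothetical policy/resource names; the actual policies live in the backported test suite and may differ:

```go
package sketch

import (
	sriovv1 "github.com/k8snetworkplumbingwg/sriov-network-operator/api/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// numa0Policy builds a policy for a PF on NUMA node 0 whose VFs may be
// handed to pods pinned on any NUMA node: excludeTopology tells the
// device plugin not to advertise NUMA affinity for the resource.
func numa0Policy(nodeName string) *sriovv1.SriovNetworkNodePolicy {
	return &sriovv1.SriovNetworkNodePolicy{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "test-numa-0-exclude-topology", // hypothetical name
			Namespace: "openshift-sriov-network-operator",
		},
		Spec: sriovv1.SriovNetworkNodePolicySpec{
			ResourceName: "testNuma0ExcludeTopology", // hypothetical resource
			NumVfs:       8,
			NodeSelector: map[string]string{"kubernetes.io/hostname": nodeName},
			NicSelector: sriovv1.SriovNetworkNicSelector{
				PfNames: []string{"ens1f0"}, // hypothetical PF on NUMA node 0
			},
			ExcludeTopology: true,
		},
	}
}
```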
The test cases use a set of SriovNetworkNodePolicies and a performance profile to control NUMA node placement in the test. One test creates a pod whose CPUs are pinned on NUMA node 1 while its SR-IOV interface lives on NUMA node 0, with `excludeTopology` set to `true`. The pod is expected to be deployed successfully.
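As a rough sketch of the workload side, assuming a hypothetical SriovNetwork named `test-numa-0-network` backed by the resource above (on OpenShift the SR-IOV resource request is typically injected from the network annotation by the network resources injector webhook). Equal CPU/memory requests and limits give the pod guaranteed QoS, which is what makes Topology Manager evaluate it:

```go
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// numaTestPod returns a guaranteed-QoS pod attached to an SR-IOV network.
// With the single-numa-node Topology Manager policy, the pod is admitted
// even when its CPUs land on NUMA node 1 and the PF sits on NUMA node 0,
// because the resource was created with excludeTopology: true.
func numaTestPod(namespace string) *corev1.Pod {
	cpu := resource.MustParse("2")
	mem := resource.MustParse("512Mi")
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			GenerateName: "numa-test-",
			Namespace:    namespace,
			Annotations: map[string]string{
				// hypothetical SriovNetwork name
				"k8s.v1.cni.cncf.io/networks": "test-numa-0-network",
			},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "test",
				Image:   "registry.access.redhat.com/ubi8/ubi-minimal", // illustrative image
				Command: []string{"sleep", "infinity"},
				Resources: corev1.ResourceRequirements{
					// equal requests and limits -> guaranteed QoS class
					Requests: corev1.ResourceList{corev1.ResourceCPU: cpu, corev1.ResourceMemory: mem},
					Limits:   corev1.ResourceList{corev1.ResourceCPU: cpu, corev1.ResourceMemory: mem},
				},
			}},
		},
	}
}
```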
Add a test case where all Virtual Functions that belong to an SriovNetworkNodePolicy with `ExcludeTopology = true` are consumed. The test logic verifies that further pod deployments fail with "Insufficient resource" errors, then frees some VFs and checks that pods can be scheduled again.

Signed-off-by: Andrea Panattoni <[email protected]>
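A hedged client-go sketch of how such an exhaustion check might look; the helper name is hypothetical and the PR's real test code may differ:

```go
package sketch

import (
	"context"
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// hasInsufficientResourceEvent reports whether a FailedScheduling event for
// the given pod mentions an insufficient resource, which is what the
// scheduler emits once every VF in the policy's resource pool is consumed.
func hasInsufficientResourceEvent(ctx context.Context, c kubernetes.Interface, namespace, podName string) (bool, error) {
	events, err := c.CoreV1().Events(namespace).List(ctx, metav1.ListOptions{
		FieldSelector: fmt.Sprintf("involvedObject.name=%s,reason=FailedScheduling", podName),
	})
	if err != nil {
		return false, err
	}
	for _, ev := range events.Items {
		if strings.Contains(ev.Message, "Insufficient") {
			return true, nil
		}
	}
	return false, nil
}
```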
A guaranteed workload pod can end up in a different cgroup if a PerformanceProfile has been applied to the node. The Cluster Node Tuning Operator configures the cpuset cgroup under specific conditions, and it does not revert to the original configuration when the node is moved to a different MachineConfigPool.

Refs: https://github.com/openshift/cluster-node-tuning-operator/blob/a4c70abb71036341dfaf0cac30dab0d166e55cbd/assets/performanceprofile/scripts/cpuset-configure.sh#L9

Signed-off-by: Andrea Panattoni <[email protected]>
Signed-off-by: Andrea Panattoni <[email protected]>
Make the NUMA/SR-IOV integration tests create their own KubeletConfig that sets the single-numa-node Topology Manager policy and reserves an entire NUMA node for the system. Add functions to manipulate `node-role.kubernetes.io/x` labels on nodes so the performance profile can be applied to arbitrary nodes (see the sketch below).

Signed-off-by: Andrea Panattoni <[email protected]>
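A minimal sketch of what such label helpers could look like, assuming strategic-merge patches via client-go; function names are illustrative, not the PR's actual API:

```go
package sketch

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// addNodeRole labels a node with node-role.kubernetes.io/<role> so that a
// MachineConfigPool (and hence a PerformanceProfile or KubeletConfig
// targeting that role) starts applying to it.
func addNodeRole(ctx context.Context, c kubernetes.Interface, nodeName, role string) error {
	patch := []byte(fmt.Sprintf(`{"metadata":{"labels":{"node-role.kubernetes.io/%s":""}}}`, role))
	_, err := c.CoreV1().Nodes().Patch(ctx, nodeName, types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	return err
}

// removeNodeRole drops the label again, moving the node back to its
// previous MachineConfigPool.
func removeNodeRole(ctx context.Context, c kubernetes.Interface, nodeName, role string) error {
	patch := []byte(fmt.Sprintf(`{"metadata":{"labels":{"node-role.kubernetes.io/%s":null}}}`, role))
	_, err := c.CoreV1().Nodes().Patch(ctx, nodeName, types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	return err
}
```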
Force-pushed from 72f670f to 306f510
@zeeke: The following test failed, say /retest to rerun all failed tests:
/retest

/lgtm
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: SchSeba, zeeke
Cherry pick of
cc @gregkopels