[DNM] spike: Run e2e tests in parallel #76

Open
RamLavi wants to merge 4 commits into main

Conversation

@RamLavi (Contributor) commented Nov 21, 2024

What this PR does / why we need it:
This PR runs the e2e tests in parallel, since the tests should be independent of each other.
The number of parallel processes depends on the number of CPUs available on the machine running the CI.
Running it on this PR should shorten CI time and potentially expose bugs (for example, hidden dependencies between tests).
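Roughly, the invocation looks like the following (a minimal sketch, assuming the ginkgo v2 CLI; the exact flags and suite path used in this PR may differ):

```bash
# Run the e2e suites in parallel with the ginkgo CLI, sizing the number of
# parallel processes from the CPUs available on the CI machine.
ginkgo -r --procs="$(nproc)" --timeout=2h ./test/e2e
```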

Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
Fixes #

Special notes for your reviewer:

Release note:

NONE

@kubevirt-bot added the dco-signoff: yes label (indicates the PR's author has DCO signed all their commits) on Nov 21, 2024
@kubevirt-bot

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign maiqueb for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@RamLavi changed the title from "spike: Run e2e tests in parallel" to "[DNM] spike: Run e2e tests in parallel" on Nov 21, 2024
@kubevirt-bot added the needs-rebase label (indicates a PR cannot be merged because it has merge conflicts with HEAD) on Dec 17, 2024
Move to the ginkgo CLI instead of the "go test" tool.
This is done in order to use ginkgo's parallel option, which "go test" does
not support.

Signed-off-by: Ram Lavi <[email protected]>
In order to manage failed-test artifacts, the process number is added to the
log file name.

Signed-off-by: Ram Lavi <[email protected]>
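As an illustration of this naming scheme, here is a hedged sketch (the helper and paths are hypothetical, not the exact code in this change):

```bash
#!/usr/bin/env bash
# Hypothetical helper: build a per-process log file name so that artifacts
# written by parallel ginkgo processes do not overwrite each other.
# The process number is whatever the suite passes down, e.g. the value
# returned by GinkgoParallelProcess() in the Go test code.
per_process_log() {
  local name="$1"     # artifact base name, e.g. a pod name
  local process="$2"  # ginkgo parallel process number
  echo "${ARTIFACTS:-/tmp}/${name}.process_${process}.log"
}

per_process_log virt-launcher-alpine-908896f1e 3
# -> /tmp/virt-launcher-alpine-908896f1e.process_3.log
```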
@kubevirt-bot added the size/XXL label and removed the needs-rebase and size/S labels on Dec 18, 2024
@RamLavi (Contributor, Author) commented Dec 19, 2024

@maiqueb looking at the e2e test failure, it seems that both before and after the migration, the primary UDN virt-launcher pod is not getting the appropriate network-status annotation (logs):

2024-12-18T09:08:33.5592345Z   RAM B4 Migration vmi alpine-908896f1e virtLauncherPod virt-launcher-alpine-908896f1e-h2nb9 Annotations map[descheduler.alpha.kubernetes.io/request-evict-only: k8s.ovn.org/pod-networks:{"default":{"ip_addresses":["10.244.1.15/24","fd00:10:244:2::f/64"],"mac_address":"0a:58:0a:f4:01:0f","gateway_ips":["10.244.1.1","fd00:10:244:2::1"],"routes":[{"dest":"10.244.0.0/16","nextHop":"10.244.1.1"},{"dest":"10.96.0.0/16","nextHop":"10.244.1.1"},{"dest":"169.254.0.5/32","nextHop":"10.244.1.1"},{"dest":"100.64.0.0/16","nextHop":"10.244.1.1"},{"dest":"fd00:10:244::/48","nextHop":"fd00:10:244:2::1"},{"dest":"fd00:10:96::/112","nextHop":"fd00:10:244:2::1"},{"dest":"fd69::5/128","nextHop":"fd00:10:244:2::1"},{"dest":"fd98::/64","nextHop":"fd00:10:244:2::1"}],"role":"primary"},"testns-30ce1396d/l2-net-attach-def":{"ip_addresses":["10.100.200.3/24"],"mac_address":"0a:58:0a:64:c8:03","gateway_ips":["10.100.200.1"],"routes":[{"dest":"10.96.0.0/16","nextHop":"10.100.200.1"},{"dest":"100.65.0.0/16","nextHop":"10.100.200.1"}],"ip_address":"10.100.200.3/24","gateway_ip":"10.100.200.1","tunnel_id":4,"role":"primary"}} k8s.ovn.org/primary-udn-ipamclaim:alpine-908896f1e.pod k8s.v1.cni.cncf.io/network-status:[{
2024-12-18T09:08:33.5597745Z       "name": "ovn-kubernetes",
2024-12-18T09:08:33.5598207Z       "interface": "eth0",
2024-12-18T09:08:33.5598567Z       "ips": [
2024-12-18T09:08:33.5598959Z           "10.244.1.15",
2024-12-18T09:08:33.5599436Z           "fd00:10:244:2::f"
2024-12-18T09:08:33.5599819Z       ],
2024-12-18T09:08:33.5600197Z       "mac": "0a:58:0a:f4:01:0f",
2024-12-18T09:08:33.5600463Z       "default": true,
2024-12-18T09:08:33.5600668Z       "dns": {}
2024-12-18T09:08:33.5603199Z   }] kubectl.kubernetes.io/default-container:compute kubevirt.io/domain:alpine-908896f1e kubevirt.io/migrationTransportUnix:true kubevirt.io/vm-generation:1 post.hook.backup.velero.io/command:["/usr/bin/virt-freezer", "--unfreeze", "--name", "alpine-908896f1e", "--namespace", "testns-30ce1396d"] post.hook.backup.velero.io/container:compute pre.hook.backup.velero.io/command:["/usr/bin/virt-freezer", "--freeze", "--name", "alpine-908896f1e", "--namespace", "testns-30ce1396d"] pre.hook.backup.velero.io/container:compute]
2024-12-18T09:08:33.5606414Z 

Can we attribute this issue to the OVNK race you talked about, or is it a new issue we need to check?

@maiqueb (Collaborator) commented Dec 19, 2024

@maiqueb looking at the e2e test failure, it seems that both before and after the migration, the primary UDN virt-launcher pod is not getting the appropriate network-status annotation (logs):

[quoted log snippet omitted; see above]

Can we attribute this issue to the OVNK race you talked about, or is it a new issue we need to check?

It could be.

To be sure, you'll need to check the logs of the ovnkube-control-plane and confirm that, for that pod, it failed to find the primary UDN network.
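For example, something along these lines could show which network was resolved for the pod (a minimal sketch; the ovn-kubernetes namespace and label are assumptions based on a typical kind deployment, adjust them to the actual cluster):

```bash
# Grep the ovnkube control-plane logs for the failing pod's name to see which
# network it treated as the namespace's active (primary) network.
kubectl logs -n ovn-kubernetes -l name=ovnkube-control-plane \
  --all-containers --tail=-1 | grep -i "alpine-908896f1e"
```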

@RamLavi (Contributor, Author) commented Dec 19, 2024

@maiqueb looking at the e2e test failure, it seems that both before and after the migration, the primary UDN virt-launcher pod is not getting the appropriate network-status annotation (logs):

[quoted log snippet omitted; see above]

Can we attribute this issue to the OVNK race you talked about, or is it a new issue we need to check?

It could be.

To be sure, you'll need to check the logs of the ovnkube-control-plane and confirm that, for that pod, it failed to find the primary UDN network.

I don't see any pod failures, but are you sure we should find a "pod fail"? I mean, it did find a primary network, it was just the wrong one. In any case, this is very helpful; I've also asked on the race bug for more information.

@maiqueb (Collaborator) commented Dec 19, 2024

@maiqueb looking at the e2e test failure, it seems that both before and after the migration, the primary UDN virt-launcher pod is not getting the appropriate network-status annotation (logs):

[quoted log snippet omitted; see above]

Can we attribute this issue to the OVNK race you talked about, or is it a new issue we need to check?

It could be.
To be sure, you'll need to check the logs of the ovnkube-control-plane and confirm that, for that pod, it failed to find the primary UDN network.

I don't see any pod failures, but are you sure we should find a "pod fail"? I mean, it did find a primary network, it was just the wrong one. In any case, this is very helpful; I've also asked on the race bug for more information.

So that's what you need to find: when it returns the active network for the namespace, which one did it find?

It should have found the primary UDN, not the default one. Can you print the relevant log snippet?
