ci: unable to find network with name or ID podman-default-kube-network #17946
Comments
...but then again, there's this flake:
The string "3638" does not appear anywhere else in this log, and it's generated via
There is no
Yet another possibly-similar failure
I think it is time to go with the big hammer and make every test case use its own config dir, just like --root and --runroot.
That sounds very reasonable, @Luap99.
The e2e tests are isolated and have their own --root/--runroot arguments. However, networks were always shared, and this causes problems with tests that do a prune or reset because they can affect other tests running in parallel. Over time I fixed some of these cases to use their own config dir, but containers#17946 suggests that this is not enough. Instead of trying to find and fix these tests individually, go with the big hammer and make every test use a new, clean network config directory. This will also make `defer podmanTest.removeNetwork(...)` unnecessary; it is currently required for every test that creates a network. However, to keep the diff small and to see whether this even works, I will do that later in a follow-up commit. Fixes containers#17946 Signed-off-by: Paul Holzinger <[email protected]>
Just linking #17975 (comment) here again: my change will not work, so we actually have to go through all tests which do a prune or reset.
Flakes in the past six days, am reporting in case it's helpful to see which tests are failing so you can at least target those:
Since commit f250560 the play kube command uses its own network. This is racy by design because we create the network and then create/run the pod/containers; in the meantime another prune or reset process could wipe out the network config, since we have to share the network config directory by design in the tests. The problem is that we only have one host netns, which is shared between tests. If the network config dir is not shared, we cannot perform conflict checks for interface names and IP addresses. This results in different tests trying to use the same interface and/or IP address, which causes runtime failures in CNI and netavark. The only solution I see is to make sure that only the reset/prune tests use a custom network dir. This ensures they do not wipe configs that are otherwise required by other tests running in parallel. Fixes containers#17946 Signed-off-by: Paul Holzinger <[email protected]>
...in "built using Dockerfile" test and "play kube fail with custom selinux label" test. The latter, since it's in a test file with lots of other kube tests, I just put into BeforeEach(). References: Issue containers#17946, PR containers#18085 Signed-off-by: Ed Santiago <[email protected]>
Seen yesterday, in a fully-rebased PR, f36 root. Reopening.
I found two prune tests which were missing the custom network dir.
Adds two custom config dirs to tests that were missed in commit dc9a65e. Fixes containers#17946 (hopefully finally) Signed-off-by: Paul Holzinger <[email protected]>
In e2e tests:
Probably a collision between multiple tests. Predicted solution: rewrite the tests to stop using the default network, or at least ensure that at most one test does so.