unable to create pod cgroup: slice was already loaded or has a fragment file #24010
just started looking into this. Is it safe to run multiple …
There's one part that I'm suspicious of and need to fix: the global …
The bug reproduces even with the most careful parallel-safe paranoia I can write. And still, even with only one test in the …
Mostly just switch to safename. Rewrite setup() to guarantee unique service file names, atomically created.

* IMPORTANT NOTE: enabling parallelization on these tests triggers containers#24010 ("fragment file" flake), but only on my f40 laptop. I have never seen the flake in Cirrus despite many, many runs in containers#23275. I am submitting this for review and merging because even though _something_ is broken, this breakage is unlikely to affect our CI.

Signed-off-by: Ed Santiago <[email protected]>
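As a rough illustration of the "unique service file names, atomically created" idea in that note (not the actual podman test-suite code; `safename`, `$UNIT_DIR`, and `$CONTAINER_NAME` are hypothetical stand-ins for the suite's helpers and variables):

```bash
# Sketch only: per-test unique unit name plus atomic install of the unit file.
setup() {
    # safename is assumed to return a name unique to this test invocation,
    # so parallel tests never collide on the same .service file.
    SERVICE_NAME="podman_test_$(safename)"
    UNIT_FILE="$UNIT_DIR/$SERVICE_NAME.service"

    # Write to a temporary file in the same directory, then mv into place:
    # rename(2) within one filesystem is atomic, so systemd never sees a
    # half-written unit file.
    local tmp
    tmp=$(mktemp "$UNIT_DIR/.$SERVICE_NAME.XXXXXX")
    podman generate systemd --new --name "$CONTAINER_NAME" > "$tmp"
    mv "$tmp" "$UNIT_FILE"

    systemctl daemon-reload
}
```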
A friendly reminder that this issue had no activity for 30 days.
Based on a tip from the interwebz I ran …
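For context on the error in the title: systemd reports "... was already loaded or has a fragment file" when asked to create a transient unit whose name collides with a unit it already has loaded. A generic way to look for and clear such a leftover pod slice (a sketch only; the `machine-libpod_pod_*` name pattern and the choice of `--user` vs. the system bus are assumptions, and this is not necessarily the command referred to above):

```bash
# List any pod slices systemd still knows about (name pattern is an assumption).
systemctl --user list-units --all 'machine-libpod_pod_*.slice'

# Clear failed state / stop a leftover slice so the name can be reused.
systemctl --user reset-failed 'machine-libpod_pod_*.slice'
systemctl --user stop 'machine-libpod_pod_PODID.slice'   # PODID: placeholder
```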
[ Copy of https://github.com/containers/crun/issues/1560 ]
This is now the number one flake in parallel-podman-testing land. It is not manifesting in actual CI, only on my f40 laptop, and it's preventing me from parallelizing `250-systemd.bats`.

The trigger is enabling parallel tests in `250-systemd.bats`. It reproduces very quickly (80-90%) with `file_tags=ci:parallel`, but also reproduces (~40%) if I just do `test_tags` on the `envar` or `systemd template` tests. I have never seen this failure before adding tags to `250.bats`, and I have never seen it in any of the runs where I've removed the parallel tags from `250.bats`. It is possible that `service_setup()` (which runs a bunch of systemctls) is to blame, but I am baffled as to how.

Kagi search finds containers/crun#1138, but that's OOM-related and I'm pretty sure nothing is OOMing.

Workaround is easy: don't parallelize `250.bats`.
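For readers unfamiliar with how the parallel opt-in above is expressed: bats-core (>= 1.8) reads tag directives from comments in the .bats file, and the runner can then filter on them. A minimal sketch, assuming the `ci:parallel` tag is consumed this way here (the test name and invocation below are illustrative, not the suite's actual ones):

```bash
#!/usr/bin/env bats
#
# Sketch only: tagging tests for parallel runs with bats-core tag directives.

# Tag every test in this file:
# bats file_tags=ci:parallel

# Or tag a single test (the directive must sit directly above the @test):
# bats test_tags=ci:parallel
@test "podman generate systemd - envar (placeholder)" {
    run true
    [ "$status" -eq 0 ]
}
```

A run restricted to those tests might then look like `bats --filter-tags ci:parallel --jobs 4 test/system/250-systemd.bats` (flag names per bats-core; whether podman's test wrapper invokes it this way is an assumption).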