run_podman pod create --name $pod_name --infra-name $infra_name
pid="$output"
run_podman run --pod $pod_name --name $con1_name $IMAGE cat /etc/hosts
is "$output" ".*\s$pod_name $infra_name.*" "Pod hostname in /etc/hosts"
is "$output" ".*127.0.0.1\s$con1_name.*" "Container1 name in /etc/hosts"
# get the length of the hosts file
old_lines=${#lines[@]}
# since the first container should be cleaned up now we should only see the
# new host entry and the old one should be removed (lines check)
run_podman run --pod $pod_name --name $con2_name $IMAGE cat /etc/hosts
is "$output" ".*\s$pod_name $infra_name.*" "Pod hostname in /etc/hosts"
is "$output" ".*127.0.0.1\s$con2_name.*" "Container2 name in /etc/hosts"
is "${#lines[@]}" "$old_lines" "Number of hosts lines is equal"
(I can't link to my parallel version).
Test is failing in parallel mode, and only on my laptop; I haven't seen it fail in CI. The failure is in the last two lines shown above: basically, container1 is still showing up in /etc/hosts.
Is this expected? I'm going to try adding --rm to the first podman run and see if the failure vanishes, but I'm not sure if that's the right thing to do.
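The proposed fix would look like the following sketch against the snippet above. It is not runnable on its own: run_podman, $pod_name, $con1_name, and $IMAGE all come from the podman system-test harness.

```bash
# Sketch: --rm makes 'podman run' remove the container when it exits,
# which forces cleanup (including the /etc/hosts teardown) to complete
# before the command returns, so container2's check no longer races.
run_podman run --rm --pod $pod_name --name $con1_name $IMAGE cat /etc/hosts
```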
Adding --rm will fix it, because the run will then remove the container at the end, which means cleanup is done before podman run returns.
Without it, podman run ... waits for the container to exit; it does not wait for the container to be fully cleaned up. The actual cleanup happens via the podman container cleanup process in the background, which is what causes the race here. In practice it is a bit more complicated.
For a long-running process it will always work, because we wait for conmon to exit, and conmon waits for the podman container cleanup process to finish first, so since #23601 cleanup is done in most cases. However, a short-running process such as cat might have exited before we call into WaitForExit(), so we no longer wait for conmon and exit earlier there, which is what #23646 was about.
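The race can be illustrated without podman at all. This is a hypothetical sketch (the file and names are made up): a background "cleanup" job, standing in for the podman container cleanup process, removes a hosts entry after the foreground command has already returned, so a check taken immediately afterwards still sees the stale entry.

```shell
# Hypothetical illustration of the race (no podman involved): the
# "cleanup" step runs in the background, so a check taken right after
# the foreground command returns races with it.
hosts=$(mktemp)
printf '127.0.0.1 localhost\n127.0.0.1 con1\n' > "$hosts"

# Background cleanup: removes con1's entry a moment later.
( sleep 0.2
  grep -v con1 "$hosts" > "$hosts.new" && mv "$hosts.new" "$hosts" ) &

# Immediately after the "run" returns, the stale entry is still present.
before=$(grep -c con1 "$hosts" || true)
echo "before cleanup: $before entry"

wait   # waiting for the cleanup to finish is what --rm effectively buys you
after=$(grep -c con1 "$hosts" || true)
echo "after cleanup: $after entries"
rm -f "$hosts"
```

The same pattern explains the test failure: without --rm nothing in the test waits for the background cleanup, so container1's /etc/hosts entry may still be present when container2 runs its check.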
We could try doing an explicit cleanup call at the end, but this wouldn't really work via the remote API, so I'd rather not do it for local podman only. The --rm fix should work.
Almost certainly a test bug. The snippet quoted above is podman/test/system/500-networking.bats, lines 160 to 178 at commit f3db6b1.