
'podman stop' does not thoroughly clean up, resulting in 'the container name \"autotests\" is already in use' #22852

Closed
CendioMartin opened this issue May 30, 2024 · 4 comments
Labels: kind/bug, locked - please file new issue/PR


CendioMartin commented May 30, 2024

Issue Description

I am not sure whether this issue belongs here or should instead be filed against Jenkins. I'm a bit at a loss as to why this happens, so I'll try here first.

There is a Jenkins job that builds a container image, starts a container from it, and runs some tests inside the container. Once the tests are done, the container is stopped with podman stop autotests.

Occasionally, the container fails to start with the message:

Error: creating container storage: the container name "autotests" is already in use by 7550a1c5df788b3c1d15d2596fc2f4b364378b08cb0c7bee7b1835ecfab4c8f0. You have to remove that container to be able to reuse that name: that name is already in use, or use --replace to instruct Podman to do so.

The command to start the container is:

podman run --rm --name autotests --detach --privileged autotests

The Jenkins job that runs this is an 'Execute Shell' step with

./runtests.sh

And the contents of ./runtests.sh:

#!/bin/bash

# Build container image
podman build --tag autotests -f container/Containerfile

# Start container (just systemd)
podman run --rm --name autotests --detach --privileged autotests

function finish {
	podman stop autotests
}
trap finish EXIT

podman exec -w /root autotests bash -c 'unzip foo.zip'

# Set up users for tests
podman cp container/users autotests:/root/
podman cp container/add_users autotests:/root/
podman exec autotests /root/add_users

# Transfer test framework

# Run test server
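
A more defensive variant of the script would register the trap before starting the container and also remove the container explicitly, so a stuck --rm removal cannot block the next run. A sketch (untested; it assumes `podman run --replace` and `podman rm --force --ignore` are available in this Podman version, which Podman 4.x documents):

```shell
# Sketch of a more defensive runtests.sh (assumption: this Podman
# build supports `podman run --replace` and `podman rm --force --ignore`).

cleanup() {
    podman stop autotests || true
    # Remove the container explicitly, even though --rm should do it:
    # this unblocks the case where automatic removal is stuck in the
    # "Removing" state. --ignore makes this succeed when the container
    # is already gone.
    podman rm --force --ignore autotests || true
}

start_container() {
    # --replace removes any leftover container holding the name
    # instead of failing with "name is already in use".
    podman run --rm --replace --name autotests --detach --privileged autotests
}

# Register the trap *before* starting the container so nothing can
# leak between the run and the trap registration. The main body of
# the script would then call start_container and run the test steps.
trap cleanup EXIT
```

With --replace, a leftover container stuck in Removing would be cleared instead of failing the build.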

This has been working fine for some years, but since 2024-05-23 it has sporadically failed with:

+ podman run --rm --name autotests --detach --privileged autotests
Error: creating container storage: the container name "autotests" is already in use by b57595ecb4137b2bc64f158d070b5db2101b98e9de48edf985582e53e2202e94. You have to remove that container to be able to reuse that name: that name is already in use, or use --replace to instruct Podman to do so.
Build step 'Execute shell' marked build as failure

Output of podman ps when this happens:

[jenkins@build ~]$ podman ps
CONTAINER ID  IMAGE       COMMAND     CREATED     STATUS      PORTS       NAMES
[jenkins@build ~]$

However, podman ps -a shows the container stuck at Removing:

[jenkins@build ~]$ podman ps -a
CONTAINER ID  IMAGE                    COMMAND     CREATED             STATUS      PORTS       NAMES
cbcbe2f2b3e6  localhost/autotests:latest  /sbin/init  About a minute ago  Removing                autotests

(Sometimes it has been in Removing state for over 18 hours)

The only way to recover is to manually run podman rm autotests, which makes the job work again for a while, until it happens the next time.
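
That manual recovery could be automated with a pre-run guard in the 'Execute Shell' step. A sketch (a hypothetical helper function; `podman rm --force --ignore` succeeds even if the container is already gone):

```shell
# Hypothetical pre-run guard: clear any stale container holding the
# name before starting a new one. The name filter uses an anchored
# regex so only an exact match is removed.
remove_stale_container() {
    name=$1
    status=$(podman ps --all --filter "name=^${name}\$" --format '{{.Status}}')
    if [ -n "$status" ]; then
        echo "removing stale ${name} container (status: ${status})"
        podman rm --force --ignore "${name}"
    fi
}
```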

I managed to capture some more information with podman stop --log-level=debug autotests when the previous (successful) job finished:

++ podman stop --time 1 --log-level=debug autotests
time="2024-05-30T13:49:41+02:00" level=info msg="podman filtering at log level debug"
time="2024-05-30T13:49:41+02:00" level=debug msg="Called stop.PersistentPreRunE(podman stop --time 1 --log-level=debug autotests)"
time="2024-05-30T13:49:41+02:00" level=debug msg="Using conmon: \"/usr/bin/conmon\""
time="2024-05-30T13:49:41+02:00" level=info msg="Using sqlite as database backend"
time="2024-05-30T13:49:41+02:00" level=debug msg="systemd-logind: Unknown object '/'."
time="2024-05-30T13:49:41+02:00" level=debug msg="Using graph driver overlay"
time="2024-05-30T13:49:41+02:00" level=debug msg="Using graph root /var/lib/jenkins/.local/share/containers/storage"
time="2024-05-30T13:49:41+02:00" level=debug msg="Using run root /tmp/containers-user-984/containers"
time="2024-05-30T13:49:41+02:00" level=debug msg="Using static dir /var/lib/jenkins/.local/share/containers/storage/libpod"
time="2024-05-30T13:49:41+02:00" level=debug msg="Using tmp dir /run/user/984/libpod/tmp"
time="2024-05-30T13:49:41+02:00" level=debug msg="Using volume path /var/lib/jenkins/.local/share/containers/storage/volumes"
time="2024-05-30T13:49:41+02:00" level=debug msg="Using transient store: false"
time="2024-05-30T13:49:41+02:00" level=debug msg="Not configuring container store"
time="2024-05-30T13:49:41+02:00" level=debug msg="Initializing event backend file"
time="2024-05-30T13:49:41+02:00" level=debug msg="Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument"
time="2024-05-30T13:49:41+02:00" level=debug msg="Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument"
time="2024-05-30T13:49:41+02:00" level=debug msg="Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument"
time="2024-05-30T13:49:41+02:00" level=debug msg="Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument"
time="2024-05-30T13:49:41+02:00" level=debug msg="Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument"
time="2024-05-30T13:49:41+02:00" level=debug msg="Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument"
time="2024-05-30T13:49:41+02:00" level=debug msg="Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument"
time="2024-05-30T13:49:41+02:00" level=debug msg="Using OCI runtime \"/usr/bin/runc\""
time="2024-05-30T13:49:41+02:00" level=info msg="Setting parallel job count to 25"
time="2024-05-30T13:49:41+02:00" level=info msg="podman filtering at log level debug"
time="2024-05-30T13:49:41+02:00" level=debug msg="Called stop.PersistentPreRunE(podman stop --time 1 --log-level=debug autotests)"
time="2024-05-30T13:49:41+02:00" level=debug msg="Using conmon: \"/usr/bin/conmon\""
time="2024-05-30T13:49:41+02:00" level=info msg="Using sqlite as database backend"
time="2024-05-30T13:49:41+02:00" level=debug msg="Overriding run root \"/run/user/984/containers\" with \"/tmp/containers-user-984/containers\" from database"
time="2024-05-30T13:49:41+02:00" level=debug msg="systemd-logind: Unknown object '/'."
time="2024-05-30T13:49:41+02:00" level=debug msg="Using graph driver overlay"
time="2024-05-30T13:49:41+02:00" level=debug msg="Using graph root /var/lib/jenkins/.local/share/containers/storage"
time="2024-05-30T13:49:41+02:00" level=debug msg="Using run root /tmp/containers-user-984/containers"
time="2024-05-30T13:49:41+02:00" level=debug msg="Using static dir /var/lib/jenkins/.local/share/containers/storage/libpod"
time="2024-05-30T13:49:41+02:00" level=debug msg="Using tmp dir /run/user/984/libpod/tmp"
time="2024-05-30T13:49:41+02:00" level=debug msg="Using volume path /var/lib/jenkins/.local/share/containers/storage/volumes"
time="2024-05-30T13:49:41+02:00" level=debug msg="Using transient store: false"
time="2024-05-30T13:49:41+02:00" level=debug msg="[graphdriver] trying provided driver \"overlay\""
time="2024-05-30T13:49:41+02:00" level=debug msg="Cached value indicated that overlay is supported"
time="2024-05-30T13:49:41+02:00" level=debug msg="Cached value indicated that overlay is supported"
time="2024-05-30T13:49:41+02:00" level=debug msg="Cached value indicated that metacopy is not being used"
time="2024-05-30T13:49:41+02:00" level=debug msg="Cached value indicated that native-diff is usable"
time="2024-05-30T13:49:41+02:00" level=debug msg="backingFs=xfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false"
time="2024-05-30T13:49:41+02:00" level=debug msg="Initializing event backend file"
time="2024-05-30T13:49:41+02:00" level=debug msg="Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument"
time="2024-05-30T13:49:41+02:00" level=debug msg="Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument"
time="2024-05-30T13:49:41+02:00" level=debug msg="Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument"
time="2024-05-30T13:49:41+02:00" level=debug msg="Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument"
time="2024-05-30T13:49:41+02:00" level=debug msg="Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument"
time="2024-05-30T13:49:41+02:00" level=debug msg="Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument"
time="2024-05-30T13:49:41+02:00" level=debug msg="Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument"
time="2024-05-30T13:49:41+02:00" level=debug msg="Using OCI runtime \"/usr/bin/runc\""
time="2024-05-30T13:49:41+02:00" level=info msg="Setting parallel job count to 25"
time="2024-05-30T13:49:41+02:00" level=debug msg="Starting parallel job on container cbcbe2f2b3e60e8292a2e87f655787858fe67b1d912bebee71ef098ad87f4372"
time="2024-05-30T13:49:41+02:00" level=debug msg="Stopping ctr cbcbe2f2b3e60e8292a2e87f655787858fe67b1d912bebee71ef098ad87f4372 (timeout 1)"
time="2024-05-30T13:49:41+02:00" level=debug msg="Stopping container cbcbe2f2b3e60e8292a2e87f655787858fe67b1d912bebee71ef098ad87f4372 (PID 2877060)"
time="2024-05-30T13:49:41+02:00" level=debug msg="Sending signal 37 to container cbcbe2f2b3e60e8292a2e87f655787858fe67b1d912bebee71ef098ad87f4372"
time="2024-05-30T13:49:42+02:00" level=debug msg="Timed out stopping container cbcbe2f2b3e60e8292a2e87f655787858fe67b1d912bebee71ef098ad87f4372 with (37), resorting to SIGKILL: given PID did not die within timeout"
time="2024-05-30T13:49:42+02:00" level=warning msg="StopSignal (37) failed to stop container autotests in 1 seconds, resorting to SIGKILL"
time="2024-05-30T13:49:42+02:00" level=debug msg="Sending signal 9 to container cbcbe2f2b3e60e8292a2e87f655787858fe67b1d912bebee71ef098ad87f4372"
time="2024-05-30T13:49:42+02:00" level=debug msg="Container \"cbcbe2f2b3e60e8292a2e87f655787858fe67b1d912bebee71ef098ad87f4372\" state changed from \"stopping\" to \"stopped\" while waiting for it to be stopped: discontinuing stop procedure as another process interfered"
time="2024-05-30T13:49:42+02:00" level=debug msg="Cleaning up container cbcbe2f2b3e60e8292a2e87f655787858fe67b1d912bebee71ef098ad87f4372"
time="2024-05-30T13:49:42+02:00" level=debug msg="Tearing down network namespace at /run/user/984/netns/netns-bd2e8ebf-1aca-15ce-28ac-a7ec3b98ade7 for container cbcbe2f2b3e60e8292a2e87f655787858fe67b1d912bebee71ef098ad87f4372"
time="2024-05-30T13:49:42+02:00" level=debug msg="Successfully cleaned up container cbcbe2f2b3e60e8292a2e87f655787858fe67b1d912bebee71ef098ad87f4372"
time="2024-05-30T13:49:42+02:00" level=debug msg="Unmounted container \"cbcbe2f2b3e60e8292a2e87f655787858fe67b1d912bebee71ef098ad87f4372\""
autotests
time="2024-05-30T13:49:42+02:00" level=debug msg="Called stop.PersistentPostRunE(podman stop --time 1 --log-level=debug autotests)"
time="2024-05-30T13:49:42+02:00" level=debug msg="Shutting down engines"
Finished: SUCCESS

And here is the log from the next run, that fails:

+ ./build.sh
++ podman build --tag autotests -f autotests/Containerfile
STEP 1/5: FROM centos:stream8
STEP 2/5: RUN sed -i 's/^LANG=.*/LANG="C.UTF-8"/' /etc/locale.conf
--> Using cache 617b24ee5cd7a176f0b54b7f52651191f02187546bdbb5dc9a74dc2b96213b7b
--> 617b24ee5cd7
STEP 3/5: RUN dnf -y install python3 python3-nose policycoreutils
--> Using cache a5bcac3697fd23c47021793693d2e16b81519b58128d3d60f9366d1dd2188a2f
--> a5bcac3697fd
STEP 4/5: RUN dnf -y install unzip python3 ghostscript libX11 xorg-x11-xauth nss nspr postfix iproute procps-ng
--> Using cache 4d79b8c7f0afe3c2819475acb3cc76b827af4cef04f8b421420db9f529f556d2
--> 4d79b8c7f0af
STEP 5/5: CMD ["/sbin/init"]
--> Using cache 6d0abd13941abdb047027b3763395b2f8a3fa37f248965fff98c39afb39a7d13
COMMIT autotests
--> 6d0abd13941a
Successfully tagged localhost/autotests:latest
6d0abd13941abdb047027b3763395b2f8a3fa37f248965fff98c39afb39a7d13
++ trap finish EXIT
++ podman run --rm --name autotests --log-level=debug --detach --privileged autotests
time="2024-05-30T13:49:49+02:00" level=info msg="podman filtering at log level debug"
time="2024-05-30T13:49:49+02:00" level=debug msg="Called run.PersistentPreRunE(podman run --rm --name autotests --log-level=debug --detach --privileged autotests)"
time="2024-05-30T13:49:49+02:00" level=debug msg="Using conmon: \"/usr/bin/conmon\""
time="2024-05-30T13:49:49+02:00" level=info msg="Using sqlite as database backend"
time="2024-05-30T13:49:49+02:00" level=debug msg="systemd-logind: Unknown object '/'."
time="2024-05-30T13:49:49+02:00" level=debug msg="Using graph driver overlay"
time="2024-05-30T13:49:49+02:00" level=debug msg="Using graph root /var/lib/jenkins/.local/share/containers/storage"
time="2024-05-30T13:49:49+02:00" level=debug msg="Using run root /tmp/containers-user-984/containers"
time="2024-05-30T13:49:49+02:00" level=debug msg="Using static dir /var/lib/jenkins/.local/share/containers/storage/libpod"
time="2024-05-30T13:49:49+02:00" level=debug msg="Using tmp dir /run/user/984/libpod/tmp"
time="2024-05-30T13:49:49+02:00" level=debug msg="Using volume path /var/lib/jenkins/.local/share/containers/storage/volumes"
time="2024-05-30T13:49:49+02:00" level=debug msg="Using transient store: false"
time="2024-05-30T13:49:49+02:00" level=debug msg="Not configuring container store"
time="2024-05-30T13:49:49+02:00" level=debug msg="Initializing event backend file"
time="2024-05-30T13:49:49+02:00" level=debug msg="Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument"
time="2024-05-30T13:49:49+02:00" level=debug msg="Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument"
time="2024-05-30T13:49:49+02:00" level=debug msg="Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument"
time="2024-05-30T13:49:49+02:00" level=debug msg="Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument"
time="2024-05-30T13:49:49+02:00" level=debug msg="Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument"
time="2024-05-30T13:49:49+02:00" level=debug msg="Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument"
time="2024-05-30T13:49:49+02:00" level=debug msg="Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument"
time="2024-05-30T13:49:49+02:00" level=debug msg="Using OCI runtime \"/usr/bin/runc\""
time="2024-05-30T13:49:49+02:00" level=info msg="Setting parallel job count to 25"
time="2024-05-30T13:49:49+02:00" level=info msg="podman filtering at log level debug"
time="2024-05-30T13:49:49+02:00" level=debug msg="Called run.PersistentPreRunE(podman run --rm --name autotests --log-level=debug --detach --privileged autotests)"
time="2024-05-30T13:49:49+02:00" level=debug msg="Using conmon: \"/usr/bin/conmon\""
time="2024-05-30T13:49:49+02:00" level=info msg="Using sqlite as database backend"
time="2024-05-30T13:49:49+02:00" level=debug msg="Overriding run root \"/run/user/984/containers\" with \"/tmp/containers-user-984/containers\" from database"
time="2024-05-30T13:49:49+02:00" level=debug msg="systemd-logind: Unknown object '/'."
time="2024-05-30T13:49:49+02:00" level=debug msg="Using graph driver overlay"
time="2024-05-30T13:49:49+02:00" level=debug msg="Using graph root /var/lib/jenkins/.local/share/containers/storage"
time="2024-05-30T13:49:49+02:00" level=debug msg="Using run root /tmp/containers-user-984/containers"
time="2024-05-30T13:49:49+02:00" level=debug msg="Using static dir /var/lib/jenkins/.local/share/containers/storage/libpod"
time="2024-05-30T13:49:49+02:00" level=debug msg="Using tmp dir /run/user/984/libpod/tmp"
time="2024-05-30T13:49:49+02:00" level=debug msg="Using volume path /var/lib/jenkins/.local/share/containers/storage/volumes"
time="2024-05-30T13:49:49+02:00" level=debug msg="Using transient store: false"
time="2024-05-30T13:49:49+02:00" level=debug msg="[graphdriver] trying provided driver \"overlay\""
time="2024-05-30T13:49:49+02:00" level=debug msg="Cached value indicated that overlay is supported"
time="2024-05-30T13:49:49+02:00" level=debug msg="Cached value indicated that overlay is supported"
time="2024-05-30T13:49:49+02:00" level=debug msg="Cached value indicated that metacopy is not being used"
time="2024-05-30T13:49:49+02:00" level=debug msg="Cached value indicated that native-diff is usable"
time="2024-05-30T13:49:49+02:00" level=debug msg="backingFs=xfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false"
time="2024-05-30T13:49:49+02:00" level=debug msg="Initializing event backend file"
time="2024-05-30T13:49:49+02:00" level=debug msg="Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument"
time="2024-05-30T13:49:49+02:00" level=debug msg="Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument"
time="2024-05-30T13:49:49+02:00" level=debug msg="Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument"
time="2024-05-30T13:49:49+02:00" level=debug msg="Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument"
time="2024-05-30T13:49:49+02:00" level=debug msg="Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument"
time="2024-05-30T13:49:49+02:00" level=debug msg="Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument"
time="2024-05-30T13:49:49+02:00" level=debug msg="Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument"
time="2024-05-30T13:49:49+02:00" level=debug msg="Using OCI runtime \"/usr/bin/runc\""
time="2024-05-30T13:49:49+02:00" level=info msg="Setting parallel job count to 25"
time="2024-05-30T13:49:49+02:00" level=debug msg="Successfully loaded network podman: &{podman 2f259bab93aaaaa2542ba43ef33eb990d0999ee1b9924b557b7be53c0b7a1bb9 bridge cni-podman0 2021-11-10 13:33:47.575404664 +0100 CET [{{{10.88.0.0 ffff0000}} 10.88.0.1 <nil>}] [] false false false [] map[] map[] map[driver:host-local]}"
time="2024-05-30T13:49:49+02:00" level=debug msg="Successfully loaded 1 networks"
time="2024-05-30T13:49:49+02:00" level=debug msg="Pulling image autotests (policy: missing)"
time="2024-05-30T13:49:49+02:00" level=debug msg="Looking up image \"autotests\" in local containers storage"
time="2024-05-30T13:49:49+02:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux  [] }"
time="2024-05-30T13:49:49+02:00" level=debug msg="Loading registries configuration \"/etc/containers/registries.conf\""
time="2024-05-30T13:49:49+02:00" level=debug msg="Loading registries configuration \"/etc/containers/registries.conf.d/000-shortnames.conf\""
time="2024-05-30T13:49:49+02:00" level=debug msg="Loading registries configuration \"/etc/containers/registries.conf.d/001-rhel-shortnames.conf\""
time="2024-05-30T13:49:49+02:00" level=debug msg="Loading registries configuration \"/etc/containers/registries.conf.d/002-rhel-shortnames-overrides.conf\""
time="2024-05-30T13:49:49+02:00" level=debug msg="Trying \"localhost/autotests:latest\" ..."
time="2024-05-30T13:49:49+02:00" level=debug msg="parsed reference into \"[overlay@/var/lib/jenkins/.local/share/containers/storage+/tmp/containers-user-984/containers]@6d0abd13941abdb047027b3763395b2f8a3fa37f248965fff98c39afb39a7d13\""
time="2024-05-30T13:49:49+02:00" level=debug msg="Found image \"autotests\" as \"localhost/autotests:latest\" in local containers storage"
time="2024-05-30T13:49:49+02:00" level=debug msg="Found image \"autotests\" as \"localhost/autotests:latest\" in local containers storage ([overlay@/var/lib/jenkins/.local/share/containers/storage+/tmp/containers-user-984/containers]@6d0abd13941abdb047027b3763395b2f8a3fa37f248965fff98c39afb39a7d13)"
time="2024-05-30T13:49:49+02:00" level=debug msg="exporting opaque data as blob \"sha256:6d0abd13941abdb047027b3763395b2f8a3fa37f248965fff98c39afb39a7d13\""
time="2024-05-30T13:49:49+02:00" level=debug msg="Looking up image \"localhost/autotests:latest\" in local containers storage"
time="2024-05-30T13:49:49+02:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux  [] }"
time="2024-05-30T13:49:49+02:00" level=debug msg="Trying \"localhost/autotests:latest\" ..."
time="2024-05-30T13:49:49+02:00" level=debug msg="parsed reference into \"[overlay@/var/lib/jenkins/.local/share/containers/storage+/tmp/containers-user-984/containers]@6d0abd13941abdb047027b3763395b2f8a3fa37f248965fff98c39afb39a7d13\""
time="2024-05-30T13:49:49+02:00" level=debug msg="Found image \"localhost/autotests:latest\" as \"localhost/autotests:latest\" in local containers storage"
time="2024-05-30T13:49:49+02:00" level=debug msg="Found image \"localhost/autotests:latest\" as \"localhost/autotests:latest\" in local containers storage ([overlay@/var/lib/jenkins/.local/share/containers/storage+/tmp/containers-user-984/containers]@6d0abd13941abdb047027b3763395b2f8a3fa37f248965fff98c39afb39a7d13)"
time="2024-05-30T13:49:49+02:00" level=debug msg="exporting opaque data as blob \"sha256:6d0abd13941abdb047027b3763395b2f8a3fa37f248965fff98c39afb39a7d13\""
time="2024-05-30T13:49:49+02:00" level=debug msg="Looking up image \"autotests\" in local containers storage"
time="2024-05-30T13:49:49+02:00" level=debug msg="Normalized platform linux/amd64 to {amd64 linux  [] }"
time="2024-05-30T13:49:49+02:00" level=debug msg="Trying \"localhost/autotests:latest\" ..."
time="2024-05-30T13:49:49+02:00" level=debug msg="parsed reference into \"[overlay@/var/lib/jenkins/.local/share/containers/storage+/tmp/containers-user-984/containers]@6d0abd13941abdb047027b3763395b2f8a3fa37f248965fff98c39afb39a7d13\""
time="2024-05-30T13:49:49+02:00" level=debug msg="Found image \"autotests\" as \"localhost/autotests:latest\" in local containers storage"
time="2024-05-30T13:49:49+02:00" level=debug msg="Found image \"autotests\" as \"localhost/autotests:latest\" in local containers storage ([overlay@/var/lib/jenkins/.local/share/containers/storage+/tmp/containers-user-984/containers]@6d0abd13941abdb047027b3763395b2f8a3fa37f248965fff98c39afb39a7d13)"
time="2024-05-30T13:49:49+02:00" level=debug msg="exporting opaque data as blob \"sha256:6d0abd13941abdb047027b3763395b2f8a3fa37f248965fff98c39afb39a7d13\""
time="2024-05-30T13:49:49+02:00" level=debug msg="Inspecting image 6d0abd13941abdb047027b3763395b2f8a3fa37f248965fff98c39afb39a7d13"
time="2024-05-30T13:49:49+02:00" level=debug msg="exporting opaque data as blob \"sha256:6d0abd13941abdb047027b3763395b2f8a3fa37f248965fff98c39afb39a7d13\""
time="2024-05-30T13:49:49+02:00" level=debug msg="Inspecting image 6d0abd13941abdb047027b3763395b2f8a3fa37f248965fff98c39afb39a7d13"
time="2024-05-30T13:49:49+02:00" level=debug msg="Inspecting image 6d0abd13941abdb047027b3763395b2f8a3fa37f248965fff98c39afb39a7d13"
time="2024-05-30T13:49:49+02:00" level=debug msg="Inspecting image 6d0abd13941abdb047027b3763395b2f8a3fa37f248965fff98c39afb39a7d13"
time="2024-05-30T13:49:49+02:00" level=debug msg="using systemd mode: true"
time="2024-05-30T13:49:49+02:00" level=debug msg="setting container name autotests"
time="2024-05-30T13:49:49+02:00" level=debug msg="No hostname set; container's hostname will default to runtime default"
time="2024-05-30T13:49:49+02:00" level=debug msg="Loading seccomp profile from \"/usr/share/containers/seccomp.json\""
time="2024-05-30T13:49:49+02:00" level=debug msg="Allocated lock 1 for container 2a9306a5e237c08400373b74ea10bc75af2610ee0352c88a6424ebc7117accb8"
time="2024-05-30T13:49:49+02:00" level=debug msg="parsed reference into \"[overlay@/var/lib/jenkins/.local/share/containers/storage+/tmp/containers-user-984/containers]@6d0abd13941abdb047027b3763395b2f8a3fa37f248965fff98c39afb39a7d13\""
time="2024-05-30T13:49:49+02:00" level=debug msg="exporting opaque data as blob \"sha256:6d0abd13941abdb047027b3763395b2f8a3fa37f248965fff98c39afb39a7d13\""
time="2024-05-30T13:49:49+02:00" level=debug msg="Cached value indicated that idmapped mounts for overlay are not supported"
time="2024-05-30T13:49:49+02:00" level=debug msg="Check for idmapped mounts support "
time="2024-05-30T13:49:49+02:00" level=debug msg="Failed to create container autotests(2a9306a5e237c08400373b74ea10bc75af2610ee0352c88a6424ebc7117accb8): the container name \"autotests\" is already in use by cbcbe2f2b3e60e8292a2e87f655787858fe67b1d912bebee71ef098ad87f4372. You have to remove that container to be able to reuse that name: that name is already in use"
Error: creating container storage: the container name "autotests" is already in use by cbcbe2f2b3e60e8292a2e87f655787858fe67b1d912bebee71ef098ad87f4372. You have to remove that container to be able to reuse that name: that name is already in use, or use --replace to instruct Podman to do so.
time="2024-05-30T13:49:49+02:00" level=debug msg="Shutting down engines"
++ finish

I've also tried to reproduce this issue manually from the command line, executing podman run --rm ... and podman stop numerous times, but so far I have been unable to reproduce it.
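
The manual reproduction attempts can be sketched as a stress loop (a hypothetical helper, using the same 1-second stop timeout seen in the failing job's debug log):

```shell
# Hypothetical stress loop for trying to reproduce the race from the
# command line: repeat the run/stop cycle in quick succession.
stress_stop_start() {
    iterations=$1
    i=0
    while [ "$i" -lt "$iterations" ]; do
        podman run --rm --name autotests --detach --privileged autotests
        # --time 1 matches the short stop timeout in the failing job
        podman stop --time 1 autotests
        i=$((i + 1))
    done
}
```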

As mentioned, this started on 2024-05-23; on that date, the system received RHEL updates to

    Upgrade       containers-common-2:1-81.module+el8.10.0+21340+c6c7475a.x86_64              @rhel-8-for-x86_64-appstream-rpms
    Upgraded      containers-common-2:1-71.module+el8.9.0+21697+6a5e98e7.x86_64               @@System
    Upgrade       podman-4:4.9.4-1.module+el8.10.0+21632+761e0d34.x86_64                      @rhel-8-for-x86_64-appstream-rpms
    Upgraded      podman-3:4.6.1-9.module+el8.9.0+21697+6a5e98e7.x86_64                       @@System
    Upgrade       podman-catatonit-4:4.9.4-1.module+el8.10.0+21632+761e0d34.x86_64            @rhel-8-for-x86_64-appstream-rpms
    Upgraded      podman-catatonit-3:4.6.1-9.module+el8.9.0+21697+6a5e98e7.x86_64             @@System
    Upgrade       podman-gvproxy-4:4.9.4-1.module+el8.10.0+21632+761e0d34.x86_64              @rhel-8-for-x86_64-appstream-rpms
    Upgraded      podman-gvproxy-3:4.6.1-9.module+el8.9.0+21697+6a5e98e7.x86_64               @@System
    Upgrade       podman-plugins-4:4.9.4-1.module+el8.10.0+21632+761e0d34.x86_64              @rhel-8-for-x86_64-appstream-rpms
    Upgraded      podman-plugins-3:4.6.1-9.module+el8.9.0+21697+6a5e98e7.x86_64               @@System
    Upgrade       systemd-239-82.el8.x86_64                                                   @rhel-8-for-x86_64-baseos-rpms
    Upgraded      systemd-239-78.el8.x86_64                                                   @@System
    Upgrade       systemd-container-239-82.el8.x86_64                                         @rhel-8-for-x86_64-baseos-rpms
    Upgraded      systemd-container-239-78.el8.x86_64                                         @@System
    Upgrade       systemd-libs-239-82.el8.x86_64                                              @rhel-8-for-x86_64-baseos-rpms
    Upgraded      systemd-libs-239-78.el8.x86_64                                              @@System
    Upgrade       systemd-pam-239-82.el8.x86_64                                               @rhel-8-for-x86_64-baseos-rpms
    Upgraded      systemd-pam-239-78.el8.x86_64                                               @@System
    Upgrade       systemd-udev-239-82.el8.x86_64                                              @rhel-8-for-x86_64-baseos-rpms
    Upgraded      systemd-udev-239-78.el8.x86_64                                              @@System

Steps to reproduce the issue

See the description above; the failure is sporadic and not reliably reproducible.

Describe the results you received

Error: creating container storage: the container name "autotests" is already in use by cbcbe2f2b3e60e8292a2e87f655787858fe67b1d912bebee71ef098ad87f4372. You have to remove that container to be able to reuse that name: that name is already in use, or use --replace to instruct Podman to do so.

Describe the results you expected

The container should start successfully

podman info output

host:
  arch: amd64
  buildahVersion: 1.33.7
  cgroupControllers:
  - memory
  - pids
  cgroupManager: cgroupfs
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.10-1.module+el8.10.0+21077+98b84d8a.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.10, commit: 80c4f656297773fb630a4d966add3242abab39a4'
  cpuUtilization:
    idlePercent: 98.8
    systemPercent: 0.35
    userPercent: 0.85
  cpus: 8
  databaseBackend: sqlite
  distribution:
    distribution: rhel
    version: "8.10"
  eventLogger: file
  freeLocks: 2048
  hostname: build.lkpg.cendio.se
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 980
      size: 1
    - container_id: 1
      host_id: 10000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 984
      size: 1
    - container_id: 1
      host_id: 10000
      size: 65536
  kernel: 4.18.0-553.el8_10.x86_64
  linkmode: dynamic
  logDriver: k8s-file
  memFree: 572874752
  memTotal: 12297306112
  networkBackend: cni
  networkBackendInfo:
    backend: cni
    dns:
      package: podman-plugins-4.9.4-1.module+el8.10.0+21632+761e0d34.x86_64
      path: /usr/libexec/cni/dnsname
      version: |-
        CNI dnsname plugin
        version: 1.4.0-dev
        commit: unknown
        CNI protocol versions supported: 0.1.0, 0.2.0, 0.3.0, 0.3.1, 0.4.0, 1.0.0
    package: containernetworking-plugins-1.4.0-2.module+el8.10.0+21366+f9cb49f8.x86_64
    path: /usr/libexec/cni
  ociRuntime:
    name: runc
    package: runc-1.1.12-1.module+el8.10.0+21251+62b7388c.x86_64
    path: /usr/bin/runc
    version: |-
      runc version 1.1.12
      spec: 1.0.2-dev
      go: go1.21.3
      libseccomp: 2.5.2
  os: linux
  pasta:
    executable: ""
    package: ""
    version: ""
  remoteSocket:
    exists: false
    path: /run/user/984/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_NET_RAW,CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.2.3-1.module+el8.10.0+21306+6be40ce7.x86_64
    version: |-
      slirp4netns version 1.2.3
      commit: c22fde291bb35b354e6ca44d13be181c76a0a432
      libslirp: 4.4.0
      SLIRP_CONFIG_VERSION_MAX: 3
      libseccomp: 2.5.2
  swapFree: 0
  swapTotal: 0
  uptime: 50h 31m 26.00s (Approximately 2.08 days)
  variant: ""
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - registry.access.redhat.com
  - registry.redhat.io
  - docker.io
store:
  configFile: /var/lib/jenkins/.config/containers/storage.conf
  containerStore:
    number: 0
    paused: 0
    running: 0
    stopped: 0
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /var/lib/jenkins/.local/share/containers/storage
  graphRootAllocated: 126901600256
  graphRootUsed: 91665690624
  graphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Supports shifting: "false"
    Supports volatile: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 5
  runRoot: /tmp/containers-user-984/containers
  transientStore: false
  volumePath: /var/lib/jenkins/.local/share/containers/storage/volumes
version:
  APIVersion: 4.9.4-rhel
  Built: 1711986940
  BuiltTime: Mon Apr  1 17:55:40 2024
  GitCommit: ""
  GoVersion: go1.21.7 (Red Hat 1.21.7-1.module+el8.10.0+21318+5ea197f8)
  Os: linux
  OsArch: linux/amd64
  Version: 4.9.4-rhel

Podman in a container

No

Privileged Or Rootless

Privileged

Upstream Latest Release

No

Additional environment details

Podman executed through Jenkins 2.452.1

Additional information

No response

@CendioMartin CendioMartin added the kind/bug Categorizes issue or PR as related to a bug. label May 30, 2024
@CendioMartin
Author

I just managed to capture something on the command line while the Jenkins job was running:

[jenkins@build workspace]$ podman ps -a
CONTAINER ID  IMAGE                    COMMAND     CREATED        STATUS      PORTS       NAMES
8a596e0f3675  localhost/autotests:latest  /sbin/init  2 seconds ago  Stopping                autotests
[jenkins@build workspace]$ podman ps -a
CONTAINER ID  IMAGE                    COMMAND     CREATED        STATUS      PORTS       NAMES
8a596e0f3675  localhost/autotests:latest  /sbin/init  3 seconds ago  Removing                autotests
WARN[0000] Found incomplete layer "1679fb7d73228ecdab93ad4cbcb81ff97e7c16090634ef6340768d4998be1188", deleting it 
[jenkins@build workspace]$ podman ps -a
CONTAINER ID  IMAGE                    COMMAND     CREATED        STATUS      PORTS       NAMES
8a596e0f3675  localhost/autotests:latest  /sbin/init  7 seconds ago  Removing                autotests

After that WARN[0000] Found incomplete layer "1679fb7d73228ecdab93ad4cbcb81ff97e7c16090634ef6340768d4998be1188", deleting it message, the container is stuck in the Removing state

@mheon
Member

mheon commented May 30, 2024

While this is definitely a bug, I will note that you should be able to work around it by adding --replace to your podman run command line.

Also: is it possible that something is killing processes in the background? My read on this is that the cleanup process launched when the container exits is probably being killed by Jenkins before it can completely remove the container.
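For reference, the --replace workaround applied to the reproducer's command would look like the sketch below. The container/image name "autotests" is taken from the report; the command is wrapped in a function and guarded so it only runs where podman is actually installed.

```shell
#!/bin/bash
# Same run command as in runtests.sh, with --replace added. --replace tells
# Podman to remove any existing container already holding the name
# "autotests" before creating the new one.
run_autotests() {
    podman run --replace --rm --name autotests --detach --privileged autotests
}

# Guard: only start the container where podman is actually installed.
if command -v podman >/dev/null 2>&1; then
    run_autotests
fi
```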

@Luap99
Member

Luap99 commented May 30, 2024

Agreed. Also, if you want the container gone, you can just use podman rm -f instead of running podman stop and relying on the automatic deletion from the --rm flag.

And in general, if you have a problem in RHEL you should contact RHEL support, not upstream, as we only support the latest version upstream.
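Putting both suggestions together, the cleanup in runtests.sh could be hardened along these lines. This is a sketch, not a tested fix: it keeps the container name from the report, drops --rm since the EXIT trap now removes the container explicitly, and is guarded so the commands only run where podman exists.

```shell
#!/bin/bash
# Hardened cleanup for runtests.sh, using the container name "autotests"
# from the report.
cleanup() {
    # rm -f stops and removes the container in one foreground step, so the
    # job no longer depends on the detached cleanup process that --rm
    # spawns; --ignore makes this a no-op if the container is already gone.
    podman rm -f --ignore autotests
}

# Guard: only run where podman is actually installed.
if command -v podman >/dev/null 2>&1; then
    trap cleanup EXIT
    # --replace clears any leftover container holding the name; --rm is
    # dropped because cleanup removes the container explicitly.
    podman run --replace --name autotests --detach --privileged autotests
    podman exec -w /root autotests bash -c 'unzip foo.zip'
fi
```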

@mheon
Member

mheon commented Aug 19, 2024

#23670 should fix this.

TomSweeneyRedHat pushed a commit to TomSweeneyRedHat/podman that referenced this issue Oct 8, 2024
This started off as an attempt to make `podman stop` on a
container started with `--rm` actually remove the container,
instead of just cleaning it up and waiting for the cleanup
process to finish the removal.

In the process, I realized that `podman run --rmi` was rather
broken. It was only done as part of the Podman CLI, not the
cleanup process (meaning it only worked with attached containers)
and the way it was wired meant that I was fairly confident that
it wouldn't work if I did a `podman stop` on an attached
container run with `--rmi`. I rewired it to use the same
mechanism that `podman run --rm` uses, so it should be a lot more
durable now, and I also wired it into `podman inspect` so you can
tell that a container will remove its image.

Tests have been added for the changes to `podman run --rmi`. No
tests for `stop` on a `run --rm` container as that would be racy.

Fixes containers#22852
Fixes RHEL-39513

For this branch
Backport of: [458ba5a](containers@458ba5a)
Fixes: https://issues.redhat.com/browse/RHEL-61667

Signed-off-by: Matt Heon <[email protected]>
Signed-off-by: tomsweeneyredhat <[email protected]>
@stale-locking-app stale-locking-app bot added the locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments. label Nov 19, 2024
@stale-locking-app stale-locking-app bot locked as resolved and limited conversation to collaborators Nov 19, 2024