rootless containers don't work anymore #5291
Comments
Could you check to see if you have any podman processes running on your system and kill them?
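To act on the suggestion above, here is a quick sketch of how one might look for leftover rootless podman processes. pgrep/pkill are standard procps tools; the user filter and the fallback message are illustrative, not part of the original comment:

```shell
# List any podman processes owned by the current user; pkill with the same
# arguments would terminate them. The guard makes the check report cleanly
# when nothing is found (pgrep exits nonzero on no match).
pgrep -u "$(id -u)" -a podman || echo "no podman processes found"
```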
Could this be #5183?
It is strange. This started a few days ago. Yesterday it suddenly worked; today the problem was there again. I rebooted several times, but the problem remained. I have now killed all podman processes running as my user, removed all images and containers, and started from scratch. Now my rootless containers work again; let's see for how long this time. What I have noticed over the last months of working with podman is that these problems are typically related to rootless containers. My root containers are stable all the time.
I cannot confirm that it works now, because there is another problem:
ERRO[0000] Error refreshing volume pgsql: error acquiring lock 1 for volume pgsql: file exists
What does this mean? How can I fix it? This error appears when I run "podman ps" after I boot my PC; running it a second time, the error is gone and I can start containers.
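The debug log below shows the rootless SHM lock manager initialized at /libpod_rootless_lock_1000. On Linux, POSIX shared-memory objects appear under /dev/shm, so a stale lock segment surviving a reboot can be inspected like this; the exact filename pattern is an assumption based on that log line:

```shell
# Look for a leftover rootless libpod lock segment for the current user.
# POSIX shm objects live under /dev/shm on Linux; the suffix is the UID.
lock="/dev/shm/libpod_rootless_lock_$(id -u)"
ls -l "$lock" 2>/dev/null || echo "no rootless lock segment at $lock"
```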
Run 'podman system renumber' - this looks like a duplicate lock allocation, and that command should resolve it.
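A hedged sketch of the suggested fix: 'podman system renumber' reallocates lock numbers and is intended to be run while no containers or other podman processes are active. The installation guard is illustrative and makes the snippet a no-op on machines without podman:

```shell
# Stale lock allocations can be cleared by renumbering; run this with all
# containers stopped. The command is skipped when podman is not installed.
if command -v podman >/dev/null 2>&1; then
    podman system renumber
else
    echo "podman not installed; nothing to renumber"
fi
```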
***@***.*** ~]$ podman --log-level=debug ps
DEBU[0000] Reading configuration file "/var/home/fansari/.config/containers/libpod.conf"
DEBU[0000] Merged system config "/var/home/fansari/.config/containers/libpod.conf": &{{false false false true true true} 0 { [] [] []} /var/home/fansari/.local/share/containers/storage/volumes docker:// /usr/bin/crun map[crun:[/usr/bin/crun /usr/local/bin/crun] runc:[/usr/bin/runc /usr/sbin/runc /usr/local/bin/runc /usr/local/sbin/runc /sbin/runc /bin/runc /usr/lib/cri-o-runc/sbin/runc /run/current-system/sw/bin/runc]] [crun runc] [crun] [] [/usr/libexec/podman/conmon /usr/local/libexec/podman/conmon /usr/local/lib/podman/conmon /usr/bin/conmon /usr/sbin/conmon /usr/local/bin/conmon /usr/local/sbin/conmon /run/current-system/sw/bin/conmon] [PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] cgroupfs /var/home/fansari/.local/share/containers/storage/libpod /run/user/1000/libpod/tmp -1 false /etc/cni/net.d/ [/usr/libexec/cni /usr/lib/cni /usr/local/lib/cni /opt/cni/bin] [] k8s.gcr.io/pause:3.1 /pause true true 2048 shm journald ctrl-p,ctrl-q false false}
DEBU[0000] Using conmon: "/usr/bin/conmon"
DEBU[0000] Initializing boltdb state at /var/home/fansari/.local/share/containers/storage/libpod/bolt_state.db
DEBU[0000] Using graph driver overlay
DEBU[0000] Using graph root /var/mnt/data/podman/fansari/containers/storage
DEBU[0000] Using run root /run/user/1000
DEBU[0000] Using static dir /var/home/fansari/.local/share/containers/storage/libpod
DEBU[0000] Using tmp dir /run/user/1000/libpod/tmp
DEBU[0000] Using volume path /var/home/fansari/.local/share/containers/storage/volumes
DEBU[0000] Set libpod namespace to ""
DEBU[0000] Not configuring container store
DEBU[0000] Initializing event backend journald
DEBU[0000] using runtime "/usr/bin/crun"
DEBU[0000] using runtime "/usr/bin/runc"
DEBU[0000] using runtime "/usr/bin/crun"
DEBU[0000] Reading configuration file "/var/home/fansari/.config/containers/libpod.conf"
DEBU[0000] Merged system config "/var/home/fansari/.config/containers/libpod.conf": &{{false false false true true true} 0 { [] [] []} /var/home/fansari/.local/share/containers/storage/volumes docker:// /usr/bin/crun map[crun:[/usr/bin/crun /usr/local/bin/crun] runc:[/usr/bin/runc /usr/sbin/runc /usr/local/bin/runc /usr/local/sbin/runc /sbin/runc /bin/runc /usr/lib/cri-o-runc/sbin/runc /run/current-system/sw/bin/runc]] [crun runc] [crun] [] [/usr/libexec/podman/conmon /usr/local/libexec/podman/conmon /usr/local/lib/podman/conmon /usr/bin/conmon /usr/sbin/conmon /usr/local/bin/conmon /usr/local/sbin/conmon /run/current-system/sw/bin/conmon] [PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] cgroupfs /var/home/fansari/.local/share/containers/storage/libpod /run/user/1000/libpod/tmp -1 false /etc/cni/net.d/ [/usr/libexec/cni /usr/lib/cni /usr/local/lib/cni /opt/cni/bin] [] k8s.gcr.io/pause:3.1 /pause true true 2048 shm journald ctrl-p,ctrl-q false false}
DEBU[0000] Using conmon: "/usr/bin/conmon"
DEBU[0000] Initializing boltdb state at /var/home/fansari/.local/share/containers/storage/libpod/bolt_state.db
DEBU[0000] Using graph driver overlay
DEBU[0000] Using graph root /var/mnt/data/podman/fansari/containers/storage
DEBU[0000] Using run root /run/user/1000
DEBU[0000] Using static dir /var/home/fansari/.local/share/containers/storage/libpod
DEBU[0000] Using tmp dir /run/user/1000/libpod/tmp
DEBU[0000] Using volume path /var/home/fansari/.local/share/containers/storage/volumes
DEBU[0000] Set libpod namespace to ""
DEBU[0000] No store required. Not opening container store.
DEBU[0000] Initializing event backend journald
DEBU[0000] using runtime "/usr/bin/runc"
DEBU[0000] using runtime "/usr/bin/crun"
DEBU[0000] using runtime "/usr/bin/crun"
DEBU[0000] Initialized SHM lock manager at path /libpod_rootless_lock_1000
DEBU[0000] [graphdriver] trying provided driver "overlay"
DEBU[0000] overlay: mount_program=/usr/bin/fuse-overlayfs
DEBU[0000] backingFs=xfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=false
DEBU[0000] Podman detected system restart - performing state refresh
**ERRO[0000] Error refreshing volume pgsql: error acquiring lock 1 for volume pgsql: file exists**
INFO[0000] running as rootless
DEBU[0000] Reading configuration file "/var/home/fansari/.config/containers/libpod.conf"
DEBU[0000] Merged system config "/var/home/fansari/.config/containers/libpod.conf": &{{false false false true true true} 0 { [] [] []} /var/home/fansari/.local/share/containers/storage/volumes docker:// /usr/bin/crun map[crun:[/usr/bin/crun /usr/local/bin/crun] runc:[/usr/bin/runc /usr/sbin/runc /usr/local/bin/runc /usr/local/sbin/runc /sbin/runc /bin/runc /usr/lib/cri-o-runc/sbin/runc /run/current-system/sw/bin/runc]] [crun runc] [crun] [] [/usr/libexec/podman/conmon /usr/local/libexec/podman/conmon /usr/local/lib/podman/conmon /usr/bin/conmon /usr/sbin/conmon /usr/local/bin/conmon /usr/local/sbin/conmon /run/current-system/sw/bin/conmon] [PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] cgroupfs /var/home/fansari/.local/share/containers/storage/libpod /run/user/1000/libpod/tmp -1 false /etc/cni/net.d/ [/usr/libexec/cni /usr/lib/cni /usr/local/lib/cni /opt/cni/bin] [] k8s.gcr.io/pause:3.1 /pause true true 2048 shm journald ctrl-p,ctrl-q false false}
DEBU[0000] Using conmon: "/usr/bin/conmon"
DEBU[0000] Initializing boltdb state at /var/home/fansari/.local/share/containers/storage/libpod/bolt_state.db
DEBU[0000] Using graph driver overlay
DEBU[0000] Using graph root /var/mnt/data/podman/fansari/containers/storage
DEBU[0000] Using run root /run/user/1000
DEBU[0000] Using static dir /var/home/fansari/.local/share/containers/storage/libpod
DEBU[0000] Using tmp dir /run/user/1000/libpod/tmp
DEBU[0000] Using volume path /var/home/fansari/.local/share/containers/storage/volumes
DEBU[0000] Set libpod namespace to ""
DEBU[0000] No store required. Not opening container store.
DEBU[0000] Initializing event backend journald
DEBU[0000] using runtime "/usr/bin/crun"
DEBU[0000] using runtime "/usr/bin/runc"
DEBU[0000] using runtime "/usr/bin/crun"
DEBU[0000] Setting maximum workers to 8
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Yes - this fixed the issue. But now I have found that systemd no longer starts my rootless containers; even with --log-level=debug I get no output.
This is in ~/.config/systemd/user/podman-ldap.service:
On the other hand, starting the container manually after boot works:
I have fixed it now. There are two things you have to keep in mind when working with user services:
1. WantedBy=default.target - after changing this, systemd starts podman, but it hangs.
2. We also have to consider this: here I have presented a workaround with monitor-resolv-conf.service and monitor-resolv-conf.path, which is necessary to start the containers.
My conclusion: the "podman generate systemd" command is fine for root containers but not for rootless containers. The target is wrong for this case, and the necessary wait for /etc/resolv.conf is not handled.
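Based on the comment above, here is a sketch of what the adjusted user units could look like. The unit names (podman-ldap.service, monitor-resolv-conf.path) come from the comment, but every line of their contents is an assumption for illustration, not the poster's actual files:

```ini
# ~/.config/systemd/user/podman-ldap.service (sketch; contents assumed)
[Unit]
Description=Rootless podman container: ldap
# Do not start until the path unit below has confirmed /etc/resolv.conf exists.
After=monitor-resolv-conf.service

[Service]
ExecStart=/usr/bin/podman start -a ldap
ExecStop=/usr/bin/podman stop -t 10 ldap
Restart=on-failure

[Install]
# default.target is the target a user session actually reaches;
# multi-user.target (the root-side default) does not exist for user managers.
WantedBy=default.target

# ~/.config/systemd/user/monitor-resolv-conf.path (sketch; contents assumed)
[Path]
PathExists=/etc/resolv.conf

[Install]
WantedBy=default.target
```

Enabling the units with `systemctl --user enable` would then wire them into default.target, which is the change the comment describes.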
@vrothberg PTAL
This is really annoying. Sometimes it works and sometimes it does not. I would expect my containers to be in place after booting. This works for root containers; for my rootless containers it goes like this: sometimes both are up, sometimes both are hanging, and sometimes just one container is up (but not always the same container) while the other hangs. Some days ago my ldap container was hanging; today my pgsql container is hanging. What is this timeout about? What do I have to change?
I've been having weird issues where sometimes a pod will just stop forwarding a port: sometimes midway through operation, other times when I restart a container within the pod (the one exporting the port). I'm also on Fedora Silverblue 31, and it's doing my head in. Sometimes I need to reboot multiple times for it to start working again. I'm running rootless as well.
This might be due to missing network dependencies. We have fixed it in master with #5382, and it will be part of the next podman release.
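For context on the network-dependency fix mentioned above: for root (system) units, the usual systemd idiom for waiting on the network looks like the fragment below. This is a generic pattern, not the actual #5382 patch, and user-session managers typically do not provide a functional network-online.target, which is presumably why the rootless case needed separate treatment:

```ini
# Generic systemd idiom for a system service that needs the network up
# (illustrative only; not the actual change from #5382).
[Unit]
Wants=network-online.target
After=network-online.target
```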
#5427 may actually be more relevant in the rootless case.
A friendly reminder that this issue had no activity for 30 days.
I believe this is fixed. Closing; reopen if I am mistaken.
(#10655 might be relevant to people reading here as well.)
Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)
/kind bug
Description
My rootless containers are not accessible anymore. I have changed nothing; the issue started a few days ago.
"podman ps" hangs in the shell.
Steps to reproduce the issue:
Describe the results you received:
see above
Describe the results you expected:
normal output of "podman ps"
Additional information you deem important (e.g. issue happens only occasionally):
Output of "podman version":
Output of "podman info --debug":
Package info (e.g. output of "rpm -q podman" or "apt list podman"):
Additional environment details (AWS, VirtualBox, physical, etc.):
Fedora 31 Silverblue