
rootless containers don't work anymore #5291

Closed
fansari opened this issue Feb 21, 2020 · 15 comments
Labels
kind/bug Categorizes issue or PR as related to a bug. locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments. stale-issue

Comments

@fansari commented Feb 21, 2020

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description

My rootless containers are not accessible anymore. I have changed nothing; the issue started a few days ago.

"podman ps" hangs in the shell.

[fansari@bat ~]$ podman ps --log-level=debug
DEBU[0000] Reading configuration file "/var/home/fansari/.config/containers/libpod.conf" 
DEBU[0000] Merged system config "/var/home/fansari/.config/containers/libpod.conf": &{{false false false true true true} 0 {   [] [] []} /var/home/fansari/.local/share/containers/storage/volumes docker://  /usr/bin/crun map[crun:[/usr/bin/crun /usr/local/bin/crun] runc:[/usr/bin/runc /usr/sbin/runc /usr/local/bin/runc /usr/local/sbin/runc /sbin/runc /bin/runc /usr/lib/cri-o-runc/sbin/runc /run/current-system/sw/bin/runc]] [crun runc] [crun] [] [/usr/libexec/podman/conmon /usr/local/libexec/podman/conmon /usr/local/lib/podman/conmon /usr/bin/conmon /usr/sbin/conmon /usr/local/bin/conmon /usr/local/sbin/conmon /run/current-system/sw/bin/conmon] [PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] cgroupfs  /var/home/fansari/.local/share/containers/storage/libpod /run/user/1000/libpod/tmp -1 false /etc/cni/net.d/ [/usr/libexec/cni /usr/lib/cni /usr/local/lib/cni /opt/cni/bin]  []   k8s.gcr.io/pause:3.1 /pause true true  2048 shm journald  ctrl-p,ctrl-q false false} 
DEBU[0000] Using conmon: "/usr/bin/conmon"              
DEBU[0000] Initializing boltdb state at /var/home/fansari/.local/share/containers/storage/libpod/bolt_state.db 
DEBU[0000] Using graph driver overlay                   
DEBU[0000] Using graph root /var/mnt/data/podman/fansari/containers/storage 
DEBU[0000] Using run root /run/user/1000                
DEBU[0000] Using static dir /var/home/fansari/.local/share/containers/storage/libpod 
DEBU[0000] Using tmp dir /run/user/1000/libpod/tmp      
DEBU[0000] Using volume path /var/home/fansari/.local/share/containers/storage/volumes 
DEBU[0000] Set libpod namespace to ""                   
DEBU[0000] No store required. Not opening container store. 
DEBU[0000] Initializing event backend journald          
DEBU[0000] using runtime "/usr/bin/crun"                
DEBU[0000] using runtime "/usr/bin/runc"                
DEBU[0000] using runtime "/usr/bin/crun"                
INFO[0000] running as rootless                          
DEBU[0000] Reading configuration file "/var/home/fansari/.config/containers/libpod.conf" 
DEBU[0000] Merged system config "/var/home/fansari/.config/containers/libpod.conf": &{{false false false true true true} 0 {   [] [] []} /var/home/fansari/.local/share/containers/storage/volumes docker://  /usr/bin/crun map[crun:[/usr/bin/crun /usr/local/bin/crun] runc:[/usr/bin/runc /usr/sbin/runc /usr/local/bin/runc /usr/local/sbin/runc /sbin/runc /bin/runc /usr/lib/cri-o-runc/sbin/runc /run/current-system/sw/bin/runc]] [crun runc] [crun] [] [/usr/libexec/podman/conmon /usr/local/libexec/podman/conmon /usr/local/lib/podman/conmon /usr/bin/conmon /usr/sbin/conmon /usr/local/bin/conmon /usr/local/sbin/conmon /run/current-system/sw/bin/conmon] [PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] cgroupfs  /var/home/fansari/.local/share/containers/storage/libpod /run/user/1000/libpod/tmp -1 false /etc/cni/net.d/ [/usr/libexec/cni /usr/lib/cni /usr/local/lib/cni /opt/cni/bin]  []   k8s.gcr.io/pause:3.1 /pause true true  2048 shm journald  ctrl-p,ctrl-q false false} 
DEBU[0000] Using conmon: "/usr/bin/conmon"              
DEBU[0000] Initializing boltdb state at /var/home/fansari/.local/share/containers/storage/libpod/bolt_state.db 
DEBU[0000] Using graph driver overlay                   
DEBU[0000] Using graph root /var/mnt/data/podman/fansari/containers/storage 
DEBU[0000] Using run root /run/user/1000                
DEBU[0000] Using static dir /var/home/fansari/.local/share/containers/storage/libpod 
DEBU[0000] Using tmp dir /run/user/1000/libpod/tmp      
DEBU[0000] Using volume path /var/home/fansari/.local/share/containers/storage/volumes 
DEBU[0000] Set libpod namespace to ""                   
DEBU[0000] No store required. Not opening container store. 
DEBU[0000] Initializing event backend journald          
DEBU[0000] using runtime "/usr/bin/crun"                
DEBU[0000] using runtime "/usr/bin/runc"                
DEBU[0000] using runtime "/usr/bin/crun"                
DEBU[0000] Setting maximum workers to 8         

Steps to reproduce the issue:

Describe the results you received:

see above

Describe the results you expected:

normal output of "podman ps"

Additional information you deem important (e.g. issue happens only occasionally):

Output of podman version:

Version:            1.8.0
RemoteAPI Version:  1
Go Version:         go1.13.6
OS/Arch:            linux/amd64

Output of podman info --debug:

debug:
  compiler: gc
  git commit: ""
  go version: go1.13.6
  podman version: 1.8.0
host:
  BuildahVersion: 1.13.1
  CgroupVersion: v2
  Conmon:
    package: conmon-2.0.10-2.fc31.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.0.10, commit: 6b526d9888abb86b9e7de7dfdeec0da98ad32ee0'
  Distribution:
    distribution: fedora
    version: "31"
  IDMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  MemFree: 3666264064
  MemTotal: 8223121408
  OCIRuntime:
    name: crun
    package: crun-0.12.1-1.fc31.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 0.12.1
      commit: df5f2b2369b3d9f36d175e1183b26e5cee55dd0a
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +YAJL
  SwapFree: 8589930496
  SwapTotal: 8589930496
  arch: amd64
  cpus: 8
  eventlogger: journald
  hostname: bat.localdomain
  kernel: 5.4.18-200.fc31.x86_64
  os: linux
  rootless: true
  slirp4netns:
    Executable: /usr/bin/slirp4netns
    Package: slirp4netns-0.4.0-20.1.dev.gitbbd6f25.fc31.x86_64
    Version: |-
      slirp4netns version 0.4.0-beta.3+dev
      commit: bbd6f25c70d5db2a1cd3bfb0416a8db99a75ed7e
  uptime: 13m 11.52s
registries:
  search:
  - docker.io
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - registry.centos.org
  - quay.io
store:
  ConfigFile: /var/home/fansari/.config/containers/storage.conf
  ContainerStore:
    number: 3
  GraphDriverName: overlay
  GraphOptions:
    overlay.mount_program:
      Executable: /usr/bin/fuse-overlayfs
      Package: fuse-overlayfs-0.7.5-2.fc31.x86_64
      Version: |-
        fusermount3 version: 3.6.2
        fuse-overlayfs: version 0.7.5
        FUSE library version 3.6.2
        using FUSE kernel interface version 7.29
  GraphRoot: /var/mnt/data/podman/fansari/containers/storage
  GraphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "false"
  ImageStore:
    number: 6
  RunRoot: /run/user/1000
  VolumePath: /var/home/fansari/.local/share/containers/storage/volumes

Package info (e.g. output of rpm -q podman or apt list podman):

podman-1.8.0-2.fc31.x86_64

Additional environment details (AWS, VirtualBox, physical, etc.):
Fedora 31 Silverblue

@openshift-ci-robot openshift-ci-robot added the kind/bug Categorizes issue or PR as related to a bug. label Feb 21, 2020
@rhatdan (Member) commented Feb 21, 2020

Could you check to see if you have any podman processes running on your system and kill them?
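A minimal sketch of that check, assuming the leftover processes run under the current user (pgrep/pkill from procps):

```shell
# List any podman processes still running for this user
pgrep -u "$USER" -a podman
# Terminate the leftovers so the next invocation can acquire its locks again
pkill -u "$USER" podman
```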

@mheon (Member) commented Feb 21, 2020

Could this be #5183?

@fansari (Author) commented Feb 21, 2020

It is strange. This started a few days ago. Yesterday it suddenly worked; today the problem was there again. I tried rebooting several times, but the problem remained.

I have now killed all podman processes running as my user, removed all images and containers, and started from scratch.

Now my rootless containers work again. Let's see for how long this time.

What I have noticed over the last months of working with podman is that these problems are typically related to rootless containers. My root containers are stable all the time.

@fansari (Author) commented Feb 22, 2020

I cannot confirm that it works now, because there is another problem:

What does this mean? How can I fix it?

ERRO[0000] Error refreshing volume pgsql: error acquiring lock 1 for volume pgsql: file exists 

This error appears when I run "podman ps" after booting my PC. Running it a second time, the error is gone and I can start containers.

[fansari@bat ~]$ podman --log-level=debug ps
DEBU[0000] Reading configuration file "/var/home/fansari/.config/containers/libpod.conf" 
DEBU[0000] Merged system config "/var/home/fansari/.config/containers/libpod.conf": &{{false false false true true true} 0 {   [] [] []} /var/home/fansari/.local/share/containers/storage/volumes docker://  /usr/bin/crun map[crun:[/usr/bin/crun /usr/local/bin/crun] runc:[/usr/bin/runc /usr/sbin/runc /usr/local/bin/runc /usr/local/sbin/runc /sbin/runc /bin/runc /usr/lib/cri-o-runc/sbin/runc /run/current-system/sw/bin/runc]] [crun runc] [crun] [] [/usr/libexec/podman/conmon /usr/local/libexec/podman/conmon /usr/local/lib/podman/conmon /usr/bin/conmon /usr/sbin/conmon /usr/local/bin/conmon /usr/local/sbin/conmon /run/current-system/sw/bin/conmon] [PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] cgroupfs  /var/home/fansari/.local/share/containers/storage/libpod /run/user/1000/libpod/tmp -1 false /etc/cni/net.d/ [/usr/libexec/cni /usr/lib/cni /usr/local/lib/cni /opt/cni/bin]  []   k8s.gcr.io/pause:3.1 /pause true true  2048 shm journald  ctrl-p,ctrl-q false false} 
DEBU[0000] Using conmon: "/usr/bin/conmon"              
DEBU[0000] Initializing boltdb state at /var/home/fansari/.local/share/containers/storage/libpod/bolt_state.db 
DEBU[0000] Using graph driver overlay                   
DEBU[0000] Using graph root /var/mnt/data/podman/fansari/containers/storage 
DEBU[0000] Using run root /run/user/1000                
DEBU[0000] Using static dir /var/home/fansari/.local/share/containers/storage/libpod 
DEBU[0000] Using tmp dir /run/user/1000/libpod/tmp      
DEBU[0000] Using volume path /var/home/fansari/.local/share/containers/storage/volumes 
DEBU[0000] Set libpod namespace to ""                   
DEBU[0000] Not configuring container store              
DEBU[0000] Initializing event backend journald          
DEBU[0000] using runtime "/usr/bin/crun"                
DEBU[0000] using runtime "/usr/bin/runc"                
DEBU[0000] using runtime "/usr/bin/crun"                
DEBU[0000] Reading configuration file "/var/home/fansari/.config/containers/libpod.conf" 
DEBU[0000] Merged system config "/var/home/fansari/.config/containers/libpod.conf": &{{false false false true true true} 0 {   [] [] []} /var/home/fansari/.local/share/containers/storage/volumes docker://  /usr/bin/crun map[crun:[/usr/bin/crun /usr/local/bin/crun] runc:[/usr/bin/runc /usr/sbin/runc /usr/local/bin/runc /usr/local/sbin/runc /sbin/runc /bin/runc /usr/lib/cri-o-runc/sbin/runc /run/current-system/sw/bin/runc]] [crun runc] [crun] [] [/usr/libexec/podman/conmon /usr/local/libexec/podman/conmon /usr/local/lib/podman/conmon /usr/bin/conmon /usr/sbin/conmon /usr/local/bin/conmon /usr/local/sbin/conmon /run/current-system/sw/bin/conmon] [PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] cgroupfs  /var/home/fansari/.local/share/containers/storage/libpod /run/user/1000/libpod/tmp -1 false /etc/cni/net.d/ [/usr/libexec/cni /usr/lib/cni /usr/local/lib/cni /opt/cni/bin]  []   k8s.gcr.io/pause:3.1 /pause true true  2048 shm journald  ctrl-p,ctrl-q false false} 
DEBU[0000] Using conmon: "/usr/bin/conmon"              
DEBU[0000] Initializing boltdb state at /var/home/fansari/.local/share/containers/storage/libpod/bolt_state.db 
DEBU[0000] Using graph driver overlay                   
DEBU[0000] Using graph root /var/mnt/data/podman/fansari/containers/storage 
DEBU[0000] Using run root /run/user/1000                
DEBU[0000] Using static dir /var/home/fansari/.local/share/containers/storage/libpod 
DEBU[0000] Using tmp dir /run/user/1000/libpod/tmp      
DEBU[0000] Using volume path /var/home/fansari/.local/share/containers/storage/volumes 
DEBU[0000] Set libpod namespace to ""                   
DEBU[0000] No store required. Not opening container store. 
DEBU[0000] Initializing event backend journald          
DEBU[0000] using runtime "/usr/bin/runc"                
DEBU[0000] using runtime "/usr/bin/crun"                
DEBU[0000] using runtime "/usr/bin/crun"                
DEBU[0000] Initialized SHM lock manager at path /libpod_rootless_lock_1000 
DEBU[0000] [graphdriver] trying provided driver "overlay" 
DEBU[0000] overlay: mount_program=/usr/bin/fuse-overlayfs 
DEBU[0000] backingFs=xfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=false 
DEBU[0000] Podman detected system restart - performing state refresh 
ERRO[0000] Error refreshing volume pgsql: error acquiring lock 1 for volume pgsql: file exists
INFO[0000] running as rootless                          
DEBU[0000] Reading configuration file "/var/home/fansari/.config/containers/libpod.conf" 
DEBU[0000] Merged system config "/var/home/fansari/.config/containers/libpod.conf": &{{false false false true true true} 0 {   [] [] []} /var/home/fansari/.local/share/containers/storage/volumes docker://  /usr/bin/crun map[crun:[/usr/bin/crun /usr/local/bin/crun] runc:[/usr/bin/runc /usr/sbin/runc /usr/local/bin/runc /usr/local/sbin/runc /sbin/runc /bin/runc /usr/lib/cri-o-runc/sbin/runc /run/current-system/sw/bin/runc]] [crun runc] [crun] [] [/usr/libexec/podman/conmon /usr/local/libexec/podman/conmon /usr/local/lib/podman/conmon /usr/bin/conmon /usr/sbin/conmon /usr/local/bin/conmon /usr/local/sbin/conmon /run/current-system/sw/bin/conmon] [PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] cgroupfs  /var/home/fansari/.local/share/containers/storage/libpod /run/user/1000/libpod/tmp -1 false /etc/cni/net.d/ [/usr/libexec/cni /usr/lib/cni /usr/local/lib/cni /opt/cni/bin]  []   k8s.gcr.io/pause:3.1 /pause true true  2048 shm journald  ctrl-p,ctrl-q false false} 
DEBU[0000] Using conmon: "/usr/bin/conmon"              
DEBU[0000] Initializing boltdb state at /var/home/fansari/.local/share/containers/storage/libpod/bolt_state.db 
DEBU[0000] Using graph driver overlay                   
DEBU[0000] Using graph root /var/mnt/data/podman/fansari/containers/storage 
DEBU[0000] Using run root /run/user/1000                
DEBU[0000] Using static dir /var/home/fansari/.local/share/containers/storage/libpod 
DEBU[0000] Using tmp dir /run/user/1000/libpod/tmp      
DEBU[0000] Using volume path /var/home/fansari/.local/share/containers/storage/volumes 
DEBU[0000] Set libpod namespace to ""                   
DEBU[0000] No store required. Not opening container store. 
DEBU[0000] Initializing event backend journald          
DEBU[0000] using runtime "/usr/bin/crun"                
DEBU[0000] using runtime "/usr/bin/runc"                
DEBU[0000] using runtime "/usr/bin/crun"                
DEBU[0000] Setting maximum workers to 8                 
CONTAINER ID  IMAGE  COMMAND  CREATED  STATUS  PORTS  NAMES
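The debug log above shows the SHM lock manager at /libpod_rootless_lock_1000. A commonly suggested cleanup for stale rootless lock state looks like the sketch below; this is an assumption, not the confirmed fix from this thread, and it should only be run with all containers stopped:

```shell
# The SHM segment is backed by a file under /dev/shm (1000 is the user's UID);
# removing a stale segment lets libpod re-create it cleanly on the next run.
# Assumption: no containers or podman processes are running.
rm -f /dev/shm/libpod_rootless_lock_1000
# Re-allocate lock numbers from the fresh state
podman system renumber
```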

@mheon (Member) commented Feb 22, 2020 via email

@fansari (Author) commented Feb 22, 2020

Yes - this fixed the issue.

But now I have found that systemd no longer starts my rootless containers. Even with --log-level=debug I get no output.

[fansari@bat ~]$ systemctl --user is-enabled podman-ldap
enabled

[fansari@bat ~]$ systemctl --user status podman-ldap
● podman-ldap.service - Podman container-fe0caeb33b0c305413d1ad32b80a1e75139e52b11eb8cc42bd20a22625562c6a.service
   Loaded: loaded (/var/home/fansari/.config/systemd/user/podman-ldap.service; enabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:podman-generate-systemd(1)

[fansari@bat ~]$ journalctl --user -b -u podman-ldap
-- Logs begin at Sat 2020-02-01 15:06:19 CET, end at Sat 2020-02-22 17:59:03 CET. --
-- No entries --

This is in ~/.config/systemd/user/podman-ldap.service:

# container-fe0caeb33b0c305413d1ad32b80a1e75139e52b11eb8cc42bd20a22625562c6a.service
# autogenerated by Podman 1.8.0
# Sat Feb 22 16:59:52 CET 2020

[Unit]
Description=Podman container-fe0caeb33b0c305413d1ad32b80a1e75139e52b11eb8cc42bd20a22625562c6a.service
Documentation=man:podman-generate-systemd(1)

[Service]
Restart=on-failure
ExecStart=/usr/bin/podman --log-level=debug start fe0caeb33b0c305413d1ad32b80a1e75139e52b11eb8cc42bd20a22625562c6a
ExecStop=/usr/bin/podman stop -t 10 fe0caeb33b0c305413d1ad32b80a1e75139e52b11eb8cc42bd20a22625562c6a
PIDFile=/run/user/1000/overlay-containers/fe0caeb33b0c305413d1ad32b80a1e75139e52b11eb8cc42bd20a22625562c6a/userdata/conmon.pid
KillMode=none
Type=forking

[Install]
WantedBy=multi-user.target

On the other hand, starting the container manually after boot works:

[fansari@bat user]$ systemctl --user start podman-ldap
[fansari@bat user]$ systemctl --user status podman-ldap
● podman-ldap.service - Podman container-fe0caeb33b0c305413d1ad32b80a1e75139e52b11eb8cc42bd20a22625562c6a.service
   Loaded: loaded (/var/home/fansari/.config/systemd/user/podman-ldap.service; enabled; vendor preset: enabled)
   Active: active (running) since Sat 2020-02-22 18:03:14 CET; 5s ago
     Docs: man:podman-generate-systemd(1)
  Process: 8396 ExecStart=/usr/bin/podman --log-level=debug start fe0caeb33b0c305413d1ad32b80a1e75139e52b11eb8cc42bd20a22625562c6a (code=exited, status=0/SUCCESS)
 Main PID: 8447 (conmon)
    Tasks: 32 (limit: 9334)
   Memory: 95.9M
      CPU: 618ms
   CGroup: /user.slice/user-1000.slice/[email protected]/podman-ldap.service
           ├─8416 /usr/bin/slirp4netns --disable-host-loopback --mtu 65520 -c -e 3 -r 4 --netns-type=path /run/user/1000/netns/cni-f9a7a7b6-cd0b-8d02-bd1d-f597e638dbc9 tap0
           ├─8418 containers-rootlessport
           ├─8420 /usr/bin/fuse-overlayfs -o lowerdir=/var/mnt/data/podman/fansari/containers/storage/overlay/l/EBGFG4MAYZFKBHHO2ZOIQ3K64M:/var/mnt/data/podman/fansari/containers/storage/overlay/l/FX3CMOLBVH3OIKJMX2SEJ5DYA2:/var/mnt/data>
           ├─8431 containers-rootlessport-child
           ├─8447 /usr/bin/conmon --api-version 1 -c fe0caeb33b0c305413d1ad32b80a1e75139e52b11eb8cc42bd20a22625562c6a -u fe0caeb33b0c305413d1ad32b80a1e75139e52b11eb8cc42bd20a22625562c6a -r /usr/bin/crun -b /var/mnt/data/podman/fansari/co>
           ├─8451 /bin/bash /run.sh
           └─8467 /usr/sbin/slapd -d 1 -u ldap -h ldap:/// ldaps:///

Feb 22 18:03:14 bat.localdomain conmon[8447]: conmon fe0caeb33b0c305413d1 <ndebug>: couldn't find cb for pid 8450
Feb 22 18:03:14 bat.localdomain podman[8396]: time="2020-02-22T18:03:14+01:00" level=debug msg="Received: 8451"
Feb 22 18:03:14 bat.localdomain podman[8396]: time="2020-02-22T18:03:14+01:00" level=info msg="Got Conmon PID as 8447"
Feb 22 18:03:14 bat.localdomain podman[8396]: time="2020-02-22T18:03:14+01:00" level=debug msg="Created container fe0caeb33b0c305413d1ad32b80a1e75139e52b11eb8cc42bd20a22625562c6a in OCI runtime"
Feb 22 18:03:14 bat.localdomain podman[8396]: 2020-02-22 18:03:14.976760655 +0100 CET m=+0.272684925 container init fe0caeb33b0c305413d1ad32b80a1e75139e52b11eb8cc42bd20a22625562c6a (image=localhost/ldap:latest, name=ldap)
Feb 22 18:03:14 bat.localdomain podman[8396]: time="2020-02-22T18:03:14+01:00" level=debug msg="Starting container fe0caeb33b0c305413d1ad32b80a1e75139e52b11eb8cc42bd20a22625562c6a with command [/run.sh]"
Feb 22 18:03:14 bat.localdomain podman[8396]: time="2020-02-22T18:03:14+01:00" level=debug msg="Started container fe0caeb33b0c305413d1ad32b80a1e75139e52b11eb8cc42bd20a22625562c6a"
Feb 22 18:03:14 bat.localdomain podman[8396]: 2020-02-22 18:03:14.982009834 +0100 CET m=+0.277934179 container start fe0caeb33b0c305413d1ad32b80a1e75139e52b11eb8cc42bd20a22625562c6a (image=localhost/ldap:latest, name=ldap)
Feb 22 18:03:14 bat.localdomain podman[8396]: fe0caeb33b0c305413d1ad32b80a1e75139e52b11eb8cc42bd20a22625562c6a
Feb 22 18:03:14 bat.localdomain systemd[1621]: Started Podman container-fe0caeb33b0c305413d1ad32b80a1e75139e52b11eb8cc42bd20a22625562c6a.service.

@fansari (Author) commented Feb 23, 2020

I have fixed it now. There are two things you have to keep in mind when working with user services:

1.)
WantedBy=multi-user.target is wrong. You have to use:

WantedBy=default.target

After changing this, systemd starts podman, but it hangs.

We also have to consider this:

2.)
#4678

Here I have presented a workaround with monitor-resolv-conf.service and monitor-resolv-conf.path. This is necessary to start the containers.
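A hedged sketch of what such a path unit could look like (an assumption; the actual units in #4678 may differ):

```ini
# monitor-resolv-conf.path - fires once /etc/resolv.conf appears at boot
[Unit]
Description=Watch for /etc/resolv.conf

[Path]
PathExists=/etc/resolv.conf
Unit=monitor-resolv-conf.service

[Install]
WantedBy=default.target
```

The matching monitor-resolv-conf.service would then start the container units once the path condition is met.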

My conclusion is:

The "podman generate systemd" command is OK for root containers but not for rootless containers: the target is wrong for this case, and the necessary wait for /etc/resolv.conf is not handled either.
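The target fix can be applied to the already generated unit in place; a minimal sketch, assuming GNU sed and the unit name from this thread:

```shell
unit="$HOME/.config/systemd/user/podman-ldap.service"
# The per-user systemd instance has no multi-user.target; point the
# install section at default.target instead.
sed -i 's/^WantedBy=multi-user.target$/WantedBy=default.target/' "$unit"
systemctl --user daemon-reload
systemctl --user reenable podman-ldap   # recreate the WantedBy symlink
```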

@rhatdan (Member) commented Feb 23, 2020

@vrothberg PTAL

@fansari (Author) commented Mar 7, 2020

This is really annoying. Sometimes it works and sometimes it does not. I would expect my containers to be in place after booting. This works for root containers; for my rootless containers it goes like this: sometimes both are up, sometimes both hang, and sometimes just one container is up (not always the same one) while the other hangs.

Some days ago my ldap container was hanging - today my pgsql container is hanging:

podman-pgsql.log

What is this timeout about? What do I have to change?

@kernel-io commented
I've been having weird issues where sometimes a pod just stops forwarding a port; sometimes it happens midway through operation, other times when I restart a container within the pod (the one exporting the port). I'm also on Fedora Silverblue 31, and it's doing my head in. Sometimes I need to reboot multiple times before it starts working again. I'm running rootless as well.

@vrothberg (Member) commented
This might be due to missing network dependencies. We have fixed it in master with #5382 and it will be part of the next podman release.

@vrothberg (Member) commented
#5427 may actually be more relevant in the rootless case.

github-actions bot commented Apr 9, 2020

A friendly reminder that this issue had no activity for 30 days.

@rhatdan (Member) commented Apr 9, 2020

I believe this is fixed. Closing; reopen if I am mistaken.

@rhatdan rhatdan closed this as completed Apr 9, 2020
@srd424 commented Jun 11, 2021

(#10655 might be relevant to people reading here as well.)

@github-actions github-actions bot added the locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments. label Sep 21, 2023
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Sep 21, 2023
7 participants