Podman 2.0.5 pod dies after a few days of staying idle #9663

Closed
gczarnocki opened this issue Mar 8, 2021 · 10 comments
Labels: kind/bug, locked - please file new issue/PR

Comments

@gczarnocki

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description

A Podman pod dies after running for a longer time. When trying to get the pod status with podman ps -a, the errors are:

[system_kafka@server1 ~]$ podman ps -a
ERRO[0000] Error refreshing container 40f592969c19e496385d9ea3072576b96bb650b0c8f289e4a2e23382e8bd9168: error acquiring lock 2 for container 40f592969c19e496385d9ea3072576b96bb650b0c8f289e4a2e23382e8bd9168: file exists
ERRO[0000] Error refreshing container 69abb59b190894e30802a019f06646ee82526bd2f290cf911601dc8e5ec453af: error acquiring lock 1 for container 69abb59b190894e30802a019f06646ee82526bd2f290cf911601dc8e5ec453af: file exists
ERRO[0000] Error refreshing container 74c0ceec2ae0f712f6a880c1555b5e6f204fb9e10ef3bc5a408debebf753cfcc: error acquiring lock 3 for container 74c0ceec2ae0f712f6a880c1555b5e6f204fb9e10ef3bc5a408debebf753cfcc: file exists
ERRO[0000] Error refreshing pod bf444316b153acb081f92514131615905bd412adffc7e1c219dcc4edd60ce01d: error retrieving lock 0 for pod bf444316b153acb081f92514131615905bd412adffc7e1c219dcc4edd60ce01d: file exists
CONTAINER ID  IMAGE                              COMMAND               CREATED     STATUS                 PORTS                             NAMES
40f592969c19  bitnami/kafka:2.7.0-debian-10-r67  /opt/bitnami/scri...  6 days ago  Created                0.0.0.0:9092-9093->9092-9093/tcp  kafka-pod-kafka
69abb59b1908  k8s.gcr.io/pause:3.2                                     6 days ago  Created                0.0.0.0:9092-9093->9092-9093/tcp  bf444316b153-infra
74c0ceec2ae0  bitnami/kafka:2.7.0-debian-10-r67  --rm /opt/bitnami...  5 days ago  Exited (2) 5 days ago                                    happy_cannon

The pod is started with podman play kube from a simple YAML definition containing the container, port mappings, and a few volumes mounted from the /var partition.
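
For reference, a minimal sketch of the kind of definition described, in the format podman play kube accepts. The pod name, container name, image tag, and ports are taken from this thread; the volume name and host path are assumptions:

# kafka-pod.yaml - a sketch only; volume name and host path are assumptions
apiVersion: v1
kind: Pod
metadata:
  name: kafka-pod
spec:
  containers:
  - name: kafka
    image: docker.io/bitnami/kafka:2.7.0-debian-10-r67
    ports:
    - containerPort: 9092
      hostPort: 9092
    - containerPort: 9093
      hostPort: 9093
    volumeMounts:
    - name: kafka-data
      mountPath: /bitnami/kafka
  volumes:
  - name: kafka-data
    hostPath:
      path: /var/kafka/data
      type: Directory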

Running podman pod stop kafka-pod && podman pod rm kafka-pod via Ansible against the hosts fails:

server1 | FAILED | rc=125 >>
545fdecf2f78bdf1ce3c3bdc3917af09945d2a30c3d48dd36930e6321cc93550
Error: error removing container 879a0da72469512ba4b25a366a4bccbc227221e9334b0a2b95aca2c927d3b3c7 root filesystem: 2 errors occurred:
        * unlinkat /home/system_kafka/.local/share/containers/storage/overlay-containers/879a0da72469512ba4b25a366a4bccbc227221e9334b0a2b95aca2c927d3b3c7/userdata/shm: device or resource busy
        * unlinkat /home/system_kafka/.local/share/containers/storage/overlay-containers/879a0da72469512ba4b25a366a4bccbc227221e9334b0a2b95aca2c927d3b3c7/userdata/shm: device or resource busy
non-zero return code
server2 | FAILED | rc=125 >>
bf444316b153acb081f92514131615905bd412adffc7e1c219dcc4edd60ce01d
Error: error removing container 40f592969c19e496385d9ea3072576b96bb650b0c8f289e4a2e23382e8bd9168 root filesystem: 2 errors occurred:
        * unlinkat /home/system_kafka/.local/share/containers/storage/overlay-containers/40f592969c19e496385d9ea3072576b96bb650b0c8f289e4a2e23382e8bd9168/userdata/shm: device or resource busy
        * unlinkat /home/system_kafka/.local/share/containers/storage/overlay-containers/40f592969c19e496385d9ea3072576b96bb650b0c8f289e4a2e23382e8bd9168/userdata/shm: device or resource busy
non-zero return code
server3 | FAILED | rc=125 >>
9e2a1c8dca9e0287f8fc2fb9df4c303c187f4ac4ef36af6f258dda68e58c92af
Error: error removing container a7b62265ca3cb1cb9a62dffb82f1a6579fb69b9bedf1ca9510216e9c972f6d82 root filesystem: 2 errors occurred:
        * unlinkat /home/system_kafka/.local/share/containers/storage/overlay-containers/a7b62265ca3cb1cb9a62dffb82f1a6579fb69b9bedf1ca9510216e9c972f6d82/userdata/shm: device or resource busy
        * unlinkat /home/system_kafka/.local/share/containers/storage/overlay-containers/a7b62265ca3cb1cb9a62dffb82f1a6579fb69b9bedf1ca9510216e9c972f6d82/userdata/shm: device or resource busy
non-zero return code

Steps to reproduce the issue:

  1. Run Podman 2.0.5.

  2. Run the Kafka image via podman play kube and leave it running for more than 5 days (a minimal command sketch follows).
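
A minimal sketch of the reproduction commands, assuming the definition above is saved as kafka-pod.yaml (the file name is an assumption):

podman play kube kafka-pod.yaml
podman pod ps    # confirm the pod is up, then leave it idle for more than 5 days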

Describe the results you received:

The pod dies after a few days (> 5 days). Once it has been killed, podman ps no longer lists it, but the ports set up in the pod definition are still listening:

Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp6       0      0 :::9092                 :::*                    LISTEN      1328607/containers-
tcp6       0      0 :::9093                 :::*                    LISTEN      1328607/containers-
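
The PID from that output can be inspected to see which process is still holding the ports; a sketch using the PID shown above:

ps -fp 1328607    # PID taken from the netstat output above
# for rootless Podman, this is likely the containers-rootlessport helper,
# which keeps forwarding the published ports after the pod itself has died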

Describe the results you expected:

The pod keeps running for a long time without being killed.

Additional information you deem important (e.g. issue happens only occasionally):

The issue only appears after a few days, which makes it hard to debug why it happens.

Output of podman version:

podman version
Version:      2.0.5
API Version:  1
Go Version:   go1.14.7
Built:        Wed Sep 23 16:18:02 2020
OS/Arch:      linux/amd64

Output of podman info --debug:

[system_kafka@server1 ~]$ podman info --debug
host:
  arch: amd64
  buildahVersion: 1.15.1
  cgroupVersion: v1
  conmon:
    package: conmon-2.0.22-3.module+el8.3.1+9857+68fb1526.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.0.22, commit: a40e3092dbe499ea1d85ab339caea023b74829b9'
  cpus: 4
  distribution:
    distribution: '"rhel"'
    version: "8.3"
  eventLogger: file
  hostname: server1
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1002
      size: 1
    - container_id: 1
      host_id: 231072
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1002
      size: 1
    - container_id: 1
      host_id: 231072
      size: 65536
  kernel: 4.18.0-240.15.1.el8_3.x86_64
  linkmode: dynamic
  memFree: 4159258624
  memTotal: 8118956032
  ociRuntime:
    name: runc
    package: runc-1.0.0-70.rc92.module+el8.3.1+9857+68fb1526.x86_64
    path: /usr/bin/runc
    version: 'runc version spec: 1.0.2-dev'
  os: linux
  remoteSocket:
    path: /tmp/run-1002/podman/podman.sock
  rootless: true
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.1.8-1.module+el8.3.1+9857+68fb1526.x86_64
    version: |-
      slirp4netns version 1.1.8
      commit: d361001f495417b880f20329121e3aa431a8f90f
      libslirp: 4.3.1
      SLIRP_CONFIG_VERSION_MAX: 3
      libseccomp: 2.4.3
  swapFree: 2139090944
  swapTotal: 2139090944
  uptime: 293h 4m 17.87s (Approximately 12.21 days)
registries:
  search:
  - registry.access.redhat.com
  - registry.redhat.io
  - docker.io
store:
  configFile: /home/system_kafka/.config/containers/storage.conf
  containerStore:
    number: 2
    paused: 0
    running: 2
    stopped: 0
  graphDriverName: overlay
  graphOptions:
    overlay.mount_program:
      Executable: /usr/bin/fuse-overlayfs
      Package: fuse-overlayfs-1.3.0-2.module+el8.3.1+9857+68fb1526.x86_64
      Version: |-
        fusermount3 version: 3.2.1
        fuse-overlayfs: version 1.3
        FUSE library version 3.2.1
        using FUSE kernel interface version 7.26
  graphRoot: /home/system_kafka/.local/share/containers/storage
  graphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "false"
  imageStore:
    number: 2
  runRoot: /tmp/run-1002/containers
  volumePath: /home/system_kafka/.local/share/containers/storage/volumes
version:
  APIVersion: 1
  Built: 1600877882
  BuiltTime: Wed Sep 23 16:18:02 2020
  GitCommit: ""
  GoVersion: go1.14.7
  OsArch: linux/amd64
  Version: 2.0.5

Package info (e.g. output of rpm -q podman or apt list podman):

rpm -q podman
podman-2.0.5-5.module+el8.3.0+8221+97165c3f.x86_64

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide?

No, but I've applied the fix suggested on my Stack Overflow post (https://stackoverflow.com/questions/66324225/podman-pod-disappears-after-a-few-days-but-process-is-still-running-and-listeni) about tmp files being deleted. I know that Podman 2.2.1 seems to solve some of this, but I haven't tested it yet.

File for tmpfiles.d config:

sudo cat /etc/tmpfiles.d/podman.conf
# /tmp/podman-run-* directories can contain content for Podman containers that have
# run for many days. The following line prevents systemd from removing this content.
# Workaround applied for Podman 2.0.5, taken from Podman 2.2.1
x /tmp/podman-run-*
D! /run/podman 0700 root root
D! /var/lib/cni/networks
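
To check that the file is actually picked up, systemd-tmpfiles can be queried directly; a sketch:

sudo systemd-tmpfiles --cat-config | grep -B 1 -A 3 'podman'
# note: the x lines only take effect during cleanup runs (systemd-tmpfiles --clean,
# normally triggered by systemd-tmpfiles-clean.timer)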

Additional environment details (AWS, VirtualBox, physical, etc.):

Virtual machines; a 3-node cluster setup of Podman running Kafka.

@openshift-ci-robot added the kind/bug label Mar 8, 2021
@mheon (Member) commented Mar 8, 2021

Have you verified if the tmpfiles.d entry resolves the problem, or is it still occurring?

@gczarnocki (Author)

I've applied the fix, but I cannot fully determine whether it solved the problem - after introducing /etc/tmpfiles.d/podman.conf with the contents above, I expected the issue to be resolved, but it is not, and I don't know why. A Podman update would probably do the trick, since 2.0.5 is a fairly old version, but I'd still like to see if I can stay on 2.0.5.

I tried restarting systemd-tmpfiles-setup:

systemctl restart systemd-tmpfiles-setup
Failed to restart systemd-tmpfiles-setup.service: Operation refused, unit systemd-tmpfiles-setup.service may be requested by dependency only (it is configured to refuse manual start/stop).
See system logs and 'systemctl status systemd-tmpfiles-setup.service' for details.
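
The unit is configured to refuse manual starts; the directives can instead be applied by calling systemd-tmpfiles directly, which accepts individual config files as arguments. A sketch:

sudo systemd-tmpfiles --create /etc/tmpfiles.d/podman.conf    # applies the D! lines
sudo systemd-tmpfiles --clean                                 # runs the cleanup pass that the x lines guard against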

The status of this service is as follows (one of the servers):

systemctl status systemd-tmpfiles-setup
● systemd-tmpfiles-setup.service - Create Volatile Files and Directories
   Loaded: loaded (/usr/lib/systemd/system/systemd-tmpfiles-setup.service; static; vendor preset: disabled)
   Active: active (exited) since Wed 2021-02-24 08:38:38 UTC; 1 weeks 5 days ago
     Docs: man:tmpfiles.d(5)
           man:systemd-tmpfiles(8)
 Main PID: 1346 (code=exited, status=0/SUCCESS)
    Tasks: 0 (limit: 48911)
   Memory: 0B
   CGroup: /system.slice/systemd-tmpfiles-setup.service

Feb 24 08:38:38 osm2tktkafka1.dmz23.local systemd[1]: Starting Create Volatile Files and Directories...
Feb 24 08:38:38 osm2tktkafka1.dmz23.local systemd[1]: Started Create Volatile Files and Directories.

@mheon (Member) commented Mar 8, 2021

Hm. I note that you're running Podman rootless - can you run podman info --log-level=debug as the user running the Podman container(s) in question, and provide the output here? I'm curious to see what our temporary files directory is.

@mheon (Member) commented Mar 8, 2021

Also - have you enabled lingering for the user in question (loginctl enable-linger)?
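
For reference, lingering is enabled per user by root; a sketch for the user from this thread:

sudo loginctl enable-linger system_kafka
loginctl show-user system_kafka --property=Linger    # should report Linger=yes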

@gczarnocki (Author)

The user for running Kafka was created with Ansible's user module. This is the user I run Podman as:

system_zookeeper:x:1001:1001:System Account Zookeeper:/home/system_zookeeper:/bin/bash
system_kafka:x:1002:1002:System Account Kafka:/home/system_kafka:/bin/bash

podman info --log-level=debug output:

INFO[0000] podman filtering at log level debug
DEBU[0000] Called info.PersistentPreRunE(podman info --log-level=debug)
DEBU[0000] Ignoring libpod.conf EventsLogger setting "/home/system_kafka/.config/containers/containers.conf". Use "journald" if you want to change this setting and remove libpod.conf files.
DEBU[0000] Reading configuration file "/usr/share/containers/containers.conf"
DEBU[0000] Merged system config "/usr/share/containers/containers.conf": &{{[] [] containers-default-0.14.9 [] host enabled [CAP_AUDIT_WRITE CAP_CHOWN CAP_DAC_OVERRIDE CAP_FOWNER CAP_FSETID CAP_KILL CAP_MKNOD CAP_NET_BIND_SERVICE CAP_NET_RAW CAP_SETFCAP CAP_SETGID CAP_SETPCAP CAP_SETUID CAP_SYS_CHROOT] [] []  [] [] [] true [PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin TERM=xterm] false false false  private k8s-file -1 slirp4netns false 2048 private /usr/share/containers/seccomp.json 65536k private host 65536} {false cgroupfs [PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] [/usr/libexec/podman/conmon /usr/local/libexec/podman/conmon /usr/local/lib/podman/conmon /usr/bin/conmon /usr/sbin/conmon /usr/local/bin/conmon /usr/local/sbin/conmon /run/current-system/sw/bin/conmon] ctrl-p,ctrl-q true /tmp/run-1002/libpod/tmp/events/events.log file [/usr/share/containers/oci/hooks.d] docker:// /pause k8s.gcr.io/pause:3.2 /usr/libexec/podman/catatonit shm   false 2048 runc map[crun:[/usr/bin/crun /usr/sbin/crun /usr/local/bin/crun /usr/local/sbin/crun /sbin/crun /bin/crun /run/current-system/sw/bin/crun] kata:[/usr/bin/kata-runtime /usr/sbin/kata-runtime /usr/local/bin/kata-runtime /usr/local/sbin/kata-runtime /sbin/kata-runtime /bin/kata-runtime /usr/bin/kata-qemu /usr/bin/kata-fc] runc:[/usr/bin/runc /usr/sbin/runc /usr/local/bin/runc /usr/local/sbin/runc /sbin/runc /bin/runc /usr/lib/cri-o-runc/sbin/runc /run/current-system/sw/bin/runc]] missing false    map[] [] [crun runc] [crun] [kata kata-runtime kata-qemu kata-fc] {false false false false false false} /etc/containers/policy.json false 3 /home/system_kafka/.local/share/containers/storage/libpod 10 /tmp/run-1002/libpod/tmp /home/system_kafka/.local/share/containers/storage/volumes} {[/usr/libexec/cni /usr/lib/cni /usr/local/lib/cni /opt/cni/bin] podman /etc/cni/net.d/}}
DEBU[0000] Using conmon: "/usr/bin/conmon"
DEBU[0000] Initializing boltdb state at /home/system_kafka/.local/share/containers/storage/libpod/bolt_state.db
DEBU[0000] Using graph driver overlay
DEBU[0000] Using graph root /home/system_kafka/.local/share/containers/storage
DEBU[0000] Using run root /tmp/run-1002/containers
DEBU[0000] Using static dir /home/system_kafka/.local/share/containers/storage/libpod
DEBU[0000] Using tmp dir /tmp/run-1002/libpod/tmp
DEBU[0000] Using volume path /home/system_kafka/.local/share/containers/storage/volumes
DEBU[0000] Set libpod namespace to ""
DEBU[0000] Not configuring container store
DEBU[0000] Initializing event backend file
DEBU[0000] using runtime "/usr/bin/runc"
WARN[0000] Error initializing configured OCI runtime crun: no valid executable found for OCI runtime crun: invalid argument
WARN[0000] Error initializing configured OCI runtime kata: no valid executable found for OCI runtime kata: invalid argument
INFO[0000] Setting parallel job count to 13
INFO[0000] podman filtering at log level debug
DEBU[0000] Called info.PersistentPreRunE(podman info --log-level=debug)
DEBU[0000] Ignoring libpod.conf EventsLogger setting "/home/system_kafka/.config/containers/containers.conf". Use "journald" if you want to change this setting and remove libpod.conf files.
DEBU[0000] Reading configuration file "/usr/share/containers/containers.conf"
DEBU[0000] Merged system config "/usr/share/containers/containers.conf": &{{[] [] containers-default-0.14.9 [] host enabled [CAP_AUDIT_WRITE CAP_CHOWN CAP_DAC_OVERRIDE CAP_FOWNER CAP_FSETID CAP_KILL CAP_MKNOD CAP_NET_BIND_SERVICE CAP_NET_RAW CAP_SETFCAP CAP_SETGID CAP_SETPCAP CAP_SETUID CAP_SYS_CHROOT] [] []  [] [] [] true [PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin TERM=xterm] false false false  private k8s-file -1 slirp4netns false 2048 private /usr/share/containers/seccomp.json 65536k private host 65536} {false cgroupfs [PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] [/usr/libexec/podman/conmon /usr/local/libexec/podman/conmon /usr/local/lib/podman/conmon /usr/bin/conmon /usr/sbin/conmon /usr/local/bin/conmon /usr/local/sbin/conmon /run/current-system/sw/bin/conmon] ctrl-p,ctrl-q true /tmp/run-1002/libpod/tmp/events/events.log file [/usr/share/containers/oci/hooks.d] docker:// /pause k8s.gcr.io/pause:3.2 /usr/libexec/podman/catatonit shm   false 2048 runc map[crun:[/usr/bin/crun /usr/sbin/crun /usr/local/bin/crun /usr/local/sbin/crun /sbin/crun /bin/crun /run/current-system/sw/bin/crun] kata:[/usr/bin/kata-runtime /usr/sbin/kata-runtime /usr/local/bin/kata-runtime /usr/local/sbin/kata-runtime /sbin/kata-runtime /bin/kata-runtime /usr/bin/kata-qemu /usr/bin/kata-fc] runc:[/usr/bin/runc /usr/sbin/runc /usr/local/bin/runc /usr/local/sbin/runc /sbin/runc /bin/runc /usr/lib/cri-o-runc/sbin/runc /run/current-system/sw/bin/runc]] missing false    map[] [] [crun runc] [crun] [kata kata-runtime kata-qemu kata-fc] {false false false false false false} /etc/containers/policy.json false 3 /home/system_kafka/.local/share/containers/storage/libpod 10 /tmp/run-1002/libpod/tmp /home/system_kafka/.local/share/containers/storage/volumes} {[/usr/libexec/cni /usr/lib/cni /usr/local/lib/cni /opt/cni/bin] podman /etc/cni/net.d/}}
DEBU[0000] Using conmon: "/usr/bin/conmon"
DEBU[0000] Initializing boltdb state at /home/system_kafka/.local/share/containers/storage/libpod/bolt_state.db
DEBU[0000] Using graph driver overlay
DEBU[0000] Using graph root /home/system_kafka/.local/share/containers/storage
DEBU[0000] Using run root /tmp/run-1002/containers
DEBU[0000] Using static dir /home/system_kafka/.local/share/containers/storage/libpod
DEBU[0000] Using tmp dir /tmp/run-1002/libpod/tmp
DEBU[0000] Using volume path /home/system_kafka/.local/share/containers/storage/volumes
DEBU[0000] Set libpod namespace to ""
DEBU[0000] [graphdriver] trying provided driver "overlay"
DEBU[0000] overlay: mount_program=/usr/bin/fuse-overlayfs
DEBU[0000] backingFs=xfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=false
DEBU[0000] Initializing event backend file
DEBU[0000] using runtime "/usr/bin/runc"
WARN[0000] Error initializing configured OCI runtime crun: no valid executable found for OCI runtime crun: invalid argument
WARN[0000] Error initializing configured OCI runtime kata: no valid executable found for OCI runtime kata: invalid argument
INFO[0000] Setting parallel job count to 13
WARN[0000] Failed to retrieve program version for /usr/bin/slirp4netns: <nil>
DEBU[0000] Loading registries configuration "/etc/containers/registries.conf"
host:
  arch: amd64
  buildahVersion: 1.15.1
  cgroupVersion: v1
  conmon:
    package: conmon-2.0.22-3.module+el8.3.1+9857+68fb1526.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.0.22, commit: a40e3092dbe499ea1d85ab339caea023b74829b9'
  cpus: 4
  distribution:
    distribution: '"rhel"'
    version: "8.3"
  eventLogger: file
  hostname: server1.dmz23.local
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1002
      size: 1
    - container_id: 1
      host_id: 231072
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1002
      size: 1
    - container_id: 1
      host_id: 231072
      size: 65536
  kernel: 4.18.0-240.15.1.el8_3.x86_64
  linkmode: dynamic
  memFree: 3853398016
  memTotal: 8118956032
  ociRuntime:
    name: runc
    package: runc-1.0.0-70.rc92.module+el8.3.1+9857+68fb1526.x86_64
    path: /usr/bin/runc
    version: 'runc version spec: 1.0.2-dev'
  os: linux
  remoteSocket:
    path: /tmp/run-1002/podman/podman.sock
  rootless: true
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.1.8-1.module+el8.3.1+9857+68fb1526.x86_64
    version: |-
      slirp4netns version 1.1.8
      commit: d361001f495417b880f20329121e3aa431a8f90f
      libslirp: 4.3.1
      SLIRP_CONFIG_VERSION_MAX: 3
      libseccomp: 2.4.3
  swapFree: 2139090944
  swapTotal: 2139090944
  uptime: 294h 1m 1.42s (Approximately 12.25 days)
registries:
  search:
  - registry.access.redhat.com
  - registry.redhat.io
  - docker.io
store:
  configFile: /home/system_kafka/.config/containers/storage.conf
  containerStore:
    number: 2
    paused: 0
    running: 2
    stopped: 0
  graphDriverName: overlay
  graphOptions:
    overlay.mount_program:
      Executable: /usr/bin/fuse-overlayfs
      Package: fuse-overlayfs-1.3.0-2.module+el8.3.1+9857+68fb1526.x86_64
      Version: |-
        fusermount3 version: 3.2.1
        fuse-overlayfs: version 1.3
        FUSE library version 3.2.1
        using FUSE kernel interface version 7.26
  graphRoot: /home/system_kafka/.local/share/containers/storage
  graphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "false"
  imageStore:
    number: 2
  runRoot: /tmp/run-1002/containers
  volumePath: /home/system_kafka/.local/share/containers/storage/volumes
version:
  APIVersion: 1
  Built: 1600877882
  BuiltTime: Wed Sep 23 16:18:02 2020
  GitCommit: ""
  GoVersion: go1.14.7
  OsArch: linux/amd64
  Version: 2.0.5

DEBU[0000] Called info.PersistentPostRunE(podman info --log-level=debug)

@gczarnocki (Author)

I haven't enabled the linger option for this user. I can see in the documentation that:

This allows users who are not logged in to run long-running services. Takes one or more user names or numeric UIDs as argument. If no argument is specified, enables/disables lingering for the user of the session of the caller.

@mheon (Member) commented Mar 8, 2021

Ah, I think I found it. Your Podman is using the following temporary directory:
/tmp/run-1002/libpod/tmp

But the tmpfiles.d entry is only blocking:
x /tmp/podman-run-*

I think you need to add another line to handle your temporary dir:
x /tmp/run-1002
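
A sketch of the resulting tmpfiles.d file with that line added (the uid-1002 path is specific to this setup; a glob such as x /tmp/run-* would cover other rootless users as well):

# /etc/tmpfiles.d/podman.conf
x /tmp/podman-run-*
x /tmp/run-1002
D! /run/podman 0700 root root
D! /var/lib/cni/networks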

@gczarnocki (Author)

Yes, that's true. I'm wondering why I didn't see that before. Of course the path is wrong. Thank you for spotting this.

@mheon (Member) commented Mar 17, 2021

Is this solved? Can I close?

@gczarnocki (Author)

> Is this solved? Can I close?

Yes, I believe it's solved. Thank you.

@github-actions github-actions bot added the locked - please file new issue/PR label Sep 22, 2023
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Sep 22, 2023