
podman images very slow #6288

Closed
carlpett opened this issue May 20, 2020 · 36 comments
Assignees
Labels
kind/bug Categorizes issue or PR as related to a bug. locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments.

Comments

@carlpett

/kind bug

Description
The bash_completion for podman run is very slow. Running with set -x, it seems the longest operation is listing all images:

[...]
+++ podman images
+++ awk 'NR>1 && $1 != "<none>" { print $1; print $1":"$2 }'
+++ grep --color=auto -v '<none>$'
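For reference, the filter above skips the header row, emits each repository name plus its repo:tag pair, and drops <none> entries. Its behavior can be checked against canned output (the sample rows below are made up, not from my machine):

```shell
# Run the completion script's filter over fake `podman images` output:
# skip the header (NR>1), drop <none> repositories, print repo and repo:tag,
# then filter out untagged repo:<none> pairs.
printf 'REPOSITORY TAG IMAGE_ID\nbusybox latest 6d5fcfe5ff17\n<none> <none> 2b8fd9751c4c\n' |
  awk 'NR>1 && $1 != "<none>" { print $1; print $1":"$2 }' |
  grep -v '<none>$'
# prints:
#   busybox
#   busybox:latest
```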

I have ~460 images locally at this time (according to podman images | wc -l, so quite a lot of <none> entries and versions sharing underlying layers), and podman images takes 20 seconds:

$ time podman images
[...]
real    0m20.679s
user    0m14.979s
sys     0m9.875s

Note that this happens even when I have already supplied the image, e.g. podman run busybox -- <tab>, or when an image is not the correct/relevant completion, e.g. podman run -v <tab>.

Since the terminal just appears to freeze unless you know what is happening, this is pretty disruptive. I'm not sure whether 460 images should be considered excessive? Otherwise it might make sense to either not complete with the images output, or to investigate whether podman images could be much faster.
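One possible mitigation, while the underlying slowness is investigated, would be for the completion script to cache the image list instead of shelling out on every <tab>. A minimal sketch; cache_cmd, the cache path, and the one-minute window are hypothetical choices for illustration, not anything podman provides:

```shell
# cache_cmd CMD ARGS...: run the command, but reuse its output if it was
# cached within the last minute (hypothetical helper, illustration only).
cache_cmd() {
  local cache="/tmp/completion-cache-$1"
  # Refresh when the cache file is missing/empty or older than one minute.
  if [ ! -s "$cache" ] || [ -n "$(find "$cache" -mmin +1 2>/dev/null)" ]; then
    "$@" > "$cache"
  fi
  cat "$cache"
}

# In the completion function, something along these lines:
#   images=$(cache_cmd podman images --format '{{.Repository}}:{{.Tag}}')
```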

Steps to reproduce the issue:

  1. podman run <tab>

Describe the results you received:
Waiting.

Describe the results you expected:
Less waiting.

Additional information you deem important (e.g. issue happens only occasionally):

Output of podman version:

Version:            1.9.1
RemoteAPI Version:  1
Go Version:         go1.14.2
OS/Arch:            linux/amd64

Output of podman info --debug:

debug:
  compiler: gc
  gitCommit: ""
  goVersion: go1.14.2
  podmanVersion: 1.9.1
host:
  arch: amd64
  buildahVersion: 1.14.8
  cgroupVersion: v2
  conmon:
    package: conmon-2.0.15-1.fc32.x86_64
    path: /usr/libexec/crio/conmon
    version: 'conmon version 2.0.15, commit: 33da5ef83bf2abc7965fc37980a49d02fdb71826'
  cpus: 8
  distribution:
    distribution: fedora
    version: "32"
  eventLogger: file
  hostname: capelt
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  kernel: 5.6.11-300.fc32.x86_64
  memFree: 7246368768
  memTotal: 33539756032
  ociRuntime:
    name: crun
    package: crun-0.13-2.fc32.x86_64
    path: /bin/crun
    version: |-
      crun version 0.13
      commit: e79e4de4ac16da0ce48777afb72c6241de870525
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +YAJL
  os: linux
  rootless: true
  slirp4netns:
    executable: /bin/slirp4netns
    package: slirp4netns-1.0.0-1.fc32.x86_64
    version: |-
      slirp4netns version 1.0.0
      commit: a3be729152a33e692cd28b52f664defbf2e7810a
      libslirp: 4.2.0
  swapFree: 0
  swapTotal: 0
  uptime: 169h 46m 39.2s (Approximately 7.04 days)
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - registry.centos.org
  - docker.io
store:
  configFile: /home/cape/.config/containers/storage.conf
  containerStore:
    number: 1
    paused: 0
    running: 1
    stopped: 0
  graphDriverName: overlay
  graphOptions:
    overlay.mount_program:
      Executable: /bin/fuse-overlayfs
      Package: fuse-overlayfs-1.0.0-1.fc32.x86_64
      Version: |-
        fusermount3 version: 3.9.1
        fuse-overlayfs: version 1.0.0
        FUSE library version 3.9.1
        using FUSE kernel interface version 7.31
  graphRoot: /home/cape/.local/share/containers/storage
  graphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "false"
  imageStore:
    number: 1076
  runRoot: /tmp/1000
  volumePath: /home/cape/.local/share/containers/storage/volumes

Package info (e.g. output of rpm -q podman or apt list podman):

podman-1.9.1-1.fc32.x86_64

Additional environment details (AWS, VirtualBox, physical, etc.):
Physical

@openshift-ci-robot openshift-ci-robot added the kind/bug Categorizes issue or PR as related to a bug. label May 20, 2020
@carlpett
Author

Checked du -hs ~/.local/share/containers, and it contained 51G. Ran podman system prune, and the directory is now 32G instead. podman images | wc -l is now 117, and takes 0.5 seconds.
This installation of podman has been used and upgraded for 18 months or so. Could there have been some broken old data lying around that made it very slow, or is there some super-linear algorithm in there that would explain the 40x speedup from removing 75% of the images / 38% of the stored data?

@mheon
Member

mheon commented May 20, 2020

We do have some algorithmic processing in podman images (to determine whether images are dangling, IIRC?) which may significantly add to the amount of work involved, but a slowdown of that magnitude does sound unusual.

@mheon
Member

mheon commented May 20, 2020

@vrothberg Any thoughts here?

@vrothberg
Member

Unfortunately, I have nothing at hand. Some users have made similar observations over time, but such slowdowns are seemingly random in the sense that we don't have a reproducer.

@carlpett
Author

Yeah, I pretty much regretted not taking a snapshot as soon as I had started the prune :( Should never destroy a repro...
Anyway, I started a synthetic test on my machine now, pretty much just looping over podman build, touch some-file, time podman images, and I can see a pretty rapid rise in the timings. I'll push it up to ~450 and then post a gist with the data.

@carlpett
Author

Okay, I made a run from 127 images (what was left after the prune) up to 450. I used this script:

while podman build -t bloat-test . >/dev/null; do
  touch config/empty.yaml
  /usr/bin/time -f %e/%U/%S podman images | wc -l
done

With this Dockerfile:

FROM python:alpine
ARG KUBECTL_VERSION=1.16.5
ARG HELM_VERSION=3.2.1
ARG KUBEVAL_VERSION=0.15.0
ARG CONFTEST_VERSION=0.18.2
ARG YAMLLINT_VERSION=1.23.0
WORKDIR /work

RUN apk add bash coreutils && \
	a long string of wget | tar:s

COPY config /config
COPY lint.sh /

ENTRYPOINT ["/lint.sh"]

I can't share the actual files we copy over, but it is ~50k of plain text in /config. The layers above are ~250M, but the only layers that are rebuilt are the bottom three, due to touching a file in /config.
I don't believe the content of the image makes any difference, but I include it here for completeness.
The size of ~/.local/share/containers did not increase noticeably from this, and is still at 32G.

From first to last measurement, wall time to run podman images increased from 0.8s to 23.3s. Plotted, there is a slight but noticeable super-linearity to the trend.
[plot: podman images wall time vs. number of images]

Would probably be a good idea to reproduce this with a simpler image, and make a longer run. This took ~1h40m to run on my laptop, so kicking it off from a clean slate on some machine somewhere and letting it run for longer might show a clearer trend.

Here is a gist with the raw data.

@rhatdan
Member

rhatdan commented May 20, 2020

Could be us trying to see the size of some of the images, which we should not be doing by default.

@rhatdan
Member

rhatdan commented Jun 9, 2020

@baude PTAL
Get those performance chops going.

@eregon

eregon commented Jul 4, 2020

I also see podman images being very slow the first time it's run after a reboot.
For just 35 images:
First run:
podman images 140.73s user 1.93s system 165% cpu 1:25.96 total
Second run:
podman images 1.12s user 0.48s system 160% cpu 0.993 total

@carlpett
Author

carlpett commented Jul 8, 2020

I seem to have inadvertently reproduced this again, and now I'll try not to fix it :) It's worse now:

$ time podman images | wc -l
179

real    9m0.983s
user    13m5.841s
sys     0m15.668s

So 179 images, .local/share/containers is ~51G. Let me know what I can test to nail this down!
@vrothberg @mheon @rhatdan

@carlpett carlpett changed the title podman run bash_completion very slow podman images very slow Jul 15, 2020
@carlpett
Author

@vrothberg @mheon @rhatdan To add some more observations, this is also slowing down other operations that interact with the image store, such as build and pull. For example, building the Dockerfile below takes 2 seconds if I create a new user and run as that one (so there is absolutely no previous podman state), while it takes 3.5 seconds as my own user. The delay seems to be somehow proportional to the "complexity" of the image (i.e. most builds are delayed significantly longer than 1.5 seconds).

FROM alpine
RUN echo hello > /file

This is becoming a problem for my day-to-day work, and I probably can't keep the broken state around indefinitely. Please let me know if I can check something.

@vrothberg vrothberg assigned vrothberg and unassigned baude Jul 15, 2020
@vrothberg
Member

Thanks for the ping, @carlpett. I will have a look asap.

@skorhone

Out of curiosity, @carlpett how large is your boltdb file created by podman?

@srcshelton
Contributor

Out of curiosity, @carlpett how large is your boltdb file created by podman?

My installation just took 18s to list 56 images, with boltdb of 2MB (2097152 bytes).
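For anyone else wanting to check: with rootless podman, the BoltDB state file lives under the graph root (the path below matches the debug output earlier in this thread; adjust it for root or a custom storage.conf):

```shell
# Report the size of podman's rootless libpod BoltDB, if present.
db="$HOME/.local/share/containers/storage/libpod/bolt_state.db"
if [ -f "$db" ]; then
  du -h "$db"
else
  echo "no boltdb at $db"
fi
```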

@mheon
Member

mheon commented Jul 16, 2020

How does podman images perform in these cases - is there a notable delay between printing each image/line of output, or is everything printed all at once after a long delay?

@srcshelton
Contributor

How does podman images perform in these cases - is there a notable delay between printing each image/line of output, or is everything printed all at once after a long delay?

Lengthy delay (see #6984 for more details - it took over 80s for ~160 images!), and then all output appears simultaneously.

The duration of the delay does not seem to depend on the data being displayed - podman image ls -q, which shows IDs only, takes almost as long to produce output as the default display with image sizes, etc.

@rhatdan
Member

rhatdan commented Jul 16, 2020

Could you install buildah and do a buildah images?

This would tell us if the problem is in containers/storage or podman/libpod.

@baude
Member

baude commented Jul 16, 2020

do you observe the issue with rootfull and rootless? or just rootless?

@carlpett
Author

Curiously, today it is fast - podman lists the 186 images in 2.2s, and buildah in 1.7s. I'm not aware of having made any relevant changes since yesterday, so I'm not sure why this happened all of a sudden...

do you observe the issue with rootfull and rootless? or just rootless?

I only really use podman rootless, so I don't know if I'd have the same problem with rootful, I'm afraid.

@srcshelton
Contributor

I'm running as root, but from a 64-bit chroot on a system with an otherwise 32-bit userland...

It just took podman ~43s to list 60 images, and buildah took 40.5 seconds - so it looks as if it's not just a podman issue.

@vrothberg
Member

#7215 will eventually solve the issue. Thanks to everybody involved for providing the details.

@vrothberg
Member

vrothberg commented Aug 7, 2020

I only really use podman rootless, so I don't know if I'd have the same problem with rootful, I'm afraid.

I am currently experiencing a huge performance decrease as root. With #7215 it takes 7 sec to list 510 images (yesterday it was 0.3 sec), while a vanilla Podman took 3 min 31 sec (6 sec yesterday).

@rhatdan @baude ... there's still something yet to be revealed.

@rhatdan
Member

rhatdan commented Aug 7, 2020

I'll take 7 seconds.

@carlpett
Author

carlpett commented Aug 7, 2020

FWIW my laptop is now back to its slow self (without my doing anything actively). 18m25s for 185 images. Anything I can check, or is the linked PR already certain to have found the underlying cause, @vrothberg?

@vrothberg
Member

FWIW my laptop is now back to its slow self (without my doing anything actively). 18m25s for 185 images. Anything I can check, or is the linked PR already certain to have found the underlying cause, @vrothberg?

#7215 is definitely a huge improvement. Feel free to try it out on your machine.

However, yesterday I experienced something that has already been reported in this issue: seemingly random performance issues. For no apparent reason listing 510 images went up from 0.3 secs to 7 secs (with #7215) and from 6 sec to 3.5 mins with an unpatched Podman.

I suspect there's still something going on, very likely disk-related in c/storage.

@vrothberg
Member

Another theory is that another service may have been consuming disk I/O for a while. The performance issue on my F32 machine was only temporary and went back to normal after ~15 mins (estimated) without any interference.

Maybe, F32 has some periodic disk checks running? @edsantiago do you know?

@carlpett
Author

carlpett commented Aug 8, 2020

I downloaded the build from the PR (https://cirrus-ci.com/task/5154042487767040), and here's the result:

$ time Downloads/podman images | wc -l
186

real    0m6.974s
user    0m10.592s
sys     0m0.305s

So quite a bit better than 15-20 minutes, even though not as good as your sub-second timings.

I'm watching iotop while running the old version, and it's pretty flat at 0 kB/s, so it doesn't seem like an obvious contention issue. I can even run the new build while the old one is processing and get roughly the same timings (I've got an NVMe disk as well, so disk operations are normally pretty snappy).

@vrothberg
Member

Thanks for testing, @carlpett! That's pretty good news.

I'll run some analyses as well when I hit the issue again. I didn't expect it to disappear so quickly.

@rhatdan
Member

rhatdan commented Aug 9, 2020

Can we close this issue now?

@vrothberg
Member

Can we close this issue now?

I prefer to keep it open until we've resolved the random performance decrease. We're way faster now but there's still something to unpack.

@vrothberg
Member

Couldn't reproduce it anymore. Listing 1210 images takes 0.71 seconds, which is decent performance. I am going to close this, but feel free to comment or create a new issue if the seemingly random issues occur again.

@TriplEight

Just faced this problem, but apparently podman images (the same thing happened with podman run) was waiting for buildah push to finish.

$ buildah push --format=v2s2 docker.io/company/test:ubuntu
Getting image source signatures
Copying blob aad9f99a1286 [====================================>-] 1.1GiB / 1.1GiB
Copying blob 9069f84dbbe9 skipped: already exists
Copying blob f6253634dc78 skipped: already exists
Copying blob bacd3af13903 skipped: already exists
$ time podman images --log-level debug    (Mon 28 Dec 2020 11:17:26 UTC)
INFO[0000] podman filtering at log level debug
DEBU[0000] Called images.PersistentPreRunE(podman images --log-level debug)
DEBU[0000] Reading configuration file "/usr/share/containers/containers.conf"
DEBU[0000] Merged system config "/usr/share/containers/containers.conf": &{Containers:{Devices:[] Volumes:[] ApparmorProfile:containers-default-0.29.0 Annotations:[] CgroupNS:host Cgroups:enabled DefaultCapabilities:[CAP_AUDIT_WRITE CAP_CHOWN CAP_DAC_OVERRIDE CAP_FOWNER CAP_FSETID CAP_KILL CAP_MKNOD CAP_NET_BIND_SERVICE CAP_NET_RAW CAP_SETFCAP CAP_SETGID CAP_SETPCAP CAP_SETUID CAP_SYS_CHROOT] DefaultSysctls:[] DefaultUlimits:[] DefaultMountsFile: DNSServers:[] DNSOptions:[] DNSSearches:[] EnableKeyring:true EnableLabeling:false Env:[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin TERM=xterm] EnvHost:false HTTPProxy:false Init:false InitPath: IPCNS:private LogDriver:k8s-file LogSizeMax:-1 NetNS:slirp4netns NoHosts:false PidsLimit:2048 PidNS:private SeccompProfile:/usr/share/containers/seccomp.json ShmSize:65536k TZ: Umask:0022 UTSNS:private UserNS:host UserNSSize:65536} Engine:{ImageBuildFormat:oci CgroupCheck:false CgroupManager:cgroupfs ConmonEnvVars:[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] ConmonPath:[/usr/libexec/podman/conmon /usr/local/libexec/podman/conmon /usr/local/lib/podman/conmon /usr/bin/conmon /usr/sbin/conmon /usr/local/bin/conmon /usr/local/sbin/conmon /run/current-system/sw/bin/conmon] DetachKeys:ctrl-p,ctrl-q EnablePortReservation:true Env:[] EventsLogFilePath:/run/user/1000/libpod/tmp/events/events.log EventsLogger:journald HooksDir:[/usr/share/containers/oci/hooks.d] ImageDefaultTransport:docker:// InfraCommand: InfraImage:k8s.gcr.io/pause:3.2 InitPath:/usr/libexec/podman/catatonit LockType:shm MultiImageArchive:false Namespace: NetworkCmdPath: NoPivotRoot:false NumLocks:2048 OCIRuntime:runc OCIRuntimes:map[crun:[/usr/bin/crun /usr/sbin/crun /usr/local/bin/crun /usr/local/sbin/crun /sbin/crun /bin/crun /run/current-system/sw/bin/crun] kata:[/usr/bin/kata-runtime /usr/sbin/kata-runtime /usr/local/bin/kata-runtime /usr/local/sbin/kata-runtime /sbin/kata-runtime /bin/kata-runtime /usr/bin/kata-qemu 
/usr/bin/kata-fc] runc:[/usr/lib/cri-o-runc/sbin/runc /usr/sbin/runc /usr/bin/runc /usr/local/bin/runc /usr/local/sbin/runc /sbin/runc /bin/runc]] PullPolicy:missing Remote:false RemoteURI: RemoteIdentity: ActiveService: ServiceDestinations:map[] RuntimePath:[] RuntimeSupportsJSON:[crun runc] RuntimeSupportsNoCgroups:[crun] RuntimeSupportsKVM:[kata kata-runtime kata-qemu kata-fc] SetOptions:{StorageConfigRunRootSet:false StorageConfigGraphRootSet:false StorageConfigGraphDriverNameSet:false StaticDirSet:false VolumePathSet:false TmpDirSet:false} SignaturePolicyPath:/etc/containers/policy.json SDNotify:false StateType:3 StaticDir:/home/user/.local/share/containers/storage/libpod StopTimeout:10 TmpDir:/run/user/1000/libpod/tmp VolumePath:/home/user/.local/share/containers/storage/volumes} Network:{CNIPluginDirs:[/usr/libexec/cni /usr/lib/cni /usr/local/lib/cni /opt/cni/bin] DefaultNetwork:podman NetworkConfigDir:/home/user/.config/cni/net.d}}
DEBU[0000] Reading configuration file "/etc/containers/containers.conf"
DEBU[0000] Merged system config "/etc/containers/containers.conf": &{Containers:{Devices:[] Volumes:[] ApparmorProfile:containers-default-0.29.0 Annotations:[] CgroupNS:host Cgroups:enabled DefaultCapabilities:[CAP_AUDIT_WRITE CAP_CHOWN CAP_DAC_OVERRIDE CAP_FOWNER CAP_FSETID CAP_KILL CAP_MKNOD CAP_NET_BIND_SERVICE CAP_NET_RAW CAP_SETFCAP CAP_SETGID CAP_SETPCAP CAP_SETUID CAP_SYS_CHROOT] DefaultSysctls:[] DefaultUlimits:[] DefaultMountsFile: DNSServers:[] DNSOptions:[] DNSSearches:[] EnableKeyring:true EnableLabeling:false Env:[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin TERM=xterm] EnvHost:false HTTPProxy:false Init:false InitPath: IPCNS:private LogDriver:k8s-file LogSizeMax:-1 NetNS:slirp4netns NoHosts:false PidsLimit:2048 PidNS:private SeccompProfile:/usr/share/containers/seccomp.json ShmSize:65536k TZ: Umask:0022 UTSNS:private UserNS:host UserNSSize:65536} Engine:{ImageBuildFormat:oci CgroupCheck:false CgroupManager:cgroupfs ConmonEnvVars:[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] ConmonPath:[/usr/libexec/podman/conmon /usr/local/libexec/podman/conmon /usr/local/lib/podman/conmon /usr/bin/conmon /usr/sbin/conmon /usr/local/bin/conmon /usr/local/sbin/conmon /run/current-system/sw/bin/conmon] DetachKeys:ctrl-p,ctrl-q EnablePortReservation:true Env:[] EventsLogFilePath:/run/user/1000/libpod/tmp/events/events.log EventsLogger:journald HooksDir:[/usr/share/containers/oci/hooks.d] ImageDefaultTransport:docker:// InfraCommand: InfraImage:k8s.gcr.io/pause:3.2 InitPath:/usr/libexec/podman/catatonit LockType:shm MultiImageArchive:false Namespace: NetworkCmdPath: NoPivotRoot:false NumLocks:2048 OCIRuntime:runc OCIRuntimes:map[crun:[/usr/bin/crun /usr/sbin/crun /usr/local/bin/crun /usr/local/sbin/crun /sbin/crun /bin/crun /run/current-system/sw/bin/crun] kata:[/usr/bin/kata-runtime /usr/sbin/kata-runtime /usr/local/bin/kata-runtime /usr/local/sbin/kata-runtime /sbin/kata-runtime /bin/kata-runtime /usr/bin/kata-qemu 
/usr/bin/kata-fc] runc:[/usr/lib/cri-o-runc/sbin/runc /usr/sbin/runc /usr/bin/runc /usr/local/bin/runc /usr/local/sbin/runc /sbin/runc /bin/runc]] PullPolicy:missing Remote:false RemoteURI: RemoteIdentity: ActiveService: ServiceDestinations:map[] RuntimePath:[] RuntimeSupportsJSON:[crun runc] RuntimeSupportsNoCgroups:[crun] RuntimeSupportsKVM:[kata kata-runtime kata-qemu kata-fc] SetOptions:{StorageConfigRunRootSet:false StorageConfigGraphRootSet:false StorageConfigGraphDriverNameSet:false StaticDirSet:false VolumePathSet:false TmpDirSet:false} SignaturePolicyPath:/etc/containers/policy.json SDNotify:false StateType:3 StaticDir:/home/user/.local/share/containers/storage/libpod StopTimeout:10 TmpDir:/run/user/1000/libpod/tmp VolumePath:/home/user/.local/share/containers/storage/volumes} Network:{CNIPluginDirs:[/usr/libexec/cni /usr/lib/cni /usr/local/lib/cni /opt/cni/bin] DefaultNetwork:podman NetworkConfigDir:/home/user/.config/cni/net.d}}
DEBU[0000] Reading configuration file "/home/user/.config/containers/containers.conf"
DEBU[0000] Merged system config "/home/user/.config/containers/containers.conf": &{Containers:{Devices:[] Volumes:[] ApparmorProfile:containers-default-0.29.0 Annotations:[] CgroupNS:host Cgroups:enabled DefaultCapabilities:[CAP_AUDIT_WRITE CAP_CHOWN CAP_DAC_OVERRIDE CAP_FOWNER CAP_FSETID CAP_KILL CAP_MKNOD CAP_NET_BIND_SERVICE CAP_NET_RAW CAP_SETFCAP CAP_SETGID CAP_SETPCAP CAP_SETUID CAP_SYS_CHROOT] DefaultSysctls:[] DefaultUlimits:[] DefaultMountsFile: DNSServers:[] DNSOptions:[] DNSSearches:[] EnableKeyring:true EnableLabeling:false Env:[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin TERM=xterm] EnvHost:false HTTPProxy:false Init:false InitPath: IPCNS:private LogDriver:k8s-file LogSizeMax:-1 NetNS:slirp4netns NoHosts:false PidsLimit:2048 PidNS:private SeccompProfile:/usr/share/containers/seccomp.json ShmSize:65536k TZ: Umask:0022 UTSNS:private UserNS:host UserNSSize:65536} Engine:{ImageBuildFormat:oci CgroupCheck:false CgroupManager:cgroupfs ConmonEnvVars:[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] ConmonPath:[/usr/libexec/podman/conmon /usr/local/libexec/podman/conmon /usr/local/lib/podman/conmon /usr/bin/conmon /usr/sbin/conmon /usr/local/bin/conmon /usr/local/sbin/conmon /run/current-system/sw/bin/conmon] DetachKeys:ctrl-p,ctrl-q EnablePortReservation:true Env:[] EventsLogFilePath:/run/user/1000/libpod/tmp/events/events.log EventsLogger:journald HooksDir:[/usr/share/containers/oci/hooks.d] ImageDefaultTransport:docker:// InfraCommand: InfraImage:k8s.gcr.io/pause:3.2 InitPath:/usr/libexec/podman/catatonit LockType:shm MultiImageArchive:false Namespace: NetworkCmdPath: NoPivotRoot:false NumLocks:2048 OCIRuntime:runc OCIRuntimes:map[crun:[/usr/bin/crun /usr/sbin/crun /usr/local/bin/crun /usr/local/sbin/crun /sbin/crun /bin/crun /run/current-system/sw/bin/crun] kata:[/usr/bin/kata-runtime /usr/sbin/kata-runtime /usr/local/bin/kata-runtime /usr/local/sbin/kata-runtime /sbin/kata-runtime /bin/kata-runtime 
/usr/bin/kata-qemu /usr/bin/kata-fc] runc:[/usr/lib/cri-o-runc/sbin/runc /usr/sbin/runc /usr/bin/runc /usr/local/bin/runc /usr/local/sbin/runc /sbin/runc /bin/runc]] PullPolicy:missing Remote:false RemoteURI: RemoteIdentity: ActiveService: ServiceDestinations:map[] RuntimePath:[] RuntimeSupportsJSON:[crun runc] RuntimeSupportsNoCgroups:[crun] RuntimeSupportsKVM:[kata kata-runtime kata-qemu kata-fc] SetOptions:{StorageConfigRunRootSet:false StorageConfigGraphRootSet:false StorageConfigGraphDriverNameSet:false StaticDirSet:false VolumePathSet:false TmpDirSet:false} SignaturePolicyPath:/etc/containers/policy.json SDNotify:false StateType:3 StaticDir:/home/user/.local/share/containers/storage/libpod StopTimeout:10 TmpDir:/run/user/1000/libpod/tmp VolumePath:/home/user/.local/share/containers/storage/volumes} Network:{CNIPluginDirs:[/usr/libexec/cni /usr/lib/cni /usr/local/lib/cni /opt/cni/bin] DefaultNetwork:podman NetworkConfigDir:/home/user/.config/cni/net.d}}
DEBU[0000] Using conmon: "/usr/libexec/podman/conmon"
DEBU[0000] Initializing boltdb state at /home/user/.local/share/containers/storage/libpod/bolt_state.db
DEBU[0000] Using graph driver vfs
DEBU[0000] Using graph root /home/user/.local/share/containers/storage
DEBU[0000] Using run root /run/user/1000
DEBU[0000] Using static dir /home/user/.local/share/containers/storage/libpod
DEBU[0000] Using tmp dir /run/user/1000/libpod/tmp
DEBU[0000] Using volume path /home/user/.local/share/containers/storage/volumes
DEBU[0000] Set libpod namespace to ""
DEBU[0000] Not configuring container store
DEBU[0000] Initializing event backend journald
DEBU[0000] using runtime "/usr/lib/cri-o-runc/sbin/runc"
WARN[0000] Error initializing configured OCI runtime crun: no valid executable found for OCI runtime crun: invalid argument
WARN[0000] Error initializing configured OCI runtime kata: no valid executable found for OCI runtime kata: invalid argument
INFO[0000] Setting parallel job count to 25
INFO[0000] podman filtering at log level debug
DEBU[0000] Called images.PersistentPreRunE(podman images --log-level debug)
DEBU[0000] Reading configuration file "/usr/share/containers/containers.conf"
DEBU[0000] Merged system config "/usr/share/containers/containers.conf": &{Containers:{Devices:[] Volumes:[] ApparmorProfile:containers-default-0.29.0 Annotations:[] CgroupNS:host Cgroups:enabled DefaultCapabilities:[CAP_AUDIT_WRITE CAP_CHOWN CAP_DAC_OVERRIDE CAP_FOWNER CAP_FSETID CAP_KILL CAP_MKNOD CAP_NET_BIND_SERVICE CAP_NET_RAW CAP_SETFCAP CAP_SETGID CAP_SETPCAP CAP_SETUID CAP_SYS_CHROOT] DefaultSysctls:[] DefaultUlimits:[] DefaultMountsFile: DNSServers:[] DNSOptions:[] DNSSearches:[] EnableKeyring:true EnableLabeling:false Env:[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin TERM=xterm] EnvHost:false HTTPProxy:false Init:false InitPath: IPCNS:private LogDriver:k8s-file LogSizeMax:-1 NetNS:slirp4netns NoHosts:false PidsLimit:2048 PidNS:private SeccompProfile:/usr/share/containers/seccomp.json ShmSize:65536k TZ: Umask:0022 UTSNS:private UserNS:host UserNSSize:65536} Engine:{ImageBuildFormat:oci CgroupCheck:false CgroupManager:cgroupfs ConmonEnvVars:[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] ConmonPath:[/usr/libexec/podman/conmon /usr/local/libexec/podman/conmon /usr/local/lib/podman/conmon /usr/bin/conmon /usr/sbin/conmon /usr/local/bin/conmon /usr/local/sbin/conmon /run/current-system/sw/bin/conmon] DetachKeys:ctrl-p,ctrl-q EnablePortReservation:true Env:[] EventsLogFilePath:/run/user/1000/libpod/tmp/events/events.log EventsLogger:journald HooksDir:[/usr/share/containers/oci/hooks.d] ImageDefaultTransport:docker:// InfraCommand: InfraImage:k8s.gcr.io/pause:3.2 InitPath:/usr/libexec/podman/catatonit LockType:shm MultiImageArchive:false Namespace: NetworkCmdPath: NoPivotRoot:false NumLocks:2048 OCIRuntime:runc OCIRuntimes:map[crun:[/usr/bin/crun /usr/sbin/crun /usr/local/bin/crun /usr/local/sbin/crun /sbin/crun /bin/crun /run/current-system/sw/bin/crun] kata:[/usr/bin/kata-runtime /usr/sbin/kata-runtime /usr/local/bin/kata-runtime /usr/local/sbin/kata-runtime /sbin/kata-runtime /bin/kata-runtime /usr/bin/kata-qemu 
/usr/bin/kata-fc] runc:[/usr/lib/cri-o-runc/sbin/runc /usr/sbin/runc /usr/bin/runc /usr/local/bin/runc /usr/local/sbin/runc /sbin/runc /bin/runc]] PullPolicy:missing Remote:false RemoteURI: RemoteIdentity: ActiveService: ServiceDestinations:map[] RuntimePath:[] RuntimeSupportsJSON:[crun runc] RuntimeSupportsNoCgroups:[crun] RuntimeSupportsKVM:[kata kata-runtime kata-qemu kata-fc] SetOptions:{StorageConfigRunRootSet:false StorageConfigGraphRootSet:false StorageConfigGraphDriverNameSet:false StaticDirSet:false VolumePathSet:false TmpDirSet:false} SignaturePolicyPath:/etc/containers/policy.json SDNotify:false StateType:3 StaticDir:/home/user/.local/share/containers/storage/libpod StopTimeout:10 TmpDir:/run/user/1000/libpod/tmp VolumePath:/home/user/.local/share/containers/storage/volumes} Network:{CNIPluginDirs:[/usr/libexec/cni /usr/lib/cni /usr/local/lib/cni /opt/cni/bin] DefaultNetwork:podman NetworkConfigDir:/home/user/.config/cni/net.d}}
DEBU[0000] Reading configuration file "/etc/containers/containers.conf"
DEBU[0000] Merged system config "/etc/containers/containers.conf": &{Containers:{Devices:[] Volumes:[] ApparmorProfile:containers-default-0.29.0 Annotations:[] CgroupNS:host Cgroups:enabled DefaultCapabilities:[CAP_AUDIT_WRITE CAP_CHOWN CAP_DAC_OVERRIDE CAP_FOWNER CAP_FSETID CAP_KILL CAP_MKNOD CAP_NET_BIND_SERVICE CAP_NET_RAW CAP_SETFCAP CAP_SETGID CAP_SETPCAP CAP_SETUID CAP_SYS_CHROOT] DefaultSysctls:[] DefaultUlimits:[] DefaultMountsFile: DNSServers:[] DNSOptions:[] DNSSearches:[] EnableKeyring:true EnableLabeling:false Env:[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin TERM=xterm] EnvHost:false HTTPProxy:false Init:false InitPath: IPCNS:private LogDriver:k8s-file LogSizeMax:-1 NetNS:slirp4netns NoHosts:false PidsLimit:2048 PidNS:private SeccompProfile:/usr/share/containers/seccomp.json ShmSize:65536k TZ: Umask:0022 UTSNS:private UserNS:host UserNSSize:65536} Engine:{ImageBuildFormat:oci CgroupCheck:false CgroupManager:cgroupfs ConmonEnvVars:[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] ConmonPath:[/usr/libexec/podman/conmon /usr/local/libexec/podman/conmon /usr/local/lib/podman/conmon /usr/bin/conmon /usr/sbin/conmon /usr/local/bin/conmon /usr/local/sbin/conmon /run/current-system/sw/bin/conmon] DetachKeys:ctrl-p,ctrl-q EnablePortReservation:true Env:[] EventsLogFilePath:/run/user/1000/libpod/tmp/events/events.log EventsLogger:journald HooksDir:[/usr/share/containers/oci/hooks.d] ImageDefaultTransport:docker:// InfraCommand: InfraImage:k8s.gcr.io/pause:3.2 InitPath:/usr/libexec/podman/catatonit LockType:shm MultiImageArchive:false Namespace: NetworkCmdPath: NoPivotRoot:false NumLocks:2048 OCIRuntime:runc OCIRuntimes:map[crun:[/usr/bin/crun /usr/sbin/crun /usr/local/bin/crun /usr/local/sbin/crun /sbin/crun /bin/crun /run/current-system/sw/bin/crun] kata:[/usr/bin/kata-runtime /usr/sbin/kata-runtime /usr/local/bin/kata-runtime /usr/local/sbin/kata-runtime /sbin/kata-runtime /bin/kata-runtime /usr/bin/kata-qemu 
/usr/bin/kata-fc] runc:[/usr/lib/cri-o-runc/sbin/runc /usr/sbin/runc /usr/bin/runc /usr/local/bin/runc /usr/local/sbin/runc /sbin/runc /bin/runc]] PullPolicy:missing Remote:false RemoteURI: RemoteIdentity: ActiveService: ServiceDestinations:map[] RuntimePath:[] RuntimeSupportsJSON:[crun runc] RuntimeSupportsNoCgroups:[crun] RuntimeSupportsKVM:[kata kata-runtime kata-qemu kata-fc] SetOptions:{StorageConfigRunRootSet:false StorageConfigGraphRootSet:false StorageConfigGraphDriverNameSet:false StaticDirSet:false VolumePathSet:false TmpDirSet:false} SignaturePolicyPath:/etc/containers/policy.json SDNotify:false StateType:3 StaticDir:/home/user/.local/share/containers/storage/libpod StopTimeout:10 TmpDir:/run/user/1000/libpod/tmp VolumePath:/home/user/.local/share/containers/storage/volumes} Network:{CNIPluginDirs:[/usr/libexec/cni /usr/lib/cni /usr/local/lib/cni /opt/cni/bin] DefaultNetwork:podman NetworkConfigDir:/home/user/.config/cni/net.d}}
DEBU[0000] Reading configuration file "/home/user/.config/containers/containers.conf"
DEBU[0000] Merged system config "/home/user/.config/containers/containers.conf": &{Containers:{Devices:[] Volumes:[] ApparmorProfile:containers-default-0.29.0 Annotations:[] CgroupNS:host Cgroups:enabled DefaultCapabilities:[CAP_AUDIT_WRITE CAP_CHOWN CAP_DAC_OVERRIDE CAP_FOWNER CAP_FSETID CAP_KILL CAP_MKNOD CAP_NET_BIND_SERVICE CAP_NET_RAW CAP_SETFCAP CAP_SETGID CAP_SETPCAP CAP_SETUID CAP_SYS_CHROOT] DefaultSysctls:[] DefaultUlimits:[] DefaultMountsFile: DNSServers:[] DNSOptions:[] DNSSearches:[] EnableKeyring:true EnableLabeling:false Env:[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin TERM=xterm] EnvHost:false HTTPProxy:false Init:false InitPath: IPCNS:private LogDriver:k8s-file LogSizeMax:-1 NetNS:slirp4netns NoHosts:false PidsLimit:2048 PidNS:private SeccompProfile:/usr/share/containers/seccomp.json ShmSize:65536k TZ: Umask:0022 UTSNS:private UserNS:host UserNSSize:65536} Engine:{ImageBuildFormat:oci CgroupCheck:false CgroupManager:cgroupfs ConmonEnvVars:[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] ConmonPath:[/usr/libexec/podman/conmon /usr/local/libexec/podman/conmon /usr/local/lib/podman/conmon /usr/bin/conmon /usr/sbin/conmon /usr/local/bin/conmon /usr/local/sbin/conmon /run/current-system/sw/bin/conmon] DetachKeys:ctrl-p,ctrl-q EnablePortReservation:true Env:[] EventsLogFilePath:/run/user/1000/libpod/tmp/events/events.log EventsLogger:journald HooksDir:[/usr/share/containers/oci/hooks.d] ImageDefaultTransport:docker:// InfraCommand: InfraImage:k8s.gcr.io/pause:3.2 InitPath:/usr/libexec/podman/catatonit LockType:shm MultiImageArchive:false Namespace: NetworkCmdPath: NoPivotRoot:false NumLocks:2048 OCIRuntime:runc OCIRuntimes:map[crun:[/usr/bin/crun /usr/sbin/crun /usr/local/bin/crun /usr/local/sbin/crun /sbin/crun /bin/crun /run/current-system/sw/bin/crun] kata:[/usr/bin/kata-runtime /usr/sbin/kata-runtime /usr/local/bin/kata-runtime /usr/local/sbin/kata-runtime /sbin/kata-runtime /bin/kata-runtime 
/usr/bin/kata-qemu /usr/bin/kata-fc] runc:[/usr/lib/cri-o-runc/sbin/runc /usr/sbin/runc /usr/bin/runc /usr/local/bin/runc /usr/local/sbin/runc /sbin/runc /bin/runc]] PullPolicy:missing Remote:false RemoteURI: RemoteIdentity: ActiveService: ServiceDestinations:map[] RuntimePath:[] RuntimeSupportsJSON:[crun runc] RuntimeSupportsNoCgroups:[crun] RuntimeSupportsKVM:[kata kata-runtime kata-qemu kata-fc] SetOptions:{StorageConfigRunRootSet:false StorageConfigGraphRootSet:false StorageConfigGraphDriverNameSet:false StaticDirSet:false VolumePathSet:false TmpDirSet:false} SignaturePolicyPath:/etc/containers/policy.json SDNotify:false StateType:3 StaticDir:/home/user/.local/share/containers/storage/libpod StopTimeout:10 TmpDir:/run/user/1000/libpod/tmp VolumePath:/home/user/.local/share/containers/storage/volumes} Network:{CNIPluginDirs:[/usr/libexec/cni /usr/lib/cni /usr/local/lib/cni /opt/cni/bin] DefaultNetwork:podman NetworkConfigDir:/home/user/.config/cni/net.d}}
DEBU[0000] Using conmon: "/usr/libexec/podman/conmon"
DEBU[0000] Initializing boltdb state at /home/user/.local/share/containers/storage/libpod/bolt_state.db
DEBU[0000] Using graph driver vfs
DEBU[0000] Using graph root /home/user/.local/share/containers/storage
DEBU[0000] Using run root /run/user/1000
DEBU[0000] Using static dir /home/user/.local/share/containers/storage/libpod
DEBU[0000] Using tmp dir /run/user/1000/libpod/tmp
DEBU[0000] Using volume path /home/user/.local/share/containers/storage/volumes
DEBU[0000] Set libpod namespace to ""
DEBU[0000] [graphdriver] trying provided driver "vfs"
DEBU[0000] Initializing event backend journald
WARN[0000] Error initializing configured OCI runtime crun: no valid executable found for OCI runtime crun: invalid argument
WARN[0000] Error initializing configured OCI runtime kata: no valid executable found for OCI runtime kata: invalid argument
DEBU[0000] using runtime "/usr/lib/cri-o-runc/sbin/runc"
INFO[0000] Setting parallel job count to 25
DEBU[0406] created container "d0a7661ca3937dfcabb9b7a337b1660f3f16c3705530b4e9c1c818403a949805"
DEBU[0316] created container "5955a009c39441b33410eeca0c23d33f1c9db8f42182d44d9e032d1316bda23c"
DEBU[0135] parsed reference into "[vfs@/home/user/.local/share/containers/storage+/run/user/1000]@e5255be62ac6a3e8418ca5695a6fe88b784dd3df62d2a82c2d2d2ddac82dd332"
DEBU[0135] exporting opaque data as blob "sha256:e5255be62ac6a3e8418ca5695a6fe88b784dd3df62d2a82c2d2d2ddac82dd332"
DEBU[0135] parsed reference into "[vfs@/home/user/.local/share/containers/storage+/run/user/1000]@b5ea511d0591307c1a0bd03545772241ed59f3d110a07ef373d8aca482c3e91b"
DEBU[0135] exporting opaque data as blob "sha256:b5ea511d0591307c1a0bd03545772241ed59f3d110a07ef373d8aca482c3e91b"
DEBU[0316] container "5955a009c39441b33410eeca0c23d33f1c9db8f42182d44d9e032d1316bda23c" has work directory "/home/user/.local/share/containers/storage/vfs-containers/5955a009c39441b33410eeca0c23d33f1c9db8f42182d44d9e032d1316bda23c/userdata"
DEBU[0316] container "5955a009c39441b33410eeca0c23d33f1c9db8f42182d44d9e032d1316bda23c" has run directory "/run/user/1000/vfs-containers/5955a009c39441b33410eeca0c23d33f1c9db8f42182d44d9e032d1316bda23c/userdata"
DEBU[0407] container "d0a7661ca3937dfcabb9b7a337b1660f3f16c3705530b4e9c1c818403a949805" has work directory "/home/user/.local/share/containers/storage/vfs-containers/d0a7661ca3937dfcabb9b7a337b1660f3f16c3705530b4e9c1c818403a949805/userdata"
DEBU[0407] container "d0a7661ca3937dfcabb9b7a337b1660f3f16c3705530b4e9c1c818403a949805" has run directory "/run/user/1000/vfs-containers/d0a7661ca3937dfcabb9b7a337b1660f3f16c3705530b4e9c1c818403a949805/userdata"
INFO[0316] Invoking shutdown handler libpod
DEBU[0135] parsed reference into "[vfs@/home/user/.local/share/containers/storage+/run/user/1000]@668f4ce3c0710983a78d8d93bc7a4f5e1ec02a69279c1871c194136e4825cb4f"
DEBU[0135] exporting opaque data as blob "sha256:668f4ce3c0710983a78d8d93bc7a4f5e1ec02a69279c1871c194136e4825cb4f"
DEBU[0135] parsed reference into "[vfs@/home/user/.local/share/containers/storage+/run/user/1000]@4e811c583aba949a467797eeac122e4f5cc037dc7e14b4a47dc2be83ce0b86b6"
DEBU[0135] exporting opaque data as blob "sha256:4e811c583aba949a467797eeac122e4f5cc037dc7e14b4a47dc2be83ce0b86b6"
DEBU[0135] parsed reference into "[vfs@/home/user/.local/share/containers/storage+/run/user/1000]@082235aa6868deaea6fab68fef165c0d4ebb99fcee211674e6bfa7c7049b73a2"
DEBU[0135] exporting opaque data as blob "sha256:082235aa6868deaea6fab68fef165c0d4ebb99fcee211674e6bfa7c7049b73a2"
DEBU[0135] parsed reference into "[vfs@/home/user/.local/share/containers/storage+/run/user/1000]@e05c33e57001e8fdd72d473e2ae9c4b1c0e10cd8c1cddbae05ff160cdfe9b5fd"
DEBU[0135] exporting opaque data as blob "sha256:e05c33e57001e8fdd72d473e2ae9c4b1c0e10cd8c1cddbae05ff160cdfe9b5fd"
DEBU[0135] parsed reference into "[vfs@/home/user/.local/share/containers/storage+/run/user/1000]@048050bbeea56c0d242d92af5bfbceab93b9de03be892c83b91293cbaab73737"
DEBU[0135] exporting opaque data as blob "sha256:048050bbeea56c0d242d92af5bfbceab93b9de03be892c83b91293cbaab73737"
DEBU[0135] parsed reference into "[vfs@/home/user/.local/share/containers/storage+/run/user/1000]@389fef7118515c70fd6c0e0d50bb75669942ea722ccb976507d7b087e54d5a23"
DEBU[0135] exporting opaque data as blob "sha256:389fef7118515c70fd6c0e0d50bb75669942ea722ccb976507d7b087e54d5a23"
DEBU[0135] parsed reference into "[vfs@/home/user/.local/share/containers/storage+/run/user/1000]@a9ba10a9e26094bb25f1d5793a4565664f509f91600afb76a2c0084fa008e0d5"
DEBU[0135] exporting opaque data as blob "sha256:a9ba10a9e26094bb25f1d5793a4565664f509f91600afb76a2c0084fa008e0d5"
DEBU[0135] parsed reference into "[vfs@/home/user/.local/share/containers/storage+/run/user/1000]@464fb59710110818377148def1bb717f57ad258273a7a00d54803f8e27df719f"
DEBU[0135] exporting opaque data as blob "sha256:464fb59710110818377148def1bb717f57ad258273a7a00d54803f8e27df719f"
DEBU[0135] parsed reference into "[vfs@/home/user/.local/share/containers/storage+/run/user/1000]@7d8ea87973c2574e7f8cad825170185b0eb9fe877b32017b4cde0b3c991469e5"
DEBU[0135] exporting opaque data as blob "sha256:7d8ea87973c2574e7f8cad825170185b0eb9fe877b32017b4cde0b3c991469e5"
DEBU[0135] parsed reference into "[vfs@/home/user/.local/share/containers/storage+/run/user/1000]@235c81aec9d6f1747fa264126e58ea5d9b2adb7aae3ad3b132dbb885a257fe5a"
DEBU[0135] exporting opaque data as blob "sha256:235c81aec9d6f1747fa264126e58ea5d9b2adb7aae3ad3b132dbb885a257fe5a"
DEBU[0135] parsed reference into "[vfs@/home/user/.local/share/containers/storage+/run/user/1000]@442197e1d2599591bbd600a7900ae5ec16533e82dcf8351f72c1ff1a7b7921ff"
DEBU[0135] exporting opaque data as blob "sha256:442197e1d2599591bbd600a7900ae5ec16533e82dcf8351f72c1ff1a7b7921ff"
DEBU[0135] parsed reference into "[vfs@/home/user/.local/share/containers/storage+/run/user/1000]@7248c2c6deb38264d7e19da065ba58de56ba3c2f3338c4700c005fa4523ec014"
DEBU[0135] exporting opaque data as blob "sha256:7248c2c6deb38264d7e19da065ba58de56ba3c2f3338c4700c005fa4523ec014"
DEBU[0135] parsed reference into "[vfs@/home/user/.local/share/containers/storage+/run/user/1000]@4a9cd57610d60c20bc1ed3bc9324cc04356e1f3e15b9619dcc37db1138adf9c6"
DEBU[0135] exporting opaque data as blob "sha256:4a9cd57610d60c20bc1ed3bc9324cc04356e1f3e15b9619dcc37db1138adf9c6"
DEBU[0135] parsed reference into "[vfs@/home/user/.local/share/containers/storage+/run/user/1000]@29504c3d3234b99de134cc5ed907a9ef0bbf13f985623665bd99117f5a2b973c"
DEBU[0135] exporting opaque data as blob "sha256:29504c3d3234b99de134cc5ed907a9ef0bbf13f985623665bd99117f5a2b973c"
DEBU[0135] parsed reference into "[vfs@/home/user/.local/share/containers/storage+/run/user/1000]@2850af11560657e2b9c685787ec991f013daf86f71db6369e43cab1af8b9e315"
DEBU[0135] exporting opaque data as blob "sha256:2850af11560657e2b9c685787ec991f013daf86f71db6369e43cab1af8b9e315"
DEBU[0135] parsed reference into "[vfs@/home/user/.local/share/containers/storage+/run/user/1000]@d2a6ad185aa54d5dce2750eed9fa8e8c406d20545233307d043249a05c62cc51"
INFO[0407] Invoking shutdown handler libpod
DEBU[0135] exporting opaque data as blob "sha256:d2a6ad185aa54d5dce2750eed9fa8e8c406d20545233307d043249a05c62cc51"
DEBU[0135] parsed reference into "[vfs@/home/user/.local/share/containers/storage+/run/user/1000]@0b01cfe8ed1d6c4a1139aa209c5e3c6eee7be44c2aa7cf0c8b5a23b16fe100c6"
DEBU[0135] exporting opaque data as blob "sha256:0b01cfe8ed1d6c4a1139aa209c5e3c6eee7be44c2aa7cf0c8b5a23b16fe100c6"
DEBU[0135] parsed reference into "[vfs@/home/user/.local/share/containers/storage+/run/user/1000]@f643c72bc25212974c16f3348b3a898b1ec1eb13ec1539e10a103e6e217eb2f1"
DEBU[0135] exporting opaque data as blob "sha256:f643c72bc25212974c16f3348b3a898b1ec1eb13ec1539e10a103e6e217eb2f1"
DEBU[0135] parsed reference into "[vfs@/home/user/.local/share/containers/storage+/run/user/1000]@339e9e163122fe75c272e95d8dcecf5a083917192a18fb01c083a297420aa668"
DEBU[0135] exporting opaque data as blob "sha256:339e9e163122fe75c272e95d8dcecf5a083917192a18fb01c083a297420aa668"
REPOSITORY                         TAG                IMAGE ID      CREATED          SIZE
docker.io/company/test             ubuntu             339e9e163122  19 minutes ago  1.26 GB
docker.io/company/test             ci-nosquash        0b01cfe8ed1d  4 days ago      2.2 GB
docker.io/company/test             debian             29504c3d3234  4 days ago      1.13 GB
docker.io/company/test             notsquashed        7248c2c6deb3  4 days ago      101 MB
docker.io/company/test             1                  442197e1d259  4 days ago      101 MB
docker.io/company/test             ci-staging         2850af115606  5 days ago      2.19 GB
docker.io/company/ci-linux         staging            2850af115606  5 days ago      2.19 GB
docker.io/company/base-ci-linux    latest             d2a6ad185aa5  5 days ago      1.13 GB
docker.io/company/tools            3                  a9ba10a9e260  7 days ago      101 MB
docker.io/company/test             latest             a9ba10a9e260  7 days ago      101 MB
docker.io/aquasec/trivy            latest             235c81aec9d6  7 days ago      50.9 MB
docker.io/company/ci-linux         production         464fb5971011  9 days ago      2.2 GB
<none>                             <none>             e5255be62ac6  10 days ago     2.2 GB
docker.io/library/alpine           latest             389fef711851  11 days ago     5.85 MB
docker.io/library/debian           buster-slim        4a9cd57610d6  2 weeks ago     72.5 MB
docker.io/library/ubuntu           20.04              f643c72bc252  4 weeks ago     75.3 MB
<none>                             <none>             b5ea511d0591  5 weeks ago     2.16 GB
docker.io/company/ci-linux         d0f2b553-20201118  668f4ce3c071  5 weeks ago     2.14 GB
docker.io/company/ci-linux         3ec7097e-20201117  4e811c583aba  5 weeks ago     2.14 GB
docker.io/company/ci-linux         3ec7097e-20201116  082235aa6868  6 weeks ago     2.14 GB
docker.io/company/ci-linux         4cc65dc0-20201102  e05c33e57001  8 weeks ago     2.14 GB
docker.io/company/ci-linux         974ba3ac-20201001  048050bbeea5  2 months ago    2.2 GB
docker.io/company/ci-commons       v2.0.1             7d8ea87973c2  9 months ago    681 MB
DEBU[0135] Called images.PersistentPostRunE(podman images --log-level debug)

________________________________________________________
Executed in  136,00 secs   fish           external
   usr time  156,86 millis  866,00 micros  155,99 millis
   sys time  101,83 millis   83,00 micros  101,75 millis
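The debug log above reports `Using graph driver vfs`; the vfs driver stores every layer as a full copy instead of using overlay mounts and is a well-known cause of slow rootless Podman operations, which may contribute to the 136-second listing seen here. One possible mitigation (a sketch, assuming the overlay driver — or fuse-overlayfs on older kernels — is available on the host; note that images pulled under the old driver must be re-pulled after switching) is to set the driver explicitly in `~/.config/containers/storage.conf`:

```toml
[storage]
driver = "overlay"
```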

@metal3d

metal3d commented May 6, 2022

I'm sorry but the problem persists here:

$ time podman images | wc -l
48

real	0m13,713s
user	0m5,608s
sys	0m9,363s
podman version
Version:      3.4.7
API Version:  3.4.7
Go Version:   go1.16.15
Built:        Thu Apr 21 15:14:26 2022
OS/Arch:      linux/amd64

And this makes bash completion very uncomfortable. Have I missed something?
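Until the underlying listing is fast, one workaround is to cache the image list so that a slow `podman images` runs at most once per TTL window rather than on every `<tab>`. The sketch below is illustrative, not part of Podman's shipped completion script: the function name, cache path, and TTL are assumptions, and GNU `date`/`stat` are assumed (`LIST_CMD` is overridable purely so the helper can be exercised without a Podman install).

```shell
# Cache the image list for tab completion; refresh only when the cache
# file is older than $ttl seconds.
_cached_image_list() {
    cache="${TMPDIR:-/tmp}/podman-completion-cache"
    ttl=30   # seconds a cached listing stays valid; completions may be this stale
    # LIST_CMD is an overridable hook for testing; default to the real call.
    if [ -z "$LIST_CMD" ]; then
        LIST_CMD='podman images --format "{{.Repository}}:{{.Tag}}"'
    fi
    now=$(date +%s)
    mtime=$(stat -c %Y "$cache" 2>/dev/null || echo 0)  # 0 if cache missing
    if [ "$((now - mtime))" -gt "$ttl" ]; then
        # Rebuild the cache, dropping untagged <none> entries.
        sh -c "$LIST_CMD" 2>/dev/null | grep -v '<none>' > "$cache"
    fi
    cat "$cache"
}
```

A completion function could call `_cached_image_list` in place of the raw `podman images | awk ...` pipeline; newer Podman releases ship Cobra-generated completions that avoid that pipeline entirely.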

@vrothberg
Member

There was another issue (#13755) which will be fixed with the upcoming Podman v4.1 release.

@tymonx

tymonx commented Jul 18, 2023

This issue still exists. I'm using podman version 4.5.1 and listing images using bash completion that relies on the podman images command is slow and painful.

@vrothberg
Member

This issue still exists. I'm using podman version 4.5.1 and listing images using bash completion that relies on the podman images command is slow and painful.

Thanks for reaching out. Can you please open a new issue for that?

@github-actions github-actions bot added the locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments. label Oct 18, 2023
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Oct 18, 2023