Stuck on Storing Signatures #3323
Comments
@mtrmac PTAL. My first instinct is that this is related to the VFS driver, but I'm not certain yet.
@vladkosarev Can you provide the output of […]? "Storing signatures" is a memory-only operation that can't really hang; the time is spent in the commit step that runs afterwards.
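(A hedged sketch of how such debug output is typically captured; --log-level is podman's global verbosity flag, and the image name is taken from this report:)

# capture full debug output of the failing pull to a file
podman --log-level=debug pull docker.io/sitespeedio/sitespeed.io:9.3.2 2> podman-debug.log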
After I logged this ticket, I set up a bunch of other containers and worked with those. Since it works now, we can close the ticket, but there might be something funky going on with a fresh install and a particular sequence of containers. Still attaching a debug log for a successful run.
Hm. I'll close this, then. If anyone else hits a similar problem, leave a comment with details and we'll reopen.
I think I had the same issue when I ran it. Logs:

INFO[0000] running as rootless
DEBU[0000] using conmon: "/nix/store/ijyz1qpfqqwc4qv9vx6s1g7z8ssjv1c0-conmon-2.0.0/bin/conmon"
DEBU[0000] Initializing boltdb state at /home/state/podman/containers/storage/libpod/bolt_state.db
DEBU[0005] Using graph driver vfs
DEBU[0005] Using graph root /home/state/podman/containers/storage
DEBU[0005] Using run root /tmp/1000
DEBU[0005] Using static dir /home/state/podman/containers/storage/libpod
DEBU[0005] Using tmp dir /run/user/1000/libpod/tmp
DEBU[0005] Using volume path /home/state/podman/containers/storage/volumes
DEBU[0005] Set libpod namespace to ""
DEBU[0005] [graphdriver] trying provided driver "vfs"
DEBU[0005] Initializing event backend journald
WARN[0005] The configuration is using `runtime_path`, which is deprecated and will be removed in future. Please use `runtimes` and `runtime`
WARN[0005] If you are using both `runtime_path` and `runtime`, the configuration from `runtime_path` is used
DEBU[0005] using runtime "/nix/store/s0jqb63yzcysxmb0v2nqvab38jc2fmvg-runc-1.0.0-rc8-bin/bin/runc"
DEBU[0005] using runtime "/run/current-system/sw/bin/runc"
DEBU[0006] parsed reference into "[vfs@/home/state/podman/containers/storage+/tmp/1000]docker.io/discourse/discourse_dev:release"
Trying to pull docker.io/discourse/discourse_dev:release...
DEBU[0007] reference rewritten from 'docker.io/discourse/discourse_dev:release' to 'docker.io/discourse/discourse_dev:release'
DEBU[0007] Trying to pull "docker.io/discourse/discourse_dev:release"
DEBU[0007] Credentials not found
DEBU[0007] Using registries.d directory /etc/containers/registries.d for sigstore configuration
DEBU[0007] No signature storage configuration found for docker.io/discourse/discourse_dev:release
DEBU[0007] Looking for TLS certificates and private keys in /etc/docker/certs.d/docker.io
DEBU[0007] GET https://registry-1.docker.io/v2/
DEBU[0007] Ping https://registry-1.docker.io/v2/ status 401
DEBU[0007] GET https://auth.docker.io/token?scope=repository%3Adiscourse%2Fdiscourse_dev%3Apull&service=registry.docker.io
DEBU[0008] GET https://registry-1.docker.io/v2/discourse/discourse_dev/manifests/release
DEBU[0008] Using blob info cache at /home/state/containers/cache/blob-info-cache-v1.boltdb
DEBU[0008] IsRunningImageAllowed for image docker:docker.io/discourse/discourse_dev:release
DEBU[0008] Using default policy section
DEBU[0008] Requirement 0: allowed
DEBU[0008] Overall: allowed
DEBU[0008] Downloading /v2/discourse/discourse_dev/blobs/sha256:aaec2f76b83a69fee8fa89ffe83570f8c7d1576226ab10eecd1669bb8e87b53a
DEBU[0008] GET https://registry-1.docker.io/v2/discourse/discourse_dev/blobs/sha256:aaec2f76b83a69fee8fa89ffe83570f8c7d1576226ab10eecd1669bb8e87b53a
Getting image source signatures
DEBU[0009] Manifest has MIME type application/vnd.docker.distribution.manifest.v2+json, ordered candidate list [application/vnd.docker.distribution.manifest.v2+json, application/vnd.docker.distribution.manifest.v1+prettyjws, application/vnd.oci.image.manifest.v1+json, application/vnd.docker.distribution.manifest.v1+json]
DEBU[0009] ... will first try using the original manifest unmodified
DEBU[0009] Downloading /v2/discourse/discourse_dev/blobs/sha256:f9ea07c2dd645103be1ac73a55ce4f3cffc328aa641517a5926b7d800b78128e
DEBU[0009] GET https://registry-1.docker.io/v2/discourse/discourse_dev/blobs/sha256:f9ea07c2dd645103be1ac73a55ce4f3cffc328aa641517a5926b7d800b78128e
DEBU[0009] Downloading /v2/discourse/discourse_dev/blobs/sha256:83aa6f4d64be3fc2cc0d33c22ce0b0dd9cd64c493dc4ccf52130d2a9d3bc1b37
DEBU[0009] GET https://registry-1.docker.io/v2/discourse/discourse_dev/blobs/sha256:83aa6f4d64be3fc2cc0d33c22ce0b0dd9cd64c493dc4ccf52130d2a9d3bc1b37
DEBU[0009] Downloading /v2/discourse/discourse_dev/blobs/sha256:314445a2d62ef3b3bdc2b79317c4bfdef0aeaef2ed82207366ac3f0a0993c32c
DEBU[0009] GET https://registry-1.docker.io/v2/discourse/discourse_dev/blobs/sha256:314445a2d62ef3b3bdc2b79317c4bfdef0aeaef2ed82207366ac3f0a0993c32c
DEBU[0009] Downloading /v2/discourse/discourse_dev/blobs/sha256:259c2faea530ab8aba90a8e15b11b52852a3cabf257b85b5270f82404684720f
DEBU[0009] GET https://registry-1.docker.io/v2/discourse/discourse_dev/blobs/sha256:259c2faea530ab8aba90a8e15b11b52852a3cabf257b85b5270f82404684720f
DEBU[0009] Downloading /v2/discourse/discourse_dev/blobs/sha256:1ab2bdfe97783562315f98f94c0769b1897a05f7b0395ca1520ebee08666703b
DEBU[0009] GET https://registry-1.docker.io/v2/discourse/discourse_dev/blobs/sha256:1ab2bdfe97783562315f98f94c0769b1897a05f7b0395ca1520ebee08666703b
DEBU[0009] Downloading /v2/discourse/discourse_dev/blobs/sha256:6d7febef89e821922dcf311f1b47060ded7b30070f634453e27a123d2c95c5da
DEBU[0009] GET https://registry-1.docker.io/v2/discourse/discourse_dev/blobs/sha256:6d7febef89e821922dcf311f1b47060ded7b30070f634453e27a123d2c95c5da
DEBU[0010] Detected compression format gzip
DEBU[0010] Using original blob without modification
DEBU[0010] Detected compression format gzip
DEBU[0010] Using original blob without modification
DEBU[0010] Detected compression format gzip
DEBU[0010] Using original blob without modification
DEBU[0011] Detected compression format gzip
DEBU[0011] Using original blob without modification
DEBU[0011] Detected compression format gzip
DEBU[0011] Using original blob without modification
DEBU[0012] Detected compression format gzip
DEBU[0012] Using original blob without modification
DEBU[0013] Downloading /v2/discourse/discourse_dev/blobs/sha256:5b045945df239d15149649fe4dcb1f74de6bfeaf6053644197c86c5ef2c74309
DEBU[0013] GET https://registry-1.docker.io/v2/discourse/discourse_dev/blobs/sha256:5b045945df239d15149649fe4dcb1f74de6bfeaf6053644197c86c5ef2c74309
DEBU[0013] Downloading /v2/discourse/discourse_dev/blobs/sha256:dd55daacdc5c4263802940a3bab4b7780794310252db221e996644d585dc8f7d
DEBU[0013] GET https://registry-1.docker.io/v2/discourse/discourse_dev/blobs/sha256:dd55daacdc5c4263802940a3bab4b7780794310252db221e996644d585dc8f7d
DEBU[0013] Downloading /v2/discourse/discourse_dev/blobs/sha256:5b50460c132d2c13753b5ecf03ae50861542585cbf3291afba51bc37a1fd2c81
DEBU[0013] GET https://registry-1.docker.io/v2/discourse/discourse_dev/blobs/sha256:5b50460c132d2c13753b5ecf03ae50861542585cbf3291afba51bc37a1fd2c81
DEBU[0013] Downloading /v2/discourse/discourse_dev/blobs/sha256:d290e7dab349b9355e01ba6d23c850183e963c6a7db02c58c10f66591ee8a1e6
DEBU[0013] GET https://registry-1.docker.io/v2/discourse/discourse_dev/blobs/sha256:d290e7dab349b9355e01ba6d23c850183e963c6a7db02c58c10f66591ee8a1e6
DEBU[0014] Detected compression format gzip
DEBU[0014] Using original blob without modification
DEBU[0015] Detected compression format gzip
DEBU[0015] Using original blob without modification
DEBU[0016] Detected compression format gzip
DEBU[0016] Using original blob without modification
DEBU[0019] Detected compression format gzip
DEBU[0019] Using original blob without modification
DEBU[0019] Downloading /v2/discourse/discourse_dev/blobs/sha256:9f95105f91fcf4e40fc82fa1317f10f5710e68f57456b65134ea5ee6263c684e
DEBU[0019] GET https://registry-1.docker.io/v2/discourse/discourse_dev/blobs/sha256:9f95105f91fcf4e40fc82fa1317f10f5710e68f57456b65134ea5ee6263c684e
DEBU[0019] Downloading /v2/discourse/discourse_dev/blobs/sha256:5717bfc5732cbcb73b547642c8bb77bb1d4b65ac802315d143233ca177fb7e0f
DEBU[0019] GET https://registry-1.docker.io/v2/discourse/discourse_dev/blobs/sha256:5717bfc5732cbcb73b547642c8bb77bb1d4b65ac802315d143233ca177fb7e0f
DEBU[0019] Downloading /v2/discourse/discourse_dev/blobs/sha256:6c161aaf0049976ce1bb7563f77912e5f695f19072c5fc34f1140997228c31f1
DEBU[0019] GET https://registry-1.docker.io/v2/discourse/discourse_dev/blobs/sha256:6c161aaf0049976ce1bb7563f77912e5f695f19072c5fc34f1140997228c31f1
DEBU[0020] Detected compression format gzip
DEBU[0020] Using original blob without modification
DEBU[0020] Detected compression format gzip
DEBU[0020] Using original blob without modification
DEBU[0024] Detected compression format gzip
DEBU[0024] Using original blob without modification
DEBU[0025] Downloading /v2/discourse/discourse_dev/blobs/sha256:44aa8de3c334def5166dd666d6eb7d25a14a6753d7b72d4ae06e4b9a96d2ec78
DEBU[0025] GET https://registry-1.docker.io/v2/discourse/discourse_dev/blobs/sha256:44aa8de3c334def5166dd666d6eb7d25a14a6753d7b72d4ae06e4b9a96d2ec78
DEBU[0028] Detected compression format gzip
DEBU[0028] Using original blob without modification
Copying blob 5717bfc5732c done
DEBU[0189] No compression detected
DEBU[0189] Using original blob without modification
Copying config aaec2f76b8 [======================================] 18.1KiB / 18.1KiB
Copying config aaec2f76b8 done
Writing manifest to image destination
Storing signatures
DEBU[0190] Start untar layer
DEBU[0193] Untar time: 2.7715050100000003s
DEBU[0202] Start untar layer
DEBU[0414] Untar time: 212.281946872s
DEBU[0759] Start untar layer
DEBU[0764] Untar time: 5.530771338s
DEBU[2725] Start untar layer
DEBU[2731] Untar time: 5.975134436s
DEBU[4712] Start untar layer
DEBU[4712] Untar time: 0.269512026s
DEBU[5254] Start untar layer
DEBU[5255] Untar time: 0.610600993s
DEBU[5369] Start untar layer
DEBU[5369] Untar time: 0.134516457s
DEBU[5576] Start untar layer
DEBU[5577] Untar time: 0.708177531s
DEBU[5714] Start untar layer
DEBU[5719] Untar time: 5.031429221s
DEBU[5849] Start untar layer
DEBU[5850] Untar time: 1.136541521s
DEBU[5987] Start untar layer
DEBU[6112] Untar time: 124.770738721s
DEBU[7293] Start untar layer
DEBU[7305] Untar time: 11.186880398s
DEBU[9747] Start untar layer
DEBU[9748] Untar time: 1.005790214s
DEBU[10190] Start untar layer
DEBU[10297] Untar time: 107.175232526s
DEBU[10299] setting image creation date to 2019-09-08 23:26:39.039338573 +0000 UTC
DEBU[10299] created new image ID "aaec2f76b83a69fee8fa89ffe83570f8c7d1576226ab10eecd1669bb8e87b53a"
DEBU[10300] set names of image "aaec2f76b83a69fee8fa89ffe83570f8c7d1576226ab10eecd1669bb8e87b53a" to [docker.io/discourse/discourse_dev:release]
DEBU[10301] saved image metadata "{}"
DEBU[10302] parsed reference into "[vfs@/home/state/podman/containers/storage+/tmp/1000]docker.io/discourse/discourse_dev:release"
aaec2f76b83a69fee8fa89ffe83570f8c7d1576226ab10eecd1669bb8e87b53a
INFO[0001] running as rootless
DEBU[0002] using conmon: "/nix/store/ijyz1qpfqqwc4qv9vx6s1g7z8ssjv1c0-conmon-2.0.0/bin/conmon"
DEBU[0002] Initializing boltdb state at /home/state/podman/containers/storage/libpod/bolt_state.db
DEBU[0002] Using graph driver vfs
DEBU[0002] Using graph root /home/state/podman/containers/storage
DEBU[0002] Using run root /tmp/1000
DEBU[0002] Using static dir /home/state/podman/containers/storage/libpod
DEBU[0002] Using tmp dir /run/user/1000/libpod/tmp
DEBU[0002] Using volume path /home/state/podman/containers/storage/volumes
DEBU[0002] Set libpod namespace to ""
DEBU[0002] [graphdriver] trying provided driver "vfs"
DEBU[0002] Initializing event backend journald
WARN[0002] The configuration is using `runtime_path`, which is deprecated and will be removed in future. Please use `runtimes` and `runtime`
WARN[0002] If you are using both `runtime_path` and `runtime`, the configuration from `runtime_path` is used
DEBU[0002] using runtime "/nix/store/s0jqb63yzcysxmb0v2nqvab38jc2fmvg-runc-1.0.0-rc8-bin/bin/runc"
DEBU[0002] using runtime "/run/current-system/sw/bin/runc"
DEBU[0002] parsed reference into "[vfs@/home/state/podman/containers/storage+/tmp/1000]docker.io/discourse/discourse_dev:release"
DEBU[0002] parsed reference into "[vfs@/home/state/podman/containers/storage+/tmp/1000]@aaec2f76b83a69fee8fa89ffe83570f8c7d1576226ab10eecd1669bb8e87b53a"
DEBU[0002] exporting opaque data as blob "sha256:aaec2f76b83a69fee8fa89ffe83570f8c7d1576226ab10eecd1669bb8e87b53a"
DEBU[0002] parsed reference into "[vfs@/home/state/podman/containers/storage+/tmp/1000]@aaec2f76b83a69fee8fa89ffe83570f8c7d1576226ab10eecd1669bb8e87b53a"
DEBU[0002] exporting opaque data as blob "sha256:aaec2f76b83a69fee8fa89ffe83570f8c7d1576226ab10eecd1669bb8e87b53a"
DEBU[0002] parsed reference into "[vfs@/home/state/podman/containers/storage+/tmp/1000]@aaec2f76b83a69fee8fa89ffe83570f8c7d1576226ab10eecd1669bb8e87b53a"
DEBU[0002] User mount /home/src/discourse/data/postgres:/shared/postgres_data options []
DEBU[0002] User mount /home/src/discourse:/src options []
DEBU[0003] Got mounts: [{/src bind /home/src/discourse []} {/shared/postgres_data bind /home/src/discourse/data/postgres []}]
DEBU[0003] Got volumes: []
DEBU[0003] Using slirp4netns netmode
DEBU[0003] Adding mount /proc
DEBU[0003] Adding mount /dev
DEBU[0003] Adding mount /dev/pts
DEBU[0003] Adding mount /dev/mqueue
DEBU[0003] Adding mount /sys
DEBU[0003] Adding mount /sys/fs/cgroup
DEBU[0003] setting container name discourse_dev
DEBU[0003] created OCI spec and options for new container
DEBU[0003] Allocated lock 8 for container 8fd48ceaabf0683273e420149616e1316d83df3d0af3c726937e937edbc2256a
DEBU[0003] parsed reference into "[vfs@/home/state/podman/containers/storage+/tmp/1000]@aaec2f76b83a69fee8fa89ffe83570f8c7d1576226ab10eecd1669bb8e87b53a"
DEBU[0003] exporting opaque data as blob "sha256:aaec2f76b83a69fee8fa89ffe83570f8c7d1576226ab10eecd1669bb8e87b53a"
DEBU[0149] created container "8fd48ceaabf0683273e420149616e1316d83df3d0af3c726937e937edbc2256a"
DEBU[0149] container "8fd48ceaabf0683273e420149616e1316d83df3d0af3c726937e937edbc2256a" has work directory "/home/state/podman/containers/storage/vfs-containers/8fd48ceaabf0683273e420149616e1316d83df3d0af3c726937e937edbc2256a/userdata"
DEBU[0149] container "8fd48ceaabf0683273e420149616e1316d83df3d0af3c726937e937edbc2256a" has run directory "/tmp/1000/vfs-containers/8fd48ceaabf0683273e420149616e1316d83df3d0af3c726937e937edbc2256a/userdata"
DEBU[0150] New container created "8fd48ceaabf0683273e420149616e1316d83df3d0af3c726937e937edbc2256a"
DEBU[0150] container "8fd48ceaabf0683273e420149616e1316d83df3d0af3c726937e937edbc2256a" has CgroupParent "/libpod_parent/libpod-8fd48ceaabf0683273e420149616e1316d83df3d0af3c726937e937edbc2256a"
DEBU[0153] mounted container "8fd48ceaabf0683273e420149616e1316d83df3d0af3c726937e937edbc2256a" at "/home/state/podman/containers/storage/vfs/dir/6b1bc14b34797b8e973e0f36d36d8ad68195c092a7bff4a44fbcbb424b243068"
DEBU[0153] Created root filesystem for container 8fd48ceaabf0683273e420149616e1316d83df3d0af3c726937e937edbc2256a at /home/state/podman/containers/storage/vfs/dir/6b1bc14b34797b8e973e0f36d36d8ad68195c092a7bff4a44fbcbb424b243068
DEBU[0153] /etc/system-fips does not exist on host, not mounting FIPS mode secret
DEBU[0153] Created OCI spec for container 8fd48ceaabf0683273e420149616e1316d83df3d0af3c726937e937edbc2256a at /home/state/podman/containers/storage/vfs-containers/8fd48ceaabf0683273e420149616e1316d83df3d0af3c726937e937edbc2256a/userdata/config.json
DEBU[0153] /nix/store/ijyz1qpfqqwc4qv9vx6s1g7z8ssjv1c0-conmon-2.0.0/bin/conmon messages will be logged to syslog
DEBU[0153] running conmon: /nix/store/ijyz1qpfqqwc4qv9vx6s1g7z8ssjv1c0-conmon-2.0.0/bin/conmon args="[--api-version 1 -c 8fd48ceaabf0683273e420149616e1316d83df3d0af3c726937e937edbc2256a -u 8fd48ceaabf0683273e420149616e1316d83df3d0af3c726937e937edbc2256a -r /run/current-system/sw/bin/runc -b /home/state/podman/containers/storage/vfs-containers/8fd48ceaabf0683273e420149616e1316d83df3d0af3c726937e937edbc2256a/userdata -p /tmp/1000/vfs-containers/8fd48ceaabf0683273e420149616e1316d83df3d0af3c726937e937edbc2256a/userdata/pidfile -l k8s-file:/home/state/podman/containers/storage/vfs-containers/8fd48ceaabf0683273e420149616e1316d83df3d0af3c726937e937edbc2256a/userdata/ctr.log --exit-dir /run/user/1000/libpod/tmp/exits --socket-dir-path /run/user/1000/libpod/tmp/socket --log-level debug --syslog --conmon-pidfile /tmp/1000/vfs-containers/8fd48ceaabf0683273e420149616e1316d83df3d0af3c726937e937edbc2256a/userdata/conmon.pid --exit-command /nix/store/b6246mx4g59dkrc21wsh3chaivh1651s-podman-1.5.1-bin/bin/podman --exit-command-arg --root --exit-command-arg /home/state/podman/containers/storage --exit-command-arg --runroot --exit-command-arg /tmp/1000 --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg cgroupfs --exit-command-arg --tmpdir --exit-command-arg /run/user/1000/libpod/tmp --exit-command-arg --runtime --exit-command-arg runc --exit-command-arg --storage-driver --exit-command-arg vfs --exit-command-arg --events-backend --exit-command-arg journald --exit-command-arg container --exit-command-arg cleanup --exit-command-arg 8fd48ceaabf0683273e420149616e1316d83df3d0af3c726937e937edbc2256a]"
WARN[0154] Failed to add conmon to cgroupfs sandbox cgroup: error creating cgroup for cpu: mkdir /sys/fs/cgroup/cpu/libpod_parent: permission denied
[conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied
This image does take quite a long time to finish committing to disk on my setup (Podman master, running as root, overlay2 driver), but only a few minutes. I again begin to suspect the VFS graph driver; can you possibly try again with fuse-overlayfs?
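(A hedged sketch of a one-off test with a different driver, without touching the existing storage; the flag values are assumptions for a rootless setup, and each --storage-driver value keeps its own separate image store:)

# try the pull once with fuse-overlayfs instead of vfs (fuse-overlayfs must be installed)
podman --storage-driver=overlay --storage-opt=overlay.mount_program=/usr/bin/fuse-overlayfs pull docker.io/sitespeedio/sitespeed.io:9.3.2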
Only a few minutes for me too with fuse-overlayfs. Thank you very much!
I had about a 10-minute hang at "storing signatures" on a fresh install (1.4.4) when pulling a small image.
That, in itself, is not too surprising: the download step just decompresses and writes a linear stream of data to disk, while the commit step (which is what actually takes time after "storing signatures") reads that linear stream and writes it into many small individual files. Filesystem metadata and disk seek performance therefore matter a lot for the commit, but are completely negligible for the download.
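(To see that small-file write pattern concretely, one can count what a single extracted layer turns into; the path below assumes a rootless vfs setup in the default location, and <layer-id> is a hypothetical placeholder:)

# number of individual files the commit step had to create for one layer
find ~/.local/share/containers/storage/vfs/dir/<layer-id> -type f | wc -l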
Interesting. Well, I'm VERY surprised at the poor performance. I've had 5+ minute waits on other commands too (not just a pull). If that's expected, I'll just accept it as normal. If it is not, I'd be happy to gather any relevant information you might need to troubleshoot the problem.
I'm not at all saying that long times are expected or acceptable; just that it's not surprising that the commit step (after "storing signatures") can take longer than the download. That said, 10 minutes for a 300MB image does seem excessive on modern hardware.
I reproduced it on my machine without sudo, but everything went smoothly with sudo. Note that with strace you can see it is stuck on the storage lock.
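(The strace output itself wasn't preserved above; a hedged sketch of how one might observe the same thing, assuming the hung process is the only podman running and that the lock shows up as a blocking file-locking syscall:)

# watch the apparently hung podman block on a file lock
strace -f -e trace=flock,fcntl -p "$(pgrep -x podman | head -n1)"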
I believe I am also having this problem. I'm trying to run […]. It might be able to run if I provided more disk space, but that isn't an option for my use case. As with zhuguoliang, it works with sudo; however, having root access is also not an option for my desired use case.
Are you using the VFS driver? This sounds like it could be caused by using the VFS driver with a large image. VFS does not perform filesystem-level deduplication, which causes a massive expansion of the storage required. Using the overlay driver (as root) or fuse-overlayfs (rootless) should avert this.
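(The duplication is easy to see on disk; a sketch, assuming a rootless vfs graph root in the default location:)

# with vfs, every layer directory is a complete copy of the filesystem,
# so these sizes add up to far more than the image itself
du -sh ~/.local/share/containers/storage/vfs/dir/*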
I was reading #3799 a few days ago and also noticed the "strong" recommendation to use fuse-overlayfs. When I had the problem above, I was using the VFS driver. I can't migrate to fuse-overlayfs yet because my kernel is too old (it's RHEL 7 and I don't control that part of the server). So hopefully I'll eventually be able to give this a try (until then, …).
@mheon I'm very new to containers; I don't know what driver I'm using. How would I find out and/or specify it?
@Dulani I believe fuse-overlayfs should work on 7.8. @Jousboxx […]
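(The reply to @Jousboxx is cut off in this transcript; the usual way to answer "what driver am I using" is podman info, which reports the graph driver, e.g.:)

# look for the GraphDriverName field (vfs, overlay, ...)
podman info | grep -i graphdriver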
@mheon I was indeed using vfs. I did […]. Now I receive this error: […]
Weird that it would say the driver isn't supported when it's the preferred driver for this situation. It's probably worth mentioning that I'm on Ubuntu 19.10.
@giuseppe PTAL
Finally solved the problem by doing […]. Podman sees that I now have […]. Specifying the […]. Also, for anyone else googling and finding this: […]. It would be nice if the overlay packages were required dependencies of podman and overlay were used as the default driver from the beginning. That would save new users a lot of time by not having to deal with this difficult-to-diagnose issue.
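(For later readers: the persistent, rootless way to specify the driver is the storage configuration file; a sketch, assuming fuse-overlayfs was installed to /usr/bin. Note that switching drivers generally means re-pulling images, since each driver keeps its own store.)

# ~/.config/containers/storage.conf
[storage]
driver = "overlay"

[storage.options]
mount_program = "/usr/bin/fuse-overlayfs"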
Agreed. At least for the Fedora package, we also install fuse-overlayfs, and it is automatically picked up and used. Have you installed Podman from the Ubuntu PPA?
Is this issue resolved?
Solved for me; the OP hasn't responded.
I'm encountering the same issue when running the following, even after removing […]:
Can you provide more information?
My bad, it turns out pulling the image exited correctly; it's just that I ran […].
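(That matches the locking behaviour mentioned earlier in the thread: podman serializes access to its storage, so a second command started during a long pull simply blocks until the first finishes. A minimal way to observe it, using the image from this report:)

# terminal 1: a long-running pull holds the storage lock
podman pull docker.io/sitespeedio/sitespeed.io:9.3.2
# terminal 2: this blocks (looking "hung") until the pull completes
podman images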
/kind bug
Description
On Void Linux, the latest podman gets stuck at the "Storing signatures" step when pulling the sitespeed.io container.
Steps to reproduce the issue:
podman run --rm -v "$(pwd)":/sitespeed.io sitespeedio/sitespeed.io:9.3.2 https://sitespeed.io
Describe the results you received:
[vlad@void-vm ~]$ podman run --rm -v "$(pwd)":/sitespeed.io sitespeedio/sitespeed.io:9.3.2 https://sitespeed.io
Trying to pull docker.io/sitespeedio/sitespeed.io:9.3.2...
Getting image source signatures
Copying blob edb2b29fa1e2 skipped: already exists
Copying blob c46b5fa4d940 skipped: already exists
Copying blob 6b1eed27cade skipped: already exists
Copying blob 93ae3df89c92 skipped: already exists
Copying blob 76d5ea985833 skipped: already exists
Copying blob 473ede7ed136 skipped: already exists
Copying blob cf82bd0b1aa3 skipped: already exists
Copying blob 7dc6cf341fb3 skipped: already exists
Copying blob 3c9757b8e6c7 skipped: already exists
Copying blob 275861923052 skipped: already exists
Copying blob 4c29465436d4 skipped: already exists
Copying blob 66700b5e3941 skipped: already exists
Copying blob 160f3f39f1b5 skipped: already exists
Copying blob a0507231acd7 skipped: already exists
Copying blob b965ed368ed7 skipped: already exists
Copying blob ad9103b58e2d skipped: already exists
Copying blob 946c4c8160b3 skipped: already exists
Copying blob df426434925b skipped: already exists
Copying blob 8791c156ea54 skipped: already exists
Copying blob b1ac729adf6d skipped: already exists
Copying blob 4d916c8de88f skipped: already exists
Copying blob 67578fe28a3d skipped: already exists
Copying blob 47f6b4d4a060 done
Copying blob 203d22208385 done
Copying blob 671ce6b824ea done
Copying config 681cf2afef done
Writing manifest to image destination
Storing signatures
Describe the results you expected:
I would expect it not to freeze at the "Storing signatures" step and to actually run the container.
Output of podman version: […]
Output of podman info --debug: […]
Additional environment details (AWS, VirtualBox, physical, etc.):
Hyper-V VM on Windows 10 Enterprise