podman doesn't run image with invalid architecture #10682

Closed
hetzbh opened this issue Jun 15, 2021 · 24 comments · Fixed by #10739
Labels: kind/bug, locked - please file new issue/PR

hetzbh commented Jun 15, 2021

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description

When I start a simple systemd service I created that needs to pull an image, and the image is already in the local cache, it fails with this message in the journal:

Jun 14 13:36:44 containers.hetzlabs.io podman[15293]: Error: docker.io/pihole/pihole: image not known
Jun 14 13:36:44 containers.hetzlabs.io systemd[1]: pihole.service: Main process exited, code=exited, status=125/n/a
Jun 14 13:36:44 containers.hetzlabs.io podman[15359]: Error: no container with name or ID "pihole" found: no such container

However, if I clean the cache (podman rmi pihole) prior to running the systemd service, it starts without an issue.

Here is my systemd service file (it runs as root; I know about the security issues with the ports, it's just temporary):

[Unit]
Description=Pi-Hole Podman Container
Wants=network.target
After=network-online.target
RequiresMountsFor=/var/run/container/storage

[Service]
ExecStart=/usr/bin/podman run --name=pihole --hostname=pi-hole --cap-add=NET_ADMIN --dns=127.0.0.1 --dns=1.1.1.1 -e TZ=Asia/Jerusalem -e SERVERIP=192.168.0.5 -e WEBPASSWORD=supersecret!!! -e DNS1=1.1.1.1 -e DNS2=1.0.0.1 -e DNSSEC=true -e CONDITIONAL_FORWARDING=true -e CONDITIONAL_FORWARDING_IP=192.168.0.1 -e CONDITIONAL_FORWARDING_DOMAIN=lan -e TEMPERATUREUNIT=c -v pihole_pihole:/etc/pihole:Z -v pihole_dnsmasq:/etc/dnsmasq.d:Z -p 80:80/tcp -p 443:443/tcp -p 67:67/udp -p 53:53/tcp -p 53:53/udp docker.io/pihole/pihole
ExecStop=/usr/bin/podman stop -t 2 pihole
ExecStopPost=/usr/bin/podman rm pihole --ignore -f

[Install]
WantedBy=multi-user.target

Steps to reproduce the issue:

  1. Pull the image (in my case it's pihole, so podman pull pihole/pihole as root)
  2. Make sure the image is in the cache (podman images)
  3. Try to start the systemd service (systemctl start pihole)
  4. See that it fails in the journal
  5. Remove the image (podman rmi pihole)
  6. Start the systemd service again. This time it will work.

Describe the results you received:
When the image is in the cache, it fails to start the service. When it's not, it pulls the image and then it works.

Describe the results you expected:
It should skip pulling when the image is already in the cache and start without failing.

Output of podman version:

Version:      3.2.0
API Version:  3.2.0
Go Version:   go1.16.3
Built:        Wed Jun  9 10:23:38 2021
OS/Arch:      linux/arm64

Output of podman info --debug:

host:
  arch: arm64
  buildahVersion: 1.21.0
  cgroupControllers:
  - cpuset
  - cpu
  - io
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.0.27-2.fc34.aarch64
    path: /usr/bin/conmon
    version: 'conmon version 2.0.27, commit: '
  cpus: 4
  distribution:
    distribution: fedora
    version: "34"
  eventLogger: journald
  hostname: containers.hetzlabs.io
  idMappings:
    gidmap: null
    uidmap: null
  kernel: 5.12.9-300.fc34.aarch64
  linkmode: dynamic
  memFree: 2276794368
  memTotal: 4002390016
  ociRuntime:
    name: crun
    package: crun-0.20.1-1.fc34.aarch64
    path: /usr/bin/crun
    version: |-
      crun version 0.20.1
      commit: 0d42f1109fd73548f44b01b3e84d04a279e99d2e
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
  os: linux
  remoteSocket:
    path: /run/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: false
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: false
  serviceIsRemote: false
  slirp4netns:
    executable: ""
    package: ""
    version: ""
  swapFree: 4001361920
  swapTotal: 4001361920
  uptime: 13h 17m 36.28s (Approximately 0.54 days)
registries:
  search:
  - docker.io
store:
  configFile: /etc/containers/storage.conf
  containerStore:
    number: 1
    paused: 0
    running: 1
    stopped: 0
  graphDriverName: overlay
  graphOptions:
    overlay.mountopt: nodev,metacopy=on
  graphRoot: /var/lib/containers/storage
  graphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "true"
  imageStore:
    number: 1
  runRoot: /run/containers/storage
  volumePath: /var/lib/containers/storage/volumes
version:
  APIVersion: 3.2.0
  Built: 1623248618
  BuiltTime: Wed Jun  9 10:23:38 2021
  GitCommit: ""
  GoVersion: go1.16.3
  OsArch: linux/arm64
  Version: 3.2.0

Package info (e.g. output of rpm -q podman or apt list podman):

podman-3.2.0-5.fc34.aarch64

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide? (https://github.com/containers/podman/blob/master/troubleshooting.md)

Yes

Additional environment details (AWS, VirtualBox, physical, etc.):

Raspberry Pi 4B, 4GB version, Fedora 34 (Server)

openshift-ci bot added the kind/bug label Jun 15, 2021
tobwen (Contributor) commented Jun 15, 2021

Isn't it just run ... pihole/pihole, without the docker.io/ prefix, like podman pull pihole/pihole?

hetzbh (Author) commented Jun 15, 2021

I first tried without the docker.io part and it didn't work, so I thought adding docker.io would solve the issue. It doesn't.

tobwen (Contributor) commented Jun 15, 2021

It doesn't.

What does journalctl -r say? Also, why don't you use podman generate systemd to generate "state of the art" service files? :-)
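
For reference, a minimal sketch of that workflow (the container name and image come from this thread; the flags are standard podman generate systemd options):

# Create the container once, without starting it
podman create --name pihole docker.io/pihole/pihole
# Emit a unit file; --new makes the unit create and remove the container itself,
# --name uses the container name (rather than the ID) in the unit's file name
podman generate systemd --new --files --name pihole
# Install the generated container-pihole.service under /etc/systemd/system/, then:
systemctl daemon-reload
systemctl enable --now container-pihole.service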

hetzbh (Author) commented Jun 15, 2021

Ok, looks like it's not systemd's fault. Try running the following command (as root) twice. The first time it will pull the image; the second time it will give the error "Error: pihole: image not known"

podman run -d -p 53:53/tcp -p 53:53/udp -p 67:67/udp -p 80:80/tcp -p 443:443/tcp -e DNS1=1.1.1.1 -e DNS2=8.8.8.8 -e TZ=America/New_York --name pi-hole -e WEBPASSWORD=blah pihole

The journal doesn't show anything related.

tobwen (Contributor) commented Jun 15, 2021

Works like a charm here:

root@debian:~# podman run -d -p 53:53/tcp -p 53:53/udp -p 67:67/udp -p 80:80/tcp -p 443:443/tcp -e DNS1=1.1.1.1 -e DNS2=8.8.8.8 -e TZ=America/New_York --name pi-hole -e WEBPASSWORD=blah pihole/pihole
Resolving "pihole/pihole" using unqualified-search registries (/etc/containers/registries.conf)
Trying to pull docker.io/pihole/pihole:latest...
Getting image source signatures
Copying blob 8957fc1c82ea done
Copying blob 2bc8e21f6046 done
Copying blob f240c58bc85b done
Copying blob 8abd4c361c96 done
Copying blob 828637a58d0d done
Copying blob f7ec5a41d630 done
Copying blob 91418c3e6d74 done
Copying blob 038bdc1a2152 [============>-------------------------] 40.1MiB / 113.4MiB
Copying blob ecb3446e4109 done
Copying blob a92ec3210005 done
Copying blob 6d611d392122 done
Copying blob 70ce0dcbcc17 done
Copying blob cb8f2802f8a6 done

root@debian:~# cat /etc/containers/registries.conf
unqualified-search-registries = ["docker.io"]

hetzbh (Author) commented Jun 15, 2021

I said twice. Don't clean the image, run it again.

tobwen (Contributor) commented Jun 15, 2021

No problem here. Did you forget to stop the container or replace it? You can't run a daemon twice with the same name (see the combined sketch below):

podman run -d --name pi-hole ...
podman run -d --name pi-hole ...
Error: the container name "pi-hole" is already in use. You have to remove that container to be able to reuse that name.

To replace it:

podman run -d --name pi-hole --replace ...

Or to stop it:

podman stop pi-hole

or

podman stop --latest
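
Put together, the sequence described above looks like this (the image is illustrative; --replace and --latest are standard podman flags):

podman run -d --name pi-hole docker.io/pihole/pihole     # first run succeeds
podman run -d --name pi-hole docker.io/pihole/pihole     # second run fails: name already in use
podman run -d --replace --name pi-hole docker.io/pihole/pihole   # or replace the existing container in one step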

hetzbh (Author) commented Jun 15, 2021

You're right, but the error that I get is something else (I think it's related to Fedora's Pi version). Take a look:

[root@containers ~]# podman run -d -p 53:53/tcp -p 53:53/udp -p 67:67/udp   -p 80:80/tcp -p 443:443/tcp -e DNS1=1.1.1.1 -e DNS2=8.8.8.8   -e TZ=America/New_York --name pi-hole -e WEBPASSWORD=blah pihole/pihole
Resolved "pihole/pihole" as an alias (/var/cache/containers/short-name-aliases.conf)
Trying to pull docker.io/pihole/pihole:latest...
Getting image source signatures
Copying blob 113626093de2 done
Copying blob 0d3fe530c66a done
Copying blob 208d49c0abc2 done
Copying blob a9b44cf88df1 done
Copying blob fe9f1fa5a5e0 done
Copying blob aceecb32d1c3 done
Copying blob 52ccd329e28b done
Copying blob 20749e2af08b done
Copying config 14ad311e9e done
Writing manifest to image destination
Storing signatures
930d43e15c0dbbb73a2ee31797c5fed7831b6117e4ce654d44ef004d7560830f
[root@containers ~]# podman run -d -p 53:53/tcp -p 53:53/udp -p 67:67/udp   -p 80:80/tcp -p 443:443/tcp -e DNS1=1.1.1.1 -e DNS2=8.8.8.8   -e TZ=America/New_York --name pi-hole -e WEBPASSWORD=blah pihole/pihole
Error: pihole/pihole: image not known

tobwen (Contributor) commented Jun 15, 2021

You need to log in with the repository's credentials:

root@debian:~# podman login
Authenticating with existing credentials...
Existing credentials are valid. Already logged in to docker.io

root@debian:~# podman run --rm -p 53:53/tcp -p 53:53/udp -p 67:67/udp   -p 80:80/tcp -p 443:443/tcp -e DNS1=1.1.1.1 -e DNS2=8.8.8.8   -e TZ=America/New_York --name pi-hole -e WEBPASSWORD=blah pihole/pihole
[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] 01-resolver-resolv: applying...
[fix-attrs.d] 01-resolver-resolv: exited 0.
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] 20-start.sh: executing...
 ::: Starting docker specific checks & setup for docker pihole/pihole

hetzbh (Author) commented Jun 15, 2021

This is from Docker Hub and I'm already logged in. In both cases it tries to pull from the same registry.

[root@containers ~]# podman login
Authenticating with existing credentials...
Existing credentials are valid. Already logged in to docker.io

Luap99 (Member) commented Jun 15, 2021

I think the pihole image is for amd64 only and not arm64.

hetzbh (Author) commented Jun 15, 2021

If it were amd64 only, it wouldn't be running on my Pi 4B.

tobwen (Contributor) commented Jun 15, 2021

Just to be sure, please try pihole/pihole:nightly-arm64-stretch (or similar). But I don't have a problem on x64 here.

hetzbh (Author) commented Jun 15, 2021

Same. It seems to be an aarch64 issue.

Luap99 (Member) commented Jun 15, 2021

@hetzbh Did this work prior to v3.2?

hetzbh (Author) commented Jun 15, 2021

Dunno, I just installed Fedora 34 on the Pi yesterday.

Luap99 (Member) commented Jun 15, 2021

@vrothberg PTAL, local image lookup is failing when the image is already downloaded.

edsantiago (Member) commented:

Interesting. I'm seeing something similar on my Gentoo laptop after upgrading this morning. Downgrading to 3.1.2 fixed it.

rhatdan (Member) commented Jun 15, 2021

@vrothberg PTAL

jorgeml commented Jun 16, 2021

Same issue here but on a Raspberry Pi 2 Model B Rev 1.1 (armv7l) running Fedora 34 IoT.

edsantiago (Member) commented:

The problem is with password-protected registries. I can reproduce this on my f34 laptop using master @ b3f61ec:

$ ./bin/podman pull quay.io/edsantiago/acrosslite:0.1
....the usual logs...
$ ./bin/podman run !$ true
Error: quay.io/edsantiago/acrosslite:0.1: image not known

vrothberg (Member) commented:

Local image lookup is working as expected but the pihole image is wrong, see below:

$ skopeo inspect --format "{{.Architecture}}" docker://docker.io/pihole/pihole@sha256:4e10b0a5cfa2f36ff29554599664b47e984986f48a713e1c0ad831b1cf88401a
amd64

The arm64 image set its architecture to amd64. I think we need to loosen the platform checks in libimage, as I prefer to keep using broken images over breaking workloads.
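
For completeness, the mismatch can be seen by comparing the platforms declared in the manifest list against the per-image config (the jq filter assumes a Docker/OCI manifest list):

# Platforms declared in the manifest list (these are correct for pihole)
skopeo inspect --raw docker://docker.io/pihole/pihole:latest | jq '.manifests[].platform'
# The image config behind the arm64 digest above reports amd64 instead
skopeo inspect --format "{{.Architecture}}" docker://docker.io/pihole/pihole@sha256:4e10b0a5cfa2f36ff29554599664b47e984986f48a713e1c0ad831b1cf88401a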

edsantiago (Member) commented:

Oh, blush, yeah, it's Arch, not password-protection.

$ podman image inspect --format "{{.Architecture}}" quay.io/edsantiago/acrosslite:0.1
386
$ uname -m
x86_64

vrothberg changed the title from "podman fails to pull image with systemd service" to "podman doesn't run image with invalid architecture" Jun 18, 2021
vrothberg (Member) commented:

Note that a workaround is to use the image ID for podman create/run. That will use whatever the user specified. Referencing by name at the moment will apply architecture checks.

It's tricky to have multi-arch support and be tolerant toward images with an incorrect arch. I am working on a fix for this specific case since the manifest list/image index is correct, just the image config is off.
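
A minimal sketch of that workaround (the image name comes from this thread; the ID resolution is illustrative):

# Resolve the local image ID once, then reference the ID directly;
# ID lookups bypass the name-based architecture checks described above
IMAGE_ID=$(podman images --format '{{.ID}}' docker.io/pihole/pihole)
podman run -d --name pihole "$IMAGE_ID"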

vrothberg added a commit to vrothberg/common that referenced this issue Jun 18, 2021
We must ignore the platform of a local image when doing lookups.  Some
images set an incorrect or even invalid platform (see
containers/podman/issues/10682).  Doing the lookup while ignoring the
platform checks prevents redundantly downloading the same image.

Note that this has the consequence that a `--pull-never --arch=hurz` may
choose a local image of another architecture.  However, I estimate the
benefit of continuing to allow potentially invalid images to be higher
than not running them (and breaking workloads).

The changes required touching the corrupted-image checks.  I used the
occasion to make these checks a bit cheaper.

Signed-off-by: Valentin Rothberg <[email protected]>
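
The trade-off this commit describes can be sketched with standard flags (the bogus arch value mirrors the commit message):

podman pull alpine                               # local image with the host's architecture
podman run --pull=never --arch=hurz alpine true  # with this change, the local image may be chosen despite the arch mismatch
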
vrothberg added a commit to vrothberg/libpod that referenced this issue Jun 23, 2021
Much to my regret, there are a number of images in the wild with invalid
platforms breaking the platform checks in libimage that want to make
sure that a local image matches the expected platform.

Imagine a `podman run --arch=arm64 fedora` with a local amd64 fedora
image.  We really shouldn't use the local one in this case and pull down
the arm64 one.

The strict platform checks in libimage, in combination with invalid
platforms in images, surfaced as Podman being able to pull an image but
failing to look it up in subsequent presence checks.  A `podman run`
would hence pull such an image but fail to create the container.

Support images with invalid platforms by vendoring the latest HEAD from
containers/common.  Also remove the partially implemented pull-policy
logic from Podman and let libimage handle that entirely.  However,
whenever --arch, --os or --platform are specified, the pull policy will
be forced to "newer".  This way, we pessimistically assume that the
local image has an invalid platform and we reach out to the registry.
If there's a newer image (i.e., one with a different digest), we'll pull
it down.

Please note that most of the logic has either already been implemented
in libimage or been moved down which allows for removing some clutter
from Podman.

[NO TESTS NEEDED] since c/common has new tests.  Podman can rely on the
existing tests.

Fixes: containers#10648
Fixes: containers#10682
Signed-off-by: Valentin Rothberg <[email protected]>
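
In practice, the forced "newer" policy means a platform-qualified run checks the registry first (a sketch; the image is illustrative):

podman run --arch=arm64 fedora uname -m
# podman contacts the registry and pulls only if the remote digest differs
# from the local image; otherwise the local image is used as-is
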
vrothberg added a commit to vrothberg/common that referenced this issue Jun 23, 2021
Enforce the pull policy to always if a custom platform is requested by
the user.  Some images ship with invalid platforms, which we must
pessimistically assume to be wrong; see containers/podman/issues/10682.

Signed-off-by: Valentin Rothberg <[email protected]>
vrothberg added a commit to vrothberg/common that referenced this issue Jun 27, 2021
Do not use the name of the locally resolved image when pulling an image
with a custom platform.  As we recently re-discovered [1], many
multi-arch images in the wild do not adhere to the OCI image spec and
either declare custom or simply wrong platforms (arch, os, variant).

To address such wrong images, we enforce the pull-always policy whenever
a custom arch, os or variant is specified.  We have to do that since we
cannot reliably perform platform matches to any image we would find in
the local containers storage.

To complete the fix, we need to ignore any local image and not use the
locally resolved name, which we usually would have to use (see [2]).

Let's assume we have a local image "localhost/foo" (arch=amd64).  If we
perform a `pull --arch=arm64`, we would not attempt to pull
`localhost/foo` but would use ordinary short-name resolution and look
for a matching alias or walk the unqualified-search registries.

In other words: short-name resolution of multi-arch images is prone to
errors but we should continue supporting images in the wild.

[1] containers/podman/issues/10682
[2] containers/buildah/issues/2904

Signed-off-by: Valentin Rothberg <[email protected]>
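
The localhost/foo scenario from this commit message can be recreated roughly like this (the names follow the commit's own example):

podman pull alpine
podman tag alpine localhost/foo    # local image with the host's architecture
podman pull --arch=arm64 foo       # with this fix: ordinary short-name resolution, not localhost/foo
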
vrothberg added a commit to vrothberg/common that referenced this issue Jan 10, 2022
When pulling down an image with a user-specified custom platform, we
try to make sure that the user gets what they are asking for.  An inherent
issue with multi-arch images is that there are many images in the wild
which do not get the platform right (see containers/podman/issues/10682).
That means we need to pessimistically assume that the local image is
wrong and pull the "correct" one down from the registry; in the worst case
that is redundant work but we have a guarantee of correctness.

Motivated by containers/podman/issues/12707 I had another look at the
code and found some space for optimizations.  Previously, we enforced
the pull policy to "always" but that may be too aggressive since we may
be running in an airgapped environment and the local image is correct.

With this change, we enforce the pull policy to "newer" which makes
errors non-fatal in case a local image has been found; this seems like a
good middle ground between making sure we are serving the "correct" image
and user friendliness.

Signed-off-by: Valentin Rothberg <[email protected]>
vrothberg added a commit to vrothberg/common that referenced this issue Jan 12, 2022
After containers/podman/issues/10682, we decided to always re-pull
images of non-local platforms and match *any* local image. Over time, we
refined this logic to not *always* pull the image but only if there is a
*newer* one. This has slightly changed the semantics and requires
performing platform checks when looking up a local image. Otherwise,
bogus values would match a local image and mistakenly return it.

Signed-off-by: Valentin Rothberg <[email protected]>
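
A sketch of the behaviour change this describes (the bogus arch value follows the earlier commit messages):

podman pull alpine
podman run --arch=hurz alpine true
# before this change: the bogus platform could match the local alpine image and run it
# after: the lookup performs platform checks again, so the bogus value no longer matches
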
github-actions bot added the locked - please file new issue/PR label Sep 21, 2023
github-actions bot locked as resolved and limited conversation to collaborators Sep 21, 2023