
When using podman in gitlab ci, latest image is never pulled #9232

Closed
rsommer opened this issue Feb 4, 2021 · 6 comments · Fixed by #9397
Labels: kind/bug (Categorizes issue or PR as related to a bug) · locked - please file new issue/PR

Comments

rsommer commented Feb 4, 2021

/kind bug

Description

When using Podman as a Docker replacement in GitLab CI, latest-tagged images are never pulled.

Steps to reproduce the issue:

  1. Set up a new GitLab runner with the docker executor, using Podman via podman.socket

  2. Execute a CI run on this runner using a simple image

  3. Update the image from another host, so that the new version is not present locally

  4. Execute another CI run -> the new image is not pulled
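Step 1 can be reproduced with a runner configuration along these lines (a sketch; the runner name, image, and socket path are illustrative assumptions, not taken from the report):

```toml
# Illustrative gitlab-runner config.toml fragment: a "docker" executor
# pointed at the Podman system socket instead of the Docker daemon.
[[runners]]
  name = "podman-runner"        # hypothetical runner name
  executor = "docker"
  [runners.docker]
    host = "unix:///run/podman/podman.sock"  # provided by podman.socket (root)
    image = "busybox:latest"
```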

Describe the results you received:

The CI runs always use the existing local image.

Describe the results you expected:

Newer images using a latest tag are used.

Additional information you deem important (e.g. issue happens only occasionally):

Output of podman version:

Version:      2.2.1
API Version:  2.1.0
Go Version:   go1.14
Built:        Thu Jan  1 00:00:00 1970
OS/Arch:      linux/amd64

Output of podman info --debug:

host:
  arch: amd64
  buildahVersion: 1.18.0
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: 'conmon: /usr/libexec/podman/conmon'
    path: /usr/libexec/podman/conmon
    version: 'conmon version 2.0.24, commit: '
  cpus: 4
  distribution:
    distribution: debian
    version: "10"
  eventLogger: journald
  hostname: citest01
  idMappings:
    gidmap: null
    uidmap: null
  kernel: 4.19.0-14-amd64
  linkmode: dynamic
  memFree: 399601664
  memTotal: 4114087936
  ociRuntime:
    name: crun
    package: 'crun: /usr/bin/crun'
    path: /usr/bin/crun
    version: |-
      crun version 0.16.3-fd58-dirty
      commit: fd582c529489c0738e7039cbc036781d1d039014
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +YAJL
  os: linux
  remoteSocket:
    exists: true
    path: /run/podman/podman.sock
  rootless: false
  slirp4netns:
    executable: ""
    package: ""
    version: ""
  swapFree: 2046816256
  swapTotal: 2046816256
  uptime: 1h 40m 10.27s (Approximately 0.04 days)
registries:
  search:
  - docker.io
  - quay.io
store:
  configFile: /etc/containers/storage.conf
  containerStore:
    number: 2
    paused: 0
    running: 0
    stopped: 2
  graphDriverName: overlay
  graphOptions:
    overlay.mountopt: nodev
  graphRoot: /var/lib/containers/storage
  graphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Using metacopy: "false"
  imageStore:
    number: 4
  runRoot: /run/containers/storage
  volumePath: /var/lib/containers/storage/volumes
version:
  APIVersion: 2.1.0
  Built: 0
  BuiltTime: Thu Jan  1 00:00:00 1970
  GitCommit: ""
  GoVersion: go1.14
  OsArch: linux/amd64
  Version: 2.2.1

Package info (e.g. output of rpm -q podman or apt list podman):

podman/additional,now 2.2.1~4 amd64 [installed]

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide?

Yes, latest stable from upstream repo was used.

Additional environment details (AWS, VirtualBox, physical, etc.):
Tested in vagrant/ubuntu and vmware/debian10

I used the following Dockerfile:

FROM busybox:latest
WORKDIR /work
RUN echo "$(date)" > /work/build-date

Test image built and pushed via:

podman build --format=docker --no-cache --tag my.gitlab.instance/podman-test:latest -f Dockerfile
podman push my.gitlab.instance/podman-test:latest

If I execute a podman pull on the runner host, I get the correct image; if the Podman service is used via the GitLab runner, I do not.
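The difference comes down to whether a stale local copy short-circuits the pull. A minimal sketch of the check a client needs for a latest tag: compare the locally stored digest with what the registry reports (hypothetical helper, not Podman's actual code):

```python
def needs_pull(local_digest, remote_digest):
    """A pull is needed when no local copy exists or the digests differ."""
    if local_digest is None:   # image not present locally
        return True
    return local_digest != remote_digest

# The failing CI scenario: a local copy exists, but the registry has a
# rebuilt :latest with a new digest -- the pull must still happen.
needs_pull("sha256:aaa", "sha256:bbb")  # True: local copy is stale
needs_pull("sha256:aaa", "sha256:aaa")  # False: local copy is current
needs_pull(None, "sha256:bbb")          # True: nothing cached locally
```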

I have already added some details at https://gitlab.com/gitlab-org/gitlab-runner/-/issues/27270#note_498440333

openshift-ci-robot added the kind/bug label Feb 4, 2021
rsommer commented Feb 15, 2021

Just updated to Podman 3.0 to check whether this had been fixed, but the problem still persists.

vrothberg (Member) commented:
Thanks for checking! Do you know which REST endpoint the CI runner is using?

rsommer commented Feb 15, 2021

The image is created via POST /v1.25/images/create?fromImage= with tag=latest. I'm not quite sure on which side the bug resides. If I set pull_policy in containers.conf to always, everything works as expected. If I use the default settings and specify pull_policy = always only in the GitLab runner config, images that already exist locally are never pulled again. As far as I understand, the latest tag should force a pull policy of always; at least that is what Kubernetes does.
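The workaround described above, as a containers.conf fragment (system-wide, e.g. in /etc/containers/containers.conf):

```toml
# Force Podman to contact the registry on every image use, regardless of
# what the caller requests via the API.
[engine]
pull_policy = "always"
```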

rhatdan commented Feb 15, 2021

We are probably defaulting to pullifmissing?

vrothberg (Member) commented:

> We are probably defaulting to pullifmissing?

Yes, we are but we should be using PullIfNewer. I'll spin up a PR.
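The three policies in play can be sketched as follows. The names mirror the Podman concepts discussed above (pull-if-missing, pull-if-newer, pull-always), but the code is an illustration, not the actual implementation:

```python
from enum import Enum

class PullPolicy(Enum):
    IF_MISSING = "missing"   # pull only when no local copy exists
    IF_NEWER = "newer"       # re-check the registry, pull if digest changed
    ALWAYS = "always"        # always pull

def should_contact_registry(policy, local_exists):
    """Return True when the engine must talk to the registry at all."""
    if policy in (PullPolicy.ALWAYS, PullPolicy.IF_NEWER):
        return True                  # both policies re-check the registry
    return not local_exists          # IF_MISSING skips it if cached

# The reported bug: images/create used IF_MISSING, so a cached :latest
# short-circuited the pull. The fix switches the endpoint to IF_NEWER.
should_contact_registry(PullPolicy.IF_MISSING, local_exists=True)  # False
should_contact_registry(PullPolicy.IF_NEWER, local_exists=True)    # True
```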

vrothberg added a commit to vrothberg/libpod that referenced this issue Feb 16, 2021
The `images/create` endpoint should always attempt to pull a newer
image.  Previously, the local image was used, which is not compatible
with Docker and caused issues in the GitLab CI.

Fixes: containers#9232
Signed-off-by: Valentin Rothberg <[email protected]>
vrothberg (Member) commented:

#9397

mheon pushed a commit to mheon/libpod that referenced this issue Feb 18, 2021
github-actions bot added the locked - please file new issue/PR label Sep 22, 2023
github-actions bot locked as resolved and limited conversation to collaborators Sep 22, 2023