
Image build cache not working through podman listening service #12378

Closed
stac47 opened this issue Nov 21, 2021 · 6 comments · Fixed by #12381
Labels
kind/bug Categorizes issue or PR as related to a bug.
locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments.

Comments

@stac47

stac47 commented Nov 21, 2021

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description

Building the same image through the podman API server does not make use of the build cache.

Steps to reproduce the issue:

  1. Write a basic Dockerfile:
% cat Dockerfile
FROM fedora
RUN dnf -y install rpmdevtools m4 && dnf -y install 'dnf-command(config-manager)'
  2. Start the podman API server:
% podman system service -t 0 &
  3. Run the build a first time using the Docker client (or directly with cURL):
% DOCKER_HOST=unix:///run/user/$UID/podman/podman.sock docker build .
  4. Run the same command a second time. It takes the same amount of time, because the cache is not used.
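The steps above, combined into a single script (commands and socket path taken from the report; a repro sketch, not a verified test case):

```shell
# Repro sketch combining the four steps above.
cat > Dockerfile <<'EOF'
FROM fedora
RUN dnf -y install rpmdevtools m4 && dnf -y install 'dnf-command(config-manager)'
EOF

podman system service -t 0 &                                  # step 2: start the API server
export DOCKER_HOST=unix:///run/user/$UID/podman/podman.sock   # socket path from the report
docker build .                                                # step 3: first build populates the cache
docker build .                                                # step 4: should hit the cache, but does not
```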

Describe the results you received:

Subsequent builds of the same image don't use the build cache

Describe the results you expected:

I would have expected the same behaviour as docker/podman build: it would reuse the build cache.

% docker build .
Sending build context to Docker daemon  2.048kB
Step 1/2 : FROM fedora
 ---> b080de8a4da3
Step 2/2 : RUN dnf -y install rpmdevtools m4 && dnf -y install 'dnf-command(config-manager)'
 ---> Using cache
 ---> d8373a370ed8
Successfully built d8373a370ed8

Or:

% podman build .
STEP 1: FROM fedora
STEP 2: RUN dnf -y install rpmdevtools m4 && dnf -y install 'dnf-command(config-manager)'
--> Using cache d0801f1c09998d347a5b892b8fc80d944e3b946fbd417259130dda9aa502fc41
--> d0801f1c099
d0801f1c09998d347a5b892b8fc80d944e3b946fbd417259130dda9aa502fc41

Additional information you deem important (e.g. issue happens only occasionally):

Output of podman version:

Version:      4.0.0-dev
API Version:  4.0.0-dev
Go Version:   go1.17.3
Git Commit:   a6976c9ca8346331001dfade295173ad1482c2f6
Built:        Sat Nov 20 18:36:21 2021
OS/Arch:      linux/amd64

Output of podman info --debug:

host:
  arch: amd64
  buildahVersion: 1.23.1
  cgroupControllers:
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: 'conmon: /usr/bin/conmon'
    path: /usr/bin/conmon
    version: 'conmon version 2.0.25, commit: unknown'
  cpus: 8
  distribution:
    codename: impish
    distribution: ubuntu
    version: "21.10"
  eventLogger: journald
  hostname: lstacul-vm
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 165536
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 165536
      size: 65536
  kernel: 5.13.0-21-generic
  linkmode: dynamic
  logDriver: journald
  memFree: 3878821888
  memTotal: 16779907072
  networkBackend: cni
  ociRuntime:
    name: crun
    package: 'crun: /usr/bin/crun'
    path: /usr/bin/crun
    version: |-
      crun version 0.17
      commit: 0e9229ae34caaebcb86f1fde18de3acaf18c6d9a
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +YAJL
  os: linux
  remoteSocket:
    exists: true
    path: /run/user/1000/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: false
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: 'slirp4netns: /usr/bin/slirp4netns'
    version: |-
      slirp4netns version 1.0.1
      commit: 6a7b16babc95b6a3056b33fb45b74a6f62262dd4
      libslirp: 4.4.0
  swapFree: 0
  swapTotal: 0
  uptime: 135h 12m 59.63s (Approximately 5.62 days)
plugins:
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries: {}
store:
  configFile: /home/ubuntu/.config/containers/storage.conf
  containerStore:
    number: 0
    paused: 0
    running: 0
    stopped: 0
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /mnt/my-xfs/podman-user-root
  graphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 49
  runRoot: /mnt/my-xfs/podman-user-root
  volumePath: /mnt/my-xfs/podman-user-root/volumes
version:
  APIVersion: 4.0.0-dev
  Built: 1637433381
  BuiltTime: Sat Nov 20 18:36:21 2021
  GitCommit: a6976c9ca8346331001dfade295173ad1482c2f6
  GoVersion: go1.17.3
  OsArch: linux/amd64
  Version: 4.0.0-dev

Package info (e.g. output of rpm -q podman or apt list podman):

(paste your output here)

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide? (https://github.com/containers/podman/blob/master/troubleshooting.md)

Yes. This problem also occurs with the podman shipped with Ubuntu 21.10:

Version:      3.2.1
API Version:  3.2.1
Go Version:   go1.16.7
Built:        Thu Jan  1 00:00:00 1970
OS/Arch:      linux/amd64

Additional environment details (AWS, VirtualBox, physical, etc.):

@openshift-ci openshift-ci bot added the kind/bug Categorizes issue or PR as related to a bug. label Nov 21, 2021
@flouthoc
Collaborator

flouthoc commented Nov 22, 2021

@stac47 I think it's because of the --layers option, which the podman client explicitly sets to true by default; the docker client does not set it, so it defaults to false when the build is invoked from the docker client.

Does the same happen with podman-remote?
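If the defaults really differ as described above, one way to probe this (a hypothetical sketch; the endpoint and the `layers` query parameter are assumed from the libpod REST API docs, not verified against this exact version) is to build through the socket with `layers` set explicitly:

```shell
# Hypothetical probe: call the libpod build endpoint directly over the socket,
# explicitly enabling layer caching via the `layers` query parameter.
tar -cf /tmp/ctx.tar Dockerfile
curl -X POST \
     --unix-socket /run/user/$UID/podman/podman.sock \
     -H "Content-Type: application/x-tar" \
     --data-binary @/tmp/ctx.tar \
     "http://d/v4.0.0/libpod/build?layers=1"
```

If the second such build reuses the cache while the docker-client build does not, that would point at the `layers` default rather than the cache itself.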

@flouthoc
Collaborator

@stac47 Could you try the above PR, please?

@stac47
Author

stac47 commented Nov 22, 2021

Question 1: with podman-remote the cache is correctly used
Question 2: the PR works fine

@stac47
Author

stac47 commented Nov 22, 2021

By chance, do you know how that problem could be worked around in the podman 3.x series?
If not, is it possible to backport the fix to the 3.x branches?
Thanks in advance,
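Until a backport lands, one possible workaround (a sketch assuming the same socket path as in the report) is to drive the build with podman-remote, which, as noted above, does use the cache:

```shell
# Workaround sketch for 3.x: point podman-remote at the same listening socket
# instead of using the docker client. CONTAINER_HOST is the standard
# remote-socket environment variable for podman-remote.
export CONTAINER_HOST=unix:///run/user/$UID/podman/podman.sock
podman-remote build .
```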

@rhatdan
Member

rhatdan commented Nov 22, 2021

We hope to do a 3.5 release at some point, so this might be a candidate for that or a podman 3.4.3 release.

@hashar

hashar commented Dec 6, 2022

This got backported in v3.4.3.

I had the issue with Podman 3.0.1 shipped by Debian Bullseye while using docker-compose with DOCKER_HOST set to the running podman service socket. It was extremely confusing ;)

@github-actions github-actions bot added the locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments. label Sep 8, 2023
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Sep 8, 2023