Podman build wrongly uses stale cache layer although build-arg changed and, thus, produces incorrect image #2837

Closed
svdHero opened this issue Dec 4, 2020 · 5 comments · Fixed by #2938
Assignees: umohnani8
Labels: kind/bug, locked - please file new issue/PR

Comments


svdHero commented Dec 4, 2020

Is this a BUG REPORT or FEATURE REQUEST?

/kind bug

Description

I am using podman build to create my container images in a Jenkins CI pipeline. In my Containerfile I bake the Jenkins build number into the software components that later run inside the container.

Yesterday I noticed that Podman keeps using stale cache layers instead of creating new container images with every pipeline run. The reason seems to be that the files I COPY in my Containerfile have not changed. However, the build-arg holding the Jenkins build number has changed, and Podman does not seem to include build args in its cache check.

Steps to reproduce the issue:

  1. Create a Containerfile with the following simple content:
FROM busybox:1.32.0

ARG BUILD_NUMBER
ENV BUILD_NUMBER=${BUILD_NUMBER}

ENTRYPOINT printf "This is build number %s.\n\n" ${BUILD_NUMBER}
  2. Run these commands:
echo "Building image with build number 13:"
podman build -f Containerfile -t my-test-image:1.0.0.13 --build-arg BUILD_NUMBER=13 .
echo "Running image with build number 13:"
podman run my-test-image:1.0.0.13

echo "Building image with build number 14:"
podman build -f Containerfile -t my-test-image:1.0.0.14 --build-arg BUILD_NUMBER=14 .
echo "Running image with build number 14:"
podman run my-test-image:1.0.0.14
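
To see whether the cache was reused, the image IDs of the two tags can be compared (a quick sketch, assuming plain podman inspect on the image names; output formatting may differ):

# With the bug present, both commands print the same image ID.
podman inspect --format '{{.Id}}' my-test-image:1.0.0.13
podman inspect --format '{{.Id}}' my-test-image:1.0.0.14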

Describe the results you received:
I get the following output:

Building image with build number 13:
STEP 1: FROM busybox:1.32.0
Completed short name "busybox" with unqualified-search registries (origin: /etc/containers/registries.conf)
Getting image source signatures
Copying blob ea97eb0eb3ec done  
Copying config 219ee5171f done  
Writing manifest to image destination
Storing signatures
STEP 2: ARG BUILD_NUMBER
--> 2ca28c528f3
STEP 3: ENV BUILD_NUMBER=${BUILD_NUMBER}
--> 71d217da2ca
STEP 4: ENTRYPOINT printf "This is build number %s.\n\n" ${BUILD_NUMBER}
STEP 5: COMMIT my-test-image:1.0.0.13
--> 409f2581b67
409f2581b67189237bb15f1f354df0e63ca5bf4948f9be8028de01869cafa6f1
Running image with build number 13:
This is build number 13.

Building image with build number 14:
STEP 1: FROM busybox:1.32.0
STEP 2: ARG BUILD_NUMBER
--> Using cache 2ca28c528f318b548f866bc762168c75ba4f461943d5b6f8acff3b29ba581325
--> 2ca28c528f3
STEP 3: ENV BUILD_NUMBER=${BUILD_NUMBER}
--> Using cache 71d217da2ca87b418e59e352ab507fb142965fa763a66fb3d27d4dde38f0cd59
--> 71d217da2ca
STEP 4: ENTRYPOINT printf "This is build number %s.\n\n" ${BUILD_NUMBER}
--> Using cache 409f2581b67189237bb15f1f354df0e63ca5bf4948f9be8028de01869cafa6f1
STEP 5: COMMIT my-test-image:1.0.0.14
--> 409f2581b67
409f2581b67189237bb15f1f354df0e63ca5bf4948f9be8028de01869cafa6f1
Running image with build number 14:
This is build number 13.

Although the build-arg changed to BUILD_NUMBER=14, Podman still reuses the stale cached layers from the build-number-13 build. Thus, the wrong image is produced.

Describe the results you expected:
Since build args affect the build result, Podman should notice the change and create a new image reflecting the latest build-arg values.
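
Stated concretely, inspecting the environment baked into each tag should show the build number that was passed to that build (a sketch, assuming the Config.Env field of podman inspect output):

# Expected: each tag carries the BUILD_NUMBER it was built with.
podman inspect --format '{{.Config.Env}}' my-test-image:1.0.0.13   # should list BUILD_NUMBER=13
podman inspect --format '{{.Config.Env}}' my-test-image:1.0.0.14   # should list BUILD_NUMBER=14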

Additional information you deem important (e.g. issue happens only occasionally):
When I set an alias

alias podman=docker

and then execute the steps above, I see the expected behaviour, i.e., a new image reflecting the latest build-arg value is created.
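
Bypassing the layer cache works around the problem for now, at the cost of rebuilding every layer (a sketch using the --no-cache flag of podman build):

echo "Building image with build number 14 (cache disabled as a workaround):"
podman build --no-cache -f Containerfile -t my-test-image:1.0.0.14 --build-arg BUILD_NUMBER=14 .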

Output of podman version:

podman version 2.2.0

Output of podman info --debug:

host:
  arch: amd64
  buildahVersion: 1.18.0
  cgroupManager: cgroupfs
  cgroupVersion: v1
  conmon:
    package: 'conmon: /usr/libexec/podman/conmon'
    path: /usr/libexec/podman/conmon
    version: 'conmon version 2.0.20, commit: '
  cpus: 1
  distribution:
    distribution: ubuntu
    version: "20.04"
  eventLogger: journald
  hostname: xxxxx
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  kernel: 5.4.0-56-generic
  linkmode: dynamic
  memFree: 4186542080
  memTotal: 8349216768
  ociRuntime:
    name: runc
    package: 'containerd.io: /usr/bin/runc'
    path: /usr/bin/runc
    version: |-
      runc version 1.0.0-rc10
      commit: dc9208a3303feef5b3839f4323d9beb36df0a9dd
      spec: 1.0.1-dev
  os: linux
  remoteSocket:
    path: /run/user/1000/podman/podman.sock
  rootless: true
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: 'slirp4netns: /usr/bin/slirp4netns'
    version: |-
      slirp4netns version 1.1.4
      commit: unknown
      libslirp: 4.3.1-git
      SLIRP_CONFIG_VERSION_MAX: 3
  swapFree: 2147479552
  swapTotal: 2147479552
  uptime: 39m 54.49s
registries:
  search:
  - docker.io
  - harbor.wildbad.berthold.comp
store:
  configFile: /home/schorsch/.config/containers/storage.conf
  containerStore:
    number: 2
    paused: 0
    running: 0
    stopped: 2
  graphDriverName: vfs
  graphOptions: {}
  graphRoot: /home/schorsch/.local/share/containers/storage
  graphStatus: {}
  imageStore:
    number: 4
  runRoot: /run/user/1000/containers
  volumePath: /home/schorsch/.local/share/containers/storage/volumes
version:
  APIVersion: 2.1.0
  Built: 0
  BuiltTime: Thu Jan  1 01:00:00 1970
  GitCommit: ""
  GoVersion: go1.15.2
  OsArch: linux/amd64
  Version: 2.2.0

Package info (e.g. output of rpm -q podman or apt list podman):

apt list podman
Listing... Done
podman/unknown,now 2.2.0~2 amd64 [installed]
podman/unknown 2.2.0~2 arm64
podman/unknown 2.2.0~2 armhf
podman/unknown 2.2.0~2 s390x

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide?

Yes

Additional environment details (AWS, VirtualBox, physical, etc.):
This happens both in VirtualBox and on bare metal.


mheon commented Dec 4, 2020

@TomSweeneyRedHat @nalind PTAL

@rhatdan rhatdan transferred this issue from containers/podman Dec 4, 2020
@openshift-ci-robot openshift-ci-robot added the kind/bug Categorizes issue or PR as related to a bug. label Dec 4, 2020

rhatdan commented Dec 4, 2020

This would be a Buildah issue; once fixed there, it will be revendored into Podman.


svdHero commented Dec 5, 2020

So is there an easy way to transfer this issue, or do I have to create a new one in the Buildah project?


svdHero commented Dec 5, 2020

Ah sorry. I've just seen that you transferred it already. Thank you.

jmencak added a commit to jmencak/cluster-node-tuning-operator that referenced this issue Dec 11, 2020
Changes:
  - report Tuned profile currently applied for each of the containerized
    Tuned daemon managed by NTO
  - report two Profile status conditions "Applied" and "Degraded"
    in every Profile indicating whether the Tuned profile was applied and
    whether there were issues during the profile application
  - cleanup of the ClusterOperator settings code; ClusterOperator now also
    reports Degraded == True if any of the Tuned Profiles failed to be
    applied cleanly for any of the containerized Tuned daemons managed by
    NTO
  - e2e test added to check the status reporting functionality
  - e2e basic/available test enhanced to check for not Degraded condition
  - using "podman build --no-cache" now.  This works around issues such as:
    containers/buildah#2837
umohnani8 commented Jan 26, 2021

Duplicate of #2848

@umohnani8 umohnani8 marked this as a duplicate of #2848 Jan 26, 2021
@umohnani8 umohnani8 self-assigned this Jan 26, 2021
jmencak added a commit to jmencak/cluster-node-tuning-operator that referenced this issue Jan 29, 2021
Changes:
  - report Tuned profile currently applied for each of the containerized
    Tuned daemon managed by NTO
  - report two Profile status conditions "Applied" and "Degraded"
    in every Profile indicating whether the Tuned profile was applied and
    whether there were issues during the profile application
  - cleanup of the ClusterOperator settings code; ClusterOperator now also
    reports Reason == ProfileDegraded for the Available condition if any of
    the Tuned Profiles failed to be applied cleanly for any of the
    containerized Tuned daemons managed by NTO
  - e2e test added to check the status reporting functionality
  - e2e basic/available test enhanced to check for not Degraded condition
  - using "podman build --no-cache" now.  This works around issues such as:
    containers/buildah#2837
IlyaTyomkin pushed a commit to IlyaTyomkin/cluster-node-tuning-operator that referenced this issue Jun 5, 2023
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Sep 8, 2023