Podman build wrongly uses stale cache layer although build-arg changed and, thus, produces incorrect image #2837
@TomSweeneyRedHat @nalind PTAL
openshift-ci-robot added the kind/bug label (Categorizes issue or PR as related to a bug.) on Dec 4, 2020
This would be a buildah issue; once fixed, it will be revendored into Podman.
So is there an easy way to transfer this issue, or do I have to create a new one in the buildah project?
Ah sorry. I've just seen that you transferred it already. Thank you.
jmencak added a commit to jmencak/cluster-node-tuning-operator that referenced this issue on Dec 11, 2020. Its commit message reads:

Changes:
- report the Tuned profile currently applied for each of the containerized Tuned daemons managed by NTO
- report two Profile status conditions, "Applied" and "Degraded", in every Profile, indicating whether the Tuned profile was applied and whether there were issues during the profile application
- clean up the ClusterOperator settings code; ClusterOperator now also reports Degraded == True if any of the Tuned Profiles failed to be applied cleanly for any of the containerized Tuned daemons managed by NTO
- e2e test added to check the status reporting functionality
- e2e basic/available test enhanced to check for the absence of the Degraded condition
- now using "podman build --no-cache"; this works around issues such as containers/buildah#2837
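The workaround named in this commit message simply disables layer caching altogether. A minimal sketch of what such a CI build step looks like (the image name and tag scheme are assumptions, not taken from the commit):

```sh
# Bypass the layer cache entirely so a changed build-arg always takes
# effect, at the cost of rebuilding every layer on every run:
podman build --no-cache \
  --build-arg BUILD_NUMBER="${BUILD_NUMBER}" \
  -t myimage:"${BUILD_NUMBER}" .
```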
Duplicate of #2848
jmencak added a commit to jmencak/cluster-node-tuning-operator that referenced this issue on Jan 27, 2021, with the same commit message as above.
jmencak added a commit to jmencak/cluster-node-tuning-operator that referenced this issue on Jan 29, 2021. Its commit message matches the one above, except that the ClusterOperator item now reads: "ClusterOperator now also reports Reason == ProfileDegraded for the Available condition if any of the Tuned Profiles failed to be applied cleanly for any of the containerized Tuned daemons managed by NTO".
IlyaTyomkin pushed a commit to IlyaTyomkin/cluster-node-tuning-operator that referenced this issue on Jun 5, 2023, with the same commit message as the Jan 29, 2021 commit above.
Is this a BUG REPORT or FEATURE REQUEST?
/kind bug
Description
I am using `podman build` to create my container images in a Jenkins CI pipeline. In my `Containerfile` I bake the Jenkins build number into the software components that later run inside the container. Yesterday I noticed that Podman keeps using the stale cache layers instead of creating new container images with every new pipeline run. The reason seems to be that the files that I `COPY` in my `Containerfile` have not changed. However, the `build-arg` that carries the Jenkins build number has changed. Podman does not seem to check the build args for modification.

Steps to reproduce the issue:
1. Create a `Containerfile` with the following simple content.
2. Run `podman build` twice, passing different `--build-arg BUILD_NUMBER` values.
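The original `Containerfile` and build commands did not survive this page capture; the following is a hypothetical reconstruction based on the description above (the base image, the file `app.sh`, the image name `myimage`, and the exact `RUN` line are all assumptions):

```sh
# Hypothetical reproduction, reconstructed from the issue description:
# a COPY source that never changes, plus a build-arg that changes every run.
cat > Containerfile <<'EOF'
FROM alpine:3.12
ARG BUILD_NUMBER
# The COPY source below does not change between builds ...
COPY app.sh /app.sh
# ... but the build-arg does, so this layer should not come from cache.
RUN echo "build ${BUILD_NUMBER}" > /build-info.txt
EOF

echo 'echo hello' > app.sh

# Build as the Jenkins pipeline would, with consecutive build numbers:
podman build --build-arg BUILD_NUMBER=13 -t myimage:13 .
podman build --build-arg BUILD_NUMBER=14 -t myimage:14 .

# With the bug, the RUN layer is reused from the first build, so the
# second image still reports build 13:
podman run --rm myimage:14 cat /build-info.txt
```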
Describe the results you received:

I get the following output:
Although the `build-arg` changed when `BUILD_NUMBER` is `14`, Podman still uses the stale cache image for build number `13`. Thus, the wrong image is produced.

Describe the results you expected:
Since build args change the build, Podman should notice that and create a new image with the latest `build-arg` values.
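As an aside not suggested anywhere in this thread: because changed `COPY` sources do invalidate the cache, one can route the build number through a copied file to force a rebuild on affected versions without giving up caching entirely. A minimal sketch, using the same assumed names as the reconstruction above:

```sh
# Hypothetical workaround (not from this thread): make a COPY'd file
# change whenever the build number changes, so its checksum, which
# Podman does check, invalidates the cached layers that follow it.
# The Containerfile would need a matching instruction, e.g.:
#   COPY build_number.txt /build_number.txt
echo "${BUILD_NUMBER}" > build_number.txt
podman build --build-arg BUILD_NUMBER="${BUILD_NUMBER}" \
  -t myimage:"${BUILD_NUMBER}" .
```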
Additional information you deem important (e.g. issue happens only occasionally):

When I run `alias podman=docker` and then execute the steps above, I see the expected behaviour, i.e., a new image reflecting the latest `build-arg` value is created.

Output of `podman version`:

Output of `podman info --debug`:

Package info (e.g. output of `rpm -q podman` or `apt list podman`):

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide?
Yes
Additional environment details (AWS, VirtualBox, physical, etc.):

This happens both in VirtualBox and on bare metal.