Intermediate layers for tagged images are incorrectly pruned #10832
@vrothberg PTAL. I think this is definitely the recursive pruning grabbing it, though I'm not sure why Docker would bother retaining it? |
Thanks for reaching out, @Enchufa2! I am not sure that's related to it being recursive. Per the Docker docs: "A dangling image is one that is not tagged and is not referenced by any container", which is the case for the intermediate image. It looks like Docker's behavior doesn't match its docs. @Enchufa2, can you elaborate on why removing the intermediate image may not be desirable? Note: it predates the libimage work. |
Let me explain my use case. I'm developing an application composed of several closely interrelated microservices, and I have a multistage Dockerfile that looks like this:

FROM centos:7 as base
# some basic configuration stuff
# update the base and clean the base image
FROM base as common
# install common software and other stuff
FROM common as service1
# install and configure specific stuff
FROM common as service2
# etc.

and I have a compose file with all the targets. Now, suppose I just modify some piece of code or some configuration file that is added in the last layers of the services. Then I rebuild, and I end up with something like this:

$ podman images
REPOSITORY TAG IMAGE ID
localhost/project_service2 latest service2_id_2
localhost/project_service1 latest service1_id_2
<none> <none> service2_id_1
<none> <none> service1_id_1
localhost/project_common latest common_id_1
localhost/project_base latest base_id_1

After several changes and several rebuilds, the list of "<none>" images grows rapidly, as you can imagine. What I want is to prune those dangling ones without having to do it by hand, one by one, but without losing all the intermediate steps in my tagged images. Otherwise, the next time I rebuild, I lose the cache for … BTW, to my understanding, the layer … |
Also FWIW, I always assumed that the behaviour was the same as docker's, and thus I've been running |
I tried Podman v3.0.1 which is also pruning, so it looks like Podman always did that. |
Weird. What changed then? How was I able to rebuild after pruning without rebuilding everything? |
I only tested the reproducer of this issue. Maybe there is more to it? |
To confirm, do you mean that only images with the immediate parent tagged (but not more distant children) would be excluded? “Without a tagged ancestor” would stop pruning almost all images AFAICS. If you have looked into this, can you write down the exact shape of the layer/tag graphs, please? I’m guessing it is something like base-image -> base-config, base-top-layer; or do the intermediate images have their own top layers on top of base-top-layer? Either way, my first, very vague, uninformed impression is to view it the other way: the problem is not really the pruning, but the caching. Even if we pruned intermediate-image, it seems possible to me that a build that reuses base-image could, after the two or more ENV steps, notice that it is equivalent to final-image, and continue using the cache for later build steps. (I haven’t checked; it might be too much code or require exponential time to find.) If we are not changing the filesystem, we don’t really need to reuse a parent; the config changes alone are pretty cheap. And the current pruning rule seems easier to explain to users (“after a |
Another issue, probably related to this. Consider the Dockerfile in my first comment. Then, we produce a first build:

$ podman build . -t test
STEP 1: FROM scratch
STEP 2: ENV test1=test1
--> a6e0cbdfa3e
STEP 3: ENV test2=test2
STEP 4: COMMIT test
--> d82f2c78728
d82f2c78728198673f9ee354ce85d453de5c93e728ba7eb4628b60dd02e217a1

Then, we change the last line of the Dockerfile to produce a second build:

$ podman build . -t test
STEP 1: FROM scratch
STEP 2: ENV test1=test1
--> Using cache a6e0cbdfa3eb3649be86041e77b16ffff8d8ad1ecdbc157f6891d6c7eb7d1c6c
--> a6e0cbdfa3e
STEP 3: ENV test2=test3
STEP 4: COMMIT test
--> 53bde127cab
53bde127cab68f853eb438f1c15fd7539853545ababee07cd58e5d48f4b0c5bc

Now, I see this, which doesn't look right to me:

$ podman images
REPOSITORY TAG IMAGE ID CREATED SIZE
localhost/test latest 53bde127cab6 2 seconds ago 897 B
<none> <none> a6e0cbdfa3eb 7 seconds ago 769 B
<none> <none> d82f2c787281 7 seconds ago 897 B

Why? Because I tried the same with docker, and docker … |
I don't think so. The current pruning rule basically implies squashing all tagged images into a single layer. I don't think any user expects that. If I wanted to squash it, I would have added
Exactly. Then why is it removing things I don't see when I list the available images?
I don't know podman's internals, but it doesn't seem too complicated to keep track of this. In fact, docker does. It's evident that docker treats leaf and intermediate images differently. When you run … Also, from the discussions, e.g., in the moby repo, it is evident that there are many workflows out there relying on a cron job that runs |
ENV doesn’t need to produce layers; it’s just extra overhead. At this point I think we need to see the exact graph shapes to have a specific discussion. |
Aaah. Dangling for Docker implies not having children (see https://github.com/moby/moby/blob/master/daemon/images/image_prune.go#L79). |
Which in Podman terms is an "intermediate" image. |
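Docker's rule can be sketched in Go with simplified, hypothetical types (this is not the actual moby or Podman code): an image counts as dangling only if it is untagged and no other image lists it as its parent.

```go
package main

import "fmt"

// image is a hypothetical, simplified stand-in for an image record.
type image struct {
	id     string
	names  []string // tags; empty means untagged
	parent string   // parent image ID, "" if none
}

// isDangling follows Docker's semantics: untagged AND without children.
// An untagged image that still has children is "intermediate" in Podman
// terms and must not be removed by a dangling-only prune.
func isDangling(img image, all []image) bool {
	if len(img.names) > 0 {
		return false
	}
	for _, other := range all {
		if other.parent == img.id {
			return false
		}
	}
	return true
}

func main() {
	images := []image{
		{id: "base", names: []string{"localhost/project_base:latest"}},
		{id: "cache1", parent: "base"},   // untagged, has a child: intermediate
		{id: "cache2", parent: "cache1"}, // untagged leaf: dangling
		{id: "final", names: []string{"localhost/test:latest"}, parent: "base"},
	}
	for _, img := range images {
		fmt.Printf("%s: dangling=%v\n", img.id, isDangling(img, images))
	}
}
```

Under this definition only cache2 is dangling, which matches the linked moby prune code rather than Podman's "untagged" shortcut.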
I think what we need to do is:

diff --git a/pkg/domain/infra/abi/images.go b/pkg/domain/infra/abi/images.go
index 5992181d3c01..940783daba3e 100644
--- a/pkg/domain/infra/abi/images.go
+++ b/pkg/domain/infra/abi/images.go
@@ -46,7 +46,7 @@ func (ir *ImageEngine) Prune(ctx context.Context, opts entities.ImagePruneOption
 	}
 	if !opts.All {
-		pruneOptions.Filters = append(pruneOptions.Filters, "dangling=true")
+		pruneOptions.Filters = append(pruneOptions.Filters, "intermediate=true")
 	}
 	var pruneReports []*reports.PruneReport

Likely, some tests need to be changed as well. @mtrmac WDYT? |
… or to put this another way, a sequence of 1000 ENV and other config-edits might (in some vague conceptual sense) not need to create any |
For me, as a user, one instruction = one layer that is somehow cached. I used
Apologies, not sure if this is directed to me or to @vrothberg. If you need my help with this, I would need some instructions on how to do such thing. :) |
What about #10832 (comment)? Should I open another issue? |
Yes, that would be great, thank you. |
I can’t see how this should work: if there is a chain of taggedBase → cache1 → cache2 → cache3, with the cache* images untagged descendants, AFAICS with
So Docker’s prune only removes images with no children, and the proposed fix is to… only remove images with children? I’m confused. Can we talk directed acyclic graphs and precise shapes and specific examples, please? Or if this seems obvious, feel free to tell me to do my homework, I have only lazily asked for the layer/tag graphs and I didn’t do the work to reproduce this myself. Right now I certainly don’t know what is going on and it rather feels that the three of us don’t have a shared language, let alone a shared understanding of what the issues and tradeoffs are. |
The “layer” I am talking about is a specific implementation object (
The distinction between ENV and RUN, AFAICS, matters. |
Only untagged leaves should be pruned (recursively)? |
Sounds intuitive. How does that fail in the original issue? |
@mtrmac Ok, let me try this. The example above would be:
A more complicated example from my use case would be something like this. First multistage build:
|
We prune any untagged node, whether it's a leaf or not.
Yes, the intermediate filter is not the correct one since it's an untagged one with children. I begin to think that the dangling filter should actually be extended to perform a children check. |
And solution in pseudo-code:
|
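The elided pseudo-code above could be sketched in Go like this (hypothetical graph types, not Podman's internals): repeatedly delete untagged leaves until a fixed point is reached, so an untagged intermediate survives exactly as long as some descendant still needs it.

```go
package main

import "fmt"

// node is a hypothetical image-graph node for illustration.
type node struct {
	id     string
	tagged bool
	parent string // "" for roots
}

// hasChildren reports whether any node in the graph lists id as its parent.
func hasChildren(id string, graph map[string]*node) bool {
	for _, n := range graph {
		if n.parent == id {
			return true
		}
	}
	return false
}

// prune repeatedly removes untagged leaves until nothing changes. A chain
// of untagged images below a tagged one is removed bottom-up, while any
// untagged intermediate with a tagged descendant is kept.
func prune(graph map[string]*node) []string {
	var removed []string
	for {
		progress := false
		for id, n := range graph {
			if !n.tagged && !hasChildren(id, graph) {
				delete(graph, id)
				removed = append(removed, id)
				progress = true
			}
		}
		if !progress {
			return removed
		}
	}
}

func main() {
	graph := map[string]*node{
		"base":   {id: "base", tagged: true},
		"common": {id: "common", parent: "base"},
		"svc1":   {id: "svc1", tagged: true, parent: "common"},
		"old1":   {id: "old1", parent: "common"},
		"old2":   {id: "old2", parent: "old1"},
	}
	// old2, then old1, are pruned; common survives because svc1 needs it.
	fmt.Println("pruned:", prune(graph))
}
```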
And another argument based on podman's current behaviour. If we have:
Then, if we |
Are the objects here images or layers?
Ahh, so I think I’ve been obtuse. cache1 counts as an |
I think I'm probably talking layers here. But I'm not sure. :) What's the difference? |
That looks consistent with the linked prune code from a quick check, but I didn’t carefully review their definition of parent/child. (The same filters are used for image list, and the code looks a bit more convoluted there, but I think is also consistent with that.) |
Never mind, my mistake; the logic |
BTW unlike the filters, it’s not at all obvious to me that |
I will take a look today or early next week. Likely a regression from libimage. |
Summary from some further investigation: Podman's definition of dangling so far was "untagged". Docker's definition was "untagged without children". Since Podman aims to be compatible, this should be fixed, which is fairly easy. Looking at the initial issue/reproducer, I think there's some more thinking required:
Now, let's have a look at the images:
As @mtrmac mentioned above, none of these images has a physical layer, so Podman is unable to detect any parent-child relation. Docker is able to, since it stores these relations directly in the image store and sets them explicitly during build. I am fairly certain that Docker won't list two images as parent/child if they aren't built locally but pulled. My conclusion for now: fixing the dangling filter is easy. Supporting the reproducer may be a tough cookie. |
The reproducer is quite silly. :) I don't think any user cares about an |
Podman’s parent/child heuristic IIRC does try to pair images with the same top layer using history entries. Is that completely broken, or is that only not working in the case of images with no layers at all? (I honestly thought that it isn’t even possible to construct an image with no layers.) If it’s the latter, that looks like something that should be possible to fix without that much effort. |
The latter. The images do not have any (top) layer, so they cannot be matched.
Can you elaborate on that? I had a thought (but didn't check if it would be correct) to do the history checks on all "empty" images.
Note that images and layers are conceptually different. So an image object can be removed while its layers remain in the storage. |
Basically that. If we right now match a.topLayer == b.topLayer || a.topID == b.topLayer.Parent, and treat any |
BTW we should also match a single-layer image against a possible parent with no layers. |
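That extended matching could be sketched in Go like this (hypothetical, simplified types; the field names only loosely follow the discussion above): the usual top-layer comparison, plus the two extra cases involving layer-less (config-only) images.

```go
package main

import "fmt"

// layer is a hypothetical storage layer with a parent pointer.
type layer struct {
	id     string
	parent string // parent layer ID, "" if this is a base layer
}

// img is a hypothetical image; topLayer is nil for a config-only image.
type img struct {
	id       string
	topLayer *layer
}

// mayBeChild reports whether child could plausibly be a direct child of
// parent, extending the top-layer heuristic to cover layer-less images.
// (History entries would still be needed to confirm the relation.)
func mayBeChild(child, parent img) bool {
	switch {
	case child.topLayer == nil && parent.topLayer == nil:
		// Both config-only: layers cannot decide, so they are candidates.
		return true
	case child.topLayer != nil && parent.topLayer == nil:
		// Single-layer child of a layer-less parent: the child's top
		// layer must itself be a base layer.
		return child.topLayer.parent == ""
	case child.topLayer != nil && parent.topLayer != nil:
		// Same top layer (config-only change) or the child's top layer
		// sits directly on the parent's top layer.
		return child.topLayer.id == parent.topLayer.id ||
			child.topLayer.parent == parent.topLayer.id
	default:
		// A layer-less image cannot descend from a layered one.
		return false
	}
}

func main() {
	base := layer{id: "l1"}
	top := layer{id: "l2", parent: "l1"}
	a := img{id: "a"}                  // config-only
	b := img{id: "b", topLayer: &base} // one base layer
	c := img{id: "c", topLayer: &top}
	fmt.Println(mayBeChild(b, a), mayBeChild(c, b), mayBeChild(a, c))
}
```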
As discussed in github.com/containers/podman/issues/10832 the definition of a "dangling" image in Podman has historically been incorrect. While the Docker docs describe a dangling image as an image without a tag, and Podman implemented the filters as such, Docker actually implemented the filters for images without a tag and without children. Refine the dangling filters and hence `IsDangling()` to only return true if an image is untagged and has no children. Also correct the comments of `IsIntermediate()`. Signed-off-by: Valentin Rothberg <[email protected]>
By proxy, by vendoring containers/common. Previously, a "dangling" image was an untagged image, just as described in the Docker docs. The definition of dangling has now been refined to an untagged image without children, to be compatible with Docker. Further, update a redundant image-prune test. Fixes: containers#10998 Fixes: containers#10832 Signed-off-by: Valentin Rothberg <[email protected]>
No issues so far with v3.3.1 containing this. Thanks again for fixing this issue. |
Glad it's working. Thanks for letting us know! |
Yep - seems to be working for me too. |
Excellent, thanks for reporting @dustymabe |
Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)
/kind bug
Description
podman-image-prune incorrectly prunes intermediate layers for tagged images.

Steps to reproduce the issue:
Consider the following Dockerfile:
then,
Describe the results you received:
As you can see, intermediate layer f106d67818d was removed.

Describe the results you expected:
It shouldn't be removed. FWIW, docker doesn't remove it.
Output of podman version:

Output of podman info --debug:

Package info (e.g. output of rpm -q podman or apt list podman):

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide? (https://github.com/containers/podman/blob/master/troubleshooting.md)
No