rmi: image in use by nonexistent container #12353
Comments
3.4.0 is a few minor versions out of date; can you try 3.4.2? That container is probably a leftover Buildah container from an aborted build.
@mheon Yes, the container is even removed successfully that way. I btw. have 6 such ghost containers left to test (that I know of, thanks to dangling but not pruned images).
@mheon Tried it on Podman 3.4.2 (its Manjaro package was updated). Same issue as before; it only works after removing the ghost container first.
So it works after removing the container, but it still prints the same error message it would print if the container were there, even after successfully deleting the image.
If there's any test I could perform to provide more information to help fix this, I'd be very interested in doing so!
Although they are ignored by `podman ps`, why are these containers hidden by default?
Because they may be containers you are actually using. If you're on a system with CRI-O or Buildah installed, those containers could have been created by one of those tools; there's no easy way to determine whether these containers are artifacts of a failed build or intentionally created containers from another tool. We added an `--external` flag to `podman ps` to show them.
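As a concrete sketch of that workflow (the flag is the `--external` option discussed in this thread; the IDs are placeholders you would substitute with real ones):

```shell
# List every container in containers/storage, including external
# (Buildah/CRI-O) containers that plain `podman ps` hides:
podman ps --all --external

# A leftover build container can then be removed by ID, after which
# the dangling image it pinned becomes removable:
podman rm --force <container-id>
podman rmi <image-id>
```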
That is awesome! I bet CI systems such as the hopefully progressing GitLab Runner Podman executor can really profit from this way of avoiding a buildup of unused files from canceled builds. But should I be worried that the ghost containers don't show up when listing containers with `--external`?
That is very unusual; that should show all containers, even those not created by Podman. Sounds like a bug. @rhatdan Any thoughts?
Yes, they should show up.
Sure! So far, I've only used […] One could get the impression that I used Fedora 31 a lot.
I btw. have no idea where so many containers come from. I do cancel builds from time to time, but that seems like an extremely high number of containers. Other than that, I start my images with the […].
Do a […].
I can, but if that works I can then of course no longer help debugging it (edit: unless I wait for these ghost containers to build up again). Maybe I should add that I also saw this issue on a CentOS 8 Stream machine that I set up a week ago, so this is nothing special to the configuration of one machine.
I should add that doing a […].
Any chance you have a simple reproducer? `podman images list --external` is just supposed to list images in containers/storage, which includes Buildah images. I would like to know how the image is invisible to containers/storage.
That was actually quite simple, and @mheon is right that one gets this from an aborted build. I do that frequently because I compile code in containers and abort builds as soon as I see a compiler error. Similarly, a CI system like the GitLab Runner also cancels builds regularly depending on its settings, so that probably explains how ghost containers form in both setups. Here's an example where you can copy-paste each block from empty line to empty line:
(The way of starting Podman in Podman that supports running […].) Example output:
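The original copy-paste blocks did not survive here; the following is a hypothetical shell sketch of the aborted-build scenario described above (the Containerfile contents and tag name are invented for illustration):

```shell
# A Containerfile whose build takes long enough to interrupt:
cat > Containerfile <<'EOF'
FROM docker.io/library/alpine
RUN sleep 60
EOF

# Start the build, then abort it (Ctrl+C) during the RUN step:
podman build --tag ghost-demo .

# The aborted build leaves a dangling intermediate image behind ...
podman images --filter dangling=true

# ... that cannot be removed, although no container is listed:
podman container list --all        # empty
podman rmi <dangling-image-id>     # "image is in use by a container"

# The leftover Buildah container only shows up with --external:
podman container list --all --external
```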
Btw. I still think this would be good to fix: for others who try setting up a CI pipeline with Podman, the discoverability of this workaround is probably significantly lower than if the […].
`podman container ls --all --external` should show them. Currently the code only shows external containers if you add `--all`.
BTW, this was documented in the podman ps man page.
We currently do not show --external containers when the user specifies it, unless they also specify the --all flag. This has led to confusion. I see no reason not to list them without the --all flag if the user specifies the option.

Fixes: containers#12353

Signed-off-by: Daniel J Walsh <[email protected]>
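In terms of invocations, the change described in the commit above amounts to the following (a sketch; before the fix the first form was required to see external containers):

```shell
# Before the fix: external containers appeared only with both flags.
podman container ls --all --external

# After the fix: --external is honored on its own.
podman container ls --external
```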
Is there a field in storage we could mark, or should we just add names based on the image, like podman-alpine1 or something like that?
Sure, these containers are created by Buildah code. The issue is that users do not know they are using Buildah when they do podman build, and then we sometimes leak the Buildah containers, which ends up creating images which cannot be removed.
That sounds like a good first step. I would also argue that when the containers are created with […]. Which seems currently not to be the case, judging from how long it took us to understand this very issue, although most people here are Podman developers. Having […].
First: if there's anything I can provide info on or test out, I'd be happy to do so in the next few days. I plan on resetting Podman storage next week, so now would be the time to test anything while the issue is still visible.
Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)
/kind bug
Description
On my machine, some untagged images exist that are not removed with `podman image prune`. I have no containers (`podman container list -a` gives an empty list), but when trying to remove one of the dangling images using `rmi`, an error occurs stating `image is in use by a container`. Trying to do anything with the given container, I get `no such container`. Btw. `podman images` takes approximately a minute, which seems quite long.

Steps to reproduce the issue:
Describe the results you received:

Image is said to be in use by a container that seems not to exist.

Describe the results you expected:

Image is removed using `rmi`, or a container appears in the list.

Additional information you deem important (e.g. issue happens only occasionally):
Output of `podman version`:

Output of `podman info --debug`:

Package info (e.g. output of `rpm -q podman` or `apt list podman`):

Manjaro system package
Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide? (https://github.com/containers/podman/blob/master/troubleshooting.md)
At least with the newest version available for Manjaro
Additional environment details (AWS, VirtualBox, physical, etc.):