podman rm --storage fails #11207
Can you do a …
Would you look at that. What should I do about this from here? I'm not quite sure. Update: I see this: http://docs.podman.io/en/latest/markdown/podman-ps.1.html. So I figure …
@nalind PTAL - I think the error messages here are coming out of storage.
It looks like …
Ack. I'm assuming that error should be non-fatal, and proceeding will successfully remove the image?
I would expect …
In the interim, is there a workaround? I'm happy to nuke all my images / containers if need be.
For anyone else who is having this issue, I ran this:
This completely nukes every container and image, so use this with extreme caution, but it fixed my issue. If you're stumbling across this further into the future, you may not want this, since the underlying issue might be fixed by then.
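The commands themselves did not survive the page scrape, so the following is only a hedged sketch of what a full wipe of local containers and images typically looks like; the `DRY_RUN` guard and the `run` helper are my additions, not the commenter's. `podman rm --all --force` and `podman rmi --all --force` are real podman flags, and the whole thing is destructive when actually executed.

```shell
#!/usr/bin/env sh
# Hypothetical reconstruction of the "nuke everything" workaround.
# The exact commands the commenter ran were lost; these are the
# standard podman flags for removing all containers and all images.
# DESTRUCTIVE when actually run: keep DRY_RUN=1 unless you mean it.
DRY_RUN="${DRY_RUN:-1}"

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

run podman rm --all --force    # remove every container, running ones included
run podman rmi --all --force   # remove every image
```

`podman system reset` is a heavier single-command alternative that also wipes volumes and networks; whether it was available here depends on the Podman version in use at the time.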
A friendly reminder that this issue had no activity for 30 days. |
@nalind @mheon Are you suggesting that we do …
@rhatdan checking for …
@Luap99 noted that this should work - we've been using it in all newly-landed PRs for Podman. |
Oh, you're right, it implements …
Fixes: containers#11207 [NO TESTS NEEDED] Since I don't know how to get into this situation. Signed-off-by: Daniel J Walsh <[email protected]>
There are cases where the container storage unmount has already been (partially) done. This would cause `StopContainer()` in `server/container_stop.go:76` to fail and therefore make containers get stuck in recreation, making their pods stuck in `NotReady`. We now double-check the two c/storage errors `ErrContainerUnknown` and `ErrLayerUnknown`. Somehow related to: containers/podman#11207 (comment) Signed-off-by: Sascha Grunert <[email protected]>
There are cases where the container storage unmount has already been (partially) done. This would cause `StopContainer()` in `server/container_stop.go:76` to fail and therefore make containers get stuck in recreation, making their pods stuck in `NotReady`. We now double-check the two c/storage errors `ErrContainerUnknown` and `ErrLayerUnknown`. Somehow related to: containers/podman#11207 (comment) Cherry-pick of: cri-o#6517 Signed-off-by: Sascha Grunert <[email protected]> Cherry-picked: f291de9
There are cases where the container storage unmount has already been (partially) done. This would cause `StopContainer()` in `server/container_stop.go:76` to fail and therefore make containers get stuck in recreation, making their pods stuck in `NotReady`. We now double-check the two c/storage errors `ErrContainerUnknown` and `ErrLayerUnknown`. Somehow related to: containers/podman#11207 (comment) Cherry-pick of: cri-o#6517 Signed-off-by: Sascha Grunert <[email protected]> Cherry-picked: 260dfc0
Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)
/kind bug
Description
I have a container that I cannot start because it says the name is already in use, but I can't delete the resources related to the name.
Steps to reproduce the issue:
1. Try to start the container and see that it fails:
2. The container isn't actually there, but try to remove it:
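The command transcripts were lost when this issue page was scraped, so the following is only an illustrative sketch of the two reproduction steps; the container name `mycontainer` and the image are placeholders I chose, and the script prints the commands rather than running them.

```shell
#!/usr/bin/env sh
# Illustrative sketch only: prints the two commands from the
# reproduction steps instead of executing them. "mycontainer" is a
# hypothetical name; the reporter's actual names were not preserved.
name=mycontainer

# Step 1: starting a container under this name reportedly fails,
# complaining that the name is already in use.
echo "podman run --name $name fedora"

# Step 2: the container is absent from 'podman ps --all', yet removing
# its leftover storage also fails (with an error from c/storage).
echo "podman rm --storage $name"
```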
Describe the results you received:
See above.
Describe the results you expected:
I'd expect the cleanup to work.
Additional information you deem important (e.g. issue happens only occasionally):
Output of `podman version`:
Output of `podman info --debug`:
Package info (e.g. output of `rpm -q podman` or `apt list podman`):
Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide? (https://github.com/containers/podman/blob/master/troubleshooting.md)
Yes, this issue isn't mentioned.
Additional environment details (AWS, VirtualBox, physical, etc.):
Physical CentOS 8 machine.
I tried restarting the machine to no avail.