podman rm --all not working with containers that have dependencies #18180
Comments
Ouch! Can you point out where this fails in the e2e tests? I think this error should be fatal (i.e., let the e2e tests fail). |
I pushed some changes for it to #18163. |
Excellent, nice work! |
I think we need to order the containers so that they are removed in the right order. It looks like we already handle this for pods (e19e0de). Following the comment, it should not be used for containers: podman/libpod/container_graph.go, lines 285 to 291 in 5e6c064
I am not sure why? |
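The ordered removal discussed above boils down to removing containers in reverse dependency order: a container using `--network container:xxx` must go before the container it points at. Here is a minimal, hypothetical sketch of that idea in Go — it is not libpod's actual code; the `deps` map and `removalOrder` helper are invented for illustration:

```go
package main

import (
	"fmt"
	"sort"
)

// removalOrder returns container names ordered so that every container is
// listed before any container it depends on, i.e. dependents are removed
// first. deps maps a container to the containers it depends on; for the
// repro in this issue, deps["test2"] = []string{"test1"} because test2 was
// started with --network container:test1.
func removalOrder(deps map[string][]string) []string {
	names := make([]string, 0, len(deps))
	for name := range deps {
		names = append(names, name)
	}
	sort.Strings(names) // deterministic traversal order

	visited := make(map[string]bool)
	var order []string // post-order: dependencies end up before dependents
	var visit func(string)
	visit = func(name string) {
		if visited[name] {
			return
		}
		visited[name] = true
		for _, dep := range deps[name] {
			visit(dep)
		}
		order = append(order, name)
	}
	for _, name := range names {
		visit(name)
	}

	// Reverse so dependents come first: a safe removal order.
	for i, j := 0, len(order)-1; i < j; i, j = i+1, j-1 {
		order[i], order[j] = order[j], order[i]
	}
	return order
}

func main() {
	deps := map[string][]string{
		"test1": nil,        // podman run --name test1 -d alpine top
		"test2": {"test1"},  // podman run --network container:test1 -d alpine top
	}
	fmt.Println(removalOrder(deps)) // test2 before test1
}
```

Removing in this order avoids the failure in the repro below, where `rm -fa` tries to delete test1 while test2 still holds its network namespace.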
OMG if this gets fixed, can you add like actual error checking to cleanup? This has been a flake nightmare for years. #11597 |
I already have it in my ginkgo update PR, so I could find the problematic ones. |
Only check exit codes last, otherwise in case of errors it will return early and miss other commands. Also explicitly stop before rm; rm is not working in all cases (containers#18180). Signed-off-by: Paul Holzinger <[email protected]>
We could definitely restructure remove-all to do an ordered removal, but in the meantime, you could just add |
Alternatively, we could make the combination of |
I only see a |
I checked, and it's already set by default on |
Self-assigning, the code here is a bit of a mess and probably ought to be refactored into normal RemoveContainer. |
@mheon any chance you can finish this? I would love to get my ginkgo v2 PR in. |
@Luap99 as a stopgap, WDYT of just running two sequential |
Add a workaround for containers#18180 so the ginkgo work can be merged without being blocked by the issue. Please revert this commit when the issue is fixed. Signed-off-by: Paul Holzinger <[email protected]>
Ack |
Several tweaks to see if we can track down containers#17216, the unlinkat-ebusy flake:
- teardown(): if a cleanup command fails, display it and its output to the debug channel. This should never happen, but it can and does (see containers#18180, dependent containers). We need to know about it.
- selinux tests: use unique pod names. This should help when scanning journal logs.
- many tests: add "-f -t0" to "pod rm"

And, several unrelated changes caught by accident:
- images-commit-with-comment test: was leaving a stray image behind. Clean it up, and make a few more readability tweaks.
- podman-remote-group-add test: add an explicit skip() when not remote. (Otherwise, the test passes cleanly on podman local, which is misleading.)
- lots of container cleanup and/or adding "--rm" to run commands, to avoid leaving stray containers.

Signed-off-by: Ed Santiago <[email protected]>
This reverts commit c4b9f4b. This was a temporary workaround until a fix for containers#18180 landed. Signed-off-by: Matthew Heon <[email protected]>
Issue Description
Try to use podman rm -fa when you have containers with dependencies, e.g. --network container:xxx or any other namespace flag which supports container:.

Steps to reproduce the issue
podman run --name test1 -d alpine top
podman run --network container:test1 -d alpine top
podman rm -fa
Describe the results you received
Describe the results you expected
All containers removed without errors.
podman info output
tested with latest main
Podman in a container
No
Privileged Or Rootless
None
Upstream Latest Release
Yes
Additional environment details
No response
Additional information
This is causing issues in the e2e integration tests because we only do rm -fa as cleanup, thus some processes are leaked.