Repeat system pruning until there is nothing removed #8599
Conversation
Fixes: #7990
@srcshelton PTAL
Okay, let me patch this into podman-2.2.0 and I'll check it out. I'll do a system rebuild without it and check that multiple runs are required, and then again with it to confirm it's fixed. This may take a good number of hours… (Although it does look as if it can't fail to resolve the problem!)
cmd/podman/system/prune.go
Outdated
if pruneOptions.Volume {
fmt.Println("Deleted Volumes")
err = utils.PrintVolumePruneResults(response.VolumePruneReport)
for {
Recommendation: We should put a max iteration count on this - if we haven't removed everything in, say, 25 loops, we probably have a problem.
Just a quick note to agree, but suggest a larger number: I've seen 40-odd manual iterations before… so set the bar at 50?
Yeah, I'm not at all a fan of a naked for loop.
If each iteration takes a beat, we might want to add a message like "Working" that adds a dot after each iteration or after some number of iterations.
Each successful iteration will print the purged lines, so there's no need for a heartbeat.
Force-pushed from a14a8f6 to ff0d964
... a couple of questions:
Also, I just did another (It could also be a symptom of another issue, such as the host system locking temporarily due to high I/O? I didn't see any other signs to suggest this was the case, though - other interactive tasks running at the same time weren't affected, and it was a long pause - at least 30s or so once I'd noticed it, but it could have been much longer, since I was doing other things and only checking the output occasionally.)
Code LGTM, a couple of small doc nits. Any chance to get a test geared towards this?
@srcshelton As it does with podman volume prune, container prune, and pod prune, so there should not be a difference. I don't think the pause was caused by the output printing, but by some kind of locking in the storage layer. If some other operation was running on your system, there is a good chance that the removal of content was blocked on a lock.
Interesting - would a wait on a lock cause the output to pause half-way through printing an image hash, though? (Even if it is a lock-wait, unbuffered output would at least allow full lines to be printed whilst waiting?) Whilst there were other containers running when the pause in output occurred, these had been running for several days at this point, and there was no other podman activity: nothing was starting or stopping or creating/updating images. My question was just trying to understand how image pruning was fixed when system pruning uses the same internal function, yet system pruning has needed fixing separately. And is there a recursion limit for image pruning and, if so, do the two methods to prune images use the same limit, or do they differ (potentially causing future confusion!)?
All of the printing is being done with fmt.Println(). We could change this, but it does seem like a corner case. I did not do anything for image pruning. I have just added a loop on system prune to try pruning again, since one pass of pruning can free up content for another pass. @containers/podman-maintainers PTAL
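The change described above can be sketched as a loop that re-runs the prune pass until a pass removes nothing, capped at the 50 iterations suggested in review. The types and function names here are illustrative stand-ins, not podman's actual API; the per-pass results are simulated so the loop's behavior can be demonstrated:

```go
package main

import "fmt"

// pruneReport is an illustrative stand-in for the real prune report;
// it records how many objects a single pass removed.
type pruneReport struct {
	removed int
}

// prunePass simulates one pruning pass, driven by a queue of per-pass
// removal counts; once the queue is empty, a pass removes nothing.
func prunePass(passes *[]int) pruneReport {
	if len(*passes) == 0 {
		return pruneReport{removed: 0}
	}
	n := (*passes)[0]
	*passes = (*passes)[1:]
	return pruneReport{removed: n}
}

// maxPrunes is the safety cap discussed in review.
const maxPrunes = 50

// pruneUntilDone repeats pruning until a pass removes nothing, or the
// iteration cap is hit. Returns total objects removed and passes run.
func pruneUntilDone(passes []int) (total, iterations int) {
	for i := 0; i < maxPrunes; i++ {
		report := prunePass(&passes)
		iterations++
		if report.removed == 0 {
			return total, iterations
		}
		total += report.removed
	}
	return total, iterations
}

func main() {
	// Three passes remove content; the fourth removes nothing and ends the loop.
	total, iters := pruneUntilDone([]int{5, 3, 1})
	fmt.Println(total, iters) // prints: 9 4
}
```

This mirrors the motivation in the comment above: removing containers in one pass can make images or volumes removable in the next, so a single pass is not enough.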
@@ -16,7 +16,7 @@ By default, volumes are not removed to prevent important data from being deleted
 ## OPTIONS
 #### **--all**, **-a**

-Remove all unused images not just dangling ones.
+Recursively remove all unused images, not just dangling ones. (Maximum 50 iterations.)
"all unused images" ... I think we remove more than just images
cmd/podman/system/prune.go
Outdated
fmt.Println("Deleted Volumes")
err = utils.PrintVolumePruneResults(response.VolumePruneReport)

const MAX = 50
It seems we're now working around the fact that registry.ContainerEngine().SystemPrune(...) isn't doing its job correctly. Having the code scattered between cmd/podman and pkg/domain/... seems like a recipe for trouble. Could we move all the logic to registry.ContainerEngine().SystemPrune(...)? I think it should remove all data, even if there are more than 50 iterations. Otherwise it's not friendly to use: if I do rm -rf *, I don't want to ls afterwards to check whether things were really removed.
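The suggestion above, moving the loop into the engine and running it to a fixpoint with no cap, can be sketched as follows. The report type, its fields, and the systemPrune function are hypothetical illustrations, not podman's actual pkg/domain API:

```go
package main

import "fmt"

// report aggregates what one engine-level prune pass removed; the type
// and field names are illustrative, not podman's actual API.
type report struct {
	containers, images, volumes int
}

// empty reports whether a pass removed nothing at all.
func (r report) empty() bool {
	return r.containers == 0 && r.images == 0 && r.volumes == 0
}

// systemPrune loops until a pass removes nothing, so a single call fully
// prunes the system, with no iteration cap, as the reviewer suggests.
func systemPrune(pass func() report) report {
	var total report
	for {
		r := pass()
		if r.empty() {
			return total
		}
		total.containers += r.containers
		total.images += r.images
		total.volumes += r.volumes
	}
}

func main() {
	// Simulated passes: removing containers unblocks images, which in
	// turn unblocks volumes; the fourth (empty) pass ends the loop.
	queue := []report{{containers: 2}, {images: 3}, {volumes: 1}}
	total := systemPrune(func() report {
		if len(queue) == 0 {
			return report{}
		}
		r := queue[0]
		queue = queue[1:]
		return r
	})
	fmt.Println(total.containers, total.images, total.volumes) // prints: 2 3 1
}
```

The trade-off against the capped version is termination: an uncapped fixpoint loop is simpler for callers, but relies on each pass making progress (or returning empty) to avoid spinning forever.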
Code LGTM
LGTM
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: rhatdan, saschagrunert. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing…
Ah alright:
Signed-off-by: Daniel J Walsh <[email protected]>