Rework pruning to report reclaimed space #8831
Conversation
Force-pushed from edf065b to 0c1971c
Thanks for contributing.
Force-pushed from 0c1971c to 7b647bc
NOTE: Still working out tests on this PR. Running some locally on my machine had some failures, but I'm not sure if those were because of how I was running them 🤷
Force-pushed from dc3d5cd to 4ee8fbe
I think I have finished this PR and am mostly just waiting to merge #8809 (which is why there are some extra changes currently in this PR; I should have cherry-picked the commit from #8809, but I didn't notice until it was too late). Also, if anyone has any ideas on smart ways to test the …
/approve
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: bblenard, rhatdan

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment.
This change adds code to report the reclaimed space after a prune. Reclaimed space from volumes, images, and containers is recorded during the prune call in a PruneReport struct. These structs are collected into a slice during a system prune and processed afterwards to calculate the total reclaimed space.

Closes containers#8658

Signed-off-by: Baron Lenardson <[email protected]>
Force-pushed from 4ee8fbe to b90f7f9
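As a minimal sketch of the flow the commit message above describes (hypothetical names and sizes, with shell standing in for the actual Go structs): each prune step produces a report carrying the bytes it reclaimed, and system prune sums them at the end.

#!/bin/bash
# Hypothetical stand-in for the []PruneReport slice: one "kind:bytes"
# entry per prune step (containers, images, volumes).
reports=("containers:120000" "images:560000000" "volumes:0")

total=0
for r in "${reports[@]}"; do
  total=$((total + ${r#*:}))   # strip the "kind:" prefix, add the size
done
echo "Total reclaimed space: ${total} bytes"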
Assuming all of the tests pass again after the rebase, this PR should be ready for some review. I am concerned about the following two aspects of this PR: …

Also, here is some example output from some manual testing: …
LGTM
LGTM, though I'm concerned this is going to absolutely murder our performance on pruning - size operations are extremely costly. /lgtm
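For a rough sense of why size operations are costly: reclaimed-space accounting has to stat every file in each object's storage, much like a recursive du. A quick, hypothetical way to feel that cost on a rootful install (the path below is the usual default storage location; adjust for rootless or other graph drivers):

# Time a recursive walk of the overlay storage directory.
time du -sh /var/lib/containers/storage/overlay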
#8891 created to track restoring payloads to meet compatibility requirements.
@bblenard Just got a message of …

PS: If that is useful, I can also create an issue for this!
That is not expected behavior. I'll review this change again to see if I can figure out why that happened. If you are able to reproduce it, let me know :)
Could something like reflinks help explain it? |
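For anyone unfamiliar, here is a quick way to see how reflinks skew size accounting. The file names are arbitrary, and this assumes a reflink-capable filesystem such as Btrfs or XFS:

dd if=/dev/urandom of=file1 bs=1M count=100   # 100M of real data
cp --reflink=always file1 file2               # clones extents; copies nothing
du -sh file1 file2   # each file reports ~100M, so the naive sum is ~200M
df -h .              # while only ~100M of disk was actually consumed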
At the moment my hunch is something to do with this pattern, where maybe things are getting double counted... That being said, I don't know exactly where to start to track this down. If @2xB has more information / is able to reproduce, that might help.
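To make the double-counting hypothesis concrete, a toy sketch with made-up layer names and sizes: two images share a base layer, and summing each image's full size counts the shared layer once per image, while a deduplicated walk counts it once overall.

#!/bin/bash
declare -A layer_size=([base]=200 [l1]=500 [l2]=500)
img1="base l1"   # image 1 = shared base layer + its own layer
img2="base l2"   # image 2 = shared base layer + its own layer

# Naive sum: iterate every image's layers independently.
naive=0
for l in $img1 $img2; do
  naive=$((naive + ${layer_size[$l]}))
done

# Deduplicated sum: count each layer only the first time it is seen.
declare -A seen
dedup=0
for l in $img1 $img2; do
  if [[ -z ${seen[$l]} ]]; then
    seen[$l]=1
    dedup=$((dedup + ${layer_size[$l]}))
  fi
done

echo "naive=$naive dedup=$dedup"   # prints: naive=1400 dedup=1200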
@bblenard I could produce a reproducible inconsistency, although I have no idea whether it comes from the same reason. It is packed into a single bash script, using a podman-in-podman setup to make this reproducible and relatively clean without interfering with any pre-existing containers when executing podman system prune. Just for the record, I'm using rootless Podman 3.1.0 on Manjaro Linux and inside the Fedora container is a Podman 3.1.2 during my tests. But theoretically this script should work on any system without interference with existing podman containers.

The script sets up a podman image issue8831_2 with some podman image files inside that are deleted using podman system prune -f and podman system prune -f --all, first in that order (marks 1 and 2) and then - rolling back to issue8831_2 - in the other order (marks 3 and 4).

The script:

#!/bin/bash
# Build podman container to work in
podman build -t issue8831_1 - << EOF
FROM fedora:34
RUN dnf update -y && dnf install -y podman nano
RUN sed -i -e 's,driver = "overlay",driver = "vfs",g' /etc/containers/storage.conf && \
rm -rf /var/lib/containers
RUN podman pull fedora:34
EOF
# Create podman files in that container, commit as issue8831_2
podman run -it --security-opt seccomp=unconfined --privileged --name issue8831_1c issue8831_1 <<EOF
cat << EOD > Dockerfile
FROM fedora:34 as target1
RUN fallocate -l 0.5GB largefile1
RUN touch smallfile1
FROM fedora:34 as target2
RUN fallocate -l 0.5GB largefile2
RUN touch smallfile2
FROM target1 as result1
COPY --from=target2 smallfile2 .
FROM fedora:34 as result2
COPY --from=target2 smallfile2 .
COPY --from=target1 smallfile1 .
EOD
podman build --target result1 -t r1 .
podman build --target result2 -t r2 .
exit
EOF
podman commit issue8831_1c issue8831_2
# Delete podman files, way 1
podman run -it --security-opt seccomp=unconfined --privileged --rm issue8831_2 <<EOF
echo "Mark 1"
podman system prune -f
echo "Mark 2"
podman system prune -f --all
exit
EOF
# Delete podman files, way 2
podman run -it --security-opt seccomp=unconfined --privileged --rm issue8831_2 <<EOF
echo "Mark 3"
podman system prune -f --all
echo "Mark 4"
podman system prune -f
exit
EOF
# Cleanup
podman rm issue8831_1c
podman rmi issue8831_2
podman rmi issue8831_1

The result - cleaned a bit:

Mark 1
***@***.*** /]# podman system prune -f
552c1a027388a2dbc95c6eaccee2e9d95ca0b3c0f17bb61af8950fec38b0c2cf
Deleted Images
0e1e85c3af4edf74c5f4731991789561a2bfabec9e0ee2e797ae33742f9f095f
Total reclaimed space: 686.5MB

Mark 2
***@***.*** /]# podman system prune -f --all
Deleted Images
registry.fedoraproject.org/fedora:34
e32001e126b4de19d3436c0358124c9604a8f13f556e944137706f96bffeb15c
07877da981cceb7bd0064af6563c800c6e91c166265671870026ccd16c663879
localhost/r1:latest
abccf02e337b261341f0fb84d7ebd7273c2b14946e7e917c4ed365fa0516e64d
localhost/r2:latest
Total reclaimed space: 2.619GB

# ---

Mark 3
***@***.*** /]# podman system prune -f --all
Deleted Images
registry.fedoraproject.org/fedora:34
e32001e126b4de19d3436c0358124c9604a8f13f556e944137706f96bffeb15c
07877da981cceb7bd0064af6563c800c6e91c166265671870026ccd16c663879
552c1a027388a2dbc95c6eaccee2e9d95ca0b3c0f17bb61af8950fec38b0c2cf
0e1e85c3af4edf74c5f4731991789561a2bfabec9e0ee2e797ae33742f9f095f
localhost/r1:latest
abccf02e337b261341f0fb84d7ebd7273c2b14946e7e917c4ed365fa0516e64d
localhost/r2:latest
Total reclaimed space: 3.992GB

Mark 4
***@***.*** /]# podman system prune -f
Deleted Images
Total reclaimed space: 0B

It is clear: 686.5MB + 2.619GB != 3.992GB (+ 0B)
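Spelling out the arithmetic: 686.5 MB + 2.619 GB ≈ 3.306 GB, so the two orderings disagree by roughly 686 MB, about the same amount reported at Mark 1, which would at least be consistent with the double-counting hunch above.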
Can you open a fresh issue with this reproducer, so this doesn't get lost?
@2xB Thanks for the POC. I did notice something interesting while playing with it. Although we expect …