Volume prune command removes used volume #7862
Comments
I am not sure what caused your volume to get removed. I took a quick look at the code and realized we were attempting to pass down the force flag but never actually using it. I opened a PR to clean up our handling of the force flag here. But I don't believe this will fix your problem.
My test shows that this is working properly?

I used the docker-compose file attached to this message and ran the following commands. Running the same commands as you, I get your result; everything looks fine.
It looks like the output didn't paste cleanly - can you pastebin it instead? EDIT: I also don't see the output of
Also, why are you adding an argument to

@rhatdan I think we do have a bug here in that
@mschz I just tried your exact
@mheon I just checked, and on master podman volume prune does throw an error when args are given.
@mheon, sorry for the bad readability of my previous comment. I think you misinterpreted the output. I ran "podman volume prune -f" and the output was the name of the deleted volume (test_upload).
Just checked master and podman volume prune throws an error now if an arg is passed to it
Looks like this issue has been resolved. @mschz, can you please confirm this is working for you now so we can close?
Closing; speak up if you disagree.
Hi, sorry, but I was so busy in my current project that I haven't been able to follow this discussion for a couple of weeks. Today I upgraded to podman v2.1.1 and tried again. I had to change the docker-compose-test.yaml file a bit and attached the new one. Also, please have a look at https://pastebin.com/yEwP6Szn to see what happens when creating the pod and pruning my volumes. According to the documentation, only "unused volumes" should be removed. As you can see from the "podman container inspect" command, the volume "test_upload" is not unused and should therefore not be removed when pruning volumes.
Are you sure this is not podman-compose causing the problem? |
I am fairly certain this is podman-compose - last I checked (admittedly a while back) they were mounting volumes without using their names, instead hardcoding the data directory, which is incorrect and breaks Podman's in-use detection for named volumes.
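The failure mode described here can be reproduced directly. This is a minimal sketch, not the exact podman-compose invocation: the volume name `mydata`, container name `repro`, mount path `/data`, and the `alpine` image are all illustrative.

```shell
# Create a named volume and look up its on-disk data directory.
podman volume create mydata
DATA_DIR=$(podman volume inspect mydata --format '{{.Mountpoint}}')

# Bind-mount the data directory directly (what podman-compose was doing).
# Podman records this as a plain bind mount, not as a use of "mydata".
podman run -d --name repro \
  --mount type=bind,source="$DATA_DIR",destination=/data \
  docker.io/library/alpine sleep 1000

# Since no container is registered as a user of the volume, prune
# removes "mydata" out from under the running container.
podman volume prune -f
```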
All podman commands executed by podman-compose are in the output. You're right, there's at least one issue with mounting volumes that I reported to the team and where I provided a fix. Leaving out the irrelevant "--name", "--pod", "--label", and "--add-host" options, the "podman run" command mounting the volume that is removed looks like this:

```
podman run -d --mount type=bind,source=/home/jdoe/.local/share/containers/storage/volumes/test_upload/_data,destination=/var/upload_area,bind-propagation=Z centos/centos:8 /bin/bash
```

If I understood you correctly, you're saying that the bind mount doesn't link the volume created and storing data at /home/jdoe/.local/share/containers/storage/volumes/test_upload/_data to the container, so from Podman's point of view the volume is unused. Hence it will be removed when pruning volumes. Right?

Yes. Named volumes are intended to be mounted by their name, not to have their data directories manually mounted by the user or a tool. Podman wants to manage the volume itself so we can count the number of users of the volume and prevent it from being removed while it is in use.
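For contrast, the intended pattern is to mount the volume by its name so Podman can track its users. A sketch with illustrative names (`mydata`, `consumer`, the `alpine` image):

```shell
# Mounting by name registers the container as a user of the volume.
podman volume create mydata
podman run -d --name consumer -v mydata:/data \
  docker.io/library/alpine sleep 1000

# The volume now counts as "in use": prune skips it, and an explicit
# removal is refused while the container exists.
podman volume prune -f
podman volume rm mydata
```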
@mheon nice find and explanation.
Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)
/kind bug
Description
Hi all, I'm currently teaching myself podman. In the last week I created a pod of two containers that is started using "podman-compose". Each of the containers uses a volume defined in my "docker-compose.yaml". Everything works properly so far. Since I had a lot of unused volumes, I decided to do some cleanup and get rid of all unused volumes by executing "podman volume prune -f". It turned out that this also removed one of the volumes that were still in use.
Steps to reproduce the issue:
Create a pod with two services using "podman-compose up" where the "docker-compose.yaml" looks like this:
```yaml
version: "3.8"
services:
  database:
    image: centos/mariadb-103-centos8
    env_file: ./devel/database/ENV
    expose:
    restart: no
    volumes:
  webappl:
    image: localhost/my-webappl
    ports:
    depends_on:
    volumes:
```
Ensure all containers are up and running, then list the volumes and inspect the container whose volume will be removed when executing "podman volume prune -f":
```
$ podman volume ls
DRIVER   VOLUME NAME
local    test_database_51580a171176760f70ad908155d28f6b
local    test_upload-dir

$ podman container inspect test_testlink_1 | grep -A13 Mounts
        "Mounts": [
            {
                "Type": "bind",
                "Name": "",
                "Source": "/home/mschmitz/.local/share/containers/storage/volumes/test_tl-upload/_data",
                "Destination": "/var/my-webappl/upload_area",
                "Driver": "",
                "Mode": "",
                "Options": [
                    "rbind"
                ],
                "RW": true,
                "Propagation": "rprivate"
            }
```
Execute "podman volume prune -f"
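Before the prune step, the set of volumes that prune would consider removable can be previewed. A sketch, assuming the `dangling` filter available on `podman volume ls` and the `volume` filter on `podman ps`:

```shell
# Volumes with no container references - the candidates for pruning.
podman volume ls --filter dangling=true

# Cross-check which containers reference a specific volume.
podman ps -a --filter volume=test_upload-dir
```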
Describe the results you received:
When executing "podman volume prune -f" I received the following output:

```
$ podman volume prune -f
test_upload-dir
```
As a consequence I cannot restart the container of the "webappl" service.
Describe the results you expected:
The volume "test_upload-dir" should not be removed, since it is still in use.
Additional information you deem important (e.g. issue happens only occasionally):
Output of `podman version`:

Output of `podman info --debug`:

Package info (e.g. output of `rpm -q podman` or `apt list podman`):

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide?
No
Additional environment details (AWS, VirtualBox, physical, etc.):
FC32 on a physical laptop