Volume prune command removes used volume #7862

Closed
mschz opened this issue Oct 1, 2020 · 16 comments
Labels
- Good First Issue: This issue would be a good issue for a first time contributor to undertake.
- kind/bug: Categorizes issue or PR as related to a bug.
- locked - please file new issue/PR: Assist humans wanting to comment on an old issue or PR with locked comments.

Comments

@mschz

mschz commented Oct 1, 2020

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description

Hi all, I'm currently teaching myself podman. Last week I created a pod of two containers that is started using "podman-compose". Each of the containers uses a volume defined in my "docker-compose.yaml". Everything has worked properly so far. Since I had a lot of unused volumes, I decided to do some cleanup and get rid of them by executing "podman volume prune -f". It turned out that this also removed one of the volumes that was still in use.

Steps to reproduce the issue:

  1. Create a pod with two services using "podman-compose up" where the "docker-compose.yaml" looks like this:

     version: "3.8"
     services:
       database:
         image: centos/mariadb-103-centos8
         env_file: ./devel/database/ENV
         expose:
           - "3306"
         restart: no
         volumes:
           - /var/lib/mysql/data
       webappl:
         image: localhost/my-webappl
         ports:
           - "8082:8080"
         depends_on:
           - database
         volumes:
           - upload-dir:/var/my-webappl/upload_area
  2. Ensure all containers are up and running, list the volumes, and inspect the container whose volume will be removed when executing "podman volume prune -f":

     $ podman volume ls; podman container inspect test_testlink_1 | grep -A13 Mounts
     DRIVER   VOLUME NAME
     local    test_database_51580a171176760f70ad908155d28f6b
     local    test_upload-dir
         "Mounts": [
             {
                 "Type": "bind",
                 "Name": "",
                 "Source": "/home/mschmitz/.local/share/containers/storage/volumes/test_tl-upload/_data",
                 "Destination": "/var/my-webappl/upload_area",
                 "Driver": "",
                 "Mode": "",
                 "Options": [
                     "rbind"
                 ],
                 "RW": true,
                 "Propagation": "rprivate"
             }

  3. Execute "podman volume prune -f"

Describe the results you received:

When executing "podman volume prune -f" I received the following output:
$ podman volume prune -f
test_upload-dir

As a consequence I cannot restart the container of the "webappl" service.

Describe the results you expected:

The volume "test_upload-dir" should be removed.

Additional information you deem important (e.g. issue happens only occasionally):

Output of podman version:

podman version 2.0.6

Output of podman info --debug:

host:
  arch: amd64
  buildahVersion: 1.15.1
  cgroupVersion: v1
  conmon:
    package: conmon-2.0.21-2.fc32.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.0.21, commit: 81d18b6c3ffc266abdef7ca94c1450e669a6a388'
  cpus: 8
  distribution:
    distribution: fedora
    version: "32"
  eventLogger: file
  hostname: fix8.ad.meelogic.com
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  kernel: 5.7.11-200.fc32.x86_64
  linkmode: dynamic
  memFree: 5213646848
  memTotal: 16683360256
  ociRuntime:
    name: runc
    package: runc-1.0.0-144.dev.gite6555cc.fc32.x86_64
    path: /usr/bin/runc
    version: |-
      runc version 1.0.0-rc10+dev
      commit: fbdbaf85ecbc0e077f336c03062710435607dbf1
      spec: 1.0.1-dev
  os: linux
  remoteSocket:
    path: /run/user/1000/podman/podman.sock
  rootless: true
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.1.4-1.fc32.x86_64
    version: |-
      slirp4netns version 1.1.4
      commit: b66ffa8e262507e37fca689822d23430f3357fe8
      libslirp: 4.3.1
      SLIRP_CONFIG_VERSION_MAX: 2
  swapFree: 8409575424
  swapTotal: 8409575424
  uptime: 1000h 46m 51.74s (Approximately 41.67 days)
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - registry.centos.org
  - docker.io
store:
  configFile: /home/mschmitz/.config/containers/storage.conf
  containerStore:
    number: 3
    paused: 0
    running: 2
    stopped: 1
  graphDriverName: overlay
  graphOptions:
    overlay.mount_program:
      Executable: /usr/bin/fuse-overlayfs
      Package: fuse-overlayfs-1.1.2-1.fc32.x86_64
      Version: |-
        fusermount3 version: 3.9.1
        fuse-overlayfs: version 1.1.0
        FUSE library version 3.9.1
        using FUSE kernel interface version 7.31
  graphRoot: /home/mschmitz/.local/share/containers/storage
  graphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "false"
  imageStore:
    number: 81
  runRoot: /run/user/1000/containers
  volumePath: /home/mschmitz/.local/share/containers/storage/volumes
version:
  APIVersion: 1
  Built: 1598988411
  BuiltTime: Tue Sep  1 21:26:51 2020
  GitCommit: ""
  GoVersion: go1.14.6
  OsArch: linux/amd64
  Version: 2.0.6

Package info (e.g. output of rpm -q podman or apt list podman):

Name        : podman
Epoch       : 2
Version     : 2.0.6
Release     : 1.fc32
Architecture: x86_64
Install Date: Mon 28 Sep 2020 12:23:47 CEST
Group       : Unspecified
Size        : 52877679
License     : ASL 2.0
Signature   : RSA/SHA256, Tue 01 Sep 2020 21:48:57 CEST, Key ID 6c13026d12c944d0
Source RPM  : podman-2.0.6-1.fc32.src.rpm
Build Date  : Tue 01 Sep 2020 21:26:41 CEST
Build Host  : buildhw-x86-11.iad2.fedoraproject.org
Packager    : Fedora Project
Vendor      : Fedora Project
URL         : https://podman.io/
Bug URL     : https://bugz.fedoraproject.org/podman
Summary     : Manage Pods, Containers and Container Images
Description :
podman (Pod Manager) is a fully featured container engine that is a simple
daemonless tool.  podman provides a Docker-CLI comparable command line that
eases the transition from other container engines and allows the management of
pods, containers and images.  Simply put: alias docker=podman.
Most podman commands can be run as a regular user, without requiring
additional privileges.

podman uses Buildah(1) internally to create container images.
Both tools share image (not container) storage, hence each can use or
manipulate images (but not containers) created by the other.

Manage Pods, Containers and Container Images
podman is a simple management tool for pods, containers and images

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide?

No

Additional environment details (AWS, VirtualBox, physical, etc.):

FC32 on a physical laptop

@openshift-ci-robot openshift-ci-robot added the kind/bug Categorizes issue or PR as related to a bug. label Oct 1, 2020
@rhatdan rhatdan added the Good First Issue This issue would be a good issue for a first time contributor to undertake. label Oct 1, 2020
@rhatdan
Member

rhatdan commented Oct 1, 2020

I am not sure what caused your volume to get removed. I took a quick look at the code and realized we were attempting to pass down the force flag but never actually using it. I opened a PR to clean up our handling of the force flag here.

#7864

But I don't believe this will fix your problem.
Do you have a quick reproducer for the bug?

@rhatdan
Member

rhatdan commented Oct 1, 2020

My test shows that this is working properly:

$ podman --version
podman version 2.1.1
$ podman volume create test1
test1
$ podman volume create test2
test2
$ podman create -v test1:/test1 alpine echo hello
2f78c87066301c99b439a14ffbb2ee09f972438497b9b8e3f9836ec51c28d528
$ podman volume prune -f
test2
$ podman volume list
DRIVER      VOLUME NAME
local       test1

@mschz
Author

mschz commented Oct 1, 2020

I used the docker-compose file attached to this message and ran the following commands:
$ podman-compose -f docker-compose-test.yaml -p test up --detach --build
4515b03c23eb2d65d32675e98be4ac7606721b4a682fcf468fbdbd55de19a815
0aa141b0e738a071315562668f3c266386fa3bffe9d33fa527993953c316a6bb
Error: error inspecting volume test_upload: no volume with name "test_upload" found: no such volume
d8eb12135bc0e941ef3cadc8e09a6a12823279e7448d2c5b142d70fedc0a9278
podman pod create --name=test --share net
0
podman volume inspect test_database_51580a171176760f70ad908155d28f6b || podman volume create test_database_51580a171176760f70ad908155d28f6b
podman run --name=test_database_1 -d --pod=test --label io.podman.compose.config-hash=123 --label io.podman.compose.project=test --label io.podman.compose.version=0.0.1 --label com.docker.compose.container-number=1 --label com.docker.compose.service=database --env-file /home/mschmitz/podman/pods/testlink/devel/database/ENV --mount type=volume,source=test_database_51580a171176760f70ad908155d28f6b,destination=/var/lib/mysql/data --add-host database:127.0.0.1 --add-host test_database_1:127.0.0.1 --add-host webappl:127.0.0.1 --add-host test_webappl_1:127.0.0.1 --expose 3306 centos/mariadb-103-centos8
0
podman volume inspect test_upload || podman volume create test_upload
podman run --name=test_webappl_1 -d --pod=test --label io.podman.compose.config-hash=123 --label io.podman.compose.project=test --label io.podman.compose.version=0.0.1 --label com.docker.compose.container-number=1 --label com.docker.compose.service=webappl --mount type=bind,source=/home/mschmitz/.local/share/containers/storage/volumes/test_upload/_data,destination=/var/upload_area,bind-propagation=Z --add-host database:127.0.0.1 --add-host test_database_1:127.0.0.1 --add-host webappl:127.0.0.1 --add-host test_webappl_1:127.0.0.1 centos/centos:8
0
[mschmitz@fix8 testlink]$ podman volume ls
DRIVER   VOLUME NAME
local    test_database_51580a171176760f70ad908155d28f6b
local    test_upload
[mschmitz@fix8 testlink]$ podman volume prune -f
test_upload
As you can see the volume got deleted.

Running the same commands as you did, I get the same result; everything looks fine.

docker-compose-test.yaml.txt

@mheon
Member

mheon commented Oct 1, 2020

It looks like the output didn't paste cleanly - can you pastebin it instead?

EDIT: I also don't see the output of podman volume prune (or any subsequent attempts to start the pod). Please provide that.

@mheon
Member

mheon commented Oct 1, 2020

Also, why are you adding an argument to podman volume prune? It takes no arguments - it simply prunes unused volumes. There are also strong checks in place to ensure that a volume in use by a container will never be removed.

@rhatdan I think we do have a bug here in that podman volume prune is not throwing an error when an argument is specified.
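
For context, here is a minimal sketch of the in-use protection described above; the volume and container names are made up for illustration, and the exact error text will depend on the Podman version:

# Create a volume and a container that references it by name.
podman volume create demo-vol
podman create --name demo-ctr -v demo-vol:/data alpine true

# Removing the volume should be refused while demo-ctr still references it.
podman volume rm demo-vol

# Pruning should only remove volumes with no referencing containers,
# so demo-vol is expected to survive this.
podman volume prune -f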

@mheon
Member

mheon commented Oct 1, 2020

@mschz I just tried your exact podman volume prune command on my system and it exits with an error complaining that podman volume prune accepts no arguments. I am also on F32, Podman 2.0.6. Can you please provide the full output of the podman volume prune command? I strongly suspect it's doing nothing...

@rhatdan
Member

rhatdan commented Oct 1, 2020

@mheon I just checked, and on master podman volume prune does throw an error when args are given.

@mschz
Author

mschz commented Oct 5, 2020

@mheon, sorry for the bad readability of my previous comment. I think you misinterpreted the output. I ran "podman volume prune -f" and the output was the name of the volume deleted (test_upload).

@umohnani8
Member

Just checked master and podman volume prune throws an error now if an arg is passed to it

➜  podman git:(master) podman volume prune blah 
Error: `podman volume prune` takes no arguments

Looks like this issue has been resolved. @mschz, can you please confirm this is working for you now so we can close it?

@rhatdan
Member

rhatdan commented Nov 14, 2020

Closing, speak up if you disagree.

@rhatdan rhatdan closed this as completed Nov 14, 2020
@mschz
Author

mschz commented Nov 17, 2020

Hi, sorry, but I was so busy in my current project that I haven't been able to follow this discussion for a couple of weeks. Today I upgraded to podman v2.1.1 and tried again. I had to change the docker-compose-test.yaml file a bit and attached the new one. Also please have a look at https://pastebin.com/yEwP6Szn to see what happens when creating the POD and pruning my volumes.

According to the documentation, only "unused volumes" should be removed. As you can see from the "podman container inspect" output, the volume "test_upload" is not unused and should therefore not be removed when pruning volumes.
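
A hedged way to ask Podman directly which containers it associates with a volume, assuming the installed "podman ps" supports the volume filter; in this setup it would likely report nothing for "test_upload", which hints at the root cause discussed further below:

# List containers that Podman links to the named volume; an empty result
# despite the running webappl container is the symptom being reported here.
podman ps -a --filter volume=test_upload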

docker-compose-test.yaml.txt

@rhatdan
Member

rhatdan commented Nov 17, 2020

Are you sure it is not podman-compose causing the problem?
When I tested simply, as I stated above, it worked fine.

@mheon
Member

mheon commented Nov 17, 2020

I am fairly certain this is podman-compose - last I checked (admittedly a while back) they were mounting volumes without using their names, instead hardcoding the data directory, which is incorrect and breaks Podman's in-use detection for named volumes.
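
A sketch of the difference, using the volume name and path from the output above (everything else is illustrative): the first command is roughly what podman-compose generated, the second is the form that keeps Podman's in-use tracking intact:

# Bind-mounting the volume's internal _data directory: Podman records a
# plain bind mount, so the named volume "test_upload" still looks unused.
podman run -d \
  --mount type=bind,source=$HOME/.local/share/containers/storage/volumes/test_upload/_data,destination=/var/upload_area \
  centos/centos:8

# Mounting the named volume itself (equivalently: -v test_upload:/var/upload_area)
# ties the volume to the container and protects it from "podman volume prune".
podman run -d \
  --mount type=volume,source=test_upload,destination=/var/upload_area \
  centos/centos:8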

@mschz
Author

mschz commented Nov 18, 2020

All podman commands executed by podman-compose are in the output. You're right, there's at least one issue with mounting volumes that I reported to the team and for which I provided a fix. If I leave out all the irrelevant "--name", "--pod", "--label", and "--add-host" options, the "podman run" command mounting the volume that gets removed looks like this:

podman run -d --mount type=bind,source=/home/jdoe/.local/share/containers/storage/volumes/test_upload/_data,destination=/var/upload_area,bind-propagation=Z centos/centos:8 /bin/bash

If I understood you correctly, you're saying that the bind mount doesn't link the volume, which was created and stores its data at /home/jdoe/.local/share/containers/storage/volumes/test_upload/_data, to the container. Therefore the volume is unused from Podman's point of view and will be removed when pruning volumes. Right?
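
One way to see what Podman actually recorded is to look at the mount type and name in the container's inspect data; the container name is taken from the thread and the format string is only an example:

# A bind mount shows Type "bind" with an empty Name; a named volume shows
# Type "volume" with the volume name Podman uses for its in-use detection.
podman container inspect test_webappl_1 \
  --format '{{range .Mounts}}{{.Type}} {{.Name}} -> {{.Destination}}{{"\n"}}{{end}}'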

@mheon
Member

mheon commented Nov 18, 2020 via email

@rhatdan
Member

rhatdan commented Nov 18, 2020

@mheon nice find and explanation.

@github-actions github-actions bot added the locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments. label Sep 22, 2023
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Sep 22, 2023