Unexpected error "Cleanup volume: volume is being used by the following container" #12808

Closed
Romain-Geissler-1A opened this issue Jan 11, 2022 · 5 comments · Fixed by #13232
Labels
kind/bug, locked - please file new issue/PR

Comments

@Romain-Geissler-1A
Contributor

/kind bug

Description

It seems that removing a container which uses the volume of another container leads to an unexpected "Cleanup volume: volume is being used by the following container" error.

Steps to reproduce the issue:

  1. Create a dummy Dockerfile that declares a volume:
[root@4500bc3a2feb bug-report]# cat Dockerfile
FROM fedora

VOLUME /some-volume
  2. Build this Dockerfile:
[root@4500bc3a2feb bug-report]# podman build .
STEP 1/2: FROM fedora
STEP 2/2: VOLUME /some-volume
--> Using cache d39eb36dbb4a3bd8e5f04b89f0dc2cd298994e0f0f559934442396482eee829e
--> d39eb36dbb4
d39eb36dbb4a3bd8e5f04b89f0dc2cd298994e0f0f559934442396482eee829e
  3. Run a first container from this image, which will sleep for a long time:
[root@4500bc3a2feb bug-report]# podman run -d --name some-running-container-with-a-volume d39eb36dbb4a3bd8e5f04b89f0dc2cd298994e0f0f559934442396482eee829e sleep 10000000
252066f8b24cbbc3b5df56f052261042e6832b4bd668517cec66fe56c43b9ef3
  4. Run another container with "--rm", using the volumes of the first one; we get an unexpected error at the end, during container removal (a verification sketch follows these steps):
[root@4500bc3a2feb bug-report]# podman run --rm --volumes-from some-running-container-with-a-volume fedora /bin/true 
ERRO[0000] Cleanup volume (&{76b605b95535764eaf29af93b691385c405a2feedc20e69d30ecaea05fdf3e93 /some-volume [rprivate rw nodev exec nosuid rbind]}): volume 76b605b95535764eaf29af93b691385c405a2feedc20e69d30ecaea05fdf3e93 is being used by the following container(s): 252066f8b24cbbc3b5df56f052261042e6832b4bd668517cec66fe56c43b9ef3: volume is being used
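
To confirm that the error refers to the anonymous volume still held by the first container, one can inspect its mounts. A minimal sketch, reusing the IDs from the output above (the exact template fields may vary across podman versions):

[root@4500bc3a2feb bug-report]# podman inspect --format '{{range .Mounts}}{{.Name}} -> {{.Destination}}{{end}}' some-running-container-with-a-volume
76b605b95535764eaf29af93b691385c405a2feedc20e69d30ecaea05fdf3e93 -> /some-volume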

Describe the results you received:

In step 4, we see an unexpected error "Cleanup volume: volume is being used by the following container".

Describe the results you expected:

In step 4, I would expect that no error is logged.

Additional information you deem important (e.g. issue happens only occasionally):

All this was tested on a x86-64 RHEL 8, using the podman image quay.io/podman/upstream started with --privileged mode.
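
For reference, such an environment can be started roughly like this (a sketch; the interactive shell at the end is an assumption):

podman run -it --privileged quay.io/podman/upstream bash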

Output of podman version:

[root@4500bc3a2feb bug-report]# podman version
Client:       Podman Engine
Version:      4.0.0-dev
API Version:  4.0.0-dev
Go Version:   go1.16.13
Git Commit:   ed9ef59e7ea439b670875863132b68fd094501c7
Built:        Tue Jan 11 08:12:18 2022
OS/Arch:      linux/amd64
@openshift-ci openshift-ci bot added the kind/bug label Jan 11, 2022
@mheon
Member

mheon commented Jan 12, 2022

I think this is expected; the second container wants to remove any anonymous volumes (the volume mounted in from the first container, in this example), but it can't because the other container is still active.
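
For comparison, when no other container holds the volume, the same cleanup succeeds silently. A sketch in a clean environment (assuming no pre-existing volumes; image ID as built above):

[root@4500bc3a2feb bug-report]# podman run --rm d39eb36dbb4 /bin/true
[root@4500bc3a2feb bug-report]# podman volume ls
DRIVER      VOLUME NAME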

@rhatdan What do you think?

@Romain-Geissler-1A
Contributor Author

Romain-Geissler-1A commented Jan 12, 2022

As a user, when I use "--rm" I intend to remove the second container, not really the volume.

I created the volume as an anonymous one on purpose, because I really wanted the volume to disappear automatically once all containers using it have disappeared. However, when only one of them disappears, I don't expect podman to log an error like this. I see this anonymous volume like a std::shared_ptr in C++: it's an anonymous ref-counted resource that disappears when nothing else references it.

Actually, some of my users are now reaching out to me because these errors get logged on screen, and I have to tell them that the errors are expected and can be ignored. It feels a bit confusing IMO.

@Romain-Geissler-1A
Contributor Author

Romain-Geissler-1A commented Jan 12, 2022

In other words, I would find this error expected on a command like "podman volume rm", where the intent really is to remove a volume, but not when removing a container (which merely happens to use shared volumes).
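
For instance, an explicit removal is exactly where such an error makes sense. A sketch, with the volume and container IDs taken from the output above (the error text is approximated from the cleanup message):

[root@4500bc3a2feb bug-report]# podman volume rm 76b605b95535764eaf29af93b691385c405a2feedc20e69d30ecaea05fdf3e93
Error: volume 76b605b95535764eaf29af93b691385c405a2feedc20e69d30ecaea05fdf3e93 is being used by the following container(s): 252066f8b24cbbc3b5df56f052261042e6832b4bd668517cec66fe56c43b9ef3: volume is being used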

@rhatdan
Member

rhatdan commented Jan 13, 2022

I agree this looks like a bug to me. I would not expect a container that uses --volumes-from to attempt to remove the volume.

@github-actions

A friendly reminder that this issue had no activity for 30 days.

rhatdan added a commit to rhatdan/podman that referenced this issue Feb 21, 2022
When removing a container created with --volumes-from a container
created with a built-in volume, we complain if the original container
still exists.  Since this is an expected state, we should not complain
about it.

Fixes: containers#12808

Signed-off-by: Daniel J Walsh <[email protected]>
mheon pushed a commit to mheon/libpod that referenced this issue Feb 23, 2022
When removing a container created with --volumes-from a container
created with a built-in volume, we complain if the original container
still exists.  Since this is an expected state, we should not complain
about it.

Fixes: containers#12808

Signed-off-by: Daniel J Walsh <[email protected]>
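
With this fix, rerunning step 4 should no longer log the error, while the first container keeps the anonymous volume alive. A sketch of the expected behavior (not verified output):

[root@4500bc3a2feb bug-report]# podman run --rm --volumes-from some-running-container-with-a-volume fedora /bin/true
[root@4500bc3a2feb bug-report]#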
@github-actions github-actions bot added the locked - please file new issue/PR label Sep 20, 2023
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Sep 20, 2023