Upgrading from 1.6 to 1.7 deletes named persistent volumes on podman run --rm
#5009
I'll take this one
On Wed, Jan 29, 2020, 03:07 William Lieurance <***@***.***> wrote:
*Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)*
/kind bug

*Description*

Named volumes automatically created by version 1.6 of Podman are retained through multiple invocations of `podman run --rm`. For example, a Postgres database that specified a named volume the first time it ran will keep working across multiple invocations of the container. Named volumes made manually with `podman volume create` also persist across invocations of the container. https://github.com/containers/libpod/blob/master/docs/source/markdown/podman-run.1.md mentions that "If no such named volume exists, Podman will create one." That's great, and it has worked well for me for a few months now.

The trouble comes after an upgrade to version 1.7: the same `podman run --rm` invocation works correctly the first time, but as soon as the container is stopped the named volume is deleted in the background. The next invocation of `podman run --rm` with the same set of command-line switches creates a new volume with no data in it and mounts it normally. The Postgres database from our example becomes empty, which is the case that surprised me in particular.
*Steps to reproduce the issue:*
# Podman version 1.6.2-2.fc31
/bin/podman run -it --rm --mount type=volume,src=vol16,dst=/home/user/vol16 --name voltest ubuntu:18.04 /bin/bash -c 'echo "yo" > /home/user/vol16/test.txt'
/bin/podman volume inspect vol16 | grep -i containerspecific
"ContainerSpecific": true
/bin/podman run -it --rm --mount type=volume,src=vol16,dst=/home/user/vol16 --name voltest ubuntu:18.04 /bin/bash -c 'cat /home/user/vol16/test.txt'
yo
/bin/podman run -it --rm --mount type=volume,src=vol16,dst=/home/user/vol16 --name voltest ubuntu:18.04 /bin/bash -c 'cat /home/user/vol16/test.txt'
yo
# Do an in-place upgrade
# Podman version 1.7.0-2.fc31
/bin/podman run -it --rm --mount type=volume,src=vol16,dst=/home/user/vol16 --name voltest ubuntu:18.04 /bin/bash -c 'cat /home/user/vol16/test.txt'
yo
/bin/podman run -it --rm --mount type=volume,src=vol16,dst=/home/user/vol16 --name voltest ubuntu:18.04 /bin/bash -c 'cat /home/user/vol16/test.txt'
cat: /home/user/vol16/test.txt: No such file or directory
/bin/podman run -it --rm --mount type=volume,src=vol16,dst=/home/user/vol16 --name voltest ubuntu:18.04 /bin/bash -c 'cat /home/user/vol16/test.txt'
cat: /home/user/vol16/test.txt: No such file or directory
*Describe the results you received:*
cat: /home/user/vol16/test.txt: No such file or directory
*Describe the results you expected:*
yo
*Additional information you deem important (e.g. issue happens only occasionally):*
Investigating this a bit shows that the meaning of the `ContainerSpecific` flag in the boltdb has changed slightly. Prior to 0d62391, the `ContainerSpecific` flag was set to `true` for any auto-created volume, including named ones; after that commit, those volumes are now `ContainerSpecific: false`. Unfortunately, 0d62391#diff-7b5097b329b325dc0490ec58b133a9aeR440 also changed the logic around volume deletion for containers started with `--rm`. Now any volume that is `ContainerSpecific: true`, regardless of whether it is named, gets deleted when the container stops.

As you might expect, volumes created with `podman volume create` have `ContainerSpecific: false` and are not affected by this situation.

I honestly don't know what to do about this. I don't think there's any other state information we could glean from the database to determine whether it's a good idea to delete these volumes, but it seems like a really scary behaviour to mysteriously lose previously persistent volumes. Make it clear that automatically created volumes are different from manually created ones? Update the code to ensure they're always made via the same codepath? I'm open to suggestions here.
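To make the reported behaviour change concrete, here is a minimal Go sketch of the cleanup decision described above. This is not the actual libpod code; the `Volume` type and `shouldRemoveOnCleanup` helper are hypothetical stand-ins, assuming only what the report states: Podman 1.6.x recorded `ContainerSpecific: true` for any volume it auto-created (named or not), and post-0d62391 cleanup removes every such volume when a `--rm` container exits.

```go
package main

import "fmt"

// Volume is a hypothetical stand-in for the volume record stored in boltdb.
// Podman 1.6.x set ContainerSpecific to true for any volume it auto-created
// for a container, including user-named ones.
type Volume struct {
	Name              string
	ContainerSpecific bool
}

// shouldRemoveOnCleanup sketches the post-upgrade behaviour reported here:
// when a container started with --rm exits, any volume whose record says
// ContainerSpecific is removed, whether or not the user named it.
func shouldRemoveOnCleanup(v Volume, containerUsedRm bool) bool {
	return containerUsedRm && v.ContainerSpecific
}

func main() {
	// A named volume auto-created by Podman 1.6.x carries the old flag value.
	vol16 := Volume{Name: "vol16", ContainerSpecific: true}
	// A volume made with `podman volume create` has ContainerSpecific: false.
	manual := Volume{Name: "manual", ContainerSpecific: false}

	fmt.Println(shouldRemoveOnCleanup(vol16, true))  // true  -> data loss on --rm
	fmt.Println(shouldRemoveOnCleanup(manual, true)) // false -> survives
}
```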
*Output of `podman version`:*
1.6.2 upgrading to 1.7.0
I'm pretty sure the change is in 1.6.3, but that's what I had to test with.

*Output of `podman info --debug`:*
n/a

*Package info (e.g. output of `rpm -q podman` or `apt list podman`):*
See above

*Additional environment details (AWS, VirtualBox, physical, etc.):*
This was tested on two physical machines: discovered on one running Fedora CoreOS with rpm-ostreed doing automatic updates, and then verified on one running Fedora 31's desktop spin.
This appears to be a side-effect of our migration to anonymous volumes. Previously, we did not properly support anonymous volumes, but attempted to mimic this support through general named volumes. Docker removes anonymous volumes when containers are autoremoved via `--rm`. I don't believe any fix for 1.7.0 is possible, but we should be able to fix this moving forward by deprecating the `ContainerSpecific` field in favor of a new one.
In Podman 1.6.3, we added support for anonymous volumes, fixing our old, broken support for named volumes that were created with containers. Unfortunately, this reused the database field we used for the old implementation and toggled volume removal on for `podman run --rm`, so we were now removing *named* volumes created with older versions of Podman. We can't modify these old volumes in the DB, so the next-safest thing to do is swap to a new field to indicate volumes should be removed.

Problem: volumes created with 1.6.3 and up until this lands, even anonymous volumes, will not be removed. However, this is safer than removing too many volumes, as we were doing before.

Fixes containers#5009

Signed-off-by: Matthew Heon <[email protected]>
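As a rough illustration of the direction the commit message describes (keying removal off a new field so records written by older Podman versions are left alone), here is a hedged Go sketch. The field name `Anonymous` and the `removeWithContainer` helper are assumptions made for illustration; they are not the actual code in #5018.

```go
package main

import "fmt"

// volumeRecord is a hypothetical shape for stored volume metadata after the
// fix: removal is decided by a new field rather than the legacy
// ContainerSpecific flag, so volumes recorded by older Podman versions are
// never swept up.
type volumeRecord struct {
	Name              string
	ContainerSpecific bool // legacy field, no longer consulted for removal
	Anonymous         bool // assumed new field: true only for unnamed volumes
}

// removeWithContainer sketches the safer rule: only volumes explicitly
// flagged as anonymous are deleted when a --rm container exits.
func removeWithContainer(v volumeRecord, containerUsedRm bool) bool {
	return containerUsedRm && v.Anonymous
}

func main() {
	// A named volume migrated from 1.6.x: the legacy flag is true, but the
	// new field defaults to false, so the data survives `podman run --rm`.
	legacy := volumeRecord{Name: "vol16", ContainerSpecific: true}
	fmt.Println(removeWithContainer(legacy, true)) // false
}
```

The trade-off noted in the commit message follows directly from this rule: volumes created between 1.6.3 and the fix never get the new flag set, so they are not auto-removed either, which errs on the side of keeping data.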
Fix in #5018