"sudo podman system reset" deletes current working directory #18349
Comments
I'm running Debian package version 4.3.1+ds1-6+b2, which is from testing (Debian 12 is not released yet).
It also happens in a nested configuration, which makes it easier to reproduce (it still requires btrfs as the backing filesystem). Steps to reproduce (with btrfs as the backing filesystem): …
It also reproduces on debian:sid.
With the instructions for the nested test, I can reproduce it on Fedora CoreOS version 37.20230401.3.0.
You have to set runroot in the storage.conf file as well. This is really unfortunate, especially considering #18295 (unknown cause). I think we should at least patch system reset to display the directories that we delete; that will at least give users a chance to abort. And maybe, even better, have a list of directories that we never delete (i.e. …)
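For illustration, a minimal sketch of such a configuration follows. The driver and paths below are the usual rootful defaults, not values taken from this report, so treat them as assumptions; the point is simply that both graphroot and runroot are pinned in /etc/containers/storage.conf so neither can fall back to the current working directory:

# Hypothetical example: write a storage.conf that sets BOTH graphroot and runroot explicitly.
sudo tee /etc/containers/storage.conf >/dev/null <<'EOF'
[storage]
driver = "btrfs"
graphroot = "/var/lib/containers/storage"
runroot = "/run/containers/storage"
EOF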
Setting runRoot does fix this. The podman version in Debian unstable / 12 currently accepts a storage.conf without runRoot; that should probably be changed in Debian, @siretart. Thankfully I didn't lose anything besides settings in bashrc, .config, etc., but I started doing nightly backups.
So you are asking to backport containers/storage#1510 to podman 4.3.1? Is that a patch that Red Hat / Fedora would also backport? In any case, please file a bug in Debian. We need to do an impact analysis and extensive convincing of the release team to accept such a code change this late in the release cycle.
yes
Will do.
Really hope this gets in. I'm not familiar with the process in Debian; however, the fix shouldn't impact the general stability of Debian, and it will definitely prevent a number of people from accidentally deleting their data.
Fedora is on 4.5; we do not do any extra backports there, it just uses what upstream tags. For RHEL, unless someone filed a Bugzilla there to request a backport, we will not backport it.
The Debian bug is at: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1034871
system reset says it will delete containers, images, networks, etc. However, it will also delete the graphRoot and runRoot directories. Normally this is not an issue; however, in some cases these directories were set to the user's home directory or some other important system directory. As a first step, simply show the directories that are configured and thus will be deleted by reset. As a future step we could implement some safeguard that will not delete certain known important directories, but I tried to keep it simple for now. [NO NEW TESTS NEEDED] see containers#18349 and containers#18295 Signed-off-by: Paul Holzinger <[email protected]>
A friendly reminder that this issue had no activity for 30 days.
Since this is fixed in the current release, closing.
As a user who also got his working directory deleted by podman, I wanted to figure out which podman versions are vulnerable to this bug. I believe that this is fully fixed in Podman 4.5.0. Versions 4.0.0 to <4.5.0 are vulnerable if an earlier Podman was used to initialize …; I can only reproduce this if $CWD is under btrfs and I use …. Full findings below.

CWD bug
On 2021-12-01 a user reported that …. Podman 4.0.0 (released on 2022-02-17) is the first release to contain this fix. The fix was not backported to any earlier branches (e.g. 3.4.5 was released on 2022-04-13 and it could have contained the fix).

CWD bug reappears
Almost a year later, on 2023-02-06 a user reported the same CWD issue, but this time with Podman 4.3.1 (which was meant to be fixed‽). It turns out that the …. We can reproduce the issue like this: …
Now let's upgrade to Podman 4.4.4 and run …
We're still vulnerable to the bug until we delete …
Anyway, the author of the issue identified ….

First data loss report
On 2023-04-20 I opened #18287, reporting the deletion of ….

Second data loss report
Just 5 days later, on 2023-04-25 another user reported the same bug (that's the current ticket). The issue was investigated and it turned out that a fix for this had already landed in ….

Third data loss report
2023-06-20: #17384 (comment).

Conclusion
I appreciate that the issue(s) here have been fixed. I am not familiar with your release strategy, but you might want to backport these fixes so users of older versions don't suffer data loss -- the 3rd data loss incident could have been avoided. It might also be worth mentioning these backports to distro maintainers, so they reach users stuck on earlier versions. More defensive and less stateful code would also be good (e.g. why would …). Podman 4.6.0+ will also print the graphRoot and runRoot directories that will be deleted.
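For anyone triaging whether their own setup is at risk, a small sketch of a pre-flight check; the Go-template field names below are an assumption based on current podman info output and may differ on older versions:

# Show the storage directories Podman resolves; "system reset" removes these
# trees, so make sure neither points at your home or working directory.
sudo podman info --format 'graphRoot: {{.Store.GraphRoot}}'
sudo podman info --format 'runRoot: {{.Store.RunRoot}}'
# Same check for rootless storage.
podman info --format '{{.Store.GraphRoot}} {{.Store.RunRoot}}'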
Backport of commit 6aaf6a2 (the system reset change quoted above). [NO NEW TESTS NEEDED] see containers#18349, containers#18295, and containers#19870. Signed-off-by: Valentin Rothberg <[email protected]>
Backport of commit 6aaf6a2 (same change). [NO NEW TESTS NEEDED] see containers#18349 and containers#18295. Signed-off-by: Paul Holzinger <[email protected]> Signed-off-by: Valentin Rothberg <[email protected]>
Backport of commit 6aaf6a2 (same change). [NO NEW TESTS NEEDED] see containers#18349, containers#18295, and containers#19870. Signed-off-by: Paul Holzinger <[email protected]> Signed-off-by: Valentin Rothberg <[email protected]>
Issue Description
I just figured out that "sudo podman system reset" deletes the current working directory. This is on btrfs with the btrfs storage driver on Debian 12 bookworm. When I found this bug my current working directory was my home directory :(
Steps to reproduce the issue
Steps to reproduce the issue:
Only try this in an empty test directory!
mkdir test
cd test
sudo podman system reset
cd ..
ls -l
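As an aside that is not part of the original report, a hedged way to confirm the preconditions (btrfs filesystem and btrfs storage driver) before repeating the steps above; the Go-template field name is an assumption and may vary by podman version:

# Confirm the current directory is on btrfs and that the btrfs driver is in use.
stat -f -c %T .                                           # prints "btrfs" on a btrfs filesystem
sudo podman info --format '{{.Store.GraphDriverName}}'    # expected to print "btrfs" here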
Describe the results you received
Well, the working directory at the time of the reset is gone.
Describe the results you expected
podman shouldn't touch the working directory.
podman info output
Podman in a container
No
Privileged Or Rootless
None
Upstream Latest Release
No
Additional environment details
Debian 12 bookworm
btrfs as backing filesystem with btrfs storage driver
Additional information
I'm also observing the directories "btrfs-containers", "btrfs-layers", "btrfs-locks" being created in random places around my filesystem.
I have also tried this on Fedora CoreOS, with a btrfs filesystem and the btrfs driver, and have not observed the issue there. I'm also not observing this issue with rootless podman.
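For the stray btrfs-* directories mentioned above, a small sketch (assuming GNU find and root access) for locating them:

# Search the root filesystem for the stray state directories.
sudo find / -xdev -type d \( -name 'btrfs-containers' -o -name 'btrfs-layers' -o -name 'btrfs-locks' \) 2>/dev/null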