
"sudo podman system reset" deletes current working directory #18349

Closed
Cydox opened this issue Apr 25, 2023 · 13 comments
Labels
kind/bug · locked - please file new issue/PR · stale-issue

Comments

@Cydox
Contributor

Cydox commented Apr 25, 2023

Issue Description

I just figured out that sudo podman system reset deletes the current working directory. This is on btrfs with the btrfs storage driver on Debian 12 bookworm.

When I found this bug my current working directory was my home directory :(

Steps to reproduce the issue

Only try this in an empty test directory!

  1. mkdir test
  2. cd test
  3. sudo podman system reset
  4. cd ..
  5. ls -l

Describe the results you received

Well, the working directory at the time of the reset is gone.

Describe the results you expected

podman shouldn't touch the working directory.
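
Before running a destructive reset on a setup like this, a quick pre-flight check is possible (a minimal sketch, assuming podman info's Go-template --format, which podman 4.x supports):

  sudo podman info --format '{{.Store.GraphRoot}}'
  sudo podman info --format '{{.Store.RunRoot}}'

If either path points at your working directory or your home directory (here runRoot resolved to /home/jan, see the output below), fix storage.conf before touching podman system reset.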

podman info output

host:
  arch: amd64
  buildahVersion: 1.28.2
  cgroupControllers:
  - cpuset
  - cpu
  - io
  - memory
  - hugetlb
  - pids
  - rdma
  - misc
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon_2.1.6+ds1-1_amd64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.6, commit: unknown'
  cpuUtilization:
    idlePercent: 94.3
    systemPercent: 2.1
    userPercent: 3.61
  cpus: 4
  distribution:
    codename: bookworm
    distribution: debian
    version: "12"
  eventLogger: journald
  hostname: desktop
  idMappings:
    gidmap: null
    uidmap: null
  kernel: 6.2.11
  linkmode: dynamic
  logDriver: journald
  memFree: 7669645312
  memTotal: 16706768896
  networkBackend: netavark
  ociRuntime:
    name: crun
    package: crun_1.8.1-1+b1_amd64
    path: /usr/bin/crun
    version: |-
      crun version 1.8.1
      commit: f8a096be060b22ccd3d5f3ebe44108517fbf6c30
      rundir: /run/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +YAJL
  os: linux
  remoteSocket:
    path: /run/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: false
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: false
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns_1.2.0-1_amd64
    version: |-
      slirp4netns version 1.2.0
      commit: 656041d45cfca7a4176f6b7eed9e4fe6c11e8383
      libslirp: 4.7.0
      SLIRP_CONFIG_VERSION_MAX: 4
      libseccomp: 2.5.4
  swapFree: 25769799680
  swapTotal: 25769799680
  uptime: 0h 59m 32.00s
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  volume:
  - local
registries: {}
store:
  configFile: /etc/containers/storage.conf
  containerStore:
    number: 0
    paused: 0
    running: 0
    stopped: 0
  graphDriverName: btrfs
  graphOptions: {}
  graphRoot: /var/lib/containers/storage/btrfs
  graphRootAllocated: 127497404416
  graphRootUsed: 16362708992
  graphStatus:
    Build Version: Btrfs v6.2
    Library Version: "102"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 0
  runRoot: /home/jan
  volumePath: /var/lib/containers/storage/btrfs/volumes
version:
  APIVersion: 4.3.1
  Built: 0
  BuiltTime: Thu Jan  1 01:00:00 1970
  GitCommit: ""
  GoVersion: go1.19.6
  Os: linux
  OsArch: linux/amd64
  Version: 4.3.1

Podman in a container

No

Privileged Or Rootless

None

Upstream Latest Release

No

Additional environment details

Debian 12 bookworm
btrfs as backing filesystem with btrfs storage driver

Additional information

I'm also observing the directories "btrfs-containers", "btrfs-layers", "btrfs-locks" being created in random places around my filesystem.

I have tried this on Fedora CoreOS also with btrfs filesystem and driver and have not observed the issue. I'm also not observing this issue with rootless podman.

@Cydox Cydox added the kind/bug label Apr 25, 2023
@lsm5
Member

lsm5 commented Apr 25, 2023

@Cydox wonder if this is a Debian-specific issue. I can't reproduce on Fedora rawhide either. Would you be able to try this in a Debian testing / sid environment, which should likely have a newer podman as well?

copying @siretart as he's the debian package maintainer for podman.

@Cydox
Contributor Author

Cydox commented Apr 25, 2023

I'm running Debian package version 4.3.1+ds1-6+b2, which is from testing (Debian 12 is not released yet).

@Cydox
Contributor Author

Cydox commented Apr 25, 2023

It also happens in a nested configuration, for easier reproducibility (it still requires btrfs as the backing filesystem).

Steps to reproduce (with btrfs as backing filesystem):

  1. podman run --rm -it docker.io/debian:bookworm
  2. apt update
  3. apt install -y podman
  4. echo -e "[storage]\ndriver = \"btrfs\"\ngraphRoot = \"/var/lib/containers/storage/btrfs\"" > /etc/containers/storage.conf
  5. mkdir test
  6. cd test
  7. podman system reset
  8. cd ..
  9. ls -l
  10. test folder is gone

Also works on debian:sid
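
In the nested setup the dangerous default is visible before step 7 does any damage (a sketch under the same assumptions as the steps above; runRoot should come back as the current directory, /test):

  podman info --format '{{.Store.RunRoot}}'

Because the storage.conf from step 4 sets the driver and graphRoot but leaves runRoot unset, runRoot falls back to $CWD, which is exactly what reset then deletes.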

@Cydox
Contributor Author

Cydox commented Apr 25, 2023

With the instructions for the nested test I can reproduce it on Fedora CoreOS version 37.20230401.3.0

@Luap99
Member

Luap99 commented Apr 26, 2023

You have to set runroot in the storage.conf file as well.
I would expect containers/storage#1510 to fix this problem, which should be included in Podman v4.5.

This is really unfortunate. Also considering #18295 (unknown cause), I think we should at least patch system reset to display the directories that it will delete; that will at least give users a chance to abort. Even better, maybe keep a list of directories that we never delete (e.g. /, /home, $HOME, /etc, /usr and so on).
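
For reference, a complete minimal storage.conf that avoids the fallback entirely (a sketch using the canonical lowercase keys and the default paths from containers-storage.conf(5); the camelCase spelling in the reproducer above evidently worked as well):

# cat > /etc/containers/storage.conf <<EOF
[storage]
driver = "btrfs"
runroot = "/run/containers/storage"
graphroot = "/var/lib/containers/storage"
EOF

With both runroot and graphroot pinned, neither podman info nor podman system reset has a reason to resolve a storage directory from $CWD.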

@Cydox
Contributor Author

Cydox commented Apr 26, 2023

Setting runRoot does fix this. The podman version in Debian unstable / 12 currently accepts a storage.conf without runRoot. That should probably be changed in Debian, @siretart.

Thankfully I didn't lose anything besides settings in .bashrc, .config, etc., but I've started doing nightly rsyncs now xD

@siretart
Contributor

So you are asking to backport containers/storage#1510 to podman 4.3.1? Is that a patch that Red Hat / Fedora would also backport?

In any case, please file a bug in Debian. We need to do an impact analysis and extensive convincing to get the release team to accept such a code change this late in the release cycle.

@Cydox
Contributor Author

Cydox commented Apr 26, 2023

so you are asking to backport containers/storage#1510 to podman 4.3.1 ?

yes

in any case, please file a bug in debian.

Will do.

We need to do an impact analysis and extensive convincing for the release team to accept such a code change that late in the release cycle.

I really hope this gets in. I'm not familiar with the process in Debian; however, the fix shouldn't impact Debian's general stability, and it will definitely prevent a number of people from accidentally deleting their data.

@Luap99
Member

Luap99 commented Apr 26, 2023

Fedora is on 4.5; we do not do any extra backports there, it just uses what upstream tags. For RHEL, unless someone filed a Bugzilla there to request a backport, we will not backport it.

@Cydox
Contributor Author

Cydox commented Apr 26, 2023

The debian bug is at: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1034871

Luap99 added a commit to Luap99/libpod that referenced this issue Apr 26, 2023
system reset says it will delete containers, images, networks, etc.
However, it will also delete the graphRoot and runRoot directories.
Normally this is not an issue, but in some cases these directories
were set to the user's home directory or some other important system
directory.

As a first step, simply show the directories that are configured and
thus will be deleted by reset. As a future step we could implement a
safeguard that will not delete certain known important directories,
but I tried to keep it simple for now.

[NO NEW TESTS NEEDED]

see containers#18349 and containers#18295

Signed-off-by: Paul Holzinger <[email protected]>
@github-actions

A friendly reminder that this issue had no activity for 30 days.

@rhatdan
Member

rhatdan commented May 27, 2023

Since this is fixed in the current release, closing.

@rhatdan rhatdan closed this as completed May 27, 2023
@gg7

gg7 commented Aug 14, 2023

As a user who also got his working directory deleted by podman, I wanted to figure out which podman versions are vulnerable to this bug.

I believe that this is fully fixed in Podman 4.5.0. Versions 4.0.0 to <4.5.0 are vulnerable if an earlier Podman was used to initialize bolt_state.db.

I can only reproduce this if $CWD is under btrfs and I use driver = "overlay" in storage.conf.

Full findings below.

CWD bug

On 2021-12-01 a user reported that graphRoot can default to the current working directory. That was fixed the next day by containers/storage#1083, which shipped in containers/storage v1.38.0 (released on 2022-01-19) and v0.46.1.

Podman 4.0.0 (released on 2022-02-17) is the first release to contain this fix. The fix was not backported to any earlier branches (e.g. 3.4.5 was released on 2022-04-13 and it could have contained the fix).

CWD bug reappears

Almost a year later, on 2023-02-06 a user reported the same CWD issue, but this time with Podman 4.3.1 (which was meant to be fixed‽). It turns out that the CWD setting was being cached in /var/lib/containers/storage/libpod/bolt_state.db.
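
A quick way to spot the stale cache is to compare what podman actually resolves with what storage.conf configures (a hedged sketch; the DB layout itself is an implementation detail, so inspect it indirectly):

# podman info --format '{{.Store.GraphRoot}}'
# grep -iE '^\s*graphroot' /etc/containers/storage.conf || echo "graphroot not set in storage.conf"

If the first command prints a path that appears in no configuration file, it is almost certainly coming from the cached bolt_state.db.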

We can reproduce the issue like this:

# rm -rf /etc/containers/storage.conf /root/.config/containers/storage.conf /var/lib/containers/{cache,storage}
# cat > /etc/containers/storage.conf <<EOF
[storage]
driver = "overlay"
EOF
# fallocate -l 1G /tmp/podman-btrfs-test-volume.bin
# mkfs.btrfs /tmp/podman-btrfs-test-volume.bin
# mkdir /mnt/btrfs-test
# mount /tmp/podman-btrfs-test-volume.bin /mnt/btrfs-test
# cd /mnt/btrfs-test

# ls  # it's an empty directory

# podman info
time="2023-08-14T22:49:52Z" level=warning msg="Storage configuration is unset - using hardcoded default graph root \"/var/lib/containers/storage\""
time="2023-08-14T22:49:52Z" level=warning msg="Storage configuration is unset - using hardcoded default graph root \"/var/lib/containers/storage\""
time="2023-08-14T22:49:52Z" level=warning msg="Storage configuration is unset - using hardcoded default graph root \"/var/lib/containers/storage\""
Error: no storage root specified: missing necessary StoreOptions

# ls  # it's still an empty directory

# podman info  # needs to be run again
time="2023-08-14T22:50:12Z" level=warning msg="Storage configuration is unset - using hardcoded default graph root \"/var/lib/containers/storage\""
time="2023-08-14T22:50:12Z" level=warning msg="Storage configuration is unset - using hardcoded default graph root \"/var/lib/containers/storage\""
time="2023-08-14T22:50:12Z" level=warning msg="Storage configuration is unset - using hardcoded default graph root \"/var/lib/containers/storage\""
host:
  arch: amd64
  buildahVersion: 1.23.1
  cgroupControllers:
  - cpuset
  - cpu
  - io
  - memory
  - hugetlb
  - pids
  - misc
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: app-containers/conmon-2.1.7
    path: /usr/libexec/podman/conmon
    version: 'conmon version 2.1.7, commit: v2.1.7'
  cpus: 6
  distribution:
    distribution: gentoo
    version: "2.13"
  eventLogger: journald
  hostname: gentoo
  idMappings:
    gidmap: null
    uidmap: null
  kernel: 6.1.38-gentoo-dist
  linkmode: dynamic
  logDriver: journald
  memFree: 60402511872
  memTotal: 67280408576
  ociRuntime:
    name: crun
    package: app-containers/crun-1.8.1
    path: /usr/bin/crun
    version: |-
      crun version 1.8.1
      commit: f8a096be060b22ccd3d5f3ebe44108517fbf6c30
      rundir: /run/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +YAJL
  os: linux
  remoteSocket:
    path: /run/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_AUDIT_WRITE,CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_MKNOD,CAP_NET_BIND_SERVICE,CAP_NET_RAW,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: false
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: false
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: app-containers/slirp4netns-1.2.0
    version: |-
      slirp4netns version 1.2.0
      commit: 656041d45cfca7a4176f6b7eed9e4fe6c11e8383
      libslirp: 4.7.0
      SLIRP_CONFIG_VERSION_MAX: 4
      libseccomp: 2.5.4
  swapFree: 4294963200
  swapTotal: 4294963200
  uptime: 6h 18m 0.83s (Approximately 0.25 days)
plugins:
  log:
  - k8s-file
  - none
  - journald
  network:
  - bridge
  - macvlan
  volume:
  - local
registries:
  localhost:5000:
    Blocked: false
    Insecure: true
    Location: localhost:5000
    MirrorByDigestOnly: false
    Mirrors: null
    Prefix: localhost:5000
  search:
  - docker.io
store:
  configFile: /etc/containers/storage.conf
  containerStore:
    number: 0
    paused: 0
    running: 0
    stopped: 0
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /mnt/btrfs-test
  graphStatus:
    Backing Filesystem: btrfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Using metacopy: "false"
  imageStore:
    number: 0
  runRoot: /mnt/btrfs-test
  volumePath: /var/lib/containers/storage/volumes
version:
  APIVersion: 3.4.2
  Built: 1692047438
  BuiltTime: Mon Aug 14 21:10:38 2023
  GitCommit: ""
  GoVersion: go1.20.5
  OsArch: linux/amd64
  Version: 3.4.2
# ls
mounts  overlay  overlay-containers  overlay-images  overlay-layers  overlay-locks  storage.lock  tmp  userns.lock

graphRoot will follow $CWD:

# mkdir test-subdir
# cd test-subdir
# podman info | grep graphRoot
  graphRoot: /mnt/btrfs-test/test-subdir
# ls
mounts  overlay  overlay-containers  overlay-images  overlay-layers  overlay-locks  storage.lock  tmp  userns.lock

Now let's upgrade to Podman 4.4.4 and run podman info again:

# mkdir /mnt/btrfs-test/new-test-subdir
# cd /mnt/btrfs-test/new-test-subdir
# podman info | grep graphRoot
WARN[0000] Storage configuration is unset - using hardcoded default graph root "/var/lib/containers/storage"  # lies!
  graphRoot: /mnt/btrfs-test/new-test-subdir
  graphRootAllocated: 1073741824
  graphRootUsed: 4030464
# ls
defaultNetworkBackend  overlay  overlay-containers  overlay-images  overlay-layers  overlay-locks  storage.lock  userns.lock
# podman info  # without grep, just fyi
time="2023-08-14T22:59:24Z" level=warning msg="Storage configuration is unset - using hardcoded default graph root \"/var/lib/containers/storage\""
host:
  arch: amd64
  buildahVersion: 1.29.0
  cgroupControllers:
  - cpuset
  - cpu
  - io
  - memory
  - hugetlb
  - pids
  - misc
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: app-containers/conmon-2.1.7
    path: /usr/libexec/podman/conmon
    version: 'conmon version 2.1.7, commit: v2.1.7'
  cpuUtilization:
    idlePercent: 99.16
    systemPercent: 0.41
    userPercent: 0.43
  cpus: 6
  distribution:
    distribution: gentoo
    version: "2.13"
  eventLogger: journald
  hostname: gentoo
  idMappings:
    gidmap: null
    uidmap: null
  kernel: 6.1.38-gentoo-dist
  linkmode: dynamic
  logDriver: journald
  memFree: 60261470208
  memTotal: 67280408576
  networkBackend: cni
  ociRuntime:
    name: crun
    package: app-containers/crun-1.8.1
    path: /usr/bin/crun
    version: |-
      crun version 1.8.1
      commit: f8a096be060b22ccd3d5f3ebe44108517fbf6c30
      rundir: /run/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +YAJL
  os: linux
  remoteSocket:
    path: /run/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: false
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: false
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: app-containers/slirp4netns-1.2.0
    version: |-
      slirp4netns version 1.2.0
      commit: 656041d45cfca7a4176f6b7eed9e4fe6c11e8383
      libslirp: 4.7.0
      SLIRP_CONFIG_VERSION_MAX: 4
      libseccomp: 2.5.4
  swapFree: 4294963200
  swapTotal: 4294963200
  uptime: 6h 27m 12.00s (Approximately 0.25 days)
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  localhost:5000:
    Blocked: false
    Insecure: true
    Location: localhost:5000
    MirrorByDigestOnly: false
    Mirrors: null
    Prefix: localhost:5000
    PullFromMirror: ""
  search:
  - docker.io
store:
  configFile: /etc/containers/storage.conf
  containerStore:
    number: 0
    paused: 0
    running: 0
    stopped: 0
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /mnt/btrfs-test/new-test-subdir
  graphRootAllocated: 1073741824
  graphRootUsed: 4030464
  graphStatus:
    Backing Filesystem: btrfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 0
  runRoot: /mnt/btrfs-test/new-test-subdir
  transientStore: false
  volumePath: /var/lib/containers/storage/volumes
version:
  APIVersion: 4.4.4
  Built: 1692053745
  BuiltTime: Mon Aug 14 22:55:45 2023
  GitCommit: c8223435f49a
  GoVersion: go1.20.5
  Os: linux
  OsArch: linux/amd64
  Version: 4.4.4

We're still vulnerable to the bug until we delete bolt_state.db:

# mv /var/lib/containers/storage/libpod/bolt_state.db /var/lib/containers/storage/libpod/bolt_state.db.old
# cd /mnt/btrfs-test/new-test-subdir && podman info | grep 'graphRoot:'
WARN[0000] Storage configuration is unset - using hardcoded default graph root "/var/lib/containers/storage"
Error: no storage runroot or graphroot specified: missing necessary StoreOptions

Anyway, the author of the issue identified bolt_state.db as the culprit, deleted the file, and closed the ticket. However, he did point out that this bug could lead to the deletion of important directories, but unfortunately no action was taken and no new ticket was created.
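
For anyone stuck on an affected version, the corresponding cleanup is the same file move shown above (a sketch; back the file up rather than deleting it, and make sure storage.conf is set correctly first):

# mv /var/lib/containers/storage/libpod/bolt_state.db{,.bak}
# podman info --format '{{.Store.GraphRoot}}'   # should now come from storage.conf, or fail loudly if it is unset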

First data loss report

On 2023-04-20 I opened #18287, reporting the deletion of /root (which must have been my CWD). The ticket was closed on the same day, asking for a reproducer. I failed to reproduce the issue because I had already upgraded to Podman 4.3.1, and, seeing the closed ticket, I lost interest in investigating further.

Second data loss report

Just 5 days later, on 2023-04-25 another user reported the same bug (that's the current ticket).

The issue was investigated and it turned out that a fix for this had already landed in containers/storage on 2023-02-21: containers/storage#1510. This fix is present in podman versions 4.5.0+ (but again, it was not backported).

Third data loss report

2023-06-20: #17384 (comment).

Conclusion

I appreciate that the issue(s) here have been fixed. I am not familiar with your release strategy, but you might want to backport these fixes so users of older versions don't suffer data loss -- the 3rd data loss incident could have been avoided.

It might also be worth mentioning these backports to distro maintainers, so they reach users stuck on earlier versions.

More defensive and less stateful code would also be good (e.g. why would podman info report different results between the first and second invocation, without changing any of the configuration files?), but I know this is easier said than done.

Podman 4.6.0+ will also print the graphRoot/runRoot directories before removal thanks to #18354, for which I am grateful!

vrothberg added a commit to vrothberg/libpod that referenced this issue Sep 6, 2023
Backport of commit 6aaf6a2.

see containers#18349, containers#18295, and containers#19870

Signed-off-by: Valentin Rothberg <[email protected]>
vrothberg pushed a commit to vrothberg/libpod that referenced this issue Sep 6, 2023
Backport of commit 6aaf6a2.

see containers#18349 and containers#18295

Signed-off-by: Paul Holzinger <[email protected]>
Signed-off-by: Valentin Rothberg <[email protected]>
vrothberg pushed a commit to vrothberg/libpod that referenced this issue Sep 6, 2023
Backport of commit 6aaf6a2.

see containers#18349, containers#18295, and containers#19870

Signed-off-by: Paul Holzinger <[email protected]>
Signed-off-by: Valentin Rothberg <[email protected]>
@github-actions github-actions bot added the locked - please file new issue/PR label Nov 13, 2023
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Nov 13, 2023