
podman system reset deletes user directory?!? #19870

Closed
NobodysNightmare opened this issue Sep 6, 2023 · 11 comments
Labels
kind/bug Categorizes issue or PR as related to a bug. locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments.

Comments

@NobodysNightmare

NobodysNightmare commented Sep 6, 2023

Issue Description

podman system reset just wiped my entire home directory.

Steps to reproduce the issue

$ sudo podman system reset

WARNING! This will remove:
        - all containers
        - all pods
        - all images
        - all build cache
Are you sure you want to continue? [y/N] y
ERRO[0066] unlinkat /home/myusername: device or resource busy 
ERRO[0066] unlinkat /home/myusername: device or resource busy

$ ls -lh /home/myusername
total 0

Describe the results you received

I will set up my machine from scratch I guess...

Describe the results you expected

Container stuff being deleted, not my computer.

podman info output

Rootless (This is what I wanted to fix):


❯ podman info
Error: 'overlay' is not supported over zfs, a mount_program is required: backing file system is unsupported for this graph driver

With root:

❯ sudo podman info
host:
  arch: amd64
  buildahVersion: 1.23.1
  cgroupControllers:
  - cpuset
  - cpu
  - io
  - memory
  - hugetlb
  - pids
  - rdma
  - misc
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: 'conmon: /usr/bin/conmon'
    path: /usr/bin/conmon
    version: 'conmon version 2.0.25, commit: unknown'
  cpus: 16
  distribution:
    codename: jammy
    distribution: ubuntu
    version: "22.04"
  eventLogger: journald
  hostname: BN02WT3
  idMappings:
    gidmap: null
    uidmap: null
  kernel: 6.2.0-32-generic
  linkmode: dynamic
  logDriver: journald
  memFree: 24536809472
  memTotal: 32839950336
  ociRuntime:
    name: crun
    package: 'crun: /usr/bin/crun'
    path: /usr/bin/crun
    version: |-
      crun version 0.17
      commit: 0e9229ae34caaebcb86f1fde18de3acaf18c6d9a
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +YAJL
  os: linux
  remoteSocket:
    exists: true
    path: /run/podman/podman.sock
  security:
    apparmorEnabled: true
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: false
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: false
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: 'slirp4netns: /usr/bin/slirp4netns'
    version: |-
      slirp4netns version 1.0.1
      commit: 6a7b16babc95b6a3056b33fb45b74a6f62262dd4
      libslirp: 4.6.1
  swapFree: 2147479552
  swapTotal: 2147479552
  uptime: 40m 40.82s
plugins:
  log:
  - k8s-file
  - none
  - journald
  network:
  - bridge
  - macvlan
  volume:
  - local
registries: {}
store:
  configFile: /etc/containers/storage.conf
  containerStore:
    number: 0
    paused: 0
    running: 0
    stopped: 0
  graphDriverName: zfs
  graphOptions: {}
  graphRoot: /home/myusername
  graphStatus:
    Compression: lz4
    Parent Dataset: rpool/USERDATA/myusername_ac0sfr
    Parent Quota: "no"
    Space Available: "421820657664"
    Space Used By Parent: "93089792"
    Zpool: rpool
    Zpool Health: ONLINE
  imageStore:
    number: 0
  runRoot: /home/myusername
  volumePath: /var/lib/containers/storage/volumes
version:
  APIVersion: 3.4.4
  Built: 0
  BuiltTime: Thu Jan  1 01:00:00 1970
  GitCommit: ""
  GoVersion: go1.18.1
  OsArch: linux/amd64
  Version: 3.4.4


Podman in a container

No

Privileged Or Rootless

None

Upstream Latest Release

Yes

Additional environment details

Ubuntu 22.04; this was a fresh installation of Podman (`sudo apt install podman`) because I thought it would be a feasible replacement for Docker (which was uninstalled beforehand).

When I write above that I wanted to fix a specific error, I essentially wanted to [follow this guide](https://www.jwillikers.com/podman-with-btrfs-and-zfs); however, I never got to the part where I would configure the storage driver, because step 1 of the guide is the system reset...
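For context, the rootless error quoted above (`'overlay' is not supported over zfs, a mount_program is required`) is commonly worked around by pointing the overlay driver at fuse-overlayfs in storage.conf. A minimal sketch, assuming fuse-overlayfs is installed at `/usr/bin/fuse-overlayfs`; this is illustrative, not the configuration from this thread:

```toml
# /etc/containers/storage.conf (or ~/.config/containers/storage.conf for rootless)
[storage]
driver = "overlay"

[storage.options.overlay]
# fuse-overlayfs lets the overlay driver work on top of ZFS for rootless setups
mount_program = "/usr/bin/fuse-overlayfs"
```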

Additional information

_No response_
@NobodysNightmare NobodysNightmare added the kind/bug Categorizes issue or PR as related to a bug. label Sep 6, 2023
@vrothberg
Member

Thank you for reaching out, @NobodysNightmare.

I think we need to backport commit 6aaf6a28435c. podman system reset removes the "graph root" directory where containers, images and other things are stored.

Did you configure Podman to use your $HOME directory for that?
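Before running a destructive reset, the configured directories can be inspected up front (on a live system, `sudo podman info --format '{{.Store.GraphRoot}} {{.Store.RunRoot}}'` prints them). A hypothetical sanity check along the lines of the safeguard discussed below; the sample storage.conf and the deny rule are illustrations, not Podman behavior:

```shell
# Self-contained sketch: parse a sample storage.conf instead of a live system.
cat > /tmp/storage.conf <<'EOF'
[storage]
driver = "zfs"
graphroot = "/home/myusername"
EOF

# Extract the configured graphroot and refuse if it points into /home
graphroot=$(sed -n 's/^graphroot *= *"\(.*\)"/\1/p' /tmp/storage.conf)
case "$graphroot" in
  /home/*) echo "refusing: graphroot points into a home directory: $graphroot" ;;
  "")      echo "no graphroot configured; Podman would use its built-in default" ;;
  *)       echo "graphroot looks safe: $graphroot" ;;
esac
```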

@vrothberg
Member

@rhatdan @mheon I vote for backporting this to previous versions. The above commit made it into 4.6, but the bug can have devastating consequences, so I feel backports to 3.4 and 4.4 are needed, at least for Debian/Ubuntu.

Cc: @siretart

@NobodysNightmare
Author

Did you configure Podman to use your $HOME directory for that?

Does this happen to be the default? I did not specifically configure anything after installing podman. However, I executed the command from my home directory (i.e. pwd would give you /home/myusername). So if the current working directory has an influence, that could also be it.

@NobodysNightmare
Author

I now read the commit you suggest to backport. I can confirm that this would have helped, because I definitely read the disclaimer before proceeding 👍

@vrothberg
Member

Does this happen to be the default?

No, $HOME should never be the default, and Podman does not pick it on its own.

I did not specifically configure anything after installing podman. However, I executed the command from my home directory (i.e. pwd would give you /home/myusername). So if the current working directory has an influence, that could also be it.

How did you log into the machine/user? Can you share the output of printenv (if possible)?

@NobodysNightmare
Author

Weird. So disclaimer: I am currently in the process of fixing my computer and I can't guarantee that this is the env that was present when I executed podman (and lots of "evidence" was deleted in the process):

SHELL=/bin/bash
SESSION_MANAGER=local/BN02WT3:@/tmp/.ICE-unix/9635,unix/BN02WT3:/tmp/.ICE-unix/9635
QT_ACCESSIBILITY=1
COLORTERM=truecolor
XDG_CONFIG_DIRS=/etc/xdg/xdg-ubuntu:/etc/xdg
SSH_AGENT_LAUNCHER=gnome-keyring
XDG_MENU_PREFIX=gnome-
GNOME_DESKTOP_SESSION_ID=this-is-deprecated
LC_ADDRESS=de_DE.UTF-8
GNOME_SHELL_SESSION_MODE=ubuntu
LC_NAME=de_DE.UTF-8
SSH_AUTH_SOCK=/run/user/1000/keyring/ssh
GIT_PS1_SHOWDIRTYSTATE=1
XMODIFIERS=@im=ibus
DESKTOP_SESSION=ubuntu
LC_MONETARY=de_DE.UTF-8
RBENV_SHELL=bash
GTK_MODULES=gail:atk-bridge
PWD=/home/****/development/****
LOGNAME=****
XDG_SESSION_DESKTOP=ubuntu
XDG_SESSION_TYPE=wayland
SYSTEMD_EXEC_PID=11444
XAUTHORITY=/run/user/1000/.mutter-Xwaylandauth.911XA2
GIT_PS1_SHOWCOLORHINTS=1
CDPATH=.:/home/****/development/****:/home/****/development/private
HOME=/home/****
USERNAME=****
IM_CONFIG_PHASE=1
LC_PAPER=de_DE.UTF-8
LANG=en_US.UTF-8
LS_COLORS=rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=00:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.zst=01;31:*.tzst=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.wim=01;31:*.swm=01;31:*.dwm=01;31:*.esd=01;31:*.jpg=01;35:*.jpeg=01;35:*.mjpg=01;35:*.mjpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.webp=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.m4a=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.oga=00;36:*.opus=00;36:*.spx=00;36:*.xspf=00;36:
XDG_CURRENT_DESKTOP=ubuntu:GNOME
STARSHIP_SHELL=bash
VTE_VERSION=6800
WAYLAND_DISPLAY=wayland-0
STARSHIP_CONFIG=/home/****/.dotfiles/starship.toml
GNOME_TERMINAL_SCREEN=/org/gnome/Terminal/screen/60c0c6a6_a801_4d14_a724_b8b785c3bddf
STARSHIP_SESSION_KEY=1077331941152352
GIT_PS1_SHOWSTASHSTATE=1
GNOME_SETUP_DISPLAY=:1
GIT_PS1_SHOWCOLORHINT=1
LESSCLOSE=/usr/bin/lesspipe %s %s
XDG_SESSION_CLASS=user
TERM=xterm-256color
LC_IDENTIFICATION=de_DE.UTF-8
LESSOPEN=| /usr/bin/lesspipe %s
USER=****
GNOME_TERMINAL_SERVICE=:1.102
DISPLAY=:0
SHLVL=1
LC_TELEPHONE=de_DE.UTF-8
PROMPT_DIRTRIM=2
QT_IM_MODULE=ibus
LC_MEASUREMENT=de_DE.UTF-8
XDG_RUNTIME_DIR=/run/user/1000
LC_TIME=de_DE.UTF-8
XDG_DATA_DIRS=/usr/share/ubuntu:/usr/local/share/:/usr/share/:/var/lib/snapd/desktop
PATH=~/.dotfiles/bin:/home/****/go/bin:/home/****/.rbenv/shims:/home/****/.rbenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/snap/bin:/usr/local/go/bin
GDMSESSION=ubuntu
DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/1000/bus
GIT_PS1_SHOWUNTRACKEDFILES=1
LC_NUMERIC=de_DE.UTF-8
GOPATH=/home/****/go
_=/usr/bin/printenv

@NobodysNightmare
Author

Looking at the Podman info output, this is what strikes me as possibly relevant:

store:
  ...
  graphRoot: /home/myusername
  graphStatus:
    ...
    Parent Dataset: rpool/USERDATA/myusername_ac0sfr
    ...

So apparently the graph root was set to my home directory.

Seeing that it also indicates /etc/containers/storage.conf as the store.configFile, I have to say that I briefly edited this file according to this post, meaning that at some point it contained the following content:

[storage]
driver = "zfs"

I am semi-sure that this file did not exist at the point I ran the system reset, but if it did, could that explain the issue?

I can, however, confirm that I neither created the corresponding file in my user directory nor put any other content in that file.

@vrothberg
Member

Can you share the output (if present) of /etc/containers/storage.conf and $HOME/.config/containers/storage.conf?

@giuseppe
Member

giuseppe commented Sep 6, 2023

I think this can be the same root cause as: #20324

@NobodysNightmare
Author

Can you share the output (if present) of /etc/containers/storage.conf and $HOME/.config/containers/storage.conf?

See my freshly posted message above (not sure whether we sent our messages at the same time):

  • /etc/containers/storage.conf either no longer existed (though it had existed before I ran the command) or it still existed and contained the following config:

[storage]
driver = "zfs"

  • $HOME/.config/containers/storage.conf never existed (or at least was never created by me directly)
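Given that graphRoot ended up as /home/myusername with only the driver set, one defensive option is to pin the storage paths explicitly rather than relying on defaults. A sketch with assumed (conventional root-mode) paths, not taken from this thread:

```toml
[storage]
driver = "zfs"
# Pinning these explicitly ensures a mis-resolved default can never
# point at a home directory (example paths, adjust for your system).
graphroot = "/var/lib/containers/storage"
runroot = "/run/containers/storage"
```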

vrothberg added a commit to vrothberg/libpod that referenced this issue Sep 6, 2023
Backport of commit 6aaf6a2.

`podman system reset` says it will delete containers, images, networks, etc.
However, it will also delete the graphRoot and runRoot directories.
Normally this is not an issue, but in some cases these directories
were set to the user's home directory or some other important system
directory.

As a first step, simply show the directories that are configured and thus
will be deleted by reset. As a future step we could implement a
safeguard that will not delete certain known important directories, but
I tried to keep it simple for now.

[NO NEW TESTS NEEDED]

see containers#18349, containers#18295, and containers#19870

Signed-off-by: Valentin Rothberg <[email protected]>
vrothberg pushed a commit to vrothberg/libpod that referenced this issue Sep 6, 2023
Backport of commit 6aaf6a2 (same commit message as the commit above).

Signed-off-by: Paul Holzinger <[email protected]>
Signed-off-by: Valentin Rothberg <[email protected]>
@vrothberg
Member

#19874 has merged, so I am going to close the issue here. The remainder is backporting downstream.

But we will continue investigating the issue.

@github-actions github-actions bot added the locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments. label Dec 24, 2023
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Dec 24, 2023
3 participants