
storage.conf mishandling with zfs storage driver #20324

Open
ajakk opened this issue Aug 20, 2023 · 22 comments
Labels
kind/bug Categorizes issue or PR as related to a bug.

Comments


ajakk commented Aug 20, 2023

Issue Description

Using this storage.conf:

[storage]
driver = "zfs"
graphroot = "/var/lib/containers/storage"
#runroot = "/var/lib/containers/runroot"

[storage.options.zfs]
fsname = "zpool/podman"

There seems to be no default for runroot:

~/podman # ./bin/podman system info
ERRO[0000] runroot must be set

Uncommenting the runroot option does let that command run, but it then shows runroot and graphroot as the current working directory (output heavily abbreviated; full output below):

sirius ~/podman # ./bin/podman system info
store:
  configFile: /etc/containers/storage.conf
  graphDriverName: zfs
  graphOptions:
    zfs.fsname: zpool/podman
  graphRoot: /root/podman
  runRoot: /root/podman
  volumePath: /var/lib/containers/storage/volumes

Steps to reproduce the issue

  1. Clone Podman (reproduced with commit 20f28e5 and with Gentoo's 4.5.0)
  2. Run make
  3. Use bin/podman

Describe the results you received

Podman litters the current working directory with zfs-related directories and files:

~/podman # git status
On branch main
Your branch is up to date with 'origin/main'.

Untracked files:
  (use "git add <file>..." to include in what will be committed)
        defaultNetworkBackend
        storage.lock
        userns.lock
        zfs-containers/
        zfs-images/
        zfs-layers/

nothing added to commit but untracked files present (use "git add" to track)

Describe the results you expected

Podman should respect the configuration file and provide a sane default for the runroot.
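
Something like the following fallback is what I'd expect (a hypothetical Go sketch; resolveRunRoot and defaultRunRoot are illustrative names, and the path just mirrors the commented-out storage.conf line above rather than c/storage's actual defaults):

package main

import "fmt"

// defaultRunRoot mirrors the commented-out storage.conf line above; the
// constant and helper are illustrative, not c/storage's real code.
const defaultRunRoot = "/var/lib/containers/runroot"

// resolveRunRoot falls back to a sane default instead of erroring out
// (or silently using the current working directory) when runroot is unset.
func resolveRunRoot(configured string) string {
	if configured == "" {
		return defaultRunRoot
	}
	return configured
}

func main() {
	fmt.Println(resolveRunRoot(""))                // /var/lib/containers/runroot
	fmt.Println(resolveRunRoot("/run/containers")) // /run/containers
}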

podman info output

~/podman # ./bin/podman info
host:
  arch: amd64
  buildahVersion: 1.32.0-dev
  cgroupControllers:
  - cpuset
  - cpu
  - io
  - memory
  - hugetlb
  - pids
  - misc
  cgroupManager: cgroupfs
  cgroupVersion: v2
  conmon:
    package: app-containers/conmon-2.1.7
    path: /usr/libexec/podman/conmon
    version: 'conmon version 2.1.7, commit: v2.1.7'
  cpuUtilization:
    idlePercent: 99.55
    systemPercent: 0.26
    userPercent: 0.19
  cpus: 24
  databaseBackend: boltdb
  distribution:
    distribution: gentoo
    version: "2.14"
  eventLogger: journald
  freeLocks: 2045
  hostname: sirius
  idMappings:
    gidmap: null
    uidmap: null
  kernel: 6.1.41-gentoo-dist-hardened
  linkmode: dynamic
  logDriver: journald
  memFree: 17448341504
  memTotal: 33621889024
  networkBackend: cni
  networkBackendInfo:
    backend: cni
    dns: {}
    package: app-containers/cni-plugins-1.2.0
    path: /opt/cni/bin
  ociRuntime:
    name: crun
    package: app-containers/crun-1.8.1
    path: /usr/bin/crun
    version: |-
      crun version 1.8.1
      commit: f8a096be060b22ccd3d5f3ebe44108517fbf6c30
      rundir: /run/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +YAJL
  os: linux
  pasta:
    executable: ""
    package: ""
    version: ""
  remoteSocket:
    exists: false
    path: /run/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: false
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: false
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: app-containers/slirp4netns-1.2.0
    version: |-
      slirp4netns version 1.2.0
      commit: 656041d45cfca7a4176f6b7eed9e4fe6c11e8383
      libslirp: 4.7.0
      SLIRP_CONFIG_VERSION_MAX: 3
      libseccomp: 2.5.4
  swapFree: 17179865088
  swapTotal: 17179865088
  uptime: 148h 41m 49.00s (Approximately 6.17 days)
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - docker.io
  - quay.io
  - registry.fedoraproject.org
store:
  configFile: /etc/containers/storage.conf
  containerStore:
    number: 3
    paused: 0
    running: 0
    stopped: 3
  graphDriverName: zfs
  graphOptions:
    zfs.fsname: zpool/podman
  graphRoot: /root/podman
  graphRootAllocated: 964842221568
  graphRootUsed: 350250168320
  graphStatus:
    Compression: lz4
    Parent Dataset: zpool/podman
    Parent Quota: "no"
    Space Available: "81018269696"
    Space Used By Parent: "6990344192"
    Zpool: zpool
    Zpool Health: ONLINE
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 0
  runRoot: /root/podman
  transientStore: false
  volumePath: /var/lib/containers/storage/volumes
version:
  APIVersion: 4.7.0-dev
  Built: 1692496059
  BuiltTime: Sat Aug 19 18:47:39 2023
  GitCommit: 20f28e538d21aef62eb8159e6689e4e71ade0b87
  GoVersion: go1.20.7
  Os: linux
  OsArch: linux/amd64
  Version: 4.7.0-dev

Podman in a container

No

Privileged Or Rootless

None

Upstream Latest Release

Yes

Additional environment details

No response

Additional information

No response

@ajakk ajakk added the kind/bug Categorizes issue or PR as related to a bug. label Aug 20, 2023
Luap99 (Member) commented Aug 21, 2023

@giuseppe PTAL

giuseppe (Member) commented

From the title, it seems the issue happens only with ZFS. Can you confirm that?

ajakk (Author) commented Aug 24, 2023

ZFS is the only storage backend I've reproduced this with, but I haven't tried anything beyond it and the default.

giuseppe (Member) commented

I've never tried the ZFS backend myself, and I'm not sure what state it's in.

Do you have any particular reason for using it instead of overlay?

ajakk (Author) commented Aug 24, 2023

Mostly because it's easier to manage ZFS datasets than a directory on my root filesystem. And I get the niceties ZFS enables, like snapshots.

github-actions commented

A friendly reminder that this issue had no activity for 30 days.

ajakk (Author) commented Sep 24, 2023

> A friendly reminder that this issue had no activity for 30 days.

And yet, no sign of a fix...

giuseppe (Member) commented

Nobody from the core team is working on the ZFS backend, since overlay is what we support and suggest people use.

There likely won't be any update unless someone steps up and looks into it.

@rhatdan rhatdan transferred this issue from containers/podman Sep 24, 2023
rhatdan (Member) commented Sep 24, 2023

Most likely a containers/storage bug, so transferring it over there. If you are interested in opening a PR to fix it, that would be welcome.

shlande commented Oct 3, 2023

I also encountered this problem. It seems like either a Podman bug or just a wrong configuration.

   363:		// Grab config from the database so we can reset some defaults
   364:		dbConfig, err := runtime.state.GetDBConfig()
=> 365:		if err != nil {
   366:			if runtime.doReset {
   367:				// We can at least delete the DB and the static files
   368:				// directory.
   369:				// Can't safely touch anything else because we aren't
   370:				// sure of other directories.
(dlv) print dbConfig
*github.com/containers/podman/v4/libpod.DBConfig {
	LibpodRoot: "/var/lib/containers/storage/libpod",
	LibpodTmp: "/run/libpod",
	StorageRoot: ".",
	StorageTmp: ".",
	GraphDriver: "zfs",
	VolumePath: "/var/lib/containers/storage/volumes",}

The Podman runtime uses boltDB to store some state, including those wrong storageRoot and storageTmp values. These are then passed to the zfs driver and lead to this strange behavior.

In my case, after deleting the boltDB file (/var/lib/containers/storage/libpod/bolt_state.db), everything went back to normal.

I have no idea how these boltDB records are created.

Reproduced it in version 4.4.1. Podman leaves a "." record in bolt_state.db when runRoot is empty, which persists even after upgrading.
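
For anyone who wants to check whether their own bolt_state.db carries the bad "." paths, a read-only dump along these lines should work (a sketch assuming the default DB location mentioned above; it prints every top-level bucket rather than naming specific ones, since the exact bucket and key names are libpod internals):

package main

import (
	"fmt"
	"log"

	bolt "go.etcd.io/bbolt"
)

func main() {
	// Default DB location from the comment above; adjust if your
	// graphroot differs. ReadOnly avoids touching live state.
	db, err := bolt.Open("/var/lib/containers/storage/libpod/bolt_state.db",
		0o600, &bolt.Options{ReadOnly: true})
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	if err := db.View(func(tx *bolt.Tx) error {
		// Walk every top-level bucket so the persisted StorageRoot and
		// StorageTmp entries show up wherever libpod keeps them.
		return tx.ForEach(func(name []byte, b *bolt.Bucket) error {
			return b.ForEach(func(k, v []byte) error {
				// v is nil for nested buckets; %q prints that as "".
				fmt.Printf("%s/%s = %q\n", name, k, v)
				return nil
			})
		})
	}); err != nil {
		log.Fatal(err)
	}
}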

@rhatdan rhatdan transferred this issue from containers/storage Oct 10, 2023
rhatdan (Member) commented Oct 10, 2023

@mheon PTAL

mheon (Member) commented Oct 10, 2023

Bolt here is placing the configured root and runroot into the DB without doing any validation on them, so I suspect it's only showing that we have bad values for the storage config. @shlande What does your storage.conf look like?
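
For illustration, the kind of guard being described could reject the bad values before they are ever persisted; a minimal sketch (validateStorePath is a hypothetical name, not libpod's actual code):

package main

import (
	"fmt"
	"log"
	"path/filepath"
)

// validateStorePath is a hypothetical guard, not libpod's actual code:
// reject empty or relative paths, such as the "." this bug leaves in
// bolt_state.db, before persisting them.
func validateStorePath(name, path string) error {
	if path == "" {
		return fmt.Errorf("%s must be set", name)
	}
	if !filepath.IsAbs(path) {
		return fmt.Errorf("%s must be an absolute path, got %q", name, path)
	}
	return nil
}

func main() {
	for _, p := range []string{"/var/lib/containers/storage", ".", ""} {
		if err := validateStorePath("graphroot", p); err != nil {
			log.Println(err)
		}
	}
}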

shlande commented Oct 13, 2023

Same as ajakk's, version 4.4.1:

[storage]
driver = "zfs"
graphroot = "/var/lib/containers/storage"
#runroot = "/var/lib/containers/runroot"

[storage.options.zfs]
fsname = "storage"

Newer versions of Podman may complain about this config.

mheon (Member) commented Oct 13, 2023

I think this is probably a ZFS-specific issue, and probably in c/storage because of that. Graphroot is explicitly specified, yet it has somehow been reset to the CWD by the time Podman has finished initializing.

ajakk (Author) commented Oct 14, 2023

Yes, I've only seen this with the ZFS storage driver.

virtorgan commented

Just installed Podman on a fresh Debian 12. apt install podman results in:

Could not execute systemctl: at /usr/bin/deb-systemd-invoke line 145.
Could not execute systemctl: at /usr/bin/deb-systemd-invoke line 145.
Could not execute systemctl: at /usr/bin/deb-systemd-invoke line 145.

Then I looked at the log messages from the service (systemctl status podman):

Error: 'overlay' is not supported over zfs, a mount_program is required:

Then I looked for /etc/containers/storage.conf: it does not exist. Same for ~/.config/containers.

Does this mean Podman is not compatible with ZFS and btrfs filesystems?

mheon (Member) commented Dec 10, 2023

It does not. I use Podman over btrfs every day using the overlay driver. You are not using the ZFS graph driver, so your issue is unrelated to this one. Please open a new bug and fill out the full bug template.

tazmo commented Dec 11, 2023

With the recent addition of overlayfs support in openzfs-2.2.0, any chance this could be addressed now?

mheon (Member) commented Dec 11, 2023

No, this is specific to the ZFS graphdriver, which does not use overlayfs.

h0tw1r3 commented Dec 24, 2023

@tazmo this is easy to do already. Create a mount helper script /usr/local/bin/overmount (and make it executable):

#!/bin/sh
# Forward all arguments to an overlay mount; "$@" keeps each argument
# intact, unlike the unquoted $*.
exec /bin/mount -t overlay overlay "$@"

Then add to /etc/containers/storage.conf:

[storage.options]
mount_program = "/usr/local/bin/overmount"

DFINITYManu commented

That suggestion doesn't work, @h0tw1r3:

$ sudo podman image exists ...
[sudo] password for user: 
WARN[0000] Storage configuration is unset - using hardcoded default graph root "/var/lib/containers/storage" 
WARN[0000] Storage configuration is unset - using hardcoded default graph root "/var/lib/containers/storage" 
WARN[0000] Storage configuration is unset - using hardcoded default graph root "/var/lib/containers/storage" 
Error: Unknown option zfs.mount_program

giuseppe (Member) commented

mount_program is specific to overlay. It has no effect with other drivers.
