storage.conf mishandling with zfs storage driver #20324
@giuseppe PTAL
From the title, it seems the issue happens only with zfs. Can you confirm that?
ZFS is the only storage backend I've reproduced this with, but I haven't tried anything beyond that and the default.
I've never tried the ZFS backend myself and am not sure what state it is in. Do you have any particular reason for using it instead of overlay?
Mostly because it's easier to manage ZFS datasets than a directory on my root filesystem. And I get the niceties that ZFS enables, like snapshots.
A friendly reminder that this issue had no activity for 30 days.
And yet, no sign of a fix...
Nobody from the core team is working on the zfs backend, since overlay is what we support and suggest people use. There likely won't be any update unless someone steps up and looks into it.
Most likely a containers/storage bug, so transferring it over there. If you are interested in opening a PR to fix it, that would be welcome.
Also encountered this problem. It seems like a podman bug or just a wrong configuration.
The podman runtime uses BoltDB to store some state, including those wrong storageRoot and storageTmp values. These values are then passed to the zfs driver and lead to this strange behavior. In my case, after deleting the BoltDB file (/var/lib/containers/storage/libpod/bolt_state.db), everything went back to normal.
Reproduced it in version 4.4.1. Podman leaves a "." record in bolt_state.db when runRoot is empty, which can affect even upgraded installations.
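The workaround described above can be sketched as a small script. Note this is only an illustration of the steps, not an official podman tool: the real state file lives at `/var/lib/containers/storage/libpod/bolt_state.db`, but the demo below operates on a temporary stand-in path so it is safe to run anywhere.

```shell
# Sketch of the workaround: delete podman's stale BoltDB state file so
# podman recreates it with the correct storage paths on its next start.
# Real path: /var/lib/containers/storage/libpod/bolt_state.db
# Demo path: a temporary stand-in, so this snippet is safe to execute.
state_db="$(mktemp -d)/bolt_state.db"
touch "$state_db"               # stand-in for the stale state file

cp "$state_db" "$state_db.bak"  # keep a backup in case anything else breaks
rm "$state_db"                  # podman rebuilds this on the next invocation

[ ! -e "$state_db" ] && echo "stale bolt state removed"
```

Deleting the file discards any container state stored in it, so the backup step matters if you have running containers.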
@mheon PTAL
Bolt here is placing the configured root and runroot into the DB without doing any validation on them, so I suspect it is only surfacing bad values from the storage config. @shlande What does your storage.conf look like?
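For reference, a minimal storage.conf using the zfs driver might look like the sketch below. The dataset name and paths are illustrative, not the reporter's actual configuration:

```toml
# /etc/containers/storage.conf -- illustrative zfs-driver sketch
[storage]
driver = "zfs"
# Without an explicit runroot, affected versions fall back to the
# current working directory (the behavior this issue reports).
runroot = "/run/containers/storage"
graphroot = "/var/lib/containers/storage"

[storage.options.zfs]
# Parent dataset under which container datasets are created (hypothetical name)
fsname = "tank/containers"
```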
Same as ajakk, version 4.4.1.
Newer versions of podman may complain about this config.
I think this is probably a ZFS-specific issue, and probably in c/storage because of that. Graphroot is explicitly specified, yet has somehow been reset to the CWD by the time Podman has finished initializing.
Yes, I've only seen this with the ZFS storage driver. |
Just installed podman on a fresh Debian 12, then looked at the log messages from services (systemctl status podman), then looked for /etc/containers/storage.conf: it does not exist. Does this mean Podman is not compatible with zfs and btrfs file systems?
It does not. I use Podman over btrfs every day using the overlay driver. You are not using the ZFS graph driver, so your issue is unrelated to this one. Please open a new bug and fill out the full bug template. |
With the recent addition of overlayfs support in openzfs-2.2.0, any chance this could be addressed now? |
No, this is specific to the ZFS graphdriver, which does not use overlayfs.
@tazmo easy to do already.
Then add to:
That suggestion doesn't work, @h0tw1r3:
Issue Description

Using this storage.conf:

There seems to be no default for `runroot`:

Uncommenting the runroot option does enable me to use that command, but shows `runroot` and `graphroot` as being the current working directory (output heavily abbreviated, full output below):

Steps to reproduce the issue
make
bin/podman
Describe the results you received
Podman litters the current working directory with zfs-related directories and files:
Describe the results you expected
Podman should respect the configuration file, and provide a sane default for the `runroot`.

podman info output
Podman in a container
No
Privileged Or Rootless
None
Upstream Latest Release
Yes
Additional environment details
No response
Additional information
No response