Filesystems appear simultaneously mounted and unmounted - can't export pool, unmount, mount, or see files. #9082
Comments
Btw, I could only reproduce this on Linux, but recently found this has been reported on FreeBSD as well: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=237517
I can reproduce it on Linux easily, with Debian's kernel. Is there any additional information I can provide?
@ttelford, if possible, can you upgrade to the zfs-0.8.3-1 packages provided by Debian and confirm whether this is still an issue?
@behlendorf just updated to 0.8.3, and can confirm the problem still exists, reliably, over several reboots. I also still get a general protection fault from the kernel, in the zfs module, every time I run `mount -a`. As long as I avoid the -a option to mount, I have no problems.
Addition: the behavior is slightly different. The filesystems show as unmounted by both
I'm also getting this on my system (Manjaro). Happy to provide more, and more precise, info when I can; please let me know what would be required:
This seems somewhat serious to me as a user: this data is totally inaccessible to me at the moment, and, thinking of it as a state machine, I'd presume mounted and unmounted should be the only two options, and mutually exclusive ones at that.

I'm also slightly confused by the version numbers while I'm here. My Mint system, previously stuck on the Bionic package base, was on ZoL version 0.7.something, and following the move to a Disco base was then on 0.8, in line with the Debian packages above. However, the output is reporting 2.0. Has the project really moved through two major releases since then? Or is this caused by a change between the upstream versioning from illumos-gate vs. the Linux sources?
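If it helps pin down which versions are actually in play, here is a quick sketch of how to check (`zfs version` exists in 0.8 and later; the `/sys` path applies on Linux only):

```sh
# Print the userland tools version and the kernel module version;
# a mismatch between the two can itself cause odd behaviour.
zfs version

# Or ask the loaded kernel module directly (Linux):
cat /sys/module/zfs/version
modinfo zfs | grep -i '^version'
```

For what it's worth, the jump from 0.8.x to 2.0 came with the rename from ZFS on Linux to OpenZFS, so seeing 2.0 right after 0.8 does not mean two major releases were skipped.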
In my case, I found what was causing the issue, though I'm not sure it isn't still a bug. My issues were on the `burp` pool: I had manually set some mountpoints for it, and the parent/top-level zfs filesystem (i.e. the zpool root) had a trailing slash in the mountpoint. By repairing that issue (which required unmounting every filesystem in the pool, as the problem was at the top level), everything mounts fine now.

@stellarpower: Can you verify whether or not you have a trailing slash in your mountpoints?
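For anyone who wants to check for the same thing, a minimal sketch (assuming the pool is called `burp` as above; the corrected path shown is only an example, not the reporter's actual mountpoint):

```sh
# Show the top-level mountpoint exactly as stored; look for a stray
# trailing slash (e.g. /mnt/burp/ instead of /mnt/burp).
zfs get -H -o value mountpoint burp

# If it has one, unmount the pool's filesystems and set the mountpoint
# again without the trailing slash (example path only):
zfs unmount -a
zfs set mountpoint=/mnt/burp burp
zfs mount -a
```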
I can get that output for you next time I'm booted; I'll make a note. However, for this system the root dataset shouldn't have a mountpoint: I have a boot pool and a root pool, and it's only a few layers in that anything actually gets mounted at all.
This issue has been automatically marked as "stale" because it has not had any activity for a while. It will be closed in 90 days if no further activity occurs. Thank you for your contributions.
👆 This trailing slash issue/situation doesn't exist for the scenario I'm about to share. Good to know though!
FWIW, I have a potential explanation for a similar scenario, with what seems to be a very similar result to the issue being reported in this thread. Consider the following filesystem datasets:

Consider the following properties:
After booting I can mount the child dataset(s) fine. If I later wish to also mount the parent dataset, I would suppose this is the expected behaviour? In effect I've mounted the parent over the top of the child dataset(s). It does seem to confuse ZFS though, as follows: trying to

I'm not sure if there's anything the developers can do here, but it might help someone in the future who's in the same situation as me and wonders what's going on. The solution for me is a little loop, something like:
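The original loop didn't survive in the thread, so here is a minimal sketch of the idea, assuming a hypothetical parent dataset `tank/data` (substitute your own): it walks the parent's descendants and remounts each one so its mount sits back on top of the parent's mount.

```sh
#!/bin/sh
# PARENT is a placeholder for the dataset that got mounted over the top
# of its children.
PARENT="tank/data"

# zfs list -r prints parents before children, so remounting in this
# order stacks the child mounts back on top of the parent mount.
# tail -n +2 skips the parent itself.
zfs list -rH -o name "$PARENT" | tail -n +2 | while read -r ds; do
    # Only touch datasets ZFS considers mountable.
    [ "$(zfs get -H -o value canmount "$ds")" = "on" ] || continue
    zfs unmount "$ds" 2>/dev/null   # ignore "not currently mounted" errors
    zfs mount "$ds"
done
```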
This loop walks the dataset hierarchy. Hope that helps someone in the future.
System information
Describe the problem you're observing
I have six pools and 66 filesystems. All was working fine prior to updating to 0.8.1-3 (Debian packages).

After upgrading, `zfs mount -a` reports all filesystems as mounted. However, they aren't. I'll try to explain. The pool is named "burp" and has five filesystems. The pool is for backups; stopping the burp dæmon leaves nothing holding open files/directories:

- `mount -a` reports no errors.
- `df -h | grep burp` shows only `burp/pilot` as being mounted.
- `mount` and `/proc/mounts` show all of the filesystems as mounted.

After verifying nothing is using the mounts, I tried to export the pool:
Attempting to unmount any of the filesystems fails:
The mounting state appears to be inconsistent: `/proc/mounts` and the `mount` command show the filesystems as mounted, while `df` and even `zpool export` show the filesystems as not mounted.

When I manually export (and disconnect) two of the other pools and then reboot, the `burp` pool mounts and works correctly. I then attached the two disconnected pools and imported them. The filesystems on the 'most recently imported' pool then seem to have the issue. It appears like there's some sort of maximum pool or filesystem limit that's being tickled somehow...
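One way to see the inconsistency in a single diff, sketched under the assumption that scratch files in /tmp are acceptable on your system:

```sh
# Datasets the kernel says are mounted (first field of zfs entries in
# /proc/mounts is the dataset name):
awk '$3 == "zfs" { print $1 }' /proc/mounts | sort > /tmp/kernel-mounted

# Datasets ZFS itself reports as mounted:
zfs list -H -o name,mounted | awk '$2 == "yes" { print $1 }' | sort > /tmp/zfs-mounted

# Lines unique to either file are datasets in the inconsistent state.
diff -u /tmp/zfs-mounted /tmp/kernel-mounted
```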
Describe how to reproduce the problem
All I have to do is boot the system, and mount enough ZFS filesystems. (I'm not sure if the pool count is significant, or if it's really the only problem.)
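Not the reporter's actual setup, but here is a throwaway sketch for getting "enough" filesystems to probe for a count-related limit; the pool name, file path, and dataset count are all made up for illustration:

```sh
# Create a file-backed scratch pool and a few dozen filesystems on it.
truncate -s 1G /var/tmp/testpool.img
zpool create testpool /var/tmp/testpool.img
for i in $(seq 1 70); do
    zfs create "testpool/fs$i"
done

# Export and re-import so all the filesystems get remounted in one go,
# then compare what ZFS reports with what is actually mounted.
zpool export testpool
zpool import -d /var/tmp testpool
zfs list -rH -o name,mounted testpool | grep -c yes   # expected: 71

# Clean up afterwards.
zpool destroy testpool
rm /var/tmp/testpool.img
```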
Include any warning/errors/backtraces from the system logs
Unfortunately, there are no log messages in `/var/log/syslog` and nothing visible in `dmesg`. I'm happy to help get logging information, if you tell me what to do...
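If more logging is wanted, one readily available source on Linux is the ZFS internal debug buffer (a sketch; it needs root, and the module parameter may already be enabled on some builds):

```sh
# Turn on the internal ZFS debug message buffer...
echo 1 > /sys/module/zfs/parameters/zfs_dbgmsg_enable
# ...reproduce the mount problem, then dump the buffer:
cat /proc/spl/kstat/zfs/dbgmsg
```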