Mounting legacy dataset gives cryptic error messages #633
Attempting to mount a legacy dataset on a non-existent mount point will give a numerical error message:

```
vserver ~ # zfs create rpool/backup/www
vserver ~ # mount -t zfs rpool/backup/www /mnt/backup/www
filesystem 'rpool/backup/www' can not be mounted due to error 2
vserver ~ # mkdir /mnt/backup/www
vserver ~ # mount -t zfs rpool/backup/www /mnt/backup/www
vserver ~ # zfs list rpool/backup
NAME          USED  AVAIL  REFER  MOUNTPOINT
rpool/backup  409G  5.02T   409G  legacy
```

I am not sure if this is an issue that we can address, but I thought I would report it so others can take a look.
We can certainly do something about this. The error is coming from our mount.zfs helper. What would you expect the correct behavior to be?

cmd/mount_zfs/mount_zfs.c:478

```c
if (!fake) {
	error = mount(dataset, mntpoint, MNTTYPE_ZFS, mntflags, mntopts);
	if (error) {
		switch (errno) {
		case EBUSY:
			(void) fprintf(stderr, gettext("filesystem "
			    "'%s' is already mounted\n"), dataset);
			return (MOUNT_SYSERR);
		default:
			(void) fprintf(stderr, gettext("filesystem "
			    "'%s' can not be mounted due to error "
			    "%d\n"), dataset, errno);
			return (MOUNT_USAGE);
		}
	}
}
```
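For what it's worth, error 2 here is ENOENT ("No such file or directory"), so one option would be to translate the common errno values into descriptive messages. A rough sketch of what the switch could look like (illustrative only, not the final patch; strerror() from <string.h> covers any remaining cases):

```c
switch (errno) {
case EBUSY:
	(void) fprintf(stderr, gettext("filesystem "
	    "'%s' is already mounted\n"), dataset);
	return (MOUNT_SYSERR);
case ENOENT:
	/* for a legacy mount, ENOENT almost always means the
	 * mount point directory does not exist */
	(void) fprintf(stderr, gettext("mount point "
	    "'%s' does not exist\n"), mntpoint);
	return (MOUNT_USAGE);
default:
	/* fall back to a human-readable message instead of a
	 * bare errno number */
	(void) fprintf(stderr, gettext("filesystem "
	    "'%s' can not be mounted: %s\n"),
	    dataset, strerror(errno));
	return (MOUNT_USAGE);
}
```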
I will patch this and file a pull request when the patch is ready.
Thanks for fixing this. Just ran into it on 0.6.0-rc8. |
behlendorf pushed a commit to behlendorf/zfs that referenced this issue on May 21, 2018:
It is just plain unsafe to peek inside the in-kernel mutex structure and make assumptions about what the kernel does with internal fields like owner. The kernel is all too happy to stop doing the expected things, such as tracking the lock owner, once you load a tainted module like spl/zfs that is not GPL. As such you will get instant assertion failures like this:

```
VERIFY3(((*(volatile typeof((&((&zo->zo_lock)->m_mutex))->owner) *)&
    ((&((&zo->zo_lock)->m_mutex))->owner))) == ((void *)0))
    failed (ffff88030be28500 == (null))
PANIC at zfs_onexit.c:104:zfs_onexit_destroy()
Showing stack for process 3626
CPU: 0 PID: 3626 Comm: mkfs.lustre Tainted: P OE ------------ 3.10.0-debug #1
Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011
Call Trace:
  dump_stack+0x19/0x1b
  spl_dumpstack+0x44/0x50 [spl]
  spl_panic+0xbf/0xf0 [spl]
  zfs_onexit_destroy+0x17c/0x280 [zfs]
  zfsdev_release+0x48/0xd0 [zfs]
```

Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed-by: Chunwei Chen <[email protected]>
Signed-off-by: Oleg Drokin <[email protected]>
Closes openzfs#632
Closes openzfs#633
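For context, the safe pattern this commit points toward is for the SPL wrapper to record lock ownership in its own field rather than reading the kernel's private m_mutex.owner. A minimal sketch of that idea, with hypothetical names (not the actual SPL definitions):

```c
#include <linux/mutex.h>
#include <linux/sched.h>

/*
 * Wrapper mutex that tracks its own owner instead of peeking at
 * struct mutex internals, which the kernel may stop maintaining
 * once a tainted (non-GPL) module is loaded.
 */
typedef struct {
	struct mutex		m_mutex;
	struct task_struct	*m_owner;	/* maintained here, not by the kernel */
} kmutex_t;

static inline void
kmutex_enter(kmutex_t *mp)
{
	mutex_lock(&mp->m_mutex);
	mp->m_owner = current;		/* record owner after acquiring */
}

static inline void
kmutex_exit(kmutex_t *mp)
{
	mp->m_owner = NULL;		/* clear before releasing */
	mutex_unlock(&mp->m_mutex);
}
```

An assertion like the VERIFY3 above can then test the wrapper's own m_owner field, which behaves identically whether or not the kernel traces lock ownership.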
behlendorf added a commit to behlendorf/zfs that referenced this issue on May 21, 2018:
This reverts commit d89616f, which introduced some build failures that need to be resolved before this can be merged.

Signed-off-by: Brian Behlendorf <[email protected]>
Issue openzfs#633