Read-only pool hangs on sys_umount call #1338
Okay, so I was doing some digging. Here's the lowdown.

If a pool is imported readonly, but the filesystems have readonly=off set (naturally the default), then the superblock MS_RDONLY flag gets cleared out during property initialization:

(gdb) bt
#0  readonly_changed_cb (arg=0xffff88002aa40000, newval=18446612133033518728) at zfs_vfsops.c:172
#1  0xffffffffa0208c8d in dsl_prop_register (ds=, propname=0xffffffffa02973aa "readonly", callback=0xffffffffa0258960, cbarg=0xffff88002aa40000) at dsl_prop.c:260
#2  0xffffffffa0259e21 in zfs_register_callbacks (zsb=0xffff88002aa40000) at zfs_vfsops.c:255
#3  0xffffffffa025a65f in zfs_sb_setup (zsb=0xffff88002aa40000, mounting=B_TRUE) at zfs_vfsops.c:750
#4  0xffffffffa025b8fe in zfs_domount (sb=0xffff88002d59e000, data=, silent=) at zfs_vfsops.c:1215

Now that the filesystem is no longer marked read-only, it can accumulate various kinds of dirty state. During unmount/export we hang because that dirty state can't be cleaned up, so this affects exporting above and beyond merely unmounting.

As a workaround, if you mark your filesystems as readonly ahead of time, then the pool can be imported and exported read-only without issue:

; update property
# zpool import tank -N
# zfs set readonly=on tank
# zpool export tank

; Do a successful full import and export with a readonly pool
# zpool import tank -o readonly=on
# zfs mount -a
# zpool export tank

Not ideal, but it demonstrates the issue. Possible code workaround: if the pool is readonly, treat the property as a hard-coded value?
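For illustration only, here is a hypothetical sketch of that "hard-coded value" idea: the readonly property callback could simply refuse to clear MS_RDONLY while the pool itself is not writeable. The zsb->z_sb field and the callback body are assumptions pieced together from the backtrace above; the fix that was actually merged (next comment) takes a different route by reordering the existing check instead.

/*
 * Hypothetical sketch only -- not the merged fix.  A read-only pool
 * overrides a dataset-level readonly=off, so the superblock's
 * MS_RDONLY flag can never be cleared behind the pool's back.
 * The zsb->z_sb field name is an assumption.
 */
static void
readonly_changed_cb_sketch(void *arg, uint64_t newval)
{
	zfs_sb_t *zsb = arg;
	struct super_block *sb = zsb->z_sb;

	if (sb == NULL)
		return;

	/* Treat the property as hard-coded to "on" for read-only pools. */
	if (!spa_writeable(dmu_objset_spa(zsb->z_os)))
		newval = B_TRUE;

	if (newval)
		sb->s_flags |= MS_RDONLY;
	else
		sb->s_flags &= ~MS_RDONLY;
}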
@DeHackEd We attempt to do exactly that, see the very top of zfs_register_callbacks(). The problem is that the check runs before the property callbacks are registered, so the readonly callback undoes it. The patch below defers the forced read-only call until after registration:

diff --git a/module/zfs/zfs_vfsops.c b/module/zfs/zfs_vfsops.c
index 602c332..9ae7ab5 100644
--- a/module/zfs/zfs_vfsops.c
+++ b/module/zfs/zfs_vfsops.c
@@ -233,10 +233,11 @@ zfs_register_callbacks(zfs_sb_t *zsb)
 {
 	struct dsl_dataset *ds = NULL;
 	objset_t *os = zsb->z_os;
+	boolean_t do_readonly = B_FALSE;
 	int error = 0;
 
 	if (zfs_is_readonly(zsb) || !spa_writeable(dmu_objset_spa(os)))
-		readonly_changed_cb(zsb, B_TRUE);
+		do_readonly = B_TRUE;
 
 	/*
 	 * Register property callbacks.
@@ -271,6 +272,9 @@ zfs_register_callbacks(zfs_sb_t *zsb)
 	if (error)
 		goto unregister;
 
+	if (do_readonly)
+		readonly_changed_cb(zsb, B_TRUE);
+
 	return (0);
 
 unregister:
A quick round of testing yields good results! With write-enabled filesystems on a read-only pool, unmounting and exporting now work fine.
During mount, a filesystem dataset would have the MS_RDONLY bit incorrectly cleared even if the entire pool was read-only. There is existing code to handle this case, but it was being run before the property callbacks were registered. To resolve the issue we move this read-only code after the callback registration.

Signed-off-by: Brian Behlendorf <[email protected]>
Closes openzfs#1338
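To make the ordering concrete, here is a minimal sketch of the control flow the patch establishes in zfs_register_callbacks(). The names come from the diff and backtrace above, but error handling and the other property registrations are elided, so treat it as an illustration rather than the upstream code.

/*
 * Minimal sketch of the patched ordering (illustrative, not verbatim).
 */
static int
zfs_register_callbacks_sketch(zfs_sb_t *zsb)
{
	objset_t *os = zsb->z_os;
	boolean_t do_readonly = B_FALSE;
	int error;

	/* Note the forced read-only state, but do not apply it yet. */
	if (zfs_is_readonly(zsb) || !spa_writeable(dmu_objset_spa(os)))
		do_readonly = B_TRUE;

	/*
	 * Registering the "readonly" callback invokes it with the dataset's
	 * current property value (readonly=off by default), which is what
	 * used to clear MS_RDONLY on the superblock.
	 */
	error = dsl_prop_register(dmu_objset_ds(os), "readonly",
	    readonly_changed_cb, zsb);
	if (error)
		return (error);

	/*
	 * Only now, after the callback has fired, re-assert read-only so a
	 * read-only pool import cannot be undone by the property value.
	 */
	if (do_readonly)
		readonly_changed_cb(zsb, B_TRUE);

	return (0);
}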
Carrying over from #1332 and #1333 (which does fix importing of read-only pools), the following failure occurs during some unmount calls, most notably when exporting, though it is not necessarily part of the export process itself.
As per ryao's request, I verified that no filesystem was mounted read-write (the 'rw' mount flag as found in /proc/mounts) while the pool was active. The pool consists of a small amount of data (about 8 GB in use) with dedup enabled and heavily used (1.99x ratio).
The pool was imported using the patch that makes read-only importing work.
Brief repetition testing suggests that leaf filesystems unmount cleanly but parent filesystems don't. One hung umount() doesn't prevent me from unmounting leaves.