This repository has been archived by the owner on Feb 26, 2020. It is now read-only.

VERIFY3(c < (1ULL << 17) >> 9) failed (65535 < 256) SPLError: 18818:0:(zio.c:247:zio_buf_alloc()) SPL PANIC #390

Closed
akorn opened this issue Sep 18, 2014 · 10 comments


@akorn

akorn commented Sep 18, 2014

Hi,

I have a filesystem with xattr=sa and acltype=posixacl.

lgetxattr() on inodes that have ACLs causes the process performing the lookup to hang, like this:

VERIFY3(c < (1ULL << 17) >> 9) failed (65535 < 256)
SPLError: 18818:0:(zio.c:247:zio_buf_alloc()) SPL PANIC
SPL: Showing stack for process 18818
CPU: 1 PID: 18818 Comm: rsync Tainted: P           O 3.10.16-vs2.3.6.6-superguppy #1
Hardware name: Supermicro H8SGL/H8SGL, BIOS 3.0        08/31/2012
 0000000000000000 ffff8807bed33598 ffffffff814eb2db ffff8807bed335b0
 ffffffffa019140b 0000000000000000 ffff8807bed335d0 ffffffffa019265a
 0000000000000033 ffffffffa051dae3 ffff8807bed33750 ffffffffa019be78
Call Trace:
 [<ffffffff814eb2db>] dump_stack+0x19/0x1b
 [<ffffffffa019140b>] spl_debug_dumpstack+0x46/0x49 [spl]
 [<ffffffffa019265a>] spl_debug_bug+0x88/0xcd [spl]
 [<ffffffffa019be78>] spl_PANIC+0x9d/0xc6 [spl]
 [<ffffffffa01913c3>] ? trace_unlock_tcd+0x11/0x13 [spl]
 [<ffffffffa0191f15>] ? spl_debug_msg+0x5a2/0x770 [spl]
 [<ffffffff811215d0>] ? ____cache_alloc+0x3a/0x29d
 [<ffffffff81121faa>] ? kmem_cache_alloc+0x8a/0xf8
 [<ffffffffa041721b>] ? arc_space_consume+0x9b/0xa0 [zfs]
 [<ffffffffa0417256>] ? buf_cons+0x36/0x3a [zfs]
 [<ffffffffa01964b7>] ? spl_kmem_cache_alloc+0xc6/0xb02 [spl]
 [<ffffffffa04c623c>] zio_buf_alloc+0x40/0x55 [zfs]
 [<ffffffffa0418126>] arc_get_data_buf+0x262/0x540 [zfs]
 [<ffffffffa041b9ad>] arc_buf_alloc+0x144/0x1b5 [zfs]
 [<ffffffffa041c7f3>] arc_read+0x43c/0xba7 [zfs]
 [<ffffffffa04246a9>] ? dbuf_rele_and_unlock+0x2b4/0x2b4 [zfs]
 [<ffffffffa0426546>] dbuf_read+0x58c/0x7db [zfs]
 [<ffffffffa0430b9b>] dmu_spill_hold_by_dnode+0xd4/0xfa [zfs]
 [<ffffffffa0430c32>] dmu_spill_hold_existing+0x71/0xe0 [zfs]
 [<ffffffffa0469243>] sa_get_spill+0x25/0x6d [zfs]
 [<ffffffffa046a252>] sa_attr_op+0x171/0x351 [zfs]
 [<ffffffffa046ac2a>] sa_lookup_impl+0x89/0x90 [zfs]
 [<ffffffffa046acd3>] sa_lookup+0xa2/0xb7 [zfs]
 [<ffffffffa019e1aa>] ? __cv_init+0x7b/0xd6 [spl]
 [<ffffffffa049f98d>] zfs_dirent_lock+0x3c9/0x510 [zfs]
 [<ffffffffa04a10c6>] zfs_get_xattrdir+0x68/0x152 [zfs]
 [<ffffffff8108147b>] ? should_resched+0x9/0x28
 [<ffffffff814ee800>] ? _cond_resched+0x9/0x19
 [<ffffffff814ed71f>] ? mutex_lock+0xd/0x2a
 [<ffffffffa04665f0>] ? refcount_add_many+0xdf/0xeb [zfs]
 [<ffffffffa046660d>] ? refcount_add+0x11/0x13 [zfs]
 [<ffffffffa04b33ac>] zfs_lookup+0x158/0x298 [zfs]
 [<ffffffffa04d1fb4>] __zpl_xattr_get+0xae/0x284 [zfs]
 [<ffffffffa04d2665>] zpl_xattr_get+0x57/0xc5 [zfs]
 [<ffffffffa04d3456>] zpl_get_acl+0xab/0x161 [zfs]
 [<ffffffffa04d3546>] zpl_xattr_acl_get+0x3a/0x8b [zfs]
 [<ffffffffa04d35fb>] zpl_xattr_acl_get_access+0x64/0x6f [zfs]
 [<ffffffff8114cc06>] generic_getxattr+0x47/0x59
 [<ffffffff8114cf9c>] vfs_getxattr+0x90/0x9d
 [<ffffffff8114d37e>] getxattr+0xc5/0x18e
 [<ffffffff8113b4b2>] ? final_putname+0x2f/0x32
 [<ffffffff8113b617>] ? putname+0x22/0x24
 [<ffffffff8113d987>] ? user_path_at_empty+0x84/0xad
 [<ffffffff811363e2>] ? SYSC_newlstat+0x25/0x2e
 [<ffffffff8114daba>] SyS_getxattr+0x4e/0x89
 [<ffffffff814efd56>] system_call_fastpath+0x1a/0x1f

The filesystem was created with 0.6.3-1~trusty from Darik's PPA; the trace above was obtained with the latest git master as of yesterday (f9bde4f, openzfs/zfs@2d50158).

Strace output up to the moment where the process hangs:

lstat("documents", {st_mode=S_IFDIR|S_ISGID|0775, st_size=4, ...}) = 0
lgetxattr("documents", "security.selinux"

Additionally, listxattr() on an entry with no xattrs returns EFAULT (at least after the above panic).
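
To make the setup concrete, the properties and the failing lookup can be exercised with something like this (a sketch only; the pool/dataset names are placeholders, not the real ones):

    zfs create -o xattr=sa -o acltype=posixacl tank/backup    # placeholder pool/fs names
    mkdir /tank/backup/documents
    setfacl -d -m g:20007:rwx /tank/backup/documents          # POSIX default ACL, stored as an SA xattr
    getfacl /tank/backup/documents                            # reads the ACL via getxattr(); on an affected dnode this is where it hangs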

@dweeezil
Contributor

@akorn This is likely a ZFS issue rather than an SPL issue. You appear to be hitting a problem that has recently started causing a lot of trouble. The related issues are openzfs/zfs#2700, openzfs/zfs#2663, openzfs/zfs#2701 and openzfs/zfs#2717.

EDIT: Removed reference to issue 2214.

Unfortunately, I've not been able to reproduce this yet, nor have I gotten my hands on a pool with the problem for further examination. If you can find the inode number of "documents", could you please run zdb -dddddd <pool>/<fs> <inode_number> on it, or, if you are comfortable building the latest code, use the zdb in https://github.com/dweeezil/zfs/tree/zdb? It will give some insight into the possible corruption.
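
Roughly, that would look like the following (pool/dataset names here are placeholders for yours):

    ls -id /tank/backup/documents             # on ZFS on Linux the inode number matches the object number
    zdb -dddddd tank/backup <inode_number>    # substitute the number printed above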

@akorn
Author

akorn commented Sep 18, 2014

@dweeezil, zdb segfaults:

Dataset foo/bar [ZPL], ID 46655, cr_txg 1364394, 1.46G, 9946 objects, rootbp DVA[0]=<0:1c43d9014000:4000> DVA[1]=<0:cc222120000:4000> [L0 DMU objset] fletcher4 lzjb LE contiguous unique double size=800L/200P birth=1418891L/1418891P fill=9946 cksum=1237e4d318:6761a7cf0a3:136760788e9fa:28c6ea8dfbbbad

    Object  lvl   iblk   dblk  dsize  lsize   %full  type
         8    1    16K    512  46.5K    512  100.00  ZFS directory (K=inherit) (Z=inherit)
                                        284   bonus  System attributes
        dnode flags: USED_BYTES USERUSED_ACCOUNTED SPILL_BLKPTR
        dnode maxblkid: 0
        path    /documents
        uid     289
        gid     20007
        atime   Sat Aug 23 03:17:14 2014
        mtime   Fri Aug 22 18:03:12 2014
        ctime   Sat Aug 23 03:17:14 2014
        crtime  Wed Aug 13 07:15:29 2014
        gen     1366082
        mode    42775
        size    4
        parent  4
        links   4
        pflags  40800000044
[1]    11553 segmentation fault (core dumped)  zdb -dddddd foo/bar 8
#0  0x00007fe7540ebd09 in zio_buf_alloc () from /lib/libzpool.so.2
(gdb) bt
#0  0x00007fe7540ebd09 in zio_buf_alloc () from /lib/libzpool.so.2
#1  0x00007fe75407529d in ?? () from /lib/libzpool.so.2
#2  0x00007fe754077ab7 in arc_buf_alloc () from /lib/libzpool.so.2
#3  0x00007fe7540789e2 in arc_read () from /lib/libzpool.so.2
#4  0x00007fe75407eb57 in dbuf_read () from /lib/libzpool.so.2
#5  0x00007fe754085f48 in dmu_spill_hold_by_dnode () from /lib/libzpool.so.2
#6  0x00007fe75408604b in dmu_spill_hold_existing () from /lib/libzpool.so.2
#7  0x00007fe7540bc7f8 in ?? () from /lib/libzpool.so.2
#8  0x00007fe7540bdc48 in sa_attr_op () from /lib/libzpool.so.2
#9  0x00007fe7540bdcc5 in sa_lookup () from /lib/libzpool.so.2

Would it make sense to retry with your latest zdb (do you expect it to work where the zdb from openzfs/zfs@2d50158 segfaulted)? Or maybe the information it did print is enough?

@dweeezil
Contributor

@akorn That version of my zdb branch would segfault, too. I just added a couple of changes to it, ending with dweeezil/zfs@6efc9d3. That one will still segfault, but it should dump the spill blkptr before it does so (run with 7 -d's). You've clearly got a corrupted spill blkptr, and my hope is that examining a dump of it will make the nature of the corruption clearer.
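
Building and running it would go roughly like this (a sketch; depending on your setup, configure may need extra options such as the path to a matching SPL, and the pool/fs names are placeholders):

    git clone -b zdb <URL of the repository above> zfs-zdb
    cd zfs-zdb && ./autogen.sh && ./configure && make    # configure may need --with-spl=<path>
    ./cmd/zdb/zdb -ddddddd <pool>/<fs> <inode_number>    # note: seven -d's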

Am I correct in assuming this system is also using SELinux and that it has a policy in place which would label files on ZFS? Inode 8 would normally be one of the very first objects created in a new file system. Had the file system just been created prior to running the rsync? Can you reliably reproduce this problem (in a reasonable amount of time)?

@akorn
Author

akorn commented Sep 18, 2014

@dweeezil, I'll try with your zdb tomorrow if I have time.

As for SELinux, no; I only use xattrs for POSIX ACLs.

This fs is the backup of another, live fs (xfs); I use rsync to update the copy, and then I take a zfs snapshot of it. The 'documents' directory probably has inode number 8 because rsync first created the top-level directories and only then started populating them.

Interestingly, all three existing snapshots of this fs exhibit the corruption, but it can't have been present when the snapshots were taken because the backup process locks up on a corrupt fs, leading to no new snapshot being created. The snapshots are from 25 August and 1 September, which means the corruption occurred after that, and that it took a while for it to happen (counting from initial fs creation).

I have no idea how reproducible the problem is. Since it's a backup fs (and the 2nd backup of the live system), I can destroy and recreate it at will, i.e. we can experiment.
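
Roughly, each backup cycle looks like this (the flags and names here are illustrative rather than the exact command line):

    rsync -aHAX /mnt/live/ /tank/backup/     # -A carries over the POSIX ACLs, -X other xattrs
    zfs snapshot tank/backup@$(date +%F)     # snapshot only taken if the rsync completed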

@akorn
Author

akorn commented Sep 20, 2014

@dweeezil, here is the output from your zdb:

Dataset foo/bar [ZPL], ID 46655, cr_txg 1364394, 1.46G, 9946 objects, rootbp DVA[0]=<0:1a47e62a4000:4000> DVA[1]=<0:cc1c02d0000:4000> [L0 DMU objset] fletcher4 lzjb LE contiguous unique double size=800L/200P birth=1476211L/1476211P fill=9946 cksum=16e080b728:7ca93872979:169dec236e385:2e3aa2544ad7bc

    Object  lvl   iblk   dblk  dsize  lsize   %full  type
         8    1    16K    512  46.5K    512  100.00  ZFS directory (K=inherit) (Z=inherit)
                                        284   bonus  System attributes
        dnode flags: USED_BYTES USERUSED_ACCOUNTED SPILL_BLKPTR
        dnode maxblkid: 0

        Spill blkptr:
                402653184:5cdacae8e6f2e600:0 2000000L/2000000P F=0 B=4294967295/1407516617474047
        Spill blkptr dump: \000\000\000\140\000\000\000\030\163\171\163\164\145\155\056\160\157\163\151\170\137\141\143\154\137\144\145\146\141\165\154\164\000\000\000\012\000\000\000\054\002\000\000\000\001\000\007\000\377\377\377\377\004\000\007\000\377\377\377\377\010\000\007\000\047\116\000\000\020\000\007\000\377\377\377\377\040\000\005\000\377\377\377\377\000\000\000\000\000\000\000\000\000\000\000\000\363\027\260\326\016\000\000\000\156\222\044\032\335\005\000\000\177\361\060\026\200\060\001\000\314\033\024\102\101\067\052\000
        SA hdrsize 16
        SA layout 3
        path    /documents
        uid     289
        gid     20007
        atime   Sat Aug 23 03:17:14 2014
        mtime   Fri Aug 22 18:03:12 2014
        ctime   Sat Aug 23 03:17:14 2014
        crtime  Wed Aug 13 07:15:29 2014
        gen     1366082
        mode    42775
        size    4
        parent  4
        links   4
        pflags  40800000044
        ndacl   3
zdb: ../../module/zfs/zio.c:247: Assertion `c < (1ULL << 17) >> 9 (0xffff < 0x100)' failed.
[1]    25045 abort (core dumped)  /usr/local/stow/zfs-dweeezil-ce58fc178b/sbin/zdb -ddddddd  8
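
Incidentally, the bytes in the spill blkptr dump decode to readable xattr data; letting printf expand just the run of octal escapes after the first eight bytes spells out an xattr name:

    printf '\163\171\163\164\145\155\056\160\157\163\151\170\137\141\143\154\137\144\145\146\141\165\154\164\n'
    # prints: system.posix_acl_default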

And fwiw here is the backtrace:

(gdb) bt
#0  0x00007fcecc97c407 in raise () from /lib/x86_64-linux-gnu/libc.so.6
#1  0x00007fcecc97d7e8 in abort () from /lib/x86_64-linux-gnu/libc.so.6
#2  0x00007fcecc975526 in ?? () from /lib/x86_64-linux-gnu/libc.so.6
#3  0x00007fcecc9755d2 in __assert_fail () from /lib/x86_64-linux-gnu/libc.so.6
#4  0x00007fcecc97564b in __assert () from /lib/x86_64-linux-gnu/libc.so.6
#5  0x00007fcecd848376 in zio_buf_alloc (size=size@entry=33554432) at ../../module/zfs/zio.c:247
#6  0x00007fcecd7a1158 in arc_get_data_buf (buf=<optimized out>, buf=<optimized out>) at ../../module/zfs/arc.c:2788
#7  0x00007fcecd7a3fde in arc_buf_alloc (spa=spa@entry=0x1bdf390, size=size@entry=33554432, tag=tag@entry=0x2d22450, type=ARC_BUFC_METADATA)
    at ../../module/zfs/arc.c:1462
#8  0x00007fcecd7a5768 in arc_read (pio=pio@entry=0x2d27010, spa=0x1bdf390, bp=0x2d49180, done=done@entry=0x7fcecd7b0d90 <dbuf_read_done>, 
    private=private@entry=0x2d22450, priority=priority@entry=ZIO_PRIORITY_SYNC_READ, zio_flags=64, arc_flags=0x7fff722a2e6c, zb=0x7fff722a2e70)
    at ../../module/zfs/arc.c:3314
#9  0x00007fcecd7afb24 in dbuf_read_impl (db=db@entry=0x2d22450, zio=zio@entry=0x2d27010, flags=flags@entry=0x7fff722a2edc) at ../../module/zfs/dbuf.c:643
#10 0x00007fcecd7aff5b in dbuf_read (db=db@entry=0x2d22450, zio=0x2d27010, zio@entry=0x0, flags=flags@entry=6) at ../../module/zfs/dbuf.c:691
#11 0x00007fcecd7bb903 in dmu_spill_hold_by_dnode (dn=0x2d1dad0, flags=6, tag=0x0, dbp=0x2d70890) at ../../module/zfs/dmu.c:326
#12 0x00007fcecd7bba4b in dmu_spill_hold_existing (bonus=0x2cfdb20, tag=tag@entry=0x0, dbp=dbp@entry=0x2d70890) at ../../module/zfs/dmu.c:352
#13 0x00007fcecd807b6a in sa_get_spill (hdl=0x2d70850) at ../../module/zfs/sa.c:327
#14 0x00007fcecd8092fb in sa_attr_op (hdl=0x2d70850, bulk=<optimized out>, count=<optimized out>, data_op=SA_LOOKUP, tx=0x0) at ../../module/zfs/sa.c:371
#15 0x00007fcecd80979c in sa_lookup (hdl=0x2d70850, attr=<optimized out>, buf=buf@entry=0x7fff722a3088, buflen=buflen@entry=8) at ../../module/zfs/sa.c:1484
#16 0x000000000040795e in dump_znode (os=0x2c956d0, object=<optimized out>, data=<optimized out>, size=<optimized out>) at ../../cmd/zdb/zdb.c:1726
#17 0x000000000040b3d9 in dump_object (os=os@entry=0x2c956d0, object=8, verbosity=verbosity@entry=7, print_header=print_header@entry=0x7fff722a56ac)
    at ../../cmd/zdb/zdb.c:1890
#18 0x000000000040b9e9 in dump_dir (os=0x2c956d0) at ../../cmd/zdb/zdb.c:1991
#19 0x0000000000405084 in main (argc=1, argv=<optimized out>) at ../../cmd/zdb/zdb.c:3663
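
For what it's worth, the numbers in the failed assertion line up with the bogus 0x2000000-byte size in the spill blkptr; a quick check of the arithmetic (plain shell arithmetic, nothing ZFS-specific):

    # 0x2000000 = 33554432, the size= value in frames #5 and #7 above
    echo $(( (0x2000000 - 1) >> 9 ))   # 65535 -- the "c" that fails the VERIFY3 check
    echo $(( (1 << 17) >> 9 ))         # 256   -- the limit: 128 KiB max block size in 512-byte units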

@akorn
Author

akorn commented Sep 20, 2014

FWIW, these are the ACLs that should be set (obtained using "getfacl -n --skip-base *" on the source fs that was rsynced to zfs):

# file: attachments
# owner: 289
# group: 20007
# flags: -s-
user::rwx
group::rwx
other::r-x
default:user::rwx
default:group::rwx
default:group:20007:rwx
default:mask::rwx
default:other::r-x

# file: documents
# owner: 289
# group: 20007
# flags: -s-
user::rwx
group::rwx
other::r-x
default:user::rwx
default:group::rwx
default:group:20007:rwx
default:mask::rwx
default:other::r-x

# file: param
# owner: 289
# group: 20007
# flags: -s-
user::rwx
group::rwx
other::r-x
default:user::rwx
default:group::rwx
default:group:20007:rwx
default:mask::rwx
default:other::r-x

# file: report
# owner: 1034
# group: 20007
# flags: -s-
user::rwx
group::rwx
group:20007:rwx
mask::rwx
other::r-x
default:user::rwx
default:group::rwx
default:group:20007:rwx
default:mask::rwx
default:other::r-x

# file: report25
# owner: 289
# group: 20007
# flags: -s-
user::rwx
group::rwx
other::r-x
default:user::rwx
default:group::rwx
default:group:20007:rwx
default:mask::rwx
default:other::r-x

# file: uploads
# owner: 289
# group: 20007
# flags: -s-
user::rwx
group::rwx
other::r-x
default:user::rwx
default:group::rwx
default:group:20007:rwx
default:mask::rwx
default:other::r-x

# file: workspace
# owner: 289
# group: 20007
# flags: -s-
user::rwx
group::rwx
other::r-x
default:user::rwx
default:group::rwx
default:group:20007:rwx
default:mask::rwx
default:other::r-x

"documents" only contains two subdirectories (no files).

@dweeezil
Contributor

@akorn Thank you, this is good information. It is certainly the same problem as in openzfs/zfs#2700 and the other related issues I mentioned, except that in your case the problem has manifested itself as the SPILL_BLKPTR flag being set when it shouldn't be, leaving you with a corrupted spill blkptr.

I'm still trying to reproduce the problem locally but now, between the zdb output I got from @sopmot in openzfs/zfs#2700 and your debugging above, I've got a better idea of what's happening.

If I can't come up with a reproducer and/or find the problem through code inspection, and if you're able to reliably reproduce the problem, could you capture strace output of rsync (and all its subordinate threads/processes; use "strace -ff -o ") and grep it for all operations involving the "documents" directory (beginning with its creation)? I'll post a followup here and to openzfs/zfs#2700 with more details after I work on this some more.
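
Something along these lines should do it (the output prefix is a placeholder):

    strace -ff -o /tmp/rsync.trace rsync <your usual arguments>   # writes one /tmp/rsync.trace.<pid> per process
    grep -n documents /tmp/rsync.trace.*                          # every call mentioning the directory, including its mkdir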

@dweeezil
Contributor

I corrected the description of the corruption in the previous issue comment.

@akorn
Author

akorn commented Nov 26, 2014

I can no longer import my pool (openzfs/zfs#2932); is there any way to recover, perhaps even by resorting to a hex editor? I don't have backups because these are the backups. :) I don't mind losing a filesystem or two, but losing the entire pool would be inconvenient.

Trying to import with -o noacl,noxattr doesn't help either, I still hit the panic.

@behlendorf
Contributor

Closing; the root cause of the issue was resolved by openzfs/zfs@4254acb.
