0.6.3 zfs hang, known issue? #3532

Closed
chjohnst opened this issue Jun 26, 2015 · 12 comments

@chjohnst

I have a dozen or so production 0.6.3 ZFS servers that export NFS with billions of small files. We used to get hangs similar to this when we left the ARC cache at its default size, but we recently reduced it to around 16GB (on a 512GB host) and they are starting to pop up again. The host gets into a state where it eventually deadlocks; freeing the page cache does nothing, and the only recourse is a reboot. I bumped the ARC cache up to 64GB this morning hoping it would help. I can easily reproduce the hang when someone runs an rsync of the data (either locally or over NFS).

2015-06-26T10:12:16.685241-04:00 kernel: [196952.904761] INFO: task spl_kmem_cache/:1532 blocked for more than 120 seconds.
2015-06-26T10:12:16.685243-04:00 kernel: [196952.904827] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
2015-06-26T10:12:16.685245-04:00 kernel: [196952.904903] spl_kmem_cache/ D ffff884063e25080     0  1532      2 0x00000000
2015-06-26T10:12:16.685246-04:00 kernel: [196952.904905]  ffff884063e25080 0000000000000046 0000000000000000 ffff88406688d7c0
2015-06-26T10:12:16.685247-04:00 kernel: [196952.904907]  0000000000012740 ffff884064b93fd8 ffff884064b93fd8 0000000000012740
2015-06-26T10:12:16.685248-04:00 kernel: [196952.904908]  ffff884063e25080 0000000000012740 0000000000012740 ffff884064b92010
2015-06-26T10:12:16.685249-04:00 kernel: [196952.904910] Call Trace:
2015-06-26T10:12:16.685251-04:00 kernel: [196952.904912]  [<ffffffff8136c518>] ? __mutex_lock_common+0x10c/0x172
2015-06-26T10:12:16.685252-04:00 kernel: [196952.904913]  [<ffffffff8136c644>] ? mutex_lock+0x1a/0x2c
2015-06-26T10:12:16.685253-04:00 kernel: [196952.904916]  [<ffffffffa02d04aa>] ? spl_kmem_cache_reap_now+0x224/0x279 [spl]
2015-06-26T10:12:16.685253-04:00 kernel: [196952.904923]  [<ffffffffa04fd9dd>] ? zpl_nr_cached_objects+0x1c/0x31 [zfs]
2015-06-26T10:12:16.685254-04:00 kernel: [196952.904925]  [<ffffffff8110b12c>] ? prune_super+0x66/0x14f
2015-06-26T10:12:16.685255-04:00 kernel: [196952.904927]  [<ffffffff810cdf5c>] ? shrink_slab+0x96/0x266
2015-06-26T10:12:16.685273-04:00 kernel: [196952.904929]  [<ffffffff810cf44b>] ? do_try_to_free_pages+0x32c/0x4c9
2015-06-26T10:12:16.685276-04:00 kernel: [196952.904931]  [<ffffffff810cf84d>] ? try_to_free_pages+0xa9/0xe9
2015-06-26T10:12:16.685277-04:00 kernel: [196952.904933]  [<ffffffff810c6060>] ? __alloc_pages_nodemask+0x4ef/0x799
2015-06-26T10:12:16.685278-04:00 kernel: [196952.904937]  [<ffffffff810f2305>] ? alloc_pages_current+0xbb/0xd8
2015-06-26T10:12:16.685278-04:00 kernel: [196952.904938]  [<ffffffff810c41ba>] ? __get_free_pages+0x9/0x46
2015-06-26T10:12:16.685279-04:00 kernel: [196952.904941]  [<ffffffffa02d18aa>] ? spl_cache_grow_work+0x32/0x3bd [spl]
2015-06-26T10:12:16.685281-04:00 kernel: [196952.904944]  [<ffffffffa02d4957>] ? taskq_thread+0x2be/0x43a [spl]
2015-06-26T10:12:16.685282-04:00 kernel: [196952.904948]  [<ffffffff810472bd>] ? try_to_wake_up+0x191/0x191
2015-06-26T10:12:16.685284-04:00 kernel: [196952.904950]  [<ffffffffa02d4699>] ? task_expire+0xe5/0xe5 [spl]
2015-06-26T10:12:16.685298-04:00 kernel: [196952.904953]  [<ffffffffa02d4699>] ? task_expire+0xe5/0xe5 [spl]
2015-06-26T10:12:16.685299-04:00 kernel: [196952.904954]  [<ffffffff81064ae5>] ? kthread+0x7a/0x82
2015-06-26T10:12:16.685301-04:00 kernel: [196952.904956]  [<ffffffff81374934>] ? kernel_thread_helper+0x4/0x10
2015-06-26T10:12:16.685303-04:00 kernel: [196952.904958]  [<ffffffff81064a6b>] ? kthread_worker_fn+0x147/0x147
2015-06-26T10:12:16.685304-04:00 kernel: [196952.904960]  [<ffffffff81374930>] ? gs_change+0x13/0x13
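
For reference, a minimal sketch of how an ARC cap like the 16GB/64GB values mentioned above is typically applied on ZFS on Linux; the exact value and the modprobe.d path are illustrative assumptions, not taken from this report:

# Cap the ARC at 64 GiB on the running system (value in bytes); on some
# versions a runtime write only takes full effect after a module reload
echo $((64 * 1024 * 1024 * 1024)) > /sys/module/zfs/parameters/zfs_arc_max

# Persist the cap across reboots (path assumed; adjust for your distro)
echo "options zfs zfs_arc_max=$((64 * 1024 * 1024 * 1024))" >> /etc/modprobe.d/zfs.conf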
@dasjoe
Contributor

dasjoe commented Jun 26, 2015

I strongly recommend upgrading to v0.6.4.1 - it may include fixes for your issue.
If the issue persists on 0.6.4.1, follow @ryao's suggestion from #2240 and run
perf record -F 997 -p $PID -g -- sleep 10
and either generate the flame graph yourself or submit the resulting file.
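
For anyone unfamiliar with the flame graph step, a sketch of the full capture-and-render sequence, assuming a checkout of Brendan Gregg's FlameGraph scripts (github.com/brendangregg/FlameGraph) and that $PID is the stuck process or thread:

perf record -F 997 -p "$PID" -g -- sleep 10
perf script > out.perf
./FlameGraph/stackcollapse-perf.pl out.perf > out.folded
./FlameGraph/flamegraph.pl out.folded > out.svg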

@behlendorf
Contributor

Better yet, hold off for just a little bit and jump to 0.6.4.2, which has several additional fixes in this area.

@chjohnst
Author

Oh nice, when is 0.6.4.2 expected to be released? @dasjoe yeah, that should be useful. Basically, the rsync jobs that run are reading in a gazillion files and writing them to another path, and I see my ARC cache (which is now set to 64GB) grow very quickly. Looking at arcstat I can see mdmiss% at 100%; my guess is that after a reboot the first scan is all misses, and that is what is showing up.

In situations like this I wonder whether primarycache=metadata might be better suited to this workload.
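
A minimal sketch of what that would look like, with a hypothetical dataset name; it restricts the ARC to caching metadata only for that dataset, trading data cache hits for a smaller ARC footprint:

zfs set primarycache=metadata tank/exports
zfs get primarycache tank/exports

# watch demand metadata misses while the rsync runs (script name may vary by version)
arcstat.py 1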

@odoucet

odoucet commented Jun 27, 2015

0.6.4.2 was released yesterday :)

@chjohnst
Author

Oh cool, I'll check that out in my R&D lab!

@wellhardh

I got a similar problem with 0.6.4.2. The problem happens after running rsync. The machine is sort of usable, but needs a reboot to make ZFS work again. The metaslab_group thread looks suspicious, as ZFS has been re-entered via the SPL and kernel eviction paths.

Jul  2 12:00:06 : INFO: task kswapd0:32 blocked for more than 120 seconds.
Jul  2 12:00:06 : "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jul  2 12:00:06 : kswapd0         D ffff88023fd13680     0    32      2 0x00000000
Jul  2 12:00:06 : ffff88023082b830 0000000000000046 ffff880233a00b60 ffff88023082bfd8
Jul  2 12:00:06 : ffff88023082bfd8 ffff88023082bfd8 ffff880233a00b60 ffffffffa01d8b90
Jul  2 12:00:06 : ffffffffa01d8b94 ffff880233a00b60 00000000ffffffff ffffffffa01d8b98
Jul  2 12:00:06 : Call Trace:
Jul  2 12:00:06 : [<ffffffff8160a899>] schedule_preempt_disabled+0x29/0x70
Jul  2 12:00:06 : [<ffffffff816085e5>] __mutex_lock_slowpath+0xc5/0x1c0
Jul  2 12:00:06 : [<ffffffff81607a4f>] mutex_lock+0x1f/0x2f
Jul  2 12:00:06 : [<ffffffffa00bb265>] arc_buf_remove_ref+0xa5/0x130 [zfs]
Jul  2 12:00:06 : [<ffffffffa00c2007>] dbuf_rele_and_unlock+0x167/0x440 [zfs]
Jul  2 12:00:06 : [<ffffffffa00c2436>] dbuf_rele+0x36/0x40 [zfs]
Jul  2 12:00:06 : [<ffffffffa00de040>] dnode_rele_and_unlock+0x80/0x90 [zfs]
Jul  2 12:00:06 : [<ffffffffa00c227e>] dbuf_rele_and_unlock+0x3de/0x440 [zfs]
Jul  2 12:00:06 : [<ffffffff81600c64>] ? __slab_free+0x10e/0x277
Jul  2 12:00:06 : [<ffffffffa00c2436>] dbuf_rele+0x36/0x40 [zfs]
Jul  2 12:00:06 : [<ffffffffa00c268e>] dmu_buf_rele+0xe/0x10 [zfs]
Jul  2 12:00:06 : [<ffffffffa0102963>] sa_handle_destroy+0x73/0xc0 [zfs]
Jul  2 12:00:06 : [<ffffffffa015dea7>] zfs_zinactive+0xa7/0x180 [zfs]
Jul  2 12:00:06 : [<ffffffffa01574b4>] zfs_inactive+0x64/0x230 [zfs]
Jul  2 12:00:06 : [<ffffffffa016ef13>] zpl_evict_inode+0x43/0x60 [zfs]
Jul  2 12:00:06 : [<ffffffff811e1d67>] evict+0xa7/0x170
Jul  2 12:00:06 : [<ffffffff811e1e6e>] dispose_list+0x3e/0x50
Jul  2 12:00:06 : [<ffffffff811e2d43>] prune_icache_sb+0x163/0x320
Jul  2 12:00:06 : [<ffffffff811c9e86>] prune_super+0xd6/0x1a0
Jul  2 12:00:06 : [<ffffffff81168e75>] shrink_slab+0x165/0x300
Jul  2 12:00:06 : [<ffffffff811c0161>] ? vmpressure+0x21/0x90
Jul  2 12:00:06 : [<ffffffff8116cac1>] balance_pgdat+0x4b1/0x5e0
Jul  2 12:00:06 : [<ffffffff8116cd63>] kswapd+0x173/0x450
Jul  2 12:00:06 : [<ffffffff81098230>] ? wake_up_bit+0x30/0x30
Jul  2 12:00:06 : [<ffffffff8116cbf0>] ? balance_pgdat+0x5e0/0x5e0
Jul  2 12:00:06 : [<ffffffff8109726f>] kthread+0xcf/0xe0
Jul  2 12:00:06 : [<ffffffff810971a0>] ? kthread_create_on_node+0x140/0x140
Jul  2 12:00:06 : [<ffffffff81614158>] ret_from_fork+0x58/0x90
Jul  2 12:00:06 : [<ffffffff810971a0>] ? kthread_create_on_node+0x140/0x140
Jul  2 12:00:06 : INFO: task metaslab_group_:874 blocked for more than 120 seconds.
Jul  2 12:00:06 : "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jul  2 12:00:06 : metaslab_group_ D ffff88023fc13680     0   874      2 0x00000000
Jul  2 12:00:06 : ffff88022e763490 0000000000000046 ffff8800bae838e0 ffff88022e763fd8
Jul  2 12:00:06 : ffff88022e763fd8 ffff88022e763fd8 ffff8800bae838e0 ffff880225ced0d8
Jul  2 12:00:06 : ffff880225ced0dc ffff8800bae838e0 00000000ffffffff ffff880225ced0e0
Jul  2 12:00:06 : Call Trace:
Jul  2 12:00:06 : [<ffffffff8160a899>] schedule_preempt_disabled+0x29/0x70
Jul  2 12:00:06 : [<ffffffff816085e5>] __mutex_lock_slowpath+0xc5/0x1c0
Jul  2 12:00:06 : [<ffffffff81607a4f>] mutex_lock+0x1f/0x2f
Jul  2 12:00:06 : [<ffffffffa015de52>] zfs_zinactive+0x52/0x180 [zfs]
Jul  2 12:00:06 : [<ffffffffa01574b4>] zfs_inactive+0x64/0x230 [zfs]
Jul  2 12:00:06 : [<ffffffffa016ef13>] zpl_evict_inode+0x43/0x60 [zfs]
Jul  2 12:00:06 : [<ffffffff811e1d67>] evict+0xa7/0x170
Jul  2 12:00:06 : [<ffffffff811e1e6e>] dispose_list+0x3e/0x50
Jul  2 12:00:06 : [<ffffffff811e2d43>] prune_icache_sb+0x163/0x320
Jul  2 12:00:06 : [<ffffffff811c9e86>] prune_super+0xd6/0x1a0
Jul  2 12:00:06 : [<ffffffff81168e75>] shrink_slab+0x165/0x300
Jul  2 12:00:06 : [<ffffffff8115c08f>] ? zone_watermark_ok+0x1f/0x30
Jul  2 12:00:06 : [<ffffffff8117a9a3>] ? compaction_suitable+0xa3/0xb0
Jul  2 12:00:06 : [<ffffffff8116bfc2>] do_try_to_free_pages+0x3c2/0x4e0
Jul  2 12:00:06 : [<ffffffff8116c1dc>] try_to_free_pages+0xfc/0x180
Jul  2 12:00:06 : [<ffffffff8116085d>] __alloc_pages_nodemask+0x7fd/0xb90
Jul  2 12:00:06 : [<ffffffff8119f069>] alloc_pages_current+0xa9/0x170
Jul  2 12:00:06 : [<ffffffff811a90c5>] new_slab+0x275/0x300
Jul  2 12:00:06 : [<ffffffff816012eb>] __slab_alloc+0x315/0x48f
Jul  2 12:00:06 : [<ffffffffa000bd2a>] ? spl_kmem_cache_alloc+0xaa/0x180 [spl]
Jul  2 12:00:06 : [<ffffffff811ab703>] kmem_cache_alloc+0x193/0x1d0
Jul  2 12:00:06 : [<ffffffffa000bd2a>] ? spl_kmem_cache_alloc+0xaa/0x180 [spl]
Jul  2 12:00:06 : [<ffffffffa000bd2a>] spl_kmem_cache_alloc+0xaa/0x180 [spl]
Jul  2 12:00:06 : [<ffffffffa0163333>] zio_buf_alloc+0x23/0x30 [zfs]
Jul  2 12:00:06 : [<ffffffffa00b82c2>] arc_get_data_buf.isra.22+0x2b2/0x4a0 [zfs]
Jul  2 12:00:06 : [<ffffffffa00bb6fa>] arc_read+0x39a/0xa90 [zfs]
Jul  2 12:00:06 : [<ffffffffa00c3cfb>] ? __dbuf_hold_impl+0x24b/0x520 [zfs]
Jul  2 12:00:06 : [<ffffffffa00c423b>] dbuf_prefetch+0x1cb/0x2d0 [zfs]
Jul  2 12:00:06 : [<ffffffffa00cbb02>] dmu_prefetch+0x2d2/0x2f0 [zfs]
Jul  2 12:00:06 : [<ffffffffa011a451>] space_map_load+0xd1/0x530 [zfs]
Jul  2 12:00:06 : [<ffffffff810125c6>] ? __switch_to+0x136/0x4a0
Jul  2 12:00:06 : [<ffffffffa00fe026>] metaslab_load+0x36/0xe0 [zfs]
Jul  2 12:00:06 : [<ffffffffa00fe13f>] metaslab_preload+0x6f/0xc0 [zfs]
Jul  2 12:00:06 : [<ffffffffa000d7fe>] taskq_thread+0x1ae/0x350 [spl]
Jul  2 12:00:06 : [<ffffffff810a9500>] ? wake_up_state+0x20/0x20
Jul  2 12:00:06 : [<ffffffffa000d650>] ? taskq_cancel_id+0x140/0x140 [spl]
Jul  2 12:00:06 : [<ffffffff8109726f>] kthread+0xcf/0xe0
Jul  2 12:00:06 : [<ffffffff810971a0>] ? kthread_create_on_node+0x140/0x140
Jul  2 12:00:06 : [<ffffffff81614158>] ret_from_fork+0x58/0x90
Jul  2 12:00:06 : [<ffffffff810971a0>] ? kthread_create_on_node+0x140/0x140
Jul  2 12:00:06 : INFO: task txg_sync:879 blocked for more than 120 seconds.
Jul  2 12:00:06 : "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jul  2 12:00:06 : txg_sync        D ffff88023fd13680     0   879      2 0x00000000
Jul  2 12:00:06 : ffff880232f7bbc8 0000000000000046 ffff88022f27a220 ffff880232f7bfd8
Jul  2 12:00:06 : ffff880232f7bfd8 ffff880232f7bfd8 ffff88022f27a220 ffff8802325adc38
Jul  2 12:00:06 : ffff8802325adc00 ffff8802325adc40 ffff8802325adc28 0000000000000000
Jul  2 12:00:06 : Call Trace:
Jul  2 12:00:06 : [<ffffffff816096a9>] schedule+0x29/0x70
Jul  2 12:00:06 : [<ffffffffa0011745>] cv_wait_common+0x125/0x150 [spl]
Jul  2 12:00:06 : [<ffffffff81098230>] ? wake_up_bit+0x30/0x30
Jul  2 12:00:06 : [<ffffffffa0011785>] __cv_wait+0x15/0x20 [spl]
Jul  2 12:00:06 : [<ffffffffa00fed43>] metaslab_sync_done+0xe3/0x3f0 [zfs]
Jul  2 12:00:06 : [<ffffffffa011ff73>] vdev_sync_done+0x43/0x70 [zfs]
Jul  2 12:00:06 : [<ffffffffa010af3b>] spa_sync+0x64b/0xbc0 [zfs]
Jul  2 12:00:06 : [<ffffffff8109825b>] ? autoremove_wake_function+0x2b/0x40
Jul  2 12:00:06 : [<ffffffffa011cdae>] txg_sync_thread+0x37e/0x610 [zfs]
Jul  2 12:00:06 : [<ffffffffa011ca30>] ? txg_fini+0x2a0/0x2a0 [zfs]
Jul  2 12:00:06 : [<ffffffffa000ccc1>] thread_generic_wrapper+0x71/0x80 [spl]
Jul  2 12:00:06 : [<ffffffffa000cc50>] ? __thread_exit+0x20/0x20 [spl]
Jul  2 12:00:06 : [<ffffffff8109726f>] kthread+0xcf/0xe0
Jul  2 12:00:06 : [<ffffffff810971a0>] ? kthread_create_on_node+0x140/0x140
Jul  2 12:00:06 : [<ffffffff81614158>] ret_from_fork+0x58/0x90
Jul  2 12:00:06 : [<ffffffff810971a0>] ? kthread_create_on_node+0x140/0x140
Jul  2 12:00:06 : INFO: task rsync:6007 blocked for more than 120 seconds.
Jul  2 12:00:06 : "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jul  2 12:00:06 : rsync           D ffff88023fc13680     0  6007   5976 0x00000080
Jul  2 12:00:06 : ffff88005eecb920 0000000000000082 ffff88007034b8e0 ffff88005eecbfd8
Jul  2 12:00:06 : ffff88005eecbfd8 ffff88005eecbfd8 ffff88007034b8e0 ffffffffa01d8b90
Jul  2 12:00:06 : ffffffffa01d8b94 ffff88007034b8e0 00000000ffffffff ffffffffa01d8b98
Jul  2 12:00:06 : Call Trace:
Jul  2 12:00:06 : [<ffffffff8160a899>] schedule_preempt_disabled+0x29/0x70
Jul  2 12:00:06 : [<ffffffff816085e5>] __mutex_lock_slowpath+0xc5/0x1c0
Jul  2 12:00:06 : [<ffffffff811ab5a5>] ? kmem_cache_alloc+0x35/0x1d0
Jul  2 12:00:06 : [<ffffffff81607a4f>] mutex_lock+0x1f/0x2f
Jul  2 12:00:06 : [<ffffffffa00b54f0>] buf_hash_find+0xa0/0x150 [zfs]
Jul  2 12:00:06 : [<ffffffffa00bb476>] arc_read+0x116/0xa90 [zfs]
Jul  2 12:00:06 : [<ffffffffa00c22e0>] ? dbuf_rele_and_unlock+0x440/0x440 [zfs]
Jul  2 12:00:06 : [<ffffffffa00c336d>] dbuf_read+0x2cd/0xa10 [zfs]
Jul  2 12:00:06 : [<ffffffffa00cce80>] dmu_buf_hold+0x50/0x80 [zfs]
Jul  2 12:00:06 : [<ffffffffa013147b>] zap_lockdir+0x5b/0x920 [zfs]
Jul  2 12:00:06 : [<ffffffffa00c0739>] ? dbuf_find+0x1d9/0x1e0 [zfs]
Jul  2 12:00:06 : [<ffffffffa00c4122>] ? dbuf_prefetch+0xb2/0x2d0 [zfs]
Jul  2 12:00:06 : [<ffffffffa0131f54>] zap_cursor_retrieve+0x214/0x310 [zfs]
Jul  2 12:00:06 : [<ffffffffa00cba85>] ? dmu_prefetch+0x255/0x2f0 [zfs]
Jul  2 12:00:06 : [<ffffffffa015246e>] zfs_readdir+0x14e/0x4c0 [zfs]
Jul  2 12:00:06 : [<ffffffff811d6472>] ? path_openat+0xc2/0x490
Jul  2 12:00:06 : [<ffffffffa016d7a6>] zpl_readdir+0x76/0xc0 [zfs]
Jul  2 12:00:06 : [<ffffffff811da0c0>] ? fillonedir+0xe0/0xe0
Jul  2 12:00:06 : [<ffffffff811da0c0>] ? fillonedir+0xe0/0xe0
Jul  2 12:00:06 : [<ffffffff811d9fb0>] vfs_readdir+0xb0/0xe0
Jul  2 12:00:06 : [<ffffffff811da3d5>] SyS_getdents+0x95/0x120
Jul  2 12:00:06 : [<ffffffff81614209>] system_call_fastpath+0x16/0x1b

@wellhardh

It happened again with similar stack traces. Load is increasing but the machine is otherwise idle with no significant CPU usage, i.e. a deadlock rather than an infinite loop.

@dweeezil
Contributor

dweeezil commented Jul 3, 2015

@wellhardh Good catch. It seems we need to lock down the metaslab preload threads. In the meantime, you can likely work around it by setting metaslab_preload_enabled=0.
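
A minimal sketch of applying that workaround, both immediately and across reboots; the modprobe.d path is an assumption, adjust for your distro:

# disable metaslab preloading on the running system
echo 0 > /sys/module/zfs/parameters/metaslab_preload_enabled

# make the workaround persist across reboots
echo "options zfs metaslab_preload_enabled=0" >> /etc/modprobe.d/zfs.conf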

dweeezil added a commit to dweeezil/zfs that referenced this issue Jul 3, 2015
Reclaim during metaslab preloading can cause deadlocks involving znode
z_lock and ARC buffer header ht_lock.

Fixes openzfs#3532.
@dweeezil
Contributor

dweeezil commented Jul 3, 2015

@wellhardh Please try #3557.

@odoucet

odoucet commented Jul 3, 2015

Can this parameter be changed at any time, even under high load, without trouble (meaning there will be no locking when applying metaslab_preload_enabled=0)?

@wellhardh

@dweeezil Disabling metaslab_preload with

echo 0 > /sys/module/zfs/parameters/metaslab_preload_enabled

solved the problem. I managed to complete the rsync run and the script has been running for some days without problems.

I managed to compile your fix and it is running now. I have re-enabled metaslab_preload. The fix appears to be active:

[root@localhost ~]# grep cookie /usr/src/zfs-0.6.4.2/module/zfs/metaslab.c
                              fstrans_cookie_t cookie = spl_fstrans_mark();
                              spl_fstrans_unmark(cookie);
[root@localhost ~]# /sbin/modinfo zfs|head -9
filename:       /lib/modules/3.10.0-229.7.2.el7.x86_64/extra/zfs.ko
version:        0.6.4.2-1_g8058672
license:        CDDL
author:         OpenZFS on Linux
description:    ZFS
rhelversion:    7.1
srcversion:     39498C86CC0104F095A42CF
depends:        spl,znvpair,zcommon,zunicode,zavl
vermagic:       3.10.0-229.7.2.el7.x86_64 SMP mod_unload modversions
[root@localhost ~]# cat /sys/module/zfs/parameters/metaslab_preload_enabled
1
[root@localhost ~]#

I will report back with the results when the tests are done.

@behlendorf
Contributor

@odoucet yes, it can be changed safely at runtime.

@wellhardh the proposed fix has been merged to master if you'd rather run with that code.

janlam7 pushed a commit to janlam7/zfs that referenced this issue Jul 6, 2015
Reclaim during metaslab preloading can cause deadlocks involving znode
z_lock and ARC buffer header ht_lock.

Signed-off-by: Tim Chase <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes openzfs#3532.
@behlendorf behlendorf added this to the 0.6.5 milestone Jul 6, 2015