This repository has been archived by the owner on Feb 26, 2020. It is now read-only.

Use tsd to store tq for taskq_member #514

Closed

Conversation

behlendorf
Contributor

To prevent taskq_member() from holding tq_lock and doing a linear search, which
causes contention, we store the taskq pointer to which a thread belongs in TSD.
This way taskq_member() does not need to touch tq_lock at all, and because TSD
uses a per-slot spinlock, contention should be greatly reduced.

Signed-off-by: Chunwei Chen [email protected]
Signed-off-by: Brian Behlendorf [email protected]
Issue #500
Issue #504
Issue #505
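
For illustration, here is a minimal C sketch of the TSD-based check described above. It is not the patch itself: the names taskq_tsd, taskq_tsd_init(), and taskq_thread_enter() are assumptions chosen for the example, and the handling of threads other than curthread is simplified away.

#include <sys/taskq.h>
#include <sys/tsd.h>
#include <sys/debug.h>

/* TSD key shared by all taskqs, registered once at module init. */
static uint_t taskq_tsd;

/* Hypothetical init hook, e.g. called from spl_taskq_init(). */
void
taskq_tsd_init(void)
{
	tsd_create(&taskq_tsd, NULL);
}

/* Each taskq worker thread records its owning taskq in its own TSD slot. */
static void
taskq_thread_enter(taskq_t *tq)
{
	(void) tsd_set(taskq_tsd, tq);
}

/*
 * Membership check becomes a TSD lookup protected by a per-slot lock,
 * instead of taking tq_lock and scanning the taskq's thread list.
 */
int
taskq_member(taskq_t *tq, kthread_t *t)
{
	ASSERT(t == curthread);		/* simplification for this sketch */
	return (tsd_get(taskq_tsd) == (void *)tq);
}

Callers keep the same taskq_member(tq, curthread) interface; only the lookup underneath changes.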

@tuxoko
Contributor

tuxoko commented Dec 16, 2015

@behlendorf Nice catch. But shouldn't spl_kmem_cache_init be after spl_taskq_init?

@behlendorf
Contributor Author

@tuxoko technically yes, but in practice it should be safe because spl_taskq_init() doesn't initialize anything critical for taskq_create() to work properly. I thought about reordering it, but it felt a little out of scope.

@tuxoko
Contributor

tuxoko commented Dec 16, 2015

@behlendorf It needs tsd_create() in spl_taskq_init().

@behlendorf
Contributor Author

Ack, so it does. OK, I'll refresh this and correct the ordering.
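
For reference, a hedged sketch of the corrected ordering (the function names mirror the SPL module-init path; the real spl_init() also unwinds earlier steps when a later one fails, which is omitted here for brevity):

/* Fragment of module.c's init path - illustrative only. */
static int __init
spl_init(void)
{
	int rc;

	if ((rc = spl_tsd_init()))
		return (rc);

	/*
	 * Registers the taskq TSD key via tsd_create(), so it must run
	 * before anything that creates taskqs.
	 */
	if ((rc = spl_taskq_init()))
		return (rc);

	/* May call taskq_create(), which now relies on the TSD key above. */
	if ((rc = spl_kmem_cache_init()))
		return (rc);

	return (0);
}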

To prevent taskq_member() from holding tq_lock and doing a linear search, which
causes contention, we store the taskq pointer to which a thread belongs in TSD.
This way taskq_member() does not need to touch tq_lock at all, and because TSD
uses a per-slot spinlock, contention should be greatly reduced.

Signed-off-by: Chunwei Chen <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Issue openzfs#500
Issue openzfs#504
Issue openzfs#505
@behlendorf
Contributor Author

Refreshed, and requeued for testing with openzfs/zfs#4108.

@sempervictus
Contributor

There be dragons here, the lurking data-corrupting type.
It seems that under some conditions the taskq cannot be found to dispatch operations, and some data seems not to get written. In my case this manifested as a heavy crash in send/recv and apparently prior non-commits under ecryptfs (my home directory) which I'm still sorting out. All sorts of caches are corrupt, and accessing them results in a hard lock in the call stack between ZFS and ecryptfs. We really need in-dataset crypto...

Here's the original crash, and the subsequent crashes it caused in uksmd (which only works on userspace memory, so it should not have been affected) and, as far as I can tell, in some internal kernel cache around ecryptfs:

[Sat Dec 19 03:06:21 2015] ZFS: async work taskq for pool rpool@20151219-0306 not found; failed to dispatch work op 2
[Sat Dec 19 03:08:24 2015] INFO: rcu_sched self-detected stall on CPU
[Sat Dec 19 03:08:24 2015]  3: (59999 ticks this GP) idle=77d/140000000000001/0 softirq=14379/14380 fqs=19993 
[Sat Dec 19 03:08:24 2015]   (t=60000 jiffies g=27283 c=27282 q=3688)
[Sat Dec 19 03:08:24 2015] Task dump for CPU 3:
[Sat Dec 19 03:08:24 2015] Cache2 I/O      R  running task        0 10991   4530 0x00000008
[Sat Dec 19 03:08:24 2015]  ffffffff82c55ec0 ffff88083f0c3dd8 ffffffff8209ed9f 0000000000000003
[Sat Dec 19 03:08:24 2015]  ffffffff82c55ec0 ffff88083f0c3df0 ffffffff8209eee6 0000000000000004
[Sat Dec 19 03:08:24 2015]  ffff88083f0c3e20 ffffffff820cd4fa ffff88083f0d6f80 ffffffff82c55ec0
[Sat Dec 19 03:08:24 2015] Call Trace:
[Sat Dec 19 03:08:24 2015]  <IRQ>  [<ffffffff8209ed9f>] sched_show_task+0xaf/0x110
[Sat Dec 19 03:08:24 2015]  [<ffffffff8209eee6>] dump_cpu_task+0x36/0x40
[Sat Dec 19 03:08:24 2015]  [<ffffffff820cd4fa>] rcu_dump_cpu_stacks+0x8a/0xc0
[Sat Dec 19 03:08:24 2015]  [<ffffffff820d0bac>] rcu_check_callbacks+0x46c/0x750
[Sat Dec 19 03:08:24 2015]  [<ffffffff820dc8fa>] ? update_wall_time+0x23a/0x650
[Sat Dec 19 03:08:24 2015]  [<ffffffff820e48a0>] ? tick_sched_do_timer+0x30/0x30
[Sat Dec 19 03:08:24 2015]  [<ffffffff820d5fa9>] update_process_times+0x39/0x60
[Sat Dec 19 03:08:24 2015]  [<ffffffff820e42f6>] tick_sched_handle.isra.15+0x36/0x50
[Sat Dec 19 03:08:24 2015]  [<ffffffff820e48dd>] tick_sched_timer+0x3d/0x70
[Sat Dec 19 03:08:24 2015]  [<ffffffff820d6976>] __hrtimer_run_queues+0xd6/0x1d0
[Sat Dec 19 03:08:24 2015]  [<ffffffff820d6d78>] hrtimer_interrupt+0xa8/0x1a0
[Sat Dec 19 03:08:24 2015]  [<ffffffff820484b5>] local_apic_timer_interrupt+0x35/0x60
[Sat Dec 19 03:08:24 2015]  [<ffffffff827993ed>] smp_apic_timer_interrupt+0x3d/0x60
[Sat Dec 19 03:08:24 2015]  [<ffffffff82797252>] apic_timer_interrupt+0x82/0x90
[Sat Dec 19 03:08:24 2015]  <EOI>  [<ffffffffc03f6c7c>] ? zap_leaf_lookup_closest+0x8c/0x190 [zfs]
[Sat Dec 19 03:08:24 2015]  [<ffffffffc03f5867>] fzap_cursor_retrieve+0xb7/0x240 [zfs]
[Sat Dec 19 03:08:24 2015]  [<ffffffffc03f8ddc>] zap_cursor_retrieve+0x5c/0x210 [zfs]
[Sat Dec 19 03:08:24 2015]  [<ffffffffc03936d5>] ? dmu_prefetch+0x125/0x190 [zfs]
[Sat Dec 19 03:08:24 2015]  [<ffffffffc0415d6c>] zfs_readdir+0x14c/0x480 [zfs]
[Sat Dec 19 03:08:24 2015]  [<ffffffffc0417c84>] ? zfs_getattr_fast+0x124/0x1c0 [zfs]
[Sat Dec 19 03:08:24 2015]  [<ffffffff821d6dfc>] ? vfs_getattr_nosec+0x2c/0x40
[Sat Dec 19 03:08:24 2015]  [<ffffffff827945b2>] ? mutex_lock+0x12/0x2f
[Sat Dec 19 03:08:24 2015]  [<ffffffffc0430202>] zpl_iterate+0x52/0x80 [zfs]
[Sat Dec 19 03:08:24 2015]  [<ffffffff821e4f3a>] iterate_dir+0x9a/0x120
[Sat Dec 19 03:08:24 2015]  [<ffffffffc18de54c>] ecryptfs_readdir+0x6c/0xc0 [ecryptfs]
[Sat Dec 19 03:08:24 2015]  [<ffffffffc18de5a0>] ? ecryptfs_readdir+0xc0/0xc0 [ecryptfs]
[Sat Dec 19 03:08:24 2015]  [<ffffffff8230a343>] ? security_file_permission+0xa3/0xc0
[Sat Dec 19 03:08:24 2015]  [<ffffffff821e4f3a>] iterate_dir+0x9a/0x120
[Sat Dec 19 03:08:24 2015]  [<ffffffff8209116c>] ? task_work_run+0x7c/0x90
[Sat Dec 19 03:08:24 2015]  [<ffffffff821e53a9>] SyS_getdents+0x89/0xf0
[Sat Dec 19 03:08:24 2015]  [<ffffffff821e5090>] ? fillonedir+0xd0/0xd0
[Sat Dec 19 03:08:24 2015]  [<ffffffff82003a50>] ? syscall_return_slowpath+0x50/0x120
[Sat Dec 19 03:08:24 2015]  [<ffffffff827964b6>] entry_SYSCALL_64_fastpath+0x16/0x75
[Sat Dec 19 03:10:28 2015] INFO: task uksmd:163 blocked for more than 180 seconds.
[Sat Dec 19 03:10:28 2015]       Tainted: P           OE   4.3.2-sv-i7 #sv
[Sat Dec 19 03:10:28 2015] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[Sat Dec 19 03:10:28 2015] uksmd           D ffffffff820984f4     0   163      2 0x00000000
[Sat Dec 19 03:10:28 2015]  ffff8808353cbc28 0000000000000046 ffff8808353cbb98 000000010040003f
[Sat Dec 19 03:10:28 2015]  ffffea001b636000 0000000000000000 ffff88083f299480 ffff88083ec03b00
[Sat Dec 19 03:10:28 2015]  ffffffff82eeff40 000000000000000a ffff88083bb6b600 ffff880097f1b600
[Sat Dec 19 03:10:28 2015] Call Trace:
[Sat Dec 19 03:10:28 2015]  [<ffffffff8209a0f9>] ? best_mask_cpu+0xa9/0x260
[Sat Dec 19 03:10:28 2015]  [<ffffffff82792a93>] schedule+0x33/0xb0
[Sat Dec 19 03:10:28 2015]  [<ffffffff82795379>] schedule_timeout+0x1c9/0x230
[Sat Dec 19 03:10:28 2015]  [<ffffffff8209bdca>] ? try_to_wake_up+0x1ea/0x4c0
[Sat Dec 19 03:10:28 2015]  [<ffffffff82793364>] wait_for_completion+0xa4/0x110
[Sat Dec 19 03:10:28 2015]  [<ffffffff8209c0e0>] ? wake_up_process+0x40/0x40
[Sat Dec 19 03:10:28 2015]  [<ffffffff8208c107>] flush_work+0xf7/0x170
[Sat Dec 19 03:10:28 2015]  [<ffffffff8208a150>] ? destroy_worker+0x90/0x90
[Sat Dec 19 03:10:28 2015]  [<ffffffff8216643e>] lru_add_drain_all+0x12e/0x170
[Sat Dec 19 03:10:28 2015]  [<ffffffff821aeb41>] uksm_do_scan+0x4e1/0x18d0
[Sat Dec 19 03:10:28 2015]  [<ffffffff820d4170>] ? trace_event_raw_event_tick_stop+0xc0/0xc0
[Sat Dec 19 03:10:28 2015]  [<ffffffff821b006f>] uksm_scan_thread+0x13f/0x180
[Sat Dec 19 03:10:28 2015]  [<ffffffff821aff30>] ? uksm_do_scan+0x18d0/0x18d0
[Sat Dec 19 03:10:28 2015]  [<ffffffff821aff30>] ? uksm_do_scan+0x18d0/0x18d0
[Sat Dec 19 03:10:28 2015]  [<ffffffff82092949>] kthread+0xc9/0xe0
[Sat Dec 19 03:10:28 2015]  [<ffffffff82092880>] ? kthread_park+0x60/0x60
[Sat Dec 19 03:10:28 2015]  [<ffffffff8279684f>] ret_from_fork+0x3f/0x70
[Sat Dec 19 03:10:28 2015]  [<ffffffff82092880>] ? kthread_park+0x60/0x60
[Sat Dec 19 03:11:24 2015] INFO: rcu_sched self-detected stall on CPU
[Sat Dec 19 03:11:24 2015]  3: (240002 ticks this GP) idle=77d/140000000000001/0 softirq=14379/14380 fqs=79991 
[Sat Dec 19 03:11:24 2015]   (t=240003 jiffies g=27283 c=27282 q=91492)
[Sat Dec 19 03:11:24 2015] Task dump for CPU 3:
[Sat Dec 19 03:11:24 2015] Cache2 I/O      R  running task        0 10991   4530 0x00000008
[Sat Dec 19 03:11:24 2015]  ffffffff82c55ec0 ffff88083f0c3dd8 ffffffff8209ed9f 0000000000000003
[Sat Dec 19 03:11:24 2015]  ffffffff82c55ec0 ffff88083f0c3df0 ffffffff8209eee6 0000000000000004
[Sat Dec 19 03:11:24 2015]  ffff88083f0c3e20 ffffffff820cd4fa ffff88083f0d6f80 ffffffff82c55ec0
[Sat Dec 19 03:11:24 2015] Call Trace:
[Sat Dec 19 03:11:24 2015]  <IRQ>  [<ffffffff8209ed9f>] sched_show_task+0xaf/0x110
[Sat Dec 19 03:11:24 2015]  [<ffffffff8209eee6>] dump_cpu_task+0x36/0x40
[Sat Dec 19 03:11:24 2015]  [<ffffffff820cd4fa>] rcu_dump_cpu_stacks+0x8a/0xc0
[Sat Dec 19 03:11:24 2015]  [<ffffffff820d0bac>] rcu_check_callbacks+0x46c/0x750
[Sat Dec 19 03:11:24 2015]  [<ffffffff820dc8fa>] ? update_wall_time+0x23a/0x650
[Sat Dec 19 03:11:24 2015]  [<ffffffff820e48a0>] ? tick_sched_do_timer+0x30/0x30
[Sat Dec 19 03:11:24 2015]  [<ffffffff820d5fa9>] update_process_times+0x39/0x60
[Sat Dec 19 03:11:24 2015]  [<ffffffff820e42f6>] tick_sched_handle.isra.15+0x36/0x50
[Sat Dec 19 03:11:24 2015]  [<ffffffff820e48dd>] tick_sched_timer+0x3d/0x70
[Sat Dec 19 03:11:24 2015]  [<ffffffff820d6976>] __hrtimer_run_queues+0xd6/0x1d0
[Sat Dec 19 03:11:24 2015]  [<ffffffff820d6d78>] hrtimer_interrupt+0xa8/0x1a0
[Sat Dec 19 03:11:24 2015]  [<ffffffff820484b5>] local_apic_timer_interrupt+0x35/0x60
[Sat Dec 19 03:11:24 2015]  [<ffffffff827993ed>] smp_apic_timer_interrupt+0x3d/0x60
[Sat Dec 19 03:11:24 2015]  [<ffffffff82797252>] apic_timer_interrupt+0x82/0x90
[Sat Dec 19 03:11:24 2015]  <EOI>  [<ffffffffc03f6cc2>] ? zap_leaf_lookup_closest+0xd2/0x190 [zfs]
[Sat Dec 19 03:11:24 2015]  [<ffffffffc03f5867>] fzap_cursor_retrieve+0xb7/0x240 [zfs]
[Sat Dec 19 03:11:24 2015]  [<ffffffffc03f8ddc>] zap_cursor_retrieve+0x5c/0x210 [zfs]
[Sat Dec 19 03:11:24 2015]  [<ffffffffc03936d5>] ? dmu_prefetch+0x125/0x190 [zfs]
[Sat Dec 19 03:11:24 2015]  [<ffffffffc0415d6c>] zfs_readdir+0x14c/0x480 [zfs]
[Sat Dec 19 03:11:24 2015]  [<ffffffffc0417c84>] ? zfs_getattr_fast+0x124/0x1c0 [zfs]
[Sat Dec 19 03:11:24 2015]  [<ffffffff821d6dfc>] ? vfs_getattr_nosec+0x2c/0x40
[Sat Dec 19 03:11:24 2015]  [<ffffffff827945b2>] ? mutex_lock+0x12/0x2f
[Sat Dec 19 03:11:24 2015]  [<ffffffffc0430202>] zpl_iterate+0x52/0x80 [zfs]
[Sat Dec 19 03:11:24 2015]  [<ffffffff821e4f3a>] iterate_dir+0x9a/0x120
[Sat Dec 19 03:11:24 2015]  [<ffffffffc18de54c>] ecryptfs_readdir+0x6c/0xc0 [ecryptfs]
[Sat Dec 19 03:11:24 2015]  [<ffffffffc18de5a0>] ? ecryptfs_readdir+0xc0/0xc0 [ecryptfs]
[Sat Dec 19 03:11:24 2015]  [<ffffffff8230a343>] ? security_file_permission+0xa3/0xc0
[Sat Dec 19 03:11:24 2015]  [<ffffffff821e4f3a>] iterate_dir+0x9a/0x120
[Sat Dec 19 03:11:24 2015]  [<ffffffff8209116c>] ? task_work_run+0x7c/0x90
[Sat Dec 19 03:11:24 2015]  [<ffffffff821e53a9>] SyS_getdents+0x89/0xf0
[Sat Dec 19 03:11:24 2015]  [<ffffffff821e5090>] ? fillonedir+0xd0/0xd0
[Sat Dec 19 03:11:24 2015]  [<ffffffff82003a50>] ? syscall_return_slowpath+0x50/0x120
[Sat Dec 19 03:11:24 2015]  [<ffffffff827964b6>] entry_SYSCALL_64_fastpath+0x16/0x75

After that the scheduler loses its mind, and everything goes south pretty quickly to a hard lock (usually sysrq can still reboot, but not always).

After a reboot, it keeps throwing the following and proceeds to kill the entire system:

[  187.425033] Cache2 I/O      R  running task        0  8090   4575 0x00000008
[  187.425033]  ffffffff82c55ec0 ffff88083f083dd8 ffffffff8209ed9f 0000000000000002
[  187.425033]  ffffffff82c55ec0 ffff88083f083df0 ffffffff8209eee6 0000000000000003
[  187.425033]  ffff88083f083e20 ffffffff820cd4fa ffff88083f096f80 ffffffff82c55ec0
[  187.425033] Call Trace:
[  187.425033]  <IRQ>  [<ffffffff8209ed9f>] sched_show_task+0xaf/0x110
[  187.425033]  [<ffffffff8209eee6>] dump_cpu_task+0x36/0x40
[  187.425033]  [<ffffffff820cd4fa>] rcu_dump_cpu_stacks+0x8a/0xc0
[  187.425033]  [<ffffffff820d0bac>] rcu_check_callbacks+0x46c/0x750
[  187.425033]  [<ffffffff820dc8fa>] ? update_wall_time+0x23a/0x650
[  187.425033]  [<ffffffff820e48a0>] ? tick_sched_do_timer+0x30/0x30
[  187.425033]  [<ffffffff820d5fa9>] update_process_times+0x39/0x60
[  187.425033]  [<ffffffff820e42f6>] tick_sched_handle.isra.15+0x36/0x50
[  187.425033]  [<ffffffff820e48dd>] tick_sched_timer+0x3d/0x70
[  187.425033]  [<ffffffff820d6976>] __hrtimer_run_queues+0xd6/0x1d0
[  187.425033]  [<ffffffff820d6d78>] hrtimer_interrupt+0xa8/0x1a0
[  187.425033]  [<ffffffff820484b5>] local_apic_timer_interrupt+0x35/0x60
[  187.425033]  [<ffffffff827993ed>] smp_apic_timer_interrupt+0x3d/0x60
[  187.425033]  [<ffffffff82797252>] apic_timer_interrupt+0x82/0x90
[  187.425033]  <EOI>  [<ffffffffc03c4142>] ? zap_leaf_lookup_closest+0xd2/0x190 [zfs]
[  187.425033]  [<ffffffffc03c2ce7>] fzap_cursor_retrieve+0xb7/0x240 [zfs]
[  187.425033]  [<ffffffffc03c625c>] zap_cursor_retrieve+0x5c/0x210 [zfs]
[  187.425033]  [<ffffffffc03606d5>] ? dmu_prefetch+0x125/0x190 [zfs]
[  187.425033]  [<ffffffffc03e324c>] zfs_readdir+0x14c/0x480 [zfs]
[  187.425033]  [<ffffffffc03e5164>] ? zfs_getattr_fast+0x124/0x1c0 [zfs]
[  187.425033]  [<ffffffff821d6dfc>] ? vfs_getattr_nosec+0x2c/0x40
[  187.425033]  [<ffffffff827945b2>] ? mutex_lock+0x12/0x2f
[  187.425033]  [<ffffffffc03fd082>] zpl_iterate+0x52/0x80 [zfs]
[  187.425033]  [<ffffffff821e4f3a>] iterate_dir+0x9a/0x120
[  187.425033]  [<ffffffffc192154c>] ecryptfs_readdir+0x6c/0xc0 [ecryptfs]
[  187.425033]  [<ffffffffc19215a0>] ? ecryptfs_readdir+0xc0/0xc0 [ecryptfs]
[  187.425033]  [<ffffffff8230a343>] ? security_file_permission+0xa3/0xc0
[  187.425033]  [<ffffffff821e4f3a>] iterate_dir+0x9a/0x120
[  187.425033]  [<ffffffff8209116c>] ? task_work_run+0x7c/0x90
[  187.425033]  [<ffffffff821e53a9>] SyS_getdents+0x89/0xf0
[  187.425033]  [<ffffffff821e5090>] ? fillonedir+0xd0/0xd0
[  187.425033]  [<ffffffff82003a50>] ? syscall_return_slowpath+0x50/0x120
[  187.425033]  [<ffffffff827964b6>] entry_SYSCALL_64_fastpath+0x16/0x75

Earlier working builds of ZFS do not fix this - there's some damage at the data layer, but scrub says everything is OK in ZFS. Manually pruning caches and SQLite DBs from Firefox and other apps that keep things in my home directory seems to stop the issue from resurfacing, but it is a bit of a PITA to lose all the URL completions and histories from browsers/editors. C'est la vie; my own fault for using myself as a crash-test dummy.

Can we add ecryptfs testing to the buildbots? This isn't the first time ecryptfs has surfaced some strange new error in ZoL, and the repairs can get a bit painful, snapshots and all (since mounting a snapshot doesn't really help access the encrypted data within when the user is logged into the live home dir with the same keys).

Dynamic threads were enabled in SPL when this hit and have been disabled since, but I'm not too eager to test that build until I have some more time to blow up a VM or ten. Anyone else seeing strange things happen with this patch?

@tuxoko
Contributor

tuxoko commented Dec 20, 2015

@sempervictus
"ZFS: async work taskq for pool rpool@20151219-0306 not found; failed to dispatch work op 2"
Where does this come from?

@sempervictus
Contributor

dmesg doesn't have a precursor to it, but I was doing a send/recv over SSH at the time.

@behlendorf
Contributor Author

@sempervictus thanks for the heads up, but let's determine whether it is in fact this patch. The "ZFS: async work taskq" error message referenced by @tuxoko doesn't appear in the master source or the release branches. Since that looks like it may in fact be the cause, we're curious which of the patches you've applied it's part of.

@sempervictus
Contributor

Taking apart my changelogs, I see SPL as:

  * origin/pr/391
  ** Add support for signing kernel modules
  * origin/pr/512
  ** Add spl_kmem_cache_kmem_threads man page entry
  * origin/pr/513
  ** kobj_read_file: Return -1 on vn_rdwr() error
  * master @ cb877e0ff2648085c56ab78f15740f2b64bab849

and ZFS as:

  * origin/pr/2012
  ** Add option to zpool status to print guids
  * origin/pr/3166
  ** Make linking with and finding libblkid required
  * origin/pr/3169
  ** Add dfree_zfs for changing how Samba reports space
  * origin/pr/3574
  ** 5745 zfs set allows only one dataset property to be set at a time
  * origin/pr/3643
  ** Remove fastwrite mutex
  * origin/pr/3830
  ** zfsonlinux issue #3681 - lock order inversion between zvol_open() and
  * origin/pr/3984
  ** Illumos 6292 exporting a pool while an async destroy is running can leave entries in the deferred tree
  * origin/pr/3985
  ** Illumos 6319 assertion failed in zio_ddt_write: bp->blk_birth == txg
  * origin/pr/3987
  ** Illumos 6288 dmu_buf_will_dirty could be faster
  * origin/pr/3988
  ** Illumos 6171 dsl_prop_unregister() slows down dataset eviction.
  * origin/pr/4081
  ** Describe the fragmentation metric in zpool.8
  * origin/pr/4104
  ** kobj_read_file: Return -1 on vn_rdwr() error
  * origin/pr/4123
  ** Fix empty xattr dir causing lockup
  * origin/pr/4124
  ** Add zfs_object_mutex_size module option
  * tuxoko/abd_next
  ** Fix wrong assertion after the sg merge patch
  * master @ 76d5bf196cf6e5625f884a9ebbdaf53873a5a979

As you can see, I wasn't kidding about the crash-test piece - it had built and worked fine in a VM for a few hours. Just more proof that gremlins prefer eating real data. I can pull up the branch when I free up a bit and try to hunt down the commit which introduced that message. Thank Git we have revision history (and snapshots).

@kernelOfTruth

@tuxoko most probably from openzfs/zfs#3830 issues #2217, #3681 - set of commits dealing with zvol__minor_() processing

specifically commit: bprotopopov/zfs@3586fa9 zfsonlinux issue #3681 - lock order inversion between zvol_open() and dsl_pool_sync()...zvol_rename_minors()

@kernelOfTruth

Up-to-date test results from the buildbots should be available soon via

openzfs/zfs#4131 [rebase + buildbot retest, pull #3830] issues #2217, #3681 - set of commits dealing with zvol__minor_() processing

@behlendorf
Contributor Author

Did we determine if there's actually a problem with this patch? Or was it from an unrelated commit?

@tuxoko
Contributor

tuxoko commented Jan 12, 2016

I don't think the above discussion is related to this pull request.

@kernelOfTruth

No problems observed so far - using it in: https://github.com/kernelOfTruth/spl/commits/spl_kOT_08.01.2016

@behlendorf
Contributor Author

Merged, the reported issue is unrelated.

16522ac Use tsd to store tq for taskq_member
