RC8 release - rsync crash #642
Did you compile your kernel with CONFIG_PREEMPT_VOLUNTARY? It looks like this was caused by that.
Let me elaborate on my previous reply. Everything "kernel" came from a yum update. I am presuming that if CONFIG_PREEMPT_VOLUNTARY=y is set in the development RPM, then it is truly set in the kernel. I did not compile the kernel myself.
You might be able to do
Noted that this reply did not make it into the issue notes last Friday. It read: "No such file in /proc but:"
That file describes how your kernel is configured. It confirms that your kernel is compiled in a way known to cause problems such as the one you posted.
Yes... my thinking is inverted here. I just searched the phrase in all the issues and finally realized that. I should actually force CONFIG_PREEMPT_VOLUNTARY=N as opposed to 'not set'. Just to make sure I have no more flawed logic, what is the current recommendation for the CONFIG_PREEMPT_XXX flags? Here are the current kernel defaults. I will recompile.
Use menuconfig to set "Preemption Model (No Forced Preemption (Server))". Here is what effect that has on your .config file, although I recommend using menuconfig rather than setting this by hand:

zgrep PREEMPT /proc/config.gz
# CONFIG_PREEMPT_RCU is not set
CONFIG_PREEMPT_NOTIFIERS=y
# CONFIG_PREEMPT_VOLUNTARY is not set
# CONFIG_PREEMPT is not set
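As a sketch of the check being described above, a small helper can classify a kernel config blob such as the output of `zgrep PREEMPT /proc/config.gz`. The `check_preempt` function is a name invented for this example and is not part of any tool mentioned in this thread:

```shell
#!/bin/sh
# Hypothetical helper: classify a kernel's preemption model from its
# config text. A CONFIG_PREEMPT=y or CONFIG_PREEMPT_VOLUNTARY=y line
# indicates a preemptible build, which this thread associates with the
# hangs; anything else is the "No Forced Preemption (Server)" model.
check_preempt() {
    case "$1" in
        *CONFIG_PREEMPT=y*|*CONFIG_PREEMPT_VOLUNTARY=y*) echo preemptible ;;
        *) echo server ;;
    esac
}

# Typical use on a running kernel (requires CONFIG_IKCONFIG_PROC):
#   check_preempt "$(zgrep PREEMPT /proc/config.gz)"
```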
I'm not sure if this matters, but did you see commit #1f0d8a5?
For CentOS, my new kernel shows:

# CONFIG_PREEMPT_VOLUNTARY is not set
# CONFIG_PREEMPT is not set
[root@tsdpl-bu boot]
@tstudios make sure that you have the patches that were committed today.
Just had a crash with the new kernel. It did not pick up the patches committed 04/11/12. However, the symptom I'm seeing with rsync apparently was fixed in a patch between -rc6 and -rc7, which was the code I was running. Should I include the last crash trace here?
It would not hurt to post the crash trace, but I do suggest updating to the latest code. Your issue might have been fixed in it.
The trace is below. I loaded the system with many rsync streams and watched free -om. Just today, I was doing a search using the keywords ARC and "INFO: task kswapd0:82 blocked for more than 120 seconds".
Ahhh... I see zfsonlinux-zfs-0.6.0-rc8-15-gcf81b00.tar.gz when I click. I do appreciate your looking at the trace. Seems like rsync is one of
@tstudios I had a similar issue with crashes when doing large rsyncs on my server that had 16GB of RAM. It only occurred with the patch in issue #618, but the patch in issue #660 appears to have fixed it. You should be able to achieve the same result with the current code by setting zfs_arc_max when the module is loaded. I suggest trying that. You could use a size of 1/4 system memory for the sake of using a round number.
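The sizing suggestion above can be sketched as a short script. Only the `zfs_arc_max` module option itself comes from the thread; the `arc_max_bytes` helper is a name made up for this example:

```shell
#!/bin/sh
# Sketch: derive a zfs_arc_max value as a fraction of system RAM.
# /proc/meminfo reports MemTotal in kB; zfs_arc_max is in bytes.
# arc_max_bytes <mem_kb> <denominator> -> mem_kb * 1024 / denominator
arc_max_bytes() {
    echo $(( $1 * 1024 / $2 ))
}

# Falls back to 0 on systems without /proc/meminfo.
mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo 2>/dev/null || echo 0)

# 1/4 of RAM, as suggested above; this prints the modprobe option line.
echo "options zfs zfs_arc_max=$(arc_max_bytes "$mem_kb" 4)"
# e.g. write that line to /etc/modprobe.d/zfs.conf (assumption: the conf
# file path used later in this thread) and reload the zfs module.
```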
Yes, thanks. I'm running 4 streams right now on a job that needs to
The setting: options zfs zfs_arc_max=8589934592 zfs_arc_min=0 in
All my rsyncs ran just fine this morning. However, there was not much
(In reply to: "@tstudios It probably would be okay to set zfs_arc_max to a few hundred")
@tstudios Please keep this open until a commit has been made to the Git repository to address this issue. Also, would you try testing with zfs_arc_max set to exactly 1/2 of your system RAM? @behlendorf plans to merge the fix for this into zfsonlinux HEAD, but he would prefer to use 1/2 rather than 1/3. I will not have time to test that until next week.
I will do the test. It may be early tomorrow. I still have 1 large new
The value is in bytes.
I was looking to see if you wanted mathematical
One half of what /proc/meminfo reports.
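That calculation can be sketched as follows. The `half_mem_bytes` helper is a name invented for this example; the thread only specifies "1/2 of what /proc/meminfo reports", expressed in bytes:

```shell
#!/bin/sh
# Sketch: compute 1/2 of MemTotal in bytes, suitable for zfs_arc_max.
# MemTotal in /proc/meminfo is in kB, so multiply by 1024 for bytes.
half_mem_bytes() {
    mem_kb=$(awk '/^MemTotal:/ {print $2}' "${1:-/proc/meminfo}")
    echo $(( mem_kb * 1024 / 2 ))
}

# Usage: half_mem_bytes             # live system
#        half_mem_bytes ./meminfo   # any file in /proc/meminfo format
```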
I was not able to reboot this morning after the modprobe.d/zfs.conf
Set zfs_arc_max=16437974000, which is 1/2 of what MemTotal reports in
Sounds promising. Thanks for the update; unless I hear otherwise, I'm planning to change the default zfs_arc_max value to 1/2 of total system memory when I merge the other VM changes.
Excellent! Glad to provide a test bed, as much as I can on a
I should have some 'data churn' over the weekend. That will give my
The simultaneous launch of 39 streams to backup "client" systems to my
Right now the "backup" server is rsyncing the "primary" server with 4
Replaced rc6 code with patches today. Multiple rsync streams to the pool are the current operations. CentOS 6.0 kernel: Linux tsdpl.turner.com 2.6.32-71.29.1.el6.x86_64 #1 SMP Mon Jun 27 19:49:27 BST 2011 x86_64 x86_64 x86_64 GNU/Linux. SPL and ZFS RPMs made on a like kernel on a like OS. Installed there and here after rpm -e of all old spl and zfs modules. dmesg output below:
[root@tsdpl ~]# cat /root/dmesg.txt
INFO: task kswapd0:82 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
kswapd0 D ffffffffffffffff 0 82 2 0x00000000
ffff8808083e7ab0 0000000000000046 ffff8808083e7a40 ffffffffa0439c24
ffff8808083e7a40 ffff8807b71a1e70 0000000000000000 ffffffff81013c8e
ffff8808083e5a98 ffff8808083e7fd8 0000000000010518 ffff8808083e5a98
Call Trace:
[] ? arc_buf_remove_ref+0xd4/0x120 [zfs]
[] ? apic_timer_interrupt+0xe/0x20
[] __mutex_lock_slowpath+0x13e/0x180
[] mutex_lock+0x2b/0x50
[] zfs_zinactive+0x7e/0x110 [zfs]
[] zfs_inactive+0x87/0x200 [zfs]
[] zpl_clear_inode+0xe/0x10 [zfs]
[] clear_inode+0x8f/0x110
[] dispose_list+0x40/0x120
[] shrink_icache_memory+0x274/0x2e0
[] shrink_slab+0x13a/0x1a0
[] balance_pgdat+0x54e/0x770
[] ? isolate_pages_global+0x0/0x380
[] kswapd+0x134/0x390
[] ? autoremove_wake_function+0x0/0x40
[] ? kswapd+0x0/0x390
[] kthread+0x96/0xa0
[] child_rip+0xa/0x20
[] ? kthread+0x0/0xa0
[] ? child_rip+0x0/0x20
INFO: task rsync:3905 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
rsync D ffffffffffffffff 0 3905 3774 0x00000080
ffff88064d80f268 0000000000000086 0000000000000000 ffff8801161ce2d8
ffff88043dca01e8 ffffffffffffff10 ffffffff81013c8e ffff88064d80f268
ffff8808064e3068 ffff88064d80ffd8 0000000000010518 ffff8808064e3068
Call Trace:
[] ? apic_timer_interrupt+0xe/0x20
[] ? mutex_spin_on_owner+0x9b/0xc0
[] __mutex_lock_slowpath+0x13e/0x180
[] mutex_lock+0x2b/0x50
[] zfs_zinactive+0x7e/0x110 [zfs]
[] zfs_inactive+0x87/0x200 [zfs]
[] zpl_clear_inode+0xe/0x10 [zfs]
[] clear_inode+0x8f/0x110
[] dispose_list+0x40/0x120
[] shrink_icache_memory+0x274/0x2e0
[] shrink_slab+0x13a/0x1a0
[] do_try_to_free_pages+0x2d6/0x500
[] ? get_page_from_freelist+0x15c/0x820
[] try_to_free_pages+0x9f/0x130
[] ? isolate_pages_global+0x0/0x380
[] __alloc_pages_nodemask+0x3ee/0x850
[] alloc_pages_current+0x9a/0x100
[] __get_free_pages+0xe/0x50
[] kv_alloc+0x3f/0xc0 [spl]
[] spl_kmem_cache_alloc+0x500/0xb90 [spl]
[] dnode_create+0x42/0x170 [zfs]
[] dnode_hold_impl+0x3ec/0x550 [zfs]
[] dnode_hold+0x19/0x20 [zfs]
[] dmu_bonus_hold+0x34/0x260 [zfs]
[] ? ifind_fast+0x3c/0xb0
[] sa_buf_hold+0xe/0x10 [zfs]
[] zfs_zget+0xca/0x1e0 [zfs]
[] ? kmem_alloc_debug+0x26b/0x350 [spl]
[] zfs_dirent_lock+0x481/0x550 [zfs]
[] zfs_dirlook+0x8b/0x270 [zfs]
[] ? arc_read+0xad/0x150 [zfs]
[] zfs_lookup+0x2ff/0x350 [zfs]
[] zpl_lookup+0x57/0xc0 [zfs]
[] do_lookup+0x18b/0x220
[] __link_path_walk+0x6f5/0x1040
[] ? __link_path_walk+0x729/0x1040
[] path_walk+0x6a/0xe0
[] do_path_lookup+0x5b/0xa0
[] user_path_at+0x57/0xa0
[] ? current_fs_time+0x27/0x30
[] vfs_fstatat+0x3c/0x80
[] vfs_lstat+0x1e/0x20
[] sys_newlstat+0x24/0x50
[] ? audit_syscall_entry+0x272/0x2a0
[] system_call_fastpath+0x16/0x1b
INFO: task rsync:4636 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
rsync D ffff88082f828800 0 4636 4610 0x00000080
ffff88042f04d268 0000000000000086 0000000000000000 ffffea00020b3350
ffffea00020b37e8 ffffea00020b3740 ffffffff81013c8e 000000010068e992
ffff880807a1db18 ffff88042f04dfd8 0000000000010518 ffff880807a1db18
Call Trace:
[] ? apic_timer_interrupt+0xe/0x20
[] ? mutex_spin_on_owner+0x9b/0xc0
[] __mutex_lock_slowpath+0x13e/0x180
[] mutex_lock+0x2b/0x50
[] zfs_zinactive+0x7e/0x110 [zfs]
[] zfs_inactive+0x87/0x200 [zfs]
[] zpl_clear_inode+0xe/0x10 [zfs]
[] clear_inode+0x8f/0x110
[] dispose_list+0x40/0x120
[] shrink_icache_memory+0x274/0x2e0
[] shrink_slab+0x13a/0x1a0
[] do_try_to_free_pages+0x2d6/0x500
[] ? get_page_from_freelist+0x15c/0x820
[] try_to_free_pages+0x9f/0x130
[] ? isolate_pages_global+0x0/0x380
[] ? wakeup_kswapd+0x1/0x130
[] __alloc_pages_nodemask+0x3ee/0x850
[] alloc_pages_current+0x9a/0x100
[] __get_free_pages+0xe/0x50
[] kv_alloc+0x3f/0xc0 [spl]
[] spl_kmem_cache_alloc+0x500/0xb90 [spl]
[] dnode_create+0x42/0x170 [zfs]
[] dnode_hold_impl+0x3ec/0x550 [zfs]
[] dnode_hold+0x19/0x20 [zfs]
[] dmu_bonus_hold+0x34/0x260 [zfs]
[] ? ifind_fast+0x3c/0xb0
[] sa_buf_hold+0xe/0x10 [zfs]
[] zfs_zget+0xca/0x1e0 [zfs]
[] ? kmem_alloc_debug+0x26b/0x350 [spl]
[] zfs_dirent_lock+0x481/0x550 [zfs]
[] zfs_dirlook+0x8b/0x270 [zfs]
[] ? tsd_exit+0x5f/0x1c0 [spl]
[] zfs_lookup+0x2ff/0x350 [zfs]
[] zpl_lookup+0x57/0xc0 [zfs]
[] do_lookup+0x18b/0x220
[] __link_path_walk+0x6f5/0x1040
[] ? __link_path_walk+0x729/0x1040
[] path_walk+0x6a/0xe0
[] do_path_lookup+0x5b/0xa0
[] user_path_at+0x57/0xa0
[] ? _atomic_dec_and_lock+0x55/0x80
[] ? cp_new_stat+0xe4/0x100
[] vfs_fstatat+0x3c/0x80
[] vfs_lstat+0x1e/0x20
[] sys_newlstat+0x24/0x50
[] ? audit_syscall_entry+0x272/0x2a0
[] system_call_fastpath+0x16/0x1b
INFO: task kswapd0:82 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
kswapd0 D ffffffffffffffff 0 82 2 0x00000000
ffff8808083e7ab0 0000000000000046 ffff8808083e7a40 ffffffffa0439c24
ffff8808083e7a40 ffff8807b71a1e70 0000000000000000 ffffffff81013c8e
ffff8808083e5a98 ffff8808083e7fd8 0000000000010518 ffff8808083e5a98
Call Trace:
[] ? arc_buf_remove_ref+0xd4/0x120 [zfs]
[] ? apic_timer_interrupt+0xe/0x20
[] __mutex_lock_slowpath+0x13e/0x180
[] mutex_lock+0x2b/0x50
[] zfs_zinactive+0x7e/0x110 [zfs]
[] zfs_inactive+0x87/0x200 [zfs]
[] zpl_clear_inode+0xe/0x10 [zfs]
[] clear_inode+0x8f/0x110
[] dispose_list+0x40/0x120
[] shrink_icache_memory+0x274/0x2e0
[] shrink_slab+0x13a/0x1a0
[] balance_pgdat+0x54e/0x770
[] ? isolate_pages_global+0x0/0x380
[] kswapd+0x134/0x390
[] ? autoremove_wake_function+0x0/0x40
[] ? kswapd+0x0/0x390
[] kthread+0x96/0xa0
[] child_rip+0xa/0x20
[] ? kthread+0x0/0xa0
[] ? child_rip+0x0/0x20
INFO: task khugepaged:84 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
khugepaged D ffff88082f829000 0 84 2 0x00000000
ffff8808083ef8b0 0000000000000046 0000000000000000 ffffea0015a08c80
ffff88080658cb78 ffff8800456d69f0 0000000000000000 000000010069ce87
ffff8808083e4638 ffff8808083effd8 0000000000010518 ffff8808083e4638
Call Trace:
[] __mutex_lock_slowpath+0x13e/0x180
[] mutex_lock+0x2b/0x50
[] zfs_zinactive+0x7e/0x110 [zfs]
[] zfs_inactive+0x87/0x200 [zfs]
[] zpl_clear_inode+0xe/0x10 [zfs]
[] clear_inode+0x8f/0x110
[] dispose_list+0x40/0x120
[] shrink_icache_memory+0x274/0x2e0
[] shrink_slab+0x13a/0x1a0
[] do_try_to_free_pages+0x2d6/0x500
[] ? get_page_from_freelist+0x15c/0x820
[] try_to_free_pages+0x9f/0x130
[] ? isolate_pages_global+0x0/0x380
[] __alloc_pages_nodemask+0x3ee/0x850
[] ? del_timer_sync+0x22/0x30
[] alloc_pages_vma+0x93/0x150
[] ? autoremove_wake_function+0x0/0x40
[] khugepaged+0xa9b/0x1210
[] ? autoremove_wake_function+0x0/0x40
[] ? khugepaged+0x0/0x1210
[] kthread+0x96/0xa0
[] child_rip+0xa/0x20
[] ? kthread+0x0/0xa0
[] ? child_rip+0x0/0x20
INFO: task txg_quiesce:3508 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
txg_quiesce D ffff88082f829000 0 3508 2 0x00000080
ffff8807bfa25d60 0000000000000046 ffff8807bfa25d28 ffff8807bfa25d24
ffff8807bfa25d30 ffff88082f829000 ffff880045676980 00000001006a26f0
ffff8807eef7b0a8 ffff8807bfa25fd8 0000000000010518 ffff8807eef7b0a8
Call Trace:
[] cv_wait_common+0x9c/0x1a0 [spl]
[] ? autoremove_wake_function+0x0/0x40
[] ? __bitmap_weight+0x8c/0xb0
[] __cv_wait+0x13/0x20 [spl]
[] txg_quiesce_thread+0x1eb/0x330 [zfs]
[] ? set_user_nice+0xd7/0x140
[] ? txg_quiesce_thread+0x0/0x330 [zfs]
[] thread_generic_wrapper+0x68/0x80 [spl]
[] ? thread_generic_wrapper+0x0/0x80 [spl]
[] kthread+0x96/0xa0
[] child_rip+0xa/0x20
[] ? kthread+0x0/0xa0
[] ? child_rip+0x0/0x20
INFO: task rsync:3905 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
rsync D ffffffffffffffff 0 3905 3774 0x00000080
ffff88064d80f268 0000000000000086 0000000000000000 ffff8801161ce2d8
ffff88043dca01e8 ffffffffffffff10 ffffffff81013c8e ffff88064d80f268
ffff8808064e3068 ffff88064d80ffd8 0000000000010518 ffff8808064e3068
Call Trace:
[] ? apic_timer_interrupt+0xe/0x20
[] ? mutex_spin_on_owner+0x9b/0xc0
[] __mutex_lock_slowpath+0x13e/0x180
[] mutex_lock+0x2b/0x50
[] zfs_zinactive+0x7e/0x110 [zfs]
[] zfs_inactive+0x87/0x200 [zfs]
[] zpl_clear_inode+0xe/0x10 [zfs]
[] clear_inode+0x8f/0x110
[] dispose_list+0x40/0x120
[] shrink_icache_memory+0x274/0x2e0
[] shrink_slab+0x13a/0x1a0
[] do_try_to_free_pages+0x2d6/0x500
[] ? get_page_from_freelist+0x15c/0x820
[] try_to_free_pages+0x9f/0x130
[] ? isolate_pages_global+0x0/0x380
[] __alloc_pages_nodemask+0x3ee/0x850
[] alloc_pages_current+0x9a/0x100
[] __get_free_pages+0xe/0x50
[] kv_alloc+0x3f/0xc0 [spl]
[] spl_kmem_cache_alloc+0x500/0xb90 [spl]
[] dnode_create+0x42/0x170 [zfs]
[] dnode_hold_impl+0x3ec/0x550 [zfs]
[] dnode_hold+0x19/0x20 [zfs]
[] dmu_bonus_hold+0x34/0x260 [zfs]
[] ? ifind_fast+0x3c/0xb0
[] sa_buf_hold+0xe/0x10 [zfs]
[] zfs_zget+0xca/0x1e0 [zfs]
[] ? kmem_alloc_debug+0x26b/0x350 [spl]
[] zfs_dirent_lock+0x481/0x550 [zfs]
[] zfs_dirlook+0x8b/0x270 [zfs]
[] ? arc_read+0xad/0x150 [zfs]
[] zfs_lookup+0x2ff/0x350 [zfs]
[] zpl_lookup+0x57/0xc0 [zfs]
[] do_lookup+0x18b/0x220
[] __link_path_walk+0x6f5/0x1040
[] ? __link_path_walk+0x729/0x1040
[] path_walk+0x6a/0xe0
[] do_path_lookup+0x5b/0xa0
[] user_path_at+0x57/0xa0
[] ? current_fs_time+0x27/0x30
[] vfs_fstatat+0x3c/0x80
[] vfs_lstat+0x1e/0x20
[] sys_newlstat+0x24/0x50
[] ? audit_syscall_entry+0x272/0x2a0
[] system_call_fastpath+0x16/0x1b
INFO: task rsync:4496 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
rsync D ffff88082f828e00 0 4496 4495 0x00000080
ffff880074b31a78 0000000000000086 0000000000000000 ffff88038cddd378
ffff880026ce9c00 0000001f00000200 ffff88053a65ce18 00000001006a1954
ffff88080447a6b8 ffff880074b31fd8 0000000000010518 ffff88080447a6b8
Call Trace:
[] __mutex_lock_slowpath+0x13e/0x180
[] mutex_lock+0x2b/0x50
[] zfs_mknode+0x139/0xc70 [zfs]
[] ? txg_rele_to_quiesce+0x11/0x20 [zfs]
[] ? dmu_tx_assign+0x3e1/0x480 [zfs]
[] zfs_create+0x59a/0x6f0 [zfs]
[] zpl_create+0xa7/0xe0 [zfs]
[] ? generic_permission+0x5c/0xb0
[] vfs_create+0xb4/0xe0
[] do_filp_open+0xb70/0xd50
[] ? mntput_no_expire+0x30/0x110
[] ? alloc_fd+0x92/0x160
[] do_sys_open+0x69/0x140
[] sys_open+0x20/0x30
[] system_call_fastpath+0x16/0x1b
INFO: task rsync:4636 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
rsync D ffff88082f828800 0 4636 4610 0x00000080
ffff88042f04d268 0000000000000086 0000000000000000 ffffea00020b3350
ffffea00020b37e8 ffffea00020b3740 ffffffff81013c8e 000000010068e992
ffff880807a1db18 ffff88042f04dfd8 0000000000010518 ffff880807a1db18
Call Trace:
[] ? apic_timer_interrupt+0xe/0x20
[] ? mutex_spin_on_owner+0x9b/0xc0
[] __mutex_lock_slowpath+0x13e/0x180
[] mutex_lock+0x2b/0x50
[] zfs_zinactive+0x7e/0x110 [zfs]
[] zfs_inactive+0x87/0x200 [zfs]
[] zpl_clear_inode+0xe/0x10 [zfs]
[] clear_inode+0x8f/0x110
[] dispose_list+0x40/0x120
[] shrink_icache_memory+0x274/0x2e0
[] shrink_slab+0x13a/0x1a0
[] do_try_to_free_pages+0x2d6/0x500
[] ? get_page_from_freelist+0x15c/0x820
[] try_to_free_pages+0x9f/0x130
[] ? isolate_pages_global+0x0/0x380
[] ? wakeup_kswapd+0x1/0x130
[] __alloc_pages_nodemask+0x3ee/0x850
[] alloc_pages_current+0x9a/0x100
[] __get_free_pages+0xe/0x50
[] kv_alloc+0x3f/0xc0 [spl]
[] spl_kmem_cache_alloc+0x500/0xb90 [spl]
[] dnode_create+0x42/0x170 [zfs]
[] dnode_hold_impl+0x3ec/0x550 [zfs]
[] dnode_hold+0x19/0x20 [zfs]
[] dmu_bonus_hold+0x34/0x260 [zfs]
[