Kernel panic VERIFY3(sa.sa_magic == SA_MAGIC) failed #12659
Labels: Type: Defect (incorrect behavior, e.g. crash, hang)
Comments
#11433 seems relevant.
Thanks @rincebrain. I saw it too, but I am not 100% sure whether it is the same case, so I created a new one. Here the problem comes from sending from an unencrypted dataset to an encrypted parent dataset.
Ah yes, encryption. For the moment, I would probably suggest just... avoiding encryption.
Thanks @rincebrain. I might consider LUKS with no ZFS-native encryption for the time being.
Closed with #13144
System information
Describe the problem you're observing
I get a kernel panic on the target during the initial synchronization between the source and a remote machine (over SSH) using syncoid. Once I get this error, the only way to fix it is to reboot the target; the zfs recv process stays in the "D" (uninterruptible sleep) state on the target machine.
Please suggest how I can at least kill the process.
Both machines have the same versions of ZFS, the kernel, and Ubuntu.
I tested with various compression types, and with compression switched off on both sides, with no luck. The only difference is that the target has an encrypted parent dataset and the source doesn't.
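A process in uninterruptible sleep ("D" state) ignores all signals, including SIGKILL, which is why the stuck zfs recv cannot be killed short of a reboot. As a quick check, D-state processes can be listed with plain procps (nothing ZFS-specific assumed here):

```shell
# Show the ps header plus any process whose state starts with "D"
# (uninterruptible sleep). Such processes cannot be killed; they
# only go away when the blocking kernel operation completes.
ps -eo pid,stat,comm | awk 'NR == 1 || $2 ~ /^D/'
```

If zfs recv (or the z_upgrade kernel thread from the trace below) appears in this list, a reboot is the only way to clear it.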
Describe how to reproduce the problem
(source) sudo zfs create -o compression=lz4 ssd/private
(target) sudo zfs create -o compression=lz4 -o keyformat=passphrase -o keylocation=file:///home/xxxx/backuppass -o canmount=noauto -o encryption=on backup/encrypted
The crash happens during the syncoid run.
sudo syncoid --no-stream --debug --create-bookmark --no-sync-snap ssd/private root@hostname:backup/encrypted/private
The same happens with zfs-replicate, which internally calls this command:
zfs send -p -c -L -v -R ssd/private@snap1 | ssh root@host zfs recv -F -v -s -x encryption backup/encrypted/private
The process succeeds if I send to an unencrypted base dataset.
Include any warning/errors/backtraces from the system logs
kernel: [53264.927615] VERIFY3(sa.sa_magic == SA_MAGIC) failed (3511224769 == 3100762)
kernel: [53264.927624] PANIC at zfs_quota.c:89:zpl_get_file_info()
kernel: [53264.927628] Showing stack for process 1211
kernel: [53264.927631] CPU: 2 PID: 1211 Comm: z_upgrade Tainted: P O 5.13.0-19-generic #19-Ubuntu
kernel: [53264.927635] Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./H81M-DGS, BIOS P2.00 03/10/2016
kernel: [53264.927638] Call Trace:
kernel: [53264.927644] show_stack+0x52/0x58
kernel: [53264.927652] dump_stack+0x7d/0x9c
kernel: [53264.927663] spl_dumpstack+0x29/0x2b [spl]
kernel: [53264.927685] spl_panic+0xd4/0xfc [spl]
kernel: [53264.927701] ? __cond_resched+0x1a/0x50
kernel: [53264.927707] ? __mutex_lock.constprop.0+0x35/0x4f0
kernel: [53264.927712] ? do_raw_spin_unlock+0x9/0x10 [zfs]
kernel: [53264.927881] ? __raw_spin_unlock+0x9/0x10 [zfs]
kernel: [53264.928010] ? __raw_callee_save___native_queued_spin_unlock+0x15/0x23
kernel: [53264.928019] ? __raw_callee_save___native_queued_spin_unlock+0x15/0x23
kernel: [53264.928024] ? __cond_resched+0x1a/0x50
kernel: [53264.928028] ? slab_pre_alloc_hook.constprop.0+0x96/0xe0
kernel: [53264.928036] zpl_get_file_info+0xa0/0x230 [zfs]
kernel: [53264.928236] dmu_objset_userquota_get_ids+0x161/0x440 [zfs]
kernel: [53264.928384] dnode_setdirty+0x38/0xf0 [zfs]
kernel: [53264.928540] dbuf_dirty+0x44b/0x6d0 [zfs]
kernel: [53264.928681] dmu_buf_will_dirty_impl+0xb7/0x110 [zfs]
kernel: [53264.928821] dmu_buf_will_dirty+0x16/0x20 [zfs]
kernel: [53264.928959] dmu_objset_space_upgrade+0xca/0x1c0 [zfs]
kernel: [53264.929107] dmu_objset_id_quota_upgrade_cb+0xae/0x190 [zfs]
kernel: [53264.929205] dmu_objset_upgrade_task_cb+0xd2/0x100 [zfs]
kernel: [53264.929293] taskq_thread+0x235/0x430 [spl]
kernel: [53264.929309] ? wake_up_q+0xa0/0xa0
kernel: [53264.929314] kthread+0x11f/0x140
kernel: [53264.929318] ? param_set_taskq_kick+0xf0/0xf0 [spl]
kernel: [53264.929329] ? set_kthread_struct+0x50/0x50
kernel: [53264.929332] ret_from_fork+0x22/0x30
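For reference, the two numbers in the VERIFY3 line above can be decoded directly: the right-hand value is the expected magic constant, and the left-hand value is whatever garbage the system-attribute header contained instead. A minimal sketch, assuming the SA_MAGIC definition of 0x2F505A from the OpenZFS sources (include/sys/sa_impl.h):

```python
# Decode the values from the panic line:
#   VERIFY3(sa.sa_magic == SA_MAGIC) failed (3511224769 == 3100762)
SA_MAGIC = 0x2F505A   # per sa_impl.h; 3100762 in decimal
observed = 3511224769  # the corrupt on-disk sa_magic from the log

print(f"expected sa_magic: {SA_MAGIC} (0x{SA_MAGIC:X})")
print(f"observed sa_magic: {observed} (0x{observed:X})")
print("match:", observed == SA_MAGIC)
```

The mismatch means the z_upgrade thread read a system-attribute buffer whose header was not a valid SA header at all, which is consistent with the received dataset carrying a damaged or misinterpreted SA layout.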
kernel: [53282.420220] VERIFY(0 == sa_handle_get_from_db(zfsvfs->z_os, db, zp, SA_HDL_SHARED, &zp->z_sa_hdl)) failed
kernel: [53282.420225] PANIC at zfs_znode.c:339:zfs_znode_sa_init()
kernel: [53282.420228] Showing stack for process 9492
kernel: [53282.420230] CPU: 1 PID: 9492 Comm: ls Tainted: P O 5.13.0-19-generic #19-Ubuntu
kernel: [53282.420232] Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./H81M-DGS, BIOS P2.00 03/10/2016
kernel: [53282.420234] Call Trace:
kernel: [53282.420237] show_stack+0x52/0x58
kernel: [53282.420244] dump_stack+0x7d/0x9c
kernel: [53282.420251] spl_dumpstack+0x29/0x2b [spl]
kernel: [53282.420267] spl_panic+0xd4/0xfc [spl]
kernel: [53282.420278] ? queued_spin_unlock+0x9/0x10 [zfs]
kernel: [53282.420405] ? do_raw_spin_unlock+0x9/0x10 [zfs]
kernel: [53282.420499] ? __raw_spin_unlock+0x9/0x10 [zfs]
kernel: [53282.420592] ? dmu_buf_replace_user+0x65/0x80 [zfs]
kernel: [53282.420688] ? dmu_buf_set_user+0x13/0x20 [zfs]
kernel: [53282.420783] ? dmu_buf_set_user_ie+0x15/0x20 [zfs]
kernel: [53282.420878] zfs_znode_sa_init+0xd9/0xe0 [zfs]
kernel: [53282.421023] zfs_znode_alloc+0x101/0x560 [zfs]
kernel: [53282.421168] ? queued_spin_unlock+0x9/0x10 [zfs]
kernel: [53282.421262] ? do_raw_spin_unlock+0x9/0x10 [zfs]
kernel: [53282.421354] ? __raw_spin_unlock+0x9/0x10 [zfs]
kernel: [53282.421447] ? dbuf_rele_and_unlock+0x13b/0x520 [zfs]
kernel: [53282.421540] ? queued_spin_unlock+0x9/0x10 [zfs]
kernel: [53282.421632] ? __raw_callee_save___native_queued_spin_unlock+0x15/0x23
kernel: [53282.421639] ? __raw_callee_save___native_queued_spin_unlock+0x15/0x23
kernel: [53282.421643] ? queued_spin_unlock+0x9/0x10 [zfs]
kernel: [53282.421748] ? do_raw_spin_unlock+0x9/0x10 [zfs]
kernel: [53282.421853] ? __raw_callee_save___native_queued_spin_unlock+0x15/0x23
kernel: [53282.421857] ? dmu_object_info_from_dnode+0x8e/0xa0 [zfs]
kernel: [53282.421956] zfs_zget+0x235/0x280 [zfs]
kernel: [53282.422099] zfs_dirent_lock+0x420/0x560 [zfs]
kernel: [53282.422244] zfs_dirlook+0x91/0x2a0 [zfs]
kernel: [53282.422388] zfs_lookup+0x1f8/0x3f0 [zfs]
kernel: [53282.422537] zpl_lookup+0xcb/0x220 [zfs]
kernel: [53282.422684] __lookup_slow+0x84/0x150
kernel: [53282.422687] walk_component+0x141/0x1b0
kernel: [53282.422689] path_lookupat+0x6e/0x1c0
kernel: [53282.422692] ? __raw_spin_unlock+0x9/0x10 [zfs]
kernel: [53282.422824] filename_lookup+0xbf/0x1c0
kernel: [53282.422827] ? __virt_addr_valid+0x49/0x70
kernel: [53282.422832] ? __check_object_size.part.0+0x128/0x150
kernel: [53282.422835] ? __check_object_size+0x1c/0x20
kernel: [53282.422837] ? strncpy_from_user+0x44/0x140
kernel: [53282.422843] ? getname_flags.part.0+0x4c/0x1b0
kernel: [53282.422845] user_path_at_empty+0x59/0x90
kernel: [53282.422848] vfs_statx+0x7a/0x120
kernel: [53282.422851] ? __mark_inode_dirty+0x2b6/0x2f0
kernel: [53282.422856] do_statx+0x45/0x80
kernel: [53282.422860] ? iterate_dir+0x121/0x1c0
kernel: [53282.422864] ? __x64_sys_getdents64+0xd5/0x120
kernel: [53282.422867] ? __ia32_sys_getdents+0x120/0x120
kernel: [53282.422870] __x64_sys_statx+0x1f/0x30
kernel: [53282.422873] do_syscall_64+0x61/0xb0
kernel: [53282.422878] ? do_syscall_64+0x6e/0xb0
kernel: [53282.422880] ? exc_page_fault+0x8f/0x170
kernel: [53282.422884] ? asm_exc_page_fault+0x8/0x30
kernel: [53282.422887] entry_SYSCALL_64_after_hwframe+0x44/0xae
kernel: [53282.422892] RIP: 0033:0x7fd04b27a16e