Consistent General Protection Fault; zfs_rangelock_enter() -> avl_nearest() -> avl_walk() #10593

Closed
legoadk opened this issue Jul 18, 2020 · 28 comments · Fixed by #11682
Labels
Type: Defect (Incorrect behavior, e.g. crash, hang)

Comments


legoadk commented Jul 18, 2020

System information

Distribution Name: Proxmox
Distribution Version: Proxmox 6.2 (Debian Buster)
Linux Kernel: 5.4.44-2-pve
Architecture: x86_64
ZFS Version: 0.8.4-pve1
SPL Version: 0.8.4-pve1

Describe the problem you're observing

I'm experiencing a consistent general protection fault after upgrading from 0.7.x (Proxmox 5.4.x) to 0.8.4 (Proxmox 6.2). The fault looks similar to the one in issue #7873, but is not identical. The result is a complete hang of most processes running within the Linux container that uses the associated ZFS volume, along with various other unresponsive host processes. I cannot regain use of the container until the entire Proxmox host is hard-rebooted; the normal reboot process fails to proceed because of the hung LXC.

Describe how to reproduce the problem

ZFS is my root (rpool), two JBOD drives mirrored, with several Linux Containers using their own ZFS volumes.

The bug can be encountered by using the system normally; I can make it maybe two days before the issue forces me to reboot the machine.

I believe Proxmox's scheduled scrub increases the chance of occurrence while it is running, but daily use also seems to trigger the issue. For me it has consistently been triggered by the Plex Media Server process; I have not seen any other processes named in the errors, but I do not see how this could be a Plex issue (all fingers point to ZFS).

I use ECC memory and have run memtest86+ recently to verify that the memory is good.
The drives are healthy, and so is the pool.

Include any warning/errors/backtraces from the system logs

[136422.505658] general protection fault: 0000 [#1] SMP PTI
[136422.505866] CPU: 15 PID: 25510 Comm: Plex Media Serv Tainted: P          IO      5.4.44-2-pve #1
[136422.506115] Hardware name: HP ProLiant ML350 G6, BIOS D22 05/05/2011
[136422.506303] RIP: 0010:avl_walk+0x33/0x70 [zavl]
[136422.506437] Code: 10 b9 01 00 00 00 29 d1 4c 01 c6 48 89 e5 48 85 f6 74 48 48 63 d2 48 89 f7 48 8b 04 d6 48 85 c0 74 20 48 63 c9 eb 03 48 89 d0 <48> 8b 14 c8 48 85 d2 75 f4 48 89 c2 48 89 d0 5d 4c 29 c0 c3 39 f1
[136422.507020] RSP: 0018:ffffacf29a163c80 EFLAGS: 00010282
[136422.507171] RAX: e089443875c085d0 RBX: ffffa0aac4d9c150 RCX: 0000000000000000
[136422.507375] RDX: 0000000000000001 RSI: ffffffffc01c1478 RDI: ffffffffc01c1478
[136422.507579] RBP: ffffacf29a163c80 R08: 0000000000000008 R09: ffffa0a8c3406f40
[136422.507784] R10: ffffa0ab33be8000 R11: 0000008000000000 R12: ffffa0ab33be8000
[136422.508031] R13: ffffa0aac4d9c178 R14: 0000000000000000 R15: 0000000000000000
[136422.508236] FS:  00007f6f78ff9700(0000) GS:ffffa0adc39c0000(0000) knlGS:0000000000000000
[136422.508465] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[136422.508630] CR2: 00007f6fbc294278 CR3: 0000000469bfc000 CR4: 00000000000006e0
[136422.508834] Call Trace:
[136422.508910]  avl_nearest+0x2a/0x30 [zavl]
[136422.509140]  zfs_rangelock_enter+0x405/0x580 [zfs]
[136422.509286]  ? spl_kmem_zalloc+0xe9/0x140 [spl]
[136422.509419]  ? spl_kmem_zalloc+0xe9/0x140 [spl]
[136422.509587]  zfs_get_data+0x157/0x340 [zfs]
[136422.509746]  zil_commit_impl+0x9ad/0xd90 [zfs]
[136422.509913]  zil_commit+0x3d/0x60 [zfs]
[136422.510062]  zfs_fsync+0x77/0xe0 [zfs]
[136422.510252]  zpl_fsync+0x68/0xa0 [zfs]
[136422.510364]  vfs_fsync_range+0x48/0x80
[136422.510474]  ? __fget_light+0x59/0x70
[136422.510582]  do_fsync+0x3d/0x70
[136422.510674]  __x64_sys_fsync+0x14/0x20
[136422.510786]  do_syscall_64+0x57/0x190
[136422.510895]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
[136422.511041] RIP: 0033:0x7f6fffef4b17
[136422.511146] Code: 00 00 0f 05 48 3d 00 f0 ff ff 77 3f f3 c3 0f 1f 44 00 00 53 89 fb 48 83 ec 10 e8 04 f5 ff ff 89 df 89 c2 b8 4a 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 2b 89 d7 89 44 24 0c e8 46 f5 ff ff 8b 44 24
[136422.511700] RSP: 002b:00007f6f78ff79c0 EFLAGS: 00000293 ORIG_RAX: 000000000000004a
[136422.511916] RAX: ffffffffffffffda RBX: 000000000000000c RCX: 00007f6fffef4b17
[136422.512121] RDX: 0000000000000000 RSI: 0000000000000002 RDI: 000000000000000c
[136422.512373] RBP: 00000000038164b8 R08: 0000000000000000 R09: 00007f6fbc447770
[136422.512583] R10: 00007f6fbc0c4460 R11: 0000000000000293 R12: 0000000000000000
[136422.512792] R13: 0000000003825eb8 R14: 0000000000000002 R15: 0000000000000000
[136422.512999] Modules linked in: veth ebtable_filter ebtables ip_set ip6table_raw iptable_raw ip6table_filter ip6_tables mptctl mptbase binfmt_misc iptable_filter bpfilter softdog cpuid nfnetlink_log nfnetlink radeon ttm drm_kms_helper drm i2c_algo_bit fb_sys_fops syscopyarea sysfillrect sysimgblt intel_powerclamp usblp kvm_intel ipmi_ssif input_leds kvm hpilo i7core_edac pcspkr irqbypass serio_raw ipmi_si intel_cstate ipmi_devintf ipmi_msghandler mac_hid vhost_net vhost tap ib_iser rdma_cm iw_cm ib_cm ib_core iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi coretemp parport_pc ppdev lp parport sunrpc ip_tables x_tables autofs4 zfs(PO) zunicode(PO) zlua(PO) zavl(PO) icp(PO) zcommon(PO) znvpair(PO) spl(O) btrfs xor zstd_compress raid6_pq libcrc32c hid_generic usbkbd usbmouse usbhid hid gpio_ich psmouse pata_acpi uhci_hcd ehci_pci lpc_ich tg3 ehci_hcd hpsa scsi_transport_sas
[136422.515330] ---[ end trace b7f8cd7b0f091d95 ]---
[136422.515472] RIP: 0010:avl_walk+0x33/0x70 [zavl]
[136422.515649] Code: 10 b9 01 00 00 00 29 d1 4c 01 c6 48 89 e5 48 85 f6 74 48 48 63 d2 48 89 f7 48 8b 04 d6 48 85 c0 74 20 48 63 c9 eb 03 48 89 d0 <48> 8b 14 c8 48 85 d2 75 f4 48 89 c2 48 89 d0 5d 4c 29 c0 c3 39 f1
[136422.516197] RSP: 0018:ffffacf29a163c80 EFLAGS: 00010282
[136422.516352] RAX: e089443875c085d0 RBX: ffffa0aac4d9c150 RCX: 0000000000000000
[136422.516577] RDX: 0000000000000001 RSI: ffffffffc01c1478 RDI: ffffffffc01c1478
[136422.516826] RBP: ffffacf29a163c80 R08: 0000000000000008 R09: ffffa0a8c3406f40
[136422.517030] R10: ffffa0ab33be8000 R11: 0000008000000000 R12: ffffa0ab33be8000
[136422.517235] R13: ffffa0aac4d9c178 R14: 0000000000000000 R15: 0000000000000000
[136422.517439] FS:  00007f6f78ff9700(0000) GS:ffffa0adc39c0000(0000) knlGS:0000000000000000
[136422.517669] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[136422.517875] CR2: 00007f6fbc294278 CR3: 0000000469bfc000 CR4: 00000000000006e0

axxelh commented Jul 25, 2020

I can report a similar problem, also with Plex in a Proxmox 6.2 container. ECC RAM, healthy mirrored pool, etc.

[386809.776800] CPU: 2 PID: 20182 Comm: Plex Media Serv Tainted: P O 5.4.44-2-pve #1
[386809.776823] Hardware name: Dell Inc. PowerEdge T30/07T4MC, BIOS 1.2.0 12/04/2019
[386809.776845] RIP: 0010:avl_walk+0x33/0x70 [zavl]
[386809.776857] Code: 10 b9 01 00 00 00 29 d1 4c 01 c6 48 89 e5 48 85 f6 74 48 48 63 d2 48 89 f7 48 8b 04 d6 48 85 c0 74 20 48 63 c9 eb 03 48 89 d0 <48> 8b 14 c8 48 85 d2 75 f4 48 89 c2 48 89 d0 5d 4c 29 c0 c3 39 f1
[386809.776904] RSP: 0018:ffffa27904d9fc80 EFLAGS: 00010282
[386809.777648] RAX: e089443875c085d0 RBX: ffff966f834a2320 RCX: 0000000000000000
[386809.778441] RDX: 0000000000000001 RSI: ffffffffc0363478 RDI: ffffffffc0363478
[386809.779104] RBP: ffffa27904d9fc80 R08: 0000000000000008 R09: ffff96760a402fc0
[386809.779883] R10: ffff96713cdff500 R11: 0000008000000000 R12: ffff96713cdff500
[386809.780548] R13: ffff966f834a2348 R14: 0000000000000000 R15: 0000000000000000
[386809.781193] FS: 00007fdd287f0700(0000) GS:ffff96760db00000(0000) knlGS:0000000000000000
[386809.781837] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[386809.782488] CR2: 00007fa9226c5968 CR3: 000000023e356004 CR4: 00000000003626e0
[386809.783169] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[386809.783832] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[386809.784515] Call Trace:
[386809.785191] avl_nearest+0x2a/0x30 [zavl]
[386809.785877] zfs_rangelock_enter+0x405/0x580 [zfs]
[386809.786524] ? spl_kmem_zalloc+0xe9/0x140 [spl]
[386809.787152] ? spl_kmem_zalloc+0xe9/0x140 [spl]
[386809.787790] zfs_get_data+0x157/0x340 [zfs]
[386809.788439] zil_commit_impl+0x9ad/0xd90 [zfs]
[386809.789078] zil_commit+0x3d/0x60 [zfs]
[386809.789774] zfs_fsync+0x77/0xe0 [zfs]
[386809.790382] zpl_fsync+0x68/0xa0 [zfs]
[386809.790982] vfs_fsync_range+0x48/0x80
[386809.791541] ? __fget_light+0x59/0x70
[386809.792100] do_fsync+0x3d/0x70
[386809.792684] __x64_sys_fsync+0x14/0x20
[386809.793220] do_syscall_64+0x57/0x190
[386809.793794] entry_SYSCALL_64_after_hwframe+0x44/0xa9


bdaroz commented Aug 11, 2020

Also seeing this issue, but on Ubuntu 18.04.5 LTS with 0.8.4. I also had it with 0.8.3, but did not have it with 0.7.x. I have Plex running as well, and while the effects aren't as bad (stopped containers, etc.), it does effectively zombie the Plex server.

[16019.192423] general protection fault: 0000 [#1] SMP PTI
[16019.192454] Modules linked in: xt_mark veth vhost_net vhost tap xt_nat nf_conntrack_netlink nfnetlink xfrm_user xfrm_algo xt_addrtype br_netfilter xt_CHECKSUM iptable_mangle ipt_MASQUERADE nf_nat_masquerade_ipv4 iptable_nat nf_nat_ipv4 nf_nat nf_conntrack_ipv4 nf_defrag_ipv4 xt_conntrack nf_conntrack ipt_REJECT nf_reject_ipv4 xt_tcpudp ebtable_filter ebtables ip6table_filter ip6_tables devlink iptable_filter aufs overlay binfmt_misc intel_rapl x86_pkg_temp_thermal intel_powerclamp kvm_intel kvm irqbypass intel_cstate intel_rapl_perf intel_pch_thermal bridge stp llc joydev input_leds zfs(POE) zunicode(POE) zavl(POE) icp(POE) zlua(POE) ipmi_ssif zcommon(POE) znvpair(POE) spl(OE) ipmi_si ipmi_devintf ipmi_msghandler mac_hid mei_me mei acpi_pad ie31200_edac shpchp lpc_ich sch_fq_codel ib_iser rdma_cm
[16019.192701]  iw_cm ib_cm ib_core iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi jc42 coretemp lp parport ip_tables x_tables autofs4 algif_skcipher af_alg dm_crypt raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx xor raid6_pq libcrc32c raid1 raid0 multipath linear ses enclosure crct10dif_pclmul crc32_pclmul ghash_clmulni_intel pcbc ast ttm drm_kms_helper syscopyarea aesni_intel igb sysfillrect sysimgblt aes_x86_64 hid_generic dca e1000e i2c_algo_bit fb_sys_fops crypto_simd usbhid ahci mpt3sas glue_helper drm ptp hid libahci cryptd raid_class pps_core scsi_transport_sas video
[16019.192881] CPU: 3 PID: 4535 Comm: Plex Media Serv Tainted: P           OE    4.15.0-112-generic #113-Ubuntu
[16019.192912] Hardware name: Supermicro X10SLM-F/X10SLM-F, BIOS 3.0 04/24/2015
[16019.192938] RIP: 0010:avl_walk+0x33/0x60 [zavl]
[16019.192961] RSP: 0018:ffff9c366f08bc70 EFLAGS: 00010282
[16019.192979] RAX: e089445075c085d0 RBX: ffff8b42a28e5800 RCX: 0000000000000000
[16019.193001] RDX: 0000000000000001 RSI: ffffffffc05cb608 RDI: 0000000000000000
[16019.193024] RBP: ffff9c366f08bc70 R08: 0000000000000008 R09: ffff8b457f003200
[16019.193046] R10: ffff9c366f08bc20 R11: ffff8b457360ae50 R12: ffff8b44163f3918
[16019.193069] R13: ffff8b44163f3940 R14: ffff8b42a28e5300 R15: 0000000000000000
[16019.193092] FS:  00007f5716ffd700(0000) GS:ffff8b459fcc0000(0000) knlGS:0000000000000000
[16019.193118] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[16019.193137] CR2: 00007f891cbb2000 CR3: 00000006b812c001 CR4: 00000000001626e0
[16019.193159] Call Trace:
[16019.193173]  avl_nearest+0x2b/0x30 [zavl]
[16019.193241]  zfs_rangelock_enter+0x3b8/0x550 [zfs]
[16019.193274]  ? spl_kmem_zalloc+0xe9/0x150 [spl]
[16019.193306]  ? spl_kmem_zalloc+0xe9/0x150 [spl]
[16019.193401]  zfs_get_data+0x136/0x350 [zfs]
[16019.193496]  zil_commit_impl+0x9b9/0xd60 [zfs]
[16019.193589]  zil_commit+0x3d/0x60 [zfs]
[16019.193675]  zfs_fsync+0x77/0xe0 [zfs]
[16019.193727]  zpl_fsync+0x68/0xa0 [zfs]
[16019.193757]  vfs_fsync_range+0x51/0xb0
[16019.193784]  do_fsync+0x3d/0x70
[16019.193808]  SyS_fsync+0x10/0x20
[16019.195027]  do_syscall_64+0x73/0x130
[16019.196163]  entry_SYSCALL_64_after_hwframe+0x41/0xa6
[16019.197124] RIP: 0033:0x7f57bb8b6b17
[16019.197888] RSP: 002b:00007f5716ffbb60 EFLAGS: 00000293 ORIG_RAX: 000000000000004a
[16019.198537] RAX: ffffffffffffffda RBX: 000000000000000c RCX: 00007f57bb8b6b17
[16019.199280] RDX: 0000000000000000 RSI: 0000000000000002 RDI: 000000000000000c
[16019.199938] RBP: 0000000002dc8aa8 R08: 0000000000000000 R09: 000000000005d0ec
[16019.200576] R10: 00007f57280008d0 R11: 0000000000000293 R12: 0000000000000000
[16019.201304] R13: 0000000002e4ad38 R14: 0000000000000002 R15: 0000000000000000
[16019.201929] Code: 47 10 bf 01 00 00 00 29 d7 48 89 e5 4c 01 c6 48 85 f6 74 40 48 63 d2 48 89 f1 48 8b 04 d6 48 85 c0 74 1a 48 63 cf eb 03 48 89 d0 <48> 8b 14 c8 48 85 d2 75 f4 4c 29 c0 5d c3 39 d7 74 f7 48 8b 41 
[16019.203480] RIP: avl_walk+0x33/0x60 [zavl] RSP: ffff9c366f08bc70
[16019.204133] ---[ end trace f50c07bc2d334abc ]---

behlendorf added the Type: Defect (Incorrect behavior, e.g. crash, hang) label on Aug 23, 2020

4oo4 commented Sep 10, 2020

I've seen these same stack traces on Ubuntu 16.04 (ZFS 0.7.13) and in a Debian 10 LXC container (Debian 10 host, ZFS 0.8.4).


stuckj commented Sep 25, 2020

Also seeing this inside a container running on Proxmox (running Plex):

[1060935.777369] general protection fault: 0000 [#1] SMP PTI
[1060935.778037] CPU: 14 PID: 25452 Comm: Plex Media Serv Tainted: P           O      5.4.60-1-pve #1
[1060935.779104] Hardware name: Supermicro Super Server/X10SDV-TLN4F, BIOS 1.0b 09/09/2015
[1060935.780263] RIP: 0010:avl_walk+0x33/0x70 [zavl]
[1060935.781426] Code: 10 b9 01 00 00 00 29 d1 4c 01 c6 48 89 e5 48 85 f6 74 48 48 63 d2 48 89 f7 48 8b 04 d6 48 85 c0 74 20 48 63 c9 eb 03 48 89 d0 <48> 8b 14 c8 48 85 d2 75 f4 48 89 c2 48 89 d0 5d 4c 29 c0 c3 39 f1
[1060935.783639] RSP: 0018:ffffb1cc4e3d3c80 EFLAGS: 00010282
[1060935.784424] RAX: e089443875c085d0 RBX: ffff9c078df44e40 RCX: 0000000000000000
[1060935.785162] RDX: 0000000000000001 RSI: ffffffffc021a478 RDI: ffffffffc021a478
[1060935.785862] RBP: ffffb1cc4e3d3c80 R08: 0000000000000008 R09: ffff9c1b79406f40
[1060935.786565] R10: ffff9c0f10ecca00 R11: 0000008000000000 R12: ffff9c0f10ecca00
[1060935.787276] R13: ffff9c078df44e68 R14: 0000000000000000 R15: 0000000000000000
[1060935.788000] FS:  00007f384bfff700(0000) GS:ffff9c1b7fb80000(0000) knlGS:0000000000000000
[1060935.788704] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[1060935.789456] CR2: 00007f392ba31000 CR3: 0000001eaa0b4002 CR4: 00000000003626e0
[1060935.790257] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[1060935.791343] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[1060935.792038] Call Trace:
[1060935.792726]  avl_nearest+0x2a/0x30 [zavl]
[1060935.793518]  zfs_rangelock_enter+0x405/0x580 [zfs]
[1060935.794390]  ? spl_kmem_zalloc+0xe9/0x140 [spl]
[1060935.795137]  ? spl_kmem_zalloc+0xe9/0x140 [spl]
[1060935.795863]  zfs_get_data+0x157/0x340 [zfs]
[1060935.796573]  zil_commit_impl+0x9ad/0xd90 [zfs]
[1060935.797281]  zil_commit+0x3d/0x60 [zfs]
[1060935.797976]  zfs_fsync+0x77/0xe0 [zfs]
[1060935.798647]  zpl_fsync+0x68/0xa0 [zfs]
[1060935.799300]  vfs_fsync_range+0x48/0x80
[1060935.799986]  ? __fget_light+0x59/0x70
[1060935.800882]  do_fsync+0x3d/0x70
[1060935.801665]  __x64_sys_fsync+0x14/0x20
[1060935.802329]  do_syscall_64+0x57/0x190
[1060935.802969]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
[1060935.803587] RIP: 0033:0x7f392a12188b
[1060935.804215] Code: 4a 00 00 00 0f 05 48 3d 00 f0 ff ff 77 41 c3 48 83 ec 18 89 7c 24 0c e8 03 f7 ff ff 8b 7c 24 0c 41 89 c0 b8 4a 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 2f 44 89 c7 89 44 24 0c e8 41 f7 ff ff 8b 44
[1060935.805905] RSP: 002b:00007f384bffdd40 EFLAGS: 00000293 ORIG_RAX: 000000000000004a
[1060935.806696] RAX: ffffffffffffffda RBX: 0000000002d017e8 RCX: 00007f392a12188b
[1060935.807452] RDX: 0000000000000000 RSI: 0000000000000002 RDI: 0000000000000015
[1060935.808436] RBP: 0000000002c86338 R08: 0000000000000000 R09: 000000000004d96e
[1060935.809156] R10: 00007f379008b970 R11: 0000000000000293 R12: 0000000000000000
[1060935.809797] R13: 0000000002d01758 R14: 0000000000000002 R15: 0000000000000000
[1060935.810429] Modules linked in: tcp_diag inet_diag nf_conntrack_netlink xt_nat xt_tcpudp binfmt_misc xt_conntrack xt_MASQUERADE xfrm_user xt_addrtype iptable_nat veth ebtable_filter ebtables ip_set ip6table_raw iptable_raw ip6table_filter ip6_tables sctp iptable_filter bpfilter bonding openvswitch softdog nsh nf_conncount nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 nfnetlink_log nfnetlink ipmi_ssif intel_rapl_msr intel_rapl_common sb_edac x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel kvm irqbypass crct10dif_pclmul crc32_pclmul ghash_clmulni_intel aesni_intel crypto_simd cryptd ast glue_helper drm_vram_helper ttm rapl drm_kms_helper intel_cstate pcspkr mxm_wmi intel_pch_thermal drm fb_sys_fops syscopyarea sysfillrect joydev input_leds sysimgblt mei_me mei ioatdma ipmi_si ipmi_devintf ipmi_msghandler acpi_pad mac_hid vhost_net vhost tap ib_iser rdma_cm iw_cm ib_cm ib_core iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi nfsd auth_rpcgss nfs_acl nfs lockd grace sunrpc
[1060935.810462]  fscache overlay aufs ip_tables x_tables autofs4 zfs(PO) zunicode(PO) zlua(PO) zavl(PO) icp(PO) zcommon(PO) znvpair(PO) spl(O) btrfs xor zstd_compress raid6_pq libcrc32c ses enclosure usbmouse usbkbd hid_generic usbhid hid gpio_ich ahci xhci_pci igb mpt3sas ehci_pci ixgbe i2c_i801 libahci lpc_ich i2c_algo_bit xhci_hcd raid_class ehci_hcd xfrm_algo scsi_transport_sas dca mdio wmi
[1060935.820245] ---[ end trace 2f95ade701523fe1 ]---
[1060937.895322] RIP: 0010:avl_walk+0x33/0x70 [zavl]
[1060937.896785] Code: 10 b9 01 00 00 00 29 d1 4c 01 c6 48 89 e5 48 85 f6 74 48 48 63 d2 48 89 f7 48 8b 04 d6 48 85 c0 74 20 48 63 c9 eb 03 48 89 d0 <48> 8b 14 c8 48 85 d2 75 f4 48 89 c2 48 89 d0 5d 4c 29 c0 c3 39 f1
[1060937.899611] RSP: 0018:ffffb1cc4e3d3c80 EFLAGS: 00010282
[1060937.900854] RAX: e089443875c085d0 RBX: ffff9c078df44e40 RCX: 0000000000000000
[1060937.902077] RDX: 0000000000000001 RSI: ffffffffc021a478 RDI: ffffffffc021a478
[1060937.903590] RBP: ffffb1cc4e3d3c80 R08: 0000000000000008 R09: ffff9c1b79406f40
[1060937.904987] R10: ffff9c0f10ecca00 R11: 0000008000000000 R12: ffff9c0f10ecca00
[1060937.906421] R13: ffff9c078df44e68 R14: 0000000000000000 R15: 0000000000000000
[1060937.907870] FS:  00007f384bfff700(0000) GS:ffff9c1b7fb80000(0000) knlGS:0000000000000000
[1060937.909202] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[1060937.910201] CR2: 00007f392ba31000 CR3: 0000001eaa0b4002 CR4: 00000000003626e0
[1060937.911049] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[1060937.912436] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
root@plex:~# uname -a
Linux plex 5.4.60-1-pve #1 SMP PVE 5.4.60-1 (Mon, 31 Aug 2020 10:36:22 +0200) x86_64 x86_64 x86_64 GNU/Linux
root@plex:~# 


stuckj commented Sep 27, 2020

I'm going to try moving Plex to a VM (with NFS) rather than an LXC container to see if that resolves the issue. My guess is that this is related to ZFS usage inside an LXC container; Plex is probably just accessing the filesystem in a way that triggers it more consistently.


axxelh commented Sep 27, 2020

Your comment reminded me to post my workaround...

As indicated by the stack traces, this appears to be an issue in the ZFS fsync() implementation. Disabling ZFS synchronous writes for the affected dataset (e.g. "zfs set sync=disabled somedataset") works around the issue. I've been running this way for a month without the problem reappearing.

I presume the primary Plex use of fsync() is for its SQLite databases, so doing this undermines Plex's data-consistency expectations and isn't something to be done lightly. In my case it's an isolated dataset just for Plex, and I'm not concerned about Plex corruption or even complete data loss.


4oo4 commented Oct 9, 2020

I wonder if this is related to #10440?

If so, would that mean this is an issue with ZFS and SQLite in general?


stuckj commented Oct 9, 2020 via email


ramos commented Nov 13, 2020

I am still having this problem on kernels 5.8 and 5.9. Setting sync=disabled does not solve the issue for me. Has anyone found a workaround?

Thanks


stuckj commented Nov 13, 2020

For me, moving Plex to a VM instead of a container fixed it; I haven't seen the issue since. All of Plex's data (SQLite and such) lives on the VM's own filesystem (which is ironic, since that virtual disk is a ZFS volume, so you'd think it would have the same issues). The actual video is NFS-shared into the VM.

My guess is that the problem has something to do with ZFS being used by a containerized app with a fast R/W pattern. Not sure why, but there's that. :)


bdaroz commented Nov 13, 2020

I can provide a data point on the container aspect -- my Plex instance is running outside of a container and exhibits the issue at hand.


stuckj commented Nov 13, 2020

Interesting. Perhaps my using a ZFS volume vs a dataset is a factor then? Or, coincidence I guess. :)


stuckj commented Nov 13, 2020

If you wanted to test whether zvols are the issue, @bdaroz, you could create a zvol, put e.g. ext4 on it, and use that ext4 filesystem for the Plex data. If that still exhibits the issue, then it isn't a zvol vs. dataset issue. If it fixes it, that's another data point suggesting the issue is dataset-specific.

I don't know the ZFS code base at all, so I'm not sure whether that stack trace is general or specific to datasets vs. volumes.


stuckj commented Nov 13, 2020

Actually, poking around in the code some more (though I am DEFINITELY not well versed in it), this looks like it applies to any commit through the ZIL. I believe the ZIL is used for zvols just as it is for datasets (a write-ahead log for synchronous writes). So if there is any difference in behavior, it's possibly because synchronous writes in the filesystem on top of the zvol don't trigger a corresponding synchronous write on the zvol (or at least there's some change in the access pattern).

Regardless of all that, the actual GP fault is in avl_nearest, which is code that isn't even ZFS-specific (the file itself says as much: https://github.com/openzfs/zfs/blob/zfs-0.8-release/module/avl/avl.c). It's a balanced binary search tree (AVL tree). That it's dying there suggests the problem is in the tree code itself. My guess would be a multi-threading issue, since if it were a more general logic bug I'd expect we'd see this problem a lot more often. But you never know.

That file has had a lot of changes since the 0.8.4 release: https://github.com/openzfs/zfs/commits/master/module/avl/avl.c. So maybe there has been a fix since this bug was filed and it just hasn't been released.

This is all speculation though. :)


stuckj commented Nov 13, 2020

Oh, the newer code references this man page: https://illumos.org/man/9f/avl

It explicitly calls out:

MT-Safety
     AVL trees do not inherently have any internal locking, it is up to the
     consumer to use locks as appropriate.  See mutex(9F) and rwlock(9F) for
     more information on synchronization primitives.

So, yeah, maybe it is MT-related, if the locking isn't properly protecting the tree. That might point to an issue in the locking around zfs_rangelock_enter.
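
To illustrate what the man page is asking of the consumer, here is a minimal, self-contained C sketch; the sorted array and struct below are stand-ins for the real AVL tree and zfs_locked_range_t, not the actual ZFS code. The point is only that the find-then-insert sequence has to sit inside one caller-held critical section, because the tree code itself serializes nothing.

#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define MAX_RANGES 64

/* Stand-in for zfs_locked_range_t: just an offset/length pair. */
struct range {
    uint64_t off;
    uint64_t len;
};

static struct range tree[MAX_RANGES];   /* stand-in for the shared AVL tree */
static int nranges;
static pthread_mutex_t tree_lock = PTHREAD_MUTEX_INITIALIZER;

/*
 * The find-then-insert sequence is one critical section held by the caller;
 * the tree code itself (avl.c included) does no locking on our behalf.
 */
static void
range_add(uint64_t off, uint64_t len)
{
    pthread_mutex_lock(&tree_lock);

    int i = 0;
    while (i < nranges && tree[i].off < off)
        i++;

    if ((i == nranges || tree[i].off != off) && nranges < MAX_RANGES) {
        /* Not found: shift the tail up and insert in sorted order. */
        memmove(&tree[i + 1], &tree[i],
            (size_t)(nranges - i) * sizeof (tree[0]));
        tree[i] = (struct range){ off, len };
        nranges++;
    }

    pthread_mutex_unlock(&tree_lock);
}

static void *
worker(void *arg)
{
    uint64_t base = *(uint64_t *)arg;

    for (uint64_t n = 0; n < 32; n++)
        range_add(base + n * 4096, 4096);
    return (NULL);
}

int
main(void)
{
    pthread_t a, b;
    uint64_t base_a = 0, base_b = 1 << 20;

    pthread_create(&a, NULL, worker, &base_a);
    pthread_create(&b, NULL, worker, &base_b);
    pthread_join(a, NULL);
    pthread_join(b, NULL);

    printf("%d ranges held\n", nranges);    /* always 64 with the lock held */
    return (0);
}

Whether the rangelock path actually holds such a lock at the right moments is the kind of question only the core dump can answer; the sketch just shows the contract the man page describes.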


tuxoko commented Nov 19, 2020

I recently hit a very similar issue.
I have looked into the core dump, and the problem is not in the AVL code.

The way it happens seems to be like this:
A TX_WRITE is created on a file, but the file is later deleted and a new directory is created with the same object id.
When zil_commit happens, it goes into zfs_get_data, where it gets the wrong znode, now pointing to the new directory.
It then tries to take a range lock with size = zp->z_blksz, which is 0.

size = zp->z_blksz;

When there are two callers taking a range lock on the same offset with size 0,
the second caller gets an uninitialized where:

prev = avl_find(tree, new, &where);

This uninitialized where later gets dereferenced, because the code decides the two size-0 ranges don't overlap:

if (prev->lr_offset + prev->lr_length <= off) {

next = avl_nearest(tree, where, AVL_AFTER);
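
To make the failure mode concrete, here is a small, self-contained C sketch of the pattern; range_t and find_range() are stand-ins, not the real zfs_rangelock/avl code. Like avl_find(), the stand-in only fills in the index out-parameter on the not-found path (on an exact match the real avl_find() is not guaranteed to set it either), and a zero-length range defeats the overlap check that would otherwise keep the code from ever reading it.

#include <stdint.h>
#include <stdio.h>

/* Stand-in for zfs_locked_range_t. */
typedef struct range {
    uint64_t lr_offset;
    uint64_t lr_length;     /* 0 when zp->z_blksz was 0 */
} range_t;

/*
 * Stand-in for avl_find(): *where is only filled in on the not-found path;
 * on an exact match it is left untouched.
 */
static range_t *
find_range(range_t *table, int n, uint64_t off, int *where)
{
    for (int i = 0; i < n; i++) {
        if (table[i].lr_offset == off)
            return (&table[i]);     /* found: *where not set */
    }
    *where = n;                     /* not found: insertion point */
    return (NULL);
}

int
main(void)
{
    /* A size-0 lock is already held at this offset by the first caller. */
    range_t held[] = { { .lr_offset = 4096, .lr_length = 0 } };
    uint64_t off = 4096;
    int where = -1;     /* stands in for whatever garbage was on the stack */

    range_t *prev = find_range(held, 1, off, &where);

    /*
     * With lr_length == 0, "prev ends at or before off" is true even for
     * an exact hit, so the code decides the ranges don't overlap...
     */
    if (prev != NULL && prev->lr_offset + prev->lr_length <= off) {
        /*
         * ...and goes on to use 'where' as a tree index even though it was
         * never set; in the kernel that is the bogus pointer which
         * avl_nearest()/avl_walk() then dereference.
         */
        printf("avl_nearest() would be called with where=%d\n", where);
    }
    return (0);
}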


tuxoko commented Nov 19, 2020

If anyone here can reproduce this consistently, could you please try this patch:
https://gist.github.com/tuxoko/e9e08d07d2c182d983057c63d4796551


fbuescher commented Jan 14, 2021

Hi,

I have the same issue (*). I'm running the Ubuntu tree at version 0.8.4 (1ubuntu11). What version is your patch for? Are you able to generate a patch against 0.8.4?
If so, I'm willing to patch my Ubuntu build process and include your patch in my local build.

Regards,
Friedhelm

(*) Not exactly the same: I'm using a ZFS dataset for a medium-sized Plex installation, which contains tons of metadata that is regularly scanned and updated. The stack trace is the same as in this bug.


tuxoko commented Jan 15, 2021

@fbuescher
The above one is for master.
Here's one I back-ported to the 0.8 branch. It should work, but I haven't tested it:
https://gist.github.com/tuxoko/2ca0b451c8fb80061ce25cb601dbdc7f

@fbuescher

It does not work.

I usually use the build from https://launchpad.net/ubuntu/+source/zfs-linux; it applies the following patches to the 0.8.4 version:

dpkg-source: info: extracting zfs-linux in zfs-linux-0.8.4
dpkg-source: info: unpacking zfs-linux_0.8.4.orig.tar.gz
dpkg-source: info: unpacking zfs-linux_0.8.4-1ubuntu11.debian.tar.xz
dpkg-source: info: applying dont-symlink-zed-scripts.patch
dpkg-source: info: applying 0001-Prevent-manual-builds-in-the-DKMS-source.patch
dpkg-source: info: applying 0002-Check-for-META-and-DCH-consistency-in-autoconf.patch
dpkg-source: info: applying 0003-relocate-zvol_wait.patch
dpkg-source: info: applying 0004-prefer-python3-tests.patch
dpkg-source: info: applying enable-zed.patch
dpkg-source: info: applying 1004-zed-service-bindir.patch
dpkg-source: info: applying 2100-zfs-load-module.patch
dpkg-source: info: applying 2200-add-zfs-0.6.x-ioctl-compat-shim.patch
dpkg-source: info: applying 3100-remove-libzfs-module-timeout.patch
dpkg-source: info: applying 3302-Use-obj-m-instead-of-subdir-m.patch
dpkg-source: info: applying 4000-mount-encrypted-dataset-fix.patch
dpkg-source: info: applying 4000-zsys-support.patch
dpkg-source: info: applying 4100-disable-bpool-upgrade.patch
dpkg-source: info: applying force-verbose-rules.patch
dpkg-source: info: applying zfs-mount-container-start.patch
dpkg-source: info: applying 4510-silently-ignore-modprobe-failure.patch
dpkg-source: info: applying git_fix_dependency_loop_encryption1.patch
dpkg-source: info: applying git_fix_dependency_loop_encryption2.patch
dpkg-source: info: applying 4520-Linux-5.8-compat-__vmalloc.patch
dpkg-source: info: applying 4521-enable-risc-v-isa.patch
dpkg-source: info: applying 4620-zfs-vol-wait-fix-locked-encrypted-vols.patch
dpkg-source: info: applying 4700-Fix-DKMS-build-on-arm64-with-PREEMPTION-and-BLK_CGRO.patch

I then applied your patch (it applied cleanly, without any fuzz), compiled, installed, and rebooted.

After that, zpool import -al gives the following errors:

This pool uses the following feature(s) not supported by this system:
com.datto:encryption (Support for dataset level encryption)
org.zfsonlinux:project_quota (space/object accounting based on project I
com.delphix:spacemap_v2 (Space maps representing large segments are more
cannot import 'pool1': unsupported version or feature
This pool uses the following feature(s) not supported by this system:
com.datto:encryption (Support for dataset level encryption)
com.delphix:spacemap_v2 (Space maps representing large segments are more
org.zfsonlinux:project_quota (space/object accounting based on project I
cannot import 'pool2': unsupported version or feature
This pool uses the following feature(s) not supported by this system:
com.datto:encryption (Support for dataset level encryption)
com.delphix:spacemap_v2 (Space maps representing large segments are more
org.zfsonlinux:project_quota (space/object accounting based on project I
cannot import 'pool3': unsupported version or feature

I removed your patch, recompiled, installed, and rebooted, and everything worked as before.


4oo4 commented Jan 17, 2021

@tuxoko I'm testing the patch now, on 0.8.6 on Debian Buster. Unfortunately the fault only occurs randomly for me, so I'll keep an eye on it to see if it recurs.

Cheers


tuxoko commented Jan 17, 2021

@4oo4
You can check for this printk message in your dmesg log:
https://gist.github.com/tuxoko/2ca0b451c8fb80061ce25cb601dbdc7f#file-get_data_gen_0-8-patch-L101
If it shows up, it means you would have hit the fault without the patch.

@fbuescher
The patch doesn't change any feature flags. You should double-check that your build is correct.


4oo4 commented Feb 3, 2021

@tuxoko

I'm trying to test your patch with zfs-dkms, but I'm unable to build the DKMS module because of -Werror=implicit-function-declaration. Where would be the proper place to add -Wall -Wno-error to CFLAGS (or whatever flags are needed to override it) as part of the DKMS module build process? I tried doing this both with an environment variable while calling dkms install and by setting it in /var/lib/dkms/zfs/0.8.6/build/Makefile, but I think it needs to be declared somewhere in /usr/src/zfs-0.8.6?

/var/lib/dkms/zfs/0.8.6/build/module/zfs/zfs_vnops.c: In function ‘zfs_get_data’:
/var/lib/dkms/zfs/0.8.6/build/module/zfs/zfs_vnops.c:1068:3: error: implicit declaration of function ‘zfs_zrele_async’; did you mean ‘zfs_iput_async’? [-Werror=implicit-function-declaration]
   zfs_zrele_async(zp);
   ^~~~~~~~~~~~~~~
   zfs_iput_async
/var/lib/dkms/zfs/0.8.6/build/module/zfs/zfs_vnops.c:1073:40: warning: format ‘%lu’ expects argument of type ‘long unsigned int’, but argument 3 has type ‘uint64_t’ {aka ‘long long unsigned int’} [-Wformat=]
   printk("%s: gen mismatch, expected=%lu got=%lu\n", __func__, gen, zp_gen);
                                      ~~^                       ~~~
                                      %llu
/var/lib/dkms/zfs/0.8.6/build/module/zfs/zfs_vnops.c:1073:48: warning: format ‘%lu’ expects argument of type ‘long unsigned int’, but argument 4 has type ‘uint64_t’ {aka ‘long long unsigned int’} [-Wformat=]
   printk("%s: gen mismatch, expected=%lu got=%lu\n", __func__, gen, zp_gen);
                                              ~~^                    ~~~~~~
                                              %llu
  CC [M]  /var/lib/dkms/zfs/0.8.6/build/module/zfs/zio_checksum.o
  CC [M]  /var/lib/dkms/zfs/0.8.6/build/module/zfs/zio_compress.o
  CC [M]  /var/lib/dkms/zfs/0.8.6/build/module/zfs/zio_crypt.o
  CC [M]  /var/lib/dkms/zfs/0.8.6/build/module/zfs/zio_inject.o
  CC [M]  /var/lib/dkms/zfs/0.8.6/build/module/zfs/zle.o
cc1: some warnings being treated as errors
make[7]: *** [/usr/src/linux-headers-4.19.0-13-common/scripts/Makefile.build:308: /var/lib/dkms/zfs/0.8.6/build/module/zfs/zfs_vnops.o] Error 1
make[7]: *** Waiting for unfinished jobs....
make[6]: *** [/usr/src/linux-headers-4.19.0-13-common/scripts/Makefile.build:549: /var/lib/dkms/zfs/0.8.6/build/module/zfs] Error 2
make[5]: *** [/usr/src/linux-headers-4.19.0-13-common/Makefile:1565: _module_/var/lib/dkms/zfs/0.8.6/build/module] Error 2
make[4]: *** [Makefile:146: sub-make] Error 2
make[3]: *** [Makefile:8: all] Error 2
make[3]: Leaving directory '/usr/src/linux-headers-4.19.0-13-amd64'
make[2]: *** [Makefile:30: modules] Error 2
make[2]: Leaving directory '/var/lib/dkms/zfs/0.8.6/build/module'
make[1]: *** [Makefile:843: all-recursive] Error 1
make[1]: Leaving directory '/var/lib/dkms/zfs/0.8.6/build'
make: *** [Makefile:712: all] Error 2

Cheers


tuxoko commented Feb 3, 2021

@4oo4
Change zfs_zrele_async to zfs_iput_async.
It's interesting that the gcc warning actually provided the correct suggestion; I didn't know it could do that.
The function was renamed in master, and I overlooked that when I ported the patch.
For the printk warnings, just follow gcc's suggestion as well.


4oo4 commented Feb 12, 2021

@tuxoko Oh weird, I never really thought about that; it is interesting that gcc can do that.

After correcting that, I'm still getting some DKMS build errors, but different ones this time:

/var/lib/dkms/zfs/0.8.6/build/module/zfs/zfs_vnops.c: In function ‘zfs_get_data’:
/var/lib/dkms/zfs/0.8.6/build/module/zfs/zfs_vnops.c:1068:18: error: passing argument 1 of ‘zfs_iput_async’ from incompatible pointer type [-Werror=incompatible-pointer-types]
   zfs_iput_async(zp);
                  ^~
/var/lib/dkms/zfs/0.8.6/build/module/zfs/zfs_vnops.c:992:30: note: expected ‘struct inode *’ but argument is of type ‘znode_t *’ {aka ‘struct znode *’}
 zfs_iput_async(struct inode *ip)
                ~~~~~~~~~~~~~~^~
/var/lib/dkms/zfs/0.8.6/build/module/zfs/zfs_vnops.c:1075:18: error: passing argument 1 of ‘zfs_iput_async’ from incompatible pointer type [-Werror=incompatible-pointer-types]
   zfs_iput_async(zp);
                  ^~
/var/lib/dkms/zfs/0.8.6/build/module/zfs/zfs_vnops.c:992:30: note: expected ‘struct inode *’ but argument is of type ‘znode_t *’ {aka ‘struct znode *’}
 zfs_iput_async(struct inode *ip)
                ~~~~~~~~~~~~~~^~
  CC [M]  /var/lib/dkms/zfs/0.8.6/build/module/zfs/zil.o
  CC [M]  /var/lib/dkms/zfs/0.8.6/build/module/zfs/zio.o
  CC [M]  /var/lib/dkms/zfs/0.8.6/build/module/zfs/zio_checksum.o
  CC [M]  /var/lib/dkms/zfs/0.8.6/build/module/zfs/zio_compress.o
  CC [M]  /var/lib/dkms/zfs/0.8.6/build/module/zfs/zio_crypt.o
  CC [M]  /var/lib/dkms/zfs/0.8.6/build/module/zfs/zio_inject.o
  CC [M]  /var/lib/dkms/zfs/0.8.6/build/module/zfs/zle.o
  CC [M]  /var/lib/dkms/zfs/0.8.6/build/module/zfs/zpl_ctldir.o
  CC [M]  /var/lib/dkms/zfs/0.8.6/build/module/zfs/zpl_export.o
cc1: some warnings being treated as errors
make[7]: *** [/usr/src/linux-headers-4.19.0-13-common/scripts/Makefile.build:308: /var/lib/dkms/zfs/0.8.6/build/module/zfs/zfs_vnops.o] Error 1
make[7]: *** Waiting for unfinished jobs....
make[6]: *** [/usr/src/linux-headers-4.19.0-13-common/scripts/Makefile.build:549: /var/lib/dkms/zfs/0.8.6/build/module/zfs] Error 2
make[5]: *** [/usr/src/linux-headers-4.19.0-13-common/Makefile:1565: _module_/var/lib/dkms/zfs/0.8.6/build/module] Error 2
make[4]: *** [Makefile:146: sub-make] Error 2
make[3]: *** [Makefile:8: all] Error 2
make[3]: Leaving directory '/usr/src/linux-headers-4.19.0-13-amd64'
make[2]: *** [Makefile:30: modules] Error 2
make[2]: Leaving directory '/var/lib/dkms/zfs/0.8.6/build/module'
make[1]: *** [Makefile:843: all-recursive] Error 1
make[1]: Leaving directory '/var/lib/dkms/zfs/0.8.6/build'
make: *** [Makefile:712: all] Error 2

I'm guessing there are a few more tweaks that need to be done?

Also, is commit 2921ad6 related to this issue at all?

EDIT:

Managed to figure this out: after looking through some other uses of zfs_iput_async() and seeing the hint about the znode vs. inode pointer types (ZTOI() gives the struct inode embedded in a znode), it looks like the call needs to become:

zfs_iput_async(ZTOI(zp));

After that it builds properly. I'll test it out and see what happens from here.

Thanks


4oo4 commented Mar 2, 2021

@tuxoko I have been running with this patch for about two weeks now and have not experienced a deadlock since. It finally put something in dmesg, too.

Feb 28 05:51:05 localhost kernel: [1347192.236182] zfs_get_data: gen mismatch, expected=20098561 got=20098562

I'm also looking at upgrading to 2.0.3 sometime soon; is there a 2.x patch for this ready to go? Otherwise I can keep running 0.8.6 for a while if you want me to keep testing it.

Cheers


tuxoko commented Mar 2, 2021

@4oo4
Great! That means the issue you were hitting is indeed as I suspected.
I will submit a pull request then.

tuxoko pushed a commit to tuxoko/zfs that referenced this issue Mar 3, 2021
If a TX_WRITE is created on a file, and the file is later deleted and a new directory is created with the same object id, it is possible that when zil_commit happens, zfs_get_data will be called on the new directory. This may result in a panic as it tries to take a range lock.

This patch fixes the issue by recording the generation number during zfs_log_write, so zfs_get_data can check whether the object is still valid.

Signed-off-by: Chunwei Chen <[email protected]>
Closes openzfs#10593

Change-Id: I6258f045ce5875d9f7acd29bef52b73a7679808e
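
For readers following along, here is a conceptual, self-contained C sketch of the guard the commit message describes; log_record, object_state and get_data_checked() are illustrative stand-ins, not the actual patch, which works in terms of the ZIL write record and the znode. The idea is that the log record remembers the object's generation at zfs_log_write() time, and the commit path refuses to touch an object whose generation no longer matches, i.e. whose object id was freed and reused.

#include <stdint.h>
#include <stdio.h>

/* Stand-in for the logged TX_WRITE record. */
struct log_record {
    uint64_t object_id;
    uint64_t gen;       /* generation captured at zfs_log_write() time */
    uint64_t offset;
    uint64_t length;
};

/* Stand-in for the object looked up again at zil_commit() time. */
struct object_state {
    uint64_t object_id;
    uint64_t gen;       /* changes when the object id is reused */
};

/*
 * Commit-time check: only proceed to the range lock and copy if the object
 * is still the one the write was logged against.
 */
static int
get_data_checked(const struct log_record *lr, const struct object_state *obj)
{
    if (obj->gen != lr->gen) {
        printf("gen mismatch, expected=%llu got=%llu\n",
            (unsigned long long)lr->gen, (unsigned long long)obj->gen);
        return (-1);    /* behave as if the file no longer exists */
    }
    /* ... safe to take the range lock and read the data here ... */
    return (0);
}

int
main(void)
{
    /* The file was deleted and object id 42 now names a directory. */
    struct log_record lr = { .object_id = 42, .gen = 20098561 };
    struct object_state reused = { .object_id = 42, .gen = 20098562 };

    return (get_data_checked(&lr, &reused) == 0 ? 0 : 1);
}

The "gen mismatch" line mirrors the message the backported 0.8 patch prints, which is what later shows up in dmesg in this thread.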
tuxoko pushed a commit to tuxoko/zfs that referenced this issue Mar 19, 2021
behlendorf pushed a commit that referenced this issue Mar 20, 2021
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Chunwei Chen <[email protected]>
Closes #10593
Closes #11682
@youzhongyang

Will this resolve the issue reported in #10642?

jsai20 pushed a commit to jsai20/zfs that referenced this issue Mar 30, 2021
adamdmoss pushed a commit to adamdmoss/zfs that referenced this issue Apr 10, 2021
sempervictus pushed a commit to sempervictus/zfs that referenced this issue May 31, 2021
tonyhutter pushed a commit to tonyhutter/zfs that referenced this issue Jun 3, 2021
tonyhutter pushed a commit to tonyhutter/zfs that referenced this issue Jun 10, 2021
tonyhutter pushed a commit that referenced this issue Jun 23, 2021