GPF Processing Unlinked ZAP #252

Closed
behlendorf opened this issue May 23, 2011 · 1 comment
@behlendorf
Contributor

Travis Tabbal reports:

I am attempting to migrate an old pool from OpenSolaris b134 to Debian
squeeze with the 0.6.0-rc4 drivers. I am also running the Xen 4.1
hypervisor, compiled from Debian sid source packages; I see Xen in the
stack trace, so it might be relevant. This is all being done in the
dom0, and I don't have any VMs running at present. Dom0 has 4G of RAM
allocated to it, and 2G were showing as free (per the "free" command)
after the file copy completed.

I am able to locate the pool and import it fine. Mounting the
filesystems fails on a single fs. I will include the dmesg output
below. I am able to mount the last snapshot, and am copying the data
into a new fs now, so I lost no data. As a result, I wouldn't mind
getting that auto-snapshot script I was running on OpenSolaris back up
and running...

One minor issue that cropped up: my devices apparently get renamed at
boot, so my pool thought it was corrupted without enough redundancy. I
was able to rmmod zfs, load it again, and run "zpool import -d . -a"
from the /dev/disk/by-id directory. The disks are now referred to by
those names, which should not change, rather than by "sda" and such.
Nice tip from the zfs-fuse mailing list...

Hardware is an AMD Phenom X3 with LSI SAS controllers and Samsung 1.5T
drives. Let me know if more details would be useful. Thanks for getting
this onto Linux. It's working very well so far.

[  927.924798] general protection fault: 0000 [#1] SMP
[  927.928065] RIP: e030:[]  [] zfs_inode_destroy+0x4f/0x99 [zfs]
[  927.928065] RSP: e02b:ffff8800ba383b48  EFLAGS: 00010286
[  927.928065] RAX: ffff8800b99f7ca0 RBX: dead000000200200 RCX: dead000000100100
[  927.928065] RDX: dead000000200200 RSI: ffff8800ba383b28 RDI: ffff8800bbc431d8
[  927.928065] RBP: ffff8800b99f7cc8 R08: 0000000000000000 R09: 8000000000000000
[  927.928065] R10: dead000000100100 R11: ffffffffa03c1a08 R12: ffff8800bbc431d8
[  927.928065] R13: ffff8800b99f7b50 R14: ffff8800ba383bf8 R15: ffff8800cdb9ab00
[  927.928065] FS:  00007f1ea91c2b40(0000) GS:ffff880003890000(0000) knlGS:0000000000000000
[  927.928065] CS:  e033 DS: 0000 ES: 0000 CR0: 000000008005003b
[  927.928065] CR2: 00007f1b37b62f20 CR3: 00000000ba374000 CR4: 0000000000000660
[  927.928065] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[  927.928065] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[  927.928065] Process mount.zfs (pid: 4452, threadinfo ffff8800ba382000, task ffff8800ca4fa350)
[  927.928065] Stack:
[  927.928065]  ffff8800ba383bf0 ffff8800ba383bb8 ffff8800bbc43000 ffff8800ba383bf0
[  927.928065] <0> ffff8800ba383e08 ffffffffa039e934 0000400000020000 0000001100000013
[  927.928065] <0> 0000000000000108 ffffffff81000006 000000000095d5a1 000fff711d5a0000
[  927.928065] Call Trace:
[  927.928065]  [] ? zfs_unlinked_drain+0x95/0xdc [zfs]
[  927.928065]  [] ? check_events+0x12/0x20
[  927.928065]  [] ? xen_restore_fl_direct_end+0x0/0x1
[  927.928065]  [] ? _spin_unlock_irqrestore+0xd/0xe
[  927.928065]  [] ? __taskq_create+0x361/0x387 [spl]
[  927.928065]  [] ? xen_force_evtchn_callback+0x9/0xa
[  927.928065]  [] ? check_events+0x12/0x20
[  927.928065]  [] ? autoremove_wake_function+0x0/0x2e
[  927.928065]  [] ? zfs_get_data+0x0/0x22a [zfs]
[  927.928065]  [] ? zpl_fill_super+0x0/0xd [zfs]
[  927.928065]  [] ? zfs_sb_setup+0x7d/0xe0 [zfs]
[  927.928065]  [] ? zfs_domount+0x1c7/0x22b [zfs]
[  927.928065]  [] ? sget+0x39d/0x3af
[  927.928065]  [] ? set_anon_super+0x0/0xd5
[  927.928065]  [] ? zpl_fill_super+0x9/0xd [zfs]
[  927.928065]  [] ? get_sb_nodev+0x4f/0x83
[  927.928065]  [] ? zpl_get_sb+0x21/0x26 [zfs]
[  927.928065]  [] ? __get_free_pages+0x9/0x46
[  927.928065]  [] ? vfs_kern_mount+0x99/0x14b
[  927.928065]  [] ? do_kern_mount+0x43/0xe2
[  927.928065]  [] ? do_mount+0x72a/0x792
[  927.928065]  [] ? sys_mount+0x80/0xbd
[  927.928065]  [] ? system_call_fastpath+0x16/0x1b
[  927.928065] RIP  [] zfs_inode_destroy+0x4f/0x99 [zfs]
[  927.928065]  RSP 
[  927.928065] ---[ end trace a7919e7f17c0a727 ]---
@behlendorf
Contributor Author

Closing this issue; it looks like a duplicate of issue #282, which has a proposed fix.

ahrens pushed a commit to ahrens/zfs that referenced this issue Feb 24, 2021
mmaybee pushed a commit to mmaybee/openzfs that referenced this issue Apr 6, 2022
In the common case, calling ZettaCache::insert() (with the default of
SIBLING_BLOCKS_INGEST_TO_ZETTACACHE=false) memcpy()s the block to avoid
pinning the whole object in memory. However, when doing sequential
reads, we may be dropping most insert requests because the insertion
buffer is full. In that case the memcpy() and the associated buffer
allocation are unnecessary work, since the copy is thrown away once we
realize the insertion will fail. This has been observed to take >17% of
one CPU.

This commit changes ZettaCache::insert() to take a closure that
computes the AlignedBytes to insert, and the closure is only invoked if
we successfully get an insertion permit (i.e., there is enough space in
the insert buffer).
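
For illustration, here is a minimal Rust sketch of the closure-based insert
path described in that commit message. The names and structure are
assumptions made for the sketch (the AlignedBytes stand-in, insert_with, and
a Mutex-guarded buffer standing in for the real permit mechanism), not the
actual ZettaCache API.

```rust
use std::sync::Mutex;

// Stand-in for the real aligned buffer type used by the zettacache.
struct AlignedBytes(Vec<u8>);

struct InsertBuffer {
    used: usize,     // bytes currently reserved in the insert buffer
    capacity: usize, // maximum bytes the insert buffer may hold
    pending: Vec<AlignedBytes>,
}

struct ZettaCacheSketch {
    buf: Mutex<InsertBuffer>,
}

impl ZettaCacheSketch {
    /// Take a closure instead of ready-made bytes: the (potentially costly)
    /// copy into an AlignedBytes is performed only after space has been
    /// reserved, so inserts dropped because the buffer is full cost almost
    /// nothing.
    fn insert_with<F>(&self, len: usize, make_bytes: F)
    where
        F: FnOnce() -> AlignedBytes,
    {
        // Try to reserve space first (the "insertion permit").
        {
            let mut buf = self.buf.lock().unwrap();
            if buf.used + len > buf.capacity {
                return; // buffer full: skip the memcpy/allocation entirely
            }
            buf.used += len;
        }
        // Only now pay for the copy into an aligned buffer.
        let bytes = make_bytes();
        self.buf.lock().unwrap().pending.push(bytes);
    }
}
```

The point is that make_bytes() runs only after the reservation succeeds, so
during sequential reads, where most inserts are dropped, the copy is never
performed.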
andrewc12 added a commit to andrewc12/openzfs that referenced this issue Aug 4, 2023
EchterAgo pushed a commit to EchterAgo/zfs that referenced this issue Sep 21, 2023