GPF Processing Unlinked ZAP #252
Closing issue; this looks like a duplicate of issue #282, which has a proposed fix.
ahrens pushed a commit to ahrens/zfs that referenced this issue on Feb 24, 2021:
"… not fail when warnings occur (openzfs#252)"
mmaybee pushed a commit to mmaybee/openzfs that referenced this issue on Apr 6, 2022:
"In the common case, calling ZettaCache::insert() (with the default of SIBLING_BLOCKS_INGEST_TO_ZETTACACHE=false) memcpy()s the block to avoid pinning the whole object in memory. However, when doing sequential reads, we may be dropping most insert requests because the insertion buffer is full. In that case, the memcpy() and associated buffer allocation are unnecessary work, thrown away once we realize the insertion will fail; this has been observed to take >17% of one CPU. This commit changes ZettaCache::insert() to take a closure that computes the AlignedBytes to insert, and we invoke it only if we successfully get an insertion permit (i.e., there is enough space in the insert buffer)."
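The closure-based pattern that commit describes can be sketched as follows. This is a simplified, hypothetical model (the names `ZettaCacheSketch`, `insert_with`, and the plain byte budget are illustrative; the real ZettaCache uses insertion permits and AlignedBytes), but it shows the key idea: the expensive copy only happens after admission is granted.

```rust
/// Simplified sketch of an insert buffer with a fixed byte budget.
struct ZettaCacheSketch {
    used: usize,
    capacity: usize,
}

impl ZettaCacheSketch {
    fn new(capacity: usize) -> Self {
        ZettaCacheSketch { used: 0, capacity }
    }

    fn used(&self) -> usize {
        self.used
    }

    /// Instead of taking the bytes directly, take a closure that produces
    /// them. If the buffer is full, the closure is never invoked, so no
    /// memcpy or allocation is wasted on a request that will be dropped.
    fn insert_with(&mut self, len: usize, make_bytes: impl FnOnce() -> Vec<u8>) -> bool {
        if self.used + len > self.capacity {
            return false; // dropped: bytes were never materialized
        }
        let bytes = make_bytes(); // the copy happens only now
        self.used += bytes.len();
        true
    }
}
```

Passing `FnOnce` rather than the bytes themselves moves the cost of materializing the data past the admission check, which is exactly why the wasted-copy overhead disappears when most inserts are being dropped.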
andrewc12 added a commit to andrewc12/openzfs that referenced this issue on Aug 4, 2023:
Signed-off-by: Andrew Innes <[email protected]>
EchterAgo pushed a commit to EchterAgo/zfs that referenced this issue on Sep 21, 2023:
Signed-off-by: Andrew Innes <[email protected]>
Travis Tabbal reports:
I am attempting to migrate an old pool from OpenSolaris b134 to Debian squeeze with the 0.6.0-rc4 drivers. I am also running the Xen 4.1 hypervisor, compiled from Debian sid source packages. I see Xen in the stack trace, so it might be relevant. This is all being done in dom0; I don't have any VMs running presently. Dom0 has 4 GB of RAM allocated to it, and 2 GB show as free (per the "free" command) after the file copy completed.
I am able to locate the pool and import it fine. Mounting the filesystems fails for a single filesystem; I will include the dmesg output below. I am able to mount the last snapshot and am copying the data into a new filesystem now, so I lost no data. As a result, I wouldn't mind getting the auto-snapshot script I was running on OpenSolaris back up and running...
One minor issue that cropped up: my devices apparently get renamed at boot, so my pool thought it was corrupted without enough redundancy. I was able to rmmod zfs, load it again, and run "zpool import -d . -a" from within the /dev/disk/by-id directory. The disks are now referred to by those names, which should not change, rather than "sda" and such. Nice tip from the zfs-fuse mailing list...
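The recovery steps above amount to re-importing the pool using persistent device links instead of volatile names like sda. A sketch of the sequence (illustrative only — it requires ZFS installed and root, and the exact module/pool state on your system may differ):

```shell
# Unload and reload the ZFS module so the pool can be re-imported cleanly.
sudo rmmod zfs
sudo modprobe zfs

# Import all pools found via the stable by-id links; equivalent to
# running "zpool import -d . -a" from inside /dev/disk/by-id.
sudo zpool import -d /dev/disk/by-id -a

# The pool configuration now records the persistent by-id names.
zpool status
```

Because /dev/disk/by-id links are derived from hardware serial numbers, they survive controller enumeration changes across reboots, which is why the pool no longer appears degraded.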
Hardware is an AMD Phenom X3 with LSI SAS controllers on Samsung 1.5T
drives. Let me know if more details will be useful for you. Thanks for
getting this on Linux. It's working very well so far.