Implement uncached prefetch #14243
Conversation
I think this may nicely fill the gap between regular cached I/O and the zero-copy O_DIRECT of #10018, handling cases where full zero-copy is not possible, or where the user cannot give up prefetch and writeback.
As discussed at the latest monthly meeting, I've enabled linear buffer allocation for data that need to be decompressed. While those buffers cannot be shared, since the decompression code requires linear buffers, this still avoids an extra memory copy. My tests show a read speed improvement for 128KB blocks LZ4-compressed to 50%, from 2900 MB/s to 3400 MB/s.
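A minimal sketch of that decompression point, with made-up types and a stub decompressor (this is not the ZFS abd code): when the physical buffer is linear, the decompressor can read from it directly; a scattered buffer would first have to be flattened into a temporary, which is exactly the copy being avoided.

```c
#include <stdlib.h>
#include <string.h>

/* Stub standing in for LZ4/zstd: requires contiguous input and output. */
static int
decompress(const void *src, size_t srclen, void *dst, size_t dstlen)
{
	memcpy(dst, src, srclen < dstlen ? srclen : dstlen);
	return (0);
}

/* Hypothetical physical buffer: either one linear chunk or scattered. */
struct phys_buf {
	void	*data;		/* contiguous memory when linear */
	size_t	size;
	int	linear;
};

static int
read_compressed(const struct phys_buf *pb, void *dst, size_t dstlen,
    void *(*flatten)(const struct phys_buf *))
{
	if (pb->linear)		/* decompress straight out of the buffer */
		return (decompress(pb->data, pb->size, dst, dstlen));

	void *tmp = flatten(pb);	/* the extra copy linear allocation avoids */
	if (tmp == NULL)
		return (-1);
	int rc = decompress(tmp, pb->size, dst, dstlen);
	free(tmp);
	return (rc);
}
```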
This seems good to me!
Just added some more comments and rebased.
Force-pushed from 60567c5 to 4ca49ca.
Looks good, and has worked as expected in my local testing. I am a little concerned about always allocating uncacheable buffers as linear, but this should only be an issue if they were allowed to accumulate, and as you pointed out they're short-lived, so this should be okay.
Force-pushed from 44178ac to f9647b9.
Rebased it on the new master after the #14123 merge.
Can you update arcstat to get the new "uncached" stats?
Yes, it will need an update after both PRs.
Done.
I've added one more chunk for dmu_tx_check_ioerr() to avoid a double block read during short/misaligned uncached writes, as described in #14333. It keeps uncacheable L0 blocks read by dmu_tx_check_ioerr() in the dbuf cache until they are read again by dmu_buf_will_dirty(). For higher-level blocks this trick may not work, leaving uncacheable metadata in the cache, since they do not always seem to be read explicitly later to clear the flag. But that should be less critical, since dmu_buf_hold_array_by_dnode() should be able to handle indirect read errors, and hopefully not many people run primarycache=none with a random write workload.
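A toy illustration of that trick, with hypothetical names (not the actual dbuf.c logic): the block read by the pre-write error check stays in the dbuf cache so the later dirtying step finds it there instead of reading it from disk a second time.

```c
#include <stdbool.h>

struct toy_dbuf {
	bool	cached;		/* currently held in the dbuf cache */
	bool	valid;		/* block contents have been read in */
};

static void
toy_read_from_disk(struct toy_dbuf *db)
{
	db->valid = true;	/* stand-in for the actual block read */
}

/* dmu_tx_check_ioerr()-like step: prefault the block to catch I/O errors. */
static void
toy_check_ioerr(struct toy_dbuf *db)
{
	toy_read_from_disk(db);
	/*
	 * With primarycache=none this buffer would normally be dropped right
	 * away; keep it cached so the write path can find it again.
	 */
	db->cached = true;
}

/* dmu_buf_will_dirty()-like step: a short/misaligned write needs old data. */
static void
toy_will_dirty(struct toy_dbuf *db)
{
	if (!db->cached || !db->valid)
		toy_read_from_disk(db);	/* the second read being avoided */
	db->cached = false;	/* consumed; uncached behavior resumes */
}
```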
Recent ARC commits added new statistic counters, such as iohits, the uncached state, etc. Represent those. Also, some of the previously reported numbers were confusing or even made no sense. Clean up and restructure the existing reports.
Reviewed-by: Ryan Moeller <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Alexander Motin <[email protected]>
Sponsored by: iXsystems, Inc.
Issue #14115 Issue #14123 Issue #14243 Closes #14320
Thanks for implementing this feature, which will be very useful on fast NVMe devices! One question: since uncached buffers are linearly allocated, does that mean increased memory fragmentation? Thanks.
If the data are not compressed, then only for the amount of prefetched but not yet read data, and for no more than one second. For compressed data it may add up to another dbuf cache's worth of memory, which is 3% of the ARC by default. So I don't think it should cause problems. The dbuf cache and all dirty data are always linear, so this is not a dramatic change.
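To put a rough, illustrative number on the compressed-data case (my arithmetic, assuming the default dbuf cache cap of 1/32 of the ARC, which matches the ~3% above): with a 64 GiB ARC the dbuf cache tops out around 2 GiB, so the extra linear allocations are bounded by roughly that much, plus whatever the prefetcher holds for under a second.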
Previously the primarycache property was handled only in the dbuf layer. Since the speculative prefetcher is implemented in the ARC, it had to be disabled for uncacheable buffers.

This change gives the ARC knowledge about uncacheable buffers via arc_read() and arc_write(). So when remove_reference() drops the last reference on the ARC header, it can either immediately destroy it, or, if it is marked as prefetch, put it into a new arc_uncached state. That state is scanned every second, evicting stale buffers that were not demand read.

This change also tracks dbufs that were read from the beginning, but not to the end. It is assumed that such buffers may receive further reads, and so they are stored in the dbuf cache. If a following read reaches the end of the buffer, it is immediately evicted. Otherwise it will follow regular dbuf cache eviction. Since the dbuf layer does not know actual file sizes, this logic is not applied to the final buffer of a dnode.

Since uncacheable buffers should no longer stay in the ARC for long, this patch also tries to optimize I/O by allocating ARC physical buffers as linear to allow buffer sharing.

Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed-by: George Wilson <[email protected]>
Reviewed-by: Ryan Moeller <[email protected]>
Signed-off-by: Alexander Motin <[email protected]>
Sponsored by: iXsystems, Inc.
Closes openzfs#14243
New Features
- Block cloning (#13392)
- Linux container support (#14070, #14097, #12263)
- Scrub error log (#12812, #12355)
- BLAKE3 checksums (#12918)
- Corrective "zfs receive"
- Vdev and zpool user properties

Performance
- Fully adaptive ARC (#14359)
- SHA2 checksums (#13741)
- Edon-R checksums (#13618)
- Zstd early abort (#13244)
- Prefetch improvements (#14603, #14516, #14402, #14243, #13452)
- General optimization (#14121, #14123, #14039, #13680, #13613, #13606, #13576, #13553, #12789, #14925, #14948)

Signed-off-by: Brian Behlendorf <[email protected]>
Before this change the primarycache property was handled only at the dbuf layer, controlling the dbuf cache and, through a hack, ARC evictions. Since the speculative prefetcher is implemented at the ARC level, it had to be disabled for uncacheable buffers, because otherwise falsely prefetched and never read buffers would stay cached in the ARC.
This change gives the ARC knowledge about uncacheable buffers. It is passed to arc_read() and arc_write() and stored in the ARC header. When remove_reference() drops the last reference on the ARC header, it can either immediately destroy it, or, if it is marked as prefetch, put it into the new arc_uncached state. That state is scanned every second, looking for stale buffers that were not demand read, which are evicted immediately.
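As a toy model of that reference-drop path and the periodic scan (hypothetical types and list handling, not arc.c):

```c
#include <stdlib.h>
#include <time.h>

struct toy_hdr {
	int		prefetched;	/* brought in by the prefetcher */
	int		demand_read;	/* a reader eventually asked for it */
	time_t		parked;		/* when it entered the uncached list */
	struct toy_hdr	*next;
};

static struct toy_hdr *uncached_list;

/* Last reference dropped on an uncacheable header. */
static void
toy_last_reference(struct toy_hdr *h)
{
	if (h->prefetched && !h->demand_read) {
		/* Prefetched but never read yet: park it for a short while. */
		h->parked = time(NULL);
		h->next = uncached_list;
		uncached_list = h;
	} else {
		free(h);	/* demand read already happened: destroy now */
	}
}

/* Scanned roughly once a second; stale never-read prefetches are evicted. */
static void
toy_uncached_scan(time_t now)
{
	struct toy_hdr **pp = &uncached_list;
	while (*pp != NULL) {
		struct toy_hdr *h = *pp;
		if (now - h->parked >= 1) {
			*pp = h->next;
			free(h);
		} else {
			pp = &h->next;
		}
	}
}
```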
To handle cases of short or misaligned reads, this change tracks at the dbuf layer buffers that were read from the beginning, but not to the end. It is assumed that such buffers may receive further reads, and so they are stored in the dbuf cache. If one of the following reads reaches the end of such a buffer, it is immediately evicted. Otherwise it will follow regular dbuf cache eviction and will be evicted from the ARC at the same moment. Since the dbuf layer does not know the actual file size, this logic is not applied to the last buffers of dnodes, which are always evicted as before.
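A sketch of that partial-read bookkeeping with hypothetical fields (not the real dbuf code):

```c
#include <stddef.h>
#include <stdbool.h>

struct toy_partial {
	size_t	size;		/* block size */
	size_t	read_to;	/* highest byte offset read so far */
	bool	keep_cached;	/* held in the dbuf cache awaiting the tail */
};

/* Called after a read of [off, off + len) from an uncacheable dbuf. */
static void
toy_note_read(struct toy_partial *db, size_t off, size_t len,
    bool last_of_dnode)
{
	if (off + len > db->read_to)
		db->read_to = off + len;

	if (last_of_dnode) {
		/* File size unknown at this layer: never hold the last buffer. */
		db->keep_cached = false;
	} else if (db->read_to >= db->size) {
		/* Read through to the end: evict immediately. */
		db->keep_cached = false;
	} else if (off == 0 || db->keep_cached) {
		/* Read from the start but short: expect the rest to follow. */
		db->keep_cached = true;
	}
}
```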
Since uncacheable buffers should no longer stay in the ARC for long, this patch also tries to optimize I/O by allocating ARC physical buffers as linear to allow buffer sharing. For uncompressed data this avoids one of two memory copies on both reads and writes, at the cost of somewhat higher KVA usage in case of prefetch. When decompression is needed, sharing is impossible, but this still avoids an extra memory copy, since decompression requires a linear buffer anyway.
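And a sketch of the sharing side for uncompressed data, again with made-up names (this is not the actual ARC buffer-sharing code): when the physical buffer is linear, the logical buffer can simply point at it; a scattered buffer forces a flattening copy.

```c
#include <stdlib.h>
#include <string.h>

struct toy_lbuf {
	void	*data;
	size_t	size;
	int	shared;		/* data aliases the physical buffer */
};

/*
 * Build the logical (byte-addressable) view of a physical buffer.  In this
 * toy the "scattered" case is still one allocation; the point is only
 * whether a copy is made.
 */
static struct toy_lbuf *
toy_map_physical(void *phys, size_t size, int phys_is_linear)
{
	struct toy_lbuf *b = calloc(1, sizeof (*b));
	if (b == NULL)
		return (NULL);
	b->size = size;
	if (phys_is_linear) {
		b->data = phys;		/* share it: no copy at all */
		b->shared = 1;
	} else {
		b->data = malloc(size);	/* must materialize a private copy */
		if (b->data == NULL) {
			free(b);
			return (NULL);
		}
		memcpy(b->data, phys, size);	/* the copy sharing avoids */
	}
	return (b);
}
```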
With the combination of enabled prefetch and the avoided memory copy, this change improves sequential single-threaded read speed from a wide NVMe pool from 2049 to 3932 MiB/s. During writes, the profiler shows a 22% reduction in unhalted CPU cycles at the same throughput of 3653 MiB/s.