Prefetch doesn't kick in when reading a file sequentially through samba #9712
Comments
Ok... so this doesn't make sense to me and could just be an artefact, but while I was trying to understand the difference in behavior between Samba and dd, I noticed that attaching strace to the smbd process gave me better performance. zfetchstats confirms the effect, with almost no further increments to the misses and max_streams counters. I've been able to reproduce the effect every single time, so I thought it was worth mentioning. Edit: added two flamegraphs to illustrate the difference in behavior.
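For anyone trying to reproduce the strace observation, here is a minimal sketch of how the counters can be watched while toggling strace. The kstat path is the standard one for ZFS on Linux; the smbd PID selection (`pgrep -n`, i.e. the newest smbd, presumed to be the per-connection child) is an assumption about the setup.

```sh
# Snapshot the prefetcher counters before the test (ZFS on Linux kstat path).
grep -E 'misses|max_streams' /proc/spl/kstat/zfs/zfetchstats

# Attach strace to the smbd process serving the client (assumes a single
# client connection; adjust the pgrep filter for your setup). Performance
# reportedly improves only while strace stays attached.
sudo strace -f -p "$(pgrep -n smbd)" -o /dev/null &
STRACE_PID=$!

# ... run the sequential read from the Windows client here ...

# Detach strace and re-check how much misses/max_streams grew.
sudo kill "$STRACE_PID"
grep -E 'misses|max_streams' /proc/spl/kstat/zfs/zfetchstats
```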
Performance is only better while you have strace actively attached, and it goes back to normal when you detach it?
@beren12 Yes.
I'm not sure how this is a ZFS bug, given those are entirely different I/O patterns. Samba uses sync for everything. It could be Samba's own read-ahead settings (aio read size, aio max threads, etc.). It's also not clear whether and how you purged caches between your local and remote tests.
@h1z1 Correct: by setting aio read size = 0, Samba no longer delegates I/Os to a threadpool and instead executes them synchronously in the process handling the connection. With that setting I do get the expected prefetch behavior. However, this would degrade performance when reading a block that is not in the ARC, and since ZFS prefetch works at the file level, it shouldn't matter which process requested the blocks as long as the requests are sequential. They are; I just checked in the ZFS read history. FWIW, reverting to the stock kernel, 4.15.0-74-generic, seems to solve the problem without the need to cripple Samba. One of the obvious changes in the HWE kernel is the use of blk-mq for SATA/SAS drives, but I haven't had a chance yet to run the HWE kernel with blk-mq disabled. Is it worth trying, or could it not be related at all?
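A sketch of the two experiments discussed above. The smb.conf path is an assumption, and the blk-mq toggle shown applies to the legacy SCSI layer on kernels of that era (pre-5.x); both are illustrative, not a confirmed fix.

```sh
# Workaround: force smbd to issue reads synchronously instead of via its AIO
# threadpool. Add to the [global] section of smb.conf (path assumed to be
# /etc/samba/smb.conf):
#
#   aio read size = 0
#
# Verify the effective value and tell running smbd processes to reload:
testparm -s --parameter-name='aio read size'
sudo smbcontrol all reload-config

# Separate experiment: boot the HWE kernel with blk-mq disabled for SATA/SAS
# by appending this to the kernel command line (only meaningful on kernels
# where blk-mq is still optional):
#
#   scsi_mod.use_blk_mq=0
```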
This issue has been automatically marked as "stale" because it has not had any activity for a while. It will be closed in 90 days if no further activity occurs. Thank you for your contributions. |
PR #11652 may help if the problem arises only with AIO enabled.
System information
Describe the problem you're observing
The ZFS prefetcher doesn't seem to detect a sequential read made from a Windows client through Samba, resulting in limited performance of ~300-400 MiB/s, while a simple dd can achieve 2.6 GiB/s locally.
My first thought was to blame Samba and move on, but reading a file that is already cached gives me NIC-limited traffic of ~1.1 GiB/s, so Samba itself doesn't appear to be the bottleneck.
I also see a lot of increments in the zfetchstats misses and max_streams counters when reading the file over the network, while I see none during a local read.
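For reference, a minimal sketch of the local baseline and counter check described above; the file path is a placeholder and bs=1M is an assumed block size for the dd run.

```sh
# Baseline: sequential local read of the same file (path is a placeholder).
# Assumes the file is not already cached in the ARC, otherwise no prefetch
# occurs. On this system this kind of read reaches ~2.6 GiB/s.
dd if=/tank/share/bigfile of=/dev/null bs=1M

# Compare prefetcher counters before and after the local vs. network reads.
awk '/misses|max_streams/ {print}' /proc/spl/kstat/zfs/zfetchstats
```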
Describe how to reproduce the problem
Read a file sequentially from a Windows client over an SMB share (a reproduction sketch using smbclient is shown below).
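If no Windows machine is handy, the same smbd code path can presumably be exercised from any SMB client; a sketch using smbclient, with the server, share name, and credentials as placeholders:

```sh
# Pull the file through smbd and discard it locally (server/share/user are
# placeholders). This goes through the same smbd AIO threadpool as a read
# issued by a Windows client.
smbclient //server/share -U user%password -c 'get bigfile /dev/null'

# Meanwhile, watch the pool's read queues in another terminal:
zpool iostat -q 1
```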
Include any warning/errors/backtraces from the system logs
zpool iostat -q 1 (during network read):
zpool iostat -q 1 (during local read):