Describe the feature you would like to see added to OpenZFS
Hello,
the CFQ and BFQ Linux I/O schedulers are capable of managing I/O classes and priorities via the `ionice` tool. Using it, one can control how the scheduler treats each process. Long-running bulk copy jobs, background backups, maintenance tasks, and similar work can be put into the `-c3` (idle) class so they won't interfere with more interactive loads. Latency-sensitive processes like databases can be given a higher priority, maybe even the realtime class.
Services can be distinguished by their priority and their need for interactivity versus throughput. Basically, `nice`, but for I/O. It's super useful.
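For illustration, here is how the three `ionice` classes are typically used; the paths, the `pg_dump` invocation, and `some-latency-critical-daemon` are placeholders, not part of the proposal:

```sh
# Idle class (-c3): only gets disk time when no one else wants it.
ionice -c3 rsync -a /data/ /backup/

# Best-effort class (-c2) at the highest priority level (0).
ionice -c2 -n0 pg_dump mydb > /backup/mydb.sql

# Realtime class (-c1, root only): always served first.
sudo ionice -c1 -n0 some-latency-critical-daemon

# Inspect the class/priority of a running process by PID.
ionice -p "$(pidof rsync)"
```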
I'm very surprised to find no issues or discussion regarding `ionice` in ZFS. ZFS obviously doesn't use CFQ/BFQ but its own ZIO scheduler (leaving the per-vdev Linux schedulers at noop/deadline). It doesn't speak `ionice`, wasting this precious opportunity. Is there a reason for this? Was it ever considered? Why or why not?
How will this feature improve OpenZFS?
The same way it improves other filesystems running on disks with the CFQ/BFQ schedulers: by prioritizing processes, latency and throughput can be greatly improved under mixed workloads. Useful on both the desktop and the server.
Additional context
Here's a simple test case (a consolidated script follows the list):
1. Copy a large file from a (mechanical) zpool to /dev/null.
2. Run `stress -i 3 -d 4` in parallel.
3. Watch the copy speed drop to very low numbers.
4. Repeat with `ionice -c3 stress -i 3 -d 4`.

If the ZIO scheduler supported `ionice`, the copy speed would not be noticeably impacted.
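A minimal sketch of the same reproduction as a script, assuming a mechanical pool mounted at `/tank` with a few multi-GB files on it (the paths are placeholders) and `stress` installed:

```sh
#!/bin/sh
# Baseline: sequential read throughput with no competing load.
dd if=/tank/bigfile1 of=/dev/null bs=1M

# Competing load without ionice: the dd throughput collapses.
stress -i 3 -d 4 --timeout 60 &
dd if=/tank/bigfile2 of=/dev/null bs=1M
wait

# Competing load in the idle class: on CFQ/BFQ this would protect
# the copy, but ZFS's ZIO scheduler currently ignores it.
ionice -c3 stress -i 3 -d 4 --timeout 60 &
dd if=/tank/bigfile3 of=/dev/null bs=1M
wait

# Note: use a distinct file for each run; otherwise repeat reads
# may be served from the ARC instead of the disk.
```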
Thanks a lot!
It would be nice (hehe ;)) if you'd use a more real-world benchmark. I suspect the `-i` and `-d` combination in your `stress` command creates a heavy stream of synchronous writes, and since ZFS is very serious about sync guarantees, those requests are propagated straight to the disk, which simply dies under such a workload.
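For what it's worth, a sketch of a less sync-dominated background load using `fio`; the job parameters and the `/tank/fio` directory are assumptions, not taken from this thread:

```sh
# Buffered sequential writer, fsynced only at the end: closer to a
# "bulk copy" workload than stress's fsync loop.
fio --name=bg-writer --directory=/tank/fio --rw=write \
    --bs=1M --size=4G --numjobs=2 --ioengine=psync \
    --end_fsync=1 --time_based --runtime=60
```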
I've noticed that when using ZFS, certain programs can hog all the I/O, and there's no fairness at all. It kind of sucks when there's big I/O going on in the background that ends up locking up all the foreground programs.
I checked my Linux I/O schedulers and am setting all of them to none to see what happens. They were previously mq-deadline for rpool and none for the NVMe drive that holds the ZIL + L2ARC.
`ionice` has no effect on ZFS, which is why the I/O scheduler is set to none on disks used by ZFS: ZFS uses its own scheduler. `ionice`, as I understand it, only affects the CFQ scheduler.
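For reference, a quick way to inspect and change the per-device scheduler on Linux (the device name `sda` is an example):

```sh
# Show the available schedulers; the bracketed entry is active.
cat /sys/block/sda/queue/scheduler
# e.g. "[mq-deadline] kyber bfq none"

# Switch a ZFS-backed disk to "none", since ZFS schedules I/O itself.
echo none | sudo tee /sys/block/sda/queue/scheduler
```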
Disregard, I'm dumb. Replied thinking this was the systemd issue. :)