Lack of fairness of sync writes #10110
Lowering zfs_dirty_data_max significantly (to 100-200M from the default 3G) mitigates the problem for me, but with a roughly 50% performance drop.
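For anyone wanting to try the same workaround, a minimal sketch of applying it at runtime on Linux, assuming the zfs module is loaded; the 150M value is just an example from the 100-200M range above, not a recommendation:

```sh
# Lower the dirty data cap at runtime (takes effect immediately):
echo $((150 * 1024 * 1024)) > /sys/module/zfs/parameters/zfs_dirty_data_max

# To make it persistent across reboots, set it as a module option:
echo "options zfs zfs_dirty_data_max=157286400" >> /etc/modprobe.d/zfs.conf
```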
After some code investigation, it appears the problem is too deeply ingrained in the write path.
I am afraid my workaround is currently the only viable option for acceptable latency under overwhelming fsync load. ZFS is neither designed nor built to be bandwidth-fair to consumer entities.
This issue has been automatically marked as "stale" because it has not had any activity for a while. It will be closed in 90 days if no further activity occurs. Thank you for your contributions.
It's a design issue, so I guess the probability of a fix is effectively zero. Still, it is a desirable feature in both desktop and multi-tenant headless environments. Let's hear the developers out on the subject of complexity and then close/wontfix it.
Probably related to #11140
Apparently we had this in the past, and maybe I was wrong that it was resolved? Anyhow, what about #11929 (comment) and #11912?
I can still reproduce it on 0.8.4:
The dd example from #4603 seems incorrect; I did not see any non-zero numbers in the syncq_write column of zpool iostat while running it. The solutions/comments you linked revolve around limiting queue depth and reducing latency at the cost of bandwidth. That is no substitute for a theoretical writer-aware fair scheduler, which could fix the latency without making the write queue universally shallow.
#4603 was open for 4 years with no activity and then closed by the stale bot. Not the best way to handle issues, and this is a real one. Our whole Proxmox (with local ZFS storage) migration from XenServer has been stalled because of this for weeks now, and all our effort up to now may go poof if we don't get this resolved. Besides the fsync stall, there may be other stalling issues in KVM; I put this bug report here for reference, as it is at least related: https://bugzilla.kernel.org/show_bug.cgi?id=199727. I also think this is a significant one, rendering KVM on top of ZFS really unusable when you have fsync-centric workloads. Thanks for reporting the details about it and for your analysis!
Yeah, if you're multi-tenant (many VMs) you'll have better luck with boring qcow2s on ext/xfs + RAID.
Not an option. I want ZFS snapshots and replication, and I care about my data.
@behlendorf I'm wondering what your thoughts on this issue are. I'm on Proxmox as well and have occasionally noticed the same thing as @Boris-Barboris and @devZer0 have.
This is a real bummer. The following output delays happen, for example, simply by copying a file inside a virtual machine; you can clearly see that sync IO from ioping is getting completely starved inside the VM. I have never seen a single IO need 5.35 min to complete before.

[root@gitlab backups]# ioping -WWWYy ioping.dat

This long starvation causes the following kernel message:

[87720.075195] INFO: task xfsaild/dm-3:871 blocked for more than 120 seconds.

I'm not completely sure if this is a ZFS problem alone, as with "zpool iostat -w hddpool 1" I would expect to see the outstanding IO from ioping (which hangs for minutes) in the syncq_wait queue, but in the 137s row there is not a single IO shown. Is there a way to make this visible on the ZFS layer?
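Regarding making this visible at the ZFS layer, a sketch using two real zpool iostat modes; whether the stuck ioping request actually shows up there depends on where in the stack it is blocked:

```sh
# Average per-vdev wait times, including time spent in the sync write
# queue, sampled every second:
zpool iostat -l hddpool 1

# Active and pending request counts per IO class queue
# (syncq_read, syncq_write, asyncq_read, asyncq_write, ...):
zpool iostat -q hddpool 1
```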
I know this can be mitigated to some extent by adding a SLOG, but we have an SSD and an HDD mirror or raidz on each hypervisor server, and adding another enterprise SSD just to make the HDDs run without problems feels a little ugly; at that point you might as well switch to an SSD-only system.
Any updates? Are there plans to resolve this in future versions?
Why hasn't this been escalated as a serious issue? Performance before features, IMO.
So it's kind of a design limitation. Normally a filesystem offers a mount and accesses a disk, and fairness is provided by the IO scheduler, which attributes the individual requests to the processes issuing them. ZFS, however, doesn't work that way: the actual IO to the disks is issued by ZFS processes, so the scheduler cannot "see" which application is behind an individual IO. In addition, ZFS has its own scheduler built in, so an IO scheduler below it isn't considered helpful; the IO gets optimized for low latency by making sure requests are issued in an order that completes individual requests as fast as possible. The scheduler also sorts requests to complete synchronous reads first and synchronous writes with second priority, followed by asynchronous reads and then asynchronous writes. These priorities are not super strict, however: the number of concurrent requests for each of the described IO classes is tuned up and down based on the outstanding requests.

The tuneable you modified adjusts the limit on how much write data can be cached. Lowering this value ramps up the number of writing threads earlier, as the thresholds are percentages. In addition, ZFS starts to throttle the IO incoming from applications by introducing a sleep time as this cache gets fuller. So there are a couple of things you could try to lower the impact of the issues you're seeing:

Reduce parallel IO jobs per vdev: check whether your disks can keep up with the amount of concurrency by watching per-vdev wait times. If they are often above, say, 15 ms (on SSDs) / 50 ms (on HDDs), the disk has trouble keeping up with the amount of concurrent IO, and lowering the tuneable that caps concurrent IOs per vdev can help.

Earlier throttling, and adjusting the delay introduced for throttling: instead of lowering the maximum amount of "dirty" data for async writes, it's better IMHO to adjust at what percentage of that maximum ZFS starts throttling the writes accepted by applications. The threshold can be configured with zfs_delay_min_dirty_percent. In addition, the scale of the injected delay can be tuned; it's probably best to calibrate that against a mix of random and sequential IO tested on one disk, and depending on your pool layout you then need to multiply that.
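A sketch of what checking and adjusting these knobs can look like on Linux. zfs_vdev_max_active, zfs_delay_min_dirty_percent, and zfs_delay_scale are real OpenZFS module parameters (defaults 1000, 60, and 500000 respectively), but the pool name "tank" and the values below are illustrative assumptions, not recommendations:

```sh
# Watch per-vdev average wait times; compare the write columns against
# the ~15 ms (SSD) / ~50 ms (HDD) rule of thumb above:
zpool iostat -l tank 1

# Cap the number of concurrent IOs ZFS issues per vdev:
echo 300 > /sys/module/zfs/parameters/zfs_vdev_max_active

# Start delaying application writes at 30% of zfs_dirty_data_max
# instead of the default 60%:
echo 30 > /sys/module/zfs/parameters/zfs_delay_min_dirty_percent

# Scale up the injected delay so the throttle bites harder:
echo 1000000 > /sys/module/zfs/parameters/zfs_delay_scale
```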
@ShadowJonathan wrote:
It has; there's a feature request open to implement the missing functionality: balancing IO between processes and creating IONice levels as well, so background IO can be marked as such. See #14151
Hello, does this new feature of sync parallelism help in addressing this problem?
The cause of this issue is that each fsync() request on a file concatenates all the async write data of that file not yet written to stable storage onto the tail of the sync write list of the ZIL. That sync write list is strictly ordered to ensure data consistency, which makes a later small fsync() wait for a previous large one. Should both go to the same file, or be intermixed with some metadata operations, the problem may have no general-case solution, or at least no easy one, due to possible dependencies. If they go to different files, then their ZIL writes could potentially be intermixed to implement some QoS policy, but at this moment that has not been done yet.
@amotin maybe a long shot, but I think sch_cake's algorithm could be used here.

I think the solution would be to reconsider the way we handle queuing for processes that issue too many requests. Traditionally, requests are accepted until the queue hits its limit. But this isn't really a fair approach if some processes use a lot of IO, as other processes can't skip the long queue - as you explained.

So I think the solution would be to avoid queuing new requests much earlier for processes issuing a lot of requests, by accepting requests in a fair manner, like sch_cake sends packets for conversation partners in a fair manner. We would assign a latency target to the queue and freely accept all requests until the latency target is exceeded. Then we start fair queuing, giving each process a chance to issue another request based on the latency "bandwidth" it has used.

This would require us to keep statistics on how long each request took from start to completion, per process. If we have no data on that, because the process is new or that type of request wasn't issued before by the process, we would just accept one request, and accept the second one only once we have measured the first. The system would then track the "cost" (in time) of requests for each pool and thread, roughly in categories by size for each request type - to prevent large requests from distorting latency predictions based on many smaller requests, and to keep read and write requests from being mixed. To keep the statistics accurate, cache-served reads wouldn't be included, and write-modify requests would probably need a separate statistical category, given their higher cost due to copy-on-write.

ZFS currently has a model of distributing sync reads / async reads / sync writes / async writes into different tiers. This could be accommodated too, using the QoS method from sch_cake, where upper percentage limits are defined for each category, so that if the queue is full, only so many requests of each category are accepted. With a three-tier associative queue, the request queue could also maintain fair queuing over pools, processes, and threads, so if one process uses a lot of threads, it is still handled fairly compared to other processes.

This allows ZFS to still process the requests in a linear fashion, ensuring data consistency, but cuts down the size of the queues, so a lower latency for issued requests can be maintained.
@RubenKelevra You've lost me pretty quick. ;)
It is not so straightforward. Let's say there was some process A that has written 1GB of data but does not care about persistence. ZFS has already throttled its writes at the level of ARC dirty data, but not by much, and that is a different topic. After that, some process B writes 1 byte to the same file, but wants it to be persistent. In this situation the ZIL will immediately receive ~1GB of data to write in order to ensure the persistence of the 1-byte write, and there is really no other way to handle it. Just after that, some process C writes 1 byte to another file and also requests persistence. On one side, this write does not depend on the previous one and could be written to the ZIL immediately. On the other, the ZIL has only one queue, and that queue already has 1GB of data. In this situation process A has already completed and we can't penalize it; plus, it didn't do anything bad, so what would we penalize it for? Process B wrote only one byte, but at a cost of 1GB - it is already disproportionately penalized and we can't help it. Process C could theoretically run in parallel, but since all on-disk ZIL structures are sequential, we would need to redesign some in-memory representations so that we could have multiple queues for multiple files where possible, but serialize them at certain critical points. It is not so much a question of statistics or precise QoS as of redesigning and complicating internal structures. Maybe some ideas for it could be obtained from #12731, but it is quite a big change to grasp, going too deep into Intel DCPMM specifics.
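The A/B/C scenario above can be reproduced from a shell. A rough sketch, assuming a dataset mounted at /tank/test and a pool slow enough that A's data is still dirty when B calls fsync; timings will vary:

```sh
cd /tank/test

# Process A: ~1GB of async writes, persistence never requested.
dd if=/dev/zero of=fileA bs=1M count=1024 &

# Process B: writes 1 byte to the same file, then fsyncs. The fsync
# drags all of fileA's outstanding dirty data through the ZIL.
time dd if=/dev/zero of=fileA bs=1 count=1 conv=fsync,notrunc

# Process C: 1 byte to a different file, also fsynced. Although
# logically independent, it queues behind B's ~1GB of ZIL traffic.
time dd if=/dev/zero of=fileC bs=1 count=1 conv=fsync
```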
ARC-level fairness for per-dataset dirty data seems to me the cheapest (in man-hours) option (a zfs_dirty_data_max_per_dataset tunable or something like that). Redesigning the ZIL is IMHO unfeasible.
@amotin well, let's try again. In my suggestion, what gets measured is the time at which we completed a write to the ZIL (or to the dataset itself, where the ZIL is bypassed). So no, the plan wasn't to change how the ZIL works. Instead, I want to redesign how we accept read/write requests.

The idea is that there are some guarantees regarding read/write requests: say a process writes to a file and gets the operation reported back as completed; at exactly that point we need to provide the new data, even if another process asks for it. But this isn't true if we haven't yet acknowledged a sync write. Which means we can stall writes and only read the to-be-written data once the queue is sufficiently empty - as long as we haven't returned the write as completed. The same goes for read requests: they can be stalled as necessary.

So the idea is to stall processes not when the queue is full, like we currently do, but when the latency target is exceeded. Instead of accepting, say, 1 GB to write because we have 1 GB of space in memory assigned as a write buffer and then struggling to write it out in a timely fashion, we only accept as many writes as we can write out in a timely fashion, leaving other applications a chance to step in and issue a write request as well.

In your example, the 1 GB wouldn't have been accepted as a whole, but only in parts, until we hit a latency target of, say, 100 ms. Then that part would have been issued to the ZIL, and once it returned, another 100 ms worth would have been issued, and so forth. If another application needs to write, it can issue a 1-byte write, and instead of waiting for 1 GB to be written to the ZIL, it would take just roughly 100 ms to complete, because process A has used a lot of resources in the past and therefore has a low priority for getting new operations accepted.

So by stalling application requests, we can give more concurrency to a linear process. That is basically the same thing sch_cake is doing, which also has a linear process, as the Ethernet wire has no concurrency.
To some degree you are saying a reasonable thing without actually saying it. ZFS async write latency is effectively the TXG commit time. If the pool is faster than the incoming data stream, that latency can be small and everything should already be nice. But the moment your pool is slower, your TXG size grows up to ARC's dirty_data_max, which may take many seconds to write out if your RAM is big and the pool is slow. For async writes we don't care much, but if at that point somebody executes fsync(), it creates a huge spike of ZIL traffic and latency. What we really need to do (and I have thought about it before) is to limit TXG size not only in terms of used memory, but also in terms of the amount of data the pool can write in a reasonable time. Doing that could reduce async write performance on bursty workloads by making an app wait when not strictly necessary, but it would also reduce latency effects like this ZIL one.
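Until the TXG size is bounded that way in the code, the same idea can be approximated by hand by sizing zfs_dirty_data_max to what the pool can flush within a tolerable time. A back-of-envelope sketch; the 200 MB/s throughput and the 2-second target are assumptions to replace with your own measurements:

```sh
# Measured steady-state pool write throughput, in MB/s (assumption):
THROUGHPUT_MB=200
# Longest flush-induced stall you are willing to tolerate, in seconds:
TARGET_SECS=2

# Cap dirty data so a full TXG can be written out within the target:
echo $((THROUGHPUT_MB * TARGET_SECS * 1024 * 1024)) \
    > /sys/module/zfs/parameters/zfs_dirty_data_max
```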
System information
Describe the problem you're observing
I am observing unfair sync write scheduling and severe userspace process IO starvation in certain situations. It appears that an fsync call on a file with a lot of unwritten dirty data will stall the system and cause a FIFO-like sync write order, where no other process gets its share until the dirty data is flushed.
On my home system this causes severe stalls when a guest VM with a cache=writeback virtio-scsi disk decides to sync the SCSI barrier while a lot of its dirty data sits in the hypervisor's RAM. All other hypervisor writers block completely, and userspace starts falling over with various timeouts and locks. It effectively acts as a DoS.
Describe how to reproduce the problem
1). Prepare a reasonably-default dataset.
2). Prepare 2 terminal tabs and cd to this dataset's mount point. In them, prepare the following fio commands:
"big-write"
and "small-write"
3). Let them run once to prepare the necessary benchmark files. In the meantime observe the iostat on the pool:
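The queue movement mentioned in the note below can be watched with something like this; <pool> is a placeholder for your pool's name:

```sh
# Per-IO-class queue occupancy (syncq_write, asyncq_write, ...),
# refreshed every second:
zpool iostat -q <pool> 1
```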
Note that when fio issues 2G of async writes, it calls fsync at the very end, which moves them from the async to the sync class.
4). When the fios are finished, do the following: start "big-write" and then, after 2-3 seconds (when "Jobs: 1" appears), start "small-write". Note that the small 128K write will never finish before the 2G one; the second fio remains blocked until the first one finishes.
Include any warning/errors/backtraces from the system logs