SLOG tuning for sync writes? #4859

Closed
dswartz opened this issue Jul 18, 2016 · 3 comments
Labels: Status: Inactive (Not being actively updated)

Comments

@dswartz (Contributor) commented Jul 18, 2016

So I have a 4x2 RAID10 pool on NL-SAS drives, serving a datastore to vSphere via NFS over a 10GbE link. If I disable sync on the datastore, CrystalDiskMark shows a write speed to the pool of about 2/3 the read speed, with writes maxing out at about 400MB/sec (for 4 NL drives, that seems close to max).

I have a STEC Zeus SAS SSD which I erased and threw in as the SLOG. Sequential writes went down to about 50MB/sec. To eliminate the SSD as the issue, I created an 8GB ramdisk using the brd driver, used it in place of the STEC as the SLOG, and re-ran the test. Write speed maxes out at about 200MB/sec (about half that of async mode). Since the SLOG is a ramdisk, it's hard to imagine there is a physical limitation, so I am suspecting some kind of ZIL tuning issue.

I want to run this under a Pacemaker cluster to give me HA storage, so obviously sync disabled or a ramdisk SLOG ain't gonna cut it :) Any hints/tips welcome...
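
For reference, a minimal sketch of the setup described above (pool and dataset names are placeholders, and the exact commands used may have differed slightly):

```sh
# Create a single 8 GiB ramdisk with the brd driver (rd_size is in KiB)
modprobe brd rd_nr=1 rd_size=8388608

# Attach it as the SLOG for the pool (here called "tank")
zpool add tank log /dev/ram0

# The two sync settings being compared on the NFS-exported dataset
zfs set sync=disabled tank/vmware   # "async" baseline, ZIL effectively bypassed
zfs set sync=standard tank/vmware   # honor NFS sync writes, committed via the SLOG
```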

@DeHackEd (Contributor)

This is an issue tracker, not a support forum. Please use the mailing list.

@dswartz (Contributor, Author) commented Jul 18, 2016

We have had more than one issue here vis-à-vis sync writes (NFS or otherwise), and my suspicion is that this may be in the same category. I repeated the experiment on a local dataset with sync=always, and it looks like the ZIL is feeding the pool disks small enough chunks of data that the IOPS of the spindles becomes the limiting factor (which would explain why a ramdisk SLOG didn't help). I wrote 16GB of data to the dataset in question, for an aggregate write throughput of 142MB/sec across the 4 mirrored vdevs, or about 35MB/sec per mirror. If IOPS is limiting writes to the pool due to some ZIL tuning default (or whatever), wouldn't it be helpful to have a better default? Or at least have this documented somewhere?
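
The local test was along these lines (a sketch only; the dataset name and file path are illustrative, and the actual write workload may have been different):

```sh
# Force every write through the ZIL on a local dataset (name is an example)
zfs set sync=always tank/synctest
# Assume compression off so the full 16GB actually has to hit the disks
zfs set compression=off tank/synctest

# Write 16 GiB and time it; 16384 MiB / elapsed seconds gives the
# aggregate throughput (~142 MB/sec in the run described above)
time dd if=/dev/zero of=/tank/synctest/16g.bin bs=1M count=16384
```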

@dswartz (Contributor, Author) commented Jul 18, 2016

Digging through the issues, this looks like it might be #1012.
