So I have a 4x2 RAID10 pool (four 2-way mirror vdevs) of NL-SAS drives, serving a datastore to vSphere via NFS over a 10GbE link. If I disable sync on the datastore, CrystalDiskMark shows a write speed to the pool of about 2/3 the read speed, with writes maxing out at about 400MB/sec (for 4 NL spindles' worth of mirrors, that seems close to max). I have a STEC Zeus SAS SSD, which I erased and added as a SLOG; sequential writes then dropped to about 50MB/sec. To eliminate the SSD as the issue, I created an 8GB ramdisk using the brd driver and swapped it in for the STEC as the SLOG. Re-ran the test: write speed maxes out at about 200MB/sec (about half that of async mode). Since the SLOG is a ramdisk, it's hard to imagine there's a physical limitation, so I suspect some kind of ZIL tuning issue. I want to run this under a Pacemaker cluster to give me HA storage, so obviously sync=disabled or a ramdisk SLOG isn't going to cut it :) Any hints/tips welcome...
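In case it helps anyone reproduce, here's roughly what the SLOG swap looked like (a sketch; the pool name `tank` and the STEC device path are placeholders, and the brd module parameters are my recollection, not copied from the box):

```sh
# Create an 8 GiB ramdisk with the brd driver (rd_size is in KiB)
modprobe brd rd_nr=1 rd_size=8388608

# Swap the STEC out for the ramdisk as the log device
zpool remove tank /dev/disk/by-id/scsi-STEC_ZEUS   # hypothetical device path
zpool add tank log /dev/ram0
```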
We have had more than one issue here vis-à-vis sync writes (NFS or otherwise), and my suspicion is that this one falls in the same category. I repeated the experiment on a local dataset with sync=always, and it looks like the ZIL is feeding the pool disks small enough chunks of data that spindle IOPS becomes the limiting factor (which would explain why a ramdisk SLOG didn't help). I wrote 16GB of data to the dataset in question, for an aggregate write throughput of 142MB/sec across 4 mirrored vdevs; that's only about 35MB/sec per mirror vdev. If IOPS is limiting writes to the pool due to some ZIL tuning default (or whatever), wouldn't a better default be helpful? Or at least, could this be documented somewhere?
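For reference, a minimal sketch of the local repro (the pool/dataset names are hypothetical, and which knobs actually matter here is my guess, not something I've confirmed):

```sh
# Force every write through the ZIL on a scratch dataset
zfs create -o sync=always tank/synctest

# Write 16 GiB; watch per-vdev throughput/IOPS in another shell
dd if=/dev/zero of=/tank/synctest/bigfile bs=1M count=16384 conv=fsync
zpool iostat -v tank 1

# Things I'd check: logbias=throughput silently bypasses the SLOG,
# and zfs_immediate_write_sz affects how ZIL writes are routed
zfs get logbias tank/synctest
cat /sys/module/zfs/parameters/zfs_immediate_write_sz
```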