Latency has now also regressed somehow. The benchmark is 50000 statfs runs, numactl-pinned to the right NUMA node and interrupt core (a minimal repro sketch follows the numbers below).
XLIO + libnfs -O2 used to be: 82usec
NFS: 48usec
Host<>DPU: 34usec
Now:
Without XLIO: 160usec
With XLIO + libnfs -O2: 137usec
So we lost 55usec somewhere!
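For reference, a minimal sketch of the kind of statfs latency loop being measured here (the actual benchmark tool isn't shown in this issue; the mount path is a placeholder, and the pinning is done externally via numactl/taskset as described above):

```c
/* Hypothetical repro of the statfs latency measurement: 50000 calls,
 * reporting the average latency in usec. Run it pinned, e.g. under
 * numactl/taskset on the right NUMA node and interrupt core.
 * The mount point below is an assumption. */
#include <stdio.h>
#include <sys/vfs.h>
#include <time.h>

int main(void)
{
    const char *path = "/mnt/nfs";   /* assumed mount point of the NFS-backed device */
    const int runs = 50000;
    struct statfs st;
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < runs; i++) {
        if (statfs(path, &st) != 0) {
            perror("statfs");
            return 1;
        }
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double usec = ((t1.tv_sec - t0.tv_sec) * 1e9 +
                   (t1.tv_nsec - t0.tv_nsec)) / 1e3;
    printf("avg statfs latency: %.1f usec\n", usec / runs);
    return 0;
}
```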
Further testing:
With XLIO + libnfs -O3: 126usec
With XLIO + libnfs -O3 + 2 Virtio queues: 112usec
With XLIO + libnfs -O3 + 2 Virtio queues + DPU pinning to core 6,7: 100usec
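For the DPU pinning, this is roughly what pinning a polling thread to cores 6 and 7 could look like in code (just a sketch; starting the whole service under taskset -c 6,7 achieves the same thing):

```c
/* Sketch: pin the calling thread to DPU cores 6 and 7 (the cores used in
 * the test above). Starting the process under taskset -c 6,7 is the
 * equivalent external approach. */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

int main(void)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(6, &set);
    CPU_SET(7, &set);

    int rc = pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
    if (rc != 0) {
        fprintf(stderr, "pthread_setaffinity_np failed: %d\n", rc);
        return 1;
    }
    /* ... start the Virtio queue / NFS polling loop here ... */
    return 0;
}
```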
--enable-latency-measuring reports that with the last configuration (XLIO + libnfs -O3 + 2 Virtio queues + DPU pinning to core 6,7):
NFS: 68usec
Host<>DPU: 32usec
TODO:
DPU core pinning = good gains
Removing -g3 (shouldn't matter) = no change
NFS remote side things (the latency measuring results indicate that NFS is the culprit)
It would be interesting to test whether doing the NFS polling on the Virtio queue polling thread would improve performance. This would require a libnfs patch that doesn't exist yet.
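As a rough illustration of that idea (not the missing patch itself): libnfs already exposes nfs_get_fd()/nfs_which_events()/nfs_service(), so a combined loop on the queue polling thread could look something like the sketch below, where poll_virtio_queues() is a placeholder for the existing queue poller:

```c
/* Rough sketch only: drive libnfs I/O from the same thread that polls the
 * Virtio queues, instead of servicing NFS on a separate thread.
 * poll_virtio_queues() is a hypothetical stand-in for the existing poller. */
#include <poll.h>
#include <nfsc/libnfs.h>

extern void poll_virtio_queues(void);   /* hypothetical existing queue poller */

void combined_poll_loop(struct nfs_context *nfs)
{
    for (;;) {
        poll_virtio_queues();            /* service the Virtio queues */

        struct pollfd pfd = {
            .fd = nfs_get_fd(nfs),       /* libnfs socket */
            .events = nfs_which_events(nfs),
        };
        /* Non-blocking check so Virtio polling never stalls. */
        if (poll(&pfd, 1, 0) > 0) {
            if (nfs_service(nfs, pfd.revents) < 0)
                break;                   /* NFS context error */
        }
    }
}
```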
Currently, with XLIO, sequential write performance with bs=4k, iodepth=16, numjobs=1 is ~244MB/s, while sequential read only gives ~20MB/s. Furthermore, block sizes larger than the page size (4k) don't do anything (i.e. no performance increase).
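To narrow down the block-size observation, something like the following sketch can compare sequential read throughput at different block sizes on the host side (O_DIRECT with a single outstanding read rather than iodepth=16; the device path and sizes are placeholders):

```c
/* Sketch: sequential O_DIRECT reads with a configurable block size, to
 * check whether bs > 4k changes throughput. Path/sizes are placeholders. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    const char *path = argc > 1 ? argv[1] : "/dev/vda";     /* assumed virtio device */
    size_t bs = argc > 2 ? strtoul(argv[2], NULL, 0) : 4096; /* block size to test */
    const size_t total = 256UL << 20;                        /* read 256 MiB */
    void *buf;

    if (posix_memalign(&buf, 4096, bs) != 0)
        return 1;

    int fd = open(path, O_RDONLY | O_DIRECT);
    if (fd < 0) { perror("open"); return 1; }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);

    size_t done = 0;
    while (done < total) {
        ssize_t n = read(fd, buf, bs);
        if (n <= 0) { perror("read"); break; }
        done += (size_t)n;
    }

    clock_gettime(CLOCK_MONOTONIC, &t1);
    double sec = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("bs=%zu: %.1f MB/s\n", bs, done / 1e6 / sec);

    close(fd);
    free(buf);
    return 0;
}
```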