'zfs list' takes an unreasonably long time #4773
Comments
'perf report' output, if anyone's too lazy to process the perf.data file
Jun 18 04:11:40 homerouter kernel: [ 1708.095018] sysrq: SysRq : Show State
no
so... is it normal to take so long?
i'm getting similar times on freebsd, with lots of ioctls to "5a14" while zfs is spinning at 100% cpu:
19199 zfs CALL ioctl(0x3,0xc0185a14,0x7fffffff3bb0)
on linux, strace says:
ioctl(3, 0x5a14, 0x7ffd116bb330) = 0
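For anyone reproducing this measurement, a quick way to count the ioctl round trips is a strace summary (a sketch, assuming strace is available; ZFS control ioctls use the 'Z' (0x5a) magic byte, hence the 0x5aNN numbers above):

# Summarize syscall counts for 'zfs list'; on an affected system the
# ioctl row dominates, with roughly one control ioctl per dataset
strace -c -f zfs list > /dev/null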
update to kernel 4.6, first line of
As suggested by @mailinglists35 it appears
Metadata-intensive workloads can cause the ARC to become permanently filled with dnode_t objects as they're pinned by the VFS layer. Subsequent data-intensive workloads may only benefit from about 25% of the potential ARC (arc_c_max - arc_meta_limit).

In order to help track metadata usage more precisely, the other_size metadata arcstat has been replaced with dbuf_size, dnode_size and bonus_size.

The new zfs_arc_dnode_limit tunable, which defaults to 10% of zfs_arc_meta_limit, defines the minimum number of bytes which is desirable to be consumed by dnodes. Attempts to evict non-metadata will trigger async prune tasks if the space used by dnodes exceeds this limit.

The new zfs_arc_dnode_reduce_percent tunable specifies the amount by which the excess dnode space is attempted to be pruned as a percentage of the amount by which zfs_arc_dnode_limit is being exceeded. By default, it tries to unpin 10% of the dnodes.

The problem of dnode metadata pinning was observed with the following testing procedure (in this example, zfs_arc_max is set to 4GiB):

- Create a large number of small files until arc_meta_used exceeds arc_meta_limit (3GiB with default tuning) and arc_prune starts increasing.
- Create a 3GiB file with dd. Observe arc_meta_used. It will still be around 3GiB.
- Repeatedly read the 3GiB file and observe arc_meta_used as before. It will continue to stay around 3GiB.

With this modification, space for the 3GiB file is gradually made available as subsequent demands on the ARC are made. The previous behavior can be restored by setting zfs_arc_dnode_limit to the same value as zfs_arc_meta_limit.

Signed-off-by: Tim Chase <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Issue openzfs#4345
Issue openzfs#4512
Issue openzfs#4773
Closes openzfs#4858
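A sketch of how the new tunables can be inspected and adjusted on a build carrying this change (the module parameter paths assume ZoL 0.7 or later, where the commit landed):

# Show the new dnode/dbuf/bonus accounting in the ARC stats
grep -E 'dnode|dbuf_size|bonus_size' /proc/spl/kstat/zfs/arcstats

# Current dnode limit in bytes
cat /sys/module/zfs/parameters/zfs_arc_dnode_limit

# Restore the pre-patch behavior by pinning the dnode limit to
# zfs_arc_meta_limit (3 GiB here is only an illustrative value)
echo 3221225472 > /sys/module/zfs/parameters/zfs_arc_dnode_limit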
Maybe this is fixed in 2e5dc44
Performance should be considerably improved in master due to the referenced commit.
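To tell whether an installed build already contains the fix, checking the loaded module version is the simplest test (a sketch; assumes the zfs kernel module is loaded):

# Version string of the running zfs module
cat /sys/module/zfs/version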
Original report, against ZoL 0.6.5.7:
mount -t tmpfs -o size=1G tmpfs /mnt/test/
truncate -s 1GiB /mnt/test/deleteme
zpool create -O mountpoint=none deleteme /mnt/test/deleteme
for i in $(seq 1 10000); do echo zfs create deleteme/$i; done > /tmp/a; time bash /tmp/a
real 1m41.512s
user 0m10.684s
sys 0m31.452s
time perf record -a zfs list > /dev/null 2>&1
real 0m17.519s
user 0m0.736s
sys 0m16.716s
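To turn the captured perf.data into the 'perf report' text shared at the top of the thread, the standard perf workflow applies (a sketch; run from the directory containing perf.data, with a perf build matching the running kernel):

# Text-mode report of where 'zfs list' spent its time
perf report --stdio -i perf.data | head -n 40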
stack.txt: dump taken during 'zfs list'
perf.data.zip: from perf record
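For reference, the stack dump corresponds to the SysRq 'Show State' line in the kernel log quoted above; something like the following captures it (a sketch, assuming sysrq is enabled):

# Ask the kernel to log the state and stack of every task, then save
# the log while 'zfs list' is still running
echo t > /proc/sysrq-trigger
dmesg > stack.txt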
free -m