zfs hang during rsync #4239

Closed
therealdano opened this issue Jan 18, 2016 · 4 comments

@therealdano

I am unable to do an rsync of a couple of hundred gigabytes of files to an attached USB drive without zfs hanging. I recently updated to zfs 0.6.5.4-1 on CentOS 7, but this has not fixed the problem. The first few lines of the kernel message are below:

Jan 15 17:03:22 jootnar kernel: INFO: task arc_reclaim:639 blocked for more than 120 seconds.
Jan 15 17:03:22 jootnar kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jan 15 17:03:22 jootnar kernel: arc_reclaim D 0000000000000000 0 639 2 0x00000000

There are several kernel messages indicating tasks that have been blocked for more than 120 seconds.

Jan 15 17:03:22 jootnar kernel: INFO: task arc_reclaim:639 blocked for more than 120 seconds.
Jan 15 17:03:22 jootnar kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jan 15 17:03:22 jootnar kernel: arc_reclaim     D 0000000000000000     0   639      2 0x00000000
Jan 15 17:03:22 jootnar kernel: ffff880402ec3328 0000000000000046 ffff880408855c00 ffff880402ec3fd8
Jan 15 17:03:22 jootnar kernel: ffff880402ec3fd8 ffff880402ec3fd8 ffff880408855c00 ffffffffa06926e0
Jan 15 17:03:22 jootnar kernel: ffffffffa06927a0 ffffffffa06926e8 ffffffffa06927c8 0000000000000000
Jan 15 17:03:22 jootnar kernel: Call Trace:
Jan 15 17:03:22 jootnar kernel: [<ffffffff8163a879>] schedule+0x29/0x70
Jan 15 17:03:22 jootnar kernel: [<ffffffffa0494775>] cv_wait_common+0x125/0x150 [spl]
Jan 15 17:03:22 jootnar kernel: [<ffffffff810a6ae0>] ? wake_up_atomic_t+0x30/0x30
Jan 15 17:03:22 jootnar kernel: [<ffffffffa04947b5>] __cv_wait+0x15/0x20 [spl]
Jan 15 17:03:22 jootnar kernel: [<ffffffffa051d9a7>] arc_get_data_buf.isra.21+0x3e7/0x3f0 [zfs]
Jan 15 17:03:22 jootnar kernel: [<ffffffffa05216f5>] arc_read+0x335/0xad0 [zfs]
Jan 15 17:03:22 jootnar kernel: [<ffffffffa05279d0>] ? dbuf_rele_and_unlock+0x400/0x400 [zfs]
Jan 15 17:03:22 jootnar kernel: [<ffffffffa052946d>] dbuf_read+0x2ad/0x930 [zfs]
Jan 15 17:03:22 jootnar kernel: [<ffffffffa0532f10>] dmu_buf_hold+0x50/0x80 [zfs]
Jan 15 17:03:22 jootnar kernel: [<ffffffffa059734b>] zap_get_leaf_byblk+0x8b/0x2d0 [zfs]
Jan 15 17:03:22 jootnar kernel: [<ffffffffa0527b3e>] ? dmu_buf_rele+0xe/0x10 [zfs]
Jan 15 17:03:22 jootnar kernel: [<ffffffffa0596de3>] ? zap_idx_to_blk+0x103/0x180 [zfs]
Jan 15 17:03:22 jootnar kernel: [<ffffffffa059760a>] zap_deref_leaf+0x7a/0xa0 [zfs]
Jan 15 17:03:22 jootnar kernel: [<ffffffffa059875f>] fzap_remove+0x3f/0xb0 [zfs]
Jan 15 17:03:22 jootnar kernel: [<ffffffffa059bf33>] ? zap_name_alloc+0x73/0xd0 [zfs]
Jan 15 17:03:22 jootnar kernel: [<ffffffffa059deeb>] zap_remove_norm+0x17b/0x1e0 [zfs]
Jan 15 17:03:22 jootnar kernel: [<ffffffffa059df63>] zap_remove+0x13/0x20 [zfs]
Jan 15 17:03:22 jootnar kernel: [<ffffffffa0598e94>] zap_remove_int+0x54/0x80 [zfs]
Jan 15 17:03:22 jootnar kernel: [<ffffffffa05a7b60>] zfs_rmnode+0x1e0/0x360 [zfs]
Jan 15 17:03:22 jootnar kernel: [<ffffffff81638ab2>] ? mutex_lock+0x12/0x2f
Jan 15 17:03:22 jootnar kernel: [<ffffffffa05c9278>] zfs_zinactive+0x168/0x180 [zfs]
Jan 15 17:03:22 jootnar kernel: [<ffffffffa05c2987>] zfs_inactive+0x67/0x240 [zfs]
Jan 15 17:03:22 jootnar kernel: [<ffffffff811794b9>] ? truncate_pagecache+0x59/0x60
Jan 15 17:03:22 jootnar kernel: [<ffffffffa05d9fe3>] zpl_evict_inode+0x43/0x60 [zfs]
Jan 15 17:03:22 jootnar kernel: [<ffffffff811fa017>] evict+0xa7/0x170
Jan 15 17:03:22 jootnar kernel: [<ffffffff811fa11e>] dispose_list+0x3e/0x50
Jan 15 17:03:22 jootnar kernel: [<ffffffff811faff3>] prune_icache_sb+0x163/0x320
Jan 15 17:03:22 jootnar kernel: [<ffffffff811e0c03>] prune_super+0x143/0x170
Jan 15 17:03:22 jootnar kernel: [<ffffffff8117c4b5>] shrink_slab+0x165/0x300
Jan 15 17:03:22 jootnar kernel: [<ffffffff811d5dd1>] ? vmpressure+0x21/0x90
Jan 15 17:03:22 jootnar kernel: [<ffffffff8117f632>] do_try_to_free_pages+0x3c2/0x4e0
Jan 15 17:03:22 jootnar kernel: [<ffffffff8117f84c>] try_to_free_pages+0xfc/0x180
Jan 15 17:03:22 jootnar kernel: [<ffffffff8117354d>] __alloc_pages_nodemask+0x7fd/0xb90
Jan 15 17:03:22 jootnar kernel: [<ffffffff811b43d9>] alloc_pages_current+0xa9/0x170
Jan 15 17:03:22 jootnar kernel: [<ffffffff811be925>] new_slab+0x275/0x300
Jan 15 17:03:22 jootnar kernel: [<ffffffff816320cc>] __slab_alloc+0x315/0x48f
Jan 15 17:03:22 jootnar kernel: [<ffffffff811c2665>] ? kmem_cache_shrink+0x35/0x250
Jan 15 17:03:22 jootnar kernel: [<ffffffff811c1ea8>] __kmalloc+0x1c8/0x230
Jan 15 17:03:22 jootnar kernel: [<ffffffff811c2665>] kmem_cache_shrink+0x35/0x250
Jan 15 17:03:22 jootnar kernel: [<ffffffffa048e719>] spl_kmem_cache_reap_now+0xf9/0x140 [spl]
Jan 15 17:03:22 jootnar kernel: [<ffffffffa051c4c5>] arc_kmem_reap_now+0x95/0xe0 [zfs]
Jan 15 17:03:22 jootnar kernel: [<ffffffffa0522304>] arc_reclaim_thread+0x1f4/0x230 [zfs]
Jan 15 17:03:22 jootnar kernel: [<ffffffffa0522110>] ? arc_shrink+0xc0/0xc0 [zfs]
Jan 15 17:03:22 jootnar kernel: [<ffffffffa048f881>] thread_generic_wrapper+0x71/0x80 [spl]
Jan 15 17:03:22 jootnar kernel: [<ffffffffa048f810>] ? __thread_exit+0x20/0x20 [spl]
Jan 15 17:03:22 jootnar kernel: [<ffffffff810a5aef>] kthread+0xcf/0xe0
Jan 15 17:03:22 jootnar kernel: [<ffffffff810a5a20>] ? kthread_create_on_node+0x140/0x140
Jan 15 17:03:22 jootnar kernel: [<ffffffff81645818>] ret_from_fork+0x58/0x90
Jan 15 17:03:22 jootnar kernel: [<ffffffff810a5a20>] ? kthread_create_on_node+0x140/0x140

Jan 15 17:03:22 jootnar kernel: INFO: task txg_quiesce:1868 blocked for more than 120 seconds.
Jan 15 17:03:22 jootnar kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jan 15 17:03:22 jootnar kernel: txg_quiesce     D 0000000000000000     0  1868      2 0x00000000
Jan 15 17:03:22 jootnar kernel: ffff8803f6623d38 0000000000000046 ffff8803f67e0b80 ffff8803f6623fd8
Jan 15 17:03:22 jootnar kernel: ffff8803f6623fd8 ffff8803f6623fd8 ffff8803f67e0b80 ffff880406cc8070
Jan 15 17:03:22 jootnar kernel: ffff880406cc8038 ffff880406cc8078 ffff880406cc8060 0000000000000000
Jan 15 17:03:22 jootnar kernel: Call Trace:
Jan 15 17:03:22 jootnar kernel: [<ffffffff8163a879>] schedule+0x29/0x70
Jan 15 17:03:22 jootnar kernel: [<ffffffffa0494775>] cv_wait_common+0x125/0x150 [spl]
Jan 15 17:03:22 jootnar kernel: [<ffffffff810a6ae0>] ? wake_up_atomic_t+0x30/0x30
Jan 15 17:03:22 jootnar kernel: [<ffffffffa04947b5>] __cv_wait+0x15/0x20 [spl]
Jan 15 17:03:22 jootnar kernel: [<ffffffffa058702b>] txg_quiesce_thread+0x2eb/0x400 [zfs]
Jan 15 17:03:22 jootnar kernel: [<ffffffffa0586d40>] ? txg_init+0x2b0/0x2b0 [zfs]
Jan 15 17:03:22 jootnar kernel: [<ffffffffa048f881>] thread_generic_wrapper+0x71/0x80 [spl]
Jan 15 17:03:22 jootnar kernel: [<ffffffffa048f810>] ? __thread_exit+0x20/0x20 [spl]
Jan 15 17:03:22 jootnar kernel: [<ffffffff810a5aef>] kthread+0xcf/0xe0
Jan 15 17:03:22 jootnar kernel: [<ffffffff810a5a20>] ? kthread_create_on_node+0x140/0x140
Jan 15 17:03:22 jootnar kernel: [<ffffffff81645818>] ret_from_fork+0x58/0x90
Jan 15 17:03:22 jootnar kernel: [<ffffffff810a5a20>] ? kthread_create_on_node+0x140/0x140

Jan 15 17:03:22 jootnar kernel: INFO: task nfsd:2982 blocked for more than 120 seconds.
Jan 15 17:03:22 jootnar kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jan 15 17:03:22 jootnar kernel: nfsd            D 0000000000000000     0  2982      2 0x00000080
Jan 15 17:03:22 jootnar kernel: ffff8800cfad35b0 0000000000000046 ffff8803ff499700 ffff8800cfad3fd8
Jan 15 17:03:22 jootnar kernel: ffff8800cfad3fd8 ffff8800cfad3fd8 ffff8803ff499700 ffffffffa06926e0
Jan 15 17:03:22 jootnar kernel: ffffffffa06927a0 ffffffffa06926e8 ffffffffa06927c8 0000000000000000
Jan 15 17:03:22 jootnar kernel: Call Trace:
Jan 15 17:03:22 jootnar kernel: [<ffffffff8163a879>] schedule+0x29/0x70
Jan 15 17:03:22 jootnar kernel: [<ffffffffa0494775>] cv_wait_common+0x125/0x150 [spl]
Jan 15 17:03:22 jootnar kernel: [<ffffffff810a6ae0>] ? wake_up_atomic_t+0x30/0x30
Jan 15 17:03:22 jootnar kernel: [<ffffffffa04947b5>] __cv_wait+0x15/0x20 [spl]
Jan 15 17:03:22 jootnar kernel: [<ffffffffa051d9a7>] arc_get_data_buf.isra.21+0x3e7/0x3f0 [zfs]
Jan 15 17:03:22 jootnar kernel: [<ffffffffa0520d1f>] arc_buf_alloc+0x10f/0x190 [zfs]
Jan 15 17:03:22 jootnar kernel: [<ffffffffa0521585>] arc_read+0x1c5/0xad0 [zfs]
Jan 15 17:03:22 jootnar kernel: [<ffffffffa052985f>] ? dbuf_read+0x69f/0x930 [zfs]
Jan 15 17:03:22 jootnar kernel: [<ffffffffa0529d30>] ? __dbuf_hold_impl+0x240/0x500 [zfs]
Jan 15 17:03:22 jootnar kernel: [<ffffffffa052a26e>] dbuf_prefetch+0x1de/0x2e0 [zfs]
Jan 15 17:03:22 jootnar kernel: [<ffffffffa0542034>] dmu_zfetch_dofetch.isra.6+0x114/0x180 [zfs]
Jan 15 17:03:22 jootnar kernel: [<ffffffffa054278f>] dmu_zfetch+0x4ef/0xe00 [zfs]
Jan 15 17:03:22 jootnar kernel: [<ffffffffa052997c>] dbuf_read+0x7bc/0x930 [zfs]
Jan 15 17:03:22 jootnar kernel: [<ffffffffa05441fa>] dnode_hold_impl+0xca/0x600 [zfs]
Jan 15 17:03:22 jootnar kernel: [<ffffffffa0544749>] dnode_hold+0x19/0x20 [zfs]
Jan 15 17:03:22 jootnar kernel: [<ffffffffa05319e6>] dmu_bonus_hold+0x36/0x2a0 [zfs]
Jan 15 17:03:22 jootnar kernel: [<ffffffffa056c9ee>] sa_buf_hold+0xe/0x10 [zfs]
Jan 15 17:03:22 jootnar kernel: [<ffffffffa05c8801>] zfs_zget+0x1a1/0x320 [zfs]
Jan 15 17:03:22 jootnar kernel: [<ffffffff8151a196>] ? skb_release_data+0xd6/0x110
Jan 15 17:03:22 jootnar kernel: [<ffffffff810d7c60>] ? getboottime+0x30/0x40
Jan 15 17:03:22 jootnar kernel: [<ffffffff81638ab2>] ? mutex_lock+0x12/0x2f
Jan 15 17:03:22 jootnar kernel: [<ffffffffa056b3c1>] ? rrw_enter_read_impl+0xc1/0x170 [zfs]
Jan 15 17:03:22 jootnar kernel: [<ffffffffa05b9bd0>] zfs_vget+0x240/0x3f0 [zfs]
Jan 15 17:03:22 jootnar kernel: [<ffffffffa05d7ee7>] zpl_fh_to_dentry+0x97/0xe0 [zfs]
Jan 15 17:03:22 jootnar kernel: [<ffffffffa042cf80>] ? nfsd_proc_getattr+0xa0/0xa0 [nfsd]
Jan 15 17:03:22 jootnar kernel: [<ffffffff81263a82>] exportfs_decode_fh+0x72/0x2e0
Jan 15 17:03:22 jootnar kernel: [<ffffffffa03e1b5f>] ? cache_check+0xef/0x380 [sunrpc]
Jan 15 17:03:22 jootnar kernel: [<ffffffffa0433610>] ? exp_find+0x100/0x1c0 [nfsd]
Jan 15 17:03:22 jootnar kernel: [<ffffffff811c3925>] ? __kmalloc_track_caller+0x55/0x230
Jan 15 17:03:22 jootnar kernel: [<ffffffff8128a4db>] ? selinux_cred_prepare+0x1b/0x30
Jan 15 17:03:22 jootnar kernel: [<ffffffff8128a4db>] ? selinux_cred_prepare+0x1b/0x30
Jan 15 17:03:22 jootnar kernel: [<ffffffffa042dcf8>] fh_verify+0x338/0x5f0 [nfsd]
Jan 15 17:03:22 jootnar kernel: [<ffffffffa043877c>] nfsd3_proc_getattr+0x7c/0x110 [nfsd]
Jan 15 17:03:22 jootnar kernel: [<ffffffffa042a12b>] nfsd_dispatch+0xbb/0x200 [nfsd]
Jan 15 17:03:22 jootnar kernel: [<ffffffffa03d7183>] svc_process_common+0x453/0x6f0 [sunrpc]
Jan 15 17:03:22 jootnar kernel: [<ffffffffa03d7523>] svc_process+0x103/0x170 [sunrpc]
Jan 15 17:03:22 jootnar kernel: [<ffffffffa0429ab7>] nfsd+0xe7/0x150 [nfsd]
Jan 15 17:03:22 jootnar kernel: [<ffffffffa04299d0>] ? nfsd_destroy+0x80/0x80 [nfsd]
Jan 15 17:03:22 jootnar kernel: [<ffffffff810a5aef>] kthread+0xcf/0xe0
Jan 15 17:03:22 jootnar kernel: [<ffffffff810a5a20>] ? kthread_create_on_node+0x140/0x140
Jan 15 17:03:22 jootnar kernel: [<ffffffff81645818>] ret_from_fork+0x58/0x90
Jan 15 17:03:22 jootnar kernel: [<ffffffff810a5a20>] ? kthread_create_on_node+0x140/0x140

Jan 15 17:03:22 jootnar kernel: INFO: task systemd-logind:8744 blocked for more than 120 seconds.
Jan 15 17:03:22 jootnar kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jan 15 17:03:22 jootnar kernel: systemd-logind  D ffffffff819bfe50     0  8744      1 0x00000084
Jan 15 17:03:22 jootnar kernel: ffff8803cec3fc88 0000000000000082 ffff880405674500 ffff8803cec3ffd8
Jan 15 17:03:22 jootnar kernel: ffff8803cec3ffd8 ffff8803cec3ffd8 ffff880405674500 ffff880405674500
Jan 15 17:03:22 jootnar kernel: ffffffff819bfe40 ffffffff819bfe48 ffffffff00000000 ffffffff819bfe50
Jan 15 17:03:22 jootnar kernel: Call Trace:
Jan 15 17:03:22 jootnar kernel: [<ffffffff8163a879>] schedule+0x29/0x70
Jan 15 17:03:22 jootnar kernel: [<ffffffff8163c035>] rwsem_down_write_failed+0x115/0x220
Jan 15 17:03:22 jootnar kernel: [<ffffffff81301793>] call_rwsem_down_write_failed+0x13/0x20
Jan 15 17:03:22 jootnar kernel: [<ffffffff812f4680>] ? ida_get_new_above+0x230/0x2a0
Jan 15 17:03:22 jootnar kernel: [<ffffffff81639acd>] ? down_write+0x2d/0x30
Jan 15 17:03:22 jootnar kernel: [<ffffffff8117a971>] register_shrinker+0x21/0x50
Jan 15 17:03:22 jootnar kernel: [<ffffffff811e1b6a>] sget+0x38a/0x3d0
Jan 15 17:03:22 jootnar kernel: [<ffffffff811e0d50>] ? get_anon_bdev+0x120/0x120
Jan 15 17:03:22 jootnar kernel: [<ffffffff811821d0>] ? shmem_rename+0x20/0x20
Jan 15 17:03:22 jootnar kernel: [<ffffffff811e1ca0>] mount_nodev+0x30/0xb0
Jan 15 17:03:22 jootnar kernel: [<ffffffff81181e58>] shmem_mount+0x18/0x20
Jan 15 17:03:22 jootnar kernel: [<ffffffff811e2669>] mount_fs+0x39/0x1b0
Jan 15 17:03:22 jootnar kernel: [<ffffffff811fdf0f>] vfs_kern_mount+0x5f/0xf0
Jan 15 17:03:22 jootnar kernel: [<ffffffff8120045e>] do_mount+0x24e/0xa40
Jan 15 17:03:22 jootnar kernel: [<ffffffff8116deee>] ? __get_free_pages+0xe/0x50
Jan 15 17:03:22 jootnar kernel: [<ffffffff81200ce6>] SyS_mount+0x96/0xf0
Jan 15 17:03:22 jootnar kernel: [<ffffffff816458c9>] system_call_fastpath+0x16/0x1b

Jan 15 17:03:22 jootnar kernel: INFO: task rsync:8990 blocked for more than 120 seconds.
Jan 15 17:03:22 jootnar kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jan 15 17:03:22 jootnar kernel: rsync           D 0000000000000000     0  8990   8968 0x00000080
Jan 15 17:03:22 jootnar kernel: ffff8802b249f2f8 0000000000000082 ffff8803f2211700 ffff8802b249ffd8
Jan 15 17:03:22 jootnar kernel: ffff8802b249ffd8 ffff8802b249ffd8 ffff8803f2211700 ffffffffa06926e0
Jan 15 17:03:22 jootnar kernel: ffffffffa06927a0 ffffffffa06926e8 ffffffffa06927c8 0000000000000000
Jan 15 17:03:22 jootnar kernel: Call Trace:
Jan 15 17:03:22 jootnar kernel: [<ffffffff8163a879>] schedule+0x29/0x70
Jan 15 17:03:22 jootnar kernel: [<ffffffffa0494775>] cv_wait_common+0x125/0x150 [spl]
Jan 15 17:03:22 jootnar kernel: [<ffffffff810a6ae0>] ? wake_up_atomic_t+0x30/0x30
Jan 15 17:03:22 jootnar kernel: [<ffffffffa04947b5>] __cv_wait+0x15/0x20 [spl]
Jan 15 17:03:22 jootnar kernel: [<ffffffffa051d9a7>] arc_get_data_buf.isra.21+0x3e7/0x3f0 [zfs]
Jan 15 17:03:22 jootnar kernel: [<ffffffffa05216f5>] arc_read+0x335/0xad0 [zfs]
Jan 15 17:03:22 jootnar kernel: [<ffffffffa05279d0>] ? dbuf_rele_and_unlock+0x400/0x400 [zfs]
Jan 15 17:03:22 jootnar kernel: [<ffffffffa052946d>] dbuf_read+0x2ad/0x930 [zfs]
Jan 15 17:03:22 jootnar kernel: [<ffffffffa0532f10>] dmu_buf_hold+0x50/0x80 [zfs]
Jan 15 17:03:22 jootnar kernel: [<ffffffffa059734b>] zap_get_leaf_byblk+0x8b/0x2d0 [zfs]
Jan 15 17:03:22 jootnar kernel: [<ffffffffa0527b3e>] ? dmu_buf_rele+0xe/0x10 [zfs]
Jan 15 17:03:22 jootnar kernel: [<ffffffffa0596de3>] ? zap_idx_to_blk+0x103/0x180 [zfs]
Jan 15 17:03:22 jootnar kernel: [<ffffffffa059760a>] zap_deref_leaf+0x7a/0xa0 [zfs]
Jan 15 17:03:22 jootnar kernel: [<ffffffffa059875f>] fzap_remove+0x3f/0xb0 [zfs]
Jan 15 17:03:22 jootnar kernel: [<ffffffffa059bf33>] ? zap_name_alloc+0x73/0xd0 [zfs]
Jan 15 17:03:22 jootnar kernel: [<ffffffffa059deeb>] zap_remove_norm+0x17b/0x1e0 [zfs]
Jan 15 17:03:22 jootnar kernel: [<ffffffffa059df63>] zap_remove+0x13/0x20 [zfs]
Jan 15 17:03:22 jootnar kernel: [<ffffffffa0598e94>] zap_remove_int+0x54/0x80 [zfs]
Jan 15 17:03:22 jootnar kernel: [<ffffffffa05a7b60>] zfs_rmnode+0x1e0/0x360 [zfs]
Jan 15 17:03:22 jootnar kernel: [<ffffffff81638ab2>] ? mutex_lock+0x12/0x2f
Jan 15 17:03:22 jootnar kernel: [<ffffffffa05c9278>] zfs_zinactive+0x168/0x180 [zfs]
Jan 15 17:03:22 jootnar kernel: [<ffffffffa05c2987>] zfs_inactive+0x67/0x240 [zfs]
Jan 15 17:03:22 jootnar kernel: [<ffffffff811794b9>] ? truncate_pagecache+0x59/0x60
Jan 15 17:03:22 jootnar kernel: [<ffffffffa05d9fe3>] zpl_evict_inode+0x43/0x60 [zfs]
Jan 15 17:03:22 jootnar kernel: [<ffffffff811fa017>] evict+0xa7/0x170
Jan 15 17:03:22 jootnar kernel: [<ffffffff811fa11e>] dispose_list+0x3e/0x50
Jan 15 17:03:22 jootnar kernel: [<ffffffff811faff3>] prune_icache_sb+0x163/0x320
Jan 15 17:03:22 jootnar kernel: [<ffffffff811e0c03>] prune_super+0x143/0x170
Jan 15 17:03:22 jootnar kernel: [<ffffffff8117c4b5>] shrink_slab+0x165/0x300
Jan 15 17:03:22 jootnar kernel: [<ffffffff811d5dd1>] ? vmpressure+0x21/0x90
Jan 15 17:03:22 jootnar kernel: [<ffffffff8117f632>] do_try_to_free_pages+0x3c2/0x4e0
Jan 15 17:03:22 jootnar kernel: [<ffffffff8117f84c>] try_to_free_pages+0xfc/0x180
Jan 15 17:03:22 jootnar kernel: [<ffffffff8117354d>] __alloc_pages_nodemask+0x7fd/0xb90
Jan 15 17:03:22 jootnar kernel: [<ffffffff81519681>] ? __kmalloc_reserve.isra.30+0x31/0x90
Jan 15 17:03:22 jootnar kernel: [<ffffffff811b43d9>] alloc_pages_current+0xa9/0x170
Jan 15 17:03:22 jootnar kernel: [<ffffffff8151464f>] sock_alloc_send_pskb+0x32f/0x410
Jan 15 17:03:22 jootnar kernel: [<ffffffff810b0ddf>] ? __wake_up_sync_key+0x4f/0x60
Jan 15 17:03:22 jootnar kernel: [<ffffffff815d186e>] unix_stream_sendmsg+0x29e/0x430
Jan 15 17:03:22 jootnar kernel: [<ffffffff8150fdf7>] sock_aio_write+0x157/0x180
Jan 15 17:03:22 jootnar kernel: [<ffffffff811f28f1>] ? set_fd_set+0x21/0x30
Jan 15 17:03:22 jootnar kernel: [<ffffffff811ddd2d>] do_sync_write+0x8d/0xd0
Jan 15 17:03:22 jootnar kernel: [<ffffffff811de645>] vfs_write+0x1b5/0x1e0
Jan 15 17:03:22 jootnar kernel: [<ffffffff811defef>] SyS_write+0x7f/0xe0
Jan 15 17:03:22 jootnar kernel: [<ffffffff816458c9>] system_call_fastpath+0x16/0x1b
@therealdano
Author

Here are my settings for ZFS:

Running CentOS 7 release 7.2.1511
Kernel: 3.10.0-327.4.4.el7.x86_64

$ sudo modinfo zfs
filename:       /lib/modules/3.10.0-327.4.4.el7.x86_64/extra/zfs.ko
version:        0.6.5.4-1
license:        CDDL
author:         OpenZFS on Linux
description:    ZFS
rhelversion:    7.2
srcversion:     5FC0B558D497732F17F4202
depends:        spl,znvpair,zcommon,zunicode,zavl
vermagic:       3.10.0-327.4.4.el7.x86_64 SMP mod_unload modversions

$ sudo zfs get all
NAME  PROPERTY              VALUE                  SOURCE
data  type                  filesystem             -
data  creation              Fri Sep 11 17:09 2015  -
data  used                  6.49T                  -
data  available             3.72T                  -
data  referenced            6.49T                  -
data  compressratio         1.04x                  -
data  mounted               yes                    -
data  quota                 none                   default
data  reservation           none                   default
data  recordsize            128K                   default
data  mountpoint            /storage               local
data  sharenfs              [email protected]/8         local
data  checksum              on                     default
data  compression           lz4                    local
data  atime                 off                    local
data  devices               on                     default
data  exec                  on                     default
data  setuid                on                     default
data  readonly              off                    default
data  zoned                 off                    default
data  snapdir               hidden                 default
data  aclinherit            passthrough            local
data  canmount              on                     default
data  xattr                 on                     default
data  copies                1                      default
data  version               5                      -
data  utf8only              off                    -
data  normalization         none                   -
data  casesensitivity       sensitive              -
data  vscan                 off                    default
data  nbmand                off                    default
data  sharesmb              off                    default
data  refquota              none                   default
data  refreservation        none                   default
data  primarycache          all                    default
data  secondarycache        all                    default
data  usedbysnapshots       0                      -
data  usedbydataset         6.49T                  -
data  usedbychildren        382M                   -
data  usedbyrefreservation  0                      -
data  logbias               latency                default
data  dedup                 off                    default
data  mlslabel              none                   default
data  sync                  standard               default
data  refcompressratio      1.04x                  -
data  written               6.49T                  -
data  logicalused           6.61T                  -
data  logicalreferenced     6.61T                  -
data  filesystem_limit      none                   default
data  snapshot_limit        none                   default
data  filesystem_count      none                   default
data  snapshot_count        none                   default
data  snapdev               hidden                 default
data  acltype               posixacl               local
data  context               none                   default
data  fscontext             none                   default
data  defcontext            none                   default
data  rootcontext           none                   default
data  relatime              off                    default
data  redundant_metadata    all                    default
data  overlay               off                    default

$ sudo zpool get all
NAME  PROPERTY                    VALUE                       SOURCE
data  size                        14.5T                       -
data  capacity                    61%                         -
data  altroot                     -                           default
data  health                      ONLINE                      -
data  guid                        17330425584660733521        default
data  version                     -                           default
data  bootfs                      -                           default
data  delegation                  on                          default
data  autoreplace                 off                         default
data  cachefile                   -                           default
data  failmode                    wait                        default
data  listsnapshots               off                         default
data  autoexpand                  off                         default
data  dedupditto                  0                           default
data  dedupratio                  1.00x                       -
data  free                        5.57T                       -
data  allocated                   8.93T                       -
data  readonly                    off                         -
data  ashift                      12                          local
data  comment                     -                           default
data  expandsize                  -                           -
data  freeing                     0                           default
data  fragmentation               26%                         -
data  leaked                      0                           default
data  feature@async_destroy       enabled                     local
data  feature@empty_bpobj         enabled                     local
data  feature@lz4_compress        active                      local
data  feature@spacemap_histogram  active                      local
data  feature@enabled_txg         active                      local
data  feature@hole_birth          active                      local
data  feature@extensible_dataset  enabled                     local
data  feature@embedded_data       active                      local
data  feature@bookmarks           enabled                     local
data  feature@filesystem_limits   disabled                    local
data  feature@large_blocks        disabled                    local

@tuxoko
Contributor

tuxoko commented Jan 18, 2016

Normally arc_reclaim should not recurse into reclaim, because we set PF_FSTRANS in arc_reclaim_thread. But your kernel version happens to do a kmalloc in kmem_cache_shrink.

You need to apply openzfs/spl@3e7e6f3, which unfortunately didn't make it into 0.6.5.4.
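
For readers following along, the PF_FSTRANS pattern tuxoko mentions looks roughly like the sketch below. This is illustrative only, not the contents of the linked commit; it assumes the SPL helpers spl_fstrans_mark()/spl_fstrans_unmark() from sys/kmem.h, and reclaim_like_thread is a made-up name.

#include <sys/kmem.h>

static void
reclaim_like_thread(void)
{
        /* Mark the thread: allocations routed through SPL's wrappers
         * now mask __GFP_FS, so memory pressure in this window should
         * not recurse back into the filesystem. */
        fstrans_cookie_t cookie = spl_fstrans_mark();

        /* ... reclaim work that may allocate memory ... */

        /* Caveat (tuxoko's point): a raw kernel kmalloc(GFP_KERNEL),
         * as done by kmem_cache_shrink() on this kernel, does not
         * consult PF_FSTRANS, so it can still fall into FS reclaim
         * and deadlock against the blocked ZFS threads. */

        spl_fstrans_unmark(cookie);
}

The traces above show exactly that recursion: the allocation drops into prune_super -> zpl_evict_inode and back into ZFS while arc_reclaim itself is blocked.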

@therealdano
Author

Thanks tuxoko, I'll give that a try today.

@therealdano
Author

I've applied the patch you recommended and re-run the rsync command. It completed successfully without any "blocked for more than 120 seconds" messages.
