INFO: task zfs:11829 blocked for more than 120 seconds. #1301

Closed
byteharmony opened this issue Feb 17, 2013 · 14 comments

@byteharmony

This is happening only on heavily loaded servers with slow system drives (USB sticks running the base Linux system): CentOS 6.3, ext4 with RAID 1 for the system drives. It seems to happen more when the RAID resync is allowed to go faster.

sysctl.conf:
dev.raid.speed_limit_max = 5000

This helped, but it's still happening, so I'm moving to 2000 (which will limit the resync speed to 2 MB/s, pretty slow; this is USB 2.0, so USB 3.0 may help). I think it'd be nice to increase the timeouts. This is usually only an issue on systems with LOTS of snapshots and lots of programs listing snapshots to send the right data back and forth.
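
For reference, a minimal sketch of the two knobs being discussed; the values are illustrative, not recommendations:

# Cap md resync throughput at runtime (the value is in KB/s, so 2000 is about 2 MB/s):
sysctl -w dev.raid.speed_limit_max=2000

# Raise the hung-task warning threshold from its 120-second default
# (setting it to 0 disables the warning entirely):
sysctl -w kernel.hung_task_timeout_secs=600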

This machine is on rc13; I haven't started work on rc14 yet.
BK

INFO: task zfs:11829 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
zfs D 0000000000000002 0 11829 11827 0x00000080
ffff88068f84b9e8 0000000000000082 ffff88068f84baa8 ffff880841e2f538
0000000000000000 ffff880841e2f500 ffff88086ce60aa0 0000000000000000
ffff880841e2fab8 ffff88068f84bfd8 000000000000fb88 ffff880841e2fab8
Call Trace:
[] rwsem_down_failed_common+0x95/0x1d0
[] rwsem_down_write_failed+0x23/0x30
[] call_rwsem_down_write_failed+0x13/0x20
[] ? down_write+0x32/0x40
[] ? autoremove_wake_function+0x0/0x40
[] dsl_dataset_clone_swap+0x1d9/0x460 [zfs]
[] dmu_recv_end+0xaa/0x220 [zfs]
[] ? dmu_objset_rele+0x11/0x20 [zfs]
[] ? get_zfs_sb+0x61/0xd0 [zfs]
[] zfs_ioc_recv+0x8af/0xf50 [zfs]
[] ? kmem_free_debug+0x4b/0x150 [spl]
[] ? dbuf_rele_and_unlock+0x159/0x200 [zfs]
[] ? kmem_free_debug+0x4b/0x150 [spl]
[] ? spa_name_compare+0xe/0x30 [zfs]
[] ? spa_lookup+0x62/0xc0 [zfs]
[] ? spa_open_common+0x23c/0x370 [zfs]
[] zfsdev_ioctl+0xfd/0x1d0 [zfs]
[] vfs_ioctl+0x22/0xa0
[] do_vfs_ioctl+0x84/0x580
[] ? security_file_permission+0x16/0x20
[] ? kvm_on_user_return+0x73/0x80 [kvm]
[] sys_ioctl+0x81/0xa0
[] system_call_fastpath+0x16/0x1b

@behlendorf
Contributor

This looks like contention on the clone->ds_rwlock rwlock. The warning is just advisory here and can be safely ignored, but it does suggest that the locking is too coarse and should be improved.

@byteharmony
Author

Thank you for looking at this. I'm afraid this warning leads to an issue I have not yet pinned down, and that issue is not simply a warning. When these warnings appear on a system, they seem to indicate too much requested ZFS activity: lots of zfs list commands running while snapshots are being taken and destroyed, plus the root drive updating device nodes, cause so much activity that the system stops responding to zfs commands altogether.

Our root file systems live on top of an MD array and ext4. The kernel does not crash in such a way that I can't access the system; indeed, I can still ssh into a failed system, and in some cases the virtual machines running on top of the zvols are still operating. HOWEVER, no zfs command ever returns to a command prompt, system load will hover at 70 or so, and eventually the load grows to the point of system lockup (crash). A reboot resolves this unless the number of snapshots is VERY large (500 - x000), in which case many times not all the devices finish being processed on boot, and we use a process I adapted from wonderful information posted on this forum:

On systems with less than 1000 snapshots as root run:

# udevadm trigger -v --subsystem-match=block --sysname-match=zd* --action=change

Then wait a few minutes with top open; you should see the machine get pounded like crazy and new devices show up while the system is processing the volumes.


On systems with more than 1000 snapshots the above approach will fail: udev will time out before processing all the partitions, and the load will grow to dangerous levels (yes, servers have been crashed by this, with load levels of 2500 or higher). Splitting the devices to be processed into 10 chunks as shown below has done the trick. This could be adapted to more chunks relatively easily, and perhaps should be considered as a boot process for background initialization of snapshots. I would much prefer that, if the devices could be distinguished as snapshots vs. volumes, the volumes get processed on boot and the snapshots get processed by this kind of slower process while load is monitored (a while loop that sleeps for 5 seconds at a time if load grows above 5); a sketch of that loop follows the commands below. I'd love to write this code, and perhaps within a few months, as I hope more time becomes available, I may do so and contribute it back to the community :).

# udevadm trigger -v --subsystem-match=block --sysname-match=zd0* --action=change 

# udevadm trigger -v --subsystem-match=block --sysname-match=zd1* --action=change 

# udevadm trigger -v --subsystem-match=block --sysname-match=zd2* --action=change 

# udevadm trigger -v --subsystem-match=block --sysname-match=zd3* --action=change 

# udevadm trigger -v --subsystem-match=block --sysname-match=zd4* --action=change 

# udevadm trigger -v --subsystem-match=block --sysname-match=zd5* --action=change


....


# udevadm trigger -v --subsystem-match=block --sysname-match=zd9* --action=change
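
A rough sketch of the load-monitored loop described above (hypothetical; the ten-chunk scheme and the 5-second / load-5 thresholds are the ones mentioned earlier):

#!/bin/bash
# Hypothetical sketch: retrigger zvol device nodes in ten chunks,
# pausing between chunks while the 1-minute load average is 5 or higher.
for i in 0 1 2 3 4 5 6 7 8 9; do
    udevadm trigger -v --subsystem-match=block \
        --sysname-match="zd${i}*" --action=change
    # Wait for the load to settle before starting the next chunk.
    while [ "$(awk '{print int($1)}' /proc/loadavg)" -ge 5 ]; do
        sleep 5
    done
done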

@byteharmony
Author

Sorry for whatever caused that goofy font?? :(

@dajhorn
Contributor

dajhorn commented Feb 21, 2013

@byteharmony: Probably the '#' symbol in the transcript.

On GitHub, the trick is to surround cut-and-pasted material that would usually go in <pre> or [code] tags with three back-ticks instead. (```)

If you wrote that in the GitHub web editor, then you can click the Edit button to change it.

@byteharmony
Author

@dajhorn Thanks for the help; you're right about the # symbols, I used them in the post to designate a command prompt. Now I have ``` listed with the same goofy print :(. Did I screw it up?

BK

@dajhorn
Contributor

dajhorn commented Feb 21, 2013

Not quite. You need to add newlines like this:

```
stuff
```

@byteharmony
Author

@dajhorn The devil is always in the details ;). Thanks for your help, looking forward to much prettier comments :).

BK

@edillmann
Contributor

If you don't use snapshot devices, you could try this patch:

edillmann@e1a0f01

@olw2005

olw2005 commented Jul 14, 2014

Bump. This is still an issue with the latest 0.6.3. I had managed to avoid it by offsetting the start times of cron jobs utilizing "zfs list". Until today, when I had a mental lapse and scheduled two jobs simultaneously: two "zfs list -H -t snap -o name" processes ran at the same time, neither finished, other zfs commands locked up, and the zvols went offline. Log file snippet attached:

Jul 14 13:31:37 dtc-san2 kernel: drbd detroitzvol: meta connection shut down by peer.
Jul 14 13:31:37 dtc-san2 kernel: drbd detroitzvol: peer( Primary -> Unknown ) conn( Connected -> NetworkFailure ) pdsk( UpToDate -> DUnknown )
Jul 14 13:31:37 dtc-san2 kernel: drbd detroitzvol: asender terminated
Jul 14 13:31:37 dtc-san2 kernel: drbd detroitzvol: Terminating drbd_a_detroitz
Jul 14 13:33:01 dtc-san2 kernel: drbd archivezvol: meta connection shut down by peer.
Jul 14 13:33:01 dtc-san2 kernel: drbd archivezvol: peer( Primary -> Unknown ) conn( Connected -> NetworkFailure ) pdsk( UpToDate -> DUnknown )
Jul 14 13:33:01 dtc-san2 kernel: drbd archivezvol: asender terminated
Jul 14 13:33:01 dtc-san2 kernel: drbd archivezvol: Terminating drbd_a_archivez
Jul 14 13:33:06 dtc-san2 kernel: drbd vdistorezvol: meta connection shut down by peer.
Jul 14 13:33:06 dtc-san2 kernel: drbd vdistorezvol: peer( Primary -> Unknown ) conn( Connected -> NetworkFailure ) pdsk( UpToDate -> DUnknown )
Jul 14 13:33:06 dtc-san2 kernel: drbd vdistorezvol: asender terminated
Jul 14 13:33:06 dtc-san2 kernel: drbd vdistorezvol: Terminating drbd_a_vdistore
Jul 14 13:33:23 dtc-san2 kernel: INFO: task arc_adapt:2111 blocked for more than 120 seconds.
Jul 14 13:33:23 dtc-san2 kernel: Tainted: P --------------- 2.6.32-431.20.3.el6.x86_64 #1
Jul 14 13:33:23 dtc-san2 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jul 14 13:33:23 dtc-san2 kernel: arc_adapt D 0000000000000013 0 2111 2 0x00000000
Jul 14 13:33:23 dtc-san2 kernel: ffff882ff43dbc70 0000000000000046 ffff882ff43dbc60 ffffffffa0153a28
Jul 14 13:33:23 dtc-san2 kernel: ffff883018ff2040 ffff883018ff2040 ffffffffa038b160 ffff883018ff2040
Jul 14 13:33:23 dtc-san2 kernel: ffff883018ff25f8 ffff882ff43dbfd8 000000000000fbc8 ffff883018ff25f8
Jul 14 13:33:23 dtc-san2 kernel: Call Trace:
Jul 14 13:33:23 dtc-san2 kernel: [] ? __cv_destroy+0x78/0x3b0 [spl]
Jul 14 13:33:23 dtc-san2 kernel: [] ? mutex_lock+0x1e/0x50
Jul 14 13:33:23 dtc-san2 kernel: [] __mutex_lock_slowpath+0x13e/0x180
Jul 14 13:33:23 dtc-san2 kernel: [] ? dnode_rele+0x77/0x170 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] mutex_lock+0x2b/0x50
Jul 14 13:33:23 dtc-san2 kernel: [] dbuf_destroy+0x17f/0x640 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] ? dbuf_clear+0x120/0x330 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] dbuf_evict+0x55/0x140 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] dbuf_do_evict+0x97/0x1b0 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] ? mutex_lock+0x1e/0x50
Jul 14 13:33:23 dtc-san2 kernel: [] arc_do_user_evicts+0x9d/0x2c0 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] ? arc_adapt_thread+0x0/0x6e0 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] ? arc_adapt_thread+0x0/0x6e0 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] arc_adapt_thread+0x7c/0x6e0 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] ? set_user_nice+0xc9/0x130
Jul 14 13:33:23 dtc-san2 kernel: [] ? arc_adapt_thread+0x0/0x6e0 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] thread_generic_wrapper+0x71/0xd0 [spl]
Jul 14 13:33:23 dtc-san2 kernel: [] ? thread_generic_wrapper+0x0/0xd0 [spl]
Jul 14 13:33:23 dtc-san2 kernel: [] kthread+0x96/0xa0
Jul 14 13:33:23 dtc-san2 kernel: [] child_rip+0xa/0x20
Jul 14 13:33:23 dtc-san2 kernel: [] ? kthread+0x0/0xa0
Jul 14 13:33:23 dtc-san2 kernel: [] ? child_rip+0x0/0x20
Jul 14 13:33:23 dtc-san2 kernel: INFO: task zvol/29:2175 blocked for more than 120 seconds.
Jul 14 13:33:23 dtc-san2 kernel: Tainted: P --------------- 2.6.32-431.20.3.el6.x86_64 #1
Jul 14 13:33:23 dtc-san2 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jul 14 13:33:23 dtc-san2 kernel: zvol/29 D 0000000000000011 0 2175 2 0x00000000
Jul 14 13:33:23 dtc-san2 kernel: ffff882ff3cf1860 0000000000000046 0000000000000000 ffff881787097528
Jul 14 13:33:23 dtc-san2 kernel: ffff882ff3cf1810 ffffffffa031984e 0000000000000000 ffff8818134a3800
Jul 14 13:33:23 dtc-san2 kernel: ffff882ff3cefab8 ffff882ff3cf1fd8 000000000000fbc8 ffff882ff3cefab8
Jul 14 13:33:23 dtc-san2 kernel: Call Trace:
Jul 14 13:33:23 dtc-san2 kernel: [] ? zio_wait_for_children+0x8e/0x160 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] __mutex_lock_slowpath+0x13e/0x180
Jul 14 13:33:23 dtc-san2 kernel: [] mutex_lock+0x2b/0x50
Jul 14 13:33:23 dtc-san2 kernel: [] dbuf_find+0x7b/0x220 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] ? mutex_lock+0x1e/0x50
Jul 14 13:33:23 dtc-san2 kernel: [] dbuf_prefetch+0x105/0x520 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] dmu_zfetch_dofetch+0x105/0x1c0 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] dmu_zfetch+0x9eb/0x1450 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] dbuf_read+0x60b/0xd20 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] dmu_buf_hold_array_by_dnode+0x13f/0x7d0 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] dmu_buf_hold_array+0x65/0x90 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] dmu_read_req+0x4f/0x190 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] zvol_read+0x67/0xc0 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] taskq_thread+0x269/0x650 [spl]
Jul 14 13:33:23 dtc-san2 kernel: [] ? thread_return+0x4e/0x760
Jul 14 13:33:23 dtc-san2 kernel: [] ? default_wake_function+0x0/0x20
Jul 14 13:33:23 dtc-san2 kernel: [] ? taskq_thread+0x0/0x650 [spl]
Jul 14 13:33:23 dtc-san2 kernel: [] kthread+0x96/0xa0
Jul 14 13:33:23 dtc-san2 kernel: [] child_rip+0xa/0x20
Jul 14 13:33:23 dtc-san2 kernel: [] ? kthread+0x0/0xa0
Jul 14 13:33:23 dtc-san2 kernel: [] ? child_rip+0x0/0x20
Jul 14 13:33:23 dtc-san2 kernel: INFO: task zvol/31:2177 blocked for more than 120 seconds.
Jul 14 13:33:23 dtc-san2 kernel: Tainted: P --------------- 2.6.32-431.20.3.el6.x86_64 #1
Jul 14 13:33:23 dtc-san2 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jul 14 13:33:23 dtc-san2 kernel: zvol/31 D 0000000000000001 0 2177 2 0x00000000
Jul 14 13:33:23 dtc-san2 kernel: ffff882ff3cf7940 0000000000000046 0000000000000000 0000000000000061
Jul 14 13:33:23 dtc-san2 kernel: 0000000000000011 0000000000000002 ffff882ff3cf78e0 ffffffff81068fa1
Jul 14 13:33:23 dtc-san2 kernel: ffff882ff3cee5f8 ffff882ff3cf7fd8 000000000000fbc8 ffff882ff3cee5f8
Jul 14 13:33:23 dtc-san2 kernel: Call Trace:
Jul 14 13:33:23 dtc-san2 kernel: [] ? __enqueue_rt_entity+0x2c1/0x300
Jul 14 13:33:23 dtc-san2 kernel: [] __mutex_lock_slowpath+0x13e/0x180
Jul 14 13:33:23 dtc-san2 kernel: [] mutex_lock+0x2b/0x50
Jul 14 13:33:23 dtc-san2 kernel: [] dbuf_find+0x7b/0x220 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] ? wake_up_state+0x10/0x20
Jul 14 13:33:23 dtc-san2 kernel: [] ? signal_wake_up+0x2d/0x40
Jul 14 13:33:23 dtc-san2 kernel: [] ? __kmalloc+0x20c/0x220
Jul 14 13:33:23 dtc-san2 kernel: [] __dbuf_hold_impl+0x11f/0xb40 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] ? mutex_lock+0x1e/0x50
Jul 14 13:33:23 dtc-san2 kernel: [] dbuf_hold_impl+0x86/0xc0 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] dbuf_hold_level+0x1f/0x30 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] dmu_tx_check_ioerr+0x4a/0x200 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] dmu_tx_count_write+0x589/0x8e0 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] ? __kmalloc+0x20c/0x220
Jul 14 13:33:23 dtc-san2 kernel: [] ? kmem_alloc_debug+0x213/0x520 [spl]
Jul 14 13:33:23 dtc-san2 kernel: [] ? dmu_tx_hold_object_impl+0x104/0x1d0 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] dmu_tx_hold_write+0x79/0x180 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] zvol_write+0xb0/0x4d0 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] taskq_thread+0x269/0x650 [spl]
Jul 14 13:33:23 dtc-san2 kernel: [] ? thread_return+0x4e/0x760
Jul 14 13:33:23 dtc-san2 kernel: [] ? default_wake_function+0x0/0x20
Jul 14 13:33:23 dtc-san2 kernel: [] ? taskq_thread+0x0/0x650 [spl]
Jul 14 13:33:23 dtc-san2 kernel: [] kthread+0x96/0xa0
Jul 14 13:33:23 dtc-san2 kernel: [] child_rip+0xa/0x20
Jul 14 13:33:23 dtc-san2 kernel: [] ? kthread+0x0/0xa0
Jul 14 13:33:23 dtc-san2 kernel: [] ? child_rip+0x0/0x20
Jul 14 13:33:23 dtc-san2 kernel: INFO: task txg_sync:3749 blocked for more than 120 seconds.
Jul 14 13:33:23 dtc-san2 kernel: Tainted: P --------------- 2.6.32-431.20.3.el6.x86_64 #1
Jul 14 13:33:23 dtc-san2 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jul 14 13:33:23 dtc-san2 kernel: txg_sync D 0000000000000010 0 3749 2 0x00000000
Jul 14 13:33:23 dtc-san2 kernel: ffff882fbef9bbb0 0000000000000046 ffff882fbef9bb50 ffffffffa0143907
Jul 14 13:33:23 dtc-san2 kernel: ffff88180f28d368 ffffffffa033c59c ffff88180e3cfc80 ffff88180e3cf8c0
Jul 14 13:33:23 dtc-san2 kernel: ffff882ff3dab098 ffff882fbef9bfd8 000000000000fbc8 ffff882ff3dab098
Jul 14 13:33:23 dtc-san2 kernel: Call Trace:
Jul 14 13:33:23 dtc-san2 kernel: [] ? kmem_free_debug+0x57/0x1b0 [spl]
Jul 14 13:33:23 dtc-san2 kernel: [] ? prepare_to_wait_exclusive+0x4e/0x80
Jul 14 13:33:23 dtc-san2 kernel: [] cv_wait_common+0x16d/0x3f0 [spl]
Jul 14 13:33:23 dtc-san2 kernel: [] ? dsl_scan_active+0x9b/0xa0 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] ? autoremove_wake_function+0x0/0x40
Jul 14 13:33:23 dtc-san2 kernel: [] ? txg_list_add+0x87/0x100 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] __cv_wait+0x15/0x20 [spl]
Jul 14 13:33:23 dtc-san2 kernel: [] rrw_enter_write+0x7d/0x180 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] ? txg_list_remove+0x82/0x100 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] rrw_enter+0x13/0x30 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] spa_sync+0x8a3/0xd70 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] txg_sync_thread+0x39c/0x6f0 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] ? set_user_nice+0xc9/0x130
Jul 14 13:33:23 dtc-san2 kernel: [] ? txg_sync_thread+0x0/0x6f0 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] thread_generic_wrapper+0x71/0xd0 [spl]
Jul 14 13:33:23 dtc-san2 kernel: [] ? thread_generic_wrapper+0x0/0xd0 [spl]
Jul 14 13:33:23 dtc-san2 kernel: [] kthread+0x96/0xa0
Jul 14 13:33:23 dtc-san2 kernel: [] child_rip+0xa/0x20
Jul 14 13:33:23 dtc-san2 kernel: [] ? kthread+0x0/0xa0
Jul 14 13:33:23 dtc-san2 kernel: [] ? child_rip+0x0/0x20
Jul 14 13:33:23 dtc-san2 kernel: INFO: task zfs:1218 blocked for more than 120 seconds.
Jul 14 13:33:23 dtc-san2 kernel: Tainted: P --------------- 2.6.32-431.20.3.el6.x86_64 #1
Jul 14 13:33:23 dtc-san2 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jul 14 13:33:23 dtc-san2 kernel: zfs D 0000000000000001 0 1218 1217 0x00000080
Jul 14 13:33:23 dtc-san2 kernel: ffff88201557cf08 0000000000000082 ffff88049e153480 ffff88181549f540
Jul 14 13:33:23 dtc-san2 kernel: 0000000000011210 ffff88201557cec8 ffffffff8100bb8e ffff88201557cf08
Jul 14 13:33:23 dtc-san2 kernel: ffff883017174638 ffff88201557dfd8 000000000000fbc8 ffff883017174638
Jul 14 13:33:23 dtc-san2 kernel: Call Trace:
Jul 14 13:33:23 dtc-san2 kernel: [] ? apic_timer_interrupt+0xe/0x20
Jul 14 13:33:23 dtc-san2 kernel: [] ? mutex_spin_on_owner+0x9f/0xc0
Jul 14 13:33:23 dtc-san2 kernel: [] __mutex_lock_slowpath+0x13e/0x180
Jul 14 13:33:23 dtc-san2 kernel: [] mutex_lock+0x2b/0x50
Jul 14 13:33:23 dtc-san2 kernel: [] dbuf_find+0x7b/0x220 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] ? __kmalloc+0x20c/0x220
Jul 14 13:33:23 dtc-san2 kernel: [] __dbuf_hold_impl+0x11f/0xb40 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] ? dnode_hold_impl+0x6a2/0x9d0 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] dbuf_hold_impl+0x86/0xc0 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] dbuf_hold+0x20/0x30 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] dmu_buf_hold+0x8f/0x2a0 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] zap_lockdir+0x5a/0xda0 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] ? refcount_remove_many+0x157/0x290 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] ? dbuf_read+0x6fa/0xd20 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] ? dbuf_rele_and_unlock+0x189/0x3a0 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] ? mutex_lock+0x1e/0x50
Jul 14 13:33:23 dtc-san2 kernel: [] zap_cursor_retrieve+0x2f4/0x450 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] ? kmem_alloc_debug+0x251/0x520 [spl]
Jul 14 13:33:23 dtc-san2 kernel: [] zap_value_search+0x94/0xe0 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] dsl_dataset_get_snapname+0x87/0xa0 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] dsl_dataset_name+0x38/0x1f0 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] dbuf_create+0x4d2/0x980 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] ? dmu_zfetch_stream_reclaim+0x20/0x2d0 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] ? dmu_zfetch+0x69a/0x1450 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] __dbuf_hold_impl+0x381/0xb40 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] ? dbuf_find+0x12a/0x220 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] __dbuf_hold_impl+0x2bc/0xb40 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] ? dbuf_find+0x12a/0x220 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] __dbuf_hold_impl+0x2bc/0xb40 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] ? dbuf_find+0x12a/0x220 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] __dbuf_hold_impl+0x2bc/0xb40 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] ? dbuf_find+0x12a/0x220 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] __dbuf_hold_impl+0x2bc/0xb40 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] ? dbuf_find+0x12a/0x220 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] __dbuf_hold_impl+0x2bc/0xb40 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] ? dbuf_find+0x12a/0x220 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] __dbuf_hold_impl+0x2bc/0xb40 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] dbuf_hold_impl+0x86/0xc0 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] dbuf_hold+0x20/0x30 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] dnode_hold_impl+0x119/0x9d0 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] dnode_hold+0x19/0x20 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] dmu_buf_hold+0x4a/0x2a0 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] zap_lockdir+0x5a/0xda0 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] zap_lookup_norm+0x4a/0x190 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] zap_lookup+0x33/0x40 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] zvol_get_stats+0x3f/0xf0 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] ? dmu_objset_stats+0x5b/0xd0 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] zfs_ioc_objset_stats_impl+0xb1/0x120 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] ? dmu_objset_from_ds+0x70/0x120 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] zfs_ioc_snapshot_list_next+0x181/0x1c0 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] zfsdev_ioctl+0x4de/0x550 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] ? __do_page_fault+0x161/0x480
Jul 14 13:33:23 dtc-san2 kernel: [] vfs_ioctl+0x22/0xa0
Jul 14 13:33:23 dtc-san2 kernel: [] do_vfs_ioctl+0x84/0x580
Jul 14 13:33:23 dtc-san2 kernel: [] ? do_brk+0x26c/0x350
Jul 14 13:33:23 dtc-san2 kernel: [] sys_ioctl+0x81/0xa0
Jul 14 13:33:23 dtc-san2 kernel: [] ? do_device_not_available+0xe/0x10
Jul 14 13:33:23 dtc-san2 kernel: [] system_call_fastpath+0x16/0x1b
Jul 14 13:33:23 dtc-san2 kernel: INFO: task zfs:1222 blocked for more than 120 seconds.
Jul 14 13:33:23 dtc-san2 kernel: Tainted: P --------------- 2.6.32-431.20.3.el6.x86_64 #1
Jul 14 13:33:23 dtc-san2 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jul 14 13:33:23 dtc-san2 kernel: zfs D 0000000000000000 0 1222 1221 0x00000080
Jul 14 13:33:23 dtc-san2 kernel: ffff8820b8627428 0000000000000082 0000000000000000 0000000000000001
Jul 14 13:33:23 dtc-san2 kernel: 0000000000000000 0000000000000000 ffffffff8100bb8e ffff8820b8627428
Jul 14 13:33:23 dtc-san2 kernel: ffff88215eb49af8 ffff8820b8627fd8 000000000000fbc8 ffff88215eb49af8
Jul 14 13:33:23 dtc-san2 kernel: Call Trace:
Jul 14 13:33:23 dtc-san2 kernel: [] ? apic_timer_interrupt+0xe/0x20
Jul 14 13:33:23 dtc-san2 kernel: [] ? mutex_spin_on_owner+0x8d/0xc0
Jul 14 13:33:23 dtc-san2 kernel: [] __mutex_lock_slowpath+0x13e/0x180
Jul 14 13:33:23 dtc-san2 kernel: [] mutex_lock+0x2b/0x50
Jul 14 13:33:23 dtc-san2 kernel: [] dbuf_find+0xf3/0x220 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] __dbuf_hold_impl+0x11f/0xb40 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] ? dbuf_find+0x12a/0x220 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] __dbuf_hold_impl+0x2bc/0xb40 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] ? dbuf_find+0x12a/0x220 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] __dbuf_hold_impl+0x2bc/0xb40 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] ? dbuf_find+0x12a/0x220 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] __dbuf_hold_impl+0x2bc/0xb40 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] ? dbuf_find+0x12a/0x220 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] __dbuf_hold_impl+0x2bc/0xb40 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] ? dbuf_find+0x12a/0x220 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] __dbuf_hold_impl+0x2bc/0xb40 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] ? dbuf_find+0x12a/0x220 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] __dbuf_hold_impl+0x2bc/0xb40 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] dbuf_hold_impl+0x86/0xc0 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] dbuf_hold+0x20/0x30 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] dnode_hold_impl+0x119/0x9d0 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] ? spl_debug_msg+0x442/0xa30 [spl]
Jul 14 13:33:23 dtc-san2 kernel: [] dnode_hold+0x19/0x20 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] dmu_buf_hold+0x4a/0x2a0 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] zap_lockdir+0x5a/0xda0 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] zap_lookup_norm+0x4a/0x190 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] zap_lookup+0x33/0x40 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] zvol_get_stats+0x3f/0xf0 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] ? dmu_objset_stats+0x5b/0xd0 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] zfs_ioc_objset_stats_impl+0xb1/0x120 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] ? dmu_objset_from_ds+0x70/0x120 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] zfs_ioc_snapshot_list_next+0x181/0x1c0 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] zfsdev_ioctl+0x4de/0x550 [zfs]
Jul 14 13:33:23 dtc-san2 kernel: [] ? __do_page_fault+0x161/0x480
Jul 14 13:33:23 dtc-san2 kernel: [] vfs_ioctl+0x22/0xa0
Jul 14 13:33:23 dtc-san2 kernel: [] do_vfs_ioctl+0x84/0x580
Jul 14 13:33:23 dtc-san2 kernel: [] ? do_brk+0x26c/0x350
Jul 14 13:33:23 dtc-san2 kernel: [] sys_ioctl+0x81/0xa0
Jul 14 13:33:23 dtc-san2 kernel: [] ? do_device_not_available+0xe/0x10
Jul 14 13:33:23 dtc-san2 kernel: [] system_call_fastpath+0x16/0x1b
Jul 14 13:35:23 dtc-san2 kernel: INFO: task arc_adapt:2111 blocked for more than 120 seconds.
Jul 14 13:35:23 dtc-san2 kernel: Tainted: P --------------- 2.6.32-431.20.3.el6.x86_64 #1
Jul 14 13:35:23 dtc-san2 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jul 14 13:35:23 dtc-san2 kernel: arc_adapt D 0000000000000013 0 2111 2 0x00000000
Jul 14 13:35:23 dtc-san2 kernel: ffff882ff43dbc70 0000000000000046 ffff882ff43dbc60 ffffffffa0153a28
Jul 14 13:35:23 dtc-san2 kernel: ffff883018ff2040 ffff883018ff2040 ffffffffa038b160 ffff883018ff2040
Jul 14 13:35:23 dtc-san2 kernel: ffff883018ff25f8 ffff882ff43dbfd8 000000000000fbc8 ffff883018ff25f8
Jul 14 13:35:23 dtc-san2 kernel: Call Trace:
Jul 14 13:35:23 dtc-san2 kernel: [] ? __cv_destroy+0x78/0x3b0 [spl]
Jul 14 13:35:23 dtc-san2 kernel: [] ? mutex_lock+0x1e/0x50
Jul 14 13:35:23 dtc-san2 kernel: [] __mutex_lock_slowpath+0x13e/0x180
Jul 14 13:35:23 dtc-san2 kernel: [] ? dnode_rele+0x77/0x170 [zfs]
Jul 14 13:35:23 dtc-san2 kernel: [] mutex_lock+0x2b/0x50
Jul 14 13:35:23 dtc-san2 kernel: [] dbuf_destroy+0x17f/0x640 [zfs]
Jul 14 13:35:23 dtc-san2 kernel: [] ? dbuf_clear+0x120/0x330 [zfs]
Jul 14 13:35:23 dtc-san2 kernel: [] dbuf_evict+0x55/0x140 [zfs]
Jul 14 13:35:23 dtc-san2 kernel: [] dbuf_do_evict+0x97/0x1b0 [zfs]
Jul 14 13:35:23 dtc-san2 kernel: [] ? mutex_lock+0x1e/0x50
Jul 14 13:35:23 dtc-san2 kernel: [] arc_do_user_evicts+0x9d/0x2c0 [zfs]
Jul 14 13:35:23 dtc-san2 kernel: [] ? arc_adapt_thread+0x0/0x6e0 [zfs]
Jul 14 13:35:23 dtc-san2 kernel: [] ? arc_adapt_thread+0x0/0x6e0 [zfs]
Jul 14 13:35:23 dtc-san2 kernel: [] arc_adapt_thread+0x7c/0x6e0 [zfs]
Jul 14 13:35:23 dtc-san2 kernel: [] ? set_user_nice+0xc9/0x130
Jul 14 13:35:23 dtc-san2 kernel: [] ? arc_adapt_thread+0x0/0x6e0 [zfs]
Jul 14 13:35:23 dtc-san2 kernel: [] thread_generic_wrapper+0x71/0xd0 [spl]
Jul 14 13:35:23 dtc-san2 kernel: [] ? thread_generic_wrapper+0x0/0xd0 [spl]
Jul 14 13:35:23 dtc-san2 kernel: [] kthread+0x96/0xa0
Jul 14 13:35:23 dtc-san2 kernel: [] child_rip+0xa/0x20
Jul 14 13:35:23 dtc-san2 kernel: [] ? kthread+0x0/0xa0
Jul 14 13:35:23 dtc-san2 kernel: [] ? child_rip+0x0/0x20
Jul 14 13:35:23 dtc-san2 kernel: INFO: task zvol/1:2147 blocked for more than 120 seconds.
Jul 14 13:35:23 dtc-san2 kernel: Tainted: P --------------- 2.6.32-431.20.3.el6.x86_64 #1
Jul 14 13:35:23 dtc-san2 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jul 14 13:35:23 dtc-san2 kernel: zvol/1 D 0000000000000003 0 2147 2 0x00000000
Jul 14 13:35:23 dtc-san2 kernel: ffff882ff3c8b860 0000000000000046 0000000000000000 ffffffffa0292977
Jul 14 13:35:23 dtc-san2 kernel: ffff882ff3c8b880 ffffffffa02379ba 0000000000000213 0000000000000007
Jul 14 13:35:23 dtc-san2 kernel: ffff882ff3c625f8 ffff882ff3c8bfd8 000000000000fbc8 ffff882ff3c625f8
Jul 14 13:35:23 dtc-san2 kernel: Call Trace:
Jul 14 13:35:23 dtc-san2 kernel: [] ? refcount_remove_many+0x157/0x290 [zfs]
Jul 14 13:35:23 dtc-san2 kernel: [] ? dbuf_read+0x6fa/0xd20 [zfs]
Jul 14 13:35:23 dtc-san2 kernel: [] __mutex_lock_slowpath+0x13e/0x180
Jul 14 13:35:23 dtc-san2 kernel: [] mutex_lock+0x2b/0x50
Jul 14 13:35:23 dtc-san2 kernel: [] dbuf_find+0x7b/0x220 [zfs]
Jul 14 13:35:23 dtc-san2 kernel: [] dbuf_prefetch+0x105/0x520 [zfs]
Jul 14 13:35:23 dtc-san2 kernel: [] dmu_zfetch_dofetch+0x105/0x1c0 [zfs]
Jul 14 13:35:23 dtc-san2 kernel: [] dmu_zfetch+0x9eb/0x1450 [zfs]
Jul 14 13:35:23 dtc-san2 kernel: [] ? mutex_lock+0x1e/0x50
Jul 14 13:35:23 dtc-san2 kernel: [] ? mutex_lock+0x1e/0x50
Jul 14 13:35:23 dtc-san2 kernel: [] dbuf_read+0x992/0xd20 [zfs]
Jul 14 13:35:23 dtc-san2 kernel: [] dmu_buf_hold_array_by_dnode+0x13f/0x7d0 [zfs]
Jul 14 13:35:23 dtc-san2 kernel: [] dmu_buf_hold_array+0x65/0x90 [zfs]
Jul 14 13:35:23 dtc-san2 kernel: [] dmu_read_req+0x4f/0x190 [zfs]
Jul 14 13:35:23 dtc-san2 kernel: [] zvol_read+0x67/0xc0 [zfs]
Jul 14 13:35:23 dtc-san2 kernel: [] taskq_thread+0x269/0x650 [spl]
Jul 14 13:35:23 dtc-san2 kernel: [] ? thread_return+0x4e/0x760
Jul 14 13:35:23 dtc-san2 kernel: [] ? default_wake_function+0x0/0x20
Jul 14 13:35:23 dtc-san2 kernel: [] ? taskq_thread+0x0/0x650 [spl]
Jul 14 13:35:23 dtc-san2 kernel: [] kthread+0x96/0xa0
Jul 14 13:35:23 dtc-san2 kernel: [] child_rip+0xa/0x20
Jul 14 13:35:23 dtc-san2 kernel: [] ? kthread+0x0/0xa0
Jul 14 13:35:23 dtc-san2 kernel: [] ? child_rip+0x0/0x20
Jul 14 13:35:23 dtc-san2 kernel: INFO: task zvol/8:2154 blocked for more than 120 seconds.
Jul 14 13:35:23 dtc-san2 kernel: Tainted: P --------------- 2.6.32-431.20.3.el6.x86_64 #1
Jul 14 13:35:23 dtc-san2 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jul 14 13:35:23 dtc-san2 kernel: zvol/8 D 0000000000000016 0 2154 2 0x00000000
Jul 14 13:35:23 dtc-san2 kernel: ffff882ff3ca5940 0000000000000046 0000000000000000 0000000000000061
Jul 14 13:35:23 dtc-san2 kernel: 0000000000000016 0000000000000000 ffff882ff3ca58e0 ffffffff81068fa1
Jul 14 13:35:23 dtc-san2 kernel: ffff882ff3ca3af8 ffff882ff3ca5fd8 000000000000fbc8 ffff882ff3ca3af8
Jul 14 13:35:23 dtc-san2 kernel: Call Trace:
Jul 14 13:35:23 dtc-san2 kernel: [] ? __enqueue_rt_entity+0x2c1/0x300
Jul 14 13:35:23 dtc-san2 kernel: [] __mutex_lock_slowpath+0x13e/0x180
Jul 14 13:35:23 dtc-san2 kernel: [] mutex_lock+0x2b/0x50
Jul 14 13:35:23 dtc-san2 kernel: [] dbuf_find+0x7b/0x220 [zfs]
Jul 14 13:35:23 dtc-san2 kernel: [] ? wake_up_state+0x10/0x20
Jul 14 13:35:23 dtc-san2 kernel: [] ? signal_wake_up+0x2d/0x40
Jul 14 13:35:23 dtc-san2 kernel: [] ? __kmalloc+0x20c/0x220
Jul 14 13:35:23 dtc-san2 kernel: [] __dbuf_hold_impl+0x11f/0xb40 [zfs]
Jul 14 13:35:23 dtc-san2 kernel: [] ? mutex_lock+0x1e/0x50
Jul 14 13:35:23 dtc-san2 kernel: [] dbuf_hold_impl+0x86/0xc0 [zfs]
Jul 14 13:35:23 dtc-san2 kernel: [] dbuf_hold_level+0x1f/0x30 [zfs]
Jul 14 13:35:23 dtc-san2 kernel: [] dmu_tx_check_ioerr+0x4a/0x200 [zfs]
Jul 14 13:35:23 dtc-san2 kernel: [] dmu_tx_count_write+0x589/0x8e0 [zfs]
Jul 14 13:35:23 dtc-san2 kernel: [] ? __kmalloc+0x20c/0x220
Jul 14 13:35:23 dtc-san2 kernel: [] ? kmem_alloc_debug+0x213/0x520 [spl]
Jul 14 13:35:23 dtc-san2 kernel: [] ? dmu_tx_hold_object_impl+0x104/0x1d0 [zfs]
Jul 14 13:35:23 dtc-san2 kernel: [] dmu_tx_hold_write+0x79/0x180 [zfs]
Jul 14 13:35:23 dtc-san2 kernel: [] zvol_write+0xb0/0x4d0 [zfs]
Jul 14 13:35:23 dtc-san2 kernel: [] taskq_thread+0x269/0x650 [spl]
Jul 14 13:35:23 dtc-san2 kernel: [] ? thread_return+0x4e/0x760
Jul 14 13:35:23 dtc-san2 kernel: [] ? default_wake_function+0x0/0x20
Jul 14 13:35:23 dtc-san2 kernel: [] ? taskq_thread+0x0/0x650 [spl]
Jul 14 13:35:23 dtc-san2 kernel: [

@olw2005

olw2005 commented Jul 14, 2014

Also possibly of interest: there was a "zfs send" operation running prior to the invocation of the two "zfs list" commands, and there are numerous snapshots on the server (~100), which seems to cause a significant delay (around 10 seconds) for "zfs list -t all" to finish.

@olw2005

olw2005 commented Jul 14, 2014

Nothing unusual in the history (zdb -h pool), truncated for brevity to just the last several lines:

...
2014-07-14.13:03:46 zfs recv -F pool/chicagorepl
2014-07-14.13:04:15 zfs destroy pool/chicagorepl@hour-20140714080101
2014-07-14.13:12:06 zfs send -I pool/detroit@hour-20140714104601 pool/detroit@hour-20140714125501
2014-07-14.13:12:30 zfs destroy pool/detroit@hour-20140714074601
2014-07-14.13:12:36 zfs destroy pool/detroit@hour-20140714084601
2014-07-14.13:16:03 zfs snapshot pool/archive@hour-20140714131601

@behlendorf
Contributor

@olw2005 OK, thanks for letting us know there's still an issue here.

@olw2005

olw2005 commented Jul 21, 2014

Posting this mostly for the benefit of anyone coming across this via Google:

As this bug has been tagged for the 0.7.0 milestone, it may be a while before it gets addressed. In the interim I've replaced all instances of "zfs list" commands in my scripts with "zfs_list", which runs the [crude / brute-force / hack] wrapper script shown below:

[root@ctc-san2 ~]# cat /sbin/zfs_list
#!/bin/bash
# Hack wrapper for "zfs list" to prevent lockup from
# multiple "zfs list" commands running concurrently
EXIT_STATUS=0
COUNTER=0
until [ $EXIT_STATUS -eq 1 ]; do
# use [i] to prevent false positive from our own ps command
  ZFS_LIST_RUNNING=`ps -ef | grep "zfs l[i]st"`
  EXIT_STATUS=$?
  sleep $COUNTER
# increase check interval
  if [ $COUNTER -lt 5 ]
  then
    let COUNTER=$COUNTER+1
  fi
done
zfs list "$@"

In addition, where possible I have added the "-s name" option to "zfs list" commands. As stated in another bug report, "zfs list -o name -s name -t all" is Much Faster (orders of magnitude?!) than "zfs list -o name -t all". To wit:

(zfs list 73 snapshots with "-s name" option. Fraction of a second, all good.)

[root@ctc-san2 zfs-scripts]# time zfs list -o name -s name -t all | wc -l
73

real    0m0.123s
user    0m0.001s
sys     0m0.015s

(and the same zfs list w/o the "-s name" option. Ouch.)

[root@ctc-san2 zfs-scripts]# time zfs list -o name -t all | wc -l
73

real    0m9.368s
user    0m0.011s
sys     0m0.050s

@behlendorf added Bug - Minor and removed Bug Type: Documentation labels Oct 7, 2014
@behlendorf removed this from the 0.7.0 milestone Oct 7, 2014
@behlendorf
Contributor

Closing. This is no longer believed to be an issue with the latest code.
