
zpool import spinning on txg_quiesce and txg_sync #10828

Closed
stuartthebruce opened this issue Aug 26, 2020 · 12 comments
Labels
Status: Triage Needed (new issue which needs to be triaged), Type: Defect (incorrect behavior, e.g. crash, hang)

Comments

@stuartthebruce

System information

Type Version/Name
Distribution Name Scientific Linux
Distribution Version 7.8
Linux Kernel 3.10.0-1127.el7
Architecture x86_64
ZFS Version 0.8.4
SPL Version 0.8.4

Describe the problem you're observing

zpool import hangs (for at least several hours) with txg_quiesce and txg_sync burning CPU cycles to no avail. There are no problems with the storage and no syslog errors about hung kernel tasks. After rebooting without auto-importing, a bare "zpool import" discovers all of the devices and gives every indication that an actual import by name should succeed. However, even while the import is spinning CPU cycles, I am able to run zdb to see all of the devices and the pool history.

Is there any other useful diagnostic information beyond that below before I start trying to run "zpool import -T"?

Note, this is also being discussed at https://zfsonlinux.topicbox.com/groups/zfs-discuss/Ta6b683d15084807b/zpool-import-spinning-on-txgquiesce-and-txgsync

Describe how to reproduce the problem

Start 5 concurrent "zfs receive" streams into a pool with six 10-drive raidz3 vdevs and a small mirrored SSD log, then wait for a city power sub-station failure to power down the server and the external SAS storage.
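For illustration only, the receive workload looked roughly like the following (a hedged sketch: the send source, host name, and dataset names are placeholders; the receive flags match the 'zfs receive -s -F' invocations visible in the pool history below):

# hypothetical sketch of five concurrent receives; names are placeholders
for ds in user1 user2 user3 user4 user5; do
    ssh sourcehost zfs send "tank/home1/$ds@snap" | zfs receive -s -F "jbod2-backup/home1/$ds" &
done
wait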

Include any warning/errors/backtraces from the system logs

[root@node809 ~]# zpool import
  pool: jbod2-backup
    id: 12314841084204185915
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

       jbod2-backup                      ONLINE
         raidz3-0                        ONLINE
           35000cca2531dd934             ONLINE
           35000cca2531e2a94             ONLINE
           35000cca2531e3ce8             ONLINE
           35000cca2531e5f38             ONLINE
           35000cca2531e5f68             ONLINE
           35000cca2531e5f6c             ONLINE
           35000cca2531e6404             ONLINE
           35000cca2531e84ec             ONLINE
           35000cca2531e868c             ONLINE
           35000cca2531e87d8             ONLINE
         raidz3-1                        ONLINE
           35000cca2531e9750             ONLINE
           35000cca2531e9764             ONLINE
           35000cca2531e9f48             ONLINE
           35000cca2531eb96c             ONLINE
           35000cca2531ec858             ONLINE
           35000cca2530aa110             ONLINE
           35000cca2530aa424             ONLINE
           35000cca2530aacb4             ONLINE
           35000cca2530e297c             ONLINE
           35000cca2530f661c             ONLINE
         raidz3-2                        ONLINE
           35000cca253100b68             ONLINE
           35000cca253123c08             ONLINE
           35000cca253158878             ONLINE
           35000cca253168af4             ONLINE
           35000cca25316cc20             ONLINE
           35000cca25316d614             ONLINE
           35000cca25316e978             ONLINE
           35000cca253178e50             ONLINE
           35000cca2531bc948             ONLINE
           35000cca2531d7500             ONLINE
         raidz3-3                        ONLINE
           35000cca2530a4188             ONLINE
           35000cca2530a461c             ONLINE
           35000cca2530a4868             ONLINE
           35000cca2530a4918             ONLINE
           35000cca2530a4bc4             ONLINE
           35000cca2530a58a4             ONLINE
           35000cca2530a63f4             ONLINE
           35000cca2530a6adc             ONLINE
           35000cca2530a6f3c             ONLINE
           35000cca2530a7108             ONLINE
         raidz3-4                        ONLINE
           35000cca2530a7130             ONLINE
           35000cca2530a7288             ONLINE
           35000cca2530a7408             ONLINE
           35000cca2530a7428             ONLINE
           35000cca2530a7494             ONLINE
           35000cca253032d58             ONLINE
           35000cca253075af0             ONLINE
           35000cca253084ee4             ONLINE
           35000cca253089a64             ONLINE
           35000cca25308a028             ONLINE
         raidz3-5                        ONLINE
           35000cca253090cbc             ONLINE
           35000cca253091b9c             ONLINE
           35000cca2530925bc             ONLINE
           35000cca253093758             ONLINE
           35000cca25309e6d8             ONLINE
           35000cca2530a0dd0             ONLINE
           35000cca2530a0f64             ONLINE
           35000cca2530a1140             ONLINE
           35000cca2530a21c4             ONLINE
           35000cca2530a23b4             ONLINE
       logs
         mirror-6                        ONLINE
           wwn-0x55cd2e404c21b856-part3  ONLINE
           wwn-0x55cd2e414dcf191d-part3  ONLINE

And after actually trying to import.

top - 12:18:17 up 19 min,  3 users,  load average: 1.50, 1.76, 1.51
Tasks: 643 total,   2 running, 640 sleeping,   0 stopped,   1 zombie
%Cpu(s):  0.1 us,  4.2 sy,  0.0 ni, 91.7 id,  4.0 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem : 26387843+total, 25777355+free,  4764504 used,  1340380 buff/cache
KiB Swap:        0 total,        0 free,        0 used. 25793254+avail Mem

 PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
6237 root      20   0       0      0      0 R  95.3  0.0  12:37.39 txg_sync
6236 root      20   0       0      0      0 S   3.7  0.0   0:32.50 txg_quiesce
9855 root      20   0 3358988   9756   2332 D   2.0  0.0   0:17.85 zpool
[root@node809 ~]# cat /proc/6237/stack
[<ffffffffffffffff>] 0xffffffffffffffff

[root@node809 ~]# cat /proc/6236/stack
[<ffffffffc081e325>] cv_wait_common+0x125/0x150 [spl]
[<ffffffffc081e3e5>] __cv_wait_sig+0x15/0x40 [spl]
[<ffffffffc0c15742>] txg_quiesce_thread+0x3b2/0x3c0 [zfs]
[<ffffffffc0825e03>] thread_generic_wrapper+0x73/0x80 [spl]
[<ffffffff85ac6691>] kthread+0xd1/0xe0
[<ffffffff86192d37>] ret_from_fork_nospec_end+0x0/0x39
[<ffffffffffffffff>] 0xffffffffffffffff

[root@node809 ~]# cat /proc/9855/stack
[<ffffffffc081e2b2>] cv_wait_common+0xb2/0x150 [spl]
[<ffffffffc081e388>] __cv_wait_io+0x18/0x20 [spl]
[<ffffffffc0c14e75>] txg_wait_synced_impl+0xe5/0x130 [zfs]
[<ffffffffc0c14ed0>] txg_wait_synced+0x10/0x50 [zfs]
[<ffffffffc0bb3ac5>] dmu_tx_wait+0x275/0x3b0 [zfs]
[<ffffffffc0bb3c91>] dmu_tx_assign+0x91/0x490 [zfs]
[<ffffffffc0c0b85e>] spa_history_log_internal+0xbe/0x120 [zfs]
[<ffffffffc0c0b943>] spa_history_log_version+0x83/0x90 [zfs]
[<ffffffffc0c022e0>] spa_load+0xfa0/0x1390 [zfs]
[<ffffffffc0c02727>] spa_load_best+0x57/0x2f0 [zfs]
[<ffffffffc0c048d4>] spa_import+0x264/0x800 [zfs]
[<ffffffffc0c56c97>] zfs_ioc_pool_import+0x147/0x160 [zfs]
[<ffffffffc0c5b7f4>] zfsdev_ioctl+0x864/0x8c0 [zfs]
[<ffffffff85c62890>] do_vfs_ioctl+0x3a0/0x5b0
[<ffffffff85c62b41>] SyS_ioctl+0xa1/0xc0
[<ffffffff86192ed2>] system_call_fastpath+0x25/0x2a
[<ffffffffffffffff>] 0xffffffffffffffff
[root@node809 ~]# zdb
jbod2-backup:
   version: 5000
   name: 'jbod2-backup'
   state: 0
   txg: 15
   pool_guid: 12314841084204185915
   errata: 0
   hostid: 1872739650
   hostname: 'node809'
   com.delphix:has_per_vdev_zaps
   vdev_children: 7
   vdev_tree:
       type: 'root'
       id: 0
       guid: 12314841084204185915
       create_txg: 4
       children[0]:
...
           children[1]:
               type: 'disk'
               id: 1
               guid: 1373381130126859264
               path: '/dev/disk/by-id/wwn-0x55cd2e414dcf191d-part3'
               whole_disk: 0
               create_txg: 4
               com.delphix:vdev_zap_leaf: 197
   features_for_read:
       com.delphix:hole_birth
       com.delphix:embedded_data
[root@node809 ~]# zdb -hh jbod2-backup

History:
unrecognized record:
 history internal str: 'pool version 5000; software version unknown; uts node809 3.10.0-1127.el7.x86_64 #1 SMP Wed Apr 1 12:25:50 CDT 2020 x86_64'
 internal_name: 'create'
 history txg: 4
 history time: 1598226372
 history hostname: 'node809'
unrecognized record:
 history internal str: 'feature@async_destroy=enabled'
 internal_name: 'set'
 history txg: 4
 history time: 1598226372
 history hostname: 'node809'
unrecognized record:
 history internal str: 'feature@empty_bpobj=enabled'
 internal_name: 'set'
 history txg: 4
 history time: 1598226372
 history hostname: 'node809'
...
2020-08-26.07:27:16 zfs receive -s -F jbod2-backup/home1/sherman.thompson
 history command: 'zfs receive -s -F jbod2-backup/home1/sherman.thompson'
 history zone: 'linux'
 history who: 0
 history time: 1598452036
 history hostname: 'node809'
unrecognized record:
 dsname: 'jbod2-backup/home1/sherman.thompson/%recv'
 dsid: 74012
 history internal str: ''
 internal_name: 'receive'
 history txg: 135873
 history time: 1598452036
 history hostname: 'node809'
stuartthebruce added the Status: Triage Needed and Type: Defect labels on Aug 26, 2020
@stuartthebruce
Author

As suggested on zfs-discuss,

It seems blocked on executing a txg. Can you try importing it in readonly mode (via zpool import -o readonly=on <your_pool>)? Does the import complete?

That works, and zpool status reports clean.

If so, please try to export and re-import the pool in read/write mode.

That spins in the same way,

[root@node809 ~]# top
top - 14:19:59 up 21:55,  2 users,  load average: 1.71, 0.76, 0.32
Tasks: 648 total,   3 running, 644 sleeping,   0 stopped,   1 zombie
%Cpu(s):  0.1 us,  4.2 sy,  0.0 ni, 91.6 id,  4.1 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem : 26387843+total, 25748360+free,  5137552 used,  1257284 buff/cache
KiB Swap:        0 total,        0 free,        0 used. 25739040+avail Mem

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
16471 root      20   0       0      0      0 R  94.4  0.0   1:44.98 txg_sync
16470 root      20   0       0      0      0 R   5.0  0.0   0:05.45 txg_quiesce
18547 root      20   0 3358988   9752   2332 D   1.7  0.0   0:04.88 zpool


[root@node809 ~]# cat /proc/16471/stack
[<ffffffffffffffff>] 0xffffffffffffffff

[root@node809 ~]# cat /proc/16470/stack
[<ffffffffc088c325>] cv_wait_common+0x125/0x150 [spl]
[<ffffffffc088c3e5>] __cv_wait_sig+0x15/0x40 [spl]
[<ffffffffc1292742>] txg_quiesce_thread+0x3b2/0x3c0 [zfs]
[<ffffffffc0893e03>] thread_generic_wrapper+0x73/0x80 [spl]
[<ffffffff9eac6691>] kthread+0xd1/0xe0
[<ffffffff9f192d37>] ret_from_fork_nospec_end+0x0/0x39
[<ffffffffffffffff>] 0xffffffffffffffff

[root@node809 ~]# cat /proc/18547/stack
[<ffffffffc088c2b2>] cv_wait_common+0xb2/0x150 [spl]
[<ffffffffffffffff>] 0xffffffffffffffff

@stuartthebruce
Author

[root@node809 ~]# echo 1 > /sys/module/zfs/parameters/zfs_dbgmsg_enable

[root@node809 ~]# cat /proc/spl/kstat/zfs/dbgmsg
timestamp    message
1598639372   spa.c:5638:spa_tryimport(): spa_tryimport: importing jbod2-backup
1598639372   spa_misc.c:408:spa_load_note(): spa_load($import, config trusted): LOADING
1598639373   vdev.c:125:vdev_dbgmsg(): disk vdev '/dev/mapper/35000cca2530a7408': best uberblock found for 0
1598639373   spa_misc.c:408:spa_load_note(): spa_load($import, config untrusted): using uberblock with txg=0
1598639384   spa_misc.c:408:spa_load_note(): spa_load($import, config trusted): LOADED
1598639384   spa_misc.c:408:spa_load_note(): spa_load($import, config trusted): UNLOADING
1598639384   spa.c:5638:spa_tryimport(): spa_tryimport: importing jbod2-backup
1598639384   spa_misc.c:408:spa_load_note(): spa_load($import, config trusted): LOADING
1598639384   vdev.c:125:vdev_dbgmsg(): disk vdev '/dev/mapper/35000cca2530a7408': best uberblock found for 0
1598639384   spa_misc.c:408:spa_load_note(): spa_load($import, config untrusted): using uberblock with txg=0
1598639393   spa_misc.c:408:spa_load_note(): spa_load($import, config trusted): LOADED
1598639393   spa_misc.c:408:spa_load_note(): spa_load($import, config trusted): UNLOADING
1598639394   spa.c:5490:spa_import(): spa_import: importing jbod2-backup
1598639394   spa_misc.c:408:spa_load_note(): spa_load(jbod2-backup, config trusted): LOADING
1598639395   vdev.c:125:vdev_dbgmsg(): disk vdev '/dev/mapper/35000cca2530a7408': best uberblock found for 0
1598639395   spa_misc.c:408:spa_load_note(): spa_load(jbod2-backup, config untrusted): using uberblock with0
1598639408   mmp.c:249:mmp_thread_start(): MMP thread started pool 'jbod2-backup' gethrtime 75574005216829
1598639408   mmp.c:654:mmp_thread(): MMP suspending pool 'jbod2-backup': gethrtime 75574005223863 mmp_last_0
1598639408   spa.c:7592:spa_async_request(): spa=jbod2-backup async request task=1

@stuartthebruce
Author

As conjectured on zfs-discuss, multihost is enabled,

This sounds like perfect conditions for ZFS calculating an unreasonably high MMP import delay. The stack traces seem consistent with that as well.

From zdb pool history,

[root@node809 ~]# grep multihost zdb.hh
  history internal str: 'multihost=1'
2020-08-23.16:46:31 zpool set multihost=on jbod2-backup
  history command: 'zpool set multihost=on jbod2-backup'

@stuartthebruce
Author

This pool has now been successfully imported by setting zfs_multihost_fail_intervals=0 once; subsequent imports then work without having to set that. More details are available in zfs-discuss at https://zfsonlinux.topicbox.com/groups/zfs-discuss/Ta6b683d15084807b/zpool-import-spinning-on-txgquiesce-and-txgsync
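For reference, the one-time workaround amounted to something like the following (illustrative commands, not an exact transcript; the module parameter path is the standard ZFS location shown earlier in this thread):

# disable the multihost fail check for this import only
echo 0 > /sys/module/zfs/parameters/zfs_multihost_fail_intervals
zpool import jbod2-backup
# later imports then succeeded with zfs_multihost_fail_intervals back at its default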

The remaining question for this issue is whether there is enough information to fix the bug (or missing feature) that prevented a perfectly healthy pool from importing.

@ofaaland
Contributor

ofaaland commented Sep 3, 2020

@stuartthebruce I believe the patch from #10873 will fix the import problem you saw. Thanks for all the debug information.

@stuartthebruce
Author

stuartthebruce commented Sep 3, 2020

From zfs-discuss, this happened again in a reproducible way, and after a bit more testing I think I have found a problem (or at least an opportunity for an enhancement).

After the initial pool recovery discussed previously in this thread, where the first import was done with zfs_multihost_fail_intervals=0, it was not necessary to modify that parameter for subsequent imports. However, that is no longer true, and I think the initial and current import problems are only loosely related to unscheduled shutdowns.

After confirming a few times that "zpool import -o readonly=on" works with default settings while a non-readonly import requires zfs_multihost_fail_intervals=0 for both initial and subsequent scheduled imports, I discovered that another solution is to increase zfs_multihost_fail_intervals.
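For reference, the larger value can be set at runtime before attempting the import (illustrative commands; 40 is simply the value that worked here):

echo 40 > /sys/module/zfs/parameters/zfs_multihost_fail_intervals
zpool import jbod2-backup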

In particular, I am currently reproducibly unable to import this pool with the default setting of zfs_multihost_fail_intervals=20; however, I can reproducibly import it with a value of 40. I think what has changed is that I have pushed more datasets, snapshots, metadata and data into the pool, and the import time has crossed a threshold:

[root@node809 ~]# time zpool import -o readonly=on jbod2-backup

real    1m24.583s
user    0m1.119s
sys     0m21.222s

Please recall that this pool has 60 7200 RPM SAS drives, and I now wonder if the default settings are doing what I thought they were, i.e., try 10 times to write a single sector to one of the leaf devices in the pool, and suspend the pool if and only if that fails to complete within 1 second 20 times. Unless there are concurrent I/O-intensive import threads heavily loading the drives during import, these 7200 RPM drives are all healthy and should not take 1 second to respond (and certainly not 20 times in a row). For example, after a successful import and during a 2 GByte/sec scrub, the latencies are reasonable for a 7200 RPM HDD:

[root@node809 ~]# ioping /dev/disk/by-id/dm-uuid-mpath-35000cca2531e6404
4 KiB <<< /dev/disk/by-id/dm-uuid-mpath-35000cca2531e6404 (block device 10.9 TiB): request=1 time=13.2 ms (warmup)
4 KiB <<< /dev/disk/by-id/dm-uuid-mpath-35000cca2531e6404 (block device 10.9 TiB): request=2 time=13.8 ms
4 KiB <<< /dev/disk/by-id/dm-uuid-mpath-35000cca2531e6404 (block device 10.9 TiB): request=3 time=13.4 ms
4 KiB <<< /dev/disk/by-id/dm-uuid-mpath-35000cca2531e6404 (block device 10.9 TiB): request=4 time=20.0 ms
4 KiB <<< /dev/disk/by-id/dm-uuid-mpath-35000cca2531e6404 (block device 10.9 TiB): request=5 time=12.8 ms

What am I missing?

P.S. For reference, here is dbgmsg from a successful non-readonly import with zfs_multihost_fail_intervals=40,

[root@node809 ~]# cat /proc/spl/kstat/zfs/dbgmsg
timestamp    message
1599092310   spa.c:5638:spa_tryimport(): spa_tryimport: importing jbod2-backup
1599092310   spa_misc.c:408:spa_load_note(): spa_load($import, config trusted): LOADING
1599092311   vdev.c:125:vdev_dbgmsg(): disk vdev '/dev/mapper/35000cca2530a461c': best uberblock found for spa $import. txg 271999
1599092311   spa_misc.c:408:spa_load_note(): spa_load($import, config untrusted): using uberblock with txg=271999
1599092336   spa_misc.c:408:spa_load_note(): spa_load($import, config trusted): LOADED
1599092336   spa_misc.c:408:spa_load_note(): spa_load($import, config trusted): UNLOADING
1599092336   spa.c:5638:spa_tryimport(): spa_tryimport: importing jbod2-backup
1599092336   spa_misc.c:408:spa_load_note(): spa_load($import, config trusted): LOADING
1599092337   vdev.c:125:vdev_dbgmsg(): disk vdev '/dev/mapper/35000cca2530a461c': best uberblock found for spa $import. txg 271999
1599092337   spa_misc.c:408:spa_load_note(): spa_load($import, config untrusted): using uberblock with txg=271999
1599092362   spa_misc.c:408:spa_load_note(): spa_load($import, config trusted): LOADED
1599092362   spa_misc.c:408:spa_load_note(): spa_load($import, config trusted): UNLOADING
1599092362   spa.c:5490:spa_import(): spa_import: importing jbod2-backup
1599092362   spa_misc.c:408:spa_load_note(): spa_load(jbod2-backup, config trusted): LOADING
1599092363   vdev.c:125:vdev_dbgmsg(): disk vdev '/dev/mapper/35000cca2530a461c': best uberblock found for spa jbod2-backup. txg 271999
1599092363   spa_misc.c:408:spa_load_note(): spa_load(jbod2-backup, config untrusted): using uberblock with txg=271999
1599092388   mmp.c:249:mmp_thread_start(): MMP thread started pool 'jbod2-backup' gethrtime 737578257533
1599092388   spa.c:7592:spa_async_request(): spa=jbod2-backup async request task=1
1599092389   spa_misc.c:408:spa_load_note(): spa_load(jbod2-backup, config trusted): LOADED
1599092389   spa_history.c:319:spa_history_log_sync(): txg 272001 open pool version 5000; software version unknown; uts node809 3.10.0-1127.18.2.el7.x86_64 #1 SMP Thu Jul 30 10:36:16 CDT 2020 x86_64
1599092389   spa.c:7592:spa_async_request(): spa=jbod2-backup async request task=32
1599092389   spa_history.c:319:spa_history_log_sync(): txg 272003 import pool version 5000; software version unknown; uts node809 3.10.0-1127.18.2.el7.x86_64 #1 SMP Thu Jul 30 10:36:16 CDT 2020 x86_64
1599092399   spa_history.c:306:spa_history_log_sync(): command: zpool import jbod2-backup

@ofaaland
Contributor

ofaaland commented Sep 10, 2020

Hi @stuartthebruce

After the initial pool recovery discussed previously in this thread, where the first import was done with zfs_multihost_fail_intervals=0, it was not necessary to modify that parameter for subsequent imports. However, that is no longer true, and I think the initial and current import problems are only loosely related to unscheduled shutdowns.

After you imported with zfs_multihost_fail_intervals=0, then cleanly exported the pool, then reset zfs_multihost_fail_intervals to the default value, did imports fail with a console log message indicating the pool had been suspended?

If the pool was suspended, then you are almost certainly hitting the bug I fixed in #10873. The bug is not specific to imports after an unclean shutdown. If you have the ability to build zfs, please apply that patch and see if you're able to import this pool normally.

Please recall that this pool has 60 7200 RPM SAS drives, and I now wonder if the default settings are doing what I thought they were, i.e., try 10 times to write a single sector to one of the leaf devices in the pool, and suspend the pool if and only if that fails to complete within 1 second 20 times.

That's not quite what the setting means. When multihost=on, ZFS chooses a random leaf device every (zfs_multihost_interval / #leaves) period and issues a multihost write to it (not the same device over and over). It does not wait for it to land. Separately, ZFS tracks the most recent time any multihost write landed successfully. If zfs_multihost_fail_intervals>0, ZFS checks that a multihost write landed successfully some time during the last (zfs_multihost_fail_intervals * zfs_multihost_interval) period. So the requirement is just that one of the multihost writes landed, and any device will do.
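A rough back-of-the-envelope with the numbers from this thread (a hedged illustration of the check described above, not the actual kernel code; zfs_multihost_interval is assumed to be at its default of 1000 ms):

# shell arithmetic only; values taken from this thread
interval_ms=1000       # zfs_multihost_interval (assumed default)
fail_intervals=20      # zfs_multihost_fail_intervals in use on this system
leaves=60              # approximate number of leaf vdevs in jbod2-backup
echo "one MMP write attempt roughly every $((interval_ms / leaves)) ms"            # ~16 ms
echo "suspend if no write lands within $((interval_ms * fail_intervals / 1000)) s" # 20 s

With the ~84 s read-only import time measured earlier, a 20 s window that starts counting before any MMP writes are issued (the #10873 bug) is comfortably exceeded, which is consistent with the suspensions seen here.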

The problem #10873 fixed is that the timer calculation had an error, so the drop-dead, suspend-the-pool time was passed during the import before MMP writes are even issued.

@stuartthebruce
Author

stuartthebruce commented Sep 10, 2020

After you imported with zfs_multihost_fail_intervals=0, then cleanly exported the pool, then reset zfs_multihost_fail_intervals to the default value, did imports fail with a console log message indicating the pool had been suspended?

Early on, subsequent imports succeeded with the default value after an initial import with zfs_multihost_fail_intervals=0. It was only after I pushed more data and metadata into additional datasets and snapshots that imports with the default value started failing again. When it failed, the pool-suspend message was recorded in /proc/spl/kstat/zfs/dbgmsg.

If the pool was suspended,

Yes.

then you are almost certainly hitting the bug I fixed in #10873 . The bug is not specific to imports after an unclean shutdown. If you have the ability to build zfs, please apply that patch and see if you're able to import this pool normally.

It has been quite a while since I had to build the Linux kernel (or kernel modules), but I will give that a try if I can find the time before 0.8.5 is hopefully released.

The problem #10873 fixed is that the timer calculation had an error, so the drop-dead, suspend-the-pool time was passed during the import before MMP writes are even issued.

Got it, and thanks for the explanation. That certainly sounds consistent with my observation that once zpool import started taking too long, enabling MMP would suspend the pool before the import could complete.

@stuartthebruce
Author

After upgrading to 0.8.5, this system is able to import with multihost=on and default kernel module settings. However, the import takes ~5 min, which seems a bit long for a pool with 60 HDDs.

@ofaaland
Contributor

ofaaland commented Oct 8, 2020

Thank you for confirming that the import is successful now.

I'm also surprised the import takes 5 minutes. If you have time to create a new issue with steps to reproduce (I'm guessing they are the same) and attach the /proc/spl/kstat/zfs/dbgmsg contents, I'll take a look.
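For example, something along these lines would capture what I need (a sketch reusing the commands shown earlier in this thread; the output file name is arbitrary):

echo 1 > /sys/module/zfs/parameters/zfs_dbgmsg_enable
time zpool import jbod2-backup
cat /proc/spl/kstat/zfs/dbgmsg > dbgmsg-import.txt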

@stuartthebruce
Author

The initial boot import time is reproducible with an export/import cycle, so I will open another ticket with additional information. Thanks.

@stuartthebruce
Author

Slow import ticket is #11034
