
Zpool iostat -v segfault when a vdev is added #6748

Closed
Alucardfh opened this issue Oct 10, 2017 · 2 comments · Fixed by #6872

@Alucardfh

System information

Type Version/Name
Distribution Name CentOS
Distribution Version 7.2.1511
Linux Kernel 3.10.0-693.1.1.el7.x86_64
Architecture x86_64
ZFS Version 0.7.0-58_gcf7684b
SPL Version 0.7.0-12_g9df9692

Describe the problem you're observing

zpool iostat -v segfaults when a vdev is added to a pool:

              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
LSI005-OST  3.16M  43.5T     17    401  1.14M  10.5M
  raidz2    3.16M  21.7T      0    304      0  4.66M
    mpatha      -      -      0    103      0  1.55M
    mpathb      -      -      0    100      0  1.55M
    mpathc      -      -      0    100      0  1.55M
Segmentation fault (core dumped)
[root@lsi005 ~]# zpool iostat -v 1
              capacity     operations     bandwidth 
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
LSI005-OST  7.43G  87.0T      0  1.02K  8.44K  1013M
  raidz2    1.86G  21.7T      0    626  2.11K   608M
    mpatha      -      -      0    208    720   203M
    mpathb      -      -      0    208    720   203M
    mpathc      -      -      0    209    720   203M
  raidz2    1.86G  21.7T      0    610  4.36K   587M
    mpathd      -      -      0    203  1.45K   196M
    mpathe      -      -      0    203  1.45K   196M
    mpathf      -      -      0    203  1.45K   196M

Describe how to reproduce the problem

1. Create a zpool
2. Run zpool iostat -v 1
3. Add a new vdev to the pool while zpool iostat -v 1 is running
4. zpool iostat segfaults
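
The steps above can be reproduced with file-backed vdevs; the pool name and file paths below are illustrative, and this needs root plus a loaded ZFS module:

```shell
# Create backing files and a pool with one 3-disk raidz2 top-level vdev
truncate -s 1G /var/tmp/vdev{1..6}
zpool create testpool raidz2 /var/tmp/vdev1 /var/tmp/vdev2 /var/tmp/vdev3

# Terminal 1: leave the stats loop running
zpool iostat -v testpool 1

# Terminal 2: grow the pool while the loop is iterating;
# on affected versions the iostat process segfaults shortly after
zpool add testpool raidz2 /var/tmp/vdev4 /var/tmp/vdev5 /var/tmp/vdev6
```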

Include any warning/errors/backtraces from the system logs

Oct 10 16:15:35 lsi005 kernel: [10696.250879] zpool[65607]: segfault at 100000008 ip 00007fe38eb0012b sp 00007ffc2b8f1840 error 4 in libnvpair.so.1.0.1[7fe38eaf5000+13000]
(gdb) core /var/spool/abrt/ccpp-2017-10-10-16:59:02-48245/coredump
[New LWP 48245]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
Core was generated by `zpool iostat -v 1'.
Program terminated with signal 11, Segmentation fault.
#0  nvlist_lookup_common (nvl=0x100000000, name=0x41cc6d "vdev_stats", type=DATA_TYPE_UINT64_ARRAY, nelem=0x7ffff3c85408, data=0x7ffff3c85420) at ../../module/nvpair/nvpair.c:1348
1348                (priv = (nvpriv_t *)(uintptr_t)nvl->nvl_priv) == NULL)
(gdb) bt
#0  nvlist_lookup_common (nvl=0x100000000, name=0x41cc6d "vdev_stats", type=DATA_TYPE_UINT64_ARRAY, nelem=0x7ffff3c85408, data=0x7ffff3c85420) at ../../module/nvpair/nvpair.c:1348
#1  0x00007f36e2957672 in nvlist_lookup_common (data=<optimized out>, nelem=<optimized out>, type=DATA_TYPE_UINT64_ARRAY, name=<optimized out>, nvl=<optimized out>) at ../../module/nvpair/nvpair.c:1347
#2  nvlist_lookup_uint64_array (nvl=<optimized out>, name=<optimized out>, a=<optimized out>, n=<optimized out>) at ../../module/nvpair/nvpair.c:1518
#3  0x0000000000412476 in print_vdev_stats (zhp=zhp@entry=0x2496620, name=name@entry=0x2536790 "raidz2", oldnv=0x100000000, newnv=0x2504bc8, cb=cb@entry=0x7ffff3c858e0, depth=depth@entry=2) at zpool_main.c:3657
#4  0x0000000000412c6d in print_vdev_stats (zhp=zhp@entry=0x2496620, name=<optimized out>, oldnv=oldnv@entry=0x248f280, newnv=newnv@entry=0x2502d20, cb=cb@entry=0x7ffff3c858e0, depth=depth@entry=0) at zpool_main.c:3775
#5  0x00000000004136a9 in print_iostat (zhp=0x2496620, data=0x7ffff3c858e0) at zpool_main.c:3881
#6  0x00000000004062f8 in pool_list_iter (zlp=zlp@entry=0x248ac30, unavail=unavail@entry=0, func=func@entry=0x413610 <print_iostat>, data=data@entry=0x7ffff3c858e0) at zpool_iter.c:178
#7  0x0000000000411ef2 in zpool_do_iostat (argc=0, argv=0x7ffff3c89aa0) at zpool_main.c:4694
#8  0x0000000000405a94 in main (argc=4, argv=0x7ffff3c89a88) at zpool_main.c:8042

Not that this bug is critical ... it's just that I was doing some automated performance testing and have been hitting it quite a few times ^^.


h1z1 commented Nov 5, 2017

Had this happen today on 0.7.1. In my case I was moving a ZIL from one pool to another.

@tonyhutter
Contributor

This works for me:

diff --git a/cmd/zpool/zpool_main.c b/cmd/zpool/zpool_main.c
index 052b429cc..50d70a74e 100644
--- a/cmd/zpool/zpool_main.c
+++ b/cmd/zpool/zpool_main.c
@@ -3736,6 +3736,12 @@ children:
            &oldchild, &c) != 0)
                return (ret);
 
+       /*
+        * Use MIN() here to deal with VDEVs being added while we're viewing
+        * the stats.
+        */
+       children = MIN(c, children);
+
        for (c = 0; c < children; c++) {
                uint64_t ishole = B_FALSE, islog = B_FALSE;
 
@@ -3794,6 +3800,12 @@ children:
            &oldchild, &c) != 0)
                return (ret);
 
+       /*
+        * Use MIN() here to deal with VDEVs being added while we're viewing
+        * the stats.
+        */
+       children = MIN(c, children);
+
        if (children > 0) {
                if ((!(cb->cb_flags & IOS_ANYHISTO_M)) && !cb->cb_scripted &&
                    !cb->cb_vdev_names) {

I'll put together a PR.

@tonyhutter tonyhutter self-assigned this Nov 15, 2017
tonyhutter added a commit to tonyhutter/zfs that referenced this issue Nov 15, 2017
Fix a segfault when running 'zpool iostat -v 1' while adding
a VDEV.

Signed-off-by: Tony Hutter <[email protected]>
Closes openzfs#6748
behlendorf pushed a commit that referenced this issue Dec 6, 2017
Fix a segfault when running 'zpool iostat -v 1' while adding
a VDEV.

Reviewed-by: George Melikov <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Tony Hutter <[email protected]>
Closes #6748 
Closes #6872