
Zpool iostat show ssd #5117

Closed
wants to merge 1 commit into from

Conversation

inkdot7
Contributor

@inkdot7 inkdot7 commented Sep 16, 2016

Some questions:

  • Currently the output is narrow, using only one character, and the header overlaps the column-separating spaces:
    [screenshot: zpool_iostat_ssd]
    Should more space be used, and the info spelled out (yes, no and mix)?
  • Since it adds a field and thus may break scripts, should it depend on a command-line option to zpool?
  • The question in the second commit for module/zfs/vdev.c

@inkdot7 inkdot7 force-pushed the zpool_iostat_show_ssd branch from 5316348 to a41306f Compare September 20, 2016 04:58
Member

@rlaager rlaager left a comment

I think this should be "rotational" rather than "rotary", to match the Linux kernel.
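
For reference, the kernel flag being matched here is exposed per block device at /sys/block/<dev>/queue/rotational (1 = rotational, 0 = non-rotational). Below is a minimal, self-contained C sketch of reading it directly; the program and argument names are illustrative and not part of the patch.

/* Read the Linux kernel's rotational flag for a block device.
 * Usage example: ./rotational sda */
#include <stdio.h>

int
main(int argc, char **argv)
{
	char path[128];
	FILE *f;
	int rot;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <blockdev>\n", argv[0]);
		return (1);
	}
	snprintf(path, sizeof (path), "/sys/block/%s/queue/rotational",
	    argv[1]);
	f = fopen(path, "r");
	if (f == NULL || fscanf(f, "%d", &rot) != 1) {
		fprintf(stderr, "cannot read %s\n", path);
		return (1);
	}
	fclose(f);
	printf("%s: %s\n", argv[1], rot ? "rotational" : "non-rotational");
	return (0);
}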

@inkdot7
Contributor Author

inkdot7 commented Sep 20, 2016

@rlaager thanks for the suggestion. I have made the change in a temporary branch zpool_iostat_show_ssd_2 of my zfs clone. Do I just force-push to the branch for the pull request? I'm wondering because I guess that will kill some of the history the suggestion is based on. (Although this is a rather harmless one.)

@rlaager
Member

rlaager commented Sep 20, 2016

I believe a force push will work. It's fine to throw away the history. That's normal in pull requests.

@inkdot7 inkdot7 force-pushed the zpool_iostat_show_ssd branch from b15ec3a to c194ebd Compare September 20, 2016 22:24
@kernelOfTruth
Contributor

@inkdot7 force-pushing is fine; I and the others do the same :)

@inkdot7
Contributor Author

inkdot7 commented Sep 21, 2016

@rlaager new version pushed.

Member

@rlaager rlaager left a comment

The rotary -> nonrotational changes look good.

Member

@rlaager rlaager left a comment

I'm currently at the OpenZFS Hackathon. I spoke to a couple people, and they had the following feedback:

  1. This probably makes more sense in zpool status. The rotational state of disks is not changing second-by-second.
  2. This is, in practice, Linux-specific, which may argue against its acceptance at all.
  3. The kernel's idea of what is non-rotational may not be accurate.
  4. Why not just use standard OS tools (e.g. hdparm) and interfaces (e.g. /proc) to determine this? Why does this need to exist in the ZFS tools at all?

Can you comment in light of this feedback? If you really think this still belongs in ZFS, how do you feel about moving it to zpool status?

@inkdot7
Contributor Author

inkdot7 commented Sep 27, 2016

Thanks @rlaager for investigating this!

  1. The reason it was put in zpool iostat is that I find it very useful there when developing Rotor vector allocation (small records favour SSD) #4365 (which btw has reached an operational state, perhaps warranting some discussion too :-) ). These commits are a non-required part of that development.
  2. The advantage is that one does not have to remember/cross-check which device is of what kind (did the SSD become sda or sdb during boot today...?); the interesting ones are obtained directly.
  3. Being Linux-specific could be another reason to hide it unless zpool is given an option flag?
  4. Well... would that be a reason to be able to see what ZFS thinks (as told by the kernel)?
    I would also be happy with it in zpool status, but found most use in iostat so far.

@inkdot7
Contributor Author

inkdot7 commented Sep 27, 2016

Also zpool iostat shows the individual vdev capacities/allocations, making this a useful location. (And I thought it would disturb less here than in status.)

@inkdot7 inkdot7 force-pushed the zpool_iostat_show_ssd branch from c194ebd to e5010bf Compare October 2, 2016 07:21
@Rudd-O
Contributor

Rudd-O commented Oct 5, 2016

Seconding point 4: "Why not just use standard OS tools (e.g. hdparm) and interfaces (e.g. /proc) to determine this? Why does this need to exist in the ZFS tools at all?"

@rlaager
Member

rlaager commented Oct 6, 2016

@behlendorf, I recommend rejecting this pull request, primarily on the basis that it duplicates standard OS tool functionality without sufficient reason.

@inkdot7
Contributor Author

inkdot7 commented Oct 6, 2016

Well, whatever happens to this PR, I'll keep the branch around. It could be that once dedicated vdev metadata / small block allocation functionality is included, it is found to be good to give the user a convenient way to see that the configured vdevs are of the intended nonrotational kinds.

(I have a pending update of the mixed-case logic - it did not do the 'right thing' for deeply nested devices (e.g. the top-level). But that is simpler once vdev_open_children() has condensed its logic.)

@rlaager
Member

rlaager commented Oct 6, 2016

If you end up merging it with that work, that's another argument in favor of showing this in zpool status instead of zpool iostat. That patch shows the metadata types in zpool status.

@inkdot7
Contributor Author

inkdot7 commented Oct 6, 2016

The allocation class version shows them in both status and iostat -v.

@behlendorf
Contributor

behlendorf commented Oct 6, 2016

  1. This probably makes more sense in zpool status. The rotational state of disks is not changing second-by-second.

In the past I've wanted this exact functionality in both commands. It's helpful when you want to see at a glance if a device is performing roughly within expectations. For example, I don't expect many IOPs out of a HDD but I do from an SSD.

  2. This is, in practice, Linux-specific, which may argue against its acceptance at all.

That's alright, it just needs to be optional. We shouldn't have to limit the Linux implementation due to the lack of functionality on other platforms.

  3. The kernel's idea of what is non-rotational may not be accurate.

This is exactly the reason why this should be exposed. Internally ZFS uses this value as one parameter in controlling how/when/if requests are merged. Knowing what the kernel actually thinks about this disk can be insightful.

  4. Why not just use standard OS tools (e.g. hdparm) and interfaces (e.g. /proc) to determine this? Why does this need to exist in the ZFS tools at all?

Because often it's laborious to map the vdev name back to the device to determine this information. We didn't need to add the -gLP options either, but they're convenient (d2f3e29).

That all being said, I'm not a huge fan of the current output. Can we do something more subtle, like leaving the default output as is and optionally adding a * (or other symbol) after the vdev name to indicate it's non-rotational? Better suggestions welcome!

$ sudo zpool iostat -Pvs rpool
                capacity     operations    bandwidth
pool         alloc   free   read  write   read  write
-----------  -----  -----  -----  -----  -----  -----
rpool*       9.74G  20.0G      0      1  1.32K  15.0K
  /dev/sdd1* 9.74G  20.0G      0      1  1.32K  15.0K

$ sudo zpool status -Ps rpool
  pool: rpool
 state: ONLINE
  scan: scrub repaired 0 in 0h4m with 0 errors on Sun Sep 11 00:28:51 2016
config:

    NAME         STATE     READ WRITE CKSUM
    rpool*       ONLINE       0     0     0
      /dev/sdd1* ONLINE       0     0     0

errors: No known data errors

@rlaager
Member

rlaager commented Oct 6, 2016

I'm marking this approved so my "Requested changes" is not blocking this.

As long as the default stays as is, this shouldn't be a problem. If the default were to change, then we'd have a big problem for GRUB and others that parse the output of zpool status. It might be worth considering that, even in the modified output, the * not be directly cuddled up next to the device name. That is, use /dev/sda1 * rather than /dev/sda1*.

One of the concerns above was about how to fit "ssd" in the zpool iostat output. One option would be to use "NR" instead, which could be written with N on the top line and the R on the bottom line. Then, you could have values of Y, N, and M (for mixed).

@tonyhutter
Contributor

Crazy idea:

Add a 'command' (-c) option to zpool [status|iostat] that runs an arbitrary command for each vdev and prints the result in the last column. $VDEV_PATH is passed as an environment variable for the command to use. For example, this command would print out whether the drive is rotational, along with some other info:

$ zpool status -c 'lsblk --nodeps -no size,rota,model,name $VDEV_PATH'

    NAME        STATE     READ WRITE CKSUM  CMD
    mypool      ONLINE       0     0     0  
      mirror-0  ONLINE       0     0     0  
        A0      ONLINE       0     0     0  "7.3T    1       43000b40024812332"
        A1      ONLINE       0     0     0  "7.3T    1       43000b4002482222a"
        A2      ONLINE       0     0     0  "7.3T    1       43000b400248b2afa"
        A3      ONLINE       0     0     0  "7.3T    1       43000b400248dcca2"
        A4      ONLINE       0     0     0  "7.3T    1       43000b400248ba456"
       sda      ONLINE       0     0     0  "7.3T    1 ST8000NM0075"
       sdb      ONLINE       0     0     0  "7.3T    1 ST8000NM0075"
       sdc      ONLINE       0     0     0  "120G    0 SV300S37A120G"

Another example:

$ zpool status -c 'smartctl -a $VDEV_PATH | grep "Current Drive Temperature:"'

    NAME        STATE     READ WRITE CKSUM  CMD
    mypool      ONLINE       0     0     0  
      mirror-0  ONLINE       0     0     0  
        A0      ONLINE       0     0     0  "Current Drive Temperature: 40C"
        A1      ONLINE       0     0     0  "Current Drive Temperature: 32C"
        A2      ONLINE       0     0     0  "Current Drive Temperature: 33C"
        A3      ONLINE       0     0     0  "Current Drive Temperature: 43C"
        A4      ONLINE       0     0     0  "Current Drive Temperature: 28C"
       sda      ONLINE       0     0     0  "Current Drive Temperature: 20C"
       sdb      ONLINE       0     0     0  "Current Drive Temperature: 20C"
       sdc      FAULTED      0     0     0  "Current Drive Temperature: 109C"

Yes, it's a huge, giant hack, and not very elegant, but it's very extensible and would work on all platforms. You would want zpool to run the commands in parallel for each vdev, of course.

@Rudd-O
Contributor

Rudd-O commented Oct 7, 2016

I would strongly prefer @tonyhutter 's suggestion instead. It's more flexible, it can give out much better information, and it doesn't alter the default output of zpool, upon which many scripts already depend. It's the first proposal in this thread that is (a) risk-free and (b) highly interesting to me, as I would find tons of uses for it.

For showing whether a device is rotational or not, there could be a separate switch that adds a new column, or a separate utility that shows what ZFS thinks about the device. It's looking more and more like we need a utility that will show us properties of the vdevs themselves, just like we have zfs get and zpool get. zvdev get pool/path/to/vdev?

@behlendorf
Contributor

I think we're all in agreement that the default output shouldn't be changed. And I kinda like @tonyhutter's crazy idea too.

@richardelling
Contributor

There is a thin line between crazy and clever; I think @tonyhutter has a clever idea here.
From a practical perspective, real monitoring is done separately and not scraped from zpool output. But for a quick glance, there are many times where it is convenient to run commands against the VDEV_PATH and join the output.

That said, the actual implementation can be tricky. When a device is unhealthy, it can take a long time (minutes) to respond to any command. We do not want to hold any locks while this happens. Some reasonable timeout, and feedback to the user for command failures, is needed.
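
One possible shape for that timeout handling (a hedged sketch only, not what the eventual -c implementation does): run each per-vdev command in a child shell with VDEV_PATH exported and an alarm-based deadline, and report a failure if the command is killed by the timeout. The function name, the example command, and the 10-second deadline are all illustrative.

/* Hedged sketch: run a user command for one vdev with a timeout.
 * VDEV_PATH is exported to the child, mirroring the -c proposal above. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <signal.h>
#include <sys/wait.h>

static int
run_vdev_cmd(const char *cmd, const char *vdev_path, int timeout_secs)
{
	pid_t pid = fork();

	if (pid < 0)
		return (-1);
	if (pid == 0) {
		/* Child: export VDEV_PATH, die if we outlive the deadline. */
		setenv("VDEV_PATH", vdev_path, 1);
		alarm(timeout_secs);
		execl("/bin/sh", "sh", "-c", cmd, (char *)NULL);
		_exit(127);
	}

	int status;
	if (waitpid(pid, &status, 0) < 0)
		return (-1);
	if (WIFSIGNALED(status) && WTERMSIG(status) == SIGALRM) {
		fprintf(stderr, "%s: command timed out\n", vdev_path);
		return (-1);
	}
	return (WIFEXITED(status) ? WEXITSTATUS(status) : -1);
}

int
main(void)
{
	/* Example: 10-second deadline; device path is a placeholder. */
	return (run_vdev_cmd("lsblk --nodeps -no size,rota,name $VDEV_PATH",
	    "/dev/sda", 10));
}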

@inkdot7 inkdot7 force-pushed the zpool_iostat_show_ssd branch from e5010bf to fd9e41a Compare October 8, 2016 11:38
@inkdot7
Contributor Author

inkdot7 commented Oct 8, 2016

I added the option -s, as that looked like a suggestion in @behlendorf's example.

I looked through the existing flags of zpool iostat and found none that could be used to unhide this output. Same for zpool status. I.e. a new flag seems needed.

I have not made any arrangements for zpool status yet. In zpool import the flag -s is already in use, but adding this kind of display there has no value. However, I could imagine it being useful in zpool add, together with the dry-run -n, to see what a pool would look like before actually adding; -s is not taken there. (I'm just trying to think ahead, not suggesting it.) Stick with -s, use -S, or another suggestion?

I placed the marks in the rightmost column of the pool (name) field, with a blank before them. I did try putting the mark one space after the name, but that was harder to read quickly when the names had different lengths and thus got misaligned.

These are the alternatives so far; I'm happy to change them... The * is for nonrotational (solid-state), and ^ for a mixed vdev.

For @behlendorf's example, I think one would have to add one character to the width of the name field though; extending into the column separator could complicate things:

$ sudo zpool iostat -Pvs rpool
                capacity     operations    bandwidth
pool         alloc   free   read  write   read  write
-----------  -----  -----  -----  -----  -----  -----
rpool*       9.74G  20.0G      0      1  1.32K  15.0K
  /dev/sdd1* 9.74G  20.0G      0      1  1.32K  15.0K
  /dev/sdz1  9.74G  20.0G      0      1  1.32K  15.0K

Current patch:

$ sudo zpool iostat -Pvs rpool
                  capacity     operations    bandwidth
pool           alloc   free   read  write   read  write
-------------  -----  -----  -----  -----  -----  -----
rpool       ^  9.74G  20.0G      0      1  1.32K  15.0K
  /dev/sdd1 *  9.74G  20.0G      0      1  1.32K  15.0K
  /dev/sdz1    9.74G  20.0G      0      1  1.32K  15.0K

This felt harder to read:

$ sudo zpool iostat -Pvs rpool
                  capacity     operations    bandwidth
pool           alloc   free   read  write   read  write
-------------  -----  -----  -----  -----  -----  -----
rpool ^        9.74G  20.0G      0      1  1.32K  15.0K
  /dev/sdd1 *  9.74G  20.0G      0      1  1.32K  15.0K
  /dev/sdz1    9.74G  20.0G      0      1  1.32K  15.0K

Column of its own:

$ sudo zpool iostat -Pvs rpool
             N     capacity     operations    bandwidth
pool         R  alloc   free   read  write   read  write
-----------  -  -----  -----  -----  -----  -----  -----
rpool        m  9.74G  20.0G      0      1  1.32K  15.0K
  /dev/sdd1  y  9.74G  20.0G      0      1  1.32K  15.0K
  /dev/sdz1  n  9.74G  20.0G      0      1  1.32K  15.0K

When looking at the output and checking that it allocates the right amount of space, I noticed that max_width() calls zpool_vdev_name() with VDEV_NAME_TYPE_ID unconditionally (which is to format like mirror-0 instead of mirror). But when actually printing the names, iostat does not use that flag (status and import do). So far this did not show any difference, as the names even with an added number are 8 characters, and with a depth indent of 2, that made the minimum name width 10 characters. But now it adds some more space before the marks if the names are short. Output is still aligned, so there is no real issue. I added a commit adjusting max_width() and its invocations last in this stack. Should I make it a PR of its own?

@behlendorf
Contributor

Upon seeing the options I actually think the dedicated column is probably best, even though I originally proposed otherwise. It's going to be the simplest and most consistent way to do this. Using -S seems reasonable to avoid a conflict with zpool import if this option gets extended to be available there; again, similar to -PLg.

@inkdot7 inkdot7 force-pushed the zpool_iostat_show_ssd branch from fd9e41a to fd440b8 Compare October 8, 2016 17:31
@inkdot7 inkdot7 force-pushed the zpool_iostat_show_ssd branch from 044579a to d823d67 Compare December 14, 2016 00:23
@inkdot7
Contributor Author

inkdot7 commented Dec 14, 2016

@tonyhutter @behlendorf @richardelling I believe I have handled the outstanding issues. Changing cb_kind to cb_flag turned out nicely. I also reworded the man page entry; it became smaller. Did it become better?

@inkdot7 inkdot7 force-pushed the zpool_iostat_show_ssd branch 3 times, most recently from b4fc936 to 08be93c Compare December 21, 2016 06:47
@tonyhutter
Contributor

@inkdot7 apologies for not looking at this earlier. I'm off for Christmas starting now and won't be back in the office until early January. I can take a look at it then, or if @behlendorf approves it in the meantime, I'm fine with that too.

@tonyhutter
Contributor

Just gave this a test:

./cmd/zpool/zpool iostat -vk
                         capacity     operations     bandwidth       
pool                   alloc   free   read  write   read  write  kind
---------------------  -----  -----  -----  -----  -----  -----  ----
mypool                  336K  1008M      0      9  61.3K   252K   hdd
  mirror                336K  1008M      0      8  46.0K   204K   mix
    /tmp/inkdot/file1      -      -      0      3  15.3K  68.0K  file
    A0                     -      -      0      2  15.3K  68.0K   hdd
    A1                     -      -      0      2  15.3K  68.0K   hdd
logs                       -      -      -      -      -      -     -
  A2                       0  7.25T      0      0  15.3K  48.1K   hdd
cache                      -      -      -      -      -      -     -
  A3                     30K  7.28T      0      0  4.42K  1.76K   hdd
  A4                     10K  7.28T      0      0  4.43K  1.11K   hdd
---------------------  -----  -----  -----  -----  -----  -----  ----

You'll want to fix it so that mypool shows a kind of "mix" since it has a mix of disks and a file. Or alternatively, just print the kind for the leaf vdevs, and print a dash (-) for the pool and parent vdevs.

Also, can you rebase this on top of master?

@inkdot7 inkdot7 force-pushed the zpool_iostat_show_ssd branch from 08be93c to 66509f6 Compare January 10, 2017 22:10
@inkdot7
Contributor Author

inkdot7 commented Jan 10, 2017

Thanks @tonyhutter for looking at this. Rebased onto master.

The hdd shown for mypool is due to the log device being of hdd kind. (Log devices are apparently included in vdev_open_children(), while cache devices are not.) Since some vdev (here the log, but still) only has hdd performance, the entire pool is marked as such. In practice it should make no difference, as one would only use ssd devices for both log and cache, and those do not on their own 'downgrade' the kind determined for the entire pool. I think it makes sense this way?

@chrisrd
Contributor

chrisrd commented Jan 10, 2017

Late to the party and completely bike shedding so feel free to ignore or disparage, but...

The "kind" column name feels quite foreign to me, could it be "type" instead? For what it's worth, both zpool status and zpool iostat have the -t flag available.

In the context of computing, "type" is something I'm used to seeing but I don't recall ever seeing "kind" used elsewhere - that's actually what caught my eye scanning the output example above, I did a bit of a double take on seeing "kind" and had to investigate further to see what it meant.

From a purely English language point of view, I would suggest a "type" is a more precise category versus a "kind" being a somewhat more broad, vague or fuzzy category.

@inkdot7
Contributor Author

inkdot7 commented Jan 11, 2017

@chrisrd, an easy change, but I'd need some consensus.
-k and "kind" were used as a result of the comment on Oct 20. Also, I tried to use a flag that is also free in import and add (which can also show device lists); see the comment on Oct 8.

@tonyhutter
Contributor

Since some vdev (here the log, but still) only has hdd performance, the entire pool is marked as such.

So it sounds to me like "kind" for non-leaf-vdevs means "the slowest drive in the group"? Is that right? If so, then mirror should be "hdd" instead of "mix", correct? Just trying to understand the definition of "kind" for non-leaf-vdevs:

mypool                  336K  1008M      0      9  61.3K   252K   hdd
  mirror                336K  1008M      0      8  46.0K   204K   mix
    /tmp/inkdot/file1      -      -      0      3  15.3K  68.0K  file
    A0                     -      -      0      2  15.3K  68.0K   hdd
    A1                     -      -      0      2  15.3K  68.0K   hdd
...

Also, I'm fine with the column being called "type" but still using -k.

Can you also squash your commits?

@inkdot7
Contributor Author

inkdot7 commented Jan 11, 2017

Since a file dev is considered to have ssd performance, the mirror is marked mix: mix is for mirrors with hdds but at least one ssd, which thanks to #4334 gives ssd-like performance when reading. (Entire pools inherit mix too, if they have nothing worse in them.) Perhaps "mix" could be changed to something better? (But I do not have any suggestion.) I have squashed the commits, but did not change "kind" yet; I will do that when I'm at a computer where I can test.
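
To make the rule above concrete, here is a small, self-contained C sketch of the aggregation as described in this thread and in the commit message: only mirrors become mix, other parents (including the pool) take the worst of their children, and file vdevs count as ssd. The enum and function names are made up for illustration and are not the patch's actual identifiers.

#include <stdio.h>

typedef enum { MT_HDD = 0, MT_MIX = 1, MT_SSD = 2 } media_type_t;

static const char *mt_name[] = { "hdd", "mix", "ssd" };

/*
 * Classify an interior vdev from its children's types.  Only mirrors
 * may become "mix"; every other parent (raidz, the pool root, ...) is
 * as slow as its slowest child, with mix ranking between hdd and ssd.
 */
static media_type_t
classify_parent(int is_mirror, const media_type_t *child, int n)
{
	media_type_t worst = MT_SSD, best = MT_HDD;

	for (int i = 0; i < n; i++) {
		if (child[i] < worst)
			worst = child[i];
		if (child[i] > best)
			best = child[i];
	}
	/* A mirror holding both hdds and at least one ssd reads mostly
	 * from the ssd (see #4334), so it is called out as "mix". */
	if (is_mirror && worst == MT_HDD && best == MT_SSD)
		return (MT_MIX);
	return (worst);
}

int
main(void)
{
	/* Mirror of one hdd and one ssd-like file vdev -> "mix". */
	media_type_t mirror_children[] = { MT_HDD, MT_SSD };
	media_type_t mirror_type = classify_parent(1, mirror_children, 2);
	/* Pool made of that mirror plus an hdd log device -> "hdd". */
	media_type_t pool_children[] = { mirror_type, MT_HDD };

	printf("mirror: %s\n", mt_name[mirror_type]);
	printf("pool:   %s\n", mt_name[classify_parent(0, pool_children, 2)]);
	return (0);
}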

@inkdot7 inkdot7 force-pushed the zpool_iostat_show_ssd branch from 66509f6 to 4f487f3 Compare January 11, 2017 21:40
@inkdot7
Contributor Author

inkdot7 commented Jan 11, 2017

Changed "kind" -> "type" when printed, kept -k. For some consistency, also changed constants etc. in the code from KIND to ROT_TYPE. Just TYPE was not good, as there are already many kinds of TYPE. I am not sure if this really was an improvement. I have the previous commit, if needed.

@chrisrd
Contributor

chrisrd commented Jan 12, 2017

Thanks, my brain no longer does a double take on seeing kind :-)

More bike shedding: it's not really only about rotational devices any more (e.g. file), so how about MEDIA_TYPE rather than ROT_TYPE?

@inkdot7 inkdot7 force-pushed the zpool_iostat_show_ssd branch 3 times, most recently from d71b9ac to 289a948 Compare January 12, 2017 10:36
@inkdot7
Contributor Author

inkdot7 commented Jan 12, 2017

Thanks, MEDIA_TYPE looks better in the code. I also changed the printed heading to "media" and used the flag -M, which was free. (But I do have the previous commit, printing "type" via -k, handy.)

@richardelling
Contributor

@kpande agree, the -c option offers a large superset of this functionality

@chrisrd
Contributor

chrisrd commented Jan 13, 2017

Agree, -c is a superset, making this option unnecessary. The only thing missing from -c is a nicely formatted heading for the extra column. Perhaps a "-C string" option (a column heading for -c) would be nice there? Sadly I don't have the time to try that myself.

@inkdot7
Contributor Author

inkdot7 commented Jan 13, 2017

What -c does not show is the inheritance for non-leaf vdevs, nor does it expose the internal state of the zfs module; see the comment from Oct 7. That's the reason I continued adjusting this.

@tonyhutter
Contributor

I think the folks who are going to look at MEDIA fall into two categories:

  1. People working on the various storage classes patches (like Rotor vector allocation (small records favour SSD) #4365 and Metadata Allocation Class #3779 ), who want to know what ZFS internally thinks the drive type is. For those people this patch makes sense, since there's no other way to get at that internal data.

  2. People who are interested in what type of drives are in the pool from an administrative standpoint. For those people -c is a better option, since it requires no code changes, and is easier to extend to newer drive types as they come along (XPoint, XPoint DIMM, NVMe, multipath, etc).

Other random points:

  • I just tested with two pools: all HDD behind multipath, and all SSD behind multipath, and was happy to see that the patch correctly identified the disks as HDD or SSD.
	NAME        STATE     READ WRITE CKSUM MEDIA
	jet1        ONLINE       0     0     0   ssd
	  mirror-0  ONLINE       0     0     0   ssd
	    U0      ONLINE       0     0     0   ssd
	    U1      ONLINE       0     0     0   ssd
	    L2      ONLINE       0     0     0   ssd
...
	NAME        STATE     READ WRITE CKSUM MEDIA
	mypool5     ONLINE       0     0     0   hdd
	  mirror-0  ONLINE       0     0     0   hdd
	    U10     ONLINE       0     0     0   hdd
	    U11     ONLINE       0     0     0   hdd
...
  • I really think you should only print the MEDIA values for leaf vdevs. That's all that people are going to care about anyway. It would make for much simpler code, and remove any confusion over what "mixed" means.

  • I do see down the road that we might change the definition of "ssd" to be something more specific. For example, what is reported as "ssd" now may be reported as "NVMe" in a later release, and that could break scripts. I suppose you could add a line in the man page to say "these definitions may change in the future and should not be considered a stable API" or something like that.

…e or mixed.

Keep track of mixed nonrotational (ssd+hdd) devices.
Only mirrors are mixed.  If a pool consists of several mixed vdevs, it is
mixed if all vdevs are either mixed, or ssd (fully nonrotational).

Pass media type info to the zpool command (mainly whether devices are solid-state or rotational).
Info is passed in ZPOOL_CONFIG_VDEV_STATS_EX -> ZPOOL_CONFIG_VDEV_MEDIA_TYPE.
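
A hedged sketch of what consuming that nvlist path might look like on the zpool side. It assumes the literal key strings "vdev_stats_ex" (the usual value of ZPOOL_CONFIG_VDEV_STATS_EX) and "vdev_media_type" (an assumed value for the new ZPOOL_CONFIG_VDEV_MEDIA_TYPE); the toy main() only stands in for a real vdev config. Build with -lnvpair.

#include <stdio.h>
#include <libnvpair.h>

/*
 * Return the media type string for one vdev config nvlist, or "-" if the
 * module did not provide one.  Key strings are assumptions, as noted above.
 */
static const char *
vdev_media_type(nvlist_t *vdev_nv)
{
	nvlist_t *stats_ex;
	char *media;

	if (nvlist_lookup_nvlist(vdev_nv, "vdev_stats_ex", &stats_ex) == 0 &&
	    nvlist_lookup_string(stats_ex, "vdev_media_type", &media) == 0)
		return (media);
	return ("-");
}

int
main(void)
{
	/* Toy vdev config standing in for what the module would fill in. */
	nvlist_t *vdev, *stats_ex;

	nvlist_alloc(&stats_ex, NV_UNIQUE_NAME, 0);
	nvlist_add_string(stats_ex, "vdev_media_type", "ssd");
	nvlist_alloc(&vdev, NV_UNIQUE_NAME, 0);
	nvlist_add_nvlist(vdev, "vdev_stats_ex", stats_ex);

	printf("media: %s\n", vdev_media_type(vdev));

	nvlist_free(stats_ex);
	nvlist_free(vdev);
	return (0);
}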
@inkdot7 inkdot7 force-pushed the zpool_iostat_show_ssd branch from 289a948 to fc7d355 Compare February 1, 2017 09:20
@inkdot7
Contributor Author

inkdot7 commented Feb 1, 2017

I do see down the road that we might change the definition of "ssd" to be something more specific. For example, what is reported as "ssd" now may be reported as "NVMe" in a later release, and that could break scripts. I suppose you could add a line in the man page to say "these definitions may change in the future and should not be considered a stable API" or something like that.

Thanks @tonyhutter , added to the man page.

I really think you should only print the MEDIA values for leaf vdevs. That's all that people are going to care about anyway. It would make for much simpler code, and remove any confusion over what "mixed" means.

Before removing that, I'd like to suggest that a mixed mirror vdev is not a common setup. Admins that do such a setup are likely to know what it means, and perhaps even appreciate the confirmation. Others will never see such output, unless they misconfigured, in which case I think they will be happy to have been alerted once they figure out what it meant.

@behlendorf
Contributor

Closing. This functionality is being added to zpool iostat/status -c in PR #6121.

@behlendorf behlendorf closed this May 19, 2017