vdev_iokit branch #178

Closed
lundman opened this issue May 9, 2014 · 108 comments

@lundman
Contributor

lundman commented May 9, 2014

Thanks for trying out the iokit replacement. It does indeed appear to function well, and receives an almost identical benchmark score to master.

https://github.com/openzfsonosx/zfs/blob/vdev-iokit/module/zfs/vdev_iokit.c#L88

Better watch out with those comments; you just took out the clearing on line 93. But since it is the free function, it doesn't matter ;)

https://github.com/openzfsonosx/zfs/blob/vdev-iokit/module/zfs/vdev_iokit.c#L340
https://github.com/illumos/illumos-gate/blob/master/usr/src/uts/common/fs/zfs/vdev_disk.c#L416

Is the plan to extend it to attempt open_by_guid as well? I see you implemented something in vdev_iokit_util.cpp - is adding that ability part of your roadmap? If it can handle the device /dev/diskX moving, that would make it quite attractive.

:)

@evansus
Contributor

evansus commented May 16, 2014

@lundman open_by_guid and find_by_guid are now implemented - called in vdev_iokit_open after attempting the usual vdev_path and vdev_physpath.
Also added vdev_iokit_find_pool that can check all disks for a pool with matching name.

In both, if multiple vdevs are found with a matching guid or pool name, the one with the best (highest) txg number is used. by_path simply checks for a matching BSD name and fails if it can't be found or opened.

@ilovezfs
Contributor

@evansus
Are these secondary/tertiary/etc. code paths followed if the primary path for opening does not work? If so, some questions:

  1. What is the primary path?
  2. Have you "forced" the code to exercise the secondary/tertiary/etc. code paths by artificially causing the primary path to fail? If so, how?
  3. If you have not done (2), how have you tested this code?
  4. Have you tested any "failure" (e.g., the doggy unplugs a usb cable) scenarios yet?
  5. Where does this code originate? Is FreeBSD the ultimate upstream for it, or was their code based on something in illumos?
  6. Does our immediate upstream ZFS on Linux have analogous code, or are they solely relying on udev to provide the /dev/disk/by-* paths?

@evansus
Contributor

evansus commented May 22, 2014

Primary path is as close to Illumos vdev_disk.c as possible -
First, for the whole-disk case, it tries to open by path+s0, then by path. If the vdev is not labeled as a whole disk, it tries to open by path, then physpath, and last by guid.
That way the Illumos-compliant code path is followed, and failing that, we try to find the disk by its guid. FreeBSD's implementation is very similar, following vdev_disk.c and only using the guid search after all known routes have been explored.
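Roughly, the fallback chain looks like the sketch below (the helper names are illustrative shorthand for what vdev_iokit_open and vdev_iokit_util.cpp do, not the exact functions):

/* illustrative sketch of the open fallback order */
error = EINVAL;
if (vd->vdev_wholedisk)
	error = vdev_iokit_open_by_path(dvd, path_plus_s0);	/* path + "s0" */
if (error != 0)
	error = vdev_iokit_open_by_path(dvd, vd->vdev_path);
if (error != 0 && vd->vdev_physpath != NULL)
	error = vdev_iokit_open_by_path(dvd, vd->vdev_physpath);
if (error != 0)
	error = vdev_iokit_open_by_guid(dvd, vd->vdev_guid);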

2 and 3)
I have tested by commenting out the find by path and by physpath routes, and verified that find by guid succeeds. Also, some of the tests within a VM showed that find by guid was being used at times - for example performing a hard reset of the VMWare fusion VM can sometimes cause disk0 and disk1 to swap places/names. Using IOLog to debug and watch the progress, I could see find by path match on one attempt, and find by guid match on another attempt (e.g. because find by path checked disk0s2, but it was not the same disk as last reboot).

Haven't tested failures and unexpected disconnects as yet. I verified that mirrored and raidZ were functional at one point, but haven't retested with some of the more recent changes.
Although one thing I have tested, and have not run into issues with, is sleep/wake. Internal and USB disks have resumed operations without a problem.

Most of the vdev_iokit.c code is following Illumos vdev_disk.c. I started with a proof-of-concept and then reworked this file to follow upstream as much as possible. vdev_iokit_util.cpp is more loosely based on bits from vdev_geom.c from FreeBSD.
Again it follows the standard route, and only attempts to find by guid when necessary. The files vdev_iokit_util.cpp and vdev_iokit_context.cpp provide the backend for functions that do not exist on OS X - replacements for vdev_disk_ldi_physio etc.

ZoL is similar to upstream vdev_disk.c, but then diverges in vdev_disk_open:
https://github.com/zfsonlinux/zfs/blob/master/module/zfs/vdev_disk.c#L269-295

It instead uses vdev_bdev_open (actually a #define macro) and vdev_disk_rrpart to detect and open disks. And yes, it relies on udev by-id paths or custom udev rules to locate changed disks.

... you can provide your own udev rule to flexibly map the drives as you see fit.
It is not advised that you use the /dev/[hd]d devices which may be reordered
due to probing order.

I'm running this on my retina MacBook Pro, and haven't had any issues. Running 10.9.3 and using a GPT partition on the internal SSD as a single-disk pool. For testing I use external USB 2 and USB 3 hard drives and flash drives, as well as VMWare fusion VMs.

Please let me know if you have additional questions!
Thanks,
Evan

@lundman
Contributor Author

lundman commented May 23, 2014

Yes, the code similarities between vdev_iokit, geom, and Solaris are comforting, and it is good you have exercised the paths. I have been testing this branch as well, and have not experienced any issues (though I don't have the path renumbering issue).

At some point though, I am hoping you will remove much of the commented-out code, and work on indenting so we can give it a proper review :) I am of course guilty of such things too...

@evansus
Contributor

evansus commented May 23, 2014

True. I recently browsed these on github.com and saw that the whitespace etc. is all over the place - I have been working from Xcode mostly. I'd be happy to do some cleanups.

@lundman
Contributor Author

lundman commented May 23, 2014

I used to have emacs set to "uboot"'s coding standard, which is quite unusual (spaces, not tabs). I only recently changed emacs to use tabs again (last week), as ZFS uses tabs.

We could consider following ZoL's practice; I think they have a script to check code style. Dunno if we have to be so strict, but it could be something we should aim toward:

# ./scripts/cstyle.pl module/zfs/*.c | wc -l
    5771

heh all me :)

@evansus
Contributor

evansus commented May 23, 2014

About testing changed device paths- I may be restating the obvious, but in case it helps:

Simplest test is creating two disk images and opening one after the other, then reverse the order.

You can also simulate device renumbering by either connecting/reconnecting USB devices in different orders, or by making any change to the disk list between connects. Opening or creating a disk image, ramdisk, zvol, etc.

@lundman
Contributor Author

lundman commented May 23, 2014

Ahh so simple, and so beautiful. Yes, that should have been obvious :)

@evansus
Contributor

evansus commented May 24, 2014

Tested mirrored and raidz with latest vdev-iokit branch. With either:

  • zpool create, export, import, and destroy are OK if all disks are present.
  • with a device missing, import panics.

Haven't tested device failure while in use, or path renumbering.

I'll look into resolving the missing vdev issue.
Meanwhile I committed several whitespace cleanups - tabs instead of spaces - and other changes to conform to cstyle.pl, with a few exceptions.

@evansus
Contributor

evansus commented May 24, 2014

Fixed, was due to an addition to vdev_get_size.

@lundman
Contributor Author

lundman commented May 26, 2014

This is much better, thanks. I also appreciate that you brought it in line with master. This can probably be merged into master soon - do people feel ready for it?

@ilovezfs
Contributor

I count seven added #if 0's. Any chance we can clean out the dead code before merging?

@rottegift
Contributor

I've been beating vdev-iokit head (+ f0a31c6) quite a bit and it seems pretty solid.

@evansus
Contributor

evansus commented May 29, 2014

@rottegift That sounds like unreleased snapshot holds, most likely from an interrupted zfs send/recv?

I use this one-liner to check all snapshots for holds recursively:

zfs list -Hrt snapshot -o name | while read snap; do zfs holds -H "$snap"; done | more

@rottegift
Contributor

Yeah I realised it was unreleased snapshots after I left the comment, so deleted the comment, but probably not fast enough to stop you from seeing an email copy.

(The unreleased snapshots go away across an export/import or reboot, thus the confusion).

c.f. #173

P.S.: This will be faster

zfs list -H -r -t snap -o name,userrefs | grep -v '[^0-9]0$'  | awk '{ print $1 }' | xargs -n 10 zfs holds -H

(Didn't really examine closely, but zfs holds is unhappy with more than about ten args).

@evansus
Contributor

evansus commented May 31, 2014

@rottegift, @lundman, and @ilovezfs (and the whole OpenZFS on OS X community),
Thanks for testing the vdev-iokit branch, glad to hear it's working well for others as well!

I just committed some enhancements and cleanups. This addresses a few issues - mostly minor, though a few are fairly important.
ace3df2

I believe there are a few areas to review, as well as future enhancement areas.

1) flush write cache
For example, DKIOCFLUSHWRITECACHE is issued as an asynchronous ioctl on Illumos, ZoL, and FreeBSD. I used IOMedia::synchronizeCache, which is synchronous.

When called async, we could return ZIO_PIPELINE_STOP and then call zio_interrupt once the op completes. Instead, we wait for the sync to complete, call zio_interrupt, and return ZIO_PIPELINE_STOP just after.

This is probably OK - it should have the same end result, but I'm not sure if there are any negative implications. It may be as simple as adding a 'cacheflush' taskq to perform the sync and the callback - I'm open to suggestions.
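A minimal sketch of that 'cacheflush' taskq idea (the taskq, the handle field names, and the flush-task function are all hypothetical here; only taskq_dispatch, zio_interrupt, ZIO_PIPELINE_STOP, and IOMedia::synchronizeCache come from the existing code):

/* hypothetical: run the blocking flush on a taskq thread instead of the zio pipeline */
static void
vdev_iokit_flush_task(void *arg)
{
	zio_t *zio = (zio_t *)arg;
	vdev_iokit_t *dvd = (vdev_iokit_t *)zio->io_vd->vdev_tsd;

	if (((IOMedia *)dvd->vd_iokit_hl)->synchronizeCache(
	    (IOService *)dvd->vd_zfs_hl) != kIOReturnSuccess)
		zio->io_error = EIO;

	zio_interrupt(zio);	/* complete the zio asynchronously */
}

/* ...and in vdev_iokit_io_start(), for DKIOCFLUSHWRITECACHE: */
/*
 *	taskq_dispatch(vdev_iokit_flush_tq, vdev_iokit_flush_task, zio, TQ_SLEEP);
 *	return (ZIO_PIPELINE_STOP);
 */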

2) ashift
Also, ashift is determined the same way as vdev_disk.c from other distributions, but is not assigned to vdev->vdev_ashift at this time. I left it commented out for now.

Haven't experimented with the ashift set recently, but in the past setting the ashift would cause pool import to fail (sees 'corrupt' vdevs). I'd like to review this and set it correctly.

vdev.c assigns ashift according to what is in the pool's configuration, and checks if child vdevs have their own ashift property.

This is working fine with every pool I've tried - including from vdev-iokit, master, and other distributions. Also I tested zpool create with no ashift specified (uses 9), -o ashift=9, and -o ashift=12, and verified using zdb that all worked as expected.

I'm not sure what would happen with a more complex pool layout - for example a zpool created as ashift=12, but with one vdev that was inadvertently added with the default ashift of 9, and vice versa.

However for the common use-cases - pools with default ashift, ashift=9, and ashift=12, this has been working fine with every pool I've tried. I've imported and used pools created on FreeBSD, and created on the master branch of OpenZFS on OS X.

3) simple block devices
The only bug that I can think of - and probably a non-issue anyway - is the case of block devices that are not published in IOKit. I don't know of any software doing this - except possibly some MacPorts / Homebrew apps for Linux nbd and/or iSCSI? I doubt this would even come up, but if anyone is using or is aware of any software that would use this, please let me know.

As a potential workaround, I know that userland uses the vdev_file_ops for both files and block devices, so I'm sure we could find another way to interface with standard block devices if needed.

@lundman
Contributor Author

lundman commented Jun 2, 2014

2) ashift
Also, ashift is determined the same way as vdev_disk.c from other distributions, but is not assigned to vdev->vdev_ashift at this time. I left it commented out for now.

There is indeed a difference here from the other distributions. It took us quite a while to work out why: we need to use the vdev_ashift value on the block number when translating offset requests, whereas upstream always uses 512. We eventually drilled down to the fact that Darwin is actually large-block aware in the lowest layer, while upstream sticks with 512 for block-to-offset translation. Some upstream distributions use a "byte offset" to avoid this; Darwin still uses a "block number".

As in, the code:
bp->b_lblkno = lbtodb(offset); // IllumOS

would always go to 512 blocks (lbtodb). Whereas we use
buf_setblkno(bp, zio->io_offset >> dvd->vd_ashift);
because the underlying code knows the device block size in Darwin - and translates it back up to offset. (sigh).

but I see in the iokit code, the equivalent call is:
https://github.com/openzfsonosx/zfs/blob/vdev-iokit/module/zfs/vdev_iokit_util.cpp#L1348
result = iokit_hl->IOMedia::read(zfs_hl, offset, buffer, 0, &actualByteCount);
which takes the offset in bytes and should have no vdev_ashift logic at all, so we can avoid that whole thing entirely.
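To make the difference concrete, take a hypothetical 4K-sector device (ashift=12) and an I/O at io_offset = 8192:

/* Illumos: block numbers are always in 512-byte units */
bp->b_lblkno = lbtodb(8192);			/* = 16 */

/* Darwin buf_t: block number in device-native blocks */
buf_setblkno(bp, 8192 >> dvd->vd_ashift);	/* 8192 >> 12 = 2 */

/* iokit: plain byte offset, no ashift translation needed */
result = iokit_hl->IOMedia::read(zfs_hl, 8192, buffer, 0, &actualByteCount);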

I've been out of commission for a few days, but hope to get back into code review and testing this branch.

@lundman
Contributor Author

lundman commented Jun 2, 2014

Each call to vdev_iokit_strategy() will allocate a context, and free it when we are done. Since strategy is a pretty frequently called operation, have we explored the idea of embedding the context struct into struct zio?
Although, my quick tests of putting "iokit_context" as a char[32] into zio, to avoid the alloc and free, didn't show any immediate improvement - but the 2-minute benchmarks are probably too small to show that.
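For reference, a minimal sketch of that experiment (the field name and the cast are illustrative; the real addition would sit behind #ifdef __APPLE__ in zio.h):

#ifdef __APPLE__
	/* per-IO scratch space, avoids an alloc/free in each strategy call */
	char		io_iokit_context[32];
#endif

/* in vdev_iokit_strategy(), instead of allocating: */
vdev_iokit_context_t *io_context =
    (vdev_iokit_context_t *)zio->io_iokit_context;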

@rottegift
Contributor

It's been good with f0a31c6 over the past couple of days and with a97f499 for the past ca. 90 min of roughhousing (a couple of reboots and playing around with having to roll back ~80GiB of interrupted zfs recv into a dedup=sha256 dataset (just for fun and stress testing)).

@evansus
Contributor

evansus commented Jun 3, 2014

@rottegift Glad to hear that!

@lundman yes I agree.

At least the vdev_iokit_context_t is currently a struct of just a few pointers:

typedef struct vdev_iokit_context {
    IOMemoryDescriptor * buffer;
    IOStorageCompletion completion;
} vdev_iokit_context_t;

The completion struct is defined in IOStorage.h:

struct IOStorageCompletion
{
    void *                    target;
    IOStorageCompletionAction action;
    void *                    parameter;
};

and IOStorageCompletionAction is also just a pointer to the callback function (from IOStorage.h):

typedef void (*IOStorageCompletionAction)(void *   target,
                                          void *   parameter,
                                          IOReturn status,
                                          UInt64   actualByteCount);

But yes, it is still an alloc per IO in this rev. I don't know if you noticed the IOCommandPools I tested in the previous rev - a bit of an experiment: ccde144

Along with the allocation, the bigger issue is IOBufferMemoryDescriptor::withAddress and ::prepare being called from vdev_iokit_strategy (which is called from vdev_iokit_io_start). At least afterwards it can issue the read() and write() calls async.
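For reference, the per-IO setup being described could be sketched like this (what goes into target/parameter and the exact arguments are assumptions; only the struct layout above and the IOMedia::read call quoted earlier are from the branch):

/* illustrative sketch of the work done per IO in vdev_iokit_strategy() */
context->buffer = IOBufferMemoryDescriptor::withAddress(zio->io_data,
    zio->io_size, kIODirectionIn);		/* kIODirectionOut for writes */
context->buffer->prepare();			/* error checking omitted */

context->completion.target = context;
context->completion.parameter = zio;
context->completion.action =
    (IOStorageCompletionAction)&vdev_iokit_io_intr;	/* completion callback */

/* only this part is currently asynchronous */
iokit_hl->IOMedia::read(zfs_hl, zio->io_offset, context->buffer,
    0, &context->completion);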

The best way to optimize this would be changing vdev_iokit_strategy so that all of its actual work is performed asynchronously, correct? (Perhaps using taskq?)

Other Async/sync function calls:

Currently this vdev-iokit branch (partially) issues async reads and writes, with completion callback.

However it would also be good to issue the flush cache requests 'async' with a completion callback, since it's async on all upstream repos.

Down the line, possibly Unmap, too - if/when we have an acceptable upstream ZFS trim patch to merge. :)

Looking at master, it seems vdev_disk.c has a synchronous vdev_disk_io_start, blocking for the duration of reads and writes.

Aside from that, there are a few minor issues with vdev_disk which I addressed in this branch:
https://github.com/openzfsonosx/zfs/tree/vdev-disk-fixes
Have not tested it though.

Another example is doAsyncReadWrite in zvolIO.cpp.

Would taskq's be an appropriate way to handle these function calls asynchronously? (By calling C -> C++ extern/wrapper functions like we're already doing?)

@lundman
Contributor Author

lundman commented Jun 4, 2014

Yes, I estimated your vdev_iokit_context_t to be about 16 bytes in size, which is why I made a char[32] area in zio_t and used that to assign the context. It worked well enough as a proof of concept, and possibly shaved a second off, but it's hard to tell with my small test case. Either way, since we have APPLE-only entries in znode_t and zfsvfs_t, it isn't too odd to have them in zio_t. The biggest hassle is figuring out the header dependency (depending on the size impact of including iokit, or using generic ptrs and casting).

As for IOBufferMemoryDescriptor::withAddress - I was under the impression this call takes an existing address (and buffer) and maps it to iokit space. Ie, no actual allocations happen. Not that a map operation is free or anything, but it is not as heavy as actual allocations.

Similarly, prepare is used to page in memory, if required. But I believe zio uses wired memory, so it should never be paged out.

3rd: I was under the impression that calling IOMedia::write and handing over a completion callback is already async. You mention flush cache requests as a possible place we can enhance it, but I don't know if that is really worth doing. Not to discourage anyone from trying it and finding out though. :)

But I am new to iokit, so expect confusion :)

@evansus
Contributor

evansus commented Jun 4, 2014

Yes, the IOMedia::write is async, however that is called after the IOBufferMemoryDescriptor is allocated and prepared.

In vdev_iokit_io_start, I'd like to issue an async call to vdev_iokit_strategy

The io_context and BufferMD allocation/prepare are already being done in vdev_iokit_strategy - but only the IOMedia::read/write is actually async.

About the cache flush, first I noticed that other platforms are issuing an async call. So I was thinking it would be best to handle this in the same way as other platforms.
But I guess my question should be: Is it OK to call this in this order?
IOMedia::synchronizeCache (synchronously), then return ZIO_PIPELINE_STOP

rather than an async flush, and cleanup in vdev_disk_ioctl_done.

Perhaps this would be a good question to post to the OpenZFS mailing list.

I have noticed that on a working system (not experiencing obvious issues), spindump shows some pretty long stack traces, where a whole stack is blocked waiting on other calls. This included vdev_iokit_io_start, vdev_iokit_strategy, vdev_iokit_sync, etc.

From some off-cpu flame graphs, it appears vdev_iokit_io_start and vdev_iokit_sync calls are intermittently spending a while off-cpu. Some are 10-20 _micro_seconds, but intermittently vdev_iokit_sync takes up to 80 _milli_seconds

https://gist.github.com/evansus/e0e34b60ba6dd993b4be
dtrace output appears to have automatically sorted shortest to longest
One caveat - this machine is also running other experimental patches including zvol-unmap, which I believe has a similar async/sync issue. May be contributing to the long times.

I'm new to the flame graphs as well though, so I wouldn't be surprised if my dtrace script is poor - I started with a basic script from Brendan Gregg's blog and changed it to profile some vdev-iokit functions:
https://gist.github.com/evansus/5dbb9082c4f1f336e47f

How does that look?

Also, from the IOBufferMemoryDescriptor documentation, I was under the impression that bufferMD does allocate its own memory.

IOBufferMemoryDescriptor

Overview
Provides a simple memory descriptor that allocates its own buffer memory.

I had hoped to use IOMemoryDescriptor instead, but MD::withAddress hasn't successfully read or written data into the zio->io_data buffer, and bufferMD::withAddress works fine.

Edit: Nevermind, read the number wrong. 80 milliseconds, not 800.

@lundman
Contributor Author

lundman commented Jun 4, 2014

The flamegraphs are neat, but not sure that I can help there, it's a bit like black magic.. I'm sure you saw my wiki entry on them already, which is pretty much the culmination of my knowledge. Once I started to grep out just ZFS calls it became more useful though.

I didn't check the iokit sources, but the description is


Create an IOMemoryDescriptor to describe one virtual range of the kernel task.

Which implies to me that we create a new descriptor, using the given address, but do not allocate more memory. We will need to peek in the sources to know for sure.

Anyway, explore anything that catches your fancy. I was hoping you would move the context into zio, since it seems undesirable to allocate the context in strategy and, worse, to have to deal with failure of said allocation. :)

@rottegift
Contributor

@evansus : when cache vdevs are missing at import, vdev_iokit and master don't automatically deal with the vdevs appearing subsequently, even if the devices match what zpool status -v expects. In master, if the devices have not been renamed, a zpool online pool dev / zpool clear pool dev seems to work. In vdev-iokit I had to zpool remove dev... the cache vdevs and zpool add cache dev... them with their new names.

Maybe you could add a fast usb3 thumb drive to one of your test setups to use as a cache vdev as you carry on with the iokit work - they work well for l2arc in front of spinny disks.

@evansus
Contributor

evansus commented Jun 14, 2014

@rottegift Please try the latest commits to the vdev-iokit branch, cache and log devices survive renumbering after the latest changes (also updated from the current master branch).

It seems to function on my end with a few tests. I tried both manual import and import from cachefile after opening disk images in different sequence, etc.

Pool import is successful in all cases I've tried, with one minor issue to resolve:
Importing from cachefile does not update the displayed diskNsN names (manual import does). The devices work normally, though - it's just showing the previous pathname in zpool status.

The solution to this will be updating vdev_path whenever it is necessary to import by physpath or guid, which I'm looking into.
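A minimal sketch of that idea, assuming the new BSD name is known after a successful open by physpath or guid (where new_path comes from is an assumption; spa_strdup/spa_strfree are the usual helpers for vdev_path):

/* hypothetical: refresh the stored path so zpool status shows the current diskNsN */
if (new_path != NULL && strcmp(vd->vdev_path, new_path) != 0) {
	spa_strfree(vd->vdev_path);
	vd->vdev_path = spa_strdup(new_path);
}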

@evansus
Contributor

evansus commented Jun 14, 2014

@rottegift Ah, I re-read your comment and see what you mean. I just tested importing a pool with missing cache devices, then attached the disk image at a new device node - and I ran into the scenario you described.

zpool online, zpool replace, and even zpool replace -f couldn't resolve the pathname issue.

zpool remove tank disk3s1 then zpool add tank cache disk2s1 was necessary. So with USB devices that are attached after pool import, this could still be an issue.

Renumbered disks that are all available at import time should be fine though. @ilovezfs this does address some of the issues we discussed recently, but broke zpool split, which I'm working on resolving.

@evansus
Contributor

evansus commented Jun 14, 2014

zpool split is resolved, tested split both with and without -R altroot (import new pool, or leave exported)

@evansus
Contributor

evansus commented Jun 25, 2014

Interesting, a ram issue is unlikely in that case. On another note, have you tested with cpus=1 in boot.plist (or just set in nvram)?

@rottegift
Contributor

"have you tested with cpus=1"

No, not at all. If the panics and/or hangs recur, I will.

@rottegift
Contributor

In looking around I believe that some builds I did were done with the wrong clang (MacPorts clang 3.4 built from source rather than Xcode's clang). I've now quintuple-checked everything on that front, and will hope for no recurrence.

@rottegift
Contributor

Hmmm, nope. Another one (two scrubs were in progress).

I'll try the cpus=1 boot arg now.

Anonymous UUID:       EA3E4DC2-8F4D-9BF6-7D16-4BB6CA19A914

Thu Jun 26 11:54:33 2014
panic(cpu 4 caller 0xffffff802cedbf5e): Kernel trap at 0xffffff7fad41cbea, type 14=page fault, registers:
CR0: 0x000000008001003b, CR2: 0x0000000000000340, CR3: 0x000000002f3d5000, CR4: 0x00000000001606e0
RAX: 0x0000000000000340, RBX: 0x0000000000000340, RCX: 0xffffff82d5747d18, RDX: 0xffffff82d5761ad0
RSP: 0xffffff823061bc30, RBP: 0xffffff823061bc40, RSI: 0xffffff82e6265e48, RDI: 0x0000000000000340
R8:  0x000000000000003f, R9:  0x0000000000000000, R10: 0xffffff802d470800, R11: 0x0000000000000000
R12: 0xffffff821b7a07e8, R13: 0x000000000001fa74, R14: 0xffffff821b7a07c8, R15: 0xffffff8232d0b5b8
RFL: 0x0000000000010202, RIP: 0xffffff7fad41cbea, CS:  0x0000000000000008, SS:  0x0000000000000000
Fault CR2: 0x0000000000000340, Error code: 0x0000000000000000, Fault CPU: 0x4

Backtrace (CPU 4), Frame : Return Address
0xffffff823061b8c0 : 0xffffff802ce22fa9 mach_kernel : _panic + 0xc9
0xffffff823061b940 : 0xffffff802cedbf5e mach_kernel : _kernel_trap + 0x7fe
0xffffff823061bb10 : 0xffffff802cef3456 mach_kernel : _return_from_trap + 0xe6
0xffffff823061bb30 : 0xffffff7fad41cbea net.lundman.spl : _spl_mutex_enter + 0xa
0xffffff823061bc40 : 0xffffff7fad4c5efc net.lundman.zfs : _vdev_mirror_scrub_done + 0x6c
0xffffff823061bc70 : 0xffffff7fad5311f9 net.lundman.zfs : _zio_done + 0xff9
0xffffff823061bd90 : 0xffffff7fad52c78a net.lundman.zfs : ___zio_execute + 0x12a
0xffffff823061bdd0 : 0xffffff7fad5313d2 net.lundman.zfs : _zio_done + 0x11d2
0xffffff823061bef0 : 0xffffff7fad52c78a net.lundman.zfs : ___zio_execute + 0x12a
0xffffff823061bf30 : 0xffffff7fad52c655 net.lundman.zfs : _zio_execute + 0x15
0xffffff823061bf50 : 0xffffff7fad41e217 net.lundman.spl : _taskq_thread + 0xc7
0xffffff823061bfb0 : 0xffffff802ced7127 mach_kernel : _call_continuation + 0x17
      Kernel Extensions in backtrace:
         net.lundman.spl(1.0)[205406D0-4396-3572-B257-19B5A81B1084]@0xffffff7fad41a000->0xffffff7fad42afff
         net.lundman.zfs(1.0)[CA9C82FD-1FA0-3927-A8B9-5DFB3141B3FD]@0xffffff7fad42b000->0xffffff7fad63cfff
            dependency: com.apple.iokit.IOStorageFamily(1.9)[9B09B065-7F11-3241-B194-B72E5C23548B]@0xffffff7fad3ec000
            dependency: net.lundman.spl(1.0.0)[205406D0-4396-3572-B257-19B5A81B1084]@0xffffff7fad41a000

BSD process name corresponding to current thread: kernel_task
Boot args: -v keepsyms=y darkwake=0

@rottegift
Contributor

Biting the bullet and doing extended testing in Apple Hardware Test.

Memtest "all 2" was OK in single-user mode.

@rottegift
Contributor

"Test results: No trouble found" "Total Time Testing: 1 hour 9 mins 42 secs"

So I think we can rule out hardware issues.

@ilovezfs
Contributor

@rottegift Is this limited to vdev_iokit?

@rottegift
Contributor

@ilovezfs: no.

I'm now trying with cpus=1 as suggested above. It's very slow. What will come first, panic or death-by-old-age? :-)

Assuming this does crash, the next step is to build a totally pristine new boot drive with nothing but zfs (ideally the latest ftp build) on it and try to replicate the conditions in which crashes happen (which mostly seem to be lots of small writes onto slow media, especially into zvols).

@rottegift
Contributor

Hm, I don't suppose you already know some bare minimum set of LaunchDaemons needed to make zfs work (pool import/export, dataset mount/unmount/etc) in single user mode?

@ilovezfs
Contributor

I'd just do sudo make install.

I wonder if this is zed/zpool.cache related.

int zfs_autoimport_disable = 0;

Maybe try building with zfs_autoimport_disable = 1, and do not install the plists.

@rottegift
Contributor

I'll try zfs_autoimport_disable = 1 (and master) after running for a bit with cpus=1.

The same vdev_iokit source code is on another Mac (I'm typing on it now) running 10.8 with a vastly different load pattern, and it has had significantly longer uptimes in recent days. Autoimport is working just fine on this machine.

On the crashier machine (which was the subject of the hw tests above) vdev-iokit happily autoimports and runs for hours.

So I'm not sure if it will make much difference, but I'll try it just to eliminate those lines of thought.

@rottegift
Contributor

Well, cpus=1 is nice for showing off how good Mac OS X (and zfs) are at multithreading and multiprocessing.

$ sysctl vm.loadavg
vm.loadavg: { 336.91 322.35 320.98 }

Those sha256 checksums sure keep a single processor busy.

@rottegift
Contributor

@evansus I think the only thing cpus=1 is telling me is that even a light load on this machine cannot be approached by a one-cpu system.

It has only: imported ssdpool (which is doing a scrub, so lots of sha256), imported Trinity (ditto), fired up the UI and Safari, Terminal, Activity Monitor, Console and a couple of others, and it has been waiting about ten minutes for "$ sudo -s" to even display a Password: prompt. I'm going to attempt a graceful shutdown when it gives me the sudo shell, since ssh-ing in is hopeless and there is nothing giving any useful ability to inspect the workings of the system.

@evansus
Contributor

evansus commented Jun 26, 2014

Interesting - looks like the vdev_mirror_scrub_done panic is somewhat repeatable, maybe only at the end of a scrub.

I wonder if that could be reproduced by creating a small test pool and running zpool scrub on it, first without cpus=1, then with it if it's replicated (and of course master vs vdev-iokit). Otherwise yes, scrubbing the large pools with cpus=1 would take an eternity to complete. :)

I still haven't replicated that panic, but incidentally I don't think I've been using checksum=sha256 on test pools. Typically I use something close to zpool create -o ashift=12 -o failmode=continue -O canmount=off -O mountpoint=none -O atime=off where it defaults to checksum=fletcher4. I'm modifying my test scripts now.

But I take it there haven't been any panics while running single-cpu, which might indicate a synchronization issue. It's tricky since replicating one of the specific panics while using cpus=1 would confirm it is not a multi-threading issue.

@rottegift
Contributor

Yes, I understand the idea of eliminating a whole range of synchronization and contention problems in going with one cpu, however the system hasn't even managed to mount all three of the pools it normally does, let alone reach a point at which I can do one of the tasks that I think are most likely to correlate with panics.

Yes, checksum=sha256 on practically everything, and compression=lz4 on everything.

A typical create for me is

zpool create -O normalization=formD -O checksum=sha256 -O casesensitivity=insensitive -o ashift=12 homepool mirror disk3s2 disk27s2 log mirror disk25s1 disk4s1 cache disk7s2 disk24

I even do that sort of thing on non-rpool pools on illumos

2013-12-25.04:25:00 zpool create -f -O normalization=formD -O casesensitivity=insensitive -O compression=lz4 -O checksum=sha256 Newmis raidz c9t0d0s1 c9t9d0s1 c9t8d0s1 cache c9t11d0s0

for example.

@evansus
Contributor

evansus commented Jun 26, 2014

Actually, I take that back: the laptop's and mini's pools are checksum=sha256; only the test pools created in a VM had the default checksum.

Also I did experience a panic when attempting a scrub on my laptop, but it is a unique scenario:
Panic log shows vdev_hold attempts to call vdev_op_hold, which is NULL (unimplemented vdev_iokit_hold and rele). vdev_hold/rele are only called in the case of a root pool, and this laptop is a bit 'unique' in that way ;)
https://gist.github.com/evansus/910f208cb6c2cfd9bcab

That issue is fixable, but unrelated. I need to test this on my Mac Mini, after updating it to ToT vdev-iokit, and in VMs with my updated hotspares script.

@rottegift
Contributor

@ilovezfs : " zfs_autoimport_disable = 1 "

I'm going to try your latest commits in master for a bit without this, and then with it. I'll raise any problems in another issue, and leave this one only for vdev_iokit for now (unless you prefer it to all be here until we figure out what's causing these panics and hangs).

@rottegift
Contributor

A couple more while dealing with the aftermath of the comments attached at the end of 95ff805

These two were in vdev-iokit as I was replacing devices that had gotten stomped with bad labels.

The first one was perhaps triggered by a zfs send sourced from one of the DEGRADED pools.

Anonymous UUID:       EA3E4DC2-8F4D-9BF6-7D16-4BB6CA19A914

Fri Jun 27 00:01:18 2014
panic(cpu 0 caller 0xffffff7f85d6dda5): "VERIFY(" "nvlist_add_nvlist(nv, propname, propval) == 0" ") failed\n"@dsl_prop.c:961
Backtrace (CPU 0), Frame : Return Address
0xffffff81ff03b660 : 0xffffff8004022fa9 mach_kernel : _panic + 0xc9
0xffffff81ff03b6e0 : 0xffffff7f85d6dda5 net.lundman.zfs : _dsl_prop_get_all_impl + 0x535
0xffffff81ff03b9e0 : 0xffffff7f85d6d3f2 net.lundman.zfs : _dsl_prop_get_all_ds + 0xf2
0xffffff81ff03bb40 : 0xffffff7f85d6d2f5 net.lundman.zfs : _dsl_prop_get_all + 0x25
0xffffff81ff03bb60 : 0xffffff7f85de732d net.lundman.zfs : _zfs_ioc_objset_stats_impl + 0x4d
0xffffff81ff03bba0 : 0xffffff7f85de413b net.lundman.zfs : _zfs_ioc_snapshot_list_next + 0x1ab
0xffffff81ff03bc20 : 0xffffff7f85ddfc64 net.lundman.zfs : _zfsdev_ioctl + 0x664
0xffffff81ff03bcf0 : 0xffffff800420d63f mach_kernel : _spec_ioctl + 0x11f
0xffffff81ff03bd40 : 0xffffff80041fe000 mach_kernel : _VNOP_IOCTL + 0x150
0xffffff81ff03bdc0 : 0xffffff80041f3e51 mach_kernel : _utf8_normalizestr + 0x971
0xffffff81ff03be10 : 0xffffff80043c1303 mach_kernel : _fo_ioctl + 0x43
0xffffff81ff03be40 : 0xffffff80043f2c66 mach_kernel : _ioctl + 0x466
0xffffff81ff03bf50 : 0xffffff8004440653 mach_kernel : _unix_syscall64 + 0x1f3
0xffffff81ff03bfb0 : 0xffffff80040f3c56 mach_kernel : _hndl_unix_scall64 + 0x16
      Kernel Extensions in backtrace:
         net.lundman.zfs(1.0)[CA9C82FD-1FA0-3927-A8B9-5DFB3141B3FD]@0xffffff7f85d1c000->0xffffff7f85f2cfff
            dependency: com.apple.iokit.IOStorageFamily(1.9)[9B09B065-7F11-3241-B194-B72E5C23548B]@0xffffff7f84604000
            dependency: net.lundman.spl(1.0.0)[205406D0-4396-3572-B257-19B5A81B1084]@0xffffff7f845f3000

BSD process name corresponding to current thread: zfs
Boot args: -v keepsyms=y darkwake=0

and

Anonymous UUID:       EA3E4DC2-8F4D-9BF6-7D16-4BB6CA19A914

Fri Jun 27 00:35:00 2014
panic(cpu 0 caller 0xffffff80036dbf5e): Kernel trap at 0x0000000000000400, type 14=page fault, registers:
CR0: 0x000000008001003b, CR2: 0x0000000000000400, CR3: 0x0000000005ba2000, CR4: 0x00000000001606e0
RAX: 0x0000000000000400, RBX: 0xffffff81f25d6d58, RCX: 0xffffff8234326328, RDX: 0xffffff81f90c8d78
RSP: 0xffffff8217b73958, RBP: 0xffffff8217b739b0, RSI: 0xffffff823d2f5ad8, RDI: 0xffffff8210500ba0
R8:  0x0000000000000000, R9:  0xffffff8003c01910, R10: 0x00000000000003ff, R11: 0xffffffffffffffff
R12: 0xffffff81f25d6d98, R13: 0x000000000003cd02, R14: 0xffffff81f25d6d78, R15: 0xffffff81f91aa148
RFL: 0x0000000000010206, RIP: 0x0000000000000400, CS:  0x0000000000000008, SS:  0x0000000000000010
Fault CR2: 0x0000000000000400, Error code: 0x0000000000000010, Fault CPU: 0x0

Backtrace (CPU 0), Frame : Return Address
0xffffff8217b735e0 : 0xffffff8003622fa9 mach_kernel : _panic + 0xc9
0xffffff8217b73660 : 0xffffff80036dbf5e mach_kernel : _kernel_trap + 0x7fe
0xffffff8217b73830 : 0xffffff80036f3456 mach_kernel : _return_from_trap + 0xe6
0xffffff8217b73850 : 0x400 
0xffffff8217b739b0 : 0xffffff7f83d311f9 net.lundman.zfs : _zio_done + 0xff9
0xffffff8217b73ad0 : 0xffffff7f83d2c78a net.lundman.zfs : ___zio_execute + 0x12a
0xffffff8217b73b10 : 0xffffff7f83d313d2 net.lundman.zfs : _zio_done + 0x11d2
0xffffff8217b73c30 : 0xffffff7f83d2c78a net.lundman.zfs : ___zio_execute + 0x12a
0xffffff8217b73c70 : 0xffffff7f83d313d2 net.lundman.zfs : _zio_done + 0x11d2
0xffffff8217b73d90 : 0xffffff7f83d2c78a net.lundman.zfs : ___zio_execute + 0x12a
0xffffff8217b73dd0 : 0xffffff7f83d313d2 net.lundman.zfs : _zio_done + 0x11d2
0xffffff8217b73ef0 : 0xffffff7f83d2c78a net.lundman.zfs : ___zio_execute + 0x12a
0xffffff8217b73f30 : 0xffffff7f83d2c655 net.lundman.zfs : _zio_execute + 0x15
0xffffff8217b73f50 : 0xffffff7f83c1e217 net.lundman.spl : _taskq_thread + 0xc7
0xffffff8217b73fb0 : 0xffffff80036d7127 mach_kernel : _call_continuation + 0x17
      Kernel Extensions in backtrace:
         net.lundman.spl(1.0)[205406D0-4396-3572-B257-19B5A81B1084]@0xffffff7f83c1a000->0xffffff7f83c2afff
         net.lundman.zfs(1.0)[CA9C82FD-1FA0-3927-A8B9-5DFB3141B3FD]@0xffffff7f83c2b000->0xffffff7f83e3cfff
            dependency: com.apple.iokit.IOStorageFamily(1.9)[9B09B065-7F11-3241-B194-B72E5C23548B]@0xffffff7f83bec000
            dependency: net.lundman.spl(1.0.0)[205406D0-4396-3572-B257-19B5A81B1084]@0xffffff7f83c1a000

BSD process name corresponding to current thread: kernel_task
Boot args: -v keepsyms=y darkwake=0

@lundman
Contributor Author

lundman commented Jun 26, 2014

0xffffff8217b73850 : 0x400 
0xffffff8217b739b0 : 0xffffff7f83d311f9 net.lundman.zfs : _zio_done + 0xff9

So zio_done jumps to 0x400. But zio has no function-ptrs that it calls, only symbols. That certainly seems to support memory corruption - but whether it is from code bugs or not is the question. Are these weird panics only in vdev-iokit?

@rottegift
Contributor

I got a panic in master earlier but unfortunately it did not leave a crashdump. :-(

@evansus
Contributor

evansus commented Jun 27, 2014

@lundman Re: #201 (comment) - yep, about a week ago I updated find_by_path to validate vdev GUIDs unless creating or splitting the pool. If the validation fails there, then find_by_guid can kick in. I also had to update the logic for cache and spare devices, since they are labeled differently.

See ed1463a, 4857136, 0c6e5cc, 69dcad8 and 170ee29.
The last one, 170ee29, fixed content missing from an old merge from upstream - you might want to consider cherrypicking that into master. At least when caches and spares are out of order, the validation in vdev_label.c / vdev.c should prevent it from being used.

That resolved the remaining degraded or faulted imports due to renumbered disks, since the devices are checked when there's still a chance to search for them. I've tested with renumbered data, log, cache, and spare vdevs and haven't had issues recently.
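Conceptually the check is something like the sketch below (the helper names are illustrative, not the actual vdev_iokit_util.cpp functions):

/* illustrative: after find_by_path opens a device, verify the label's guid
 * before trusting it; otherwise fall back to the guid search
 * (skipped when creating or splitting the pool) */
uint64_t label_guid = 0;
if (vdev_iokit_read_label_guid(dvd, &label_guid) != 0 ||
    label_guid != vd->vdev_guid) {
	vdev_iokit_close_handle(dvd);		/* wrong disk at this path */
	error = vdev_iokit_open_by_guid(dvd, vd->vdev_guid);
}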

@rottegift has reported some panics that I haven't been able to replicate or completely decipher. But his pools, with several vdevs of each type per pool, are great test cases.
Perhaps you've already seen it, but I've been testing by running this script hotspares.sh -k in a VM. The 'outer' pool is on a vmware sata disk.
I also should clone that script and test it with only the 'inner' pool, backed by disk images or better yet VMware sata disks.

The only two issues I have run into had to do with zpool scrub on a root pool, on the iokit-boot branch. Also, adding a log device caused the boot-time import to fail; I haven't debugged it much beyond removing the log device from that pool temporarily - I believe I just need to update vdev_iokit_find_pool slightly.

@rottegift
Contributor

I got a different panic in vdev-iokit with zfs_vdev_async_write_active_min_dirty_percent -> 5 and zfs_dirty_data_max_percent -> 5 (see #201 (comment) ). Qualitatively it took longer to crash, but not by a huge amount, and it happened during the multi-zfs-send task mentioned above that comment, but afaict little or no zvol load.

Anonymous UUID:       EA3E4DC2-8F4D-9BF6-7D16-4BB6CA19A914

Fri Jun 27 12:03:35 2014
panic(cpu 4 caller 0xffffff80206dbf5e): Kernel trap at 0xffffff7fa2622f68, type 14=page fault, registers:
CR0: 0x000000008001003b, CR2: 0x0000000000000011, CR3: 0x000000001db1e001, CR4: 0x00000000001606e0
RAX: 0x0000000000000000, RBX: 0x0000000000000000, RCX: 0x0000000000000001, RDX: 0xffffff804f056180
RSP: 0xffffff823fc63d10, RBP: 0xffffff823fc63d50, RSI: 0xffffff8223abb200, RDI: 0xffffff804f056180
R8:  0x0000000000000000, R9:  0xffffff8020c01910, R10: 0x00000000000003ff, R11: 0xffffffffffffffff
R12: 0x0000000000000000, R13: 0xffffff8052a3ea00, R14: 0xffffff8052a3ea18, R15: 0xffffff805f08f780
RFL: 0x0000000000010286, RIP: 0xffffff7fa2622f68, CS:  0x0000000000000008, SS:  0x0000000000000010
Fault CR2: 0x0000000000000011, Error code: 0x0000000000000000, Fault CPU: 0x4

Backtrace (CPU 4), Frame : Return Address
0xffffff823fc639a0 : 0xffffff8020622fa9 mach_kernel : _panic + 0xc9
0xffffff823fc63a20 : 0xffffff80206dbf5e mach_kernel : _kernel_trap + 0x7fe
0xffffff823fc63bf0 : 0xffffff80206f3456 mach_kernel : _return_from_trap + 0xe6
0xffffff823fc63c10 : 0xffffff7fa2622f68 net.lundman.zfs : _zfs_fsync + 0x48
0xffffff823fc63d50 : 0xffffff7fa2617e4a net.lundman.zfs : _zfs_sync_callback + 0x5a
0xffffff823fc63d90 : 0xffffff80207d8e0d mach_kernel : _vnode_iterate + 0x22d
0xffffff823fc63e00 : 0xffffff7fa2617cbd net.lundman.zfs : _zfs_vfs_sync + 0x8d
0xffffff823fc63e70 : 0xffffff80207fa876 mach_kernel : _VFS_SYNC + 0xc6
0xffffff823fc63ea0 : 0xffffff80207e4ee3 mach_kernel : _sync + 0x73
0xffffff823fc63ed0 : 0xffffff80207de21a mach_kernel : _vfs_iterate + 0x10a
0xffffff823fc63f40 : 0xffffff80207e4e87 mach_kernel : _sync + 0x17
0xffffff823fc63f50 : 0xffffff8020a40653 mach_kernel : _unix_syscall64 + 0x1f3
0xffffff823fc63fb0 : 0xffffff80206f3c56 mach_kernel : _hndl_unix_scall64 + 0x16
      Kernel Extensions in backtrace:
         net.lundman.zfs(1.0)[EE505563-3BB5-3FDC-B129-2535402BFFF3]@0xffffff7fa2542000->0xffffff7fa2752fff
            dependency: com.apple.iokit.IOStorageFamily(1.9)[9B09B065-7F11-3241-B194-B72E5C23548B]@0xffffff7fa0bec000
            dependency: net.lundman.spl(1.0.0)[205406D0-4396-3572-B257-19B5A81B1084]@0xffffff7fa2531000

BSD process name corresponding to current thread: launchd
Boot args: -v keepsyms=y darkwake=0

@rottegift
Contributor

On reboot importing is stuck on a zvol:

   Device Identifier:        disk35
   Device Node:              /dev/disk35
   Part of Whole:            disk35
   Device / Media Name:      ZVOL Donkey/TMMIS Media
    0  1152    45   0 12:04pm ??         0:00.01 /System/Library/Filesystems/hfs.fs/Contents/Resources/./hfs.util -p disk35s2 removable readonly
Jun 27 12:03:37 cla.use.net zed[485]: ZFS Event Daemon 0.6.3-1
Jun 27 12:03:37 cla.use.net zed[485]: Processing events since eid=0
Jun 27 12:03:39 cla.use.net zed[510]: eid=1 class=statechange 
Jun 27 12:03:40 cla.use.net zed[541]: eid=2 class=statechange 
Jun 27 12:03:41 cla.use.net zed[569]: eid=3 class=statechange 
Jun 27 12:03:42 cla.use.net zed[581]: eid=4 class=statechange 
Jun 27 12:03:43 cla.use.net zed[592]: eid=5 class=statechange 
Jun 27 12:03:44 cla.use.net zed[610]: eid=6 class=statechange 
Jun 27 12:03:45 cla.use.net zed[619]: eid=7 class=statechange 
Jun 27 12:03:45 cla.use.net zed[643]: eid=8 class=statechange 
Jun 27 12:03:46 cla.use.net zed[674]: eid=9 class=statechange 
Jun 27 12:03:47 cla.use.net zed[689]: eid=10 class=statechange 
Jun 27 12:03:49 cla.use.net zed[728]: eid=11 class=statechange 
Jun 27 12:04:00 cla.use.net zed[923]: eid=12 class=zvol.create pool=Donkey
Jun 27 12:04:00 cla.use.net zed[931]: eid=12 class=zvol.create pool=Donkey/TM symlinked disk33
Jun 27 12:04:03 cla.use.net zed[1040]: eid=13 class=zvol.create pool=Donkey
Jun 27 12:04:03 cla.use.net zed[1053]: eid=13 class=zvol.create pool=Donkey/Caching symlinked disk34
Jun 27 12:04:09 cla.use.net zed[1145]: eid=14 class=zvol.create pool=Donkey
Jun 27 12:04:09 cla.use.net zed[1155]: eid=14 class=zvol.create pool=Donkey/TMMIS symlinked disk35
Jun 27 12:04:09 cla.use.net zed[1157]: eid=15 class=statechange 
Jun 27 12:04:09 cla.use.net zed[1168]: eid=16 class=statechange 
Jun 27 12:04:10 cla.use.net zed[1174]: eid=17 class=statechange 
Jun 27 12:04:10 cla.use.net zed[1177]: eid=18 class=statechange 
Jun 27 12:04:10 cla.use.net zed[1180]: eid=19 class=statechange 

A spindump is at https://gist.github.com/rottegift/9611fa933259ec45a7cd

@rottegift
Contributor

Sadly another one of these

Anonymous UUID:       EA3E4DC2-8F4D-9BF6-7D16-4BB6CA19A914

Sat Jun 28 16:22:47 2014
panic(cpu 2 caller 0xffffff8025cdbf5e): Kernel trap at 0xffffff7fa632a5f4, type 14=page fault, registers:
CR0: 0x000000008001003b, CR2: 0xfffffffffffffff0, CR3: 0x00000000281dc000, CR4: 0x00000000001606e0
RAX: 0xfffffffffffffff0, RBX: 0xffffff8215383bc8, RCX: 0xffffff8238463908, RDX: 0xfffffffffffffff0
RSP: 0xffffff8210833c40, RBP: 0xffffff8210833c70, RSI: 0xffffff823532a448, RDI: 0xffffff8238463a08
R8:  0x0000000000000001, R9:  0xffffff8026201910, R10: 0x00000000000003ff, R11: 0xffffffffffffffff
R12: 0xffffff8215383c08, R13: 0x000000000001686b, R14: 0xffffff8215383be8, R15: 0xffffff822e70b748
RFL: 0x0000000000010286, RIP: 0xffffff7fa632a5f4, CS:  0x0000000000000008, SS:  0x0000000000000010
Fault CR2: 0xfffffffffffffff0, Error code: 0x0000000000000000, Fault CPU: 0x2

Backtrace (CPU 2), Frame : Return Address
0xffffff82108338d0 : 0xffffff8025c22fa9 mach_kernel : _panic + 0xc9
0xffffff8210833950 : 0xffffff8025cdbf5e mach_kernel : _kernel_trap + 0x7fe
0xffffff8210833b20 : 0xffffff8025cf3456 mach_kernel : _return_from_trap + 0xe6
0xffffff8210833b40 : 0xffffff7fa632a5f4 net.lundman.zfs : _zio_walk_parents + 0x94
0xffffff8210833c70 : 0xffffff7fa6331273 net.lundman.zfs : _zio_done + 0x1073
0xffffff8210833d90 : 0xffffff7fa632c78a net.lundman.zfs : ___zio_execute + 0x12a
0xffffff8210833dd0 : 0xffffff7fa63313d2 net.lundman.zfs : _zio_done + 0x11d2
0xffffff8210833ef0 : 0xffffff7fa632c78a net.lundman.zfs : ___zio_execute + 0x12a
0xffffff8210833f30 : 0xffffff7fa632c655 net.lundman.zfs : _zio_execute + 0x15
0xffffff8210833f50 : 0xffffff7fa621e217 net.lundman.spl : _taskq_thread + 0xc7
0xffffff8210833fb0 : 0xffffff8025cd7127 mach_kernel : _call_continuation + 0x17
      Kernel Extensions in backtrace:
         net.lundman.spl(1.0)[205406D0-4396-3572-B257-19B5A81B1084]@0xffffff7fa621a000->0xffffff7fa622afff
         net.lundman.zfs(1.0)[EE505563-3BB5-3FDC-B129-2535402BFFF3]@0xffffff7fa622b000->0xffffff7fa643cfff
            dependency: com.apple.iokit.IOStorageFamily(1.9)[9B09B065-7F11-3241-B194-B72E5C23548B]@0xffffff7fa61ec000
            dependency: net.lundman.spl(1.0.0)[205406D0-4396-3572-B257-19B5A81B1084]@0xffffff7fa621a000

BSD process name corresponding to current thread: kernel_task
Boot args: -v keepsyms=y darkwake=0

@rottegift
Contributor

Got home to this. Don't even know what could have done it. It was happily scrubbing a pool and doing not much else while I was out.

Anonymous UUID:       EA3E4DC2-8F4D-9BF6-7D16-4BB6CA19A914

Sun Jun 29 00:30:23 2014
panic(cpu 2 caller 0xffffff80098dbf5e): Kernel trap at 0xffffff7f89ec125c, type 14=page fault, registers:
CR0: 0x000000008001003b, CR2: 0x0000000100000000, CR3: 0x000000000bddc000, CR4: 0x00000000001606e0
RAX: 0xffffff7f89ec1230, RBX: 0xffffff83540fd5b8, RCX: 0x0000000000020000, RDX: 0x0000000000000000
RSP: 0xffffff82197c3a20, RBP: 0xffffff82197c3a40, RSI: 0xffffff822dea5598, RDI: 0x0000000100000000
R8:  0x0000000000020000, R9:  0x0000000000000000, R10: 0xffffff8009e70800, R11: 0xffffff7f8a3b8994
R12: 0xffffff822dea5598, R13: 0x0000000000000001, R14: 0x0000000000020000, R15: 0x0000000000000000
RFL: 0x0000000000010282, RIP: 0xffffff7f89ec125c, CS:  0x0000000000000008, SS:  0x0000000000000010
Fault CR2: 0x0000000100000000, Error code: 0x0000000000000000, Fault CPU: 0x2

Backtrace (CPU 2), Frame : Return Address
0xffffff82197c36b0 : 0xffffff8009822fa9 mach_kernel : _panic + 0xc9
0xffffff82197c3730 : 0xffffff80098dbf5e mach_kernel : _kernel_trap + 0x7fe
0xffffff82197c3900 : 0xffffff80098f3456 mach_kernel : _return_from_trap + 0xe6
0xffffff82197c3920 : 0xffffff7f89ec125c net.lundman.zfs : _vdev_iokit_io_intr + 0x2c
0xffffff82197c3a40 : 0xffffff7f89defab2 com.apple.iokit.IOStorageFamily : __ZN20IOBlockStorageDriver24prepareRequestCompletionEPvS0_iy + 0xc0
0xffffff82197c3a90 : 0xffffff7f8a3072ca com.apple.iokit.IOSCSIBlockCommandsDevice : __ZN22IOBlockStorageServices22AsyncReadWriteCompleteEPviy + 0x11a
0xffffff82197c3ae0 : 0xffffff7f8a30ac3a com.apple.iokit.IOSCSIBlockCommandsDevice : __ZN25IOSCSIBlockCommandsDevice24AsyncReadWriteCompletionEP8OSObject + 0x16e
0xffffff82197c3b30 : 0xffffff7f8a2da160 com.apple.iokit.IOSCSIArchitectureModelFamily : __ZN22IOSCSIProtocolServices20ProcessCompletedTaskEP8OSObject19SCSIServiceResponse14SCSITaskStatus + 0x18e
0xffffff82197c3b80 : 0xffffff7f8a2da21c com.apple.iokit.IOSCSIArchitectureModelFamily : __ZN22IOSCSIProtocolServices16CommandCompletedEP8OSObject19SCSIServiceResponse14SCSITaskStatus + 0x42
0xffffff82197c3bb0 : 0xffffff7f8a3d5faa com.apple.iokit.IOFireWireSerialBusProtocolTransport : __ZN36IOFireWireSerialBusProtocolTransport12StatusNotifyEP18FWSBP2NotifyParams + 0x152
0xffffff82197c3c10 : 0xffffff7f8a3b8c7e com.apple.iokit.IOFireWireSBP2 : __ZN19IOFireWireSBP2Login16statusBlockWriteEtR9IOFWSpeed15FWAddressStructjPKvPv + 0x2ea
0xffffff82197c3cb0 : 0xffffff7f8a3b7690 com.apple.iokit.IOFireWireSBP2 : __ZN19IOFireWireSBP2Login22statusBlockWriteStaticEPvtR9IOFWSpeed15FWAddressStructjPKvS0_ + 0x36
0xffffff82197c3cd0 : 0xffffff7f8a346400 com.apple.iokit.IOFireWireFamily : __ZN22IOFWPseudoAddressSpace7doWriteEtR9IOFWSpeed15FWAddressStructjPKvPv + 0xa2
0xffffff82197c3d30 : 0xffffff7f8a32e7c6 com.apple.iokit.IOFireWireFamily : __ZN20IOFireWireController19processWriteRequestEtjPjPvi9IOFWSpeed + 0xa6
0xffffff82197c3db0 : 0xffffff7f8a32db24 com.apple.iokit.IOFireWireFamily : __ZN20IOFireWireController16processRcvPacketEPji9IOFWSpeed + 0x180
0xffffff82197c3e60 : 0xffffff7f8a80b172 Backtrace (CPU 2), Frame : Return Address
0xffffff82197c3070 : 0xffffff800982320d mach_kernel : __consume_panic_args + 0x19d
0xffffff82197c30a0 : 0xffffff8009822f2f mach_kernel : _panic + 0x4f
0xffffff82197c3120 : 0xffffff80098dbf5e mach_kernel : _kernel_trap + 0x7fe
0xffffff82197c32f0 : 0xffffff80098f3456 mach_kernel : _return_from_trap + 0xe6
0xffffff82197c3310 : 0xffffff80098e09f0 mach_kernel : _machine_boot_info + 0x160
0xffffff82197c3460 : 0xffffff80098e074f mach_kernel : _panic_i386_backtrace + 0x31f
0xffffff82197c3670 : 0xffffff80098e0294 mach_kernel : _Debugger + 0xa4
0xffffff82197c36b0 : 0xffffff8009822fa9 mach_kernel : _panic + 0xc9
0xffffff82197c3730 : 0xffffff80098dbf5e mach_kernel : _kernel_trap + 0x7fe
0xffffff82197c3900 : 0xffffff80098f3456 mach_kernel : _return_from_trap + 0xe6
0xffffff82197c3920 : 0xffffff7f89ec125c net.lundman.zfs : _vdev_iokit_io_intr + 0x2c
0xffffff82197c3a40 : 0xffffff7f89defab2 com.apple.iokit.IOStorageFamily : __ZN20IOBlockStorageDriver24prepareRequestCompletionEPvS0_iy + 0xc0
0xffffff82197c3a90 : 0xffffff7f8a3072ca com.apple.iokit.IOSCSIBlockCommandsDevice : __ZN22IOBlockStorageServices22AsyncReadWriteCompleteEPviy + 0x11a
0xffffff82197c3ae0 : 0xffffff7f8a30ac3a com.apple.iokit.IOSCSIBlockCommandsDevice : __ZN25IOSCSIBlockCommandsDevice24AsyncReadWriteCompletionEP8OSObject + 0x16e
0xffffff82197c3b30 : 0xffffff7f8a2da160 com.apple.iokit.IOSCSIArchitectureModelFamily : __ZN22IOSCSIProtocolServices20ProcessCompletedTaskEP8OSObject19SCSIServiceResponse14SCSITaskStatus + 0x18e
0xffffff82197c3b80 : 0xffffff7f8a2da21c com.apple.iokit.IOSCSIArchitectureModelFamily : __ZN22IOSCSIProtocolServices16CommandCompletedEP8OSObject19SCSIServiceResponse14SCSITaskStatus + 0x42
0xffffff82197c3bb0 : 0xffffff7f8a3d5faa com.apple.iokit.IOFireWireSerialBusProtocolTransport : __ZN36IOFireWireSerialBusProtocolTransport12StatusNotifyEP18FWSBP2NotifyParams + 0x152
0xffffff82197c3c10 : 0xffffff7f8a3b8c7e com.apple.iokit.IOFireWireSBP2 : __ZN19IOFireWireSBP2Login16statusBlockWriteEtR9IOFWSpeed15FWAddressStructjPKvPv + 0x2ea
0xffffff82197c3cb0 : 0xffffff7f8a3b7690 com.apple.iokit.IOFireWireSBP2 : __ZN19IOFireWireSBP2Login22statusBlockWriteStaticEPvtR9IOFWSpeed15FWAddressStructjPKvS0_ + 0x36
0xffffff82197c3cd0 : 0xffffff7f8a346400 com.apple.iokit.IOFireWireFamily : __ZN22IOFWPseudoAddressSpace7doWriteEtR9IOFWSpeed15FWAddressStructjPKvPv + 0xa2
0xffffff82197c3d30 : 0xffffff7f8a32e7c6 com.apple.iokit.IOFireWireFamily : __ZN20IOFireWireController19processWriteRequestEtjPjPvi9IOFWSpeed + 0xa6
0xffffff82197c3db0 : 0xffffff7f8a32db24 com.apple.iokit.IOFireWireFamily : __ZN20IOFireWireController16processRcvPacketEPji9IOFWSpeed + 0x180
0xffffff82197c3e60 : 0xffffff7f8a80b172 Backtrace (CPU 2), Frame : Return Address
0xffffff82197c2a30 : 0xffffff800982320d mach_kernel : __consume_panic_args + 0x19d
0xffffff82197c2a60 : 0xffffff8009822f2f mach_kernel : _panic + 0x4f
0xffffff82197c2ae0 : 0xffffff80098dbf5e mach_kernel : _kernel_trap + 0x7fe
0xffffff82197c2cb0 : 0xffffff80098f3456 mach_kernel : _return_from_trap + 0xe6
0xffffff82197c2cd0 : 0xffffff80098e09f0 mach_kernel : _machine_boot_info + 0x160
0xffffff82197c2e20 : 0xffffff80098e074f mach_kernel : _panic_i386_backtrace + 0x31f
0xffffff82197c3030 : 0xffffff80098e0294 mach_kernel : _Debugger + 0xa4
0xffffff82197c3070 : 0xffffff800982320d mach_kernel : __consume_panic_args + 0x19d
0xffffff82197c30a0 : 0xffffff8009822f2f mach_kernel : _panic + 0x4f
0xffffff82197c3120 : 0xffffff80098dbf5e mach_kernel : _kernel_trap + 0x7fe
0xffffff82197c32f0 : 0xffffff80098f3456 mach_kernel : _return_from_trap + 0xe6
0xffffff82197c3310 : 0xffffff80098e09f0 mach_kernel : _machine_boot_info + 0x160
0xffffff82197c3460 : 0xffffff80098e074f mach_kernel : _panic_i386_backtrace + 0x31f
0xffffff82197c3670 : 0xffffff80098e0294 mach_kernel : _Debugger + 0xa4
0xffffff82197c36b0 : 0xffffff8009822fa9 mach_kernel : _panic + 0xc9
0xffffff82197c3730 : 0xffffff80098dbf5e mach_kernel : _kernel_trap + 0x7fe
0xffffff82197c3900 : 0xffffff80098f3456 mach_kernel : _return_from_trap + 0xe6
0xffffff82197c3920 : 0xffffff7f89ec125c net.lundman.zfs : _vdev_iokit_io_intr + 0x2c
0xffffff82197c3a40 : 0xffffff7f89defab2 com.apple.iokit.IOStorageFamily : __ZN20IOBlockStorageDriver24prepareRequestCompletionEPvS0_iy + 0xc0
0xffffff82197c3a90 : 0xffffff7f8a3072ca com.apple.iokit.IOSCSIBlockCommandsDevice : __ZN22IOBlockStorageServices22AsyncReadWriteCompleteEPviy + 0x11a
0xffffff82197c3ae0 : 0xffffff7f8a30ac3a com.apple.iokit.IOSCSIBlockCommandsDevice : __ZN25IOSCSIBlockCommandsDevice24AsyncReadWriteCompletionEP8OSObject + 0x16e
0xffffff82197c3b30 : 0xffffff7f8a2da160 com.apple.iokit.IOSCSIArchitectureModelFamily : __ZN22IOSCSIProtocolServices20ProcessCompletedTaskEP8OSObject19SCSIServiceResponse14SCSITaskStatus + 0x18e
0xffffff82197c3b80 : 0xffffff7f8a2da21c com.apple.iokit.IOSCSIArchitectureModelFamily : __ZN22IOSCSIProtocolServices16CommandCompletedEP8OSObject19SCSIServiceResponse14SCSITaskStatus + 0x42
0xffffff82197c3bb0 : 0xffffff7f8a3d5faa com.apple.iokit.IOFireWireSerialBusProtocolTransport : __ZN36IOFireWireSerialBusProtocolTransport12StatusNotifyEP18FWSBP2NotifyParams + 0x152
0xffffff82197c3c10 : 0xffffff7f8a3b8c7e com.apple.iokit.IOFireWireSBP2 : __ZN19IOFireWireSBP2Login16statusBlockWriteEtR9IOFWSpeed15FWAddressStructjPKvPv + 0x2ea
0xffffff82197c3cb0 : 0xffffff7f8a3b7690 com.apple.iokit.IOFireWireSBP2 : __ZN19IOFireWireSBP2Login22statusBlockWriteStaticEPvtR9IOFWSpeed15FWAddressStructjPKvS0_ + 0x36
0xffffff82197c3cd0 : 0xffffff7f8a346400 com.apple.iokit.IOFireWireFamily : __ZN22IOFWPseudoAddressSpace7doWriteEtR9IOFWSpeed15FWAddressStructjPKvPv + 0xa2
0xffffff82197c3d30 : 0xffffff7f8a32e7c6 com.apple.iokit.IOFireWireFamily : __ZN20IOFireWireController19processWriteRequestEtjPjPvi9IOFWSpeed + 0xa6
0xfff
Model: Macmini6,2, BootROM MM61.0106.B03, 4 processors, Intel Core i7, 2.6 GHz, 16 GB, SMC 2.8f0
Graphics: Intel HD Graphics 4000, Intel HD Graphics 4000, Built-In
Memory Module: BANK 0/DIMM0, 8 GB, DDR3, 1600 MHz, 0x80AD, 0x484D5434314753364D465238432D50422020
Memory Module: BANK 1/DIMM0, 8 GB, DDR3, 1600 MHz, 0x80AD, 0x484D5434314753364D465238432D50422020
AirPort: spairport_wireless_card_type_airport_extreme (0x14E4, 0x10E), Broadcom BCM43xx 1.0 (5.106.98.100.22)
Bluetooth: Version 4.2.4f1 13674, 3 services, 23 devices, 1 incoming serial ports
Network Service: Built-in Ethernet, Ethernet, en0
Network Service: VLAN (vlan0), Ethernet, vlan0
Network Service: AirPort, AirPort, en1
Network Service: VLAN (vlan1), Ethernet, vlan1
PCI Card: pci1b21,612, AHCI Controller, Thunderbolt@189,0,0
PCI Card: pci1b21,612, AHCI Controller, Thunderbolt@193,0,0
PCI Card: pci11c1,5901, IEEE 1394 Open HCI, Thunderbolt@197,0,0
Serial ATA Device: APPLE SSD SM128E, 121.33 GB
Serial ATA Device: APPLE HDD HTS541010A9E662, 1 TB
Serial ATA Device: Samsung SSD 840 Series, 500.11 GB
Serial ATA Device: Samsung SSD 840 Series, 500.11 GB
USB Device: USB 3.0 HUB

USB Device: Patriot Memory
USB Device: USB 3.0 HUB

USB Device: Hub
USB Device: Hub
USB Device: Hub
USB Device: BRCM20702 Hub
USB Device: Bluetooth USB Host Controller
USB Device: IR Receiver
USB Device: Hub in Apple Pro Keyboard
USB Device: USB Receiver
USB Device: Apple Pro Keyboard
USB Device: USB 2.0 HUB

USB Device: Logitech USB Headset
USB Device: USB 2.0 HUB

FireWire Device: EyeTV 410, Elgato Systems, Up to 400 Mb/sec
FireWire Device: unknown_device, Unknown
FireWire Device: unknown_device, Iomega HDD, Up to 800 Mb/sec
FireWire Device: unknown_device, Iomega HDD, Up to 800 Mb/sec
FireWire Device: My Book 111D, WD, Up to 800 Mb/sec
FireWire Device: My Book 111D, WD, Up to 800 Mb/sec
FireWire Device: My Book 111D, WD, Up to 800 Mb/sec
FireWire Device: My Book 111D, WD, Up to 800 Mb/sec
FireWire Device: My Book 111D, WD, Up to 800 Mb/sec
FireWire Device: My Book 111D, WD, Up to 800 Mb/sec
FireWire Device: My Book 111D, WD, Up to 800 Mb/sec
FireWire Device: My Book 111D, WD, Up to 800 Mb/sec
FireWire Device: My Book 111D, WD, Up to 800 Mb/sec
FireWire Device: My Book 111D, WD, Up to 800 Mb/sec
FireWire Device: My Book 111D, WD, Up to 800 Mb/sec
FireWire Device: unknown_device, Unknown
FireWire Device: unknown_device, Unknown
Thunderbolt Bus: Mac mini, Apple Inc., 23.4
Thunderbolt Device: GoFlex Desk Adapter Thunderbolt, Seagate, 1, 26.0

@rottegift

And another. Unfortunately no time yet to pop onto IRC.

Anonymous UUID:       EA3E4DC2-8F4D-9BF6-7D16-4BB6CA19A914

Mon Jun 30 09:24:54 2014
panic(cpu 2 caller 0xffffff80088dbf5e): Kernel trap at 0xffffff7f88f2c243, type 14=page fault, registers:
CR0: 0x000000008001003b, CR2: 0x0000000000000050, CR3: 0x000000000addc000, CR4: 0x00000000001606e0
RAX: 0x0000000000000000, RBX: 0x0000000000000001, RCX: 0xffffff8267454b68, RDX: 0xffffff8041180200
RSP: 0xffffff8226cd3780, RBP: 0xffffff8226cd3870, RSI: 0xffffff825d5bedc8, RDI: 0xffffff8041180200
R8:  0xffffff826746dbe0, R9:  0x0000000000000600, R10: 0x00000000003e0000, R11: 0xffffff826746d950
R12: 0xffffff825d578000, R13: 0xffffff825d97b2f8, R14: 0x0000000000000004, R15: 0x0000000000040470
RFL: 0x0000000000010286, RIP: 0xffffff7f88f2c243, CS:  0x0000000000000008, SS:  0x0000000000000010
Fault CR2: 0x0000000000000050, Error code: 0x0000000000000000, Fault CPU: 0x2

Backtrace (CPU 2), Frame : Return Address
0xffffff8226cd3410 : 0xffffff8008822fa9 mach_kernel : _panic + 0xc9
0xffffff8226cd3490 : 0xffffff80088dbf5e mach_kernel : _kernel_trap + 0x7fe
0xffffff8226cd3660 : 0xffffff80088f3456 mach_kernel : _return_from_trap + 0xe6
0xffffff8226cd3680 : 0xffffff7f88f2c243 net.lundman.zfs : _zio_vdev_child_io + 0x1c3
0xffffff8226cd3870 : 0xffffff7f88ec5026 net.lundman.zfs : _vdev_mirror_io_start + 0x146
0xffffff8226cd3930 : 0xffffff7f88f2f8bc net.lundman.zfs : _zio_vdev_io_start + 0x7c
0xffffff8226cd39a0 : 0xffffff7f88f2c78a net.lundman.zfs : ___zio_execute + 0x12a
0xffffff8226cd39e0 : 0xffffff7f88f2bccb net.lundman.zfs : _zio_nowait + 0x5b
0xffffff8226cd3a00 : 0xffffff7f88e81be5 net.lundman.zfs : _dsl_scan_scrub_cb + 0x475
0xffffff8226cd3ab0 : 0xffffff7f88e7dff6 net.lundman.zfs : _dsl_scan_ddt_entry + 0x106
0xffffff8226cd3ba0 : 0xffffff7f88e7f8b7 net.lundman.zfs : _dsl_scan_ddt + 0xf7
0xffffff8226cd3d70 : 0xffffff7f88e7f134 net.lundman.zfs : _dsl_scan_visit + 0x64
0xffffff8226cd3dd0 : 0xffffff7f88e7e6f2 net.lundman.zfs : _dsl_scan_sync + 0x5d2
0xffffff8226cd3e30 : 0xffffff7f88ea3b12 net.lundman.zfs : _spa_sync + 0x4b2
0xffffff8226cd3ee0 : 0xffffff7f88eb40d6 net.lundman.zfs : _txg_sync_thread + 0x3e6
0xffffff8226cd3fb0 : 0xffffff80088d7127 mach_kernel : _call_continuation + 0x17
      Kernel Extensions in backtrace:
         net.lundman.zfs(1.0)[EE505563-3BB5-3FDC-B129-2535402BFFF3]@0xffffff7f88e2b000->0xffffff7f8903cfff
            dependency: com.apple.iokit.IOStorageFamily(1.9)[9B09B065-7F11-3241-B194-B72E5C23548B]@0xffffff7f88dec000
            dependency: net.lundman.spl(1.0.0)[205406D0-4396-3572-B257-19B5A81B1084]@0xffffff7f88e1a000

BSD process name corresponding to current thread: kernel_task
Boot args: -v keepsyms=y darkwake=0

@rottegift

@evansus : Import badness : 1f77701 (and with spl @ issue201) and with #define DEBUG 1 in zfs/cmd/zfs_util/zfs_util.c :

Jul  5 14:11:31 localhost bootlog[0]: BOOT_TIME 1404565891 0
Jul  5 14:11:34 localhost kernel[0]: MAC Framework successfully initialized
Jul  5 14:11:39 localhost zfs.util[63]: argv[0]: /System/Library/Filesystems/zfs.fs/Contents/Resources/./zfs.util
Jul  5 14:11:39 localhost zfs.util[62]: argv[0]: /System/Library/Filesystems/zfs.fs/Contents/Resources/./zfs.util
Jul  5 14:11:39 localhost zfs.util[62]: argv[1]: -p
Jul  5 14:11:39 localhost zfs.util[63]: argv[1]: -p
Jul  5 14:11:39 localhost zfs.util[66]: argv[0]: /System/Library/Filesystems/zfs.fs/Contents/Resources/./zfs.util
Jul  5 14:11:39 localhost zfs.util[62]: argv[2]: disk6s1
Jul  5 14:11:39 localhost zfs.util[62]: argv[3]: removable
Jul  5 14:11:39 localhost zfs.util[66]: argv[1]: -p
Jul  5 14:11:39 localhost zfs.util[63]: argv[2]: disk7s1
Jul  5 14:11:39 localhost zfs.util[62]: argv[4]: readonly
Jul  5 14:11:39 localhost zfs.util[66]: argv[2]: disk9s1
Jul  5 14:11:39 localhost zfs.util[63]: argv[3]: removable
Jul  5 14:11:39 localhost zfs.util[62]: zfs.util called with option p
Jul  5 14:11:39 localhost zfs.util[66]: argv[3]: removable
Jul  5 14:11:39 localhost zfs.util[67]: argv[0]: /System/Library/Filesystems/zfs.fs/Contents/Resources/./zfs.util
Jul  5 14:11:39 localhost zfs.util[66]: argv[4]: readonly
Jul  5 14:11:39 localhost zfs.util[67]: argv[1]: -p
Jul  5 14:11:39 localhost zfs.util[66]: zfs.util called with option p
Jul  5 14:11:39 localhost zfs.util[63]: argv[4]: readonly
Jul  5 14:11:39 localhost zfs.util[62]: blockdevice is /dev/disk6s1
Jul  5 14:11:39 localhost zfs.util[66]: blockdevice is /dev/disk9s1
Jul  5 14:11:39 localhost zfs.util[63]: zfs.util called with option p
Jul  5 14:11:39 localhost zfs.util[62]: +zfs_probe : devpath /dev/rdisk6s1
Jul  5 14:11:39 localhost zfs.util[66]: +zfs_probe : devpath /dev/rdisk9s1
Jul  5 14:11:39 localhost zfs.util[67]: argv[2]: disk10s1
Jul  5 14:11:39 localhost zfs.util[67]: argv[3]: removable
Jul  5 14:11:39 localhost zfs.util[63]: blockdevice is /dev/disk7s1
Jul  5 14:11:39 localhost zfs.util[67]: argv[4]: readonly
Jul  5 14:11:39 localhost zfs.util[67]: zfs.util called with option p
Jul  5 14:11:39 localhost zfs.util[63]: +zfs_probe : devpath /dev/rdisk7s1
Jul  5 14:11:39 localhost zfs.util[67]: blockdevice is /dev/disk10s1
Jul  5 14:11:39 localhost zfs.util[67]: +zfs_probe : devpath /dev/rdisk10s1
Jul  5 14:11:39 localhost zfs.util[70]: argv[0]: /System/Library/Filesystems/zfs.fs/Contents/Resources/./zfs.util
Jul  5 14:11:39 localhost zfs.util[70]: argv[1]: -p
Jul  5 14:11:39 localhost zfs.util[70]: argv[2]: disk12s1
Jul  5 14:11:39 localhost zfs.util[70]: argv[3]: removable
Jul  5 14:11:39 localhost zfs.util[70]: argv[4]: readonly
Jul  5 14:11:39 localhost zfs.util[70]: zfs.util called with option p
Jul  5 14:11:39 localhost zfs.util[70]: blockdevice is /dev/disk12s1
Jul  5 14:11:39 localhost zfs.util[70]: +zfs_probe : devpath /dev/rdisk12s1
Jul  5 14:11:39 localhost zfs.util[68]: argv[0]: /System/Library/Filesystems/zfs.fs/Contents/Resources/./zfs.util
Jul  5 14:11:39 localhost zfs.util[68]: argv[1]: -p
Jul  5 14:11:39 localhost zfs.util[68]: argv[2]: disk11s1
Jul  5 14:11:39 localhost zfs.util[68]: argv[3]: removable
Jul  5 14:11:39 localhost zfs.util[72]: argv[0]: /System/Library/Filesystems/zfs.fs/Contents/Resources/./zfs.util
Jul  5 14:11:39 localhost zfs.util[68]: argv[4]: readonly
Jul  5 14:11:39 localhost zfs.util[72]: argv[1]: -p
Jul  5 14:11:39 localhost zfs.util[68]: zfs.util called with option p
Jul  5 14:11:39 localhost zfs.util[72]: argv[2]: disk14s1
Jul  5 14:11:39 localhost zfs.util[68]: blockdevice is /dev/disk11s1
Jul  5 14:11:39 localhost zfs.util[72]: argv[3]: removable
Jul  5 14:11:39 localhost zfs.util[72]: argv[4]: readonly
Jul  5 14:11:39 localhost zfs.util[68]: +zfs_probe : devpath /dev/rdisk11s1
Jul  5 14:11:39 localhost zfs.util[72]: zfs.util called with option p
Jul  5 14:11:39 localhost zfs.util[72]: blockdevice is /dev/disk14s1
Jul  5 14:11:39 localhost zfs.util[72]: +zfs_probe : devpath /dev/rdisk14s1
Jul  5 14:11:39 localhost zfs.util[71]: argv[0]: /System/Library/Filesystems/zfs.fs/Contents/Resources/./zfs.util
Jul  5 14:11:39 localhost zfs.util[74]: argv[0]: /System/Library/Filesystems/zfs.fs/Contents/Resources/./zfs.util
Jul  5 14:11:39 localhost zfs.util[71]: argv[1]: -p
Jul  5 14:11:39 localhost zfs.util[74]: argv[1]: -p
Jul  5 14:11:39 localhost zfs.util[75]: argv[0]: /System/Library/Filesystems/zfs.fs/Contents/Resources/./zfs.util
Jul  5 14:11:39 localhost zfs.util[71]: argv[2]: disk13s1
Jul  5 14:11:39 localhost zfs.util[74]: argv[2]: disk16s1
Jul  5 14:11:39 localhost zfs.util[75]: argv[1]: -p
Jul  5 14:11:39 localhost zfs.util[71]: argv[3]: removable
Jul  5 14:11:39 localhost zfs.util[74]: argv[3]: removable
Jul  5 14:11:39 localhost zfs.util[71]: argv[4]: readonly
Jul  5 14:11:39 localhost zfs.util[75]: argv[2]: disk17s1
Jul  5 14:11:39 localhost zfs.util[74]: argv[4]: readonly
Jul  5 14:11:39 localhost zfs.util[75]: argv[3]: removable
Jul  5 14:11:39 localhost zfs.util[71]: zfs.util called with option p
Jul  5 14:11:39 localhost zfs.util[74]: zfs.util called with option p
Jul  5 14:11:39 localhost zfs.util[75]: argv[4]: readonly
Jul  5 14:11:39 localhost zfs.util[71]: blockdevice is /dev/disk13s1
Jul  5 14:11:39 localhost zfs.util[74]: blockdevice is /dev/disk16s1
Jul  5 14:11:39 localhost zfs.util[71]: +zfs_probe : devpath /dev/rdisk13s1
Jul  5 14:11:39 localhost zfs.util[75]: zfs.util called with option p
Jul  5 14:11:39 localhost zfs.util[74]: +zfs_probe : devpath /dev/rdisk16s1
Jul  5 14:11:39 localhost zfs.util[75]: blockdevice is /dev/disk17s1
Jul  5 14:11:39 localhost zfs.util[75]: +zfs_probe : devpath /dev/rdisk17s1
Jul  5 14:11:39 localhost zfs.util[79]: argv[0]: /System/Library/Filesystems/zfs.fs/Contents/Resources/./zfs.util
Jul  5 14:11:39 localhost zfs.util[79]: argv[1]: -p
Jul  5 14:11:39 localhost zfs.util[79]: argv[2]: disk21s1
Jul  5 14:11:39 localhost zfs.util[79]: argv[3]: removable
Jul  5 14:11:39 localhost zfs.util[79]: argv[4]: readonly
Jul  5 14:11:39 localhost zfs.util[79]: zfs.util called with option p
Jul  5 14:11:39 localhost zfs.util[79]: blockdevice is /dev/disk21s1
Jul  5 14:11:39 localhost zfs.util[79]: +zfs_probe : devpath /dev/rdisk21s1
Jul  5 14:11:39 localhost zfs.util[80]: argv[0]: /System/Library/Filesystems/zfs.fs/Contents/Resources/./zfs.util
Jul  5 14:11:39 localhost zfs.util[80]: argv[1]: -p
Jul  5 14:11:39 localhost zfs.util[80]: argv[2]: disk22s1
Jul  5 14:11:39 localhost zfs.util[80]: argv[3]: removable
Jul  5 14:11:39 localhost zfs.util[80]: argv[4]: readonly
Jul  5 14:11:39 localhost zfs.util[80]: zfs.util called with option p
Jul  5 14:11:39 localhost zfs.util[80]: blockdevice is /dev/disk22s1
Jul  5 14:11:39 localhost zfs.util[80]: +zfs_probe : devpath /dev/rdisk22s1
Jul  5 14:11:39 localhost zfs.util[85]: argv[0]: /System/Library/Filesystems/zfs.fs/Contents/Resources/./zfs.util
Jul  5 14:11:39 localhost zfs.util[85]: argv[1]: -p
Jul  5 14:11:39 localhost zfs.util[85]: argv[2]: disk27s1
Jul  5 14:11:39 localhost zfs.util[85]: argv[3]: removable
Jul  5 14:11:39 localhost zfs.util[85]: argv[4]: readonly
Jul  5 14:11:39 localhost zfs.util[85]: zfs.util called with option p
Jul  5 14:11:39 localhost zfs.util[85]: blockdevice is /dev/disk27s1
Jul  5 14:11:39 localhost zfs.util[85]: +zfs_probe : devpath /dev/rdisk27s1
Jul  5 14:11:39 localhost zfs.util[62]: guid 426320761368465395
Jul  5 14:11:39 localhost zfs.util[66]: guid 15380268902317402435
Jul  5 14:11:39 localhost zfs.util[62]: -zfs_probe : ret -1
Jul  5 14:11:39 localhost zfs.util[66]: -zfs_probe : ret -1
Jul  5 14:11:39 localhost zfs.util[68]: guid 15701963728217618441
Jul  5 14:11:39 localhost zfs.util[62]: FSUC_PROBE /dev/disk6s1 : FSUR_RECOGNIZED : poolguid 426320761368465395
Jul  5 14:11:39 localhost zfs.util[68]: -zfs_probe : ret -1
Jul  5 14:11:39 localhost zfs.util[66]: FSUC_PROBE /dev/disk9s1 : FSUR_RECOGNIZED : poolguid 15380268902317402435
Jul  5 14:11:39 localhost zfs.util[67]: guid 5933279091430968458
Jul  5 14:11:39 localhost zfs.util[68]: FSUC_PROBE /dev/disk11s1 : FSUR_RECOGNIZED : poolguid 15701963728217618441
Jul  5 14:11:39 localhost zfs.util[63]: guid 426320761368465395
Jul  5 14:11:39 localhost zfs.util[67]: -zfs_probe : ret -1
Jul  5 14:11:39 localhost zfs.util[63]: -zfs_probe : ret -1
Jul  5 14:11:39 localhost zfs.util[67]: FSUC_PROBE /dev/disk10s1 : FSUR_RECOGNIZED : poolguid 5933279091430968458
Jul  5 14:11:39 localhost zfs.util[63]: FSUC_PROBE /dev/disk7s1 : FSUR_RECOGNIZED : poolguid 426320761368465395
Jul  5 14:11:39 localhost zfs.util[70]: guid 426320761368465395
Jul  5 14:11:39 localhost zfs.util[70]: -zfs_probe : ret -1
Jul  5 14:11:39 localhost zfs.util[70]: FSUC_PROBE /dev/disk12s1 : FSUR_RECOGNIZED : poolguid 426320761368465395
Jul  5 14:11:39 localhost zfs.util[72]: guid 426320761368465395
Jul  5 14:11:39 localhost zfs.util[90]: argv[0]: /System/Library/Filesystems/zfs.fs/Contents/Resources/./zfs.util
Jul  5 14:11:39 localhost zfs.util[72]: -zfs_probe : ret -1
Jul  5 14:11:39 localhost zfs.util[90]: argv[1]: -p
Jul  5 14:11:39 localhost zfs.util[72]: FSUC_PROBE /dev/disk14s1 : FSUR_RECOGNIZED : poolguid 426320761368465395
Jul  5 14:11:39 localhost zfs.util[90]: argv[2]: disk32s1
Jul  5 14:11:39 localhost zfs.util[90]: argv[3]: removable
Jul  5 14:11:39 localhost zfs.util[90]: argv[4]: readonly
Jul  5 14:11:39 localhost zfs.util[90]: zfs.util called with option p
Jul  5 14:11:39 localhost zfs.util[71]: guid 15380268902317402435
Jul  5 14:11:39 localhost zfs.util[90]: blockdevice is /dev/disk32s1
Jul  5 14:11:39 localhost zfs.util[71]: -zfs_probe : ret -1
Jul  5 14:11:39 localhost zfs.util[90]: +zfs_probe : devpath /dev/rdisk32s1
Jul  5 14:11:39 localhost zfs.util[71]: FSUC_PROBE /dev/disk13s1 : FSUR_RECOGNIZED : poolguid 15380268902317402435
Jul  5 14:11:39 localhost zfs.util[74]: guid 15701963728217618441
Jul  5 14:11:39 localhost zfs.util[74]: -zfs_probe : ret -1
Jul  5 14:11:39 localhost zfs.util[74]: FSUC_PROBE /dev/disk16s1 : FSUR_RECOGNIZED : poolguid 15701963728217618441
Jul  5 14:11:39 localhost zfs.util[75]: guid 5933279091430968458
Jul  5 14:11:39 localhost zfs.util[75]: -zfs_probe : ret -1
Jul  5 14:11:39 localhost zfs.util[75]: FSUC_PROBE /dev/disk17s1 : FSUR_RECOGNIZED : poolguid 5933279091430968458
Jul  5 14:11:39 localhost zfs.util[90]: -zfs_probe : ret -2
Jul  5 14:11:39 localhost zfs.util[90]: FSUC_PROBE /dev/disk32s1 : FSUR_UNRECOGNIZED
Jul  5 14:11:39 localhost zfs.util[85]: guid 5933279091430968458
Jul  5 14:11:39 localhost zfs.util[85]: -zfs_probe : ret -1
Jul  5 14:11:39 localhost zfs.util[85]: FSUC_PROBE /dev/disk27s1 : FSUR_RECOGNIZED : poolguid 5933279091430968458
Jul  5 14:11:39 localhost zfs.util[79]: guid 15380268902317402435
Jul  5 14:11:39 localhost zfs.util[79]: -zfs_probe : ret -1
Jul  5 14:11:39 localhost zfs.util[79]: FSUC_PROBE /dev/disk21s1 : FSUR_RECOGNIZED : poolguid 15380268902317402435
Jul  5 14:11:39 localhost zfs.util[137]: argv[0]: /System/Library/Filesystems/zfs.fs/Contents/Resources/./zfs.util
Jul  5 14:11:39 localhost zfs.util[137]: argv[1]: -p
Jul  5 14:11:39 localhost zfs.util[137]: argv[2]: disk19s2
Jul  5 14:11:39 localhost zfs.util[137]: argv[3]: removable
Jul  5 14:11:39 localhost zfs.util[137]: argv[4]: readonly
Jul  5 14:11:39 localhost zfs.util[137]: zfs.util called with option p
Jul  5 14:11:39 localhost zfs.util[137]: blockdevice is /dev/disk19s2
Jul  5 14:11:39 localhost zfs.util[137]: +zfs_probe : devpath /dev/rdisk19s2
Jul  5 14:11:39 localhost zfs.util[138]: argv[0]: /System/Library/Filesystems/zfs.fs/Contents/Resources/./zfs.util
Jul  5 14:11:39 localhost zfs.util[138]: argv[1]: -p
Jul  5 14:11:39 localhost zfs.util[138]: argv[2]: disk20s2
Jul  5 14:11:39 localhost zfs.util[138]: argv[3]: removable
Jul  5 14:11:39 localhost zfs.util[138]: argv[4]: readonly
Jul  5 14:11:39 localhost zfs.util[138]: zfs.util called with option p
Jul  5 14:11:39 localhost zfs.util[138]: blockdevice is /dev/disk20s2
Jul  5 14:11:39 localhost zfs.util[138]: +zfs_probe : devpath /dev/rdisk20s2
Jul  5 14:11:39 localhost zfs.util[139]: argv[0]: /System/Library/Filesystems/zfs.fs/Contents/Resources/./zfs.util
Jul  5 14:11:39 localhost zfs.util[139]: argv[1]: -p
Jul  5 14:11:39 localhost zfs.util[139]: argv[2]: disk23s2
Jul  5 14:11:39 localhost zfs.util[139]: argv[3]: removable
Jul  5 14:11:39 localhost zfs.util[139]: argv[4]: readonly
Jul  5 14:11:39 localhost zfs.util[139]: zfs.util called with option p
Jul  5 14:11:39 localhost zfs.util[139]: blockdevice is /dev/disk23s2
Jul  5 14:11:39 localhost zfs.util[140]: argv[0]: /System/Library/Filesystems/zfs.fs/Contents/Resources/./zfs.util
Jul  5 14:11:39 localhost zfs.util[139]: +zfs_probe : devpath /dev/rdisk23s2
Jul  5 14:11:39 localhost zfs.util[140]: argv[1]: -p
Jul  5 14:11:39 localhost zfs.util[140]: argv[2]: disk28s2
Jul  5 14:11:39 localhost zfs.util[140]: argv[3]: removable
Jul  5 14:11:39 localhost zfs.util[140]: argv[4]: readonly
Jul  5 14:11:39 localhost zfs.util[140]: zfs.util called with option p
Jul  5 14:11:39 localhost zfs.util[140]: blockdevice is /dev/disk28s2
Jul  5 14:11:39 localhost zfs.util[140]: +zfs_probe : devpath /dev/rdisk28s2
Jul  5 14:11:39 localhost zfs.util[141]: argv[0]: /System/Library/Filesystems/zfs.fs/Contents/Resources/./zfs.util
Jul  5 14:11:39 localhost zfs.util[141]: argv[1]: -p
Jul  5 14:11:39 localhost zfs.util[141]: argv[2]: disk30s2
Jul  5 14:11:39 localhost zfs.util[141]: argv[3]: removable
Jul  5 14:11:39 localhost zfs.util[141]: argv[4]: readonly
Jul  5 14:11:39 localhost zfs.util[141]: zfs.util called with option p
Jul  5 14:11:39 localhost zfs.util[141]: blockdevice is /dev/disk30s2
Jul  5 14:11:39 localhost zfs.util[141]: +zfs_probe : devpath /dev/rdisk30s2
Jul  5 14:11:39 localhost zfs.util[142]: argv[0]: /System/Library/Filesystems/zfs.fs/Contents/Resources/./zfs.util
Jul  5 14:11:39 localhost zfs.util[142]: argv[1]: -p
Jul  5 14:11:39 localhost zfs.util[142]: argv[2]: disk31s2
Jul  5 14:11:39 localhost zfs.util[142]: argv[3]: removable
Jul  5 14:11:39 localhost zfs.util[142]: argv[4]: readonly
Jul  5 14:11:39 localhost zfs.util[142]: zfs.util called with option p
Jul  5 14:11:39 localhost zfs.util[142]: blockdevice is /dev/disk31s2
Jul  5 14:11:39 localhost zfs.util[142]: +zfs_probe : devpath /dev/rdisk31s2
Jul  5 14:11:39 localhost zfs.util[138]: guid 15701963728217618441
Jul  5 14:11:39 localhost zfs.util[138]: -zfs_probe : ret -1
Jul  5 14:11:39 localhost zfs.util[138]: FSUC_PROBE /dev/disk20s2 : FSUR_RECOGNIZED : poolguid 15701963728217618441
Jul  5 14:11:39 localhost zfs.util[142]: guid 5933279091430968458
Jul  5 14:11:39 localhost zfs.util[142]: -zfs_probe : ret -1
Jul  5 14:11:39 localhost zfs.util[142]: FSUC_PROBE /dev/disk31s2 : FSUR_RECOGNIZED : poolguid 5933279091430968458
Jul  5 14:11:39 localhost zfs.util[137]: guid 15701963728217618441
Jul  5 14:11:39 localhost zfs.util[137]: -zfs_probe : ret -1
Jul  5 14:11:39 localhost zfs.util[137]: FSUC_PROBE /dev/disk19s2 : FSUR_RECOGNIZED : poolguid 15701963728217618441
Jul  5 14:11:39 localhost zfs.util[139]: guid 15701963728217618441
Jul  5 14:11:39 localhost zfs.util[139]: -zfs_probe : ret -1
Jul  5 14:11:39 localhost zfs.util[141]: guid 5933279091430968458
Jul  5 14:11:39 localhost zfs.util[139]: FSUC_PROBE /dev/disk23s2 : FSUR_RECOGNIZED : poolguid 15701963728217618441
Jul  5 14:11:39 localhost zfs.util[141]: -zfs_probe : ret -1
Jul  5 14:11:39 localhost zfs.util[141]: FSUC_PROBE /dev/disk30s2 : FSUR_RECOGNIZED : poolguid 5933279091430968458
Jul  5 14:11:39 localhost zfs.util[140]: guid 5933279091430968458
Jul  5 14:11:39 localhost zfs.util[140]: -zfs_probe : ret -1
Jul  5 14:11:39 localhost zfs.util[140]: FSUC_PROBE /dev/disk28s2 : FSUR_RECOGNIZED : poolguid 5933279091430968458
Jul  5 14:11:39 localhost zfs.util[154]: argv[0]: /System/Library/Filesystems/zfs.fs/Contents/Resources/./zfs.util
Jul  5 14:11:39 localhost zfs.util[154]: argv[1]: -p
Jul  5 14:11:39 localhost zfs.util[154]: argv[2]: disk29s2
Jul  5 14:11:39 localhost zfs.util[154]: argv[3]: removable
Jul  5 14:11:39 localhost zfs.util[154]: argv[4]: readonly
Jul  5 14:11:39 localhost zfs.util[154]: zfs.util called with option p
Jul  5 14:11:39 localhost zfs.util[154]: blockdevice is /dev/disk29s2
Jul  5 14:11:39 localhost zfs.util[154]: +zfs_probe : devpath /dev/rdisk29s2
Jul  5 14:11:39 localhost zfs.util[154]: guid 5933279091430968458
Jul  5 14:11:39 localhost zfs.util[154]: -zfs_probe : ret -1
Jul  5 14:11:39 localhost zfs.util[154]: FSUC_PROBE /dev/disk29s2 : FSUR_RECOGNIZED : poolguid 5933279091430968458
Jul  5 14:11:39 localhost zfs.util[162]: argv[0]: /System/Library/Filesystems/zfs.fs/Contents/Resources/./zfs.util
Jul  5 14:11:39 localhost zfs.util[162]: argv[1]: -p
Jul  5 14:11:39 localhost zfs.util[162]: argv[2]: disk25s2
Jul  5 14:11:39 localhost zfs.util[162]: argv[3]: removable
Jul  5 14:11:39 localhost zfs.util[162]: argv[4]: readonly
Jul  5 14:11:39 localhost zfs.util[162]: zfs.util called with option p
Jul  5 14:11:39 localhost zfs.util[162]: blockdevice is /dev/disk25s2
Jul  5 14:11:39 localhost zfs.util[162]: +zfs_probe : devpath /dev/rdisk25s2
Jul  5 14:11:39 localhost zfs.util[162]: guid 5933279091430968458
Jul  5 14:11:39 localhost zfs.util[162]: -zfs_probe : ret -1
Jul  5 14:11:39 localhost zfs.util[162]: FSUC_PROBE /dev/disk25s2 : FSUR_RECOGNIZED : poolguid 5933279091430968458
Jul  5 14:11:39 localhost zfs.util[80]: guid 15380268902317402435
Jul  5 14:11:39 localhost zfs.util[80]: -zfs_probe : ret -1
Jul  5 14:11:39 localhost zfs.util[80]: FSUC_PROBE /dev/disk22s1 : FSUR_RECOGNIZED : poolguid 15380268902317402435
Jul  5 14:11:43 cla.use.net zed[320]: ZFS Event Daemon 0.6.3-1
Jul  5 14:11:43 cla.use.net syslog[323]: zed started
Jul  5 14:11:43 cla.use.net zed[320]: Processing events since eid=0
Jul  5 14:11:44 cla.use.net zfs.util[352]: argv[0]: /System/Library/Filesystems/zfs.fs/Contents/Resources/./zfs.util
Jul  5 14:11:44 cla.use.net zfs.util[352]: argv[1]: -p
Jul  5 14:11:44 cla.use.net zfs.util[352]: argv[2]: disk26s2
Jul  5 14:11:44 cla.use.net zfs.util[352]: argv[3]: removable
Jul  5 14:11:44 cla.use.net zfs.util[352]: argv[4]: readonly
Jul  5 14:11:44 cla.use.net zfs.util[352]: zfs.util called with option p
Jul  5 14:11:44 cla.use.net zfs.util[352]: blockdevice is /dev/disk26s2
Jul  5 14:11:44 cla.use.net zfs.util[352]: +zfs_probe : devpath /dev/rdisk26s2
Jul  5 14:11:44 cla.use.net zfs.util[352]: guid 5933279091430968458
Jul  5 14:11:44 cla.use.net zfs.util[352]: -zfs_probe : ret -1
Jul  5 14:11:44 cla.use.net zfs.util[352]: FSUC_PROBE /dev/disk26s2 : FSUR_RECOGNIZED : poolguid 5933279091430968458
Jul  5 14:11:44 cla.use.net xcscredd[362]: Caught an exception trying to contact collabd Remote service call failed: Response{0ms succeeded=0 responseStatus=failed response=Error Domain=NSURLErrorDomain Code=-1004 "Could not connect to the server." UserInfo=0x7ff523c07670 {NSUnderlyingError=0x7ff523c06f00 "Could not connect to the server.", NSErrorFailingURLStringKey=http://localhost:4444/svc, NSErrorFailingURLKey=http://localhost:4444/svc, NSLocalizedDescription=Could not connect to the server.}}
Jul  5 14:11:45 cla.use.net apsd[280]: Unrecognized leaf certificate
Jul  5 14:11:47 cla.use.net zfs.util[472]: argv[0]: /System/Library/Filesystems/zfs.fs/Contents/Resources/./zfs.util
Jul  5 14:11:47 cla.use.net zfs.util[472]: argv[1]: -p
Jul  5 14:11:47 cla.use.net zfs.util[472]: argv[2]: disk32s3
Jul  5 14:11:47 cla.use.net zfs.util[472]: argv[3]: removable
Jul  5 14:11:47 cla.use.net zfs.util[472]: argv[4]: readonly
Jul  5 14:11:47 cla.use.net zfs.util[472]: zfs.util called with option p
Jul  5 14:11:47 cla.use.net zfs.util[472]: blockdevice is /dev/disk32s3
Jul  5 14:11:47 cla.use.net zfs.util[472]: +zfs_probe : devpath /dev/rdisk32s3
Jul  5 14:11:47 cla.use.net zfs.util[472]: -zfs_probe : ret -2
Jul  5 14:11:47 cla.use.net zfs.util[472]: FSUC_PROBE /dev/disk32s3 : FSUR_UNRECOGNIZED
Jul  5 14:11:47 cla.use.net zfs.util[473]: argv[0]: /System/Library/Filesystems/zfs.fs/Contents/Resources/./zfs.util
Jul  5 14:11:47 cla.use.net zfs.util[473]: argv[1]: -p
Jul  5 14:11:47 cla.use.net zfs.util[473]: argv[2]: disk32s4
Jul  5 14:11:47 cla.use.net zfs.util[473]: argv[3]: removable
Jul  5 14:11:47 cla.use.net zfs.util[473]: argv[4]: readonly
Jul  5 14:11:47 cla.use.net zfs.util[473]: zfs.util called with option p
Jul  5 14:11:47 cla.use.net zfs.util[473]: blockdevice is /dev/disk32s4
Jul  5 14:11:47 cla.use.net zfs.util[473]: +zfs_probe : devpath /dev/rdisk32s4
Jul  5 14:11:47 cla.use.net zfs.util[473]: -zfs_probe : ret -2
Jul  5 14:11:47 cla.use.net zfs.util[473]: FSUC_PROBE /dev/disk32s4 : FSUR_UNRECOGNIZED
Jul  5 14:11:47 cla.use.net zfs.util[474]: argv[0]: /System/Library/Filesystems/zfs.fs/Contents/Resources/./zfs.util
Jul  5 14:11:47 cla.use.net zfs.util[474]: argv[1]: -p
Jul  5 14:11:47 cla.use.net zfs.util[474]: argv[2]: disk32s5
Jul  5 14:11:47 cla.use.net zfs.util[474]: argv[3]: removable
Jul  5 14:11:47 cla.use.net zfs.util[474]: argv[4]: readonly
Jul  5 14:11:47 cla.use.net zfs.util[474]: zfs.util called with option p
Jul  5 14:11:47 cla.use.net zfs.util[474]: blockdevice is /dev/disk32s5
Jul  5 14:11:47 cla.use.net zfs.util[474]: +zfs_probe : devpath /dev/rdisk32s5
Jul  5 14:11:47 cla.use.net zfs.util[474]: -zfs_probe : ret -2
Jul  5 14:11:47 cla.use.net zfs.util[474]: FSUC_PROBE /dev/disk32s5 : FSUR_UNRECOGNIZED
Jul  5 14:11:47 cla.use.net zfs.util[475]: argv[0]: /System/Library/Filesystems/zfs.fs/Contents/Resources/./zfs.util
Jul  5 14:11:47 cla.use.net zfs.util[475]: argv[1]: -p
Jul  5 14:11:47 cla.use.net zfs.util[475]: argv[2]: disk32s2
Jul  5 14:11:47 cla.use.net zfs.util[475]: argv[3]: removable
Jul  5 14:11:47 cla.use.net zfs.util[475]: argv[4]: readonly
Jul  5 14:11:47 cla.use.net zfs.util[475]: zfs.util called with option p
Jul  5 14:11:47 cla.use.net zfs.util[475]: blockdevice is /dev/disk32s2
Jul  5 14:11:47 cla.use.net zfs.util[475]: +zfs_probe : devpath /dev/rdisk32s2
Jul  5 14:11:47 cla.use.net zfs.util[475]: -zfs_probe : ret -2
Jul  5 14:11:47 cla.use.net zfs.util[475]: FSUC_PROBE /dev/disk32s2 : FSUR_UNRECOGNIZED
Jul  5 14:11:47 cla.use.net zfs.util[479]: argv[0]: /System/Library/Filesystems/zfs.fs/Contents/Resources/./zfs.util
Jul  5 14:11:47 cla.use.net zfs.util[479]: argv[1]: -p
Jul  5 14:11:47 cla.use.net zfs.util[479]: argv[2]: disk33s1
Jul  5 14:11:47 cla.use.net zfs.util[479]: argv[3]: removable
Jul  5 14:11:47 cla.use.net zfs.util[479]: argv[4]: readonly
Jul  5 14:11:47 cla.use.net zfs.util[479]: zfs.util called with option p
Jul  5 14:11:47 cla.use.net zfs.util[479]: blockdevice is /dev/disk33s1
Jul  5 14:11:47 cla.use.net zfs.util[479]: +zfs_probe : devpath /dev/rdisk33s1
Jul  5 14:11:47 cla.use.net zfs.util[479]: -zfs_probe : ret -2
Jul  5 14:11:47 cla.use.net zfs.util[479]: FSUC_PROBE /dev/disk33s1 : FSUR_UNRECOGNIZED
Jul  5 14:11:48 cla.use.net zfs.util[490]: argv[0]: /System/Library/Filesystems/zfs.fs/Contents/Resources/./zfs.util
Jul  5 14:11:48 cla.use.net zfs.util[490]: argv[1]: -p
Jul  5 14:11:48 cla.use.net zfs.util[490]: argv[2]: disk34s4
Jul  5 14:11:48 cla.use.net zfs.util[490]: argv[3]: removable
Jul  5 14:11:48 cla.use.net zfs.util[490]: argv[4]: readonly
Jul  5 14:11:48 cla.use.net zfs.util[490]: zfs.util called with option p
Jul  5 14:11:48 cla.use.net zfs.util[490]: blockdevice is /dev/disk34s4
Jul  5 14:11:48 cla.use.net zfs.util[490]: +zfs_probe : devpath /dev/rdisk34s4
Jul  5 14:11:48 cla.use.net zfs.util[490]: -zfs_probe : ret -2
Jul  5 14:11:48 cla.use.net zfs.util[490]: FSUC_PROBE /dev/disk34s4 : FSUR_UNRECOGNIZED
Jul  5 14:11:48 cla.use.net zfs.util[491]: argv[0]: /System/Library/Filesystems/zfs.fs/Contents/Resources/./zfs.util
Jul  5 14:11:48 cla.use.net zfs.util[491]: argv[1]: -p
Jul  5 14:11:48 cla.use.net zfs.util[491]: argv[2]: disk34s5
Jul  5 14:11:48 cla.use.net zfs.util[491]: argv[3]: removable
Jul  5 14:11:48 cla.use.net zfs.util[491]: argv[4]: readonly
Jul  5 14:11:48 cla.use.net zfs.util[491]: zfs.util called with option p
Jul  5 14:11:48 cla.use.net zfs.util[491]: blockdevice is /dev/disk34s5
Jul  5 14:11:48 cla.use.net zfs.util[491]: +zfs_probe : devpath /dev/rdisk34s5
Jul  5 14:11:48 cla.use.net zfs.util[491]: -zfs_probe : ret -2
Jul  5 14:11:48 cla.use.net zfs.util[491]: FSUC_PROBE /dev/disk34s5 : FSUR_UNRECOGNIZED
Jul  5 14:11:48 cla.use.net zfs.util[492]: argv[0]: /System/Library/Filesystems/zfs.fs/Contents/Resources/./zfs.util
Jul  5 14:11:48 cla.use.net zfs.util[492]: argv[1]: -p
Jul  5 14:11:48 cla.use.net zfs.util[492]: argv[2]: disk34s3
Jul  5 14:11:48 cla.use.net zfs.util[492]: argv[3]: removable
Jul  5 14:11:48 cla.use.net zfs.util[492]: argv[4]: readonly
Jul  5 14:11:48 cla.use.net zfs.util[492]: zfs.util called with option p
Jul  5 14:11:48 cla.use.net zfs.util[492]: blockdevice is /dev/disk34s3
Jul  5 14:11:48 cla.use.net zfs.util[492]: +zfs_probe : devpath /dev/rdisk34s3
Jul  5 14:11:48 cla.use.net zfs.util[492]: -zfs_probe : ret -2
Jul  5 14:11:48 cla.use.net zfs.util[492]: FSUC_PROBE /dev/disk34s3 : FSUR_UNRECOGNIZED
Jul  5 14:11:48 cla.use.net zfs.util[495]: argv[0]: /System/Library/Filesystems/zfs.fs/Contents/Resources/./zfs.util
Jul  5 14:11:48 cla.use.net zfs.util[495]: argv[1]: -p
Jul  5 14:11:48 cla.use.net zfs.util[495]: argv[2]: disk34s2
Jul  5 14:11:48 cla.use.net zfs.util[495]: argv[3]: removable
Jul  5 14:11:48 cla.use.net zfs.util[495]: argv[4]: readonly
Jul  5 14:11:48 cla.use.net zfs.util[495]: zfs.util called with option p
Jul  5 14:11:48 cla.use.net zfs.util[495]: blockdevice is /dev/disk34s2
Jul  5 14:11:48 cla.use.net zfs.util[495]: +zfs_probe : devpath /dev/rdisk34s2
Jul  5 14:11:48 cla.use.net zfs.util[495]: -zfs_probe : ret -2
Jul  5 14:11:48 cla.use.net zfs.util[495]: FSUC_PROBE /dev/disk34s2 : FSUR_UNRECOGNIZED
Jul  5 14:11:49 cla.use.net SmartwareServiceApp[293]: file names: ((.+\.)((vmwarevm)|(vmdk)|(vmem)|(mem)|(vhd)|(hdd)))|(((\.DS_Store)|(\.localized)|(\.Trash)))
Jul  5 14:11:55 cla.use.net SafariForWebKitDevelopment[565]: WebKit r170816 initialized.
Jul  5 14:12:59 cla.use.net zed[1229]: eid=1 class=statechange 
Jul  5 14:12:59 cla.use.net zed[1231]: eid=2 class=statechange 
Jul  5 14:12:59 cla.use.net zed[1238]: eid=3 class=statechange 
Jul  5 14:12:59 cla.use.net zed[1240]: eid=4 class=statechange 
Jul  5 14:13:00 cla.use.net zed[1247]: eid=5 class=statechange 
Jul  5 14:13:00 cla.use.net zed[1249]: eid=6 class=statechange 
Jul  5 14:13:02 cla.use.net zed[1259]: eid=7 class=statechange 
Jul  5 14:13:03 cla.use.net zed[1265]: eid=8 class=statechange 
Jul  5 14:13:03 cla.use.net zed[1267]: eid=9 class=statechange 
Jul  5 14:13:03 cla.use.net zed[1269]: eid=10 class=statechange 
Jul  5 14:13:04 cla.use.net zed[1271]: eid=11 class=statechange 
Jul  5 14:13:04 cla.use.net zed[1273]: eid=12 class=statechange 
Jul  5 14:13:04 cla.use.net zed[1275]: eid=13 class=config.sync pool=CLATM
Jul  5 14:13:04 cla.use.net zed[1279]: eid=13 class=config.sync pool=CLATM
Jul  5 14:13:04 cla.use.net zed[1282]: eid=14 class=statechange 
Jul  5 14:13:04 cla.use.net zed[1285]: eid=15 class=statechange 
Jul  5 14:13:04 cla.use.net zed[1290]: eid=16 class=statechange 
Jul  5 14:13:04 cla.use.net zed[1292]: eid=17 class=statechange 
Jul  5 14:13:04 cla.use.net zed[1294]: eid=18 class=statechange 
Jul  5 14:13:04 cla.use.net zed[1296]: eid=19 class=statechange 
Jul  5 14:13:04 cla.use.net zed[1298]: eid=20 class=checksum pool=ssdpool
Jul  5 14:13:05 cla.use.net zed[1302]: eid=21 class=statechange 
Jul  5 14:13:05 cla.use.net zed[1304]: eid=22 class=statechange 
Jul  5 14:13:05 cla.use.net zed[1306]: eid=23 class=statechange 
Jul  5 14:13:05 cla.use.net zed[1312]: eid=24 class=statechange 
Jul  5 14:13:05 cla.use.net zed[1314]: eid=25 class=statechange 
Jul  5 14:13:05 cla.use.net zed[1316]: eid=26 class=statechange 
Jul  5 14:13:06 cla.use.net zfs.util[79]: cannot import '15380268902317402435': no such pool available
Jul  5 14:13:06 cla.use.net zfs.util[79]: zpool import error 1
Jul  5 14:13:06 cla.use.net zed[1319]: eid=27 class=config.sync pool=ssdpool
Jul  5 14:13:06 cla.use.net zfs.util[1320]: argv[0]: /System/Library/Filesystems/zfs.fs/Contents/Resources/./zfs.util
Jul  5 14:13:06 cla.use.net zfs.util[1320]: argv[1]: -k
Jul  5 14:13:06 cla.use.net zfs.util[1320]: argv[2]: disk21s1
Jul  5 14:13:06 cla.use.net zfs.util[1320]: zfs.util called with option k
Jul  5 14:13:06 cla.use.net zfs.util[1320]: blockdevice is /dev/disk21s1
Jul  5 14:13:06 cla.use.net zfs.util[1320]: FSUC_GETUUID
Jul  5 14:13:06 cla.use.net zed[1325]: eid=27 class=config.sync pool=ssdpool
Jul  5 14:13:06 cla.use.net zfs.util[1324]: argv[0]: /System/Library/Filesystems/zfs.fs/Contents/Resources/./zfs.util
Jul  5 14:13:06 cla.use.net zfs.util[1324]: argv[1]: -p
Jul  5 14:13:06 cla.use.net zfs.util[1324]: argv[2]: disk21s9
Jul  5 14:13:06 cla.use.net zfs.util[1324]: argv[3]: removable
Jul  5 14:13:06 cla.use.net zfs.util[1324]: argv[4]: readonly
Jul  5 14:13:06 cla.use.net zfs.util[1324]: zfs.util called with option p
Jul  5 14:13:06 cla.use.net zfs.util[1324]: blockdevice is /dev/disk21s9
Jul  5 14:13:06 cla.use.net zfs.util[1324]: +zfs_probe : devpath /dev/rdisk21s9
Jul  5 14:13:06 cla.use.net zfs.util[1324]: -zfs_probe : ret -2
Jul  5 14:13:06 cla.use.net zfs.util[1324]: FSUC_PROBE /dev/disk21s9 : FSUR_UNRECOGNIZED
Jul  5 14:13:06 cla.use.net zed[1331]: eid=28 class=zpool pool=$import
Jul  5 14:13:06 cla.use.net zed[1335]: Pool export $import
Jul  5 14:13:06 cla.use.net zed[1340]: Pool export $import
Jul  5 14:13:38 cla.use.net zfs.util[62]: zpool import error 1
Jul  5 14:13:38 cla.use.net zfs.util[71]: cannot import '15380268902317402435': no such pool available
Jul  5 14:13:38 cla.use.net zfs.util[71]: zpool import error 1
Jul  5 14:13:38 cla.use.net zed[1489]: eid=29 class=zpool pool=$import
Jul  5 14:13:38 cla.use.net zfs.util[1487]: argv[0]: /System/Library/Filesystems/zfs.fs/Contents/Resources/./zfs.util
Jul  5 14:13:38 cla.use.net zfs.util[1487]: argv[1]: -k
Jul  5 14:13:38 cla.use.net zfs.util[1487]: argv[2]: disk6s1
Jul  5 14:13:38 cla.use.net zfs.util[1487]: zfs.util called with option k
Jul  5 14:13:38 cla.use.net zfs.util[1487]: blockdevice is /dev/disk6s1
Jul  5 14:13:38 cla.use.net zfs.util[1487]: FSUC_GETUUID
Jul  5 14:13:38 cla.use.net zfs.util[1488]: argv[0]: /System/Library/Filesystems/zfs.fs/Contents/Resources/./zfs.util
Jul  5 14:13:38 cla.use.net zfs.util[1488]: argv[1]: -k
Jul  5 14:13:38 cla.use.net zfs.util[1488]: argv[2]: disk13s1
Jul  5 14:13:38 cla.use.net zfs.util[1488]: zfs.util called with option k
Jul  5 14:13:38 cla.use.net zfs.util[1488]: blockdevice is /dev/disk13s1
Jul  5 14:13:38 cla.use.net zfs.util[1488]: FSUC_GETUUID
Jul  5 14:13:38 cla.use.net zed[1494]: Pool export $import
Jul  5 14:13:38 cla.use.net zed[1505]: Pool export $import
Jul  5 14:13:47 cla.use.net zfs.util[85]: zpool import error 1
Jul  5 14:13:47 cla.use.net zfs.util[1548]: argv[0]: /System/Library/Filesystems/zfs.fs/Contents/Resources/./zfs.util
Jul  5 14:13:47 cla.use.net zfs.util[1548]: argv[1]: -k
Jul  5 14:13:47 cla.use.net zfs.util[1548]: argv[2]: disk27s1
Jul  5 14:13:47 cla.use.net zfs.util[1548]: zfs.util called with option k
Jul  5 14:13:47 cla.use.net zfs.util[1548]: blockdevice is /dev/disk27s1
Jul  5 14:13:47 cla.use.net zfs.util[1548]: FSUC_GETUUID
Jul  5 14:13:47 cla.use.net zfs.util[1549]: argv[0]: /System/Library/Filesystems/zfs.fs/Contents/Resources/./zfs.util
Jul  5 14:13:47 cla.use.net zfs.util[1549]: argv[1]: -p
Jul  5 14:13:47 cla.use.net zfs.util[1549]: argv[2]: disk27s9
Jul  5 14:13:47 cla.use.net zfs.util[1549]: argv[3]: removable
Jul  5 14:13:47 cla.use.net zfs.util[1549]: argv[4]: readonly
Jul  5 14:13:47 cla.use.net zfs.util[1549]: zfs.util called with option p
Jul  5 14:13:47 cla.use.net zfs.util[1549]: blockdevice is /dev/disk27s9
Jul  5 14:13:47 cla.use.net zfs.util[1549]: +zfs_probe : devpath /dev/rdisk27s9
Jul  5 14:13:47 cla.use.net zfs.util[1549]: -zfs_probe : ret -2
Jul  5 14:13:47 cla.use.net zfs.util[1549]: FSUC_PROBE /dev/disk27s9 : FSUR_UNRECOGNIZED
Jul  5 14:13:58 cla.use.net zfs.util[63]: zpool import error 1
Jul  5 14:13:58 cla.use.net zfs.util[67]: zpool import error 1
Jul  5 14:13:58 cla.use.net zfs.util[1604]: argv[0]: /System/Library/Filesystems/zfs.fs/Contents/Resources/./zfs.util
Jul  5 14:13:58 cla.use.net zfs.util[1605]: argv[0]: /System/Library/Filesystems/zfs.fs/Contents/Resources/./zfs.util
Jul  5 14:13:58 cla.use.net zfs.util[1604]: argv[1]: -k
Jul  5 14:13:58 cla.use.net zfs.util[1605]: argv[1]: -k
Jul  5 14:13:58 cla.use.net zfs.util[1604]: argv[2]: disk7s1
Jul  5 14:13:58 cla.use.net zfs.util[1604]: zfs.util called with option k
Jul  5 14:13:58 cla.use.net zfs.util[1605]: argv[2]: disk10s1
Jul  5 14:13:58 cla.use.net zfs.util[1604]: blockdevice is /dev/disk7s1
Jul  5 14:13:58 cla.use.net zfs.util[1605]: zfs.util called with option k
Jul  5 14:13:58 cla.use.net zfs.util[1604]: FSUC_GETUUID
Jul  5 14:13:58 cla.use.net zfs.util[1605]: blockdevice is /dev/disk10s1
Jul  5 14:13:58 cla.use.net zfs.util[1605]: FSUC_GETUUID
Jul  5 14:13:58 cla.use.net zed[1613]: eid=30 class=statechange 
Jul  5 14:13:58 cla.use.net zed[1616]: eid=31 class=statechange 
Jul  5 14:13:58 cla.use.net zed[1618]: eid=32 class=statechange 
Jul  5 14:13:58 cla.use.net zed[1620]: eid=33 class=statechange 
Jul  5 14:13:58 cla.use.net zed[1622]: eid=34 class=statechange 
Jul  5 14:13:59 cla.use.net zed[1624]: eid=35 class=statechange 
Jul  5 14:14:06 cla.use.net zed[1662]: eid=36 class=statechange 
Jul  5 14:14:06 cla.use.net zed[1664]: eid=37 class=statechange 
Jul  5 14:14:06 cla.use.net zed[1666]: eid=38 class=statechange 
Jul  5 14:14:09 cla.use.net zed[1683]: eid=39 class=config.sync pool=Donkey
Jul  5 14:14:09 cla.use.net zed[1687]: eid=39 class=config.sync pool=Donkey
Jul  5 14:14:31 cla.use.net zfs.util[352]: zpool import error 1
Jul  5 14:14:31 cla.use.net zfs.util[80]: cannot import '15380268902317402435': no such pool available
Jul  5 14:14:31 cla.use.net zfs.util[80]: zpool import error 1
Jul  5 14:14:31 cla.use.net zed[1789]: eid=40 class=zpool pool=$import
Jul  5 14:14:31 cla.use.net zfs.util[1788]: argv[0]: /System/Library/Filesystems/zfs.fs/Contents/Resources/./zfs.util
Jul  5 14:14:31 cla.use.net zfs.util[1788]: argv[1]: -k
Jul  5 14:14:31 cla.use.net zfs.util[1787]: argv[0]: /System/Library/Filesystems/zfs.fs/Contents/Resources/./zfs.util
Jul  5 14:14:31 cla.use.net zfs.util[1788]: argv[2]: disk22s1
Jul  5 14:14:31 cla.use.net zfs.util[1787]: argv[1]: -k
Jul  5 14:14:31 cla.use.net zfs.util[1788]: zfs.util called with option k
Jul  5 14:14:31 cla.use.net zfs.util[1787]: argv[2]: disk26s2
Jul  5 14:14:31 cla.use.net zfs.util[1788]: blockdevice is /dev/disk22s1
Jul  5 14:14:31 cla.use.net zfs.util[1787]: zfs.util called with option k
Jul  5 14:14:31 cla.use.net zfs.util[1787]: blockdevice is /dev/disk26s2
Jul  5 14:14:31 cla.use.net zfs.util[1788]: FSUC_GETUUID
Jul  5 14:14:31 cla.use.net zfs.util[1787]: FSUC_GETUUID
Jul  5 14:14:31 cla.use.net zfs.util[1792]: argv[0]: /System/Library/Filesystems/zfs.fs/Contents/Resources/./zfs.util
Jul  5 14:14:31 cla.use.net zfs.util[1792]: argv[1]: -p
Jul  5 14:14:31 cla.use.net zfs.util[1792]: argv[2]: disk22s9
Jul  5 14:14:31 cla.use.net zfs.util[1792]: argv[3]: removable
Jul  5 14:14:31 cla.use.net zfs.util[1792]: argv[4]: readonly
Jul  5 14:14:31 cla.use.net zfs.util[1792]: zfs.util called with option p
Jul  5 14:14:31 cla.use.net zfs.util[1792]: blockdevice is /dev/disk22s9
Jul  5 14:14:31 cla.use.net zfs.util[1792]: +zfs_probe : devpath /dev/rdisk22s9
Jul  5 14:14:31 cla.use.net zed[1794]: Pool export $import
Jul  5 14:14:31 cla.use.net zed[1801]: Pool export $import
Jul  5 14:14:31 cla.use.net zfs.util[1792]: -zfs_probe : ret -2
Jul  5 14:14:31 cla.use.net zfs.util[1792]: FSUC_PROBE /dev/disk22s9 : FSUR_UNRECOGNIZED
Jul  5 14:15:00 cla.use.net zfs.util[141]: zpool import error 1
Jul  5 14:15:00 cla.use.net zfs.util[1933]: argv[0]: /System/Library/Filesystems/zfs.fs/Contents/Resources/./zfs.util
Jul  5 14:15:00 cla.use.net zfs.util[1933]: argv[1]: -k
Jul  5 14:15:00 cla.use.net zfs.util[1933]: argv[2]: disk30s2
Jul  5 14:15:00 cla.use.net zfs.util[1933]: zfs.util called with option k
Jul  5 14:15:00 cla.use.net zfs.util[1933]: blockdevice is /dev/disk30s2
Jul  5 14:15:00 cla.use.net zfs.util[1933]: FSUC_GETUUID
Jul  5 14:15:09 cla.use.net zfs.util[154]: zpool import error 1
Jul  5 14:15:09 cla.use.net zfs.util[139]: cannot import '15701963728217618441': no such pool available
Jul  5 14:15:09 cla.use.net zfs.util[139]: zpool import error 1
Jul  5 14:15:09 cla.use.net zed[1990]: eid=41 class=zpool pool=$import
Jul  5 14:15:09 cla.use.net zfs.util[1988]: argv[0]: /System/Library/Filesystems/zfs.fs/Contents/Resources/./zfs.util
Jul  5 14:15:09 cla.use.net zfs.util[1988]: argv[1]: -k
Jul  5 14:15:09 cla.use.net zfs.util[1988]: argv[2]: disk29s2
Jul  5 14:15:09 cla.use.net zfs.util[1988]: zfs.util called with option k
Jul  5 14:15:09 cla.use.net zfs.util[1988]: blockdevice is /dev/disk29s2
Jul  5 14:15:09 cla.use.net zfs.util[1988]: FSUC_GETUUID
Jul  5 14:15:09 cla.use.net zfs.util[1989]: argv[0]: /System/Library/Filesystems/zfs.fs/Contents/Resources/./zfs.util
Jul  5 14:15:09 cla.use.net zfs.util[1989]: argv[1]: -k
Jul  5 14:15:09 cla.use.net zfs.util[1989]: argv[2]: disk23s2
Jul  5 14:15:09 cla.use.net zfs.util[1989]: zfs.util called with option k
Jul  5 14:15:09 cla.use.net zfs.util[1989]: blockdevice is /dev/disk23s2
Jul  5 14:15:09 cla.use.net zfs.util[1989]: FSUC_GETUUID
Jul  5 14:15:09 cla.use.net zed[1997]: Pool export $import
Jul  5 14:15:09 cla.use.net zed[2003]: Pool export $import
Jul  5 14:15:32 cla.use.net zfs.util[137]: zpool import error 1
Jul  5 14:15:32 cla.use.net zfs.util[74]: zpool import error 1
Jul  5 14:15:32 cla.use.net zfs.util[70]: zpool import error 1
Jul  5 14:15:32 cla.use.net zfs.util[75]: zpool import error 1
Jul  5 14:15:32 cla.use.net zfs.util[138]: cannot import '15701963728217618441': no such pool available
Jul  5 14:15:32 cla.use.net zfs.util[138]: zpool import error 1
Jul  5 14:15:32 cla.use.net zed[2114]: eid=42 class=zpool pool=$import
Jul  5 14:15:32 cla.use.net zfs.util[2110]: argv[0]: /System/Library/Filesystems/zfs.fs/Contents/Resources/./zfs.util
Jul  5 14:15:32 cla.use.net zfs.util[2109]: argv[0]: /System/Library/Filesystems/zfs.fs/Contents/Resources/./zfs.util
Jul  5 14:15:32 cla.use.net zfs.util[2111]: argv[0]: /System/Library/Filesystems/zfs.fs/Contents/Resources/./zfs.util
Jul  5 14:15:32 cla.use.net zfs.util[2110]: argv[1]: -k
Jul  5 14:15:32 cla.use.net zfs.util[2109]: argv[1]: -k
Jul  5 14:15:32 cla.use.net zfs.util[2112]: argv[0]: /System/Library/Filesystems/zfs.fs/Contents/Resources/./zfs.util
Jul  5 14:15:32 cla.use.net zfs.util[2111]: argv[1]: -k
Jul  5 14:15:32 cla.use.net zfs.util[2110]: argv[2]: disk20s2
Jul  5 14:15:32 cla.use.net zfs.util[2113]: argv[0]: /System/Library/Filesystems/zfs.fs/Contents/Resources/./zfs.util
Jul  5 14:15:32 cla.use.net zfs.util[2112]: argv[1]: -k
Jul  5 14:15:32 cla.use.net zfs.util[2110]: zfs.util called with option k
Jul  5 14:15:32 cla.use.net zfs.util[2109]: argv[2]: disk12s1
Jul  5 14:15:32 cla.use.net zfs.util[2113]: argv[1]: -k
Jul  5 14:15:32 cla.use.net zfs.util[2112]: argv[2]: disk17s1
Jul  5 14:15:32 cla.use.net zfs.util[2110]: blockdevice is /dev/disk20s2
Jul  5 14:15:32 cla.use.net zfs.util[2113]: argv[2]: disk16s1
Jul  5 14:15:32 cla.use.net zfs.util[2111]: argv[2]: disk19s2
Jul  5 14:15:32 cla.use.net zfs.util[2109]: zfs.util called with option k
Jul  5 14:15:32 cla.use.net zfs.util[2112]: zfs.util called with option k
Jul  5 14:15:32 cla.use.net zfs.util[2110]: FSUC_GETUUID
Jul  5 14:15:32 cla.use.net zfs.util[2111]: zfs.util called with option k
Jul  5 14:15:32 cla.use.net zfs.util[2109]: blockdevice is /dev/disk12s1
Jul  5 14:15:32 cla.use.net zfs.util[2112]: blockdevice is /dev/disk17s1
Jul  5 14:15:32 cla.use.net zfs.util[2113]: zfs.util called with option k
Jul  5 14:15:32 cla.use.net zfs.util[2113]: blockdevice is /dev/disk16s1
Jul  5 14:15:32 cla.use.net zfs.util[2111]: blockdevice is /dev/disk19s2
Jul  5 14:15:32 cla.use.net zfs.util[2109]: FSUC_GETUUID
Jul  5 14:15:32 cla.use.net zfs.util[2113]: FSUC_GETUUID
Jul  5 14:15:32 cla.use.net zfs.util[2111]: FSUC_GETUUID
Jul  5 14:15:32 cla.use.net zfs.util[2112]: FSUC_GETUUID
Jul  5 14:15:32 cla.use.net zfs.util[2116]: argv[0]: /System/Library/Filesystems/zfs.fs/Contents/Resources/./zfs.util
Jul  5 14:15:32 cla.use.net zfs.util[2116]: argv[1]: -p
Jul  5 14:15:32 cla.use.net zfs.util[2116]: argv[2]: disk12s9
Jul  5 14:15:32 cla.use.net zfs.util[2116]: argv[3]: removable
Jul  5 14:15:32 cla.use.net zfs.util[2116]: argv[4]: readonly
Jul  5 14:15:32 cla.use.net zfs.util[2116]: zfs.util called with option p
Jul  5 14:15:32 cla.use.net zfs.util[2116]: blockdevice is /dev/disk12s9
Jul  5 14:15:32 cla.use.net zfs.util[2116]: +zfs_probe : devpath /dev/rdisk12s9
Jul  5 14:15:32 cla.use.net zed[2119]: Pool export $import
Jul  5 14:15:32 cla.use.net zfs.util[2116]: -zfs_probe : ret -2
Jul  5 14:15:32 cla.use.net zfs.util[2116]: FSUC_PROBE /dev/disk12s9 : FSUR_UNRECOGNIZED
Jul  5 14:15:32 cla.use.net zed[2138]: Pool export $import
Jul  5 14:16:00 cla.use.net zfs.util[142]: zpool import error 1
Jul  5 14:16:00 cla.use.net zfs.util[2268]: argv[0]: /System/Library/Filesystems/zfs.fs/Contents/Resources/./zfs.util
Jul  5 14:16:00 cla.use.net zfs.util[2268]: argv[1]: -k
Jul  5 14:16:00 cla.use.net zfs.util[2268]: argv[2]: disk31s2
Jul  5 14:16:00 cla.use.net zfs.util[2268]: zfs.util called with option k
Jul  5 14:16:00 cla.use.net zfs.util[2268]: blockdevice is /dev/disk31s2
Jul  5 14:16:00 cla.use.net zfs.util[2268]: FSUC_GETUUID
Jul  5 14:16:09 cla.use.net zfs.util[140]: zpool import error 1
Jul  5 14:16:09 cla.use.net zfs.util[2312]: argv[0]: /System/Library/Filesystems/zfs.fs/Contents/Resources/./zfs.util
Jul  5 14:16:09 cla.use.net zfs.util[2312]: argv[1]: -k
Jul  5 14:16:09 cla.use.net zfs.util[2312]: argv[2]: disk28s2
Jul  5 14:16:09 cla.use.net zfs.util[2312]: zfs.util called with option k
Jul  5 14:16:09 cla.use.net zfs.util[2312]: blockdevice is /dev/disk28s2
Jul  5 14:16:09 cla.use.net zfs.util[2312]: /System/Library/Filesystems/zfs.fs/Contents/Resources/./zfs.util: stat /dev/disk28s2 failed, No such file or directory
Jul  5 14:16:28 cla.use.net zfs.util[162]: zpool import error 1
Jul  5 14:16:28 cla.use.net zfs.util[2395]: argv[0]: /System/Library/Filesystems/zfs.fs/Contents/Resources/./zfs.util
Jul  5 14:16:28 cla.use.net zfs.util[2395]: argv[1]: -k
Jul  5 14:16:28 cla.use.net zfs.util[2395]: argv[2]: disk25s2
Jul  5 14:16:28 cla.use.net zfs.util[2395]: zfs.util called with option k
Jul  5 14:16:28 cla.use.net zfs.util[2395]: blockdevice is /dev/disk25s2
Jul  5 14:16:28 cla.use.net zfs.util[2395]: FSUC_GETUUID
Jul  5 14:16:28 cla.use.net zed[2403]: eid=43 class=config.sync pool=CLATM
Jul  5 14:16:28 cla.use.net zed[2407]: eid=43 class=config.sync pool=CLATM
Jul  5 14:16:28 cla.use.net zed[2410]: eid=44 class=config.sync pool=ssdpool
Jul  5 14:16:28 cla.use.net zed[2414]: eid=44 class=config.sync pool=ssdpool
Jul  5 14:16:29 cla.use.net zed[2422]: eid=45 class=config.sync pool=Donkey
Jul  5 14:16:29 cla.use.net zed[2426]: eid=45 class=config.sync pool=Donkey
Jul  5 14:16:30 cla.use.net zed[2439]: eid=46 class=zpool.import pool=ssdpool
Jul  5 14:16:30 cla.use.net zed[2441]: Pool import ssdpool
Jul  5 14:16:30 cla.use.net zed[2448]: eid=47 class=zpool.import pool=CLATM
Jul  5 14:16:30 cla.use.net zed[2450]: Pool import CLATM
Jul  5 14:16:30 cla.use.net zfs.util[72]: zpool import error 0
Jul  5 14:16:30 cla.use.net zfs.util[2454]: argv[0]: /System/Library/Filesystems/zfs.fs/Contents/Resources/./zfs.util
Jul  5 14:16:30 cla.use.net zfs.util[2454]: argv[1]: -k
Jul  5 14:16:30 cla.use.net zfs.util[2454]: argv[2]: disk14s1
Jul  5 14:16:30 cla.use.net zfs.util[2454]: zfs.util called with option k
Jul  5 14:16:30 cla.use.net zfs.util[2454]: blockdevice is /dev/disk14s1
Jul  5 14:16:30 cla.use.net zfs.util[2454]: FSUC_GETUUID
Jul  5 14:16:30 cla.use.net zfs.util[66]: zpool import error 0
Jul  5 14:16:30 cla.use.net zfs.util[2479]: argv[0]: /System/Library/Filesystems/zfs.fs/Contents/Resources/./zfs.util
Jul  5 14:16:30 cla.use.net zfs.util[2479]: argv[1]: -k
Jul  5 14:16:30 cla.use.net zfs.util[2479]: argv[2]: disk9s1
Jul  5 14:16:30 cla.use.net zfs.util[2479]: zfs.util called with option k
Jul  5 14:16:30 cla.use.net zfs.util[2479]: blockdevice is /dev/disk9s1
Jul  5 14:16:30 cla.use.net zfs.util[2479]: FSUC_GETUUID
Jul  5 14:16:30 cla.use.net zed[2481]: eid=48 class=resilver.start pool=ssdpool
Jul  5 14:16:32 cla.use.net zed[2567]: eid=49 class=zvol.create pool=Donkey
Jul  5 14:16:32 cla.use.net zed[2576]: eid=49 class=zvol.create pool=Donkey/TM symlinked disk28
Jul  5 14:16:33 cla.use.net zed[2591]: eid=50 class=io pool=ssdpool
Jul  5 14:16:34 cla.use.net zed[2595]: eid=51 class=io pool=ssdpool
Jul  5 14:16:34 cla.use.net zed[2600]: eid=52 class=io pool=ssdpool
Jul  5 14:16:34 cla.use.net zed[2605]: eid=53 class=probe_failure pool=ssdpool
Jul  5 14:16:34 cla.use.net zed[2607]: eid=54 class=io pool=ssdpool
Jul  5 14:16:34 cla.use.net zed[2611]: eid=55 class=io pool=ssdpool
Jul  5 14:16:34 cla.use.net zed[2615]: eid=56 class=io pool=ssdpool
Jul  5 14:16:34 cla.use.net zed[2619]: eid=57 class=probe_failure pool=ssdpool
Jul  5 14:16:35 cla.use.net zed[2628]: eid=58 class=zvol.create pool=Donkey
Jul  5 14:16:35 cla.use.net zed[2638]: eid=58 class=zvol.create pool=Donkey/Caching symlinked disk35
Jul  5 14:16:37 cla.use.net zed[2685]: eid=59 class=resilver.finish pool=ssdpool
Jul  5 14:16:41 cla.use.net zed[3000]: eid=60 class=zvol.create pool=Donkey
Jul  5 14:16:41 cla.use.net zed[3008]: eid=60 class=zvol.create pool=Donkey/TMMIS symlinked disk36
Jul  5 14:16:41 cla.use.net zed[3010]: eid=61 class=zpool.import pool=Donkey
Jul  5 14:16:41 cla.use.net zed[3012]: Pool import Donkey
Jul  5 14:16:43 cla.use.net zfs.util[68]: zpool import error 0
Jul  5 14:16:43 cla.use.net zfs.util[3045]: argv[0]: /System/Library/Filesystems/zfs.fs/Contents/Resources/./zfs.util
Jul  5 14:16:43 cla.use.net zfs.util[3045]: argv[1]: -k
Jul  5 14:16:43 cla.use.net zfs.util[3045]: argv[2]: disk11s1
Jul  5 14:16:43 cla.use.net zfs.util[3045]: zfs.util called with option k
Jul  5 14:16:43 cla.use.net zfs.util[3045]: blockdevice is /dev/disk11s1
Jul  5 14:16:43 cla.use.net zfs.util[3045]: FSUC_GETUUID
Jul  5 14:25:09 cla.use.net zed[4845]: eid=62 class=io pool=Donkey
Jul  5 14:25:09 cla.use.net zed[4849]: eid=63 class=io pool=Donkey
Jul  5 14:25:09 cla.use.net zed[4853]: eid=64 class=io pool=Donkey
Jul  5 14:25:09 cla.use.net zed[4857]: eid=65 class=probe_failure pool=Donkey
Jul  5 14:25:21 cla.use.net zed[4864]: eid=66 class=io pool=Donkey
Jul  5 14:25:21 cla.use.net zed[4868]: eid=67 class=io pool=Donkey
Jul  5 14:25:22 cla.use.net zed[4872]: eid=68 class=io pool=Donkey
Jul  5 14:25:22 cla.use.net zed[4876]: eid=69 class=probe_failure pool=Donkey
Jul  5 14:26:52 cla.use.net zed[5467]: eid=70 class=statechange 
Jul  5 14:26:52 cla.use.net zed[5469]: eid=71 class=statechange 
Jul  5 14:26:52 cla.use.net zed[5471]: eid=72 class=statechange 
Jul  5 14:26:53 cla.use.net zed[5473]: eid=73 class=statechange 
Jul  5 14:26:53 cla.use.net zed[5475]: eid=74 class=statechange 
Jul  5 14:26:53 cla.use.net zed[5477]: eid=75 class=statechange 
Jul  5 14:26:53 cla.use.net zed[5479]: eid=76 class=statechange 
Jul  5 14:26:53 cla.use.net zed[5481]: eid=77 class=statechange 
Jul  5 14:26:53 cla.use.net zed[5483]: eid=78 class=statechange 
Jul  5 14:26:54 cla.use.net zed[5487]: eid=79 class=statechange 
Jul  5 14:26:54 cla.use.net zed[5489]: eid=80 class=statechange 
Jul  5 14:27:05 cla.use.net zed[5496]: eid=81 class=statechange 
Jul  5 14:27:05 cla.use.net zed[5498]: eid=82 class=statechange 
Jul  5 14:27:05 cla.use.net zed[5500]: eid=83 class=statechange 
Jul  5 14:27:05 cla.use.net zed[5502]: eid=84 class=statechange 
Jul  5 14:27:05 cla.use.net zed[5504]: eid=85 class=statechange 
Jul  5 14:27:05 cla.use.net zed[5506]: eid=86 class=statechange 
Jul  5 14:27:19 cla.use.net zed[5521]: eid=87 class=config.sync pool=Trinity
Jul  5 14:27:19 cla.use.net zed[5525]: eid=87 class=config.sync pool=Trinity
Jul  5 14:27:57 cla.use.net zed[5594]: eid=88 class=config.sync pool=Trinity
Jul  5 14:27:57 cla.use.net zed[5598]: eid=88 class=config.sync pool=Trinity
Jul  5 14:27:57 cla.use.net zed[5601]: eid=89 class=zpool.import pool=Trinity
Jul  5 14:27:57 cla.use.net zed[5603]: Pool import Trinity
  pool: CLATM
 state: ONLINE
  scan: resilvered 0 in 0h10m with 0 errors on Thu Jul  3 13:47:56 2014
config:

    NAME          STATE     READ WRITE CKSUM
    CLATM         ONLINE       0     0     0
      mirror-0    ONLINE       0     0     0
        disk21s1  ONLINE       0     0     0
        disk22    ONLINE       0     0     0
    logs
      mirror-1    ONLINE       0     0     0
        disk13s1  ONLINE       0     0     0
        disk9s1   ONLINE       0     0     0
    cache
      disk34s2    ONLINE       0     0     0
      disk32s2    ONLINE       0     0     0

errors: No known data errors

  pool: Donkey
 state: DEGRADED
status: One or more devices are faulted in response to persistent errors.
    Sufficient replicas exist for the pool to continue functioning in a
    degraded state.
action: Replace the faulted device, or use 'zpool clear' to mark the device
    repaired.
  scan: resilvered 0 in 2h38m with 0 errors on Thu Jul  3 17:25:41 2014
config:

    NAME          STATE     READ WRITE CKSUM
    Donkey        DEGRADED     0     0     0
      mirror-0    ONLINE       0     0     0
        disk19s2  ONLINE       0     0     0
        disk20s2  ONLINE       0     0     0
        disk23s2  ONLINE       0     0     0
    logs
      mirror-1    DEGRADED     0     0     0
        disk16s1  FAULTED      0   315     0  too many errors
        disk11s1  ONLINE       0     0     0
    cache
      disk33s1    ONLINE       0     0     0

errors: No known data errors

  pool: Trinity
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
    continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Fri Jul  4 15:44:21 2014
    5.59T scanned out of 7.88T at 102M/s, 6h32m to go
    1.40T resilvered, 70.92% done
config:

    NAME                     STATE     READ WRITE CKSUM
    Trinity                  DEGRADED     0     0     0
      mirror-0               ONLINE       0     0     0
        disk27               ONLINE       0     0     0
        disk29s2             ONLINE       0     0     0
      mirror-1               DEGRADED     0     0     0
        disk30s2             ONLINE       0     0     0
        8780817928132127278  UNAVAIL      0     0     0  was /dev/disk30s2
      mirror-2               ONLINE       0     0     0
        disk31s2             ONLINE       0     0     0
        disk25s2             ONLINE       0     0     0
      mirror-3               ONLINE       0     0     0
        disk26s2             ONLINE       0     0     0
        disk24s2             ONLINE       0     0     0  (resilvering)
    logs
      mirror-4               ONLINE       0     0     0
        disk10s1             ONLINE       0     0     0
        disk17s1             ONLINE       0     0     0
    cache
      disk32s3               ONLINE       0     0     0
      disk34s3               ONLINE       0     0     0

errors: No known data errors

  pool: ssdpool
 state: DEGRADED
status: One or more devices are faulted in response to persistent errors.
    Sufficient replicas exist for the pool to continue functioning in a
    degraded state.
action: Replace the faulted device, or use 'zpool clear' to mark the device
    repaired.
  scan: resilvered 17.5M in 0h0m with 0 errors on Sat Jul  5 14:16:37 2014
config:

    NAME          STATE     READ WRITE CKSUM
    ssdpool       DEGRADED     0     0     0
      mirror-0    ONLINE       0     0     0
        disk12    ONLINE       0     0     1
        disk6     ONLINE       0     0     0
    logs
      mirror-1    DEGRADED     0     0     0
        disk14s1  ONLINE       0     0     0
        disk7s1   FAULTED      0     2     0  too many errors
    cache
      disk34s4    ONLINE       0     0     0
      disk32s4    ONLINE       0     0     0

errors: No known data errors

vdev-iokit and zfs.util should probably not compete at startup.
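
One way to reduce that competition, sketched below, would be for zfs.util to skip kicking off its own `zpool import` when the pool guid it just probed is already imported in the kernel (for example because vdev_iokit or another zfs.util instance got there first). This is only a minimal sketch under stated assumptions, not the shipping zfs.util code: it assumes zfs.util can link against libzfs, and the helper name pool_guid_already_imported is hypothetical.

/*
 * Hypothetical helper, not the actual zfs.util code: before running
 * "zpool import" for a guid reported by zfs_probe (FSUC_PROBE), ask
 * libzfs whether a pool with that guid is already imported.
 */
#include <stdint.h>
#include <libzfs.h>

struct guid_search {
	uint64_t guid;      /* pool guid reported by the probe */
	boolean_t found;    /* set if an imported pool matches */
};

static int
check_pool_guid(zpool_handle_t *zhp, void *data)
{
	struct guid_search *gs = data;

	if (zpool_get_prop_int(zhp, ZPOOL_PROP_GUID, NULL) == gs->guid)
		gs->found = B_TRUE;

	zpool_close(zhp);
	return (0);
}

/* B_TRUE if a pool with this guid is already imported. */
static boolean_t
pool_guid_already_imported(libzfs_handle_t *hdl, uint64_t guid)
{
	struct guid_search gs = { .guid = guid, .found = B_FALSE };

	(void) zpool_iter(hdl, check_pool_guid, &gs);
	return (gs.found);
}

In the log above, several zfs.util processes each spend a minute or more retrying imports for guids that another instance (or the kernel) is already handling, and end with "cannot import ... no such pool available" / "zpool import error 1", so either a check along these lines or simple serialization of the probe instances would keep them from racing each other.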

@JMoVS

JMoVS commented Oct 6, 2019

This is all from 2014, so closing this for now.

@JMoVS JMoVS closed this as completed Oct 6, 2019