I watched both the YouTube video
http://www.youtube.com/watch?v=CN6iDzesEs0
and the one on http://www.opensolaris.com/, "ZFS – A Smashing Hit".
In the first one it is obvious that the app stops working when they smash the
drives; they have to physically detach the drive before the array starts
reconstructing.
> Why would it be assumed to be a bug in Solaris? Seems more likely on
> balance to be a problem in the error reporting path or a
> controller/firmware weakness.
Weird: so they would use a controller/firmware combination that doesn't work?
Bad call...
> I'm pretty sure in the first 2 versions of this demo, if a disk vanishes
> like a sledgehammer hit it, ZFS will wait on the device driver to decide
> it's dead.
OK, I see it.
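That also explains why they end up pulling the cable: a removal event gets
noticed right away, while a smashed-but-still-attached disk has to time out.
If you can't detach the disk, you can take it out of service by hand so the
pool degrades immediately. A minimal sketch, assuming a pool named tank and
the dead disk at c1t1d0 (both names invented here):

  # tell ZFS to stop issuing I/O to the dead device
  zpool offline tank c1t1d0
  # confirm the pool is DEGRADED but still serving data
  zpool status -x tank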
> That said, there have been several threads about wanting configurable
> device timeouts handled at the ZFS level rather than at the device
> driver level.
Uh, so I can only tune this at the driver level for now?
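If so, here is a sketch of what that driver-level tuning might look like on
Solaris with the sd driver; the value is only illustrative, and I'm not sure
the USB storage path used in the demo honors this tunable at all, so treat it
as an assumption to verify:

  * /etc/system -- illustrative only; the default sd_io_time is 0x3c (60 s)
  * each failed command is also retried several times, so the effective
  * wait is roughly timeout x retries
  set sd:sd_io_time=0x14

Changes to /etc/system only take effect after a reboot.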
> In the worst case, the device would be selectable, but not responding
> to data requests, which would lead through the device retry logic and
> can take minutes.
That's what I didn't know: that a driver could take minutes (hours???) to
decide that a device is not working anymore.
Now it makes sense.
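To put a rough number on it (assuming the common defaults of a 60-second
command timeout and about five retries, which is my assumption, not anything
stated in the demo):

  60 s per attempt x 5 retries = 300 s, i.e. 5 minutes for a single command,

and with more commands queued behind it the stall only gets longer. So
"minutes" is not an exaggeration for the worst case.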
> Oh, and regarding the original post -- as several readers correctly
> surmised, we weren't faking anything, we just didn't want to wait
> for all the device timeouts. Because the disks were on USB, which
> is a hotplug-capable bus, unplugging the dead disk generated an
> interrupt that bypassed the device timeouts.
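For anyone reproducing the demo: once that removal event arrives, the pool
should keep serving data in a degraded state. The output below is a mock-up
from memory (pool and device names invented), not taken from the demo:

  $ zpool status -x
    pool: tank
   state: DEGRADED
  status: One or more devices could not be opened.  Sufficient replicas
          exist for the pool to continue functioning in a degraded state.
  action: Attach the missing device and online it using 'zpool online'.
  config:
          NAME        STATE     READ WRITE CKSUM
          tank        DEGRADED     0     0     0
            raidz1    DEGRADED     0     0     0
              c2t0d0  ONLINE       0     0     0
              c2t1d0  UNAVAIL      0     0     0  cannot open
              c2t2d0  ONLINE       0     0     0

After attaching a replacement, 'zpool replace tank c2t1d0 <new device>'
starts the resilver.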