On 8/11/07, Stan Seibert <[EMAIL PROTECTED]> wrote:
> I'm not sure if that answers the question you were asking, but generally I 
> found that damage to a zpool was very well confined.

But you can't count on it.  I currently have an open case where a
zpool became corrupted and put the system into a panic loop.  As the
case has progressed, I have found that the panic-loop part of it is not
present in any released version of S10 that I tested (S10U3 + 118833-36,
125100-07, 125100-10), but it does exist in snv69.

The test mechanism is whether "zpool import" (no pool name) causes the
system to panic.  If it does, I'm going on the assumption that having
the corresponding zpool.cache in place would cause the system to panic
on every boot.
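For clarity, here is roughly what that check looks like.  The cache
file path is the stock Solaris location; the move-the-cache-aside step
is my assumption about a workaround, not something confirmed in the
case so far.

  # On the guest domain: with no pool name, "zpool import" only scans
  # for importable pools and should not modify anything.
  zpool import

  # If that scan alone panics the box, the assumption is that a
  # /etc/zfs/zpool.cache entry for the damaged pool would trigger the
  # same panic on every boot.  Booting from other media and setting the
  # cache aside should sidestep it:
  mv /etc/zfs/zpool.cache /etc/zfs/zpool.cache.bad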

Oddly enough, I know I can't blame the storage subsystem for this one -
it is ZFS as well.  :)

It goes like this (a rough sketch of the setup commands follows the
list):

HDS 99xx storage array
T2000 primary LDOM running S10U3, with a file on ZFS presented as a
  block device for a guest LDOM
T2000 guest LDOM with a zpool on slice 3 of the block device mentioned
  above
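Sketching the setup (dataset, volume, and domain names here are
placeholders, not the ones from the case):

  # Primary domain: back a virtual disk with a file on a ZFS dataset and
  # export it to the guest through the virtual disk service.
  zfs create tank/ldoms
  mkfile 20g /tank/ldoms/guest1-disk0
  ldm add-vds primary-vds0 primary
  ldm add-vdsdev /tank/ldoms/guest1-disk0 guest1-vol0@primary-vds0
  ldm add-vdisk vdisk0 guest1-vol0@primary-vds0 guest1

  # Guest domain: after labeling the virtual disk (it typically shows up
  # as c0d0), build the pool on slice 3.
  zpool create guestpool c0d0s3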

Depending on the OS running on the guest LDOM, "zpool import" gives
different results:

S10U3 118833-36 through 125100-10:
  "zpool is corrupt", "restore from backups"
S10U4 Beta, snv69, and (I think) snv59:
  panic - the S10U4 backtrace is very different from the snv* ones

-- 
Mike Gerdts
http://mgerdts.blogspot.com/