> Paul Kraus wrote:
> > In the ZFS case I could replace the disk and the zpool would
> > resilver automatically. I could also take the removed disk, put it
> > into the second system, and have it recognize the zpool (and that it
> > was missing half of a mirror), and the data was all there.
> >
> > In no case did I see any data loss or corruption. I had attributed
> > the system hanging to an interaction between the SAS and ZFS layers,
> > but the previous post makes me question that assumption.
> >
> > As another data point, I have an old Intel box at home I am running
> > x86 on with ZFS. It has a pair of 120 GB PATA disks. The OS is on
> > SVM/UFS mirrored partitions and /export/home is on a pair of
> > partitions in a zpool (mirror). I had a bad power connector and
> > sometime after booting lost one of the drives. The server kept
> > running fine. Once I got the drive powered back up (while the server
> > was shut down), the SVM mirrors resync'd and the zpool resilvered.
> > The zpool finished substantially before the SVM.
> >
> > In all cases the OS was Solaris 10 U3 (11/06) with no additional
> > patches.
> 
> The behaviour you describe is what I would expect for that release of
> Solaris + ZFS.
It seems this is fixed in SXCE. Do you know if any of the fixes made it into 
Solaris 10 U4?

Thanks,
Paul

This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
