On 11/28/06, Elizabeth Schwartz <[EMAIL PROTECTED]> wrote:
> So I rebuilt my production mail server as Solaris 10 06/06 with zfs; it ran for three months with no hardware errors. But my zfs file system seems to have died a quiet death. Sun engineering's response was to point to the FMRI, which says to throw out the zfs partition and start over. I'm really reluctant to do that, since it'll take hours to do a tape restore and we don't know what's wrong. I'm seriously wondering if I should just toss zfs. Again, this is Solaris 10 06/06, not some beta version. It's an older server, a 280R with an older SCSI RaidKing.
Looks to me like another example of ZFS noticing and reporting an error that would go quietly by on any other filesystem. And if you're concerned with the integrity of the data, why not use some ZFS redundancy? (I'm guessing you're applying the redundancy further downstream; but, as this situation demonstrates, separating it too far from the checksum verification makes it less useful.)

--
David Dyer-Bennet, <mailto:[EMAIL PROTECTED]>, <http://www.dd-b.net/dd-b/>
RKBA: <http://www.dd-b.net/carry/> Pics: <http://www.dd-b.net/dd-b/SnapshotAlbum/>
Dragaera/Steven Brust: <http://dragaera.info/>

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
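For what it's worth, a minimal sketch of what pool-level redundancy could look like here (the device names c1t0d0/c1t1d0 and the pool/filesystem names are placeholders, not anything from the original setup):

```shell
# Create a mirrored pool so ZFS can repair, not just report,
# checksum errors (device names are placeholders for your hardware).
zpool create tank mirror c1t0d0 c1t1d0

# Give the mail spool its own filesystem within the pool.
zfs create tank/mail

# Periodically re-verify every block against its checksum; with a
# mirror, blocks that fail verification are rewritten from the good copy.
zpool scrub tank

# Inspect checksum error counts and any files affected.
zpool status -v tank
```

With redundancy inside the pool, the scrub that detected this corruption could have healed it in place instead of leaving a tape restore as the only option.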