On Wed, Aug 6, 2008 at 8:20 AM, Tom Bird <[EMAIL PROTECTED]> wrote:
> Hi,
>
> Have a problem with a ZFS on a single device, this device is 48 1T SATA
> drives presented as a 42T LUN via hardware RAID 6 on a SAS bus which had
> a ZFS on it as a single device.
>
> There was a problem with the SAS bus which caused various errors
> including the inevitable kernel panic, the thing came back up with 3 out
> of 4 zfs mounted.

Hi Tom,

After reading this and the followups to date, this could be due to almost
anything, and we (on the list) don't know the history of the system or the
RAID device. You could have a bad SAS controller, bad system memory, a bad
cable, or a RAID controller with a firmware bug.

The first step would be to form a ZFS pool with 2 mirrors, beat up on it,
and gain some confidence in the overall system components. Write lots of
data to it, run zpool scrub, etc., and verify that it's 100% rock solid
before you zpool destroy it and then test with a larger pool.
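Untested, and the c3t*d0 device names below are just placeholders (pick
real spare disks from 'format' output), but the burn-in I have in mind
looks roughly like this:

  # test pool built from two mirror pairs
  zpool create testpool mirror c3t0d0 c3t1d0 mirror c3t2d0 c3t3d0

  # hammer it with data, then have ZFS verify every block checksum
  dd if=/dev/urandom of=/testpool/bigfile bs=1024k count=10240
  zpool scrub testpool

  # scrub runs in the background -- re-run this until it completes;
  # you want 0 in all the READ/WRITE/CKSUM columns
  zpool status -v testpool

  # once you trust the hardware, tear it down and build the big pool
  zpool destroy testpool

If the scrub comes back clean after a few rounds of that, you can start
believing the controller, cabling and memory; if it doesn't, you've found
your problem without risking 42T of data.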
In almost every case where someone has posted an opening story like yours,
the problem has turned out to be outside of ZFS. As others have explained,
if ZFS does not have a config with data redundancy, there is not much that
can be learned, except that it "just broke". Keep testing and report back.
Also, any additional data on the hardware and software config would be
useful (see the P.S. below for the commands I'd start with); let us know
whether this is a "new" system or whether the hardware has already been in
service, and what its reliability track record has been.

> I've tried reading the partition table with format, works fine, also can
> dd the first 100G from the device quite happily so the communication
> issue appears resolved however the device just won't mount. Googling
> around I see that ZFS does have features designed to reduce the impact
> of corruption at a particular point, multiple meta data copies and so
> on, however commands to help me tidy up a zfs will only run once the
> thing has been mounted.
>
> Would be grateful for any ideas, relevant output here:
>
> [EMAIL PROTECTED]:~# zpool import
>   pool: content
>     id: 14205780542041739352
>  state: FAULTED
> status: The pool metadata is corrupted.
> action: The pool cannot be imported due to damaged devices or data.
>         The pool may be active on another system, but can be imported
>         using the '-f' flag.
>    see: http://www.sun.com/msg/ZFS-8000-72
> config:
>
>         content     FAULTED  corrupted data
>           c2t9d0    ONLINE
>
> [EMAIL PROTECTED]:~# zpool import content
> cannot import 'content': pool may be in use from other system
> use '-f' to import anyway
>
> [EMAIL PROTECTED]:~# zpool import -f content
> cannot import 'content': I/O error
>
> [EMAIL PROTECTED]:~# uname -a
> SunOS cs3.kw 5.10 Generic_127127-11 sun4v sparc SUNW,Sun-Fire-T200
>
> Thanks
> --
> Tom

Regards,

--
Al Hopper  Logical Approach Inc, Plano, TX  [EMAIL PROTECTED]
           Voice: 972.379.2133  Timezone: US CDT
OpenSolaris Governing Board (OGB) Member - Apr 2005 to Mar 2007
http://www.opensolaris.org/os/community/ogb/ogb_2005-2007/
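P.S. For the config data, output from something like the following would
tell us a lot. These are standard Solaris 10 commands, but I'm quoting the
flags from memory and the exact output varies by platform:

  prtdiag -v      # system, memory and firmware inventory
  cfgadm -al      # attachment points, incl. the SAS controller and LUN
  iostat -En      # per-device soft/hard/transport error counters
  fmdump -eV      # FMA error telemetry -- should show the SAS bus
                  # errors that preceded the panic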