On Jun 28, 2010, at 11:27 PM, George wrote:

> I've tried removing the spare and putting back the faulty drive to give:
>
>   pool: storage2
>  state: FAULTED
> status: An intent log record could not be read.
>         Waiting for administrator intervention to fix the faulted pool.
> action: Either restore the affected device(s) and run 'zpool online',
>         or ignore the intent log records by running 'zpool clear'.
>    see: http://www.sun.com/msg/ZFS-8000-K4
>  scrub: none requested
> config:
>
>         NAME             STATE     READ WRITE CKSUM
>         storage2         FAULTED       0     0     1  bad intent log
>           raidz1         ONLINE        0     0     0
>             c9t4d2       ONLINE        0     0     0
>             c9t4d3       ONLINE        0     0     0
>             c10t4d2      ONLINE        0     0     0
>             c10t4d4      ONLINE        0     0     0
>           raidz1         DEGRADED      0     0     6
>             c10t4d0      FAULTED       0     0     0  corrupted data
>             replacing    DEGRADED      0     0     0
>               c9t4d0     ONLINE        0     0     0
>               c9t4d4     UNAVAIL       0     0     0  cannot open
>             c10t4d1      ONLINE        0     0     0
>             c9t4d1       ONLINE        0     0     0
>
> Again this core dumps when I try to do "zpool clear storage2".
>
> Does anyone have any suggestions what would be the best course of action now?
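For reference, the two recovery paths named in the action line map onto commands along these lines (a sketch only; the device name is taken from the status output above, and the second path is exactly the command that core dumps here):

  # Path 1: if the original device can be restored, bring it back
  # online and re-check the pool state.
  zpool online storage2 c10t4d0
  zpool status -v storage2

  # Path 2: give up the unreadable intent log records and clear the
  # pool's error state -- the step that currently crashes.
  zpool clear storage2
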
I think first we need to understand why it does not like 'zpool clear', as that may give a better picture of what is wrong. For that you need to create a directory for saving crash dumps, e.g. like this:

  mkdir -p /var/crash/`uname -n`

then run savecore and see whether it saves a crash dump into that directory. If the crash dump is there, then you need to perform some basic investigation:

  cd /var/crash/`uname -n`
  mdb <dump number>
  ::status
  ::stack
  ::spa -c
  ::spa -v
  ::spa -ve
  $q

for a start.
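If savecore reports that no dump is available, it may be that dumps are not configured to be saved at all; dumpadm can show and, if needed, adjust that. A hedged aside -- this is standard Solaris tooling, but the paths are examples to adapt:

  # Show the current dump configuration: dump device, savecore
  # directory, and whether savecore runs automatically on boot.
  dumpadm

  # Point the savecore directory at the one created above.
  dumpadm -s /var/crash/`uname -n`

  # Then retrieve any pending dump, verbosely.
  savecore -v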