On Apr 14, 2010, at 12:05 AM, Jonathan wrote:

> I just started replacing drives in this zpool (to increase storage). I pulled 
> the first drive and replaced it with a new one, and all was well: it 
> resilvered with 0 errors. That was 5 days ago. Just today I was looking 
> around and noticed that my pool was degraded (I see now that this occurred 
> last night). Sure enough, there are 12 read errors on the new drive.
> 
> I'm on snv_111b. I attempted to get smartmontools working, but it doesn't 
> seem to work since these are all SATA drives. fmdump indicates that the 
> read errors occurred within about 10 minutes of one another.

Use "iostat -En" to see the nature of the I/O errors.
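For example (output below is illustrative, not from your system -- the
vendor/model/size fields are placeholders), you get a per-device error
summary plus a breakdown by error type:

    $ iostat -En c7t1d0
    c7t1d0           Soft Errors: 0 Hard Errors: 12 Transport Errors: 0
    Vendor: ATA      Product: <model>  Revision: <rev> Serial No: <sn>
    Size: <size>
    Media Error: 12 Device Not Ready: 0 No Device: 0 Recoverable: 0
    Illegal Request: 0 Predictive Failure Analysis: 0

Media Errors point at the drive itself; Transport Errors point at the
cable or controller path.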

> 
> Is it safe to say this drive is bad, or is there anything else I can do about 
> this?

It is safe to say that there was trouble reading from the drive at some
time in the past. But you have not determined the root cause -- the info
available in zpool status is not sufficient.
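You can also dump the underlying error telemetry with "fmdump -eV",
which prints each ereport in full. The fields below are a sketch of
what to look for, not literal output from your machine:

    $ fmdump -eV | less
    ...
    class = ereport.fs.zfs.io
    ...
    vdev_path = /dev/dsk/c7t1d0s0
    ...

That tells you which vdev generated the errors and what kind of I/O
failed, which zpool status alone does not.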
 -- richard

> 
> Thanks,
> Jon
> 
> --------------------------------------------------------
> $ zpool status MyStorage
>  pool: MyStorage
> state: DEGRADED
> status: One or more devices are faulted in response to persistent errors.
>        Sufficient replicas exist for the pool to continue functioning in a
>        degraded state.
> action: Replace the faulted device, or use 'zpool clear' to mark the device
>        repaired.
> scrub: scrub completed after 8h7m with 0 errors on Sun Apr 11 13:07:40 2010
> config:
> 
>        NAME        STATE     READ WRITE CKSUM
>        MyStorage   DEGRADED     0     0     0
>          raidz1    DEGRADED     0     0     0
>            c5t0d0  ONLINE       0     0     0
>            c5t1d0  ONLINE       0     0     0
>            c6t1d0  ONLINE       0     0     0
>            c7t1d0  FAULTED     12     0     0  too many errors
> 
> errors: No known data errors
> --------------------------------------------------------
> $ fmdump
> TIME                 UUID                                 SUNW-MSG-ID
> Apr 09 16:08:04.4660 1f07d23f-a4ba-cbbb-8713-d003d9771079 ZFS-8000-D3
> Apr 13 22:29:02.8063 e26c7e32-e5dd-cd9c-cd26-d5715049aad8 ZFS-8000-FD
> --------------------------------------------------------
> The first log entry is from the original drive being replaced. The second is 
> the read errors on the new drive.

ZFS storage and performance consulting at http://www.RichardElling.com
ZFS training on deduplication, NexentaStor, and NAS performance
Las Vegas, April 29-30, 2010 http://nexenta-vegas.eventbrite.com