On 10/25/06, Siegfried Nikolaivich <[EMAIL PROTECTED]> wrote:
...
While the machine was idle, I started a scrub.  Around the time the scrub was supposed to be finished, the machine panicked.
This might be related to the 'metadata corruption' that happened earlier to me.  Here is the log, any ideas?
...
Oct 24 20:13:52 FServe marvell88sx: [ID 812950 kern.warning] WARNING: marvell88sx0: error on port 3:
Oct 24 20:13:52 FServe marvell88sx: [ID 517869 kern.info]       device disconnected
Oct 24 20:13:52 FServe marvell88sx: [ID 517869 kern.info]       device connected
Oct 24 20:13:52 FServe marvell88sx: [ID 517869 kern.info]       SError interrupt
Oct 24 20:13:52 FServe marvell88sx: [ID 131198 kern.info]       SErrors:
Oct 24 20:13:52 FServe marvell88sx: [ID 517869 kern.info]               Recovered communication error
Oct 24 20:13:52 FServe marvell88sx: [ID 517869 kern.info]               PHY ready change
Oct 24 20:13:52 FServe marvell88sx: [ID 517869 kern.info]               10-bit to 8-bit decode error
Oct 24 20:13:52 FServe marvell88sx: [ID 517869 kern.info]               Disparity error


Hi Siegfried,
this error from the marvell88sx driver is of concern. The 10b8b decode
and disparity error messages make me think that you have a bad piece
of hardware. I hope it's not your controller, but I can't tell without more
data. You should have a look at the iostat -En output for the device
attached to marvell88sx instance #0 as port 3. If any error counts are
above 0 then - after checking /var/adm/messages for medium
errors - you should probably replace the disk.
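
As a quick way to eyeball those counters, something like the following
would do. This is only an illustrative sketch: the two sample lines stand
in for the per-device summary lines that iostat -En prints, and c1t3d0 /
c1t4d0 are placeholder device names, not necessarily yours.

```shell
# Print the name of any device whose Soft/Hard/Transport error
# count is nonzero. The printf lines below are fake sample input
# standing in for real `iostat -En` output.
printf '%s\n' \
  'c1t3d0 Soft Errors: 0 Hard Errors: 4 Transport Errors: 12' \
  'c1t4d0 Soft Errors: 0 Hard Errors: 0 Transport Errors: 0' |
awk '{ for (i = 1; i < NF; i++)
         if ($i == "Errors:" && $(i+1) > 0) { print $1; next } }'
```

On the real machine you'd pipe iostat -En itself into the awk filter
instead of the printf; either way, a device it flags is the one to check
against /var/adm/messages before deciding to replace it.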

However, don't discount the possibility that the controller and/or the
cable is at fault.

cheers,
James
--
Solaris kernel software engineer, system admin and troubleshooter
             http://www.jmcp.homeunix.com/blog
Find me on LinkedIn @ http://www.linkedin.com/in/jamescmcpherson
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
