Greg Oster wrote:
Adam PAPAI writes:

When I'm creating the raid array (raidctl -iv raid0), I get the following error message:

sd0(mpt0:0:0): Check Condition (error 0x70) on opcode 0x28
   SENSE KEY: Media Error
        INFO: 0x224c10c (VALID flag on)
    ASC/ASCQ: Read Retries Exhausted
        SKSV: Actual Retry Count: 63
raid0: IO Error.  Marking /dev/sd0d as failed.
raid0: node (Rod) returned fail, rolling backward
Unable to verify raid1 parity: can't read stripe.
Could not verify parity.


This means there is no HDD error..


Well... no hdd error for this set of reads... Hmmmmm.... What if you push both drives at the same time:

 dd if=/dev/rsd0d of=/dev/null bs=10m &
 dd if=/dev/rsd1d of=/dev/null bs=10m &

? (Were the drives "warm" when you did this test, and/or when the original media errors were reported? Does a 'raidctl -iv raid0' work now or does it still trigger an error? )
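To be sure those reads really are clean, it's probably also worth checking the exit status of each dd and then looking at the message buffer for fresh sense errors afterwards. A rough sketch (plain sh, dd and dmesg only, nothing RAIDframe-specific):

 dd if=/dev/rsd0d of=/dev/null bs=10m & pid0=$!
 dd if=/dev/rsd1d of=/dev/null bs=10m & pid1=$!
 wait $pid0 && wait $pid1 && echo "both reads finished without error"
 dmesg | tail -40   # look for new Check Condition / Media Error lines from sd0/sd1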


Then RAIDframe probably has the problem, I guess..


RAIDframe doesn't know anything about SCSI controllers or SCSI errors... all it knows about are whatever VOP_STRATEGY() happens to return to it from the underlying driver...

Do I have to use /altroot on /dev/sd1a then, or is there a patch for RAIDframe to fix this?


There is no patch for RAIDframe to fix this. There is either a problem with the hardware (most likely), some sort of BIOS configuration issue (is it negotiating the right speed for the drive?), or (less likely) an mpt driver issue. Once you figure out what the real problem is and fix it, RAIDframe will work just fine :)
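One quick way to see what the controller negotiated is to look at the probe lines in the kernel message buffer; the grep pattern below simply assumes the disks attached as sd0/sd1 on mpt0, as in the log above:

 # show the negotiated sync/width settings and any complaints from mpt0
 dmesg | egrep 'mpt0|^sd0|^sd1'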
Later...

Greg Oster


After a reboot, the end of my dmesg shows:

rootdev=0x400 rrootdev=0xd00 rawdev=0xd02
Hosed component: /dev/sd0d.
raid0: Ignoring /dev/sd0d.
raid0: Component /dev/sd1d being configured at row: 0 col: 1
         Row: 0 Column: 1 Num Rows: 1 Num Columns: 2
         Version: 2 Serial Number: 100 Mod Counter: 27
         Clean: No Status: 0
/dev/sd1d is not clean !
raid0 (root)raid0: no disk label
raid0: Error re-writing parity!

dd if=/dev/rsd0d of=/dev/null bs=10m &
dd if=/dev/rsd1d of=/dev/null bs=10m &

Both completed successfully.

# raidctl -iv raid0
Parity Re-Write status:
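The progress of the re-write can also be polled from another terminal; assuming raidctl's -S option behaves as documented here, something like:

 raidctl -S raid0   # report parity re-write / reconstruction progress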

After this, the end of my dmesg shows:

rootdev=0x400 rrootdev=0xd00 rawdev=0xd02
Hosed component: /dev/sd0d.
raid0: Ignoring /dev/sd0d.
raid0: Component /dev/sd1d being configured at row: 0 col: 1
         Row: 0 Column: 1 Num Rows: 1 Num Columns: 2
         Version: 2 Serial Number: 100 Mod Counter: 27
         Clean: No Status: 0
/dev/sd1d is not clean !
raid0 (root)raid0: no disk label
raid0: Error re-writing parity!
raid0: no disk label
raid0: Error re-writing parity!

It is the same with both the 36GB and the 73GB drives.

What else should I check?
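For reference, once the underlying disk or controller problem is dealt with, the usual raidctl(8)/disklabel(8) sequence to bring the set back should be roughly the sketch below. The component name /dev/sd0d is taken from the dmesg above; the rest is standard usage and has not been tried here:

 raidctl -s raid0             # show which components are failed/optimal
 raidctl -R /dev/sd0d raid0   # rebuild the failed component in place
 raidctl -P raid0             # check the parity and re-write it if needed
 disklabel -e raid0           # (re)create the missing label; exact flags vary, see disklabel(8)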

--
Adam PAPAI
D i g i t a l Influence
http://www.digitalinfluence.hu
Phone: +36 30 33-55-735
E-mail: [EMAIL PROTECTED]
