 Do you have a BBU on that card?

No BBU, and I *am* set to WriteThru. I haven't had any stability problems at all.

 One other question: when you say you replace the manually failed
 drive, are you using an absolute virgin disk? I seem to recall that
 the card might "remember" the disk as a previously failed drive (based
 on finding a previous config on the disk) and be reluctant to believe
 it's a good disk.

No, I've only got four disks - matching 150GB WD Raptors, and I've done nothing sneaky before re-inserting them. The controller has treated them as good disks.

 Oh, thanks for the extremely detailed report. It'll certainly help me
 try to replicate your results.

No problem :),

Matthew

On Fri, 13 Oct 2006, Jon Simola wrote:

On 10/13/06, [EMAIL PROTECTED] wrote:

 That is, I am running firmware version 813G.  [According to the LSILogic
 website, it was released on 2005.03.11, and is now 5 versions old.]
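
 [The running firmware version is also visible in dmesg; assuming the
 controller attached as ami0, something like:

   $ dmesg | grep '^ami0'

 will show the FW and BIOS revision lines.]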

I've got a spare with 813G, and my production one is 813J, which fixed
a few little issues.

Do you have a BBU on that card? Without a BBU, and with the card's
cache set to WriteThru, trying to set a hot spare with bioctl would
lock up my controller, requiring a hard power cycle and the
entertaining fsck of large filesystems.
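
For reference, the operation that locked things up was the ordinary
hot spare promotion via bioctl(8) - something like this, where the 0:4
channel:target location is illustrative rather than my actual slot:

  # bioctl -H 0:4 ami0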

ami0 at pci4 dev 14 function 0 "Symbios Logic MegaRAID SATA 4x/8x" rev 0x07: irq 5 LSI 3008 32b
ami0: FW 813J, BIOS vH430, 128MB RAM

 Problem summary (problems with bioctl -H on a SATA 300-8x)
 ==========================================================
 To summarize (I've included the full test case below) - I can now use
 bioctl -H to set an "Unused" drive to "Hot spare".  However, despite
 showing as hot spare in *both* bioctl and the LSI boot menu, when I
 fail a drive in my RAID array, the "hot spare" fails to behave as such
 (it will not be integrated into the degraded RAID array).
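
 A rough reproduction sketch (the 0:3 location is illustrative - use
 whatever channel:target holds the "Unused" disk):

   # bioctl ami0
     (per-disk status shows the new disk as "Unused")
   # bioctl -H 0:3 ami0
     (bioctl and the LSI boot menu now both report "Hot spare")
   (now fail a member of the array - the "hot spare" is never
    integrated into the degraded volume)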

 It gets worse - once a drive has been set as a hot spare through bioctl,
 it can never be changed back to unused, nor can it be properly set as a
 hot spare through the LSI boot menu.  Essentially that slot is now
 unusable.  The only solution that I have found is to "Clear
 configuration" from the LSI boot menu (which then requires a reinstall
 of the contents of the drives).

That sounds bad. I'm going to try to replicate that with my spare
stuff next week, as I certainly don't want to be bitten by that
problem on my production hardware.

One other question: when you say you replace the manually failed
drive, are you using an absolute virgin disk? I seem to recall that
the card might "remember" the disk as a previously failed drive (based
on finding a previous config on the disk) and be reluctant to believe
it's a good disk.

Oh, thanks for the extremely detailed report. It'll certainly help me
try to replicate your results.

--
Jon
