On Wednesday 19 December 2007 07:50, S Scharf wrote:
> I am running a Debian 3.1 (Sarge) server with RAID 1 mirroring on the
> disk drive.
>
> Recently, one of the disks failed. The system sent root a proper e-mail
> notification of the failure. Unfortunately, the system seemed to continue
> to try to use the disk, and operations slowed to the point that the only
> thing I could do was to power the system down and physically remove the
> bad drive. I had thought to check the mdadm status and remove the failed
> drive from the array by command.
>
> My question is: shouldn't the RAID system have removed the drive for me
> after it had failed? Why was the system still trying to do operations on
> it after noticing the failure? Was (is) there something wrong with my
> RAID configuration?
I'm assuming IDE drives, since this doesn't sound like a SCSI scenario. IDE is, well, 'not Scottish'. [1]

Even if your software RAID is no longer using the device, that doesn't mean the device won't try to communicate with your system, and vice versa, at the hardware level. If another device shares the same cable as the failed drive, the failed drive may still try to respond. Either way, you get issues.

So no, there is likely nothing wrong with your RAID configuration. I'd suggest SCSI drives and, better yet, hardware SCSI RAID if you can afford them, but with standard IDE components there's not much to be done. hdparm _might_ allow you to detach the failed device from the IDE bus, but I'm not really sure.

[1] There's a saying: "If it's not Scottish, it's crrrrap!" Of course, as I am not, I don't believe it.

--
And that's my crabbing done for the day.  Got it out of the way early,
now I have the rest of the afternoon to sniff fragrant tea-roses or
strangle cute bunnies or something.   -- Michael Devore
GnuPG Key Fingerprint 86 F5 81 A5 D4 2E 1F 1C      http://gnupg.org
No more sea shells:  Daniel's Weblog    http://cshore.wordpress.com
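P.S. For the record, checking the array and removing the failed member by command, as you'd intended, would look something like the following. This is just a sketch: /dev/md0 and /dev/hdb1 are example names, so substitute your actual array and partition devices.

```shell
# Check array health; a failed member is flagged (F) in /proc/mdstat
# and the status line shows e.g. [U_] instead of [UU].
cat /proc/mdstat
mdadm --detail /dev/md0

# Mark the dead disk as failed (if the kernel hasn't already done so),
# then remove it from the array:
mdadm /dev/md0 --fail /dev/hdb1
mdadm /dev/md0 --remove /dev/hdb1
```

Once the bad drive is physically replaced, the new partition can be re-added with `mdadm /dev/md0 --add /dev/hdb1` and the mirror will resync. None of that helps, of course, if the dying drive is hanging the IDE bus itself, which is what it sounds like happened to you.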