On Thursday, June 03, 2004 4:59 PM, Michael Kahle wrote:
> Recently when starting my computer I was notified by the Adaptec
> SCSI BIOS that my disk ID #4 was not starting up.  The disk had
> failed.  This disk is part of the software RAID array that I have
> set up on the machine.
>
> When the kernel booted it notified me that it could not see the disk
> and as a result my RAID 5 array was in a failed state.  I shut down
> the computer, replaced the drive with an identical one, and proceeded
> to restart the computer.  The Adaptec SCSI BIOS noticed the drive and
> Linux booted.  At this point the RAID 5 array was still in a failed
> state.
>
> In the console I typed the following:
> lsraid -R -p
>
> And got the following output:
> # md device [dev 9, 0] /dev/md0 queried online raiddev /dev/md0
>         raid-level              5
>         nr-raid-disks           5
>         nr-spare-disks          0
>         persistent-superblock   1
>         chunk-size              32
> 
>         device          /dev/sdb1
>         raid-disk               0
>         device          /dev/sdc1
>         raid-disk               1
>         device          /dev/sdd1
>         raid-disk               2
>         device          /dev/sde1
>         raid-disk               3
>         device          /dev/null
>         failed-disk             4
>
> [EMAIL PROTECTED]:~#
>
> I then proceeded to run "raidhotadd /dev/md0 /dev/sdf1" after
> partitioning /dev/sdf.  The program launched the process
> md0_resync.  After this completed I ran:
> lsraid -R -p
>
> And got the following output:
> # md device [dev 9, 0] /dev/md0 queried online raiddev /dev/md0
>         raid-level              5
>         nr-raid-disks           5
>         nr-spare-disks          0
>         persistent-superblock   1
>         chunk-size              32
>
>         device          /dev/sdb1
>         raid-disk               0
>         device          /dev/sdc1
>         raid-disk               1
>         device          /dev/sdd1
>         raid-disk               2
>         device          /dev/sde1
>         raid-disk               3
>         device          /dev/sdf1
>         raid-disk               4
>
> So, it looks to me like everything worked OK.  That is, until I
> reboot... then I get the same failed-disk report as shown on top!
> Weird.  Below is my syslog.  Sorry that my post is so long; I
> thought it best to have more information than not enough.
> 
> // Syslog
<snip>

I thought I would follow up with this.  I have solved the problem.  It turns
out that the partition /dev/sdf1 was set up as an ext2 partition and not a
"Linux raid autodetect" partition, so the kernel's RAID autodetection never
picked it up at boot.  I rebooted the computer, changed the partition type,
re-synced the RAID array, and rebooted again.  Now everything is working as
advertised.  Thanks for looking!
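For the archives, the fix can be sketched roughly as below.  This assumes the
replacement disk is /dev/sdf and the array is /dev/md0 as in my case; device
names will differ on other systems, and these commands rewrite the partition
table, so double-check the target before running them.

```shell
# Mark partition 1 of /dev/sdf as type fd ("Linux raid autodetect") so the
# kernel's RAID autodetection finds it at boot.  sfdisk --change-id is one
# way to do this; fdisk's 't' command works as well.
sfdisk --change-id /dev/sdf 1 fd

# Re-add the partition to the degraded array; md starts rebuilding onto it.
raidhotadd /dev/md0 /dev/sdf1

# Watch the resync progress until the array reports all disks up again.
cat /proc/mdstat
```

The key point is the partition type: raidhotadd will happily rebuild onto a
partition of any type, but without type fd the array comes back degraded after
the next reboot, which is exactly the symptom I saw.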

Michael

