hi philippe

if you want your system to boot no matter how badly damaged
your system might get... ( within reason )... you should
keep / as SMALL as possible so that you can always boot
into single user mode to fix things

To test that raid1 setup properly..
-----------------------------------
        - power down... pull the IDE cable to either of the disks
        - boot it up... if it works, good
        - power down... reconnect the simulated bad disk
        - boot it up... and let it resync again
        - cat /proc/mdstat should tell you its status

        then simulate the OTHER failed disk
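between the unplug/replug steps, the thing to look at in /proc/mdstat is
the [UU]-style field: an underscore there means one mirror is missing.
a small sketch of that check ( the helper name and the file argument are
my own, not part of raidtools ):

```shell
#!/bin/sh
# Sketch (my helper, not part of raidtools): report which md arrays are
# degraded by inspecting the [UU]-style status field of an mdstat file.
# An "_" in that field means one mirror is missing.
check_mdstat() {    # $1 = path to an mdstat file, normally /proc/mdstat
    grep '^md' "$1" | while read name rest; do
        # pull out the [U...]-style field, e.g. [UU] or [U_]
        status=$(echo "$rest" | sed -n 's/.*\(\[[U_][U_]*\]\).*/\1/p')
        case "$status" in
            *_*) echo "$name DEGRADED $status" ;;
            *)   echo "$name ok $status" ;;
        esac
    done
}
```

on a live box you'd run `check_mdstat /proc/mdstat`... during the
pull-the-cable test the broken array shows [U_] instead of [UU]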

keeping /boot separate is not very useful: being able to boot
a kernel doesn't help if you don't have a root filesystem yet..

have fun
alvin
http://www.Linux-Consulting.com/Raid


the following worked for me... unplugged drives and everything still
booted... ( be careful: any data written while in degraded mode
due to the failed disk is not yet mirrored... ie, get the array out of
degraded mode asap )
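getting out of degraded mode means re-adding the reconnected partitions
so the mirrors resync. a sketch with the raidtools-era commands ( the
device names follow my example layout below and are assumptions; RUN=echo
makes this a dry run that only prints the commands, clear it to execute ):

```shell
#!/bin/sh
# Dry-run sketch: re-add the "failed" disk's partitions after reconnecting
# it, so each raid1 array resyncs. Device names match the example layout
# (md0 = hda1 + hdc1, md1 = hda2 + hdc2) and are assumptions.
RUN=echo            # RUN=echo -> just print; set RUN= to really run
$RUN raidhotadd /dev/md0 /dev/hda1
$RUN raidhotadd /dev/md1 /dev/hda2
# then watch the resync progress
$RUN cat /proc/mdstat
```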


my test environment/partitions
------------------------------
/       64Mb    - you just need 64Mb of good disk to work your magic

/tmp    128Mb
/var    256Mb
/usr    2048Mb
swap    2 x memory ( say 512Mb max )
/opt    rest of disk for user space
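one detail the sizes don't show: for the kernel to auto-assemble the
mirrors at boot ( which root-on-raid needs, together with
persistent-superblock 1 ), each raid partition's type must be fd,
"Linux raid autodetect". a hypothetical sfdisk input sketch for the
first few primary partitions of /dev/hda ( sizes in Mb via -uM; the
hda5/hda7 entries would live inside an extended partition, omitted
here... double-check before running anything like this ):

```
# sfdisk -uM /dev/hda < layout.txt    (hypothetical example)
,64,fd        # hda1  /     -> md0
,128,fd       # hda2  /tmp  -> md1
,256,fd       # hda3  /var  -> md2
```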



/etc/fstab
----------
/dev/md0        /
/dev/md1        /tmp
/dev/md2        /var
/dev/md3        /usr
/dev/md4        /home   
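written out with all six fstab fields ( the type and options columns are
my assumptions: ext2 with defaults, fsck pass 1 for / and 2 for the rest ):

```
/dev/md0        /       ext2    defaults        1 1
/dev/md1        /tmp    ext2    defaults        1 2
/dev/md2        /var    ext2    defaults        1 2
/dev/md3        /usr    ext2    defaults        1 2
/dev/md4        /home   ext2    defaults        1 2
```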

/etc/lilo.conf
---------------
....
boot=/dev/md0
....
image=/boot/vmlinuz-2.2.19-Raid
 label=linux-2.2.19-Raid
 root=/dev/md0
...
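worth remembering: editing lilo.conf by itself does nothing... lilo has
to be re-run to rewrite the boot sector. -t is lilo's real test mode
( parse the config, write nothing ); RUN=echo keeps the sketch below a
dry run. note also that for the box to stay bootable with one disk gone,
the boot sector must end up on both disks... depending on your lilo
version you may need to install it on each disk explicitly.

```shell
#!/bin/sh
# Dry-run sketch: re-install lilo after changing /etc/lilo.conf.
RUN=echo            # RUN=echo -> just print; set RUN= to really run
$RUN lilo -t -v     # test mode: parse the config, write nothing
$RUN lilo -v        # the real install, once -t looks sane
```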


/etc/raidtab ( with 2 disks ) - shorthand here; properly defined in the real file
------------
/dev/md0        /dev/hda1 + /dev/hdc1   /
/dev/md1        /dev/hda2 + /dev/hdc2   /tmp
/dev/md2        /dev/hda3 + /dev/hdc3   /var
/dev/md3        /dev/hda5 + /dev/hdc5   /usr
/dev/md4        /dev/hda7 + /dev/hdc7   /home
        - no point in making swap space raid1 ???
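the md0 line above, spelled out in full raidtab syntax ( same form as in
the raidtab quoted below; chunk-size isn't really used by raid1 but the
tools expect the field )... one stanza like this per md device:

```
raiddev                 /dev/md0
raid-level              1
nr-raid-disks           2
chunk-size              32
nr-spare-disks          0
persistent-superblock   1
device                  /dev/hda1
raid-disk               0
device                  /dev/hdc1
raid-disk               1
```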

... have fun ...

On Mon, 28 May 2001, Philippe Trolliet wrote:

> hello,
> i want lilo to boot from the md devices even if one hd fails. can anybody
> help me?
> here my configuration:
> 
> df -h shows:
> Filesystem            Size  Used Avail Use% Mounted on
> /dev/md0               28G  3.5G   23G  14% /
> /dev/md1               99M  5.3M   88M   6% /boot
> ----------------------------------------------------------------------------
> my raidtab:
> 
> #MD0
> raiddev                       /dev/md0
> raid-level            1
> nr-raid-disks         2
> chunk-size            32
> nr-spare-disks                0
> persistent-superblock 1
> device                        /dev/hdc3
> raid-disk             0
> device                        /dev/hda3
> raid-disk             1
> 
> #MD1
> raiddev                       /dev/md1
> raid-level            1
> nr-raid-disks         2
> chunk-size            32
> nr-spare-disks                0
> persistent-superblock 1
> device                        /dev/hdc1
> raid-disk             0
> device                        /dev/hda1
> raid-disk             1
> ----------------------------------------------------------------------------
> my fstab:
> 
> /dev/hda2       swap                      swap            defaults   0   2
> /dev/hdc2       swap                      swap            defaults   0   2
> /dev/md0        /                         ext2            defaults   1   1
> /dev/md1        /boot                     ext2            defaults   1   1
> 
> /dev/hdb        /cdrom                    auto            ro,noauto,user,exec 0   0
> 
> /dev/fd0        /floppy                   auto            noauto,user 0   0
> 
> proc            /proc                     proc            defaults   0   0
> # End of YaST-generated fstab lines
> ----------------------------------------------------------------------------
> /proc/mdstat:
> 
> Personalities : [linear] [raid0] [raid1] [raid5]
> read_ahead 1024 sectors
> md1 : active raid1 hda1[1] hdc1[0] 104320 blocks [2/2] [UU]
> md0 : active raid1 hda3[1] hdc3[0] 29808512 blocks [2/2] [UU]
> unused devices: <none>
> 
> thanks a lot
> best regards
> ph. trolliet
> 
> -
> To unsubscribe from this list: send the line "unsubscribe linux-raid" in
> the body of a message to [EMAIL PROTECTED]
> 
