Hi all

I have set up two software RAID arrays (RAID 5 and RAID 0) across 3 hard
disks (120, 160, and 250 GB) on my Gutsy box. The RAID 5 array (md0)
contains the root filesystem, and the RAID 0 array (md1) is mounted as a
storage (currently unused) partition. The /boot partition is a plain
ext3 partition, present on each disk (duplicated manually, for the moment).
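
For reference, I created the arrays roughly like this (the exact
partition names are from memory and may differ on my box):

# mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda2 /dev/sdb2 /dev/sdc2
# mdadm --create /dev/md1 --level=0 --raid-devices=3 /dev/sda3 /dev/sdb3 /dev/sdc3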

I'm trying to get this setup to boot even if the RAID 5 array (md0) is
degraded (i.e. one disk has failed). When all the disks are present, the
system boots fine. But if I try with only 2 disks, the initramfs loads
fine, but md0 is never assembled, so the system can't find / and stops
booting.
When the boot fails, the system drops me into the initramfs console.
From there, I can start my md array with this command:

# mdadm --assemble --scan --run

The "--run" option tell mdadm to start array, even in degraded mode.

So I suspected that the wrong option was being passed to mdadm in the
initramfs, telling it not to run a degraded array.
I found (by grepping the initramfs contents) that the file
/etc/udev/rules.d/85-mdadm.rules contains this line:

SUBSYSTEM=="block", ACTION=="add|change", ENV{ID_FS_TYPE}=="linux_raid*", \
        RUN+="watershed /sbin/mdadm --assemble --scan --no-degraded"

I guess that's the boot-time invocation of mdadm! So I changed it, made
a new initramfs, rebooted with only 2 disks and... nothing changed, it
still doesn't start :/
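
For reference, this is roughly the change I tried (replacing
"--no-degraded" with "--run", then regenerating the initramfs):

SUBSYSTEM=="block", ACTION=="add|change", ENV{ID_FS_TYPE}=="linux_raid*", \
        RUN+="watershed /sbin/mdadm --assemble --scan --run"

# update-initramfs -u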

So, after this long story (sorry), here are my questions:

Do you think I'm completely off track, or is editing this file the right
way to go?
Is there a good reason why Ubuntu's developers chose this "--no-degraded"
option for mdadm by default?
What more can I do?

Thanks for reading!

Ben
