Andrea Ganduglia wrote:
Hi, I have a problem with unexpected RAID behavior. On my machine I have
configured two RAID5 arrays (software RAID, mdadm) over 5 disks + 1 spare disk.
md0: sda1 sdb1 sdc1 sdd1 sde1 [UUUUU] (spare: sdf1)
md1: sda2 sdb2 sdc2 sdd2 sde2 [UUUUU] (spare: sdf2)
Now, I have marked sdb2 as failed on md1:
~$ mdadm --fail /dev/md1 /dev/sdb2
md1: sda2 sdc2 sdd2 sde2 [U_UUU] (spare: sdf2)
and hot-added sdf2 to the same array. The array was rebuilt with sdf2
included in it:
~$ mdadm --add /dev/md1 /dev/sdf2
md1: sda2 sdc2 sdd2 sde2 sdf2 [UUUUU] (spare: none)
Ok, it works well.
Now, to emulate a disaster scenario, I halted the machine and physically
removed /dev/sdb. The system booted fine, but the /dev names have been
shifted by one position. In other words (the name on the left now refers
to the disk that used to have the name on the right):
sda -now-is-> sda
sdb -now-is-> sdc
sdc -now-is-> sdd
sdd -now-is-> sde
sde -now-is-> sdf
while sdf is no longer recognized by the system. Why does the system
reallocate /dev names in this way? It's a disaster for daily maintenance.
Now my /proc/mdstat says:
md0: sda1 sdb1 sdc1 sdd1 sde1 [UUUUU] (spare: none)
md1: sda2 sdb2 sdc2 sdd2 sde2 [UUUUU] (spare: none)
/dev/sdf does not exist, even though the disk is physically in my machine,
while
/dev/sdb appears in the RAID arrays, even though that disk is physically on my desk!
Please, help me to understand mdadm's logic.
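For what it's worth, the arrays themselves reassembled cleanly because mdadm identifies members by the UUID stored in each member's superblock, not by the /dev name. One way to make that explicit is to pin the arrays by UUID in /etc/mdadm/mdadm.conf. A sketch (the UUIDs below are placeholders, not values from this machine; take the real ones from `mdadm --detail /dev/md0`):

```
# /etc/mdadm/mdadm.conf -- assemble arrays by UUID, not by device name
DEVICE partitions
ARRAY /dev/md0 UUID=00000000:00000000:00000000:00000000
ARRAY /dev/md1 UUID=11111111:11111111:11111111:11111111
```

With this in place, a shift in sdX names does not affect which partitions end up in which array.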
Hi,
It's not an mdadm logic problem...
It's a "/dev" naming problem... Linux assigns the first disk it finds the
name sda, the second sdb, and so on.
If you remove a disk, all the names after it shift by one !!! :-(
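The shift can be sketched in a few lines of shell. This is only an illustration of the kernel's in-order naming; the serial numbers and the `name_disks` helper are made up for the example, not anything the kernel actually exposes:

```shell
#!/bin/sh
# Physical drives present at first boot, by (made-up) serial number:
boot1="SER-A SER-B SER-C SER-D SER-E SER-F"

# Same machine after pulling the second drive (SER-B):
boot2="SER-A SER-C SER-D SER-E SER-F"

# Hand out sda, sdb, ... in discovery order, like the kernel does.
name_disks() {
    i=1
    for serial in $1; do
        letter=$(printf 'abcdefgh' | cut -c"$i")
        printf 'sd%s -> %s\n' "$letter" "$serial"
        i=$((i + 1))
    done
}

echo "First boot:"
name_disks "$boot1"
echo "After removing SER-B:"
name_disks "$boot2"
```

Running it reproduces Andrea's observation: the name sdb now lands on the disk that used to be sdc, and the last name, sdf, simply disappears.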
Regards
Guillaume
--
Guillaume
E-mail: silencer_<at>_free-4ever_<dot>_net
Blog: http://guillaume.free-4ever.net
----
Site: http://www.free-4ever.net