Uwe Dippel <udippel <at> uniten.edu.my> writes:

> To me this seems a result of the sequence at boot: at first we identify the
> physical drives, that is sd0, sd1, sd2 and sd3 in this case, and only later
> do we get softraid up, sensibly roaming the RAID one up. Sensibly? Because
> fstab can't know and will want to mount partitions of a lower number 
> (sd3 in this case), which is always impossible.

I do understand the 'no labels'/'no UUID' problem, but the current behaviour
will break the boot no matter what: any extra drive, in any slot, is
discovered at boot time before softraid is activated. So it breaks 100% of
the time, right? There is no real solution without disk IDs, though a hackish
one comes to mind: if softraid was configured at sd3 (assembled from sd1 and
sd2 in this case), the kernel needs to be aware of that fact when it goes
into drive discovery at boot.
Then, when one plugs another drive into a higher controller, discovery would
find: sd0 - sd1 - sd2 - sd3_is_taken - sd4. fstab stays correct w.r.t. sd0 to
sd3, and sd4, the new drive, can be used for whatever purpose it was intended.
And if sd0 were removed from the original configuration, discovery would find:
sd0 - sd1 - sd3_is_taken. Roaming can still map the chunks that were on sd1
and sd2 to their new names sd0 and sd1, and the RAID will come up properly
again.
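The proposed numbering scheme can be illustrated with a small simulation
(purely a sketch; `assign_names` and the drive lists are hypothetical names,
not OpenBSD kernel code):

```python
# Illustrative sketch of the proposal: the kernel knows which sdN slot
# a softraid volume owns and skips it while naming newly discovered drives.

def assign_names(physical_drives, reserved_slots):
    """Assign sdN names to physical drives in discovery order,
    skipping slots reserved for softraid volumes."""
    names = {}
    n = 0
    for drive in physical_drives:
        while n in reserved_slots:   # e.g. sd3 is taken by the RAID volume
            n += 1
        names[drive] = f"sd{n}"
        n += 1
    return names

# Extra drive on a higher controller: sd3 stays reserved, so the new
# drive lands on sd4 and fstab entries for sd0..sd3 still hold.
print(assign_names(["a", "b", "c", "new"], reserved_slots={3}))
# {'a': 'sd0', 'b': 'sd1', 'c': 'sd2', 'new': 'sd4'}

# Drive that was sd0 removed: the two RAID chunks roam down one slot,
# and the volume still assembles at its reserved name sd3.
print(assign_names(["b", "c"], reserved_slots={3}))
# {'b': 'sd0', 'c': 'sd1'}
```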
That's the best I could think of for now; anything but perfect, but still
better than 100% breakage.
What do you think?
