> Have you had the same problem, in that softraid wouldn't assemble the
> RAID volume with a missing disk? How did you "remove" the failed device
> from the RAID array (ie. you 'add' the new disk with -R during rebuild,
> but how do you 'remove' the failed/offline drive with eg. bioctl)?

No, in fact I used RAID5 purely to test whether I had broken something
during my softraid hacking. My RAID5 testing consisted of intensively
offlining chunks and rebuilding with a new drive. I did not use 4 real
drives, just 2 real drives with 2 RAID partitions on each. I created
the array with something like -l
/dev/sd0a,/dev/sd0d,/dev/sd1a,/dev/sd1d -- or so, then randomly put
some drive/partition offline with bioctl -O and rebuilt with bioctl
-R. If you put a drive offline, it stays offline and will not be
re-added automatically even if its health status changes (however that
might happen and whatever it means). softraid finds its drives because
it (1) uses dedicated RAID partitions and (2) saves its metadata to
each partition. This way RAID partitions that are already part of an
array know which array they belong to. It also means that if you run
bioctl -c on a drive/partition that is already part of an array, it
will not attempt to create a new array but to attach the existing one.
This is somewhat misleading, and I have always felt there should be
distinct "attach" and "create" options in bioctl, but so far I have
not come up with any patches for this.
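
For the archives, the sequence was roughly as follows (the volume name
sd2 and the spare partition sd1e are only placeholders; your device
names will differ):

    # create the RAID5 volume from four RAID partitions on two disks
    bioctl -c 5 -l /dev/sd0a,/dev/sd0d,/dev/sd1a,/dev/sd1d softraid0
    # force one chunk offline to simulate a failure (volume attached as sd2 here)
    bioctl -O /dev/sd1d sd2
    # rebuild the degraded volume onto a spare RAID partition
    bioctl -R /dev/sd1e sd2
    # check volume and chunk status afterwards
    bioctl sd2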

Anyway, testing is easy. Do you have a Windows/Linux machine with
VirtualBox, or do you run OpenBSD with vmd? If so, just create a
virtual OpenBSD environment with several drives and test what I
suggest: (1) create a RAID5 volume, (2) put one of the drives offline,
(3) rebuild with another drive; a sketch follows below. Easy, isn't
it? If this runs fine for you, do the same on the real array... and
pray/cross your fingers/do whatever you do in such cases. :-)
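
A minimal sketch of that VM test, assuming four extra virtual disks
that show up as sd1..sd4, each labelled with a single RAID partition
'a', and the new volume attaching as sd5 (all names are illustrative):

    # give each virtual disk a partition of type RAID first
    disklabel -E sd1    # repeat for sd2, sd3, sd4
    # (1) create the RAID5 volume from the four virtual disks
    bioctl -c 5 -l /dev/sd1a,/dev/sd2a,/dev/sd3a,/dev/sd4a softraid0
    # (2) put one chunk offline to simulate a failed drive
    bioctl -O /dev/sd3a sd5
    # (3) rebuild onto a fresh RAID partition, e.g. on a fifth virtual disk
    bioctl -R /dev/sd6a sd5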
