Hi, engineer.

Hmm... I tried this trick, but it doesn't work ;). I added

# If wd1e still looks optimal, mark it failed so the parity
# rewrite below skips raid1.
CHECK=`raidctl -s raid1 | grep '/dev/wd1e: optimal'`
if [ -n "$CHECK" ]; then
        raidctl -f /dev/wd1e raid1
fi
# Check parity on raid devices.
raidctl -P all

into /etc/rc on /dev/wd0a (where the live system lives and the kernel
boots from). But it still reconstructs/rewrites parity and makes me
wait for all of it. Actually I tried this with a 10G raid1 so the
testing was faster.
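
For debugging, I could also print what raidctl reports at exactly that
point in /etc/rc (console output only, since the filesystems may still
be read-only there); just a sketch:

# temporary debugging: show what raidctl sees at this point in boot,
# to compare with the interactive-shell output below
echo "raid status before the wd1e check:"
raidctl -s all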

When I do the same from the shell, I see
# raidctl -s all
raid1 Components:
           /dev/wd0e: optimal
           /dev/wd1e: failed
No spares.
Parity status: clean
Reconstruction is 100% complete.
Parity Re-write is 100% complete.
Copyback is 100% complete.
raid0 Components:
           /dev/wd0d: optimal
           /dev/wd1d: optimal
No spares.
Parity status: clean
Reconstruction is 100% complete.
Parity Re-write is 100% complete.
Copyback is 100% complete.

# raidctl -P all
raid1: Parity status: clean
raid0: Parity status: clean

i.e. the effect is achieved and parity looks clean. Why is it not so at boot?

>  Further on RAID.

> Let's assume that we have 2 RAID sets:

> raid0 (wd0d, wd1d), 10 G - system (/, swap, /usr, /var)
> raid1 (wd0e, wd1e), 160 G - files, logs (/var/log, /var/www,
> /usr/[src,obj,ports]) 
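
(For reference, a mirror like that raid1 would have been created from a
RAIDframe config file roughly like this; the layout and queue numbers
here are only typical placeholder values, not necessarily the real
ones:)

START array
# 1 row, 2 columns, 0 spares
1 2 0

START disks
/dev/wd0e
/dev/wd1e

START layout
# sectors per stripe unit, SUs per parity unit, SUs per recon unit, RAID level
128 1 1 1

START queue
fifo 100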

>  Now imagine: what if I add "raidctl -f /dev/wd1e raid1" (making wd1e
> look failed) before "raidctl -P all"?
>  The parity will be rewritten "fast" on raid0, the parity on raid1
> will be skipped, fsck will check raid0 and raid1 quickly, and the
> system will start fast.

>  Furthermore, I would not just do "raidctl -f /dev/wd1e raid1"
> blindly, but rather something like "raidctl -s raid1", and if it
> shows
>           /dev/wd0e: optimal
> then do "raidctl -f /dev/wd1e raid1". But if it shows
>           /dev/wd0e: failed
> then do NOT do it (the whole disk0 may actually have failed) and do
> not fail wd1e, because wd1e is already the only working component of
> raid1.
>  Either way, whether wd1e is labeled "failed" by us or wd0e has
> actually failed, "raidctl -P all" skips the parity check on raid1,
> and rc just runs fsck on it.

>  Then, in /etc/rc.local perhaps, check "raidctl -s raid1", and if
> wd1e is failed but wd0e is optimal - whether it was our own "raidctl
> -f /dev/wd1e raid1" that failed it, or wd1e really failed, or the
> whole disk1 failed, does not matter - do "raidctl -F /dev/wd1e raid1"
> and let it reconstruct as it wants... or not, if disk1 is dead.
> Again, it does not matter.
>  But meanwhile we are working, not waiting for it to finish!
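
A sketch of that /etc/rc.local step (I am assuming the trailing & is
enough to keep rc.local from waiting while the reconstruction runs):

STATUS=`raidctl -s raid1`
if echo "$STATUS" | grep -q '/dev/wd0e: optimal' &&
   echo "$STATUS" | grep -q '/dev/wd1e: failed'; then
        # rebuild wd1e in place, in the background
        raidctl -F /dev/wd1e raid1 &
fi
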
>  Then we can check whether it really is as bad as disk0 or disk1
> having failed, and mail root/the admin, or something like that...
> But, again, whatever the case, the SYSTEM IS WORKING.
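
The "mail root" part could start as simply as this (ignoring the wd1e
we failed on purpose; the subject line is just an example):

# anything failed that we did not fail ourselves means a real problem
REAL=`raidctl -s all | grep ': failed' | grep -v '/dev/wd1e'`
if [ -n "$REAL" ]; then
        echo "$REAL" | mail -s "RAID component failed on `hostname`" root
fi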

>  I understand that this is a very specific solution, very dependent
> on that configuration, but it is not meant for all cases, it is for
> SUCH cases! And it can be useful for those who run such
> configurations.

>  So, what can you say about this?
 
-- 
engineer
