On Saturday, 15 March 2003 at 10:34:54 +0200, Vallo Kallaste wrote:
> On Sat, Mar 15, 2003 at 12:02:23PM +1030, Greg 'groggy' Lehey
> <[EMAIL PROTECTED]> wrote:
>
>>> -current, system did panic everytime at the end of
>>> initialisation of parity (raidctl -iv raid?). So I used the
>>> raidframe patch for -stable at
>>> http://people.freebsd.org/~scottl/rf/2001-08-28-RAIDframe-stable.diff.gz
>>> Had to do some patching by hand, but otherwise works well.
>>
>> I don't think that problems with RAIDFrame are related to these
>> problems with Vinum.  I seem to remember a commit to the head branch
>> recently (in the last 12 months) relating to the problem you've seen.
>> I forget exactly where it went (it wasn't from me), and in cursory
>> searching I couldn't find it.  It's possible that it hasn't been
>> MFC'd, which would explain your problem.  If you have a 5.0 machine,
>> it would be interesting to see if you can reproduce it there.
>
> Yes, yes, the whole raidframe story was meant as information about
> the conditions I did the raidframe vs. Vinum testing on.  Nothing to
> do with Vinum, besides that raidframe works and Vinum does not.
>
>>> Will it suffice to switch off power for one disk to simulate "more"
>>> real-world disk failure?  Are there any hidden pitfalls for failing
>>> and restoring operation of non-hotswap disks?
>>
>> I don't think so.  It was more thinking aloud than anything else.  As
>> I said above, this is the way I tested things in the first place.
>
> Ok, I'll try to simulate the disk failure by switching off the
> power, then.
I think you misunderstand.  I simulated the disk failures by doing a
"stop -f".  I can't see how the way they go down could influence the
revive integrity, though I can see that powering down repeatedly might
not do the disks any good.
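A failure/revive cycle of that kind would look roughly like this; the
object names are only examples for a two-plex mirrored volume called
"test", so substitute your own subdisks:

    # forcibly take one subdisk of the mirror offline
    vinum stop -f test.p1.s0

    # (run some I/O against the volume while it is degraded)

    # bring the subdisk back; "start" revives it from the other plex
    vinum start test.p1.s0

    # watch the subdisk state go from "reviving" to "up"
    vinum list

Greg
--
See complete headers for address and phone numbers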