Hi, I remember building a RAID 5 with gvinum on three 500GB hard drives some months ago, and initializing the raid5 plex took horribly long (several hours).
It seems to be a one-time job, because since the RAID finished its initialization the machine has started up and rebooted within normal times.

The documentation is a sore point, yes ;-) I also got my basic know-how about gvinum and RAID 1 from a blog and could read on from there with the man pages for what I needed, but it was hard..

Regards
---
Mr. Olli

On Mon, 2009-05-25 at 14:57 +0100, Howard Jones wrote:
> Hi,
>
> Can anyone with experience of software RAID point me in the right
> direction please? I've used gmirror before with no trouble, but nothing
> fancier.
>
> I have a set of brand new 1TB drives, a Sil3124 SATA card and a FreeBSD
> 7.1-p4 system.
>
> I created a RAID 5 set with gvinum:
> drive d0 device /dev/ad4s1a
> drive d1 device /dev/ad6s1a
> drive d2 device /dev/ad8s1a
> drive d3 device /dev/ad10s1a
> volume jumbo
> plex org raid5 256k
> sd drive d0
> sd drive d1
> sd drive d2
> sd drive d3
>
> and it shows as up and happy. If I reboot, all the subdisks show as
> stale, and so the plex is down. It then seems to be doing a rebuild,
> although it wasn't before, and the plex would newfs, mount and accept
> data before the reboot.
>
> Is there any way to avoid having to wait while gvinum apparently
> calculates the parity on all those zeroes?
>
> Am I missing some step to 'liven up' the plex before the first reboot?
> (loader.conf has the correct line to load gvinum at boot.) I tried again
> with 'gvinum start jumbo' before rebooting, and that made no difference.
>
> Also, is the configuration file format actually documented anywhere? I
> got that example from someone's blog, but the gvinum manpage doesn't
> mention the format at all! It *does* have a few pages dedicated to
> things that don't work, which was handy... :-) The handbook is still
> talking about ccd and vinum, and mostly covers the complications of
> booting off such a device.
>
> On the subject of documentation, I'm also assuming that this:
> S jumbo.p0.s2    State: I 1%    D: d2    Size: 931 GB
> means it's 1% through initialising, because neither the states nor the
> output of 'list' are described in the manual either.
>
> I was half-considering switching to ZFS, but the most positive thing I
> could find written about it (as implemented on FreeBSD) is that it
> "doesn't crash that much", so perhaps not. That was from a while ago, though.
>
> Does anyone use software RAID5 (or RAIDZ) for data they care about?
>
> Cheers,
>
> Howie
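P.S. In case it helps, this is roughly the sequence I used back then, from memory. The file name /root/jumbo.conf and the mount point /jumbo are just placeholders; your description file from the mail should work as-is:

    # save the description file, e.g. as /root/jumbo.conf, then:
    gvinum create /root/jumbo.conf   # read the config, create drives/volume/plex/subdisks
    gvinum start jumbo               # kick off parity initialization of the raid5 plex
    gvinum list                      # subdisks show "State: I <n>%" while initializing
    # once everything shows as up:
    newfs /dev/gvinum/jumbo
    mount /dev/gvinum/jumbo /jumbo

If I remember right, I just let the initialization run to completion before the first reboot, and after that the states stayed up across reboots.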