On 8 Aug 2005, at 22:00, [EMAIL PROTECTED] wrote:
Quoting Chris Boot <[EMAIL PROTECTED]>:

>> Hi all,
>>
>> I know this isn't strictly Debian related, but I'll be implementing this
>> on a Sarge system so I thought it appropriate.
>>
>> I've just taken the plunge and bought 4 250GB SATA hard drives and am
>> planning on implementing a RAID, with LVM2 layered over the top of that.
>> The main purpose of this system will be to store films, music, and other
>> general fileserver uses: in other words, larger files as opposed to
>> smaller ones. I've had a few ideas about how to set it all up, but I'm
>> wondering what others would suggest...
>>
>> Option 1: RAID 10
>>   md0: RAID 1 (sda sdc)
>>   md1: RAID 1 (sdb sdd)
>>   md2: RAID 0 (md0 md1)
>>
>> Option 2: RAID 0+1
>>   md0: RAID 0 (sda sdc)
>>   md1: RAID 0 (sdb sdd)
>>   md2: RAID 1 (md0 md1)
>>
>> Option 3: RAID 5
>>   md0: RAID 5 (sda sdb sdc sdd)
>>
>> Option 4: 2x RAID 0 + LVM striping
>>   md0: RAID 0 (sda sdc)
>>   md1: RAID 0 (sdb sdd)
>>   Striping would be implemented through LVM.
>>
>> In particular I'm having trouble figuring out what the actual
>> performance / fault-tolerance differences between RAID 10 and RAID 0+1
>> might be. I hear RAID 5 is hardly worth using in software, and I'm not
>> sure I'd be using it properly with only 4 drives. As for the final
>> option, would there be any advantage at all to using this method
>> (bearing in mind I'll be using LVM in any case)? Would anyone have any
>> other suggestions?

> Hello,
>
> If I had a similar setup, I would be considering option 1 or option 3.
>
> Option 4 doesn't give any fault tolerance: if a drive dies, you'll
> quickly be restoring from backup. Option 3 with RAID 5 would give you
> more overall disk space and might have quicker reads than option 1
> (three disks striped compared to just two). But I get the feeling that
> RAID 10 is the preferred choice these days if you have enough hard
> disks. I'm in the process of converting my software RAID 5 to RAID 10;
> the rebuild times of RAID 5 just take too long for our environment if a
> disk were to fail.
>
> In case you haven't seen it already, check out
> http://miracleas.com/BAARF/BAARF2.html
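For reference, Option 1 (RAID 10 built from two RAID-1 pairs striped with RAID 0) might be set up along these lines with mdadm and LVM2. This is only a sketch: the device names and partition numbers are assumptions, and the commands must be run as root on the actual hardware.

```shell
# Sketch of Option 1: RAID 10 as two RAID-1 mirrors striped with RAID 0.
# Device names (sda1, sdb1, sdc1, sdd1) are illustrative; adjust to your system.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdc1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdd1
mdadm --create /dev/md2 --level=0 --raid-devices=2 /dev/md0 /dev/md1

# Layer LVM2 over the top: make md2 a physical volume in a new volume group.
pvcreate /dev/md2
vgcreate vg0 /dev/md2
```

The volume group name (vg0) is arbitrary; logical volumes for the media storage would then be carved out of vg0 with lvcreate.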
Whoops! Sorry! I meant RAID-1 for Option 4, thus giving me the redundancy but striping with LVM instead of RAID 0.
Now you've pointed me at BAARF I've made up my mind about RAID-5 and RAID-0+1, so options 2 and 3 are out. But that leaves RAID-10 and 2x RAID-1 + LVM. Other than the simple addition of 'choice' and a possible reduction in performance, is there any reason to stripe using LVM instead of RAID-0?
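The LVM alternative being weighed here (the corrected Option 4: two RAID-1 mirrors with striping done at the LVM layer rather than by a RAID-0 md device) could look roughly like this. Device names, the volume group name, and the stripe size are assumptions for illustration.

```shell
# Corrected Option 4: two RAID-1 mirrors, striped at the LVM layer.
# Device names are illustrative; run as root on the real hardware.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdc1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdd1

# Both mirrors become physical volumes in one volume group.
pvcreate /dev/md0 /dev/md1
vgcreate vg0 /dev/md0 /dev/md1

# -i 2 stripes the logical volume across both PVs; -I 64 sets a 64 KiB
# stripe size (values here are placeholders, tune for your workload).
lvcreate -i 2 -I 64 -L 200G -n media vg0
```

The trade-off versus RAID-10: striped LVs must allocate extents evenly across the PVs, which constrains later resizing, whereas md RAID-0 presents one plain device that LVM can carve up freely.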
Aargh! I guess I'm doing too much thinking...
If you have the time, I would suggest setting up your arrays and running some tests (iozone and bonnie can be installed and used for benchmarking), or just perform some everyday tasks and get a feel for how it runs. Then change the setup to match each of your other options and test again. See which one works best for you.
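A minimal benchmarking run with the tools mentioned might look like the following; the mount point is an assumption, and on Sarge the bonnie package is named bonnie++.

```shell
# Install a disk benchmark (bonnie++ is the packaged successor to bonnie).
apt-get install bonnie++

# Run it against the mounted array; -d sets the test directory,
# -u sets the (non-root) user to run as. /mnt/array is a placeholder.
bonnie++ -d /mnt/array -u nobody
```

Repeating the same run on each candidate layout gives directly comparable sequential and seek figures, which matters here since the workload is mostly large files.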
If only I had that sort of time; unfortunately I need a working system pretty quickly, since the old drives are being recycled...
Thanks for the help! Chris -- Chris Boot [EMAIL PROTECTED] http://www.bootc.net/