> My problem is that LVM2 is not supported in parted which is the
> recommended tool to deal with this.
> 
> I suspect I only need to map the individual PE to a particular start
> sector on each drive, not btrfs, but then there is stripe/block sizes to
> consider as well ... WD also are recommending 1mb sector boundaries for
> best performance - I can see a reinstall coming up :)
>
 
I have on my workstation:
    2 WD 2TB Black Drives
    5 WD 2TB RE4 Drives

Some notes:
- The Black drives have horrible reliability, poor sector remapping, and certain 
standard desktop-drive features that make them unusable in RAID.  I would not 
buy them again. I'm not sure how similar the Green drives are.
- Many of the recent WD drives have a tendency to power down/up frequently, which 
can reduce drive lifetime (research this and make sure the idle/power-down timer 
is set appropriately for your needs).
- Due to reliability concerns, you may need to run smartd to get adequate 
pre-failure warnings (see the sketch just after this list).
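
A rough sketch of both checks (device names, test schedule, and mail address are 
just placeholders, adjust for your own setup):

     # how often has the drive been parking its heads? (attribute 193, if reported)
     smartctl -A /dev/sdd | grep -i load_cycle

     # /etc/smartd.conf: monitor everything, short self-test daily at 2am,
     # long self-test Saturdays at 3am, mail warnings to root
     /dev/sdd -a -o on -S on -s (S/../.././02|L/../../6/03) -m root
     /dev/sdf -a -o on -S on -s (S/../.././02|L/../../6/03) -m root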

Anyhow, in my config I have:

1 RE4 Drive as Server Boot Disk
4 RE4 Drives in SW RAID10 (extremely good performance and reliability)
2 Black Drives in LVM RAID0 for disk-to-disk backups (that's about all I trust 
them with).

When I setup the LVM RAID0, I used the following commands to get good 
performance:
     fdisk (remove all partitions, you don't need them for lvm)
     pvcreate --dataalignmentoffset 7s /dev/sdd
     pvcreate --dataalignmentoffset 7s /dev/sdf
     vgcreate -s 64M -M 2 vgArchive /dev/sdd /dev/sdf
     lvcreate -i 2 -l 100%FREE -I 256 -n lvArchive -r auto vgArchive
     mkfs.ext4 -c -b 4096 -E stride=64,stripe_width=128 -j -i 1048576 \
         -L /archive /dev/vgArchive/lvArchive
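
If you want to sanity-check the result, something like this should show whether 
the data area and stripes ended up where you expect (a quick sketch, device and 
LV names as above):

     # where does the first physical extent actually start on each PV (in sectors)?
     pvs -o +pe_start --units s /dev/sdd /dev/sdf

     # confirm the stripe count and stripe size the LV ended up with
     lvdisplay -m /dev/vgArchive/lvArchive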

I may have the ext4 stride/stripe settings wrong above; I didn't have my normal 
notes when I selected them.  The rest of the config I scrounged from other blogs 
and it seemed to make sense (the --dataalignmentoffset 7s seems to be the key).
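
For what it's worth, the usual rule of thumb works out to the same numbers 
(treat this as a back-of-the-envelope check, not gospel):

     # stride       = LVM stripe size per disk / fs block size = 256KiB / 4KiB = 64
     # stripe_width = stride * number of striped disks         = 64 * 2        = 128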

My RAID10 drives are configured slightly differently, with a single partition 
that starts on sector 2048 (if I remember correctly) and extends to the end of 
the drive.
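
If you're setting that up from scratch, something along these lines gives you 
1MiB-aligned partitions and the array (a sketch only; /dev/sd[b-e] and md0 are 
placeholders, not my actual device names):

     # one partition per drive, starting at sector 2048 (1MiB) for alignment
     parted -s /dev/sdb mklabel gpt
     parted -s /dev/sdb mkpart primary 1MiB 100%
     # ...repeat for the other three drives...

     # assemble the 4-disk software RAID10
     mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[b-e]1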

The 4-disk SW RAID10 array gives me 255MB/s reads, 135MB/s block writes, and 
98MB/s rewrites (old test; may need to rerun after the latest changes).

The LVM 2-disk RAID0 gives 303MB/s reads, 190MB/s block writes, and 102MB/s 
rewrites (test run last week).
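
Those read/block-write/rewrite figures are the kind of thing bonnie++ reports, 
so if you want to compare on your own hardware, a rough invocation (mount point 
and file size are just examples) would be:

     bonnie++ -d /archive -s 16G -n 0 -u root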

Regards,
Matt
-- 
Matthew Marlowe    /  858-400-7430  /    DeployLinux Consulting, Inc
  Professional Linux Hosting and Systems Administration Services
              www.deploylinux.net   *   m...@deploylinux.net
                             'MattM' @ irc.freenode.net
       
