Hi, I recently (last week) migrated from 10 x 1TB drives to an Adaptec 51645 with 5 x 2TB drives.
In my experience, I couldn't get grub2 and the Adaptec to work together, so I am booting from an SSD I had. I carved the 5 x 2TB up into a 32GB RAID 1E volume (mirrored stripe + parity) to boot from, mirrored against my SSD. The rest went into a RAID 5 until I moved my data over; that took around 18 hours. My data was originally in VGs on a PV on my mdadm RAID 6. I made the Adaptec 5TB volume into a PV, added it to the VG, and then did a pvmove - that took time :)

Next I went and got 2 more 2TB drives and did an online upgrade from 5 x 2TB RAID 5 to 7 x 2TB RAID 6, now sitting at 9TB - this took about a week to resettle.

Other quirks: I had to use parted to put a GPT partition table on the drive, since it was over the size limit for MBR. I had a bit of a scare when I resized my PV's partition with parted: parted commits each command as soon as it's typed. I had to delete my PV partition and then recreate it - the same as deleting and recreating a partition with fdisk, except that with fdisk nothing really happens until write time... 5TB of data potentially gone... I could not use parted's resize command, as it did not understand LVM PVs. But all is okay now - resized and ready.

I chose RAID 6 because it's just another drive, and I value my data more than another drive. I also have 3 x 1TB in the box in a RAID 1E setup, which is a stripe / mirror / parity layout. I don't use a battery backup unit on the controller; I have a UPS attached, which can run the machine for 40 minutes on battery.

Note: I also back up all my data to another server close by, but in another building, and all the important stuff gets backed up off site. I use rdiff-backup.

Alex

On Tue, Apr 27, 2010 at 2:11 PM, Tim Clewlow <t...@clewlow.org> wrote:
>
>> I don't know what your requirements / levels of paranoia are, but
>> RAID 5 is probably better than RAID 6 until you are up to 6 or 7
>> drives; the chance of a double failure in a 5 (or less) drive
>> array is minuscule.
>>
> I currently have 3 TB of data with another 1TB on its way fairly
> soon, so 4 drives will become 5 quite soon.
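P.S. For anyone wanting to do the same migration, here is a rough sketch of the steps above as commands. All device, partition, and VG names (/dev/sda, vg0, /dev/md0, /dev/md1) are placeholders, not my actual setup, and note my RAID5-to-RAID6 grow was done on the Adaptec controller, not mdadm; the mdadm equivalent is shown for reference. Don't paste this blindly - pvmove and mdadm --grow are long-running and touch live data.

```shell
# 1. Put a GPT label on the new >2TB array device (MBR tops out at 2TiB),
#    then create one partition spanning the disk:
parted /dev/sda mklabel gpt
parted -a optimal /dev/sda mkpart primary 0% 100%

# 2. Turn the new partition into a PV, add it to the existing VG,
#    and migrate all extents off the old mdadm PV while online:
pvcreate /dev/sda1
vgextend vg0 /dev/sda1
pvmove /dev/md0          # moves every extent off /dev/md0; can take many hours
vgreduce vg0 /dev/md0    # finally drop the old PV from the VG

# 3. mdadm equivalent of growing a 5-disk RAID 5 into a 7-disk RAID 6
#    online (add the two new disks first, then reshape):
mdadm --add /dev/md1 /dev/sdf1 /dev/sdg1
mdadm --grow /dev/md1 --level=6 --raid-devices=7 \
      --backup-file=/root/md1-grow.backup
```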
> Also, I have read that a common rating of drive failure is an
> unrecoverable read rate of 1 bit in 10^14 - that is 1 bit in every
> 10TB. While doing a rebuild across 4 or 5 drives that would mean it
> is likely to hit an unrecoverable read. With RAID 5 (no redundancy
> during rebuild due to failed drive) that would be game over. Is
> this correct?
>
> Tim.
>
> --
> To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org
> with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org
> Archive: http://lists.debian.org/706fc98e51cb5ceddd4e32ea1bc05cc3.squir...@192.168.1.100

--
To UNSUBSCRIBE, email to debian-user-requ...@lists.debian.org
with a subject of "unsubscribe". Trouble? Contact listmas...@lists.debian.org
Archive: http://lists.debian.org/x2i836a6dcf1004270256v9bf62c2bh9fa70af4907fe...@mail.gmail.com
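P.P.S. On Tim's unrecoverable-read-error arithmetic: 1 error per 10^14 bits is one per roughly 12.5 TB read, so a RAID 5 rebuild that has to read every bit of the surviving drives really does face a substantial chance of hitting one. A back-of-envelope check, under the (optimistic) assumption that bit errors are independent and exactly at the datasheet rate:

```shell
# P(at least one URE) while reading 4 surviving 2TB drives during a
# RAID 5 rebuild: bits read = 4 * 2e12 bytes * 8 = 6.4e13 bits.
awk 'BEGIN {
  bits_read = 4 * 2e12 * 8                 # 4 surviving 2TB drives, in bits
  rate      = 1e-14                        # URE probability per bit read
  p_hit     = 1 - exp(-bits_read * rate)   # Poisson approx: P(>=1 URE)
  printf "P(URE during rebuild) = %.2f\n", p_hit
}'
# → P(URE during rebuild) = 0.47
```

So "likely" is about right at this scale; in practice real drives often do better than the datasheet number, but it is not a risk to wave away.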