On 06/03/13 09:49 PM, Richard Hector wrote:
On 07/03/13 07:37, Dick Thomas wrote:
What is the best way to set up a RAID 5 array (4× 2TB drives)?
I'd avoid it, if possible.

If you lose a disk from a 2-disk raid1, you're back to the reliability
of a single disk.

If you lose a disk from your 4-disk raid5, then you've got the
reliability of a 3-disk raid0 - i.e. if any of the 3 remaining disks
dies, you're hosed - 3 times the chance of failure. And if all the
disks are from the same batch as the one that failed, and have had
the same usage ...
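Richard's "3 times the chance" can be put in rough numbers. A minimal sketch, assuming independent failures at a made-up illustrative per-disk probability for the exposure window (same-batch disks are correlated, so reality is worse than this):

```python
# Rough model of losing the array during the window after one disk has
# already failed. p is an illustrative per-disk failure probability for
# that window; failures are assumed independent (optimistic for disks
# from the same batch with the same usage).

def degraded_loss_probability(remaining_disks: int, p: float) -> float:
    """Probability that at least one of the remaining disks fails,
    which kills a degraded array with no redundancy left."""
    return 1 - (1 - p) ** remaining_disks

p = 0.01  # made-up 1% per-disk chance during the window
print(degraded_loss_probability(1, p))  # degraded 2-disk raid1: 0.01
print(degraded_loss_probability(3, p))  # degraded 4-disk raid5: ~0.0297, roughly 3x
```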

I'd rather have the single disks, or perhaps 2 single disks and a raid1,
split according to how valuable the data is.

But of course that's all up to your judgement :-)

Richard

The issue is the probability of failure of a second drive when the array is vulnerable after the failure of one drive. Given that all modern drives have SMART capability, you can normally detect a faulty drive long before it fails. The chances of a second failure during the rebuild are small.
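On Debian the usual way to act on SMART is smartmontools' smartd daemon, which can mail you when a drive starts reporting trouble. A minimal sketch of an /etc/smartd.conf line (the mail address is a placeholder; check smartd.conf(5) for your setup):

```shell
# /etc/smartd.conf - minimal sketch; DEVICESCAN monitors all
# SMART-capable disks it finds.
#   -a  monitor all SMART attributes and health status
#   -m  mail this address when a problem is detected
#   -s  self-test schedule: short test daily at 02:00,
#       long test Saturdays at 03:00
DEVICESCAN -a -m root -s (S/../.././02|L/../../6/03)
```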

The larger problem is having a defective array that goes undetected. That's why mdadm is normally configured to check the array for errors periodically.
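On Debian the mdadm package sets this periodic check up via cron; the shipped job is along these lines (paraphrased, see /etc/cron.d/mdadm on your system for the exact entry):

```shell
# /etc/cron.d/mdadm - run a read check over all arrays in the small
# hours of the first Sunday of the month, at idle I/O priority.
57 0 * * 0 root if [ -x /usr/share/mdadm/checkarray ] && [ $(date +\%d) -le 7 ]; then /usr/share/mdadm/checkarray --cron --all --idle --quiet; fi
```

You can also trigger a check by hand with `echo check > /sys/block/md0/md/sync_action` (substitute your md device) and watch progress in /proc/mdstat.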

RAID 6 only takes one more drive and removes even these small failure windows. RAID 1 simply uses too much hardware for the slight increase in reliability it gives relative to RAID 5. If you're super concerned about reliability, go to RAID 6.
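For comparison, creating the RAID 6 equivalent with mdadm might look like the sketch below (the device names are hypothetical; five 2TB disks in RAID 6 give the same 6TB usable as four in RAID 5, but survive any two failures):

```shell
# Hypothetical devices sdb..sdf - substitute your own disks.
mdadm --create /dev/md0 --level=6 --raid-devices=5 /dev/sd[bcdef]
cat /proc/mdstat                                  # watch the initial sync
mkfs.ext4 /dev/md0
mdadm --detail --scan >> /etc/mdadm/mdadm.conf    # persist across reboots
```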

The other thing to recognize is that RAID is not backup. Most data loss takes place through human error, not hardware failure. A good backup system is your ultimate guard against data loss. RAID is simply there to keep the hardware running between backups.


Archive: http://lists.debian.org/51381ffe.3070...@rogers.com
