> I have a Sun SE 3511 array with 5 x 500 GB SATA-I disks in a RAID 5. This
> 2 TB logical drive is partitioned into 10 x 200 GB slices. I gave 4 of
> these slices to a Solaris 10 U2 machine and added each of them to a
> concat (non-raid) zpool as listed below:

This is certainly a supportable configuration.  However, it's not an optimal 
one.

You think you have a 'concat' structure, but it's actually striped/RAID-0, 
because ZFS implicitly stripes across all of its top-level vdevs (your 
slices, in this case). Since each 200 GB slice, together with its share of 
parity, occupies roughly a 50 GB band on every one of the five spindles, ZFS 
will constantly be writing data to addresses around 0, 50 GB, 100 GB, and 
150 GB of each disk (presuming the first four slices are the ones you used). 
That will keep the disk arms constantly in motion, which isn't good for 
performance.
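
Just to illustrate (the pool and device names below are invented, not taken 
from your setup), a pool built from four slices like yours shows each slice 
as its own top-level vdev directly under the pool name, and ZFS dynamically 
stripes new block allocations across all of them:

   # zpool status tank
     pool: tank
    state: ONLINE
   config:

           NAME         STATE     READ WRITE CKSUM
           tank         ONLINE       0     0     0
             c2t40d0s0  ONLINE       0     0     0
             c2t40d0s1  ONLINE       0     0     0
             c2t40d0s2  ONLINE       0     0     0
             c2t40d0s3  ONLINE       0     0     0

   errors: No known data errors

Because the slices sit at the top level rather than under a mirror or raidz 
entry, this is a dynamic stripe, not a concatenation.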

> do you think my zfs configuration caused the drive failure?

I doubt it. I haven't investigated which disks ship in the 3511, but I would 
presume they are "enterprise-class" ATA drives, which can handle this kind of 
head motion. (Standard ATA disks can overheat under a seek-heavy load.) Then 
again, the 3511 is marketed as a "near-line" rather than an "on-line" array 
... though that may simply be because the SATA drives don't perform as well 
as FC drives.

I do see this note in the 3511 documentation: "Note - Do not use a Sun StorEdge 
3511 SATA array to store single instances of data. It is more suitable for use 
in configurations where the array has a backup or archival role."

(I too am curious -- why do you consider yourself down? You've got a RAID 5 
with one disk down; are you just worried about your current lack of 
redundancy? [I would be.] Will you be adding a hot spare?)

Anton
 
 