Thanks to everyone who replied to my question.  Your input is very helpful.

To clarify, I was concerned more with MTTDL than performance. With either the 7+2 or 10+2 layout I can achieve far more throughput than the network makes available to me. In tests run from memory on the system I can push >550MB/s to the drives, but at the moment the box only has a 1Gb/s network interface. I should be able to add another 1Gb/s link shortly, but that is still far less than I can drive the disks to. The major concern was weighing the increased probability of data loss from having more drives in each raid set against having hot spares available in the array, given a 4-6hr drive replacement window 24x7.
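To make that trade-off concrete, here is a rough sketch (Python) using the usual double-parity approximation MTTDL ~= MTTF^3 / (N*(N-1)*(N-2)*MTTR^2), where MTTR is just the resilver time when a hot spare kicks in, or the replacement window plus resilver time when a human has to swap the drive first. The drive MTTF, resilver time, and vdev counts below are placeholder assumptions for illustration, not measurements from this array:

# Rough MTTDL comparison for two raidz2 layouts, using the common
# double-parity approximation.  All figures are assumptions chosen
# only to illustrate the spare-vs-width trade-off.

HOURS_PER_YEAR = 24 * 365

def mttdl_raidz2(n_disks, mttf_hr, mttr_hr):
    # MTTDL (hours) of a single double-parity group of n_disks drives.
    return mttf_hr ** 3 / (n_disks * (n_disks - 1) * (n_disks - 2) * mttr_hr ** 2)

def pool_mttdl(vdev_width, n_vdevs, mttf_hr, mttr_hr):
    # A pool striped across n_vdevs identical vdevs is lost if any one vdev is.
    return mttdl_raidz2(vdev_width, mttf_hr, mttr_hr) / n_vdevs

mttf_hr     = 500000.0  # assumed drive MTTF
resilver_hr = 8.0       # assumed resilver time for a 500GB drive
replace_hr  = 6.0       # worst-case manual replacement window (24x7)

# 7+2 vdevs with hot spares: rebuild starts as soon as a spare kicks in.
spares   = pool_mttdl(9, 9, mttf_hr, resilver_hr)
# 10+2 vdevs without spares: rebuild waits for the drive swap, then resilvers.
nospares = pool_mttdl(12, 7, mttf_hr, replace_hr + resilver_hr)

print("7+2  x9 vdevs, hot spares: %12.0f years" % (spares / HOURS_PER_YEAR))
print("10+2 x7 vdevs, no spares : %12.0f years" % (nospares / HOURS_PER_YEAR))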

For Richard,
The drives are Seagate 500GB SATA drives (not sure of the exact model) in an EMC Clariion CX3 enclosure. There are 6 shelves of 15 drives, with each drive presented as a raw LUN to the server. They are attached to a pair of dedicated 4Gb/s fabrics.

It was interesting to test the 7+2 and 10+2 layouts with ZFS against a 3+1 hardware RAID running on the array. With hardware RAID we saw roughly a 2% performance improvement, but we figured the better MTTDL, plus ZFS's ability to detect and recover from read/write errors, was well worth the 2% difference.
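Space overhead is the other side of that comparison; a quick sketch over the 90 LUNs is below (the group counts and spare split shown are just one possible arrangement, not necessarily what we deployed):

# Usable-capacity sketch for the layouts considered, over 6 shelves of
# 15 x 500GB drives.  Group counts below are illustrative assumptions.

DRIVE_TB = 0.5
TOTAL_DRIVES = 90

def usable(data, parity, n_groups):
    # Returns (usable TB, drives consumed) for n_groups groups,
    # each data+parity drives wide.
    return data * n_groups * DRIVE_TB, (data + parity) * n_groups

for name, data, parity, groups in [("3+1 HW RAID", 3, 1, 22),
                                   ("7+2 raidz2 ", 7, 2, 9),
                                   ("10+2 raidz2", 10, 2, 7)]:
    tb, used = usable(data, parity, groups)
    print("%s: %5.1f TB usable, %2d drives left over as spares"
          % (name, tb, TOTAL_DRIVES - used))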

One last question: the link from przemol (http://sunsolve.sun.com/search/document.do?assetkey=1-9-88385-1) references a qlc.conf parameter, but we are running Emulex cards (emlxs driver). Is there similar tuning that can be done with those?

Thanks again!

-Andy