We have the following setup configured. The drives are housed in a couple of PAC 
PS-5404s. Since these units do not support JBOD, we have configured each 
individual drive as a single-drive RAID 0 and shared out all 48 RAID 0 LUNs per 
box. This is connected to the Solaris box through a dual-port 4 Gb Emulex Fibre 
Channel card with MPIO enabled (round-robin). The drives are configured as 18 
raidz2 vdevs in one big pool. We currently have 2 zvols created, each around 
40 TB sparse (30 TB in use). These are in turn shared out using a QLogic 
QLA2462 Fibre Channel card in target mode, using both ports. One zvol is 
connected to one Windows server and the other zvol to another Windows server; 
both Windows servers have a QLogic 2462 Fibre Channel adapter, use both ports, 
and have MPIO enabled. The Windows servers are running Windows 2008 R2. The 
zvols are formatted NTFS and used as a staging area and D2D2T system for both 
CommVault and Microsoft Data Protection Manager backup solutions. The SAN 
sees mostly writes since it is used for backups.
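For anyone wanting to reproduce a similar layout, the setup above roughly corresponds to the following Solaris commands. This is only a sketch: the pool name "tank", the zvol name "staging1", and the cXtYdZ device names are placeholders I made up, only one of the 18 raidz2 vdevs is spelled out, and the COMSTAR target ports are assumed to already be configured in target mode.

```shell
# Build the pool from raidz2 vdevs (placeholder device names; the real
# pool repeats the raidz2 group until all 18 vdevs are present).
zpool create tank \
  raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0
# ...add the remaining 17 raidz2 vdevs the same way with zpool add.

# Create a sparse (thin-provisioned) 40 TB zvol; -s makes it sparse.
zfs create -s -V 40T tank/staging1

# Expose the zvol as an FC LUN via COMSTAR:
svcadm enable stmf
stmfadm create-lu /dev/zvol/rdsk/tank/staging1
stmfadm add-view <LU-GUID>   # GUID is printed by create-lu
```

With `add-view` run without host/target groups as shown, the LUN is visible to all initiators, so in practice you would scope it with `-h`/`-t` so each Windows server only sees its own zvol.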

We are using Cisco 9124 Fibre Channel switches, and we have recently upgraded 
to Cisco 10G Nexus switches on our Ethernet side. Fibre Channel support on the 
Nexus is a few years out for us due to the cost. We are just trying to 
fine-tune our SAN for the best performance possible; we don't really have any 
firm expectations right now. We are always looking to improve something. :)
-- 
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss