Dear list,

I am in the process of speccing an OpenSolaris box for iSCSI storage of 
XenServer domUs.  I'm trying to get the best performance from a combination of 
decent SATA II disks and some SSDs and I would really appreciate some feedback 
on my plans.  I don't have much idea what the workload will be like because we 
simply haven't got any existing implementation to guide us.  All I can say is 
that the vast majority of domUs will be small Linux web servers, so I guess it 
will be largely random IO...

I was planning on using either a current development build of OpenSolaris or 
perhaps the next release version if it comes out in time.  I understand 2009.06 
has some issues that negatively affect iSCSI and/or ZFS performance?

Here is what I have in mind for the hardware:

1 x Supermicro 4U Rackmount Chassis 24 x 3.5in SAS Hot-Swap
1 x Supermicro X8ST3-F Server Board LGA1366 DDR3 SAS/SATA2 RAID IPMI GbE PCIe 
ATX MBD-X8ST3-F-O
2 x Intel dual port gigabit NICs (model to be decided)
1 x Supermicro AOC-USAS-L4i UIO RAID Adapter SAS 8-Port 16MB PCIe x8
1 x Intel Xeon E5520
6 x 2GB Registered ECC RAM = 12GB total
2 x 160GB Intel X25M MLC SSDs for ARC
2 x 32GB Intel X25-E SLC SSDs for ZIL
18 x WD 250GB RE3 7200RPM 16MB for storage (arranged as 3 x 6-disk raidz2)
2 x 250GB SATA II for rpool mirror
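To make the intended layout concrete, the pool I have in mind would look roughly like this (the pool name and all device names below are placeholders -- the real names would come from format/cfgadm on the actual box):

```shell
# Data pool: 18 disks as three 6-disk raidz2 vdevs
# (c1t*d0 names are hypothetical placeholders)
zpool create tank \
  raidz2 c1t0d0  c1t1d0  c1t2d0  c1t3d0  c1t4d0  c1t5d0 \
  raidz2 c1t6d0  c1t7d0  c1t8d0  c1t9d0  c1t10d0 c1t11d0 \
  raidz2 c1t12d0 c1t13d0 c1t14d0 c1t15d0 c1t16d0 c1t17d0

# The two X25-E SLC SSDs as a mirrored dedicated log (ZIL) device
zpool add tank log mirror c2t0d0 c2t1d0

# The two X25-M MLC SSDs as L2ARC cache devices
# (cache devices cannot be mirrored, so they are added individually)
zpool add tank cache c2t2d0 c2t3d0
```

That's just how I picture it fitting together -- corrections welcome if the log/cache arrangement is wrong-headed.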

This will sit on a dedicated gigabit ethernet storage network and the above 
gives me 4 x gigabit NICs-worth of throughput (ignoring the two NICs on the 
motherboard, which I will need for management and maybe a crossover for copying 
data to another box).

We are hoping the hardware will scale to over 50 domUs across three dom0 boxes 
but wouldn't be surprised if the network is saturated well before then, at 
which point we may have to look at 10-gigabit Ethernet.  The decision to use iSCSI 
over NFS was made primarily because we thought the dom0s would cache some and 
thus reduce the amount of data travelling over the wire.

Does this configuration look OK?

Stupid question: should I have a battery-backed SAS adapter?  The box will 
allegedly be protected by a UPS, but...

Later on, I would like to add a second, lower-spec box to continuously (or 
near-continuously) mirror the data (over a gigabit crossover cable, maybe).  I 
have seen lots of ways of mirroring data to other boxes, which has left me with 
more questions than answers.  Is there a simple, robust way of doing this 
without setting up a complex HA service, while at the same time minimising load 
on the master?
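The sort of thing I had in mind (if it's even sensible) is periodic incremental zfs send/recv on a cron interval, roughly like this -- "tank" and "backupbox" are made-up names:

```shell
# One-off seed: full replication stream of the whole pool to the
# standby box over the crossover link (names are placeholders)
zfs snapshot -r tank@rep1
zfs send -R tank@rep1 | ssh backupbox zfs recv -Fdu tank

# Then, repeated on a short cron interval: snapshot, send only the
# delta since the previous snapshot, and drop the old one so just
# the latest pair is kept on the master
zfs snapshot -r tank@rep2
zfs send -R -i tank@rep1 tank@rep2 | ssh backupbox zfs recv -Fdu tank
zfs destroy -r tank@rep1
```

As I understand it, an incremental send costs the master little more than reading the changed blocks, which would seem to answer the load concern -- but I'd be glad to hear if there's a better-trodden path.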

Thanks in advance and sorry for the barrage of questions,

Matt.
-- 
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
