Phil,

 

Recently, we built a large configuration on a 4-way Xeon server with 8x 4U
24-bay JBODs. We are using 2x LSI 6160 SAS switches so we can easily expand
the storage in the future.

 

1)      If you are planning to expand your storage, consider using LSI SAS
switches for easy future expansion.

2)      We carefully picked one HD from each JBOD to create each RAIDZ2
vdev, so we can lose two JBODs at the same time while the data is still
accessible. It is good to know you have the same idea.
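The one-disk-per-JBOD layout above can be sketched as a small script that builds the `zpool create` command. The `cXtYd0` device names are hypothetical placeholders (not from the post), and only 3 slots per JBOD are shown to keep the output short:

```python
# Build RAIDZ2 vdevs so each vdev takes exactly one disk from each JBOD.
# Device names are hypothetical Solaris-style placeholders.

def raidz2_vdevs(jbods):
    """jbods: one disk list per enclosure, all the same length.
    Returns one RAIDZ2 vdev per slot, each spanning every JBOD."""
    return [["raidz2"] + [jbod[slot] for jbod in jbods]
            for slot in range(len(jbods[0]))]

# 8 JBODs x 3 example slots -> 3 vdevs of 8 disks (one per JBOD)
jbods = [[f"c{j}t{s}d0" for s in range(3)] for j in range(8)]
vdevs = raidz2_vdevs(jbods)
print("zpool create tank " + " ".join(" ".join(v) for v in vdevs))
```

Losing two whole JBODs then costs each vdev exactly two disks, which RAIDZ2 tolerates.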

3)      Sequential read/write is currently limited by the 10G NIC. Local
storage can easily hit 1500 MB/s+ with even a small number of HDs. Again,
10G is the bottleneck.
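For reference, the arithmetic behind the bottleneck claim; the ~5% framing overhead is my assumption, and actual usable throughput depends on MTU and protocol:

```python
# Raw 10GbE line rate vs. the quoted local sequential throughput.
line_rate_mb_s = 10_000 / 8          # 10 Gbit/s = 1250 MB/s raw
usable_mb_s = line_rate_mb_s * 0.95  # assumed ~5% framing/TCP overhead
local_mb_s = 1500                    # local sequential rate quoted above
print(f"10GbE usable ~{usable_mb_s:.0f} MB/s < local {local_mb_s} MB/s")
```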


4)      I recommend you use native SAS HDs in a large-scale system if
possible. Native SAS HDs work better.

5)      We are using DSM to locate failed disks and monitor the FRUs of the
JBODs: http://dataonstorage.com/dsm.

 

I hope the above points help.

 

The configuration is similar to configuration 3 in the following link:

http://dataonstorage.com/dataon-solutions/lsi-6gb-sas-switch-sas6160-storage.html

 

Technical Specs:

DNS-4800 4-way Intel Xeon 7550 server with 256GB RAM

2x LSI 9200-8E HBA

2x LSI 6160 SAS Switch

8x DNS-1600 4U 24-bay JBODs (dual I/O with MPxIO) with 2TB Seagate SAS HDs in RAIDZ2

STEC ZeusRAM for ZIL

Intel 320 SSD for L2ARC   

10G NIC

 

Rocky 

 

From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Phil Harrison
Sent: Sunday, July 24, 2011 11:34 PM
To: zfs-discuss@opensolaris.org
Subject: [zfs-discuss] Large scale performance query

 

Hi All,

 

Hoping to gain some insight from people who have built large-scale systems
before. I'm hoping to get performance estimates, suggestions, and/or general
discussion/feedback. I cannot discuss the exact specifics of the purpose,
but will go into as much detail as I can.

 

Technical Specs:

216x 3TB 7k3000 HDDs

24x 9 drive RAIDZ3

4x JBOD Chassis (45 bay)

1x server (36 bay)

2x AMD 12 Core CPU

128GB ECC RAM

2x 480GB SSD Cache

10Gbit NIC

 

Workloads:

 

Mainly streaming compressed data; that is, pulling compressed data in a
sequential manner. However, multiple streams could be happening at once,
making the access pattern somewhat random. We are hoping to have 5 clients
each pull 500 Mbit sustained.
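A quick check that the target load fits the 10Gbit NIC:

```python
# Aggregate client demand: 5 clients at a sustained 500 Mbit each.
clients, per_client_mbit = 5, 500
total_mbit = clients * per_client_mbit  # 2500 Mbit/s aggregate
total_mb_s = total_mbit / 8             # 312.5 MB/s
print(f"{total_mbit} Mbit/s = {total_mb_s} MB/s, a quarter of a 10Gbit link")
```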

 

Considerations:

 

The main reason RAIDZ3 was chosen was so we can distribute the parity across
the JBOD enclosures. With this method, even if an entire JBOD enclosure is
taken offline, the data is still accessible.
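If I read the layout right (216 bays = 4x 45-bay JBODs plus the 36-bay server, with each 9-disk RAIDZ3 vdev spread as evenly as possible across the 5 chassis — my assumption, not stated explicitly), the enclosure-loss argument checks out:

```python
import math

# Worst-case number of disks any single chassis contributes to one
# 9-disk vdev when the vdev is spread evenly across 5 chassis.
disks_per_vdev, chassis, parity = 9, 5, 3
worst_per_chassis = math.ceil(disks_per_vdev / chassis)  # 2
survives = worst_per_chassis <= parity
print(f"one chassis offline costs <= {worst_per_chassis} disks per vdev; "
      f"RAIDZ3 tolerates {parity}: survives = {survives}")
```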

 

Questions:

 

How do you manage the physical locations of such a vast number of drives? I
have read this
(http://blogs.oracle.com/eschrock/entry/external_storage_enclosures_in_solaris)
and am hoping someone can shed some light on whether SES2 enclosure
identification has worked for them? (The enclosures are SES2.)

 

What kind of performance would you expect from this setup? I know we can
multiply the base IOPS by 24, but what about max sequential read/write?
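A rough first-order estimate is easy to sketch; the per-drive figures (~75 random IOPS and ~100 MB/s streaming for a 7200rpm 7k3000) are my assumptions, not vendor numbers:

```python
# Each RAIDZ3 vdev does roughly one drive's worth of random IOPS;
# sequential throughput scales with the data (non-parity) disks.
vdevs, iops_per_drive = 24, 75        # assumed 7200rpm figure
random_iops = vdevs * iops_per_drive  # ~1800 pool random IOPS
data_disks = vdevs * (9 - 3)          # 144 data disks
seq_mb_s = data_disks * 100           # assumed 100 MB/s/disk, raw aggregate
print(random_iops, seq_mb_s)
```

In practice the HBA/expander links and the 10Gbit NIC will cap sequential throughput long before the disks do.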


Thanks, 

 

Phil

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
