Darren J Moffat wrote:
Len Zaifman wrote:
We are looking at adding to our storage. We would like ~20-30 TB.

We have ~200 nodes (1100 cores) to feed data to using NFS, and we are looking for high reliability, good performance (at least 350 MBytes/second over a 10 GigE connection), and large capacity.

For the X45xx (aka Thumper), capacity and performance seem to be there (we have 3 now). However, for system upgrades, maintenance, and failures, we have an availability problem.

For the 7xxx in a cluster configuration, we seem to be able to solve the availability issue, and perhaps get performance benefits from the SSDs.

However, the cost constrains the capacity we could afford.

If anyone has experience with both systems, or with the 7xxx system in a cluster configuration, we would be interested in hearing:

1) Does the 7xxx perform as well as, or better than, Thumpers?

Depends on which 7xxx you pick.
The 7210 (the Thumper/Thor-based Amber Road) does not support clustering, and neither does the 7110. The 7310/7410 are the clusterable solutions.

They are much more flexible in configuration than the Thumper line, as they provide disk attach via J4000-series JBODs, which can be populated with SAS or SATA drives and different SSD configurations. Frankly, you might want a 7310/7410 in any case, over a Thumper. Even with SSDs, certain workloads are far better served with SAS drives than SATA drives, and with a 7310/7410 you can easily mix both types in the same clustered setup. In my case, I'm going with SAS to serve xVM images, as they demand a very high level of random I/O which is not well served even by SSD/SATA configs.
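To put rough numbers on the SAS-vs-SATA point (these are assumed typical seek/rotational figures, not measurements from these boxes):

   7200 rpm SATA: ~4.2 ms avg rotational + ~8.5 ms avg seek ~= 12.7 ms/op -> ~80 random IOPS
   15k rpm SAS:   ~2.0 ms avg rotational + ~3.5 ms avg seek ~=  5.5 ms/op -> ~180 random IOPS

That's better than a 2x gap per spindle before you even consider queueing.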

I'd really concentrate on the 7310 - it's in your capacity band, and provides clustering and SSD support.

A note here: officially, you can't add anything other than a J4400 with SATA drives/SSDs to these things. HOWEVER, there's no technical reason not to add any J4xxx to one and populate it with any combination of SAS drives or SSDs; the software certainly has no problem with it. I'm still waiting for official support of a SAS-populated J4xxx on an A-R system.

2) Does the 7xxx failover work as expected (in test and in real life)?

Depends on what your expectations are! The time to fail over depends on how you configure the cluster, how many filesystems you have, how many disks, and so on.

Have a read over this blog entry:

http://blogs.sun.com/wesolows/entry/7000_series_takeover_and_failback

3) Does the SSD really help?

For NFS, yes, the WriteZilla (slog) really helps because of how the NFS protocol works: clients issue synchronous writes (COMMIT-heavy traffic), and a fast dedicated log device absorbs them. For ReadZilla (L2ARC), it depends on your workload.

I'm testing SLOG performance right now with iSCSI-shared xVM images. The L2ARC definitely makes a big difference here, as my VMs have a huge amount of common data which is read-mostly.
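For comparison, on a roll-your-own (Open)Solaris box (not the appliance, where the GUI/CLI manages this for you), wiring up the equivalent devices is just a couple of commands (pool and device names below are made up):

   # add a write-optimized SSD as a separate intent log (slog)
   zpool add tank log c3t0d0
   # add a read-optimized SSD as an L2ARC cache device
   zpool add tank cache c3t1d0
   # watch per-device traffic to confirm they're actually being hit
   zpool iostat -v tank 5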


4) Do the analytics help prevent and solve real problems, or are they frivolous pretty pictures?

Yes they do, at a level of detail no other storage vendor can currently provide.

I have to agree here. The A-R custom software is definitely nicer than the roll-my-own OpenSolaris-based setup I pitted the A-R against.
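Worth noting: Analytics is built on DTrace, so on a plain (Open)Solaris NFS server you can hand-roll a crude, non-graphical approximation of one of its statistics. A sketch using the nfsv3 provider (probe and member names as documented for that provider; adjust to taste):

   # distribution of NFSv3 read latency on the server, in nanoseconds
   dtrace -n '
     nfsv3:::op-read-start { ts[args[1]->noi_xid] = timestamp; }
     nfsv3:::op-read-done  /ts[args[1]->noi_xid]/ {
       @["read latency (ns)"] = quantize(timestamp - ts[args[1]->noi_xid]);
       ts[args[1]->noi_xid] = 0;
     }'

What the appliance adds is the drill-down, history, and visualization on top of that.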

5) Is the 7xxx really a black box to be managed only by the GUI?

GUI or CLI, but the CLI is *NOT* a Solaris shell; it is a CLI version of the GUI. The 7xxx is a true appliance: it happens to be built from OpenSolaris code, but it is not a Solaris/OpenSolaris install, so you can't run your own applications on it. Backups, for example, are via NDMP.
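To give a feel for it, a CLI session looks roughly like this (hostname and share names are made up, and exact contexts may vary by software release):

   $ ssh root@toaster
   toaster:> shares
   toaster:shares> select default
   toaster:shares default> ls
   ...
   toaster:shares default> select myshare
   toaster:shares default/myshare> get sharenfs

i.e. a tree of contexts mirroring the GUI, not a shell.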


I highly recommend downloading the simulator and trying it in VirtualBox/VMware:

http://www.sun.com/storage/disk_systems/unified_storage/

My biggest bitch with the A-R systems is that I can't add common X4x40-series upgrades to them (and attaching any combo of a J4xxx to one is still not officially supported). That is, I'd love to be able to add an FC HBA to one and make it act like an FC target, but so far, I'm not getting that it's a supported option. Also, while you can add a second CPU or more RAM to some of the configs, it's not really "encouraged". A-R is an appliance, and frankly, you have to live with the limited configurations it's sold in.
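(On a generic (Open)Solaris box, by contrast, COMSTAR will happily present a zvol as a target. A rough sketch, with pool/volume names made up; FC target mode additionally requires the HBA to be bound to the qlt target driver:

   # enable the COMSTAR framework
   svcadm enable stmf
   # carve out a zvol and register it as a SCSI logical unit
   zfs create -V 100g tank/lu0
   sbdadm create-lu /dev/zvol/rdsk/tank/lu0
   # expose the LU to initiators; the GUID comes from sbdadm's output
   stmfadm add-view 600144f0...

No such option on the appliance, though.)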



--
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA
