Richard Elling wrote:

Does anyone have a clue as to where the bottlenecks are going to be with this:

16x hot swap SATAII hard drives (plus an internal boot drive)

Be sure to check the actual bandwidth of the drives when installed in the
final location.  We have been doing some studies on the impact of vibration
on performance and reliability. If your enclosure does not dampen vibration, you will see reduced performance, and it will be most obvious for streaming workloads. There was a thread about this a year or so ago regarding thumpers,
but since then we've seen it in a number of other systems, too.  There have
also been industry papers on this topic.
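A quick way to do that check once the drives are in the chassis is to time a large sequential read from each raw device. A minimal sketch (the device path, chunk size, and total read size are placeholders; on Solaris use the raw rdsk path, and read enough data to get past any cache effects):

```python
import time

def throughput_mb_s(nbytes, seconds):
    """Convert a byte count and elapsed time to MB/s (1 MB = 2**20 bytes)."""
    return nbytes / (1024 * 1024) / seconds

def measure_streaming_read(device, chunk=1024 * 1024, total=256 * 1024 * 1024):
    """Time a sequential read from a raw device and return MB/s.

    'device' is a hypothetical path -- substitute your own disk.
    """
    read = 0
    start = time.monotonic()
    with open(device, "rb", buffering=0) as f:
        while read < total:
            data = f.read(chunk)
            if not data:
                break
            read += len(data)
    return throughput_mb_s(read, time.monotonic() - start)

# Example invocation (hypothetical device path):
# print(measure_streaming_read("/dev/rdsk/c0t1d0s2"))
```

Run it per-drive with the box in its final rack position and compare against the drive's spec sheet; a vibration problem typically shows up as drives that bench fine on the desk but come in well under media speed in the enclosure.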

Okay. We have a number of chassis installed here from the same source, but none of them runs a similarly high-throughput workload, so that's one thing to quiz the integrator about.

Tyan S2895 (K8WE) motherboard
Dual GigE (integral nVidia ports)

All I can add to the existing NIC comments in this thread is that Neptune kicks
ass.  The GbE version is:
http://www.sun.com/products/networking/ethernet/sunx8quadgigethernet/index.xml
... but know that I don't set pricing :-0

Oh, man, I didn't need to know about that NIC. Actually, it's something to shoot for.

2x Areca 8-port PCIe (8-lane) RAID controllers

I think this is overkill.

I'm getting convinced of that. With the additional comments in this thread, I'm now seriously considering replacing these PCIe cards with Supermicro's PCI-X cards, and switching over to a different Tyan board...

- 2x SuperMicro AOC-SAT2-MV8 PCI-X SATA2 interfaces
- Tyan S2892 (K8SE) motherboard, so that ditches nvidia for:
- Dual GigE (integral Broadcom ports)

2x AMD Opteron 275 CPUs (2.2GHz, dual core)

This should be a good choice. For high networking loads, you can burn a lot
of cycles handling the NICs.  For example, using Opterons to drive the dual-port
10GbE version of Neptune can consume a significant number of cores.
I don't think your workload will come close to this, however.

No, but it's something to shoot for. :)

8 GiB RAM

I recommend ECC memory, not the cheap stuff... but I'm a RAS guy.

So noted.

Pretty much any SAS/SATA controller will work ok.  You'll be media speed
bound, not I/O channel bound.
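A back-of-the-envelope check supports that. With rough, assumed numbers (a 7200rpm SATA drive's sustained media rate and nominal bus bandwidths, not measurements):

```python
# Rough, assumed figures -- adjust to your actual hardware.
DRIVE_STREAM_MB_S = 70    # sustained media rate of one 7200rpm SATA drive
N_DRIVES = 16
PCIE_X8_MB_S = 2000       # PCIe 1.x x8: roughly 2 GB/s per direction
PCI_X_133_MB_S = 1067     # PCI-X 64-bit/133MHz theoretical peak

aggregate = DRIVE_STREAM_MB_S * N_DRIVES  # all 16 drives streaming at once
per_card = aggregate // 2                 # drives split across two 8-port cards

print(aggregate)   # 1120 MB/s total from the media
print(per_card)    # 560 MB/s per card
print(per_card < PCI_X_133_MB_S and aggregate < PCIE_X8_MB_S)  # True
```

Either way the buses have headroom: the spinning media saturates first, which is why the channel choice matters less than it looks.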

Okay, that message is coming through.

RAM as a cache presumes two things: prefetching and data re-use.  Most
likely, you won't have re-use and prefetching only makes sense when the
disk subsystem is approximately the same speed as the network.  Personally,
I'd start at 2-4 GBytes and expand as needed (this is easily measured).
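For the "easily measured" part: on Solaris the ARC size is exposed through kstat. A hedged sketch that shells out to kstat(1M) and parses the `-p` output (the `zfs:0:arcstats:size` statistic name is the standard arcstats module; this only runs on a Solaris-family system):

```python
import subprocess

def parse_kstat_value(output):
    """Parse a `kstat -p` line of the form 'module:inst:name:stat<TAB>value'."""
    return int(output.strip().split()[-1])

def arc_size_bytes():
    """Return the current ZFS ARC size in bytes (Solaris/OpenSolaris only)."""
    out = subprocess.run(
        ["kstat", "-p", "zfs:0:arcstats:size"],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_kstat_value(out)

# Sample `kstat -p zfs:0:arcstats:size` output line:
# zfs:0:arcstats:size     2147483648
```

Watch that number under your real workload; if the ARC never grows to fill what you've installed, more RAM won't buy you anything.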

I'll start with 4 GBytes, because I like to deploy services in containers and so will need some elbow room.

Many thanks to all in this thread: my spec has certainly evolved, and I hope the machine has gotten cheaper in the process, with little sacrifice in theoretical performance.

adam
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
