additional comments below...
Adam Lindsay wrote:
> In asking about ZFS performance in streaming IO situations, the discussion
> quite quickly turned to potential bottlenecks. By coincidence, I was
> wondering about the same thing.
> Richard Elling said:
>> We know that channels, controllers, memory, network, and CPU bottlenecks
>> can and will impact actual performance, at least for large configs.
>> Modeling these bottlenecks is possible, but will require more work in
>> the tool. If you know the hardware topology, you can do a
>> back-of-the-napkin analysis, too.
> Well, I'm normally a Mac guy, so speccing server hardware is a bit of a
> revelation for me. I'm trying to come up with a ZFS storage server for a
> networked multimedia research project which hopefully has enough oomph
> to be a nice resource that outlasts the (2-year) project, but without
> breaking the bank.
> Does anyone have a clue as to where the bottlenecks are going to be with
> this:
> 16x hot-swap SATA II hard drives (plus an internal boot drive)
Be sure to check the actual bandwidth of the drives when installed in the
final location. We have been doing some studies on the impact of vibration
on performance and reliability. If your enclosure does not dampen vibrations,
then you can expect reduced performance, and it will be obvious for streaming
workloads. There was a thread about this a year or so ago regarding Thumpers,
but since then we've seen it in a number of other systems, too. There have
also been industry papers on this topic.
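One quick way to do that check once the drives are in the chassis is to time a
large sequential read from each raw device, first with the other drives idle
and then with all of them streaming at once. A minimal sketch in Python -- the
device path is only a placeholder, substitute your actual /dev/rdsk entries,
and you'll need to run it as root:

    import time

    DEVICE = "/dev/rdsk/c1t0d0s0"   # placeholder; use your actual raw device
    CHUNK = 1024 * 1024             # 1 MiB per read
    TOTAL = 1024 * 1024 * 1024      # read 1 GiB in total

    with open(DEVICE, "rb", buffering=0) as dev:
        start = time.time()
        done = 0
        while done < TOTAL:
            data = dev.read(CHUNK)
            if not data:
                break               # hit the end of the device
            done += len(data)
        elapsed = time.time() - start

    print("%.1f MB/s" % (done / (1024.0 * 1024.0) / elapsed))

If the per-drive number drops noticeably when all 16 spindles are busy,
vibration (or a shared channel) is the likely suspect.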
> Tyan S2895 (K8WE) motherboard
> Dual GigE (integral nVidia ports)
All I can add to the existing NIC comments in this thread is that Neptune kicks
ass. The GbE version is:
http://www.sun.com/products/networking/ethernet/sunx8quadgigethernet/index.xml
... but know that I don't set pricing :-0
> 2x Areca 8-port PCIe (8-lane) RAID controllers
I think this is overkill.
> 2x AMD Opteron 275 CPUs (2.2 GHz, dual-core)
This should be a good choice. For high networking loads, you can burn a lot
of cycles handling the NICs. For example, using Opterons to drive the dual
10GbE version of Neptune can consume a significant number of cores.
I don't think your workload will come close to this, however.
> 8 GiB RAM
I recommend ECC memory, not the cheap stuff... but I'm a RAS guy.
> The supplier is used to shipping Linux servers in this 3U chassis, but
> hasn't dealt with Solaris. He originally suggested 2 GiB RAM, but I hear
> things about ZFS getting RAM hungry after a while. I dug up the RAID
> controllers after a quick look on Sun's HCL, but they're pricey for
> something that's just going to give JBOD access (on the other hand, the
> bus interconnect looks to be quick).
Pretty much any SAS/SATA controller will work ok. You'll be media speed
bound, not I/O channel bound.
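To put very rough numbers on "media speed bound" (the per-drive figure is an
assumption -- current 7200 rpm SATA drives sustain somewhere around 60-75 MB/s
on the outer tracks):

    # back-of-the-napkin bandwidth comparison, all figures approximate
    drive_mb_s  = 70                  # assumed sustained media speed per drive
    drives      = 16
    media_total = drive_mb_s * drives # ~1120 MB/s aggregate media speed

    pcie_x8   = 8 * 250               # PCIe 1.x, 8 lanes: ~2000 MB/s each way
    pcix_133  = 8 * 133               # 64-bit/133 MHz PCI-X slot: ~1064 MB/s
    dual_gige = 2 * 120               # ~240 MB/s on the wire, less in practice

    print(media_total, pcie_x8, pcix_133, dual_gige)

So even a single 64-bit/133 MHz PCI-X slot (assuming that's what the board
provides) is in the same ballpark as the aggregate media speed of all 16
drives, a pair of 8-lane PCIe controllers has headroom to spare, and the dual
GigE pipe is the narrowest point by a wide margin for anything streamed over
the network. That also bears on the AOC-SAT2-MV8 question below.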
> I guess my questions are:
> - Does anyone out there have a clue where the potential bottlenecks
>   might be?
Software + cores --> handling the network and managing data integrity.
> - Is there anywhere I can save a bit of money? (For example,
>   might the SuperMicro AOC-SAT2-MV8 hanging off the PCI-X slots
>   provide enough bandwidth to the disks?)
> - If I focused on simple streaming IO, would giving the server less RAM
>   have an impact on performance?
RAM as a cache presumes two things: prefetching and data re-use. Most
likely, you won't have re-use, and prefetching only makes sense when the
disk subsystem is approximately the same speed as the network. Personally,
I'd start at 2-4 GBytes and expand as needed (this is easily measured).
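On "easily measured": one way to watch whether ZFS is actually using the RAM
you give it is to track the ARC size kstat while the streaming workload runs.
A small sketch -- the kstat name zfs:0:arcstats:size is an assumption on my
part, so verify it exists on your build before relying on it:

    import subprocess
    import time

    def arc_size_bytes():
        # kstat -p prints "module:instance:name:statistic<TAB>value"
        out = subprocess.check_output(
            ["kstat", "-p", "zfs:0:arcstats:size"]).decode()
        return int(out.split()[-1])

    while True:
        print("ARC size: %.1f MiB" % (arc_size_bytes() / (1024.0 * 1024.0)))
        time.sleep(10)

If the ARC never grows much beyond a couple of GBytes under your streaming
load, the extra DIMMs aren't buying you performance for this workload.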
> - I had assumed four cores would be better than the two faster (3.0GHz)
>   single-core processors the vendor originally suggested. Agree?
Yes, lacking further data.
-- richard
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss