Kent Watsen wrote:

>> What are you *most* interested in for this server? Reliability? 
>> Capacity? High Performance? Reading or writing? Large contiguous reads 
>> or small seeks?
>>
>> One thing that I did that got good feedback from this list was 
>> picking apart the requirements of the most demanding workflow I 
>> imagined for the machine I was speccing out.
> My first posting contained my use-cases, but I'd say that video 
> recording/serving will dominate the disk utilization - that's why I'm 
> pushing for 4 striped sets of RAIDZ2 - I think it would be all-around 
> goodness

It sounds good that way, but in theory you'll see random I/O suffer a 
bit with RAID-Z2: the extra parity will drag performance down. The RAS 
guys will flinch at this, but have you considered 8*(2+1) RAID-Z1?
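
Purely as a sketch - the pool name and device names below are 
placeholders, and I'm assuming 24 disks split four or eight ways - the 
two layouts would look something like this:

  # placeholder pool/device names; adjust for your controllers
  # 4 striped RAID-Z2 sets of 6 disks each (4 data + 2 parity)
  zpool create tank \
      raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
      raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 \
      raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 \
      raidz2 c4t0d0 c4t1d0 c4t2d0 c4t3d0 c4t4d0 c4t5d0

  # 8 striped RAID-Z1 sets of 3 disks each (2 data + 1 parity)
  zpool create tank \
      raidz c1t0d0 c1t1d0 c1t2d0  raidz c1t3d0 c1t4d0 c1t5d0 \
      raidz c2t0d0 c2t1d0 c2t2d0  raidz c2t3d0 c2t4d0 c2t5d0 \
      raidz c3t0d0 c3t1d0 c3t2d0  raidz c3t3d0 c3t4d0 c3t5d0 \
      raidz c4t0d0 c4t1d0 c4t2d0  raidz c4t3d0 c4t4d0 c4t5d0

The second layout helps random I/O because each RAID-Z vdev delivers 
roughly one disk's worth of random IOPS, so eight vdevs beat four; the 
trade-off is that losing two disks in any one 3-disk set takes out the 
whole pool, which is why the RAS guys flinch.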

I don't want to over-pimp my links, but I do think my blogged 
experiences with my server (also linked in another thread) might give 
you something to think about:
  http://lindsay.at/blog/archive/tag/zfs-performance/

> 
>> I'm learning more and more about this subject as I test the server I 
>> now have (not all that dissimilar to what you've described, except 
>> with only 18 disks). I'm frustrated at the relative unavailability of 
>> PCIe SATA controller cards that are ZFS-friendly (i.e., JBOD), and of 
>> motherboards that support the latest CPUs while also having a good 
>> PCI-X architecture.
> Good point - another reply I just sent noted a PCI-X SATA controller 
> card, but I'd prefer a PCIe card - do you have a recommendation for a 
> PCIe card? 

Nope, but I can endorse the Supermicro card you mentioned. That's one 
component in my server I have few doubts about.

When I was kicking around possibilities on the list, I started out 
thinking about Areca's PCIe RAID controllers, used in JBOD mode. The 
on-list consensus was that they would be overkill. (Plus, there's the 
reliance on Solaris drivers from Areca.) That's true for my 
configuration: the disk I/O they could deliver far exceeds the network 
I/O I'll be dealing with.

Testing 16 disks locally, however, I do run into noticeable I/O 
bottlenecks, and I believe that's down to the upper limits of the PCI-X 
bus.
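
Rough arithmetic (theoretical peaks, and assuming ~70 MB/s sustained 
per disk, which is only a ballpark figure for current 7200rpm SATA 
drives):

  64-bit PCI-X @ 133MHz:  8 bytes x 133M transfers/s = ~1,067 MB/s
  16 disks streaming:     16 x ~70 MB/s              = ~1,120 MB/s

Sixteen disks streaming sequentially can already outrun even a 133MHz 
slot, before you account for protocol overhead.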

> As far as a mobo with "good PCI-X architecture" goes - check out the 
> latest from Tyan (http://tyan.com/product_board_detail.aspx?pid=523) 
> - it has three 133/100MHz PCI-X slots

I use a Tyan in my server, and have looked at a lot of variations, but I 
hadn't noticed that one. It has some potential.

Still, though, take a look at the block diagram on the datasheet: that 
actually looks like one 133MHz PCI-X slot plus two 100MHz slots sharing 
a bridge. My benchmarks so far show that putting a controller in a 
100MHz slot is measurably slower than in a 133MHz slot, and contention 
over a single bridge can be even worse.
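
In the same rough theoretical terms, a 64-bit slot moves 8 bytes per 
transfer, so:

  64-bit PCI-X @ 100MHz:  8 bytes x 100M transfers/s = ~800 MB/s per slot

and when both 100MHz slots sit behind one bridge, two controllers end 
up competing for that bridge's single upstream link rather than each 
getting ~800 MB/s.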

hth,
adam
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
