On Wed, Sep 02, 2009 at 02:54:42PM -0400, Jacob Ritorto wrote:
> Torrey McMahon wrote:
>
>> 3) Performance isn't going to be that great with their design 
>> but...they might not need it.
>
>
> Would you be able to qualify this assertion?  Thinking through it a bit,  
> even if the disks are better than average and can achieve 1000Mb/s each,  
> each uplink from the multiplier to the controller will still have  
> 1000Gb/s to spare in the slowest SATA mode out there.  With (5) disks  
> per multiplier * (2) multipliers * 1000GB/s each, that's 10000Gb/s at  
> the PCI-e interface, which approximately coincides with a meager 4x  
> PCI-e slot.

Let's look at the math.  First, I don't know how 5 * 2 * 1000GB/s equals
10000Gb/s, or how a 4x PCIe-gen1 slot, which can't really push a 10Gb/s
Ethernet NIC, can do 1000x that.
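
For what it's worth, here's the quoted arithmetic redone in one set of
units as a quick Python sketch.  The ~125MB/s per-drive figure is just
the quoted 1000Mb/s converted, not a measurement:

# Redo the quoted math in consistent units (MB/s), assuming the
# poster's "1000Mb/s per disk" means megabits per second.

MB_PER_MBIT = 1.0 / 8

per_drive = 1000 * MB_PER_MBIT        # ~125 MB/s per drive (quoted figure)
drives = 5 * 2                        # 5 disks per PM * 2 PMs

aggregate_mb_s = drives * per_drive   # 1250 MB/s total
aggregate_gb_s = aggregate_mb_s * 8 / 1000

print(aggregate_mb_s, aggregate_gb_s) # 1250.0 MB/s == 10.0 Gb/s, not 10000 Gb/s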

Moving on, modern high-capacity SATA drives are in the 100-120MB/s
range.  Let's call it 125MB/s for easier math.  A 5-port port multiplier
(PM) has 5 links to the drives, and 1 uplink.  SATA-II speed is 3Gb/s,
which, after all the framing overhead, can get you 300MB/s on a good day.
So 3 drives can more than saturate a PM's uplink.  45 disks (9
backplanes at 5 disks + PM each) in the box won't get you more than
about 21 drives' worth of performance, tops.  So you leave at least
half the available drive bandwidth on the table, in the best of
circumstances.  That also assumes the SiI controllers can push 100% of
the bandwidth coming into them: 300MB/s * 2 ports = 600MB/s, which is
getting close to what a 4x PCIe-gen1 slot can actually deliver.
Frankly, I'd be surprised.  And the card that uses 3 of the 4 ports has
to do more like 900MB/s, which is more than 4x PCIe-gen1 can pull off
in the real world.
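
For anyone who wants to play with the numbers, here's the same
arithmetic as a throwaway Python sketch.  These are the round figures
from above, and the 2-port/3-port split per card is just how the setup
was described, nothing measured:

# Rough sketch of the 45-drive box, using the round numbers above.

drive_mb_s   = 125   # per-drive streaming throughput, MB/s
pm_uplink    = 300   # usable SATA-II uplink per port multiplier, MB/s
disks_per_pm = 5
backplanes   = 9     # 9 backplanes * 5 disks = 45 drives

per_pm = min(disks_per_pm * drive_mb_s, pm_uplink)  # 300 MB/s, uplink-bound
box_total = backplanes * per_pm                     # 2700 MB/s for the box

effective_drives = box_total / drive_mb_s           # ~21.6 of 45 drives
idle_fraction = 1 - effective_drives / (backplanes * disks_per_pm)

print(box_total, effective_drives, idle_fraction)   # 2700 MB/s, 21.6, ~52% idle

# Per-controller load, assuming one card drives 2 PM uplinks and the
# other uses 3 of its 4 ports, as described above.
print(2 * per_pm)  # 600 MB/s
print(3 * per_pm)  # 900 MB/s, more than a 4x PCIe-gen1 link does in practice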

And I'd reiterate what I and others have observed over the years about
SiI controllers and silent data corruption.

Most of your data, most of the time, it would seem.



--Bill
