On Fri, May 28, 2010 at 00:56, Marc Bevand <m.bev...@gmail.com> wrote:
> Giovanni Tirloni <gtirloni <at> sysdroid.com> writes:
>>
>> The chassis has 4 columns of 6 disks. The 18 disks I was testing were
>> all on columns #1 #2 #3.
>
> Good, so this confirms my estimations. I know you said the current
> ~810 MB/s are amply sufficient for your needs. Spreading the 18 drives
> across all 4 port multipliers
The Supermicro SC846E1 cases don't contain multiple SATA port
multipliers; they contain a single SAS expander, which shares
bandwidth among the controllers and drives, so no column- or row-based
limitations should be present.

That backplane has two SFF-8087 ports, IIRC: one labeled for the host,
and one for a downstream chassis.  I don't think there's actually any
physical or logical difference between the upstream and downstream
ports, so you might consider connecting two cables (ideally from two
SAS controllers, with multipath) and seeing whether that goes any
faster.

Giovanni: When you say you saturated the system with a RAID-0 device,
what do you mean?  I think the suggested benchmark (read from all the
disks independently, using dd or some other sequential-transfer
mechanism like vdbench) would be more interesting in terms of finding
the limiting bus bandwidth than a ZFS-based or hardware-raid-based
benchmark.  Inter-disk synchronization and checksums and such can put
a damper on ZFS performance, so simple read-sequentially-from-disk can
often deliver surprising results.
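To make the suggestion concrete, here's a minimal sketch of that kind of
benchmark: fire off one sequential dd read per disk in parallel and sum the
per-disk rates. The device paths in the comment are assumptions; substitute
your actual c#t#d# devices.

```shell
#!/bin/sh
# read_bench: launch one sequential dd read per named device, in parallel.
# Each dd prints its own transfer-rate summary on completion; the aggregate
# throughput is roughly the sum of the per-device figures.
read_bench() {
    for d in "$@"; do
        # Read 1 GB (or to EOF) from each device, discarding the data.
        dd if="$d" of=/dev/null bs=1024k count=1024 &
    done
    wait  # let all background reads finish before returning
}

# Example invocation (device names are assumptions -- use your own paths):
# read_bench /dev/rdsk/c1t0d0s0 /dev/rdsk/c1t1d0s0 /dev/rdsk/c1t2d0s0
```

If the summed rate plateaus as you add disks, the shared bus (expander uplink,
HBA, or PCIe slot) is the likely ceiling rather than the disks themselves.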

Note that such results aren't always useful: after all, the goal is to
run ZFS on the hardware, not dd!  But they can indicate whether a
certain component of the system is or is not to blame.

Will
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss