On Wed, Jun 25, 2008 at 10:44 AM, Bob Friesenhahn <
[EMAIL PROTECTED]> wrote:

> I see that the configuration tested in this X4500 writeup uses only
> the four built-in gigabit Ethernet interfaces.  This places a natural
> limit on the amount of data that can stream from the system.  For
> local host access, I am achieving this level of read performance using
> one StorageTek 2540 (6 mirror pairs) and a single reading process.
> The X4500 with 48 drives should be capable of far more.
>
> The X4500 has two expansion bus slots, but they are only 64-bit 133 MHz
> PCI-X, so it seems that the ability to add bandwidth via more
> interfaces is limited.  A logical improvement to the design would be to
> offer PCI-E slots which can support 10 Gbit Ethernet, InfiniBand, or
> Fibre Channel cards so that more of the internal disk bandwidth is
> available to "power user" type clients.
>
> Bob
> ======================================
> Bob Friesenhahn
> [EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
> GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/


Uhhh... 64-bit at 133 MHz works out to about 8.5 Gbit/s per slot, so
roughly 17 Gbit/s across the two slots.  I *HIGHLY* doubt that bus will
be the limit.  Without some serious offloading, you aren't pushing that
amount of bandwidth out the card anyway; most systems I've seen top out
around 6 Gbit/s with current drivers.
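
For anyone who wants to check the arithmetic, here's a quick
back-of-the-envelope sketch (plain Python; the 64-bit width, 133 MHz
clock, and two-slot count come from the quoted message, and these are
theoretical bus peaks, not what you'd actually see in practice):

    # Theoretical PCI-X peak bandwidth for the X4500's expansion slots.
    # These are bus-level maxima; protocol overhead and drivers will
    # eat a large fraction of this in practice.

    bus_width_bits = 64      # PCI-X bus width
    clock_hz = 133e6         # 133 MHz PCI-X clock
    slots = 2                # the X4500's two expansion slots

    per_slot_gbps = bus_width_bits * clock_hz / 1e9
    total_gbps = per_slot_gbps * slots

    print("per slot:   %.1f Gbit/s" % per_slot_gbps)   # ~8.5 Gbit/s
    print("both slots: %.1f Gbit/s" % total_gbps)      # ~17 Gbit/s

    # Aggregate of the four built-in gigabit Ethernet ports, for comparison.
    builtin_nics_gbps = 4 * 1.0
    print("4x GbE:     %.1f Gbit/s" % builtin_nics_gbps)  # 4.0 Gbit/s

So even at the bus's theoretical peak, the cards and drivers give out
well before the bus does.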
