On Fri, Sep 4, 2009 at 5:36 AM, Marc Bevand <m.bev...@gmail.com> wrote:

> Marc Bevand <m.bevand <at> gmail.com> writes:
> >
> > So in conclusion, my SBNSWAG (scientific but not so wild-ass guess)
> > is that the max I/O throughput when reading from all the disks on
> > one of their storage pods is about 1000MB/s.
>
> Correction: the SiI3132 are on x1 (not x2) links, so my guess as to
> the aggregate throughput when reading from all the disks is:
> 3*150+100 = 550MB/s.
> (150MB/s is 60% of the max theoretical 250MB/s bandwidth of an x1 link)
>
> And if they tuned MAX_PAYLOAD_SIZE so the 3 PCI-E SATA cards could
> get closer to the max theoretical bandwidth of an x1 PCI-E link, it
> would be:
> 3*250+100 = 850MB/s.
>
> -mrb
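
(For reference, a quick Python sketch of that arithmetic; the 150MB/s
and 250MB/s per-link figures and the ~100MB/s fourth controller are
taken from Marc's assumptions above, not from measurements:)

  # Back-of-the-envelope aggregate read throughput for the pod's back end.
  X1_THEORETICAL = 250                   # MB/s, max theoretical PCI-E x1 bandwidth
  X1_PRACTICAL = 0.60 * X1_THEORETICAL   # ~150MB/s without MAX_PAYLOAD_SIZE tuning
  OTHER_CTRL = 100                       # MB/s, the fourth controller

  untuned = 3 * X1_PRACTICAL + OTHER_CTRL    # 3*150 + 100 = 550 MB/s
  tuned = 3 * X1_THEORETICAL + OTHER_CTRL    # 3*250 + 100 = 850 MB/s
  print(f"untuned: {untuned:.0f} MB/s, tuned: {tuned:.0f} MB/s")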

What's the point of arguing about what the back-end can do anyway?  This is
bulk data storage.  Their max input is ~100MB/s, and the back-end can more
than satisfy that.  Who cares at that point whether it can push 500MB/s or
5000MB/s?  It's not a database processing transactions; it only needs to
push data as fast as the front-end can deliver it.
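
(To put numbers on that, a minimal sketch: the end-to-end rate is bounded
by the slowest stage, so with the ~100MB/s front end from this thread, any
of the back-end figures discussed is already more than enough:)

  # Effective throughput is min(front end, back end); the candidate
  # back-end rates are the 550/850/5000 MB/s figures from this thread.
  FRONT_END = 100  # MB/s, the pod's network-facing ingest rate
  for back_end in (550, 850, 5000):
      print(f"back end {back_end} MB/s -> effective {min(FRONT_END, back_end)} MB/s")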

--Tim