Adam:

> Hi, hope you don't mind if I make some portions of your email public in 
> a reply--I hadn't seen it come through on the list at all, so it's no 
> duplicate to me.

I don't mind at all.  I had hoped to avoid sending the list a duplicate
e-mail, although it looks like my first post never made it here.

> > I suspect that if you have a bottleneck in your system, it would be due
> > to the available bandwidth on the PCI bus.
> 
> Mm. yeah, it's what I was worried about, too (mostly through ignorance 
> of the issues), which is why I was hoping HyperTransport and PCIe were 
> going to give that data enough room on the bus.
> But after others expressed the opinion that the Areca PCIe cards were 
> overkill, I'm now looking at putting some PCI-X cards on a different 
> (probably slower) motherboard.

I dug up a copy of the S2895 block diagram and asked Bill Moore about
it.  He said that you should be able to get about 700 MB/s off of each of
the PCI-X channels, and that you only need about 100 MB/s to saturate a
GigE link.  He also observed that the RAID card you were using was
unnecessary and would probably hamper performance.  He recommended
non-RAID SATA cards based upon the Marvell chipset.

Here's the e-mail trail on this list where he discusses Marvell SATA
cards in a bit more detail:

http://mail.opensolaris.org/pipermail/zfs-discuss/2006-March/016874.html

It sounds like, if getting data from disk to the network is the concern,
you'll have plenty of bandwidth, assuming you have a reasonable controller
card.
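
As a rough sanity check on those figures (my own back-of-envelope math,
not Bill's, and both numbers are approximate):

    # Back-of-envelope check; 700 and 100 MB/s are the approximate figures above.
    gige_wire_mb_s   = 1000 / 8            # raw gigabit Ethernet: ~125 MB/s
    gige_usable_mb_s = 100                 # ~100 MB/s left after protocol overhead
    pci_x_mb_s       = 700                 # per PCI-X channel on the S2895
    print(pci_x_mb_s / gige_usable_mb_s)   # ~7x headroom per channel for one GigE link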

> > Caching isn't going to be a huge help for writes, unless there's another
> > thread reading simultaneously from the same file.
> >
> > Prefetch will definitely use the additional RAM to try to boost the
> > performance of sequential reads.  However, in the interest of full
> > disclosure, there is a pathology that we've seen where the number of
> > sequential readers exceeds the available space in the cache.  In this
> > situation, sometimes the competing prefetches for the different streams
> > will cause more temporally favorable data to be evicted from the cache
> > and performance will drop.  The workaround right now is just to disable
> > prefetch.  We're looking into more comprehensive solutions.
> 
> Interesting. So noted. I will expect to have to test thoroughly.

If you run across this problem and are willing to let me debug on your
system, shoot me an e-mail.  We've only seen this in a couple of
situations, and it was combined with another problem where we were seeing
excessive overhead in kcopyout.  It's unlikely, but possible, that you'll
hit this.
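
For reference, if you do end up needing the workaround, disabling prefetch
is normally done through the zfs_prefetch_disable tunable.  This is just a
sketch, assuming that tunable is present in the build you're running, so
double-check the name against your kernel before relying on it:

    * in /etc/system (takes effect on the next boot):
    set zfs:zfs_prefetch_disable = 1

    # or on a running system, via mdb (reverts at reboot):
    echo zfs_prefetch_disable/W0t1 | mdb -kw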

-K