Hi Guy,

On Fri, Feb 04, 2005 at 11:03:31AM -0600, Guy Helmer wrote:
> A while back, Maxim Konovalov made a commit to usr.sbin/ngctl/main.c to 
> increase its socket receive buffer size to help 'ngctl list' handle a 
> large number of nodes, and Ruslan Ermilov responded that setting the sysctls 
> net.graph.recvspace=200000 and net.graph.maxdgram=200000 was a good idea 
> on a system with a large number of nodes.
> 
> I'm getting what I consider to be sub-par performance under FreeBSD 5.3 
> from a userland program that uses ngsockets connected into ng_tee to 
> manipulate packets traversing an ng_bridge, and I finally have an 
> opportunity to look into this.  I say "sub-par" because when we tested 
> this configuration using three 2.8GHz Xeon machines with Gigabit 
> Ethernet interfaces at 1000Mbps full-duplex, we obtained a peak 
> throughput of about 12MB/sec for a single TCP stream through the 
> bridging machine, as measured by NetPIPE and netperf.
> 
The bottleneck must be in ng_tee(4) -- it uses m_dup(9) whenever a
duplicate is needed, which is very expensive because it has to create
a writable copy of the entire mbuf chain (the original chain is DMA'ed
into host memory by the network card).
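
Roughly, the difference looks like this -- a minimal kernel-context
sketch, not a patch, and the function names are only illustrative;
m_dup(9) is what ng_tee(4) uses today, m_copypacket(9) is the cheaper
reference-counted alternative:

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/mbuf.h>

    /*
     * Deep copy: walks the whole chain and copies every byte into
     * freshly allocated, writable mbufs/clusters -- expensive for
     * full-size GigE frames that were just DMA'ed into clusters.
     */
    static struct mbuf *
    dup_for_tee(struct mbuf *m)
    {
            return (m_dup(m, M_DONTWAIT)); /* M_NOWAIT on newer kernels */
    }

    /*
     * Shallow copy: m_copypacket(9) bumps the reference counts on the
     * underlying clusters instead of copying the data, so the cost is
     * (almost) independent of the packet size.
     */
    static struct mbuf *
    copy_for_tee(struct mbuf *m)
    {
            return (m_copypacket(m, M_DONTWAIT));
    }

The catch is the read-only restriction: a reference-counted copy only
helps if the consumer does not modify the packet data in place.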

> I'm wondering if bumping the recvspace should help, if changing the 
> ngsocket hook to queue incoming data should help, if it would be best to 
> replace ngsocket with a memory-mapped interface, or if anyone has any 
> other ideas that would help performance.
> 
If you absolutely need to see *all* GigE traffic in userland, then
it's going to be troublesome.  If not, filter it with ng_bpf(4).
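
For example, something along these lines loads a filter into an
ng_bpf(4) node from userland (a sketch only: the node name
"tee_filter", the hook names "in"/"match", and the "tcp port 80"
filter expression are made up for illustration; build with
-lnetgraph -lpcap):

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <net/bpf.h>
    #include <netgraph.h>
    #include <netgraph/ng_bpf.h>
    #include <pcap.h>
    #include <err.h>
    #include <stdlib.h>
    #include <string.h>

    int
    main(void)
    {
            struct bpf_program prog;
            struct ng_bpf_hookprog *hp;
            pcap_t *p;
            size_t len;
            int cs, ds;

            /*
             * Control and data sockets for talking to netgraph
             * (the data socket is unused in this sketch).
             */
            if (NgMkSockNode(NULL, &cs, &ds) < 0)
                    err(1, "NgMkSockNode");

            /* Compile a classic BPF filter; DLT_EN10MB assumes
               Ethernet framing on the hook. */
            p = pcap_open_dead(DLT_EN10MB, 2048);
            if (p == NULL || pcap_compile(p, &prog, "tcp port 80", 1, 0) < 0)
                    errx(1, "pcap_compile failed");

            /* Build the NGM_BPF_SET_PROGRAM argument: matching packets
               leave via "match"; non-matching packets are dropped
               because ifNotMatch is left empty. */
            len = sizeof(*hp) + prog.bf_len * sizeof(struct bpf_insn);
            if ((hp = calloc(1, len)) == NULL)
                    err(1, "calloc");
            strlcpy(hp->thisHook, "in", sizeof(hp->thisHook));
            strlcpy(hp->ifMatch, "match", sizeof(hp->ifMatch));
            hp->bpf_prog_len = prog.bf_len;
            memcpy(hp->bpf_prog, prog.bf_insns,
                prog.bf_len * sizeof(struct bpf_insn));

            /* "tee_filter" is whatever name you gave the ng_bpf node. */
            if (NgSendMsg(cs, "tee_filter:", NGM_BPF_COOKIE,
                NGM_BPF_SET_PROGRAM, hp, len) < 0)
                    err(1, "NgSendMsg(NGM_BPF_SET_PROGRAM)");

            free(hp);
            pcap_close(p);
            return (0);
    }

That way only the packets you actually care about ever cross the
ng_socket hook into userland.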


Cheers,
-- 
Ruslan Ermilov
[EMAIL PROTECTED]
FreeBSD committer
