Bruce Evans wrote:
On Mon, 7 Jul 2008, Andre Oppermann wrote:
Paul,
To get a systematic analysis of the performance, please run the following
tests and put the results into a table for easy comparison:
1. inbound pps w/o loss with interface in monitor mode (ifconfig em0
monitor; see the pps-sampling sketch below)
...
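For reference, here is a minimal sketch of one way to sample the inbound
pps from user space while ttcp runs. This is not code from the thread:
the program layout and the em0 default are my assumptions. It just reads
the interface's ifi_ipackets counter via getifaddrs(3) once per second,
much as "netstat -w 1 -I em0" would.

/*
 * Hypothetical pps sampler (not from this thread): prints the inbound
 * packets-per-second of a FreeBSD interface once per second.  The
 * AF_LINK entry returned by getifaddrs(3) carries a struct if_data in
 * ifa_data; its ifi_ipackets field is the input packet count.
 */
#include <sys/types.h>
#include <sys/socket.h>
#include <net/if.h>
#include <ifaddrs.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static uintmax_t
ipackets(const char *ifname)
{
	struct ifaddrs *ifap, *ifa;
	uintmax_t n = 0;

	if (getifaddrs(&ifap) != 0)
		return (0);
	for (ifa = ifap; ifa != NULL; ifa = ifa->ifa_next) {
		if (ifa->ifa_addr != NULL &&
		    ifa->ifa_addr->sa_family == AF_LINK &&
		    strcmp(ifa->ifa_name, ifname) == 0) {
			n = ((struct if_data *)ifa->ifa_data)->ifi_ipackets;
			break;
		}
	}
	freeifaddrs(ifap);
	return (n);
}

int
main(int argc, char **argv)
{
	const char *ifname = argc > 1 ? argv[1] : "em0";
	uintmax_t prev, cur;

	/* Print the per-second delta of the input packet counter. */
	prev = ipackets(ifname);
	for (;;) {
		sleep(1);
		cur = ipackets(ifname);
		printf("%s: %ju pps\n", ifname, cur - prev);
		prev = cur;
	}
}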
I won't be running many of these tests, but found this one interesting --
I didn't know about monitor mode. It gives the following behaviour:
-monitor ttcp receiving on bge0 at 397 kpps: 35% idle (8.0-CURRENT)  13.6 cm/p
 monitor ttcp receiving on bge0 at 397 kpps: 83% idle (8.0-CURRENT)   5.8 cm/p
-monitor ttcp receiving on em0  at 580 kpps:  5% idle (~5.2)         12.5 cm/p
 monitor ttcp receiving on em0  at 580 kpps: 65% idle (~5.2)          4.8 cm/p

cm/p = data-cache misses per packet (k8-dc-misses on the bge0 system,
k7-dc-misses on the em0 system); a leading '-' marks the run without
monitor mode.  On both systems monitor mode saves roughly 8 misses per
packet.
So it seems that the major overheads are not near the driver (as I already
knew), and upper layers are responsible for most of the cache misses.
The packet header is accessed even in monitor mode, so I think most of
the cache misses in upper layers are not related to the packet header.
Maybe they are due mainly to perfect non-locality for mbufs.
Monitor mode doesn't access the packet's protocol headers in the
payload.  It only looks at the mbuf (which carries a structure called
the mbuf packet header).  The mbuf header is hot in the cache because
the driver just touched it and filled in the information.  The packet
content (the payload) is cold, having just arrived via DMA into DRAM.
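To make this concrete, here is a rough sketch of the monitor-mode short
circuit in the Ethernet input path. It paraphrases the idea behind
FreeBSD's ether_input() in net/if_ethersubr.c rather than quoting it;
the function name ether_input_sketch and the comments are mine, not the
kernel's.

/*
 * Paraphrased sketch of the IFF_MONITOR short circuit in
 * ether_input() (net/if_ethersubr.c, 8.x era); simplified, not the
 * verbatim kernel code.
 */
#include <sys/param.h>
#include <sys/mbuf.h>
#include <net/if.h>
#include <net/if_var.h>

static void
ether_input_sketch(struct ifnet *ifp, struct mbuf *m)
{
	/*
	 * The statistics update and the flag check touch only ifp and
	 * the mbuf packet header (m->m_pkthdr), both just written by
	 * the driver and therefore hot in the data cache.
	 */
	ifp->if_ipackets++;

	if (ifp->if_flags & IFF_MONITOR) {
		/*
		 * Monitor mode: free the chain before anything
		 * dereferences m->m_data, so the payload that was
		 * DMAed into DRAM is never pulled into the cache.
		 */
		m_freem(m);
		return;
	}

	/*
	 * Normal path: ether_demux() and the upper layers read the
	 * Ethernet/IP/TCP headers out of the payload; that is where
	 * the extra ~8 misses per packet in the numbers above are
	 * taken.
	 */
	ether_demux(ifp, m);
}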
--
Andre