On Sat, 24 Dec 2005, Andre Oppermann wrote:
Julian Elischer wrote:
"."@babolo.ru wrote:
I've been Googling up a storm but I am having trouble finding
recommendations for a good gigabit ethernet card to use with 4.11. The
Intel part numbers I found in the em readme are a few years old now, and
I can't quite determine how happy people are with other chipsets despite
my searches.
I'm looking for a basic PCI 1-port card with jumbo frame support if
possible--I can live without it. Either way, stability is much more
important than performance.
em on 32-bit/33MHz PCI works well up to 250Mbit/s, not more.
em on 64-bit/66MHz PCI works up to about 500Mbit/s without polling.
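As an aside, "polling" here refers to the DEVICE_POLLING framework. A
rough sketch of enabling it on a 4.x-era kernel follows; the exact option
and sysctl names are from memory and worth checking against polling(4):

    # kernel config additions (then rebuild and reboot):
    options DEVICE_POLLING
    options HZ=1000        # polling behaves better with a faster clock

    # enable at runtime:
    sysctl kern.polling.enable=1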
Please specify the packet size (distribution) you've got these numbers
from.
sk and bge on PCI/33MHz, under my own hacked version of an old FreeBSD
with a significantly modified sk driver:
- nfs with default packet size gives 15-30MB/s on a file system where
local r/w gives 51-53MB/s. Strangely, tcp is best for writing
(30MB/s vs 19 for udp) and worst for reading (15MB/s vs 23).
- sk to bge packet size 5 using ttcp -u: 1.1MB/s 240kpps (2% lost).
Either ttcp or sk must be modified to avoid problems with ENOBUFS
(a sketch of one such ttcp modification follows this list).
- sk to bge packet size 1500 using ttcp -u: 78MB/s 53.4kpps (0% lost).
- sk to bge packet size 8192 using ttcp -u: [panic]. Apparently I got
bad bits from -current or mismerged them.
- bge to sk packet size 5 using ttcp -u: 1.0MB/s 208kpps (0% lost).
Different problems with ENOBUFS -- unmodified ttcp spins, so the test
always takes 100% CPU.
- bge to sk packet size 1500 using ttcp -u: [bge hangs]
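For illustration, a minimal sketch of the kind of ttcp modification meant
above: a UDP blaster that backs off briefly on ENOBUFS instead of
busy-looping. This is not the real ttcp source; the destination address,
port and packet count are made up, and the 1ms usleep() is an arbitrary
backoff:

#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int
main(void)
{
    struct sockaddr_in sin;
    char buf[1500];
    long sent = 0, backoffs = 0, i;
    int s;

    if ((s = socket(AF_INET, SOCK_DGRAM, 0)) < 0) {
        perror("socket");
        return (1);
    }
    memset(&sin, 0, sizeof(sin));
    sin.sin_family = AF_INET;
    sin.sin_port = htons(5001);                  /* hypothetical port */
    sin.sin_addr.s_addr = inet_addr("10.0.0.2"); /* hypothetical sink */
    memset(buf, 0, sizeof(buf));

    for (i = 0; i < 1000000; ) {
        if (sendto(s, buf, sizeof(buf), 0,
            (struct sockaddr *)&sin, sizeof(sin)) < 0) {
            if (errno == ENOBUFS) {
                /* Interface queue full: sleep briefly instead of
                 * spinning, so the test doesn't burn 100% CPU. */
                backoffs++;
                usleep(1000);
                continue;
            }
            perror("sendto");
            break;
        }
        sent++;
        i++;
    }
    printf("sent %ld packets, backed off %ld times\n", sent, backoffs);
    close(s);
    return (0);
}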
You have to be careful here. Throughput and packets per second are not
directly related. Throughput is generally limited by hardware quality and
DMA speed. My measurements show that with decent hardware (em(4) and
bge(4) on PCI-X/133MHz) you can easily run at full wirespeed of 1 gigabit
per second with 1500 bytes per packet as the CPU only has to handle about
81,000 packets per second. (PCI/33MHz apparently can't do "only" 81,000
non-small packets/sec.) All processing like forwarding, firewalling and
routing table lookups is done once per packet, no matter how large the
packet is.
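For reference, the arithmetic behind the 81,000 figure (and the 1.488
million figure below): each frame carries 38 bytes of wire overhead
beyond the payload -- 14 Ethernet header + 4 CRC + 8 preamble + 12
inter-frame gap. A trivial sketch:

#include <stdio.h>

/*
 * Packets/sec at gigabit wirespeed for a given Ethernet payload size.
 * A 1500-byte payload gives ~81k pps; a 46-byte payload (the minimum,
 * i.e. a 64-byte frame) gives ~1.488M pps.
 */
int
main(void)
{
    const double wire_bps = 1e9;
    const int payloads[] = { 1500, 46 };
    size_t i;

    for (i = 0; i < sizeof(payloads) / sizeof(payloads[0]); i++) {
        double frame_bits = (payloads[i] + 38) * 8.0;
        printf("%4d-byte payload: %.0f packets/sec\n",
            payloads[i], wire_bps / frame_bits);
    }
    return (0);
}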
So at wirespeed with 64-byte packets you've got to do this 1.488 million
times per second. This is a bit harder and entirely CPU bound. With some
mods and fastforward we've got em(4) to do 714,000 packets per second on
my Opteron 852 with PCI-X/133. Hacking em(4) to m_free() the packets just
before they would hit the stack, I see that the hardware is capable of
receiving full wirespeed at 64-byte packets.
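The hack described above would look roughly like the fragment below.
This is a sketch only, not the real em(4) patch: the function is
invented, and the point is just the m_freem() in place of the normal
input path. Freeing the mbuf right before it would be handed to the
stack leaves only the driver/DMA receive path running, so the hardware's
pps ceiling can be measured without stack overhead:

#include <sys/param.h>
#include <sys/mbuf.h>
#include <net/if.h>
#include <net/if_var.h>

/*
 * Hypothetical tail of an rx completion handler (the real em(4)
 * code differs).  With EM_DROP_BEFORE_STACK defined, received
 * packets are freed instead of being passed up the stack.
 */
static void
em_rx_deliver_sketch(struct ifnet *ifp, struct mbuf *m)
{
#ifdef EM_DROP_BEFORE_STACK
    m_freem(m);                 /* drop just before the stack */
#else
    (*ifp->if_input)(ifp, m);   /* normal delivery */
#endif
}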
I have timestamps which show that my sk (a Yukon-mumble, whatever is
on an A7N8X-E) can't do more than the measured 240kpps. Once the ring
buffer is filled up, it takes about 4 usec per packet (typically 1767
usec for 480 packets) to send the packets. I guess it spends the
entire 4 usec talking to the PCI bus and perhaps takes several cycles
setting up transactions.
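(As a quick consistency check on those figures -- 1767 usec for 480
packets against the measured 240kpps -- a trivial sketch:)

#include <stdio.h>

/* Two routes to the per-packet cost quoted above. */
int
main(void)
{
    printf("ring drain: %.2f usec/packet\n", 1767.0 / 480.0);
    printf("240 kpps  : %.2f usec/packet\n", 1e6 / 240000.0);
    return (0);
}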
Bruce