On Tue, Sep 16, 2003 at 04:45:36PM -0500, David J Duchscher wrote:
We have been benchmarking FreeBSD configured as a bridge and I thought
I would share the data that we have been collecting. It's a work in
progress, so more data will show up as we try more Ethernet cards and
machine configurations. Everything is 100Mbps at the moment. I would be
very interested in any thoughts, insights or observations people might
have.
http://wolf.tamu.edu/~daved/bench-100/
Interesting results, thanks for sharing them. I would like to add a few comments and suggestions:
* as the results with the Gbit card show, the system per se is able to work at wire speed at 100Mbit/s, but some cards and/or drivers have bugs which prevent full-speed operation. Among these, I ran extensive experiments on the Intel PRO/100, and depending on how you program the card, the maximum transmit speed ranges from ~100kpps (with the default driver) to ~120kpps, no matter how fast the CPU is. I definitely blame the hardware here.
We have seen similar results. In a quick test, I didn't see any difference
in the performance of the Intel Pro/100 on a 2.4GHz Xeon machine. That was
rather surprising to me since lots of people swear by them.
* I have had very good results with cards supported by the 'dc' driver (Intel 21143 chipset and various clones) -- wire speed even at 64-byte frames. Possibly the 'sis' chips might do the same. I know the 'dc' cards are hard to find these days, but I would definitely try one of them if possible. I would also love to see numbers with the 'rl' cards (Realtek 8139, most of the cards you find in stores), which are probably among the slowest ones we have.
Yeah, I am trying to find cards to test, but it's hard. I can only purchase cards
that help with the project. For example, I will be testing the Intel Pro/1000T
Desktop Adapters since the gigabit cards have been shown to run at full bandwidth.
* the "latency" curves for some of the cards are quite strange (making me suspect bugs in the drivers or the like). How do you define the 'latency', how do you measure it, and do you know if it is affected by changing "options HZ=..." in your kernel config file (default is 100, i usually recommend using 1000) ?
All of this data is coming from an Anritsu MD1230A test unit running the RFC 2544 performance tests.
http://snurl.com/2d9x
Currently the kernel HZ value is set to 1000. Changing it and re-running the tests is on my list of things to do.
* especially under heavy load (e.g. when using bridge_ipfw=1 and largish rulesets), you might want to build a kernel with options DEVICE_POLLING and do a 'sysctl kern.polling.enable=1' (see "man polling" for other options you should use). It would be great to have the graphs with and without polling, and also with/without bridge_ipfw (even with a simple one-line firewall config) to get an idea of the overhead.
The use of polling should prevent the throughput dip, visible in some of the 'Frame loss' graphs, that occurs after the box reaches its throughput limit.
Polling support is available for a number of cards including 'dc', 'em', 'sis', 'fxp' and possibly a few others; a sketch of the relevant knobs is below.
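To make the with/without comparison concrete, something like the following is all that is needed. The kernel option and the kern.polling sysctl are the ones mentioned above (see "man polling" for the other tunables); the net.link.ether.bridge_ipfw sysctl name and the sample ipfw rule are from memory, so double-check them against your system:

    # kernel config, in addition to the HZ setting above:
    options         DEVICE_POLLING

    # at runtime, enable polling and pass bridged packets through ipfw:
    sysctl kern.polling.enable=1
    sysctl net.link.ether.bridge_ipfw=1       # sysctl name from memory, please verify
    ipfw add 1000 allow ip from any to any    # trivial one-rule firewall for overhead tests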
DEVICE_POLLING is high on the list of things to test. It looks like it's going to be a requirement, since all of these cards have livelocked the machine at some point during testing. I tried SMC cards today and the machine overloads so much that it stops responding long enough for the testing to fail.
Thanks for all the input. I am really hoping to get some useful numbers that others can use.
DaveD