Hi All,
I've done some ad-hoc testing off and on for a few years. I don't have
the data handy, but we do have a couple of rules of thumb that we use
internally...
1) Get the fastest PCI bus you can - PCI-X, etc.
2) Plan on 1 GHz of CPU per 1 Gbps of throughput.
The performance hit going from FreeBSD 4.x to 5.x/6.x was horrendous
for this kind of workload; hopefully 7.x will speed things up.
Also, thus far, we have stuck only with single CPU machines to be
conservative/safe. We are looking forward to a speedier TCP/IP stack
in 7.x and hoping to go to SMP routers at that time also.
Also, we've noticed, at least on FreeBSD 6.x, that there seem to be
very few advantages to using polling on network interfaces. We still
run it so that we have responsive SSH/BGP/OSPF processes on the
machines, but my testing has shown that for sheer throughput there is
basically no difference. I'd be curious if anybody knows the scoop on
this.
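For anyone wanting to try the comparison themselves, here is a rough
sketch of how polling is typically enabled on a FreeBSD 6.x box. The
interface name (em0) and HZ value are just examples; polling must be
compiled into the kernel first, and the exact sysctl knobs can vary by
release, so treat this as an assumption-laden starting point, not a
recipe:

```shell
# Kernel config must include (then rebuild/reboot):
#   options DEVICE_POLLING
#   options HZ=1000        # higher HZ gives polling more chances to run

# Enable polling per-interface (em0 is a placeholder):
ifconfig em0 polling

# Inspect the polling tunables to see what's available on this release:
sysctl kern.polling

# Disable it again to A/B test raw interrupt-driven throughput:
ifconfig em0 -polling
```

Comparing iperf numbers with and without the `polling` flag on the same
hardware is the simplest way to reproduce the "basically no difference
in throughput" observation above.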
Thanks,
- mike
Michael F. DeMan
Director of Technology
OpenAccess Network Services
Bellingham, WA 98225
[EMAIL PROTECTED]
360-733-9279
On Oct 4, 2007, at 11:49 AM, Cristian KLEIN wrote:
Thank you all for your replies.
Kirill Ponazdyr wrote:
Hi list,
A few days ago I tested whether a FreeBSD 7 box is able to handle
Gigabit
Can anybody point me to what the bottleneck of this configuration is?
The CPU was mostly idle, and PCIe 1x should carry way more. Or is the
experiment perhaps fundamentally flawed?
ICMP is not a good way to perform such tests, as many have mentioned;
better use iperf.
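A minimal iperf setup for this kind of router test would look something
like the following. The host name and flag choices here are
illustrative assumptions (classic iperf2 syntax); the idea is to push
TCP streams *through* the router under test between two end hosts,
rather than pinging the router itself:

```shell
# On a receiver host on the far side of the router:
iperf -s

# On a sender host on the near side (receiver.example is a placeholder):
#   -t 30  run for 30 seconds
#   -P 4   four parallel TCP streams, to avoid a single stream's
#          congestion window becoming the bottleneck
iperf -c receiver.example -t 30 -P 4

# For a rate-controlled UDP test instead (avoids TCP congestion
# control entirely, similar in spirit to the ping -f approach):
iperf -c receiver.example -u -b 900M -t 30
```

The UDP variant also reports loss and jitter, which makes it easier to
see where the forwarding path starts dropping packets.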
I used this test because it proved perfect when gigabit appeared almost
a decade ago. There wasn't anything at that time that could fill 1
Gbps, so we used the routers themselves to do the job. Also, I used
this setup to avoid TCP's congestion control mechanism and sub-maximum
bandwidth.
Of course, when I said "ping -f", I didn't mean a single "ping -f", but
rather enough instances of ping -f that the looping packets would
saturate the link.
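Something like the following sketch is what "enough ping -f" amounts to
in practice. The target addresses and payload size are placeholders
(1472 bytes of payload fills a standard 1500-byte MTU once ICMP and IP
headers are added), and flood ping requires root:

```shell
# Launch several flood pings in parallel against hypothetical targets
# behind the router; each one alone cannot fill the link.
for i in 1 2 3 4; do
    ping -f -s 1472 10.0.${i}.1 > /dev/null 2>&1 &
done

# Watch interface counters while the floods run, then stop them:
netstat -w 1 -I em0
kill %1 %2 %3 %4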
We have a FreeBSD 6.2 / pf box handling 2 Gbps of real traffic; it will
probably handle more, we just had no capacity or need to test.
Hardware is a single 2.4 GHz Xeon with 2 x Intel Quad Pro/1000 MT
PCI-X controllers on separate PCI-X busses.
Could you tell me, is there any difference between the 1000PT and the
1000MT except the slot type? Also, is there any difference between
Intel Desktop and Intel Server adaptors, or are these just marketing
buzzwords?
_______________________________________________
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "[EMAIL PROTECTED]"