Reply @Thomas Bodzar
> Why i386 on 12GB of RAM? Did you test amd64 and best option current?
Because it's an old Xeon CPU which doesn't support amd64 instructions
(only ia64).

> You think that 870Mbps is bad for 1Gbps card????
No, I don't. I think it's quite low for an aggregation of two 1Gbps
cards (2 x 1Gbps in each direction, so 4Gbps of aggregate throughput
in full duplex).

> Maybe you want to try the roundrobin option of
> http://www.openbsd.org/cgi-bin/man.cgi?query=trunk&apropos=0&sektion=0&manpath=OpenBSD+Current&arch=i386&format=html
> to aggregate traffic instead of load balance, or I don't understand.
Loadbalance seems more appropriate because it's a "smart" algorithm
based on src+dst MAC, src+dst IP and VLAN ID, like my switch when
it's configured with its "basic" algorithm (the "advanced" one also
uses src and dst port).
But why not, round robin should work too. I've tried it, but it's
extremely slow (less than 100Mbps); maybe CPU usage?
As mentioned, I have also tried LACP (configured on both sides)
without getting past the ~870Mbps.
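
For reference, my trunk setup follows the example in trunk(4); a
minimal sketch of it looks like this (em0/em1 and 192.0.2.10 are
placeholder names/addresses, not my exact config):

    # bring up the two physical em(4) ports
    ifconfig em0 up
    ifconfig em1 up
    # build the trunk with the loadbalance protocol and address it
    ifconfig trunk0 trunkproto loadbalance
    ifconfig trunk0 trunkport em0 trunkport em1 \
        192.0.2.10 netmask 255.255.255.0 up
    # the other protocols I tried are selected the same way:
    #   trunkproto lacp        (switch ports in an LACP aggregate)
    #   trunkproto roundrobin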

Thanks for help.

Reply @Robert Blacquiere
> trunk loadbalance ports handle traffic in a specific way. The algorithm
> is based on source <-> destination hashes by default and it keeps them
> on a single interface until that interface is dropped.

A "loadbalance" algorithm should split the traffic even if congestion
doesn't occur.
But it doesn't still work, if I use a tool like NetPerf the generated
traffic should exceed the capacity of one GigCard, generate drops, and
therefore use the second GigCard ?
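
Put differently (placeholder addresses, not my real ones): if the hash
only uses src+dst MAC, src+dst IP and VLAN ID, then I'd expect
something like this, no matter how much traffic one pair of hosts
generates:

    # one stream, one src/dst pair -> always the same trunk port
    iperf -c 192.0.2.20 -t 30
    # several parallel streams between the same two addresses still
    # share one src/dst pair, so they should also stay on one port
    iperf -c 192.0.2.20 -P 4 -t 30
    # only traffic towards a different address (e.g. an alias on the
    # far end) can hash onto the other port
    iperf -c 192.0.2.21 -P 4 -t 30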

> If you want to maximize throughput you need to use the round robin
> algorithm on both ends. If you only do it on OpenBSD it will cause
> multiple links to be used for sending but selective receiving.

That's why my switch is also configured with link aggregation, using
an algorithm based on src+dst MAC and src+dst IP.

> And 870 Mbps is a respectable speed for a gig card.
You are right, but with trunking (loadbalance or LACP) it should be
roughly double.

Thanks for help.


I understand the doubts about my configuration, but the performance
results were the same whether the traffic went through the switches or
over direct links between the two servers.

Initially I doubted my configuration on the OpenBSD side, but it was
correct according to "man trunk".
Then I doubted my configuration on the switch side, but it was correct
too: the default algorithm is based on src+dst MAC and src+dst IP.

So I have tested other setups:
- "Advanced" algorithm (src+dst MAC, src+dst IP, src+dst IP port) on
  the switch side and loadbalance on the OpenBSD side: same results.
- LACP (configured on both sides): same results. LACP was properly
  established (see the check sketched after this list).
- Round robin on the OpenBSD side and the default algorithm on the
  switch side: less than 100Mbps.
- No switches, direct links between the two OpenBSD boxes, with
  loadbalance, LACP and round robin: same results as the previous
  tests.
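
Regarding the LACP test above: to confirm the aggregate was
established I can look at the trunk interface itself (trunk0 is the
assumed name):

    # reports the trunk protocol in use and the LACP state flags
    # (active, collecting, distributing) for each trunkport
    ifconfig trunk0
    # the switch side has its own LACP status command, which is
    # vendor specific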

I then suspected that the trunk driver wasn't working and tried two
separate direct links instead (see experiment 5).
Same result! Two separate links should run at 1Gbps each, not
~870Mbps in total (the split was around 80/20).
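
To be concrete, the kind of test I mean looks roughly like this
(10.0.0.x / 10.0.1.x are placeholder addresses for the two
point-to-point links):

    # one iperf stream per direct link, started in parallel; each link
    # should be able to get close to 1Gbps on its own
    iperf -c 10.0.0.2 -t 30 &
    iperf -c 10.0.1.2 -t 30 &
    wait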

So it doesn't look like the trunk driver, but something lower in the
stack, like the em driver.
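
To narrow that down, these are the checks I can run on the OpenBSD
boxes while the traffic is flowing (standard base-system tools):

    # live CPU, interrupt and context-switch load during the test
    systat vmstat 1
    # live per-interface throughput
    systat ifstat 1
    # cumulative packet and error counters per interface
    netstat -i
    # interrupt counts per device (the em0/em1 lines)
    vmstat -i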

Or maybe it is simply normal for OpenBSD not to exceed ~870Mbps...?
Has anybody tested OpenBSD at higher speeds, perhaps with the em
driver (Intel NICs)?

Thanks
Xinform3n

2013/1/23 Robert Blacquiere <open...@blacquiere.nl>:
> On Tue, Jan 22, 2013 at 04:02:04PM +0100, Patrick Vultier wrote:
>> Hi,
>>
>> I tried to use two OpenBSD systems as network load generators with iperf and netperf.
>>
>> Each server is equipped with two dual-port Intel gigabit NICs (plus
>> one embedded gigabit NIC), two Xeon 3.2GHz CPUs with Hyper-Threading,
>> 12GB RAM and OpenBSD 5.2 i386.
>>
>> My problem: I can't exceed ~870Mbps with multiple interfaces, as
>> reported in the experiments (see below).
>> (PF was disabled for all experiments.)
>>
>> Why am I stuck at the ~1Gbps limit? Is this normal?
>> The em driver? Kernel performance? ...?
>>
>> Thanks for your help.
>> Xinform3n
> <snip>
>
> trunk loadbalance ports handle traffic in a specific way. The algorithm
> is based on source <-> destination hashes by default and it keeps them
> on a single interface until that interface is dropped.
>
> If you want to maximize throughput you need to use the round robin
> algorithm on both ends. If you only do it on OpenBSD it will cause
> multiple links to be used for sending but selective receiving.
>
> And 870 Mbps is a respectable speed for a gig card.
>
> Regards
>
> Robert
