On 29.12.2015. 22:23, Hrvoje Popovski wrote:
> On 29.12.2015. 17:49, Mark Kettenis wrote:
>>> Date: Tue, 22 Dec 2015 23:45:49 +0100
>>>>
>>>> On 22.12.2015. 22:08, Mark Kettenis wrote:
>>>>>> Anybody willing to give this a spin?  I don't have access to hardware
>>>>>> currently...
>>>>>>
>>>>>> Thanks,
>>>>>>
>>>>>> Mark
>>>>
>>>> Hi,
>>>>
>>>> I'm sending 1.1Mpps and this patch almost immediately triggers the
>>>> OACTIVE flag. The patch was applied to a clean source tree a few
>>>> minutes ago. If there is anything more I can do, please tell...
>> ok, that diff wasn't quite right.  Here is a better one.  This one
>> gets rid of the ridiculous number of scatter segments on the 82598.
>> There really is no point in allowing that many segments, and with the
>> new code it would reduce the usable part of the tx ring significantly.
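>>
>> A rough worked example of the ring math, with hypothetical numbers:
>> if a single packet may be split across up to 100 DMA segments, the
>> transmit path has to keep ~100 descriptors of a 1024-entry ring free
>> before it can accept another packet, so roughly 10% of the ring can
>> never be filled; capping packets at ~32 segments shrinks that
>> reserved slice to ~3%.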
>>
>> I did some testing of forwarding performance on a machine with two
>> sockets filled with:
>>
>>   cpu0: Intel(R) Xeon(R) CPU E5-2630 v3 @ 2.40GHz, 2400.37 MHz
>>
>> for a total of 16 cores forwarding between ix1 and ix0:
>>
>>   ix0 at pci3 dev 0 function 0 "Intel X540T" rev 0x01: msi, address 0c:c4:7a:4d:a3:e4
>>   ix1 at pci3 dev 0 function 1 "Intel X540T" rev 0x01: msi, address 0c:c4:7a:4d:a3:e5
>>
>> I basically tested how many pps I could push through the box without
>> loss, and how many pps got through if I sent 1Mpps into the box.  All
>> testing was done with pf disabled.
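>>
>> For anyone trying to reproduce this kind of measurement on an OpenBSD
>> box, a minimal sketch (assuming ix0 is the interface under test; the
>> traffic generator itself is not shown):
>>
>>   pfctl -d               # disable pf for the duration of the test
>>   netstat -I ix0 -w 1    # print per-second in/out packet counts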
>>
>> With -current I got the following numbers:
>>
>> - 730kpps without loss
>> - 82kpps when receiving 1Mpps
>>
>> and if I set net.inet.ip.ifq.maxlen to 8000 I got:
>>
>> - 740kpps without loss
>> - 640-740kpps when receiving 1Mpps (fluctuating)
>>
>> With this diff I got:
>>
>> - 670kpps without loss
>> - 250kpps when receiving 1Mpps
>>
>> and if I set net.inet.ip.ifq.maxlen to 8000 I got:
>>
>> - 690kpps without loss
>> - 680kpps when receiving 1Mpps (fluctuating)
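>>
>> For reference, the queue length can be changed at runtime with
>> sysctl(8), e.g.:
>>
>>   sysctl net.inet.ip.ifq.maxlen=8000
>>
>> and the same line can go into /etc/sysctl.conf to persist across
>> reboots.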
>>
>> So the maximum throughput goes down slightly, but it seems that with
>> the diff the box behaves better under load.
>>
>> Further tests are welcome!
> 
> 
> Hi,
> 
> I'm getting similar results with this patch. Sending 1Mpps and getting
> 750kpps. I didn't see the OACTIVE flag even when generating 14Mpps :)
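> 
> On OpenBSD the flag shows up in the interface flags in ifconfig
> output, so one simple (hypothetical) way to watch for it during a
> test is:
> 
>   while sleep 1; do ifconfig ix0 | grep -q OACTIVE && echo OACTIVE; done
> 
> which prints a line every second the flag is set.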
> 
> PF=no
> ddb.console=1
> kern.pool_debug=0
> net.inet.ip.forwarding=1
> net.inet.ip.ifq.maxlen=8192
> kern.maxclusters=32768
> 
> One cpu...
> Intel(R) Xeon(R) CPU E5-2430 v2 @ 2.50GHz, 2800.01 MHz
> 
> ix0 at pci2 dev 0 function 0 "Intel 82599" rev 0x01: msi, address 90:e2:ba:33:af:ec
> ix1 at pci2 dev 0 function 1 "Intel 82599" rev 0x01: msi, address 90:e2:ba:33:af:ed
> 


Just for fun I left 14Mpps running through the box overnight and ssh is
still alive, very slow, but alive ... and I'm getting a stable 650kpps
without the OACTIVE flag ...
