Hey Rahul,

Please do not cross-post.

I'll have someone who works on OVS performance testing reply to your original
post soon.

Could you also clarify (in your original post) whether you are testing on a
VM or on a bare-metal machine?

Thanks,
Alex Wang,

On Thu, Dec 4, 2014 at 10:42 PM, Rahul Arora <rahul1991.ar...@gmail.com>
wrote:

> Hi Team,
>
> We are comparing throughput and CPU consumption between OVS 2.3.0 and the
> kernel bridge at different packet sizes.
>
> We are observing a huge difference in performance. With unidirectional
> traffic from port 1 to port 2 at frame sizes of 64 and 128 bytes, the
> numbers are below.
>
>
> OVS 2.3.0 (dual core, matching flow in kernel space, kernel 3.12)
> vs. kernel bridge (dual-core system, kernel 3.12):
>
> Frame size   OVS throughput,          OVS CPU    Bridge throughput   Bridge CPU
> (bytes)      unidirectional (Mbps)    usage (%)  (Mbps)              usage (%)
>    64              375                  100            487               100
>   128              747                  100            864                40
>   256              927                   10            927                 5
>   320              941                    8            941                 4
>   384              950                    6            950                 4
>   448              957                    4            957                 3
>   512              962                    3            962                 3
>  1024              980                    1            980                 1
>  1500              986                    1            986                 1
> We have a matching flow in the kernel space with in_port=1 and
> action=output:2, and traffic is hitting that kernel flow.
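>
> For context, both columns converge on the Gigabit wire-rate ceiling at large
> frames. Here is a quick sketch of that ceiling in Python (assuming the
> standard 20 bytes of per-frame overhead: the 8-byte preamble/SFD plus the
> 12-byte inter-frame gap), which lines up with the measured plateaus above
> (927 Mbps at 256 bytes, 986 Mbps at 1500 bytes):
>
> # Theoretical throughput ceiling per frame size on a 1000 Mbps link,
> # assuming 20 bytes of preamble + inter-frame gap overhead per frame.
> for frame in (64, 128, 256, 512, 1024, 1500):
>     mbps = 1000.0 * frame / (frame + 20)
>     print("%5d bytes -> %.1f Mbps ceiling" % (frame, mbps))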
>
> How can we improve the performance of OVS, i.e., increase the throughput
> and decrease the CPU consumption at smaller frame sizes?
>
> On Wed, Dec 3, 2014 at 4:44 PM, Adam Mazur <adam.ma...@tiktalik.com>
> wrote:
>
>> I will try the current head version.
>> Meanwhile, my answers are below.
>>
>>
>> On 02.12.2014 at 23:24, Alex Wang wrote:
>>
>> Hey Adam,
>>
>>  Besides the questions just asked,
>>
>> On Tue, Dec 2, 2014 at 1:11 PM, Alex Wang <al...@nicira.com> wrote:
>>
>>> Hey Adam,
>>>
>>> Did you use any trick to avoid the ARP resolution?
>>>
>>> Running your script on my setup causes only ARP packets to be sent.
>>>
>>> Also, there is no change in the memory utilization of OVS.
>>>
>>
>> There is no trick with ARP.
>> The gateway for the VM acts as a "normal" router, running old OVS 1.7.
>> The router IS a bottleneck, since it consumes 100% of its CPU. But at the
>> same time, OVS 2.3 on the hypervisor consumes 400% of CPU and grows in RSS.
>>
>>
>>> One more thing: did you see the issue without the tunnel?
>>> This very recent commit fixes an issue with tunneling.
>>> Could you try again with it?
>>>
>>
>> I will try. These problems were seen on b6a3dd9cca (Nov 22); I will try
>> the head version.
>>
>>> commit b772066ffd066d59d9ebce092f6665150723d2ad
>>> Author: Pravin B Shelar <pshe...@nicira.com>
>>> Date:   Wed Nov 26 11:27:05 2014 -0800
>>>
>>>      route-table: Remove Unregister.
>>>
>>>     Since dpif registering for routing table at initialization
>>>     there is no need to unregister it. Following patch removes
>>>     support for turning routing table notifications on and off.
>>>     Due to this change OVS always listens for these
>>>     notifications.
>>>
>>>     Reported-by: YAMAMOTO Takashi <yamam...@valinux.co.jp>
>>>     Signed-off-by: Pravin B Shelar <pshe...@nicira.com>
>>>     Acked-by: YAMAMOTO Takashi <yamam...@valinux.co.jp>
>>>
>>>
>>
>>
>> I want to ask a few more questions to help debug:
>>
>> 1. Could you post the 'ovs-vsctl show' output on the XenServer?
>>
>>
>> http://pastebin.com/pe8YpRwr
>>
>> 2. Could you post the 'ovs-dpctl dump-flows' output during the run of the
>> script?
>>
>>
>> Partial output - head: http://pastebin.com/fUkbfeUN and tail:
>> http://pastebin.com/P1QgyH02
>> The full output is more than 100 MB of text when flooding 400K pps. Would
>> you like it gzipped off-list? (It is less than 1 MB compressed.)
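>>
>> If it helps, here is a minimal sketch in Python (assuming the usual datapath
>> flow dump format, where each line carries "packets:N, bytes:N" counters) to
>> summarize the dump instead of mailing the whole file:
>>
>> import re, subprocess
>>
>> # Count datapath flows and total packet hits from 'ovs-dpctl dump-flows'.
>> out = subprocess.check_output(["ovs-dpctl", "dump-flows"]).decode()
>> flows = out.splitlines()
>> pkts = sum(int(n) for n in re.findall(r"packets:(\d+)", out))
>> print("flows: %d, total packets: %d" % (len(flows), pkts))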
>>
>> 3. If the OOM killer is activated, you should see the OOM log in the syslog
>> or dmesg output; could you provide it?
>>
>>
>> I don't have one: the production logs have been rotated, remote logging
>> during the OOM was unavailable (the network was dead while the vswitch was
>> starting), and the test environment is too slow to generate an OOM quickly.
>> First (and much faster), I will try the head version, since you said there
>> are fixes for such a case.
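>>
>> Next time it happens, a rough sketch like this should pull any OOM-killer
>> lines out of the kernel ring buffer ("Out of memory" and "oom" are the
>> usual kernel markers, though the exact wording varies by kernel version):
>>
>> import subprocess
>>
>> # Grep the kernel ring buffer for OOM-killer activity.
>> for line in subprocess.check_output(["dmesg"]).decode().splitlines():
>>     if "oom" in line.lower() or "Out of memory" in line:
>>         print(line)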
>>
>> 4. Could you provide the route output on the hypervisor?
>>
>>
>> # route -n
>> Kernel IP routing table
>> Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
>> 0.0.0.0         10.2.7.1        0.0.0.0         UG    0      0        0 xenbr0
>> 10.2.7.0        0.0.0.0         255.255.255.0   U     0      0        0 xenbr0
>> 10.30.7.0       0.0.0.0         255.255.255.0   U     0      0        0 ib0
>> 37.233.99.0     0.0.0.0         255.255.255.0   U     0      0        0 xapi4
>>
>>
>>
>>
>>  Thanks,
>> Alex Wang,
>>
>>
>>
>>
>>
>>>  Thanks,
>>> Alex Wang,
>>>
>>> On Mon, Dec 1, 2014 at 2:43 AM, Adam Mazur <adam.ma...@tiktalik.com>
>>> wrote:
>>>
>>>> Hi,
>>>>
>>>> We are testing on kernel 3.18, OVS current master, and GRE tunnels on a
>>>> Xen server. The following Python script leads to fast ovs-vswitchd memory
>>>> growth (1 GB/minute) and finally an OOM kill:
>>>>
>>>>
>>>> import random, socket, struct, time
>>>>
>>>> sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
>>>> while True:
>>>>     # Pick a random IPv4 destination; every new address is a new flow.
>>>>     ip_raw = struct.pack('>I', random.randint(1, 0xffffffff))
>>>>     ip = socket.inet_ntoa(ip_raw)
>>>>     try:
>>>>         # Send a tiny UDP payload; unreachable destinations just raise.
>>>>         sock.sendto("123", (ip, 12345))
>>>>     except socket.error:
>>>>         pass
>>>>     #time.sleep(0.001)  # uncomment to throttle the send rate
>>>>
>>>>
>>>> During this test, OVS did not show a growing flow count, but memory still
>>>> grows.
>>>>
>>>> If packets are sent slowly enough, memory never grows; uncomment the
>>>> time.sleep line above to see this.
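>>>>
>>>> To quantify the growth, here is a rough monitoring sketch (assuming the
>>>> default pidfile location under /var/run/openvswitch/) that logs the
>>>> ovs-vswitchd RSS once a second while the flood script runs:
>>>>
>>>> import time
>>>>
>>>> def rss_kb(pid):
>>>>     # VmRSS in /proc/<pid>/status is reported in kB.
>>>>     with open("/proc/%d/status" % pid) as f:
>>>>         for line in f:
>>>>             if line.startswith("VmRSS:"):
>>>>                 return int(line.split()[1])
>>>>
>>>> pid = int(open("/var/run/openvswitch/ovs-vswitchd.pid").read())
>>>> while True:
>>>>     print("VmRSS: %d kB" % rss_kb(pid))
>>>>     time.sleep(1)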
>>>>
>>>> Best,
>>>> Adam
>>>>
>>>
>>>
>>
>>
>>
>>
>
_______________________________________________
discuss mailing list
discuss@openvswitch.org
http://openvswitch.org/mailman/listinfo/discuss
