Hi,

It seems to me you have to configure the number of RX queues:

ovs-vsctl set Open_vSwitch . other_config:n-dpdk-rxqs=<integer>

Otherwise they default to 1. The TX queues are created based on the number of lcores, AFAIK.
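
For example, on a box with two PMD cores per NUMA node, something like this should let each port's traffic spread over two queues (a sketch; whether a vswitchd restart is needed depends on the OVS/DPDK version, and newer releases replace this global knob with a per-interface option):

ovs-vsctl set Open_vSwitch . other_config:n-dpdk-rxqs=2
# newer OVS releases use a per-port setting instead (assuming dpdk0 is your port name):
# ovs-vsctl set Interface dpdk0 options:n_rxq=2

Note that even with multiple RX queues, how packets spread across them depends on the NIC's RSS hash, so the flows need to differ in the fields the NIC hashes on.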

Regards,

Zoltan

On 27/11/15 09:53, Praveen MANKARA RADHAKRISHNAN wrote:
Hi,

I am testing a scenario with 2 servers.

The 2 servers are connected via one 10Gb interface.
Each server also has one Ixia port connected.

From the Ixia, VLAN traffic comes in on dpdk port 0 of server 1. The VLAN
tag is stripped and the traffic goes into a VXLAN tunnel through dpdk port 1
to the second server. There the VXLAN header is stripped and the traffic
goes out the other dpdk port to Ixia port 2.

The same happens in the reverse direction.

When I ran the test I made the following observation:
packets are not being distributed across the cores, and because of this
performance is not scaling.
Is there any configuration I should specifically do to get it to scale?

Please find the details about my test.

Configuration:
-----------------------

Server 1
------------

    ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
    ovs-vsctl add-port br0 dpdk0 -- set Interface dpdk0 type=dpdk
    ovs-vsctl add-port br0 vxlan0 -- set interface vxlan0 type=vxlan options:remote_ip=10.200.0.6
    ovs-vsctl add-br br1 -- set bridge br1 datapath_type=netdev
    ovs-vsctl add-port br1 dpdk1 -- set Interface dpdk1 type=dpdk
    ip link set br0 up
    ip link set br1 up
    ip a a 10.200.0.5/24 dev br1
    ovs-ofctl add-flow br0 in_port=1,actions=strip_vlan,output:2
    ovs-ofctl add-flow br0 in_port=2,actions=output:1
Server 2
------------

    ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
    ovs-vsctl add-port br0 dpdk0 -- set Interface dpdk0 type=dpdk
    ovs-vsctl add-port br0 vxlan0 -- set interface vxlan0 type=vxlan options:remote_ip=10.200.0.5
    ovs-vsctl add-br br1 -- set bridge br1 datapath_type=netdev
    ovs-vsctl add-port br1 dpdk1 -- set Interface dpdk1 type=dpdk
    ip link set br0 up
    ip link set br1 up
    ip a a 10.200.0.6/24 dev br1
    ovs-ofctl add-flow br0 in_port=1,actions=strip_vlan,output:2
    ovs-ofctl add-flow br0 in_port=2,actions=output:1

1C_2T:

ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=2000002
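
For reference, the mask is a hex bitmap of core IDs; 0x2000002 sets bits 1 and 25, which is why the two PMD threads below run on cores 1 and 25:

# 0x2000002 = 2^1 + 2^25 = 2 + 33554432 = 33554434 -> PMD threads on cores 1 and 25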

[root@controller ~]# ovs-appctl dpif-netdev/pmd-stats-show
main thread:
         emc hits:0
         megaflow hits:0
         miss:0
         lost:0
         polling cycles:844386 (100.00%)
         processing cycles:0 (0.00%)
pmd thread numa_id 1 core_id 1:
         emc hits:140438952
         megaflow hits:126
         miss:2
         lost:0
         polling cycles:11260105290 (21.84%)
         processing cycles:40302774189 (78.16%)
         avg cycles per packet: 367.15 (51562879479/140439080)
         avg processing cycles per packet: 286.98 (40302774189/140439080)
pmd thread numa_id 1 core_id 25:
         emc hits:148663170
         megaflow hits:126
         miss:2
         lost:0
         polling cycles:11918926350 (23.16%)
         processing cycles:39554751984 (76.84%)
         avg cycles per packet: 346.24 (51473678334/148663298)
         avg processing cycles per packet: 266.07 (39554751984/148663298)

Here traffic is coming from both Ixia ports, so both cores are getting
packets.

Changed the threads to 2C_4T:

[root@redhat7 ~]# ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=a00000a
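
Here 0xa00000a sets bits 1, 3, 25 and 27:

# 0xa00000a = 2^1 + 2^3 + 2^25 + 2^27 -> PMD threads on cores 1, 3, 25 and 27

so four PMD threads come up, but with the default of one RX queue per DPDK port each port can only be polled by a single PMD, which would explain why only two of the four threads see traffic in the output below.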

[root@redhat7 ~]#  ovs-appctl dpif-netdev/pmd-stats-show
main thread:
         emc hits:0
         megaflow hits:0
         miss:0
         lost:0
         polling cycles:1105692 (100.00%)
         processing cycles:0 (0.00%)
pmd thread numa_id 1 core_id 1:
         emc hits:174248214
         megaflow hits:126
         miss:2
         lost:0
         polling cycles:41551018158 (49.30%)
         processing cycles:42726992931 (50.70%)
         avg cycles per packet: 483.67 (84278011089/174248342)
         avg processing cycles per packet: 245.21 (42726992931/174248342)
pmd thread numa_id 1 core_id 3:
         emc hits:182375144
         megaflow hits:126
         miss:2
         lost:0
         polling cycles:39728724567 (46.68%)
         processing cycles:45381262518 (53.32%)
         avg cycles per packet: 466.68 (85109987085/182375272)
         avg processing cycles per packet: 248.83 (45381262518/182375272)
pmd thread numa_id 1 core_id 27:
         emc hits:0
         megaflow hits:0
         miss:0
         lost:0
pmd thread numa_id 1 core_id 25:
         emc hits:0
         megaflow hits:0
         miss:0
         lost:0
[root@redhat7 ~]#

As you can see, still only 2 cores were used.



The same goes for the other server.
From the Ixia I am sending multiple flows, more than 50, by
changing the source IP.
The problem is that it is as if 1 port is bound to 1 core; the packets are
not being distributed.

Is there any specific configuration I need to do to spread the packets
across different cores?
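
One way to see how the RX queues are mapped to PMD threads is the rxq listing (a sketch; dpif-netdev/pmd-rxq-show only exists in OVS releases newer than the one apparently used here):

# list the port/queue each PMD thread polls
ovs-appctl dpif-netdev/pmd-rxq-show

If each dpdk port shows only queue 0, assigned to a single PMD, that matches the one-port-per-core behaviour described above.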

Thanks
Praveen



_______________________________________________
discuss mailing list
discuss@openvswitch.org
http://openvswitch.org/mailman/listinfo/discuss
