Hi All,
        While sending 1G of bidirectional 64-byte traffic, I see low
performance in a Phy-VM-Phy setup using OVS with DPDK. I assigned one core
to the vswitchd process and also performed the core-affinity tuning steps
described in INSTALL.DPDK.md, but I still get only around 1.1G of
throughput. When the same configuration is used for Phy-Phy, I observe
good performance, and increasing the number of cores assigned to vswitchd
also gives good performance. What I would like to know is whether good
performance is achievable with a single core.
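
For reference, this is roughly how I pinned the datapath (the core number
is from my setup; as I understand it, pmd-cpu-mask selects the cores the
poll mode driver threads run on):

    # Run the single PMD thread on core 2 (bitmask 0x4), on the NICs' socket
    ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=4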

Following are the platform and setup details (a sketch of the commands I
used follows this list):
NOTE: Used the latest OVS master and DPDK 2.0.0.
1. Intel Xeon E5-2603 v3 (2 sockets)
2. Kernel boot parameters: hugepagesz=1G hugepages=8 isolcpus=1,2,3,4,5,6,7,8
3. Bound two I350 NICs to the igb_uio driver.
4. Ensured that the DPDK ports and the cores assigned to vswitchd are
mapped to the same socket (also passed '--socket-mem 4096').
5. Brought up OVS+DPDK as described in INSTALL.DPDK.md for the vhost-user
implementation.
6. Brought up the VM using QEMU with 4 vCPUs and ran the DPDK l2fwd sample
inside it.
7. Used DPDK Pktgen to pump 1G of bidirectional 64-byte traffic.
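
The sketch below shows the bring-up flow I followed; PCI addresses, socket
paths, the DB_SOCK variable, and the VM image are placeholders following
the INSTALL.DPDK.md conventions, not my exact values:

    # 3. Bind the two I350 ports to igb_uio (example PCI addresses)
    ./tools/dpdk_nic_bind.py --bind=igb_uio 0000:03:00.0 0000:03:00.1

    # 5. Start vswitchd with all hugepage memory on socket 0
    ./vswitchd/ovs-vswitchd --dpdk -c 0x2 -n 4 --socket-mem 4096,0 \
        -- unix:$DB_SOCK --pidfile --detach

    # Create a userspace bridge with two physical and two vhost-user ports
    ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
    ovs-vsctl add-port br0 dpdk0 -- set Interface dpdk0 type=dpdk
    ovs-vsctl add-port br0 dpdk1 -- set Interface dpdk1 type=dpdk
    ovs-vsctl add-port br0 vhost-user0 \
        -- set Interface vhost-user0 type=dpdkvhostuser
    ovs-vsctl add-port br0 vhost-user1 \
        -- set Interface vhost-user1 type=dpdkvhostuser

    # 6. Start the VM with hugepage-backed memory shared with vswitchd
    qemu-system-x86_64 -m 2048 -smp 4 -cpu host -enable-kvm \
        -object memory-backend-file,id=mem,size=2048M,mem-path=/dev/hugepages,share=on \
        -numa node,memdev=mem -mem-prealloc \
        -chardev socket,id=char0,path=/usr/local/var/run/openvswitch/vhost-user0 \
        -netdev type=vhost-user,id=net0,chardev=char0,vhostforce \
        -device virtio-net-pci,netdev=net0 \
        -chardev socket,id=char1,path=/usr/local/var/run/openvswitch/vhost-user1 \
        -netdev type=vhost-user,id=net1,chardev=char1,vhostforce \
        -device virtio-net-pci,netdev=net1 \
        /path/to/vm-image.img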

I observe that even though 1G of bidirectional traffic is pumped, the
measured rate is still only 1100/1100. I am really not sure why each NIC
does not transmit beyond 550 Mbps. When I use the same configuration for
Phy-Phy, I see a rate of 2000/2000.
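
The rates above are as reported by Pktgen. In case it helps with
diagnosis, the command I use to read OVS's own counters is:

    # Per-port rx/tx packet counts and drops on the userspace bridge
    ovs-ofctl dump-ports br0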

Can you please let me know if I should make any I350-specific changes in
the code?
All the RX/TX descriptor values of my I350 NICs are at their default
values on my Linux host. Should I do any NIC tuning to increase
performance?
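
For what it's worth, I checked those defaults while the ports were still
bound to the kernel igb driver, since ethtool cannot query them once they
are bound to igb_uio ('eth2' is just the name one port had at the time):

    # Show maximum and current RX/TX ring (descriptor) sizes
    ethtool -g eth2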

I'm new to Open vSwitch and want to learn its internals. It would be
really helpful if you could point me to the packet path in the source code
for the Phy-VM-Phy scenario, so I can get a clear understanding and work
on improving performance.
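
From my first pass through the sources, my rough understanding of the
receive-side path is the following (function names as I read them in
lib/dpif-netdev.c and lib/netdev-dpdk.c on current master; please correct
me if this is wrong):

    pmd_thread_main()                  # PMD polling loop (lib/dpif-netdev.c)
      dp_netdev_process_rxq_port()
        netdev_dpdk_rxq_recv()         # rte_eth_rx_burst() on the physical port
        dp_netdev_input()              # EMC lookup, classifier on a miss
          dp_netdev_execute_actions()  # output action
            netdev_dpdk_vhost_send()   # rte_vhost_enqueue_burst() into the VM

    # VM -> phy presumably goes through netdev_dpdk_vhost_rxq_recv()
    # (rte_vhost_dequeue_burst()) and out via rte_eth_tx_burst().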

Thanks in advance.

Regards,
Rajeev.