[ovs-dev] [PATCH 2/2] ovn: fix lrouter flows building issue when easy SNAT configured

2016-07-28 Thread Dongjun
The lrouter should drop traffic destined to its own port IPs unless those IPs are configured for SNAT, but currently these drop flows are still built even for SNAT-configured IPs. Signed-off-by: Dongjun --- ovn/northd/ovn-northd.c | 16 +++- 1 file changed, 11 insertions(+), 5 deletions(-) mode change 100644 => 100755 ovn/no

[ovs-dev] [PATCH 1/2] ovn: add easy SNAT test case

2016-07-28 Thread Dongjun
Signed-off-by: Dongjun --- tests/system-ovn.at | 106 +++- 1 file changed, 105 insertions(+), 1 deletion(-) mode change 100644 => 100755 tests/system-ovn.at diff --git a/tests/system-ovn.at b/tests/system-ovn.at old mode 100644 new mode 100

[ovs-dev] [PATCH 2/2] ovn: fix lrouter flows building issue when easy SNAT configured

2016-07-27 Thread Dongjun
The lrouter should drop traffic destined to its own port IPs unless those IPs are configured for SNAT, but currently these drop flows are still built even for SNAT-configured IPs. Signed-off-by: Dongjun --- ovn/northd/ovn-northd.c | 16 +++- 1 file changed, 11 insertions(+), 5 deletions(-) mode change 100644 => 100755 ovn/no

[ovs-dev] [PATCH 1/2] ovn: add easy SNAT test case

2016-07-27 Thread Dongjun
Signed-off-by: Dongjun --- tests/system-ovn.at | 106 +++- 1 file changed, 105 insertions(+), 1 deletion(-) mode change 100644 => 100755 tests/system-ovn.at diff --git a/tests/system-ovn.at b/tests/system-ovn.at old mode 100644 new mode 100

Re: [ovs-dev] [PATCH] ovn: fix bug of drop-flow building in build_lrouter_flows

2016-07-27 Thread Dongjun
There is a DNAT-SNAT (other IP) test case in system-ovn.at, but easy SNAT (port IP) is not included. I added a new test, which currently fails and passes with the modification. I will update my patch in v2. On 2016/7/26 10:57, Ryan Moats wrote: "dev" wrote on 07/25/2016 08:49:38 PM: >

[ovs-dev] [PATCH] ovn: fix bug of drop-flow building in build_lrouter_flows

2016-07-25 Thread Dongjun
p;" {stage-name=lr_in_ip_input} 1915535e-7738-43db-8341-2306221b0691 "ip4.dst == {192.168.246.200}" ingress 60 1
c3d56ca6-38c6-4943-8d89-2c39cbc3cd9b "next;" {stage-name=lr_out_snat} 1915535e-7738-43db-8341-2306221b0691 "1" egress 0

[ovs-dev] Fwd: [ovs-discuss][dpdk-ovs]VXLAN encapsulation exceeds the MTU of dpdk port .

2015-06-28 Thread Dongjun
As in the following topology, two VMs communicate via a VXLAN tunnel. TCP pkts may be dropped for exceeding the MTU of the host DPDK port in br2. For now I can decrease the VMs' MTU to accommodate the traffic, and that works well. I have not found a way to change the MTU of a DPDK phy port; is there a blueprint for supportin

Re: [ovs-dev] [PATCH v2] Do not flush tx queue which is shared among CPUs since it is always flushed

2015-06-16 Thread Dongjun
On 2015/6/17 1:44, Daniele Di Proietto wrote: On 16/06/2015 07:40, "Pravin Shelar" wrote: On Mon, Jun 8, 2015 at 7:42 PM, Pravin Shelar wrote: On Mon, Jun 8, 2015 at 6:13 PM, Wei li wrote: When the tx queue is shared among CPUs, the pkts are always flushed in 'netdev_dpdk_eth_send', so it is unn

Re: [ovs-dev] Is this an issue for DPDK vhost rss?

2015-06-10 Thread Dongjun
ecv(). Other netdev providers that do not support reading the RSS hash (netdev-linux, netdev-bsd) call dp_packet_set_rss_hash(pkt, 0) on every received packet. I thought Dongjun was suggesting that we should generate a hash? I agree that it should be reset on every received packet. I upstreamed somethi

Re: [ovs-dev] Is the tx spinlock in __netdev_dpdk_vhost_send necessary?

2015-06-10 Thread Dongjun
On 2015/6/10 19:16, Traynor, Kevin wrote: -Original Message- From: dev [mailto:dev-boun...@openvswitch.org] On Behalf Of Dongjun Sent: Tuesday, June 9, 2015 4:36 AM To: dev@openvswitch.org Subject: [ovs-dev] Is the tx spinlock in __netdev_dpdk_vhost_send necessary? This is the source

[ovs-dev] Is this an issue for DPDK vhost rss?

2015-06-10 Thread Dongjun
Hi: In "dp_packet_get_rss_hash", mbuf.hash.rss is returned directly. But when it's a DPDK vhost pkt, rss isn't initialized, so "dpif_netdev_packet_get_rss_hash" gets an indeterminate value. There is no such issue for the OVS Linux vport. I confirmed it with gdb. *Flow 1:* 6.0.0.1 -> 6.0.0.2 Break

[ovs-dev] Is the tx spinlock in __netdev_dpdk_vhost_send necessary?

2015-06-08 Thread Dongjun
This is the source code of "__netdev_dpdk_vhost_send" in the master branch: " ... /* There is a single vHost TX queue, so we need to lock it for TX. */ rte_spinlock_lock(&vhost_dev->vhost_tx_lock); do { unsigned int tx_pkts; tx_pkts = rte_vhost_enqueue_burst(virtio_dev,

[ovs-dev] Just for my output format test, pls ignore it.

2015-06-07 Thread dongjun
When the tx queue is shared among CPUs, the pkts are always flushed in 'netdev_dpdk_eth_send', so it is unnecessary to flush in netdev_dpdk_rxq_recv; otherwise tx will be accessed without locking. Signed-off-by: Wei li --- lib/netdev-dpdk.c | 7 +--