> When a tx queue is shared among CPUs, the packets are always flushed
> in 'netdev_dpdk_eth_send', so flushing in netdev_dpdk_rxq_recv is
> unnecessary.  Otherwise the tx queue would be accessed without
> locking.

Are you seeing a specific bug, or is this just to account for a device
with fewer queues than pmds?
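For reference, my reading of the send side, as a condensed sketch (not
the literal lib/netdev-dpdk.c code; the function and field names are
from the tree, but the body is simplified and drops the may_steal and
retry handling):

static void
send_path_sketch(struct netdev_dpdk *dev, int qid,
                 struct dp_packet **pkts, int cnt)
{
    if (OVS_UNLIKELY(dev->txq_needs_locking)) {
        /* Several CPUs share this hardware queue: queue and flush
         * under the lock, since buffered packets must not sit in
         * tx_q[qid] while nobody holds tx_lock. */
        rte_spinlock_lock(&dev->tx_q[qid].tx_lock);
        dpdk_queue_pkts(dev, qid, pkts, cnt);
        dpdk_queue_flush(dev, qid);
        rte_spinlock_unlock(&dev->tx_q[qid].tx_lock);
    } else {
        /* Exclusive queue: just buffer; the owning pmd thread
         * flushes from its rx poll loop (the hunk below). */
        dpdk_queue_pkts(dev, qid, pkts, cnt);
    }
}

If that is accurate, the rx-side flush really is redundant for shared
queues, and worse, it runs without taking tx_lock.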
> Signed-off-by: Wei li <l...@dtdream.com>
> ---
>  lib/netdev-dpdk.c | 7 +++++--
>  1 file changed, 5 insertions(+), 2 deletions(-)
>
> diff --git a/lib/netdev-dpdk.c b/lib/netdev-dpdk.c
> index 63243d8..25e3a73 100644
> --- a/lib/netdev-dpdk.c
> +++ b/lib/netdev-dpdk.c
> @@ -892,8 +892,11 @@ netdev_dpdk_rxq_recv(struct netdev_rxq *rxq_, struct dp_packet **packets,
>      int nb_rx;
>
>      /* There is only one tx queue for this core. Do not flush other
> -     * queueus. */
> -    if (rxq_->queue_id == rte_lcore_id()) {
> +     * queueus.

s/queueus/queues

> +     * Do not flush tx queue which is shared among CPUs
> +     * since it is always flushed */
> +    if (rxq_->queue_id == rte_lcore_id() &&
> +        OVS_LIKELY(!dev->txq_needs_locking)) {
>          dpdk_queue_flush(dev, rxq_->queue_id);

Do you see any drop in performance in a simple phy-phy case before and
after this patch?

>      }
>
> --
> 1.9.5.msysgit.1
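To spell out the invariant this hunk establishes, again as a condensed
sketch (not the real function body, which also calls rte_eth_rx_burst()
and handles the packet count):

static void
rxq_flush_sketch(struct netdev_dpdk *dev, struct netdev_rxq *rxq_)
{
    if (rxq_->queue_id == rte_lcore_id()          /* this pmd's queue */
        && OVS_LIKELY(!dev->txq_needs_locking)) { /* and not shared */
        /* Safe without tx_lock: only this core ever touches
         * tx_q[queue_id].  In the shared case the send path has
         * already flushed under tx_lock, so skipping here avoids
         * both redundant work and an unlocked access. */
        dpdk_queue_flush(dev, rxq_->queue_id);
    }
}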