> On 16 Mar 2015, at 15:21, Traynor, Kevin <kevin.tray...@intel.com> wrote:
>
>>
>> -----Original Message-----
>> From: dev [mailto:dev-boun...@openvswitch.org] On Behalf Of Daniele Di
>> Proietto
>> Sent: Thursday, March 12, 2015 6:05 PM
>> To: dev@openvswitch.org
>> Subject: [ovs-dev] [PATCH 2/6] netdev-dpdk: Adapt the requested number of tx
>> and rx queues.
>>
>> This commit changes the semantics of 'netdev_set_multiq()' to allow OVS
>> DPDK to run on device with limited multi queue support.
>
> This is great, because on a dual socket system with an 18 core Haswell and HT
> enabled you could be looking for 72 tx queues.
>
>>
>> * If a netdev doesn't have the requested number of rxqs it can simply
>>   inform the datapath without failing.
>> * If a netdev doesn't have the requested number of txqs it should try
>>   to create as many as possible and use locking.
>>
>> Signed-off-by: Daniele Di Proietto <diproiet...@vmware.com>
>> ---
>> lib/netdev-dpdk.c     | 94 +++++++++++++++++++++++++++++++--------------------
>> lib/netdev-provider.h | 11 ++++++
>> lib/netdev.c          | 10 ++++++
>> vswitchd/vswitch.xml  |  2 +-
>> 4 files changed, 80 insertions(+), 37 deletions(-)
>>
>> diff --git a/lib/netdev-dpdk.c b/lib/netdev-dpdk.c
>> index 54bc318..2278377 100644
>> --- a/lib/netdev-dpdk.c
>> +++ b/lib/netdev-dpdk.c
>
> [snip]
>
>> @@ -656,8 +684,10 @@ netdev_dpdk_set_multiq(struct netdev *netdev_, unsigned int n_txq,
>>     netdev->up.n_txq = n_txq;
>>     netdev->up.n_rxq = n_rxq;
>>     rte_free(netdev->tx_q);
>> -    netdev_dpdk_alloc_txq(netdev, n_txq);
>>     err = dpdk_eth_dev_init(netdev);
>> +    netdev_dpdk_alloc_txq(netdev, netdev->real_n_txq);
>> +
>> +    netdev->txq_needs_locking = netdev->real_n_txq != netdev->up.n_txq;
>
> Probably no point in allocing here if you have been returned an error from
> dpdk_eth_dev_init(). You could just skip to the mutex_unlocking
>
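The early exit being suggested would look roughly like the sketch below; this
is only an illustration, the 'out' label name is hypothetical and not taken
from the patch. The unlock calls are the ones that already follow this hunk
in netdev_dpdk_set_multiq().

    netdev->up.n_txq = n_txq;
    netdev->up.n_rxq = n_rxq;
    rte_free(netdev->tx_q);

    err = dpdk_eth_dev_init(netdev);
    if (err) {
        /* Device reconfiguration failed: skip the txq allocation and
         * go straight to the unlocking. */
        goto out;
    }

    netdev_dpdk_alloc_txq(netdev, netdev->real_n_txq);
    netdev->txq_needs_locking = netdev->real_n_txq != netdev->up.n_txq;

out:
    ovs_mutex_unlock(&netdev->mutex);
    ovs_mutex_unlock(&dpdk_mutex);

    return err;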
I could add another goto label, but it would complicate the code a bit. If you
don't have a strong opinion about this I would prefer leaving it as it is.

>>
>>     ovs_mutex_unlock(&netdev->mutex);
>>     ovs_mutex_unlock(&dpdk_mutex);
>> @@ -921,12 +951,21 @@ netdev_dpdk_send__(struct netdev_dpdk *dev, int qid,
>> }
>>
>
> [snip]
>
>>
>> -#define NETDEV_DPDK_CLASS(NAME, INIT, CONSTRUCT, MULTIQ, SEND) \
>> +#define NETDEV_DPDK_CLASS(NAME, INIT, CONSTRUCT)  \
>> {                                                  \
>>     NAME,                                          \
>>     INIT,                       /* init */         \
>> @@ -1429,9 +1455,9 @@ unlock_dpdk:
>>     NULL,                       /* push header */  \
>>     NULL,                       /* pop header */   \
>>     netdev_dpdk_get_numa_id,    /* get_numa_id */  \
>> -    MULTIQ,                     /* set_multiq */   \
>> +    netdev_dpdk_set_multiq,     /* set_multiq */   \
>
> I don't think the netdev_dpdk_set_multiq() is needed for dpdkr as at
> present you can't change the number of q's for dpdkr. It doesn't do
> any harm either. Is there a reason you put it in e.g. future proofing?

I wanted to minimise (when possible) the differences between dpdk and dpdkr.

Besides, set_multiq for dpdkr stores the requested number of transmission
queues in netdev->n_txq, which can be shown via ovs-appctl dpctl/show -s.
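Coming back to the main mechanism in this patch, the "create as many txqs as
possible and use locking" part of the commit message can be pictured with the
rough sketch below of the transmit path. This is a sketch only, not the hunk
from the patch: the tx_lock field name is an assumption, while
rte_spinlock_lock()/rte_spinlock_unlock() are the standard DPDK spinlock calls.

    if (dev->txq_needs_locking) {
        /* Fewer real tx queues than sending threads: map the requested
         * queue id onto an existing queue and serialise access to it. */
        qid = qid % dev->real_n_txq;
        rte_spinlock_lock(&dev->tx_q[qid].tx_lock);
    }

    /* ... burst the packets out on tx queue 'qid' ... */

    if (dev->txq_needs_locking) {
        rte_spinlock_unlock(&dev->tx_q[qid].tx_lock);
    }

When the device can provide one queue per thread, txq_needs_locking stays
false (real_n_txq == up.n_txq in the hunk above) and the fast path avoids the
lock entirely.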