> -----Original Message-----
> From: Ferruh Yigit <ferruh.yi...@amd.com>
> Sent: Friday, October 21, 2022 17:52
> To: Guo, Junfeng <junfeng....@intel.com>; Zhang, Qi Z
> <qi.z.zh...@intel.com>; Wu, Jingjing <jingjing...@intel.com>;
> ferruh.yi...@xilinx.com; Xing, Beilei <beilei.x...@intel.com>
> Cc: dev@dpdk.org; Li, Xiaoyun <xiaoyun...@intel.com>;
> awogbem...@google.com; Richardson, Bruce
> <bruce.richard...@intel.com>; hemant.agra...@nxp.com;
> step...@networkplumber.org; Xia, Chenbo <chenbo....@intel.com>;
> Zhang, Helin <helin.zh...@intel.com>
> Subject: Re: [PATCH v7 8/8] net/gve: add support for Rx/Tx
>
> On 10/21/2022 10:19 AM, Junfeng Guo wrote:
> >
> > Add Rx/Tx of GQI_QPL queue format and GQI_RDA queue format.
> >
> > Signed-off-by: Xiaoyun Li <xiaoyun...@intel.com>
> > Signed-off-by: Junfeng Guo <junfeng....@intel.com>
>
> <...>
>
> > +
> > +static inline void
> > +gve_tx_clean_swr_qpl(struct gve_tx_queue *txq)
> > +{
> > +	uint32_t start = txq->sw_ntc;
> > +	uint32_t ntc, nb_clean;
> > +
> > +	ntc = txq->sw_tail;
> > +
> > +	if (ntc == start)
> > +		return;
> > +
> > +	/* if wrap around, free twice. */
> > +	if (ntc < start) {
> > +		nb_clean = txq->nb_tx_desc - start;
> > +		if (nb_clean > GVE_TX_MAX_FREE_SZ)
> > +			nb_clean = GVE_TX_MAX_FREE_SZ;
> > +		gve_free_bulk_mbuf(&txq->sw_ring[start], nb_clean);
> > +
> > +		txq->sw_nb_free += nb_clean;
> > +		start += nb_clean;
> > +		if (start == txq->nb_tx_desc)
> > +			start = 0;
> > +		txq->sw_ntc = start;
> > +	}
> > +
> > +	if (ntc > start) {
> > +		nb_clean = ntc - start;
> > +		if (nb_clean > GVE_TX_MAX_FREE_SZ)
> > +			nb_clean = GVE_TX_MAX_FREE_SZ;
> > +		gve_free_bulk_mbuf(&txq->sw_ring[start], nb_clean);
> > +		txq->sw_nb_free += nb_clean;
> > +		start += nb_clean;
> > +		txq->sw_ntc = start;
> > +	}
> > +}
>
> [copy/paste from previous version]
>
> may be can drop the 'if' block, since "ntc == start" and "ntc < start"
> cases already covered.
Yes, this 'if' block is dropped in v7 as suggested. Thanks!

> <...>
>
> > +uint16_t
> > +gve_tx_burst(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t
> nb_pkts)
> > +{
> > +	struct gve_tx_queue *txq = tx_queue;
> > +
> > +	if (txq->is_gqi_qpl)
> > +		return gve_tx_burst_qpl(tx_queue, tx_pkts, nb_pkts);
> > +
> > +	return gve_tx_burst_ra(tx_queue, tx_pkts, nb_pkts);
> > +}
> > +
>
> [copy/paste from previous version]
>
> Can there be mix of queue types?
> If only one queue type is supported in specific config, perhaps burst
> function can be set during configuration, to prevent if check on datapath.
>
> This is optimization and can be done later, it doesn't have to be in the
> set.

The exact queue type can be fetched from the backend via adminq in
priv->queue_format, so there won't be a mix of queue types. Currently,
only the GQI_QPL and GQI_RDA queue formats are supported in the PMD, and
only GQI_QPL is in use on GCP, since GQI_RDA hasn't been released in
production. This part of the code will be optimized/refactored later
when the DQO_RDA queue type is added. Thanks!