> -----Original Message-----
> From: Ferruh Yigit <ferruh.yi...@amd.com>
> Sent: Thursday, October 20, 2022 22:47
> To: Guo, Junfeng <junfeng....@intel.com>; Zhang, Qi Z
> <qi.z.zh...@intel.com>; Wu, Jingjing <jingjing...@intel.com>; Xing,
> Beilei <beilei.x...@intel.com>
> Cc: dev@dpdk.org; Li, Xiaoyun <xiaoyun...@intel.com>;
> awogbem...@google.com; Richardson, Bruce
> <bruce.richard...@intel.com>; hemant.agra...@nxp.com;
> step...@networkplumber.org; Xia, Chenbo <chenbo....@intel.com>;
> Zhang, Helin <helin.zh...@intel.com>
> Subject: Re: [PATCH v6 8/8] net/gve: add support for Rx/Tx
> 
> On 10/20/2022 11:36 AM, Junfeng Guo wrote:
> 
> >
> > Add Rx/Tx of GQI_QPL queue format and GQI_RDA queue format.
> >
> > Signed-off-by: Xiaoyun Li <xiaoyun...@intel.com>
> > Signed-off-by: Junfeng Guo <junfeng....@intel.com>
> 
> <...>
> 
> > +uint16_t
> > +gve_rx_burst(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
> > +{
> > +       volatile struct gve_rx_desc *rxr, *rxd;
> > +       struct gve_rx_queue *rxq = rx_queue;
> > +       uint16_t rx_id = rxq->rx_tail;
> > +       struct rte_mbuf *rxe;
> > +       uint16_t nb_rx, len;
> > +       uint64_t addr;
> > +       uint16_t i;
> > +
> > +       rxr = rxq->rx_desc_ring;
> > +       nb_rx = 0;
> > +
> > +       for (i = 0; i < nb_pkts; i++) {
> > +               rxd = &rxr[rx_id];
> > +               if (GVE_SEQNO(rxd->flags_seq) != rxq->expected_seqno)
> > +                       break;
> > +
> > +               if (rxd->flags_seq & GVE_RXF_ERR)
> > +                       continue;
> > +
> > +               len = rte_be_to_cpu_16(rxd->len) - GVE_RX_PAD;
> > +               rxe = rxq->sw_ring[rx_id];
> > +               if (rxq->is_gqi_qpl) {
> > +                       addr = (uint64_t)(rxq->qpl->mz->addr) + rx_id * PAGE_SIZE + GVE_RX_PAD;
> > +                       rte_memcpy((void *)((size_t)rxe->buf_addr + rxe->data_off),
> > +                                  (void *)(size_t)addr, len);
> > +               }
> > +               rxe->pkt_len = len;
> > +               rxe->data_len = len;
> > +               rxe->port = rxq->port_id;
> > +               rxe->ol_flags = 0;
> > +
> > +               if (rxd->flags_seq & GVE_RXF_TCP)
> > +                       rxe->packet_type |= RTE_PTYPE_L4_TCP;
> > +               if (rxd->flags_seq & GVE_RXF_UDP)
> > +                       rxe->packet_type |= RTE_PTYPE_L4_UDP;
> > +               if (rxd->flags_seq & GVE_RXF_IPV4)
> > +                       rxe->packet_type |= RTE_PTYPE_L3_IPV4;
> > +               if (rxd->flags_seq & GVE_RXF_IPV6)
> > +                       rxe->packet_type |= RTE_PTYPE_L3_IPV6;
> > +
> > +               if (gve_needs_rss(rxd->flags_seq)) {
> > +                       rxe->ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
> > +                       rxe->hash.rss = rte_be_to_cpu_32(rxd->rss_hash);
> 
> You are updating "m->hash.rss" anyway, and if this comes without any
> cost you can force-enable the offload as done in the previous version:
> 'dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;'

Yes, it seems RSS is enabled by default with no obvious perf loss, and
there is no separate RSS init stage. We will also force-enable this
offload in the dev configure stage. Thanks!
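
For reference, a minimal sketch of what that could look like, assuming
the driver's configure callback is named gve_dev_configure (the name
here is illustrative, not final):

    #include <rte_ethdev.h>

    static int
    gve_dev_configure(struct rte_eth_dev *dev)
    {
            /* The device always writes the RSS hash, so advertise the
             * offload unconditionally; applications then get a valid
             * m->hash.rss without explicitly requesting it. */
            dev->data->dev_conf.rxmode.offloads |=
                    RTE_ETH_RX_OFFLOAD_RSS_HASH;

            return 0;
    }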

> 
> <...>
> 
> > +static inline void
> > +gve_free_bulk_mbuf(struct rte_mbuf **txep, int num)
> > +{
> > +       struct rte_mbuf *m, *free[GVE_TX_MAX_FREE_SZ];
> > +       int nb_free = 0;
> > +       int i, s;
> > +
> > +       if (unlikely(num == 0))
> > +               return;
> > +
> > +       /* Find the 1st mbuf which needs to be freed */
> > +       for (s = 0; s < num; s++) {
> > +               if (txep[s] != NULL) {
> > +                       m = rte_pktmbuf_prefree_seg(txep[s]);
> > +                       if (m != NULL)
> > +                               break;
> > +                       }
> 
> '}' indentation is wrong.

Thanks for the catch! Will update in the coming version.
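
For clarity, the loop above with the closing brace aligned to its
'if' would look like this (otherwise unchanged):

    for (s = 0; s < num; s++) {
            if (txep[s] != NULL) {
                    m = rte_pktmbuf_prefree_seg(txep[s]);
                    if (m != NULL)
                            break;
            }
    }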

> 
> <...>
> 
> > +static inline void
> > +gve_tx_clean_swr_qpl(struct gve_tx_queue *txq)
> > +{
> > +       uint32_t start = txq->sw_ntc;
> > +       uint32_t ntc, nb_clean;
> > +
> > +       ntc = txq->sw_tail;
> > +
> > +       if (ntc == start)
> > +               return;
> > +
> > +       /* if wrap around, free twice. */
> > +       if (ntc < start) {
> > +               nb_clean = txq->nb_tx_desc - start;
> > +               if (nb_clean > GVE_TX_MAX_FREE_SZ)
> > +                       nb_clean = GVE_TX_MAX_FREE_SZ;
> > +               gve_free_bulk_mbuf(&txq->sw_ring[start], nb_clean);
> > +
> > +               txq->sw_nb_free += nb_clean;
> > +               start += nb_clean;
> > +               if (start == txq->nb_tx_desc)
> > +                       start = 0;
> > +               txq->sw_ntc = start;
> > +       }
> > +
> > +       if (ntc > start) {
> 
> Maybe you can drop the 'if' block, since the "ntc == start" and
> "ntc < start" cases are already covered.

Sure, will drop this 'if' in the coming version. Thanks!
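
As a sketch, the tail of gve_tx_clean_swr_qpl() with the 'if' dropped
could look as below. The body is assumed to mirror the wrap-around
branch above (the hunk elides it), and it relies on the wrap-around
clean having reached the ring end so that 'start' trails 'ntc' again:

    nb_clean = ntc - start;
    if (nb_clean > GVE_TX_MAX_FREE_SZ)
            nb_clean = GVE_TX_MAX_FREE_SZ;
    gve_free_bulk_mbuf(&txq->sw_ring[start], nb_clean);
    txq->sw_nb_free += nb_clean;
    txq->sw_ntc = start + nb_clean;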

> 
> <...>
> 
> > +uint16_t
> > +gve_tx_burst(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
> > +{
> > +       struct gve_tx_queue *txq = tx_queue;
> > +
> > +       if (txq->is_gqi_qpl)
> > +               return gve_tx_burst_qpl(tx_queue, tx_pkts, nb_pkts);
> > +
> > +       return gve_tx_burst_ra(tx_queue, tx_pkts, nb_pkts);
> > +}
> > +
> 
> Can there be a mix of queue types?
> If only one queue type is supported in a specific config, perhaps the
> burst function can be set during configuration, to avoid the 'if'
> check in the datapath.
> 
> This is an optimization and can be done later; it doesn't have to be
> in this set.

Maybe not. There are three queue format types, and we can get the one
actually in use via the adminq with 'priv->queue_format', so there
should be no mix of queue types. Currently only the GQI_QPL and GQI_RDA
queue formats are supported in the PMD, and only GQI_QPL is in use on
GCP since GQI_RDA hasn't been released in production.

Will do some refactoring for the queue types later. Thanks for the advice!
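
For illustration, a hedged sketch of selecting the burst function once
at setup time instead of branching per call; the helper name
gve_set_tx_function is hypothetical, and the format enum follows the
adminq discussion above:

    static void
    gve_set_tx_function(struct rte_eth_dev *dev)
    {
            struct gve_priv *priv = dev->data->dev_private;

            /* 'priv->queue_format' is negotiated once via the adminq,
             * so the choice cannot change at runtime. */
            if (priv->queue_format == GVE_GQI_QPL_FORMAT)
                    dev->tx_pkt_burst = gve_tx_burst_qpl;
            else
                    dev->tx_pkt_burst = gve_tx_burst_ra;
    }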
