> -----Original Message-----
> From: Ferruh Yigit <ferruh.yi...@amd.com>
> Sent: Thursday, October 6, 2022 22:25
> To: Guo, Junfeng <junfeng....@intel.com>; Zhang, Qi Z
> <qi.z.zh...@intel.com>; Wu, Jingjing <jingjing...@intel.com>
> Cc: ferruh.yi...@xilinx.com; dev@dpdk.org; Li, Xiaoyun
> <xiaoyun...@intel.com>; awogbem...@google.com; Richardson, Bruce
> <bruce.richard...@intel.com>; Lin, Xueqin <xueqin....@intel.com>
> Subject: Re: [PATCH v4 7/9] net/gve: add support for Rx/Tx
>
> On 9/27/2022 8:32 AM, Junfeng Guo wrote:
>
> >
> > Add Rx/Tx of GQI_QPL queue format and GQI_RDA queue format.
> >
> > Signed-off-by: Xiaoyun Li <xiaoyun...@intel.com>
> > Signed-off-by: Junfeng Guo <junfeng....@intel.com>
>
> <...>
>
> > --- a/drivers/net/gve/gve_ethdev.c
> > +++ b/drivers/net/gve/gve_ethdev.c
> > @@ -583,6 +583,11 @@ gve_dev_init(struct rte_eth_dev *eth_dev)
> > if (err)
> > return err;
> >
> > + if (gve_is_gqi(priv)) {
> > + eth_dev->rx_pkt_burst = gve_rx_burst;
> > + eth_dev->tx_pkt_burst = gve_tx_burst;
> > + }
> > +
>
> What do you think to add a log here for 'else' case, to inform user why
> datapath is not working?
Agreed, makes sense!
Currently only one queue format (i.e., the QPL mode) is supported on the
GCP env. Will add a log in the else case to inform users of this. Thanks!
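A minimal sketch of what we plan (the else-branch wording is
illustrative, assuming the driver's PMD_DRV_LOG macro from gve_logs.h):

	if (gve_is_gqi(priv)) {
		eth_dev->rx_pkt_burst = gve_rx_burst;
		eth_dev->tx_pkt_burst = gve_tx_burst;
	} else {
		/* No datapath for the DQO queue format yet; warn instead
		 * of leaving the burst handlers silently unset. */
		PMD_DRV_LOG(ERR,
			    "only the GQI_QPL/GQI_RDA queue formats have Rx/Tx support");
	}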
>
> <...>
>
> > +uint16_t
> > +gve_rx_burst(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
> > +{
> > + volatile struct gve_rx_desc *rxr, *rxd;
> > + struct gve_rx_queue *rxq = rx_queue;
> > + uint16_t rx_id = rxq->rx_tail;
> > + struct rte_mbuf *rxe;
> > + uint16_t nb_rx, len;
> > + uint64_t addr;
> > +
> > + rxr = rxq->rx_desc_ring;
> > +
> > + for (nb_rx = 0; nb_rx < nb_pkts; nb_rx++) {
> > + rxd = &rxr[rx_id];
> > + if (GVE_SEQNO(rxd->flags_seq) != rxq->expected_seqno)
> > + break;
> > +
> > + if (rxd->flags_seq & GVE_RXF_ERR)
> > + continue;
> > +
> > + len = rte_be_to_cpu_16(rxd->len) - GVE_RX_PAD;
> > + rxe = rxq->sw_ring[rx_id];
> > + rxe->data_off = RTE_PKTMBUF_HEADROOM;
> > + if (rxq->is_gqi_qpl) {
> > + addr = (uint64_t)(rxq->qpl->mz->addr) + rx_id * PAGE_SIZE + GVE_RX_PAD;
> > + rte_memcpy((void *)((size_t)rxe->buf_addr + rxe->data_off),
> > + (void *)(size_t)addr, len);
>
> Why a 'memcpy' is needed? Can't it DMA to mbuf data buffer?
Well, only the QPL (queue page list) mode is supported on the current GCP
env. In QPL mode the device can only DMA into the pre-registered page
list, not directly into arbitrary mbuf data buffers, so the copy from the
page into the mbuf is needed here.
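For context, each Rx descriptor slot owns a fixed page of the registered
memzone, so the payload address is plain index arithmetic; a sketch with
the names from the diff above:

	/* Slot rx_id's payload sits at page rx_id of the QPL memzone,
	 * past the pad the device prepends to each Rx packet. */
	void *payload = RTE_PTR_ADD(rxq->qpl->mz->addr,
				    (size_t)rx_id * PAGE_SIZE + GVE_RX_PAD);
	rte_memcpy(rte_pktmbuf_mtod(rxe, void *), payload, len);

The copy is the price of QPL: the device only ever writes into those
pre-registered pages.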
>
> > + }
> > + rxe->nb_segs = 1;
> > + rxe->next = NULL;
> > + rxe->pkt_len = len;
> > + rxe->data_len = len;
> > + rxe->port = rxq->port_id;
> > + rxe->packet_type = 0;
> > + rxe->ol_flags = 0;
> > +
>
> As far as I can see 'sw_ring[]' is filled using the
> 'rte_pktmbuf_alloc_bulk()' API, which should reset mbuf fields to their
> default values, so some of the assignments above can be redundant.
Yes, some of these fields are already set by 'rte_pktmbuf_reset()' during
allocation. Will remove the redundant ones in the coming version. Thanks!
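Concretely, after 'rte_pktmbuf_alloc_bulk()' the per-mbuf init should
shrink to something like this (a sketch; 'rte_pktmbuf_reset()' already
set data_off, nb_segs = 1, next = NULL, ol_flags = 0 and
packet_type = 0):

	/* Only the per-packet fields remain to be filled in. */
	rxe->pkt_len = len;
	rxe->data_len = len;
	rxe->port = rxq->port_id;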
>
> > + if (rxd->flags_seq & GVE_RXF_TCP)
> > + rxe->packet_type |= RTE_PTYPE_L4_TCP;
> > + if (rxd->flags_seq & GVE_RXF_UDP)
> > + rxe->packet_type |= RTE_PTYPE_L4_UDP;
> > + if (rxd->flags_seq & GVE_RXF_IPV4)
> > + rxe->packet_type |= RTE_PTYPE_L3_IPV4;
> > + if (rxd->flags_seq & GVE_RXF_IPV6)
> > + rxe->packet_type |= RTE_PTYPE_L3_IPV6;
> > +
>
> If you are setting packet_type, it is better to implement the
> 'dev_supported_ptypes_get()' dev_ops too, to announce to applications
> which packet types the driver can parse (+ the 'dev_ptypes_set()'
> dev_ops).
> And later the driver can announce the "Packet type parsing" feature in
> the .ini file.
Well, on the current GCP env, the admin-queue APIs for getting/setting
the supported ptypes have not been exposed, even in the base code. The
only one present in the base code is for the DQO mode
(gve_adminq_get_ptype_map_dqo), and it cannot be used on the current GCP
env either. We can only implement this once they are supported and
exposed at GCP. Thanks!
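That said, the four ptypes set in software above are known locally, so a
static table would be enough once we wire the dev_ops up; a sketch (the
function name is ours, using the ethdev callback signature of this
release):

	static const uint32_t *
	gve_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
	{
		/* Mirrors the GVE_RXF_* flags translated in gve_rx_burst(). */
		static const uint32_t ptypes[] = {
			RTE_PTYPE_L3_IPV4,
			RTE_PTYPE_L3_IPV6,
			RTE_PTYPE_L4_TCP,
			RTE_PTYPE_L4_UDP,
			RTE_PTYPE_UNKNOWN
		};

		return ptypes;
	}

and in the eth_dev_ops struct:

	.dev_supported_ptypes_get = gve_dev_supported_ptypes_get,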