Hi Vipin,
> -----Original Message-----
> From: Varghese, Vipin
> Sent: Tuesday, December 4, 2018 1:42 PM
> To: Lu, Wenzhuo <wenzhuo...@intel.com>; dev@dpdk.org
> Cc: Lu, Wenzhuo <wenzhuo...@intel.com>; Yang, Qiming
> <qiming.y...@intel.com>; Li, Xiaoyun <xiaoyun...@intel.com>; Wu, Jingjing
> <jingjing...@intel.com>
> Subject: RE: [dpdk-dev] [PATCH v2 16/20] net/ice: support basic RX/TX
> 
> snipped
> > +uint16_t
> > +ice_recv_pkts(void *rx_queue,
> > +         struct rte_mbuf **rx_pkts,
> > +         uint16_t nb_pkts)
> > +{
> > +   struct ice_rx_queue *rxq = rx_queue;
> > +   volatile union ice_rx_desc *rx_ring = rxq->rx_ring;
> > +   volatile union ice_rx_desc *rxdp;
> > +   union ice_rx_desc rxd;
> > +   struct ice_rx_entry *sw_ring = rxq->sw_ring;
> > +   struct ice_rx_entry *rxe;
> > +   struct rte_mbuf *nmb; /* new allocated mbuf */
> > +   struct rte_mbuf *rxm; /* pointer to store old mbuf in SW ring */
> > +   uint16_t rx_id = rxq->rx_tail;
> > +   uint16_t nb_rx = 0;
> > +   uint16_t nb_hold = 0;
> > +   uint16_t rx_packet_len;
> > +   uint32_t rx_status;
> > +   uint64_t qword1;
> > +   uint64_t dma_addr;
> > +   uint64_t pkt_flags = 0;
> > +   uint32_t *ptype_tbl = rxq->vsi->adapter->ptype_tbl;
> > +   struct rte_eth_dev *dev;
> > +
> > +   while (nb_rx < nb_pkts) {
> > +           rxdp = &rx_ring[rx_id];
> > +           qword1 = rte_le_to_cpu_64(rxdp->wb.qword1.status_error_len);
> > +           rx_status = (qword1 & ICE_RXD_QW1_STATUS_M) >>
> > +                       ICE_RXD_QW1_STATUS_S;
> > +
> > +           /* Check the DD bit first */
> > +           if (!(rx_status & (1 << ICE_RX_DESC_STATUS_DD_S)))
> > +                   break;
> > +
> > +           /* allocate mbuf */
> > +           nmb = rte_mbuf_raw_alloc(rxq->mp);
> > +           if (unlikely(!nmb)) {
> > +                   dev = ICE_VSI_TO_ETH_DEV(rxq->vsi);
> > +                   dev->data->rx_mbuf_alloc_failed++;
> > +                   break;
> > +           }
> 
> Should we check whether the received packet length is greater than the mbuf
> pkt_len, and in that case do a bulk alloc with n_segs?
We cannot do that here: this is the fast path, and an extra per-packet check
would hurt performance badly. Instead, the check is done beforehand and the
right RX function is chosen at setup time.
Normally n_segs is supported by default.
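For illustration only, that setup-time selection could look roughly like the
sketch below. This is not the patch code; ice_set_rx_function,
ice_recv_scattered_pkts and the rx_buf_len field are assumptions used here
purely to show the idea:

    /* Hypothetical sketch: pick the RX burst handler once, outside the
     * fast path, based on whether a max-sized frame fits in one mbuf. */
    static void
    ice_set_rx_function(struct rte_eth_dev *dev)
    {
            struct ice_rx_queue *rxq = dev->data->rx_queues[0];
            uint32_t max_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;

            if (dev->data->scattered_rx || max_len > rxq->rx_buf_len)
                    /* multi-segment receive, assumed handler */
                    dev->rx_pkt_burst = ice_recv_scattered_pkts;
            else
                    /* single-mbuf fast path shown in this patch */
                    dev->rx_pkt_burst = ice_recv_pkts;
    }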

> 
> > +           rxd = *rxdp; /* copy descriptor in ring to temp variable*/
> > +
> > +           nb_hold++;
> > +           rxe = &sw_ring[rx_id]; /* get corresponding mbuf in SW ring */
> > +           rx_id++;
> > +           if (unlikely(rx_id == rxq->nb_rx_desc))
> > +                   rx_id = 0;
> > +           rxm = rxe->mbuf;
> > +           rxe->mbuf = nmb;
> > +           dma_addr =
> > +                   rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb));
> > +
> > +           /**
> > +            * fill the read format of descriptor with physical address in
> > +            * newly allocated mbuf: nmb
> > +            */
> > +           rxdp->read.hdr_addr = 0;
> > +           rxdp->read.pkt_addr = dma_addr;
> > +
> > +           /* calculate rx_packet_len of the received pkt */
> > +           rx_packet_len = ((qword1 & ICE_RXD_QW1_LEN_PBUF_M) >>
> > +                           ICE_RXD_QW1_LEN_PBUF_S) - rxq->crc_len;
> > +
> > +           /* fill old mbuf with received descriptor: rxd */
> > +           rxm->data_off = RTE_PKTMBUF_HEADROOM;
> > +           rte_prefetch0(RTE_PTR_ADD(rxm->buf_addr, RTE_PKTMBUF_HEADROOM));
> > +           rxm->nb_segs = 1;
> 
> Same comment as above: multi-segment alloc for larger packets, or for a
> smaller pkt_len in the mempool?
> 
> Snipped
