On Tue,  5 Jan 2016 16:12:55 -0800
Yong Wang <yongwang at vmware.com> wrote:

> @@ -365,6 +366,14 @@ vmxnet3_xmit_pkts(void *tx_queue, struct rte_mbuf 
> **tx_pkts,
>                       break;
>               }
>  
> +             if (rte_pktmbuf_pkt_len(txm) <= VMXNET3_HDR_COPY_SIZE) {
> +                     struct Vmxnet3_TxDataDesc *tdd;
> +
> +                     tdd = txq->data_ring.base + txq->cmd_ring.next2fill;
> +                     copy_size = rte_pktmbuf_pkt_len(txm);
> +                     rte_memcpy(tdd->data, rte_pktmbuf_mtod(txm, char *), 
> copy_size);
> +             }

Good idea to use a local data region, which optimizes the copy on the host
side, but this implementation needs to be more general.

As written it is broken for multi-segment packets. For a multi-segment
packet, pkt_len is the total across all segments, but the first segment
only holds data_len bytes, so copying pkt_len bytes from it reads past
the end of that segment. For example:
  m -> nb_segs=3, pkt_len=1200, data_len=200
    -> data_len=900
    -> data_len=100

There are two ways to fix this. You could test for nb_segs == 1, or
better yet, optimize per segment: it may be that the first segment
(or a tail segment) would fit in the available data area.
