[dpdk-dev] [PATCH v3 1/4] vmxnet3: restore tx data ring support

2016-01-13 Thread Yong Wang
On 1/5/16, 4:48 PM, "Stephen Hemminger" wrote:
>On Tue, 5 Jan 2016 16:12:55 -0800
>Yong Wang wrote:
>
>> @@ -365,6 +366,14 @@ vmxnet3_xmit_pkts(void *tx_queue, struct rte_mbuf
>> **tx_pkts,
>> break;
>> }
>>
>> +if (rte_pktmbuf_pkt_len(txm) <= V

[dpdk-dev] [PATCH v3 1/4] vmxnet3: restore tx data ring support

2016-01-12 Thread Stephen Hemminger
On Wed, 13 Jan 2016 02:20:01 + Yong Wang wrote:
> >Good idea to use a local region which optimizes the copy in the host,
> >but this implementation needs to be more general.
> >
> >As written it is broken for multi-segment packets. A multi-segment
> >packet will have a pktlen >= datalen as in:
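The objection above can be illustrated with a minimal sketch. The struct and function names below are assumptions for illustration only (a mock of the relevant `rte_mbuf` fields, not the real DPDK definitions, with a stand-in for `VMXNET3_HDR_COPY_SIZE`): a guard on `pkt_len` alone also admits multi-segment chains, whose first segment holds only `data_len` bytes of the `pkt_len` total, so a copy of the first segment would truncate the packet. A safer guard also requires a single segment.

```c
#include <assert.h>
#include <stdint.h>

/* Mock of the relevant rte_mbuf fields -- an assumption for
 * illustration, not the real DPDK definition. */
struct mock_mbuf {
    uint32_t pkt_len;   /* total length across all segments */
    uint16_t data_len;  /* length of this segment only */
    uint16_t nb_segs;   /* number of segments in the chain */
};

#define HDR_COPY_SIZE 128   /* stands in for VMXNET3_HDR_COPY_SIZE */

/* Broken guard: pkt_len alone also matches multi-segment packets,
 * whose first segment may hold fewer than pkt_len bytes. */
static int can_use_data_ring_broken(const struct mock_mbuf *m)
{
    return m->pkt_len <= HDR_COPY_SIZE;
}

/* Safer guard: only a single-segment packet (pkt_len == data_len)
 * can be copied wholesale into the tx data ring. */
static int can_use_data_ring_fixed(const struct mock_mbuf *m)
{
    return m->nb_segs == 1 && m->pkt_len <= HDR_COPY_SIZE;
}
```

A two-segment packet with `pkt_len = 100` and a 60-byte first segment passes the broken guard but is correctly rejected by the fixed one.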

[dpdk-dev] [PATCH v3 1/4] vmxnet3: restore tx data ring support

2016-01-05 Thread Stephen Hemminger
On Tue, 5 Jan 2016 16:12:55 -0800 Yong Wang wrote:
> @@ -365,6 +366,14 @@ vmxnet3_xmit_pkts(void *tx_queue, struct rte_mbuf
> **tx_pkts,
> break;
> }
>
> + if (rte_pktmbuf_pkt_len(txm) <= VMXNET3_HDR_COPY_SIZE) {
> + struct V

[dpdk-dev] [PATCH v3 1/4] vmxnet3: restore tx data ring support

2016-01-05 Thread Yong Wang
Tx data ring support was removed in a previous change that added multi-segment transmit. This change adds it back. According to the original commit (2e849373), the 64B packet rate with l2fwd improved by ~20% on an Ivy Bridge server, at which point we start to hit a bottleneck on the rx side. I also re-did th
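The data-ring idea motivating the patch can be sketched as follows. All types and names here are hypothetical simplifications, not the driver's actual code: each tx slot owns a small preallocated buffer, and a packet at or below the copy threshold is memcpy'd into it so the host reads one contiguous region instead of performing gather DMA from the guest mbuf.

```c
#include <stdint.h>
#include <string.h>

#define DATA_RING_SLOT_SIZE 128   /* stands in for VMXNET3_HDR_COPY_SIZE */
#define DATA_RING_SLOTS     4     /* toy ring depth for illustration */

/* Hypothetical, simplified tx data ring: one fixed-size buffer per slot. */
struct data_ring {
    uint8_t buf[DATA_RING_SLOTS][DATA_RING_SLOT_SIZE];
    unsigned next;
};

/* Copy a small packet into the next ring slot and return a pointer to
 * the contiguous copy, or NULL if it does not fit (in which case the
 * driver would fall back to gather DMA from the mbuf chain). */
static uint8_t *
data_ring_put(struct data_ring *r, const uint8_t *pkt, size_t len)
{
    uint8_t *slot;

    if (len > DATA_RING_SLOT_SIZE)
        return NULL;

    slot = r->buf[r->next];
    memcpy(slot, pkt, len);
    r->next = (r->next + 1) % DATA_RING_SLOTS;
    return slot;
}
```

The win reported above comes from the host side: for small (e.g. 64B) packets, reading one contiguous copy is cheaper than mapping and fetching a guest buffer per descriptor.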