Hello,
I've been using DPDK for a while and have now encountered the following issue:
when I try to run two primary processes on the same host (with the --no-shconf
option enabled), respectively sending packets on one port and receiving them
on a different port (the two ports are directly connected with a
Hi,
I'm working on TSO for the 82599, and have encountered a problem: there is
nowhere to store the MSS.
TSO must be aware of the MSS, like GSO with the kernel's skb.
But the MSS needs 16 bits per mbuf, and we have no spare 16 bits in
rte_mbuf or rte_pktmbuf.
If we add a 16-bit field to rte_pktmbuf, the size of rte_mbuf will be
doubled,
On Fri, 4 Oct 2013 13:47:02 +0200
Walter de Donato wrote:
> Hello,
>
> I've been using DPDK for a while and now I encountered the following issue:
> when I try to run two primary processes on the same host (with --no-shconf
> option enabled) respectively sending packets on one port and receiving
On Fri, 4 Oct 2013 15:44:19 +0300
jigsaw wrote:
> Hi,
>
> I'm working on TSO for 82599, and encounter a problem: nowhere to store MSS.
>
> TSO must be aware of MSS, or gso in skb of kernel.
> But MSS needs 16 bits per mbuf. And we have no spare 16 bits in
> rte_mbuf or rte_pktmbuf.
> If we add 1
> -Original Message-
> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Stephen
> Hemminger
> Sent: Friday, October 04, 2013 5:39 PM
> To: Walter de Donato
> Cc: dev at dpdk.org
> Subject: Re: [dpdk-dev] Multi-process on the same host
>
> On Fri, 4 Oct 2013 13:47:02 +0200
> Walter d
This patch is a draft of TSO on 82599. That is, it is not expected to be
accepted as is.
The problem is where to put the mss field. In this patch, the mss is put in
the union of hash in rte_pktmbuf. It is not the best place, but it is quite
convenient, since the hash is not used in the TX path.
The id
Add support for TCP/UDP segment offload on the 82599.
Users can turn on TSO by setting the MSS in the first frame.
Meanwhile, the L2 and L3 lengths, together with the offload flags, must be set
in the first frame accordingly; otherwise the driver will abort the send.
---
 lib/librte_mbuf/rte_mbuf.h | 6 +
Hi Stephen,
Thanks for comment. Pls check the other thread that I just posted.
thx &
rgds,
-Qinglai
On Fri, Oct 4, 2013 at 7:41 PM, Stephen Hemminger
wrote:
> On Fri, 4 Oct 2013 15:44:19 +0300
> jigsaw wrote:
>
>> Hi,
>>
>> I'm working on TSO for 82599, and encounter a problem: nowhere to stor
On Fri, 4 Oct 2013 20:06:52 +0300
Qinglai Xiao wrote:
> This patch is a draft of TSO on 82599. That is, it is not expected to be
> accepted as is.
> The problem is where to put the mss field. In this patch, the mss is put in
> the union of hash in rte_pktmbuf. It is not the best place, but it is
Hi,
If you are not using SRIOV or direct device assignment to the VM, your traffic
hits the vSwitch (via the VMware native ixgbe driver and network stack) in ESX
and is switched to your E1000/VMXNET3 interface connected to the VM. The
vSwitch is not optimized for PMD at present so you would get optimal perfo
Hi Stephen,
>>This will work for local generated packets but overlapping existing field
>>won't work well for forwarding.
So adding a new mss field in the mbuf could be the way out? Or am I
misunderstanding something?
>> What we want to be able to do is to take offload (jumbo) packets in with
>> from vi
Correction: "you would NOT get optimal performance benefit having PMD"
Thanks,
Rashmin
-Original Message-
From: dev [mailto:dev-boun...@dpdk.org] On Behalf Of Patel, Rashmin N
Sent: Friday, October 04, 2013 10:47 AM
To: Selvaganapathy Chidambaram
Cc: dev at dpdk.org
Subject: Re: [dpdk-dev]
Stephen,
Agree. Growing to two cache lines is an inevitability. Re-organizing the mbuf a
bit to alleviate some of the immediate space pressure with as minimal a
performance impact as possible (including separating the QoS fields out
completely into their own separate area) is a good idea - the first cache line
On Fri, 4 Oct 2013 20:54:31 +0300
jigsaw wrote:
> Hi Stephen,
>
>
> >>This will work for local generated packets but overlapping existing field
> >>won't work well for forwarding.
> So adding a new mss field in mbuf could be the way out? or I
> misunderstand something.
>
> >> What we want to
Stephen,
Agree on the checksum flag definition. I'm presuming that we should do this on
the L3 and L4 checksums separately (that ol_flags field is another one that
needs extension in the mbuf).
Regards,
-Venky
-Original Message-
From: dev [mailto:dev-boun...@dpdk.org] On Behalf Of
Hi Stephen,
Thanks for showing a bigger picture.
GSO is quite a big implementation; I think it won't be easily
ported to DPDK. The mbuf would need to be equipped with many fields from
the skb to be able to deal with GSO.
Do you have a plan to port GSO to DPDK, or would you like to keep
GSO in scope o
Thanks a lot Bruce,
I started looking at the multi-process examples - where this case is not
considered - and I missed that section in the programmer's guide.
Regards,
-Walter
Walter de Donato, Ph.D.
PostDoc @ Department of Electrical Engineering and Information Technologies
University of Napoli
On Fri, 4 Oct 2013 22:10:33 +0300
jigsaw wrote:
> Hi Stephen,
>
> Thanks for showing a bigger picture.
>
> GSO is quite big implementation, that I think it won't be easily
> ported to DPDK. The mbuf needs to be equipped with many fields from
> skb to be able to deal with GSO.
> Do you have the
Thanks Rashmin for your time and help!
So it looks like with the given hardware config, we could probably only
achieve around 8 Gbps in a VM without using SRIOV. Once DPDK is used in
the vSwitch design, we could gain more performance.
Thanks,
Selvaganapathy.C.
On Fri, Oct 4, 2013 at 11:02 AM, Patel,