Hi Olivier,
> -Original Message-
> From: Olivier Matz [mailto:olivier.m...@6wind.com]
> Sent: Friday, March 3, 2017 8:45 AM
> To: Venkatesan, Venky
> Cc: Richardson, Bruce ; dev@dpdk.org;
> thomas.monja...@6wind.com; Ananyev, Konstantin
> ; Lu, Wenzhuo ;
> Zha
> -Original Message-
> From: dev [mailto:dev-boun...@dpdk.org] On Behalf Of Olivier Matz
> Sent: Thursday, March 2, 2017 8:15 AM
> To: Richardson, Bruce
> Cc: dev@dpdk.org; thomas.monja...@6wind.com; Ananyev, Konstantin
> ; Lu, Wenzhuo ;
> Zhang, Helin ; Wu, Jingjing
> ; adrien.mazarg...
> -Original Message-
> From: Olivier Matz [mailto:olivier.matz at 6wind.com]
> Sent: Friday, March 25, 2016 3:56 AM
> To: Venkatesan, Venky ; Lazaros Koromilas
> ; Wiles, Keith
> Cc: dev at dpdk.org
> Subject: Re: [dpdk-dev] [PATCH] mempool: allow for user-owned mem
> -Original Message-
> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Lazaros Koromilas
> Sent: Thursday, March 24, 2016 7:36 AM
> To: Wiles, Keith
> Cc: Olivier Matz ; dev at dpdk.org
> Subject: Re: [dpdk-dev] [PATCH] mempool: allow for user-owned mempool
> caches
>
> On Mon,
> -Original Message-
> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Olivier MATZ
> Sent: Friday, February 19, 2016 5:31 AM
> To: Hunt, David ; dev at dpdk.org
> Subject: Re: [dpdk-dev] [PATCH 2/6] mempool: add stack (lifo) based
> external mempool handler
>
> Hi David,
>
> On 0
On 10/13/2015 7:47 AM, Sanford, Robert wrote:
[Robert:]
1. The 82599 device supports up to 128 queues. Why do we see trouble
with as few as 5 queues? What could prevent the system (and one port
controlled by 5+ cores) from receiving at line rate without loss?
2. As far
NAK. This causes more (unsuccessful) cleanup attempts on the descriptor ring.
What is motivating this change?
Regards,
Venky
> On May 28, 2015, at 1:42 AM, Zoltan Kiss wrote:
>
> This check doesn't do what's required by rte_eth_tx_burst:
> "When the number of previously sent packets reached
> -Original Message-
> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Wiles, Keith
> Sent: Wednesday, May 27, 2015 7:51 AM
> To: Lin XU; dev at dpdk.org
> Subject: Re: [dpdk-dev] proposal: raw packet send and receive API for PMD
> driver
>
>
>
> On 5/26/15, 11:18 PM, "Lin XU" w
On 11/24/2014 5:28 AM, Bruce Richardson wrote:
> On Mon, Nov 24, 2014 at 02:19:16PM +0100, Thomas Monjalon wrote:
>> Hi Bruce and Neil,
>>
>> 2014-11-24 11:28, Bruce Richardson:
>>> On Sat, Nov 22, 2014 at 08:35:17PM -0500, Neil Horman wrote:
On Sat, Nov 22, 2014 at 10:43:39PM +0100, Thomas M
On 9/30/2014 7:29 AM, Neil Horman wrote:
> On Tue, Sep 30, 2014 at 11:10:45AM +0000, Hiroshi Shimamoto wrote:
>> From: Hiroshi Shimamoto
>>
>> This patchset improves MEMNIC PMD performance.
>>
>> The first patch introduces a new benchmark test run in guest,
>> and will be used to evaluate the fol
Keith,
On 9/28/2014 11:04 AM, Wiles, Roger Keith wrote:
> I am also looking at the bulk dequeue routines, where the ring dequeue can be fixed
> or variable. On a fixed dequeue, < 0 is returned on error and 0 if successful. On a
> variable ring, < 0 is returned on error or n on success, but I think n can be zero in the
> vari
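For reference, the two return conventions being contrasted, sketched against
the rte_ring API of that era (later releases changed these signatures, so
treat this as illustrative):

#include <rte_ring.h>

static void
dequeue_conventions(struct rte_ring *r, void **objs, unsigned int n)
{
	/* Fixed ("bulk") dequeue: all-or-nothing. Returns 0 on success,
	 * negative on error; nothing is dequeued if fewer than n objects
	 * are available. */
	if (rte_ring_dequeue_bulk(r, objs, n) == 0) {
		/* got exactly n objects */
	}

	/* Variable ("burst") dequeue: returns how many objects were
	 * actually taken - and, as noted above, that count can be 0. */
	unsigned int got = rte_ring_dequeue_burst(r, objs, n);
	if (got == 0) {
		/* ring empty - not an error under this convention */
	}
}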
On 9/22/2014 6:33 PM, Matthew Hall wrote:
> On Tue, Sep 23, 2014 at 01:08:21AM +0000, Saha, Avik (AWS) wrote:
>> I was wondering if there is a way to use the rte_table_hash_lru without
>> building a pipeline - basically using the same hash-table-like functionality
>> of add, delete and lookup withou
On 9/18/2014 12:14 PM, Neil Horman wrote:
> On Thu, Sep 18, 2014 at 08:23:36PM +0200, Thomas Monjalon wrote:
>> Hi Neil,
>>
>> 2014-09-15 15:23, Neil Horman:
>>> The DPDK ABI develops and changes quickly, which makes it difficult for
>>> applications to keep up with the latest version of the librar
On 8/27/2014 7:35 AM, Thomas Monjalon wrote:
> Hi Jingjing,
>
> 2014-08-27 10:13, Jingjing Wu:
>> add structure definition to construct programming packet.
> What is a "programming packet"?
>
>> +#ifdef RTE_LIBRTE_I40E_PMD
>> +"i40e_flow_director_filter (port_id) (add|del)"
>>
DPDK currently isn't exactly poll mode - it has an API that receives and
transmits packets. How you enter that API could be interrupt or polled -
we've left that up to the application to decide, rather than force an
interrupt/NAPI type architecture. I do agree with Alex in that
implementing a int
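The polling choice described above looks like this in its minimal form
(standard ethdev calls; BURST_SIZE and the mbuf handling are placeholders,
and the exact integer types for port/queue ids have varied across releases):

#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SIZE 32

static void
poll_queue(uint16_t port_id, uint16_t queue_id)
{
	struct rte_mbuf *bufs[BURST_SIZE];
	uint16_t i, nb_rx;

	for (;;) {
		/* Nothing in the API forces this loop to spin; an
		 * application could equally block on an interrupt and
		 * call rte_eth_rx_burst() only when woken. */
		nb_rx = rte_eth_rx_burst(port_id, queue_id, bufs,
					 BURST_SIZE);
		for (i = 0; i < nb_rx; i++)
			rte_pktmbuf_free(bufs[i]); /* stand-in for real work */
	}
}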
On 8/1/2014 6:56 AM, Ananyev, Konstantin wrote:
>> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Neil Horman
>> Sent: Friday, August 01, 2014 2:37 PM
>> To: Richardson, Bruce
>> Cc: dev at dpdk.org
>> Subject: Re: [dpdk-dev] [PATCH 0/2] dpdk: Allow for dynamic enablement of
>> some isol
Neil,
Nice patch! One question - what gcc versions did you try this out on? We'll
round it out by checking the other versions.
Regards,
-Venky
-Original Message-
From: dev [mailto:dev-boun...@dpdk.org] On Behalf Of Neil Horman
Sent: Thursday, July 24, 2014 11:24 AM
To: dev at dpdk.org
On Fri, Jul 11, 2014 at 03:29:17PM +0000, Venkatesan, Venky wrote:
> > -Original Message-
> > From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of John W.
> > Linville
> > Sent: Friday, July 11, 2014 7:49 AM
> > To: Stephen Hemminger
> > Cc: dev at
> -Original Message-
> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of John W. Linville
> Sent: Friday, July 11, 2014 7:49 AM
> To: Stephen Hemminger
> Cc: dev at dpdk.org
> Subject: Re: [dpdk-dev] [PATCH] librte_pmd_packet: add PMD for
> AF_PACKET- based virtual devices
>
> On Fr
Olivier,
>> It's because we haven't gotten to testing the patch yet, and figuring out
>> all the problems. Putting it in and modifying MBUF needs a bit of time -
>> one other option that I've looked at is to let the transmit offload parts
>> (except for the VLAN) flow onto the second
-Venky
-Original Message-
From: Thomas Monjalon [mailto:thomas.monja...@6wind.com]
Sent: Thursday, May 22, 2014 8:02 AM
To: Ananyev, Konstantin; Shaw, Jeffrey B; Richardson, Bruce; Venkatesan, Venky;
nhorman at tuxdriver.com; stephen at networkplumber.org
Cc: Olivier Matz; dev at dpdk.or
From: Didier Pallard
Currently, if there is more memory in hugepages than the amount requested by
the DPDK application, the memory is allocated by taking as much memory as
possible from each socket, starting from the first one.
For example if a system is configured with 8 GB in 2 sockets (4 GB per sock
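A toy version of that greedy walk (illustrative numbers only, not the EAL
code):

#include <stdio.h>

int
main(void)
{
	unsigned int avail[2] = { 4096, 4096 }; /* MB free per socket */
	unsigned int request = 6144;            /* MB the app asks for */
	unsigned int s;

	for (s = 0; s < 2 && request > 0; s++) {
		unsigned int take = request < avail[s] ? request : avail[s];

		request -= take;
		printf("socket %u: %u MB\n", s, take);
	}
	/* Prints "socket 0: 4096 MB", "socket 1: 2048 MB": socket 0 is
	 * drained completely before socket 1 is touched. */
	return 0;
}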
-
From: Neil Horman [mailto:nhor...@tuxdriver.com]
Sent: Monday, May 12, 2014 11:40 AM
To: Venkatesan, Venky
Cc: Olivier MATZ; Thomas Monjalon; dev at dpdk.org
Subject: Re: [dpdk-dev] [PATCH RFC 06/11] mbuf: replace data pointer by an
offset
On Mon, May 12, 2014 at 04:06:23PM +0000, Venkatesan, Venky
Sent: Monday, May 12, 2014 8:07 AM
To: Neil Horman; Venkatesan, Venky
Cc: Thomas Monjalon; dev at dpdk.org
Subject: Re: [dpdk-dev] [PATCH RFC 06/11] mbuf: replace data pointer by an
offset
Hi Venky,
On 05/12/2014 04:41 PM, Neil Horman wrote:
>> This is a hugely problematic change, and has a pre
Olivier,
This is a hugely problematic change, and has a pretty large performance impact
(because of the dependency to compute and access). We debated this for a long
time during the early days of DPDK and decided against it. This is also a
repeated sequence - the driver will do it twice (Rx + Tx
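For readers following the thread, this is the dependency being objected to,
with illustrative field names (not the RFC's exact layout):

#include <stdint.h>

struct mbuf_with_ptr { void *buf_addr; void *data; };        /* before */
struct mbuf_with_off { void *buf_addr; uint16_t data_off; }; /* after  */

static void *
data_via_ptr(const struct mbuf_with_ptr *m)
{
	return m->data; /* one load */
}

static void *
data_via_off(const struct mbuf_with_off *m)
{
	/* Load buf_addr, load data_off, then add: a dependent compute
	 * on every access, repeated in both the Rx and Tx paths. */
	return (char *)m->buf_addr + m->data_off;
}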
David,
Sorry for the late response. Yes, your suggestion would work. Let's implement
it.
Regards,
-Venky
From: David Marchand [mailto:david.march...@6wind.com]
Sent: Monday, May 05, 2014 2:26 AM
To: Venkatesan, Venky
Cc: Burakov, Anatoly; dev at dpdk.org
Subject: Re: [dpdk-dev] [PATCH RFC
Olivier,
We should look at how to make the memseg capable of doing alloc/free (including
re-assembly of fragments) after the 1.7 release. Is that something you are
considering doing (or are there any other DPDKers considering this), or should
I look at putting together a patch for that?
Rega
Agree with Anatoly - I would much rather not change legacy option behaviour
that has existed for a while, especially when --socket-mem is available to do
exactly what is needed.
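An illustrative invocation (application, core mask and sizes are
placeholders):

  ./testpmd -c 0xf -n 4 --socket-mem 1024,1024

This explicitly asks for 1 GB from each of the two sockets, instead of
relying on the legacy -m behaviour.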
-Venky
-Original Message-
From: dev [mailto:dev-boun...@dpdk.org] On Behalf Of Burakov, Anatoly
Sent: Friday
Agree that the patch sets are a step towards fixing that, but there is a lot
more to be done on this. Could we start a discussion on what the "ideal"
abstraction should be? I'd like to pool those ideas into a formal proposal that
we can discuss and drive through a series of patches to make that happen.
One caveat - a compiler_barrier should be enough when both sides are using
strongly-ordered memory operations (as in the case of the rings). Weakly
ordered operations will still need fencing.
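Sketched in code (barrier macro names as in rte_atomic.h; treat this as
illustrative):

#include <stdint.h>
#include <rte_atomic.h>

volatile uint32_t prod_tail; /* ring-style producer index */

static void
publish(void **slot, void *obj, uint32_t next_tail)
{
	*slot = obj;

	/* With strongly-ordered stores (as the ring code relies on for
	 * IA), stopping the *compiler* from reordering is sufficient: */
	rte_compiler_barrier();

	/* A weakly-ordered CPU would need a real store fence here
	 * instead, e.g. rte_smp_wmb(). */
	prod_tail = next_tail;
}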
-Venky
-Original Message-
From: dev [mailto:dev-boun...@dpdk.org] On Behalf Of Stephen Hemminge
If in-lining is that big a concern, you could create your own wrapper function
and explicitly mark it no-inline. Personally, I haven't seen any inordinate
increase in i-cache miss rates because of in-lining in the applications we have
- prefetchers on IA are usually capable of keeping up. Howeve
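The suggested wrapper, sketched (function name and Rx path chosen purely as
an example of a commonly inlined fast-path call):

#include <rte_ethdev.h>
#include <rte_mbuf.h>

__attribute__((noinline)) uint16_t
rx_burst_wrapper(uint16_t port_id, uint16_t queue_id,
		 struct rte_mbuf **bufs, uint16_t n)
{
	/* One out-of-line copy of the normally inlined fast-path call,
	 * at the cost of a function call per burst. */
	return rte_eth_rx_burst(port_id, queue_id, bufs, n);
}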
Dan,
One other thing to think about - as we add more functionality into DPDK (e.g.
new libraries for other packet functions), we integrate them into the DPDK
framework. If you extract compilation flags and set up your own makefile, you
would have to do this re-integration every time you want to
Pepe,
Was the DPDK library compiled on a different machine and then used in the VM?
It looks like it has been compiled for native AVX (hence the vzeroupper). Could
you dump cpuinfo in the VM and see what instruction set the VM supports?
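One way to check from inside the VM (any equivalent cpuinfo inspection
works):

  grep -o 'avx[a-z0-9_]*' /proc/cpuinfo | sort -u

No output means the VM does not expose AVX, and a binary built for native
AVX on the host would be expected to fault there.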
-Venky
Sent from my iPhone
> On Jan 3, 2014, at 2:32
Stephen,
Agree on the checksum flag definition. I'm presuming that we should do this for
the L3 and L4 checksums separately (that ol_flags field is another one that
needs extension in the mbuf).
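For illustration, separate L3/L4 results expressed with the mbuf flags DPDK
eventually defined (names are from later releases, not this thread):

#include <rte_mbuf.h>

static void
classify_csum(const struct rte_mbuf *m)
{
	if (m->ol_flags & PKT_RX_IP_CKSUM_BAD) {
		/* bad L3 (IP header) checksum */
	}
	if (m->ol_flags & PKT_RX_L4_CKSUM_BAD) {
		/* bad L4 (TCP/UDP) checksum */
	}
}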
Regards,
-Venky
-Original Message-
From: dev [mailto:dev-boun...@dpdk.org] On Behalf Of
Stephen,
Agree. Growing to two cache lines is an inevitability. Re-organizing the mbuf a
bit to alleviate some of the immediate space pressure with as minimal a
performance impact as possible (including separating the QoS fields out
completely into their own separate area) is a good idea - the first cache line
Qinglai,
Looks good. I will try it out tonight.
Thanks ...
-Venky
-Original Message-
From: dev [mailto:dev-boun...@dpdk.org] On Behalf Of Qinglai Xiao
Sent: Thursday, September 26, 2013 6:35 AM
To: dev at dpdk.org
Subject: [dpdk-dev] [PATCH] Add support for Tx->Rx loopback mode for 82
That should work perfectly ... :)
Regards,
-Venky
-Original Message-
From: jigsaw [mailto:jig...@gmail.com]
Sent: Wednesday, September 25, 2013 10:00 AM
To: Venkatesan, Venky
Cc: Ivan Boule; dev at dpdk.org
Subject: Re: [dpdk-dev] [PATCH] Add support for 82599 Tx->Rx loopback operat
then merge the setup_link code into something
like dev_rxtx_start() [ open to suggestions ].
Regards,
-Venky
-Original Message-
From: jigsaw [mailto:jig...@gmail.com]
Sent: Wednesday, September 25, 2013 7:39 AM
To: Venkatesan, Venky
Cc: Ivan Boule; dev at dpdk.org
Subject: Re: [dpdk
Qinglai/Ivan,
I for one would prefer that the changes not modify any files in the
librte_pmd_ixgbe/ixgbe directory. Those files are derived directly from the BSD
driver baseline, and any changes will make future merges of newer code more
challenging. The changes should be limited to fil
Chris,
The numbers you are getting are correct. :)
Practically speaking, most motherboards pin out between 4 and 5 x8 slots to
every CPU socket. At PCI-E Gen 2 speeds (5 GT/s), each slot is capable of
carrying 20 Gb/s of traffic (limited to ~16 Gb/s of 64B packets). I would have
expected the
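As a rough sanity check on those figures (standard PCIe arithmetic, not from
the original mail): a Gen 2 lane runs at 5 GT/s with 8b/10b encoding, so it
carries 5 x 8/10 = 4 Gb/s of data, and an x8 slot therefore moves 8 x 4 =
32 Gb/s of raw TLP traffic per direction. Per-packet TLP headers, DLLP
traffic and descriptor/doorbell reads are all charged against that, and at
64B packets that overhead is a large fraction of every transfer, which is
how the usable figure lands in the ~16-20 Gb/s range quoted above.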
Dmitry,
One other question - what version of DPDK are you running this on?
-Venky
-Original Message-
From: dev [mailto:dev-boun...@dpdk.org] On Behalf Of Robert Sanford
Sent: Thursday, September 19, 2013 12:40 PM
To: Dmitry Vyal
Cc: dev at dpdk.org
Subject: Re: [dpdk-dev] How to fight forwardi