> -----Original Message-----
> From: Stephen Hemminger [mailto:stephen at networkplumber.org]
> Sent: Monday, August 11, 2014 2:48 PM
> To: Richardson, Bruce
> Cc: dev at dpdk.org
> Subject: Re: [dpdk-dev] [RFC PATCH 11/14] ixgbe: make mbuf_initializer queue
> variable global
>
> On Mon, 11 Aug 2
Provide a wrapper routine to enable receive of scattered packets with a
vector driver.
Signed-off-by: Bruce Richardson
---
lib/librte_pmd_ixgbe/ixgbe_rxtx.c | 8 +-
lib/librte_pmd_ixgbe/ixgbe_rxtx.h | 1 +
lib/librte_pmd_ixgbe/ixgbe_rxtx_vec.c | 164 --
Cleanups:
* use typedefs for markers within mbuf struct
* split up the vlan_macip field, as the l2/l3 lengths are for TX and so go on
the second cache line.
* created a tx_ol field in second cache line for data used for tx
offloads
* rename the hash field to the filter field as it contains more than
j
When writing to the mbuf array for received packets, do not use aligned
stores, which assume 16-byte alignment. If the pointers are only 8-byte
aligned, the program will crash on the misaligned access.
Changing "store" to "storeu" fixes this.
Signed-off-by: Bruce Richardson
---
lib/librte_pmd_i
On descriptor rearm, the mbuf_initializer variable can be used to do a
single-shot write to an mbuf to initialize all variables that can be
set. This is currently used only by the vector PMD functions; now allow
it to be used by other RX code paths as well.
Signed-off-by: Bruce Richardson
---
l
Previously we set the next pointer to NULL on allocation; we now set it
to NULL on free, as the next pointer is on the second cache line.
Signed-off-by: Bruce Richardson
---
lib/librte_mbuf/rte_mbuf.h| 4 +++-
lib/librte_pmd_ixgbe/ixgbe_rxtx.c | 18 +-
2 files changed, 12
Adjust the fast-path code to fix the regression caused by the pool
pointer moving to the second cache line. This change adjusts the
prefetching and also the way in which the mbufs are freed back to the
mempool.
Signed-off-by: Bruce Richardson
---
lib/librte_pmd_ixgbe/ixgbe_rxtx.c | 23 +-
This change splits the mbuf in two to move the pool and next pointers to
the second cache line. This frees up 16 bytes in first cache line.
Signed-off-by: Bruce Richardson
---
app/test/test_mbuf.c | 2 +-
lib/librte_mbuf/rte_mbuf.h | 5 +
2 files changed, 6 insertions(+), 1 deletion(-)
The vector PMD expects fields to be in a specific order so that it can
do vector operations on multiple fields at a time. Following mbuf
rework, adjust driver to take account of the new layout and re-enable it
in the config.
Signed-off-by: Bruce Richardson
---
config/common_linuxapp
* Reorder the fields in the mbuf so that fields that are used together
sit side-by-side in the structure. This means that we have a contiguous
block of 8 bytes in the mbuf which is used to reset an mbuf on
descriptor rearm.
* Where needed, add dummy fields to overwrite values 8 or 16 byt
In some cases we may want to tag a packet for a particular destination
or output port, so rename the "in_port" field in the mbuf to just "port"
so that it can be re-used for this purpose if an application needs it.
Signed-off-by: Bruce Richardson
---
examples/dpdk_qat/main.c
From: Olivier Matz
Original patch:
The mbuf structure already contains a pointer to the beginning of the
buffer (m->buf_addr). It is not needed to use 8 bytes again to store
another pointer to the beginning of the data.
Using a 16-bit unsigned integer is enough as we know that a mbuf is
ne
From: Olivier Matz
The rte_pktmbuf structure was initially included in the rte_mbuf
structure. This was needed when there were 2 types of mbuf (ctrl and
packet). As the control mbuf has been removed, we can merge the
rte_pktmbuf into the rte_mbuf structure.
Advantages of doing this:
- the acces
From: Olivier Matz
The initial role of rte_ctrlmbuf is to carry generic messages (data
pointer + data length) but it's not used by the DPDK or its applications.
Keeping it implies:
- losing 1 byte in the rte_mbuf structure
- having some dead code in rte_mbuf.[ch]
This patch removes this feature
From: Olivier Matz
It seems that RTE_MBUF_SCATTER_GATHER is not the proper name for the
feature it provides. "Scatter gather" means that data is stored using
several buffers. RTE_MBUF_REFCNT seems to be a better name for that
feature as it provides a reference counter for mbufs.
The macro RTE_MB
This patch set expands and enhances the mbuf data structure. This set includes
patches previously
submitted by Olivier to rework the mbuf, but takes the rework further than
proposed there.
NOTE: This is still a work in progress! Feedback at this stage is still welcome
though.
Outline of change
Hi there,
We want to run a DPDK application on an unmodified VM and unmodified Open
vSwitch.
We tried http://dpdk.org/doc/virtio-net-pmd and with the l3fwd application we
got only 90Kpps. At the same time, without DPDK at all, we get 150Kpps
sending traffic between two eths.
Does anyone know why that happe
Dear mailing list, I have a question concerning SFP module hotplugging.
I made some experiments and want to confirm my findings.
Looks like hotplug is basically supported out of the box, the only thing
one has to do is to register
callbacks for RTE_ETH_EVENT_INTR_LSC and avoid sending mbufs to
On Mon, 11 Aug 2014 21:44:47 +0100
Bruce Richardson wrote:
> On descriptor rearm, the mbuf_initializer variable can be used to do a
> single-shot write to an mbuf to initialize all variables that can be
> set. This is currently used only by vector PMD function, but now allow
> it to be generally
On Mon, 11 Aug 2014 21:44:37 +0100
Bruce Richardson wrote:
> From: Olivier Matz
>
> It seems that RTE_MBUF_SCATTER_GATHER is not the proper name for the
> feature it provides. "Scatter gather" means that data is stored using
> several buffers. RTE_MBUF_REFCNT seems to be a better name for that
Hi team,
Currently I am using DPDK version 1.2; can I configure VMDQ_DCB in
both Rx and Tx MQ modes?
In v1.2 I see only the Rx MQ mode is defined, but not Tx, as below.
Does it mean application can only receive packets from multiple traffic
classes but not transmit ?
If yes, do I need to upgra
Signed-off-by: Takayuki Usui
---
lib/librte_table/rte_table_hash_ext.c | 2 +-
lib/librte_table/rte_table_hash_lru.c | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/lib/librte_table/rte_table_hash_ext.c
b/lib/librte_table/rte_table_hash_ext.c
index 6e26d98..8b86fab 100644
> Do you mean the configurable number of rx/tx queues in VF? For Niantic,
> hardware just supports only one queue in VF, so there is no flexibility for
> that.
> For later NICs like i40e, we will have that flexibility.
Yes, you are right, but only when DCB and RSS/TSS are off. When using DCB
a
Only if you enable the DCB option will you get multiple queues per VF.
regards
Kannan Babu
-----Original Message-----
From: dev [mailto:dev-boun...@dpdk.org] On Behalf Of Zhang, Helin
Sent: Monday, August 11, 2014 1:44 PM
To: Wodkowski, PawelX
Cc: dev at dpdk.org
Subject: Re: [dpdk-dev] SRIOV mod
> -----Original Message-----
> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Wodkowski, PawelX
> Sent: Monday, August 11, 2014 4:05 PM
> To: dev at dpdk.org
> Subject: [dpdk-dev] SRIOV mode and different RX and TX configuration
>
> Hi,
> I am wondering if there is a sense in having dif
Hi,
I am wondering whether it makes sense to have different configurations
for RX and TX in SR-IOV mode, e.g. RX mode ETH_MQ_RX_NONE
and TX mode ETH_MQ_TX_VMDQ_DCB or something similar.
I am asking because in the code there is no difference between the number of
RX queues and TX queues reported to VF a