On 10/15/21 10:24 PM, Olivier Matz wrote:
> The flags PKT_RX_L4_CKSUM_BAD and PKT_RX_IP_CKSUM_BAD are defined
> twice with the same value. Remove the occurrence that was
> marked as "deprecated".
>
> Signed-off-by: Olivier Matz
Acked-by: Andrew Rybchenko
On 10/15/21 10:24 PM, Olivier Matz wrote:
> The flags PKT_TX_VLAN_PKT and PKT_TX_QINQ_PKT are
> marked as deprecated since commit 380a7aab1ae2 ("mbuf: rename deprecated
> VLAN flags") (2017). But they were not using the RTE_DEPRECATED
> macro, because it did not exist at that time. Add it, and repl
On 10/15/21 10:24 PM, Olivier Matz wrote:
> Fix the mbuf offload flags namespace by adding an RTE_ prefix to the
> name. The old flags remain usable, but a deprecation warning is issued
> at compilation.
>
> Signed-off-by: Olivier Matz
Acked-by: Andrew Rybchenko
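As an aside, a minimal compilable sketch of keeping an old flag name usable while warning at each use site. DPDK's real mechanism is the RTE_DEPRECATED pragma macro in rte_common.h; this sketch reaches a similar effect with the `deprecated` attribute, and the bit value is illustrative only:

```c
#include <assert.h>
#include <stdint.h>

/* New, RTE_-prefixed flag (illustrative bit value). */
#define RTE_MBUF_F_RX_L4_CKSUM_BAD (1ULL << 3)

/* The old name stays usable, but every use emits a compiler warning
 * pointing users at the new name. */
__attribute__((deprecated("use RTE_MBUF_F_RX_L4_CKSUM_BAD instead")))
static inline uint64_t
pkt_rx_l4_cksum_bad(void)
{
	return RTE_MBUF_F_RX_L4_CKSUM_BAD;
}
#define PKT_RX_L4_CKSUM_BAD pkt_rx_l4_cksum_bad()
```

Code using the old PKT_RX_L4_CKSUM_BAD still compiles and yields the same value as the new flag, with a deprecation warning at each use.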
Geoff,
I have given this some more thought.
Most bytes transferred in real life are transferred in large packets, so faster
processing of large packets is a great improvement!
Furthermore, a quick analysis of a recent packet sample from an ISP customer of
ours shows that less than 8 % of the
Georg, I apologize for calling you Geoff below! Just realized my mistake.
Med venlig hilsen / Kind regards,
-Morten Brørup
> -Original Message-
> From: Morten Brørup
> Sent: Saturday, 16 October 2021 10.21
> To: 'Georg Sauthoff'
> Cc: 'dev@dpdk.org'; 'Ferruh Yigit'; 'Olivier Matz'; 'Thom
Hi, Maxime
I agree with you. The inline keyword should be added to the
vhost_update_single_packet_xstats function.
I will fix it in [PATCH v3].
Thanks,
Gaoxiang
Sent from NetEase Mail Master
Original message replied to:
| From | Maxime Coquelin |
| Date | Oct 15, 2021 20:16 |
| To | Gaoxiang
Liu, chenbo@intel.com |
| Cc |
dev@dpdk
> +/* Macro to add a capability */
> +#define QAT_SYM_PLAIN_AUTH_CAP(n, b, d)
Can you add a comment for each of the defines, specifying what these
variables (n, b, d, k, a, I, etc.) depict?
> \
> + { \
> + .op = RTE_CRYPT
To detect the number of supported Verbs flow priorities, the PMD tries to
create Verbs flows at different priorities. Verbs, however, is not designed
to support port numbers larger than 255.
When DevX is supported by the kernel driver, 16 Verbs priorities must be
supported, so there is no need to create the probing Verbs flows.
Signed-off-by: Xueming Li
--
Introduce a netlink API to get the RDMA port state.
The port state is retrieved based on the RDMA device name and port index.
Signed-off-by: Xueming Li
---
drivers/common/mlx5/linux/meson.build | 2 +
drivers/common/mlx5/linux/mlx5_nl.c | 136 +++---
drivers/common/mlx5/linux/mlx5_nl
The IB spec doesn't allow more than 255 ports on a single HCA; a port
number of 256 was cast to the u8 value 0, which is invalid for
ibv_query_port().
This patch invokes the Netlink API to query the port state when the port
number is greater than 255.
Signed-off-by: Xueming Li
---
drivers/net/mlx5/linux/mlx5_os.c | 46
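The wrap-around being worked around can be reproduced in a couple of lines (helper name hypothetical):

```c
#include <assert.h>
#include <stdint.h>

/* Demonstrates the failure mode in the commit message: storing the IB
 * port number in a u8 makes port 256 wrap to 0, which is not a valid
 * argument for ibv_query_port(). */
static inline uint8_t
demo_port_as_u8(unsigned int port_num)
{
	return (uint8_t)port_num;
}
```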
For egress packet on representor, the vport ID in transport domain
is E-Switch manager vport ID since representor shares resources of
E-Switch manager. E-Switch manager vport ID and Tx queue internal device
index are used to match representor egress packet.
This patch adds flow item port ID match
Extends txq flow pattern to support both hairpin and regular txq.
Signed-off-by: Xueming Li
---
drivers/net/mlx5/mlx5_flow_dv.c | 16
1 file changed, 8 insertions(+), 8 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index f06ce54f7e7
This patch set allows the number of representors of a PF to exceed 255.
CX6 and the current OFED driver support a maximum of 512 SFs; CX5 supports at most 255 SFs.
v2:
- fixed FDB root table flow priority
- add error check to Netlink port state API
- commit log update and other minor fixes
Xueming Li (8):
The Verbs API doesn't support device port numbers larger than 255 by design.
To support more VF or SubFunction port representors, force the DevX API
check when the maximum number of Verbs device link ports is larger than 255.
Signed-off-by: Xueming Li
---
drivers/net/mlx5/linux/mlx5_os.c | 11 +--
1 file changed, 5
When creating an internal transfer flow on the root table with the lowest
priority, the flow was created with the maximum priority UINT32_MAX. This is
wrong since the flow is created in the kernel, and the maximum priority
supported there is 16.
This patch fixes this by adding an internal flow check.
Fixes: 5f8ae44dd454 ("net/mlx5: enl
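A sketch of the clamping this implies (names are hypothetical and the 16-priority limit is taken from the commit message; the actual mlx5 code is more involved):

```c
#include <assert.h>
#include <stdint.h>

#define DEMO_KERNEL_MAX_PRIO 16 /* kernel-supported maximum per the log */

/* Flows created on the root (kernel) table must not request a priority
 * above what the kernel supports, so the internal "lowest priority"
 * marker UINT32_MAX is clamped down to it. */
static inline uint32_t
demo_root_flow_prio(uint32_t requested)
{
	return requested > DEMO_KERNEL_MAX_PRIO ?
	       DEMO_KERNEL_MAX_PRIO : requested;
}
```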
The Verbs API does not support InfiniBand device port numbers larger than
255 by design. To support more representors on a single InfiniBand device,
the DevX API should be engaged.
While creating a Send Queue (SQ) object with the Verbs API, the PMD assigned
the IB device port attribute and the kernel created the default miss
> As described in [1] and as announced in [2], The field ``dataunit_len``
> of the ``struct rte_crypto_cipher_xform`` moved to the end of the
> structure and extended to ``uint32_t``.
>
> In this way, sizes bigger than 64K bytes can be supported for data-unit
> lengths.
>
> [1] commit d014dddb2d6
In the current DPDK framework, all Rx queues are pre-loaded with mbufs for
incoming packets. When the number of representors scales out in a switch
domain, the memory consumption becomes significant. Furthermore,
polling all ports leads to high cache miss rates, high latency and low
throughput.
This patch introd
Adds a "--rxq-share=X" parameter to enable shared RxQ: queues are shared if
the device supports it, otherwise fall back to standard RxQ.
The share group number grows every X ports. X defaults to MAX, which
implies all ports join share group 1.
The "shared-rxq" forwarding engine should be used, which does Rx only and
updates stream statistic
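The port-to-group mapping described above could look like this (helper name hypothetical; it assumes groups simply grow every X ports, the real testpmd logic may differ):

```c
#include <assert.h>

/* Maps a port to its Rx share group when testpmd is started with
 * --rxq-share=X: the group number grows every X ports, numbered from 1. */
static inline unsigned int
demo_share_group(unsigned int port_id, unsigned int x)
{
	return port_id / x + 1;
}
```

With X = MAX every port_id / X is 0, so all ports land in group 1, matching the default described above.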
In the current DPDK framework, each Rx queue is pre-loaded with mbufs to
save incoming packets. For some PMDs, when the number of representors
scales out in a switch domain, the memory consumption becomes significant.
Polling all ports also leads to high cache miss rates, high latency and low
throughput.
This pat
In case of shared Rx queue, polling any member port returns mbufs for
all members. This patch dumps mbuf->port for each packet.
Signed-off-by: Xueming Li
---
app/test-pmd/util.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/app/test-pmd/util.c b/app/test-pmd/util.c
index 51506e49404..e9
A shared Rx queue must be polled on the same core. This patch checks and
stops forwarding if a shared RxQ is being scheduled on multiple
cores.
It's suggested to use the same number of Rx queues and polling cores.
Signed-off-by: Xueming Li
---
app/test-pmd/config.c | 100 +
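The check can be sketched as follows (struct and names hypothetical; testpmd's real implementation walks its forwarding streams):

```c
#include <assert.h>
#include <stddef.h>

/* One polled stream: which shared group/queue it serves, on which lcore.
 * shared_group == 0 means the queue is not shared. */
struct demo_stream {
	unsigned int shared_group;
	unsigned int rxq_id;
	unsigned int lcore_id;
};

/* Returns 0 if every shared RxQ is polled by exactly one lcore,
 * -1 if some shared queue is scheduled on multiple cores. */
int
demo_check_shared_rxq(const struct demo_stream *s, size_t n)
{
	for (size_t i = 0; i < n; i++) {
		if (s[i].shared_group == 0)
			continue;
		for (size_t j = i + 1; j < n; j++)
			if (s[j].shared_group == s[i].shared_group &&
			    s[j].rxq_id == s[i].rxq_id &&
			    s[j].lcore_id != s[i].lcore_id)
				return -1; /* conflict: stop forwarding */
	}
	return 0;
}
```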
To support shared Rx queue, this patch introduces a dedicated forwarding
engine. The engine groups received packets by mbuf->port into sub-groups,
updates stream statistics and simply frees the packets.
Signed-off-by: Xueming Li
---
app/test-pmd/meson.build| 1 +
app/test-pmd/share
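Per burst, the engine's grouping step amounts to something like this (names hypothetical):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Bucket a received burst by the packets' source port (mbuf->port) and
 * count them as per-port stream statistics; here a plain counter array
 * indexed by port stands in for the real stream stats. */
void
demo_group_by_port(const uint16_t *pkt_port, size_t n_pkts,
		   uint64_t *rx_per_port, size_t n_ports)
{
	for (size_t i = 0; i < n_pkts; i++)
		if (pkt_port[i] < n_ports)
			rx_per_port[pkt_port[i]]++;
}
```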
Hi Akhil,
Ciara managed to include Pablo's fix into the ipsec-mb patches in V4.
https://patchwork.dpdk.org/project/dpdk/patch/20211015143957.842499-6-ciara.po...@intel.com/
https://patchwork.dpdk.org/project/dpdk/patch/20211015143957.842499-7-ciara.po...@intel.com/
Regards,
Fan
> -Original Me
The RQ user index is saved in the CQE when a packet is received by the RQ.
Signed-off-by: Xueming Li
---
drivers/common/mlx5/mlx5_prm.h | 8 +++-
drivers/net/mlx5/mlx5_rxtx_vec_altivec.h | 8
drivers/regex/mlx5/mlx5_regex_fastpath.c | 2 +-
3 files changed, 12 insertions(+), 6 deletions(-)
Implementation of Shared Rx queue.
Depends-on: series-19708 ("ethdev: introduce shared Rx queue")
Depends-on: series-19698 ("Flow entites behavior on port restart")
v1:
- initial version
v2:
- rebased on latest dependent series
- fully tested
Viacheslav Ovsiienko (1):
net/mlx5: add shared Rx qu
The Rx queue reference count is a counter of RQs, used on the RQ table.
To prepare for shared Rx queue, move it from rxq_ctrl to the Rx queue
private data.
Signed-off-by: Xueming Li
---
drivers/net/mlx5/mlx5_rx.h | 8 +-
drivers/net/mlx5/mlx5_rxq.c | 173 +---
drivers/net
Port info is invisible from a shared Rx queue, so split the MPR mempool
from device level to Rx queue level; the pool flag is also changed to mp_sc.
Signed-off-by: Xueming Li
---
drivers/net/mlx5/mlx5.c | 1 -
drivers/net/mlx5/mlx5_rx.h | 4 +-
drivers/net/mlx5/mlx5_rxq.c | 109
Removes unused rxq code.
Signed-off-by: Xueming Li
---
drivers/net/mlx5/mlx5_rxq.c | 8 ++--
1 file changed, 2 insertions(+), 6 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 4036bcbe544..1cb99de1ae7 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/d
Hairpin info of an Rx queue can't be shared, so move it to the private queue data.
Signed-off-by: Xueming Li
---
drivers/net/mlx5/mlx5_rx.h | 4 ++--
drivers/net/mlx5/mlx5_rxq.c | 13 +
drivers/net/mlx5/mlx5_trigger.c | 24
3 files changed, 19 insertions(+), 22
If an error happened during Rx queue mbuf allocation, a boolean value was
returned, while per the description the return value should be an error
number. This patch returns a negative error number instead.
Fixes: 0f20acbf5eda ("net/mlx5: implement vectorized MPRQ burst")
Cc: akozy...@nvidia.com
Signed-off-by: Xueming Li
---
dr
To prepare for shared RX queue, split rxq data into shareable and private parts.
Struct mlx5_rxq_priv holds per-queue data.
Struct mlx5_rxq_ctrl holds shared queue resources and data.
Signed-off-by: Xueming Li
---
drivers/net/mlx5/mlx5.c| 4 +++
drivers/net/mlx5/mlx5.h| 5 ++-
drivers/net/mlx
Adds DevX support for the PRM shared receive memory pool (RMP) object.
RMP is used to support shared Rx queues: multiple RQs can share the same
RMP, and memory buffers are supplied to the RMP.
This patch makes the RMP of an RQ optional, created only if mlx5_devx_rq.rmp
is set.
Signed-off-by: Xueming Li
---
drivers/common
This patch introduces shared RXQ. All shared Rx queues with the same group
and queue ID share the same rxq_ctrl. Rxq_ctrl and rxq_data are shared:
all queues from different member ports share the same WQ and CQ, essentially
one Rx WQ, and mbufs are filled into this singleton WQ.
Shared rxq_data is set into device
To support shared RX queue, move the DevX RQ, which is a per-queue
resource, to the Rx queue private data.
Signed-off-by: Xueming Li
---
drivers/net/mlx5/linux/mlx5_verbs.c | 154 +++
drivers/net/mlx5/mlx5.h | 11 +-
drivers/net/mlx5/mlx5_devx.c| 227 ++--
To prepare for shared Rx queue, remove port info from the shareable Rx
queue control.
Signed-off-by: Xueming Li
---
drivers/net/mlx5/mlx5_devx.c | 2 +-
drivers/net/mlx5/mlx5_mr.c | 7 ---
drivers/net/mlx5/mlx5_rx.c | 15 +++
drivers/net/mlx5/mlx5_rx.h | 5 +
From: Viacheslav Ovsiienko
When receiving a packet, the mlx5 PMD saves the mbuf port number from
the rxq data.
To support shared rxq, save the port number into the RQ context as the
user index. A received packet resolves its port number from the
CQE user index, which is derived from the RQ context.
The legacy Verbs API doesn't support the RQ user i
The Rx queue data list (priv->rxqs) can be replaced by the Rx queue
list (priv->rxq_privs); remove it and replace it with a universal wrapper
API.
Signed-off-by: Xueming Li
---
drivers/net/mlx5/linux/mlx5_verbs.c | 7 ++---
drivers/net/mlx5/mlx5.c | 10 +--
drivers/net/mlx5/mlx5.h
On Fri, 2021-10-15 at 18:20 +0100, Ferruh Yigit wrote:
> On 10/12/2021 3:39 PM, Xueming Li wrote:
> > index 6d80514ba7a..041da6ee52f 100644
> > --- a/lib/ethdev/rte_ethdev.h
> > +++ b/lib/ethdev/rte_ethdev.h
> > @@ -1044,6 +1044,13 @@ struct rte_eth_rxconf {
> > uint8_t rx_drop_en; /**< Drop pa
Hi Akhil,
I didn't work on the asym problem. As stated in the email, the solution I
could think of is to add a new API to create the asym session pool - or you
may have a better solution.
BTW, in the current test_cryptodev_asym.c, the function testsuite_setup()
creates the queue pair before creating the sessio
Hi:
I am using an Intel SR-IOV XL710 VF with DPDK v20.11 to create a dpdk bond,
but it failed.
As the picture shows, when the dpdk bond starts and a slave link comes up,
the lsc callback function registered in the eal thread is triggered, and
then in the activat
Hi,
> -Original Message-
> From: Akhil Goyal
> Sent: Friday, October 15, 2021 7:47 PM
> To: Zhang, Roy Fan ; dev@dpdk.org
> Cc: tho...@monjalon.net; david.march...@redhat.com;
> hemant.agra...@nxp.com; Anoob Joseph ; De Lara
> Guarch, Pablo ; Trahe, Fiona
> ; Doherty, Declan ;
> ma...@nvi
>
> > > Subject: [PATCH v2] examples/ipsec-secgw: accept inline proto pkts in
> single
> > > sa
> > >
> > > In inline protocol inbound SA's, plain ipv4 and ipv6 packets are
> > > delivered to application unlike inline crypto or lookaside.
> > > Hence fix the application to not drop them when worki
> Adds max queue pairs limit devargs for crypto cnxk driver. This
> can be used to set a limit on the number of maximum queue pairs
> supported by the device. The default value is 63.
>
> Signed-off-by: Ankur Dwivedi
> Reviewed-by: Anoob Joseph
> Reviewed-by: Jerin Jacob Kollanukkaran
> ---
App
> This patch fixes a possible buffer overrun problem in the crypto perf test.
> Previously, when the user-configured aad size was over 12 bytes, the copy of
> the template aad would cause a buffer overrun.
> The problem is fixed by copying only up to 12 bytes of the aad template.
>
> Fixes: 8a5b494a7f99 ("app/test-crypt
> > This patch fixes a possible buffer overrun problem in the crypto perf test.
> > Previously, when the user-configured aad size was over 12 bytes, the copy of
> > the template aad would cause a buffer overrun.
> > The problem is fixed by copying only up to 12 bytes of the aad template.
> >
> > Fixes: 8a5b494a7f99 ("app
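The fix amounts to clamping the copy length to the template size (names hypothetical):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define DEMO_AAD_TEMPLATE_LEN 12 /* size of the aad template buffer */

/* Copy at most the template's 12 bytes into the aad buffer, regardless
 * of the user-configured aad size, so a large aad size can no longer
 * read past the end of the template. */
void
demo_copy_aad(uint8_t *dst, size_t aad_size, const uint8_t *tmpl)
{
	size_t n = aad_size < DEMO_AAD_TEMPLATE_LEN ?
		   aad_size : DEMO_AAD_TEMPLATE_LEN;
	memcpy(dst, tmpl, n);
}
```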
Hi Nipun,
Few nits below.
Nicolas, any more comments on this patchset? Can you ack?
> +++ b/drivers/baseband/la12xx/version.map
> @@ -0,0 +1,3 @@
> +DPDK_21 {
> + local: *;
> +};
This should be DPDK_22
> diff --git a/drivers/baseband/meson.build b/drivers/baseband/meson.build
> index 5ee61d
> From: Hemant Agrawal
>
> This patch adds dev args to take max queues as input
>
> Signed-off-by: Nipun Gupta
> Signed-off-by: Hemant Agrawal
> ---
Documentation for dev args missing in this patch.
> +Prerequisites
> +-
> +
> +Currently supported by DPDK:
> +
> +- NXP LA1224 BSP **1.0+**.
> +- NXP LA1224 PCIe Modem card connected to ARM host.
> +
> +- Follow the DPDK :ref:`Getting Started Guide for Linux ` to setup
> the basic DPDK environment.
> +
> +* Use dev arg option ``modem=
> As described in [1] and as announced in [2], The field ``dataunit_len``
> of the ``struct rte_crypto_cipher_xform`` moved to the end of the
> structure and extended to ``uint32_t``.
>
> In this way, sizes bigger than 64K bytes can be supported for data-unit
> lengths.
>
> [1] commit d014dddb2d6
> Add support for mlx5 crypto pmd on Windows OS.
> Add changes to release note and pmd guide.
>
> Signed-off-by: Tal Shnaiderman
> ---
> doc/guides/cryptodevs/mlx5.rst | 15 ---
> doc/guides/rel_notes/release_21_11.rst | 1 +
> drivers/common/mlx5/version.map
Any progress on this issue?
Perhaps we should just disable BPF with clang build?
On Thu, 16 Sep 2021 03:07:41 +
bugzi...@dpdk.org wrote:
> https://bugs.dpdk.org/show_bug.cgi?id=811
>
> Bug ID: 811
>Summary: BPF tests fail with clang
>Product: DPDK
>
Hello,
On Fri, Oct 15, 2021 at 04:39:02PM +0200, Olivier Matz wrote:
> On Sat, Sep 18, 2021 at 01:49:30PM +0200, Georg Sauthoff wrote:
> > That means a superfluous cast is removed and aliasing through a uint8_t
> > pointer is eliminated. Note that uint8_t doesn't have the same
> > strict-aliasing
Hello,
On Sat, Oct 16, 2021 at 10:21:03AM +0200, Morten Brørup wrote:
> I have given this some more thought.
>
> Most bytes transferred in real life are transferred in large packets,
> so faster processing of large packets is a great improvement!
>
> Furthermore, a quick analysis of a recent pa
MLX5 hardware has an internal IOMMU where the PMD registers the memory.
On the data path, the PMD translates a VA into a key consumed by the device
IOMMU. It is impractical for the PMD to register all allocated memory
because of the increased lookup cost both in HW and SW. Most often mbuf
memory comes from me
Data path performance can benefit if the PMD knows which memory it will
need to handle in advance, before the first mbuf is sent to the PMD.
It is impractical, however, to consider all allocated memory for this
purpose. Most often mbuf memory comes from mempools that can come and
go. PMD can enumer
Mempool is a generic allocator that is not necessarily used for device IO
operations, nor is its memory necessarily used for DMA.
Add the MEMPOOL_F_NON_IO flag to mark such mempools automatically
a) if their objects are not contiguous;
b) if IOVA is not available for any object.
Other components can inspect this flag
in or
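The rule can be sketched as follows (the flag bit and names are illustrative, not the real DPDK values):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define DEMO_MEMPOOL_F_NON_IO (1u << 4) /* illustrative bit */
#define DEMO_BAD_IOVA UINT64_MAX        /* stands in for RTE_BAD_IOVA */

/* Mark the mempool non-IO if its objects are not contiguous, or if
 * IOVA is unavailable for any object; otherwise leave it usable for IO. */
unsigned int
demo_mempool_io_flag(bool objs_contiguous, const uint64_t *obj_iova, size_t n)
{
	if (!objs_contiguous)
		return DEMO_MEMPOOL_F_NON_IO;
	for (size_t i = 0; i < n; i++)
		if (obj_iova[i] == DEMO_BAD_IOVA)
			return DEMO_MEMPOOL_F_NON_IO;
	return 0;
}
```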
Add internal API to register mempools, that is, to create memory
regions (MR) for their memory and store them in a separate database.
Implementation deals with multi-process, so that class drivers don't
need to. Each protection domain has its own database. Memory regions
can be shared within a data
When the first port in a given protection domain (PD) starts,
install a mempool event callback for this PD and register all existing
memory regions (MR) for it. When the last port in a PD closes,
remove the callback and unregister all mempools for this PD.
This behavior can be switched off with a n
> -Original Message-
> From: Ferruh Yigit
> Sent: 15 октября 2021 г. 19:27
> To: Dmitry Kozlyuk ; dev@dpdk.org; Andrew Rybchenko
> ; Ori Kam ; Raslan
> Darawsheh
> Cc: NBU-Contact-Thomas Monjalon ; Qi Zhang
> ; jer...@marvell.com; Maxime Coquelin
>
> Subject: Re: [PATCH 2/5] ethdev: add
On Sat, Oct 16, 2021 at 1:43 AM Xueming Li wrote:
>
> In current DPDK framework, each Rx queue is pre-loaded with mbufs to
> save incoming packets. For some PMDs, when number of representors scale
> out in a switch domain, the memory consumption became significant.
> Polling all ports also leads t
On Sat, Oct 16, 2021 at 12:34 AM wrote:
>
> From: Pavan Nikhilesh
>
> Mark rte_trace global variables as internal i.e. remove them
> from experimental section of version map.
> Some of them are used in inline APIs, mark those as global.
>
> Signed-off-by: Pavan Nikhilesh
> Acked-by: Ray Kinsella
From: Nipun Gupta
This series introduces the BBDEV LA12xx poll mode driver (PMD) to support
offloading high PHY processing functions such as the LDPC encode/decode
5GNR wireless acceleration function, using the PCI-based LA12xx software
defined radio.
Please check the documentation
From: Nicolas Chautru
Adding device information to capture explicitly the assumption
of the input/output data byte endianness being processed.
Signed-off-by: Nicolas Chautru
Signed-off-by: Nipun Gupta
---
doc/guides/rel_notes/release_21_11.rst | 1 +
drivers/baseband/acc100/rte_ac
From: Nipun Gupta
This patch introduces the baseband device drivers for NXP's
LA1200 series software defined baseband modem.
Signed-off-by: Nipun Gupta
Signed-off-by: Hemant Agrawal
---
MAINTAINERS | 10 ++
doc/guides/bbdevs/index.rst | 1
From: Hemant Agrawal
This patch adds dev args to take max queues as input
Signed-off-by: Nipun Gupta
Signed-off-by: Hemant Agrawal
---
doc/guides/bbdevs/la12xx.rst | 4 ++
drivers/baseband/la12xx/bbdev_la12xx.c | 73 +-
2 files changed, 75 insertions(+), 2
From: Hemant Agrawal
This patch adds support for multiple modems by assigning
a modem id as dev args in vdev creation.
Signed-off-by: Hemant Agrawal
---
doc/guides/bbdevs/la12xx.rst | 5 ++
drivers/baseband/la12xx/bbdev_la12xx.c | 64 +++---
drivers/baseband/l
From: Hemant Agrawal
This patch adds support for connecting with the modem
and creating the IPC channel, as queues with the modem,
for the exchange of data.
Signed-off-by: Nipun Gupta
Signed-off-by: Hemant Agrawal
---
drivers/baseband/la12xx/bbdev_la12xx.c | 559 -
drivers/baseba
From: Hemant Agrawal
Add support for enqueue and dequeue of the LDPC enc/dec
operations from the modem device.
Signed-off-by: Nipun Gupta
Signed-off-by: Hemant Agrawal
---
doc/guides/bbdevs/features/la12xx.ini | 13 +
doc/guides/bbdevs/la12xx.rst | 44 +++
doc/guides/rel_notes/release_
From: Hemant Agrawal
This patch adds the la12xx driver in test-bbdev.
Signed-off-by: Hemant Agrawal
---
app/test-bbdev/meson.build | 3 +++
1 file changed, 3 insertions(+)
diff --git a/app/test-bbdev/meson.build b/app/test-bbdev/meson.build
index edb9deef84..a726a5b3fa 100644
--- a/app/test-bbdev/
From: Nipun Gupta
With data input, output and HARQ also supported in big
endian format, this patch updates the test-bbdev application
to handle the endianness conversion as directed by the
driver being used.
The test vectors assume the data is in little endian order, and
thus if the driver
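The conversion step being added can be sketched as follows (names hypothetical; the real application gets the endianness from the device info):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Byte-swap one 32-bit word. */
static inline uint32_t
demo_bswap32(uint32_t v)
{
	return (v >> 24) | ((v >> 8) & 0xff00) |
	       ((v << 8) & 0xff0000) | (v << 24);
}

/* Test vectors are little endian; when the driver reports big-endian
 * data processing, swap each 32-bit word of the buffer in place. */
void
demo_convert_if_be(uint32_t *data, size_t n_words, int dev_is_big_endian)
{
	if (!dev_is_big_endian)
		return;
	for (size_t i = 0; i < n_words; i++)
		data[i] = demo_bswap32(data[i]);
}
```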