On 4/11/22 13:00, David Marchand wrote:
vq->async is initialised and must be accessed under vq->access_lock.
Top level "_thread_unsafe" functions could be checked at runtime (clang
provides a lock aware assert()-like check), but they are simply skipped
because those functions are not called i
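For context on the clang facility mentioned above: a lock type can be declared a "capability" and accessors annotated as requiring it, so that `clang -Wthread-safety` flags unlocked accesses at compile time. A minimal standalone sketch follows; all names here are illustrative, not the actual vhost annotations, and a plain flag stands in for the real lock so the snippet has no dependencies:

```c
#include <assert.h>

/* Minimal stand-ins for the clang Thread Safety Analysis annotations.
 * Under gcc these expand to nothing; under clang, building with
 * -Wthread-safety reports accesses made without the lock held. */
#if defined(__clang__)
#define CAPABILITY(x)  __attribute__((capability(x)))
#define GUARDED_BY(x)  __attribute__((guarded_by(x)))
#define REQUIRES(x)    __attribute__((requires_capability(x)))
#define ACQUIRE(x)     __attribute__((acquire_capability(x)))
#define RELEASE(x)     __attribute__((release_capability(x)))
#else
#define CAPABILITY(x)
#define GUARDED_BY(x)
#define REQUIRES(x)
#define ACQUIRE(x)
#define RELEASE(x)
#endif

struct CAPABILITY("mutex") vq_lock {
    int taken; /* demo flag instead of a real spinlock */
};

struct virtqueue {
    struct vq_lock access_lock;
    int async GUARDED_BY(access_lock); /* plays the role of vq->async */
};

static void vq_lock_take(struct virtqueue *vq) ACQUIRE(vq->access_lock)
{
    vq->access_lock.taken = 1;
}

static void vq_lock_give(struct virtqueue *vq) RELEASE(vq->access_lock)
{
    vq->access_lock.taken = 0;
}

/* clang warns at compile time if this is called without access_lock held. */
static int vq_async_get(struct virtqueue *vq) REQUIRES(vq->access_lock)
{
    return vq->async;
}

int read_async(struct virtqueue *vq)
{
    int v;

    vq_lock_take(vq);
    v = vq_async_get(vq);
    vq_lock_give(vq);
    return v;
}
```

Calling `vq_async_get()` directly, without the take/give pair, is exactly the kind of path the annotation check is meant to catch.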
On 4/11/22 13:00, David Marchand wrote:
When a reply from the slave is required (VHOST_USER_NEED_REPLY flag),
a spinlock is taken before sending the message.
This spinlock is released if an error occurs when sending the message, and
once a reply is received.
A problem is that this lock is tak
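The pattern being described, with the lock taken in one function and released either on a send error or later in the reply handler, can be sketched as follows (illustrative names, not the actual vhost-user code; a plain flag stands in for the spinlock). This asymmetric take/release across functions is precisely what makes the lock hard to annotate:

```c
#include <assert.h>

struct vhost_user_conn {
    int reply_lock; /* 1 while a reply is pending; stands in for the spinlock */
    int last_sent;
};

static int send_message(struct vhost_user_conn *c, int msg,
                        int need_reply, int send_fails)
{
    if (need_reply)
        c->reply_lock = 1;          /* take before sending */

    if (send_fails) {
        if (need_reply)
            c->reply_lock = 0;      /* release on send error */
        return -1;
    }

    c->last_sent = msg;
    return 0;                       /* on success the lock stays held */
}

static void handle_reply(struct vhost_user_conn *c)
{
    c->reply_lock = 0;              /* released only once the reply arrives */
}
```

A successful send returns with the lock still held, so the unlock happens in a different function than the lock, conditionally on the outcome.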
On 4/11/22 13:00, David Marchand wrote:
vdpa_device_list access must be protected with vdpa_device_list_lock
spinlock.
Signed-off-by: David Marchand
---
lib/vhost/vdpa.c | 17 ++---
1 file changed, 10 insertions(+), 7 deletions(-)
Reviewed-by: Maxime Coquelin
Thanks,
Maxi
On 4/11/22 13:00, David Marchand wrote:
This change simply annotates existing paths of the code leading to
manipulations of the IOTLB r/w locks.
clang does not support conditionally held locks, so always take iotlb
locks regardless of VIRTIO_F_IOMMU_PLATFORM feature.
vdpa and vhost_crypto co
On 4/11/22 13:00, David Marchand wrote:
Now that all locks in this library are annotated, we can enable the
check.
Signed-off-by: David Marchand
---
lib/vhost/meson.build | 2 ++
1 file changed, 2 insertions(+)
diff --git a/lib/vhost/meson.build b/lib/vhost/meson.build
index bc7272053b..
On Thu, Apr 21, 2022 at 11:04 AM Bruce Richardson
wrote:
> > We need some minimal testing for telemetry commands.
> >
> > It could be a test automatically calling all available /ethdev/
> > commands on a running testpmd.
> > This test could be really simple, not even checking what is returned.
> >
"examples/vdpa: fix disabled virtqueue statistics query"
On 2/24/22 14:24, Xueming Li wrote:
Quit VirtQ statistics query instead of reporting error.
Fixes: 6505865aa8ed ("examples/vdpa: add statistics show command")
Cc: sta...@dpdk.org
Signed-off-by: Xueming Li
---
examples/vdpa/main.c | 21
On 21/04/2022 21:08, Stephen Hemminger wrote:
On Thu, 21 Apr 2022 19:08:58 +
Sean Morrissey wrote:
diff --git a/lib/timer/rte_timer.c b/lib/timer/rte_timer.c
index c51a393e5c..f52ccc33ed 100644
--- a/lib/timer/rte_timer.c
+++ b/lib/timer/rte_timer.c
@@ -5,12 +5,9 @@
#include
#includ
Stephen Hemminger writes:
> On Thu, 21 Apr 2022 11:40:00 -0400
> Ray Kinsella wrote:
>
>> Stephen Hemminger writes:
>>
>> > On Thu, 21 Apr 2022 12:38:26 +0800
>> > Stephen Coleman wrote:
>> >
>> >> KNI ioctl functions copy data from userspace lib, and this interface
>> >> of kmod is not c
https://bugs.dpdk.org/show_bug.cgi?id=999
Bug ID: 999
Summary: memory access overflow in skeleton_rawdev
Product: DPDK
Version: 21.11
Hardware: All
OS: All
Status: UNCONFIRMED
Severity: normal
Pr
https://bugs.dpdk.org/show_bug.cgi?id=1000
Bug ID: 1000
Summary: memory access overflow in skeleton_rawdev
Product: DPDK
Version: 21.11
Hardware: All
OS: All
Status: UNCONFIRMED
Severity: normal
On 20/04/2022 07:55, Morten Brørup wrote:
From: Kevin Laatz [mailto:kevin.la...@intel.com]
Sent: Tuesday, 19 April 2022 18.15
During EAL init, all buses are probed and the devices found are
initialized. On eal_cleanup(), the inverse does not happen, meaning any
allocated memory and other configu
> -Original Message-
> From: Yuan Wang
> Sent: Thursday, April 21, 2022 7:16 PM
> To: maxime.coque...@redhat.com; Xia, Chenbo
> Cc: dev@dpdk.org; Hu, Jiayu ; He, Xingguang
> ; Wang, YuanX
> Subject: [PATCH] net/virtio: unmap PCI device in secondary process
>
Tested-by: Wei Ling
This patchset adds secp384r1 (P-384) elliptic
curve to Intel QuickAssist Technology crypto PMD.
v2:
- added release notes
Arek Kusztal (2):
crypto/qat: refactor asym algorithm macros and logs
crypto/qat: add secp384r1 curve
doc/guides/rel_notes/release_22_07.rst | 4 +
drivers/common/qat/
This commit unifies macros for asymmetric parameters,
thereby making the code easier to maintain.
It additionally changes some of the PMD output logs that
currently can only be seen in debug mode.
Signed-off-by: Arek Kusztal
---
drivers/crypto/qat/qat_asym.c | 230 ++---
This commit adds secp384r1 (P-384) elliptic
curve to Intel QuickAssist Technology crypto PMD.
Signed-off-by: Arek Kusztal
---
doc/guides/rel_notes/release_22_07.rst | 4 ++
drivers/common/qat/qat_adf/qat_pke.h | 12 ++
drivers/crypto/qat/qat_ec.h| 76 ++
By default, the TSO feature should be disabled because it requires
the application's support to be functional, as mentioned in the
documentation.
However, if the "tso" devarg was not specified, the feature did
not get disabled.
This patch fixes this issue so that TSO is disabled even if
"tso=0" is not pas
On Thu, Apr 21, 2022 at 5:25 PM Maxime Coquelin
wrote:
> On 4/11/22 13:00, David Marchand wrote:
> > This change simply annotates existing paths of the code leading to
> > manipulations of the vq->access_lock.
> >
> > One small change is required: vhost_poll_enqueue_completed was getting
> > a que
l3fwd-acl contains duplicate functions to l3fwd.
For this reason we merge l3fwd-acl code into l3fwd
with '--lookup acl' cmdline option to run ACL.
Signed-off-by: Sean Morrissey
Acked-by: Konstantin Ananyev
---
V6:
* fix ipv6 rule parsing
V5:
* remove undefined functions
* remove unused struct me
This commit adds the Diffie-Hellman key exchange algorithm
to the Intel QuickAssist Technology PMD.
Signed-off-by: Arek Kusztal
---
Depends-on: series-22621 ("crypto/qat: add secp384r1 curve support")
v2:
- updated release notes
- updated qat documentation
doc/guides/cryptodevs/qat.rst | 1
Thanks for your replies.
I'm aware that kernel guidelines propose ascending ioctl numbers to
maximize compatibility, but this will not work with DPDK, especially in
our case here.
If you look into kni_net.c you'll see the module is actually
internally dependent on the memory layout of mbuf and a few o
From: Kumara Parameshwaran
As the minimum Ethernet frame size is 64 bytes, a packet with a
0-length TCP payload and no TCP options would be 54 bytes, and hence
there would be padding. So it would be incorrect to use the
packet length to determine the TCP data length.
Fixes: 1e4cf4d6d4fb ("gro: cleanup")
Cc: s
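The arithmetic behind this fix can be shown with a small sketch (field names are illustrative, not the GRO library's): a header-only TCP segment is 54 bytes on the wire but gets padded to the 60-byte minimum Ethernet payload, so the frame length over-counts, while the IP total-length field does not:

```c
#include <assert.h>
#include <stdint.h>

/* Correct: derive the TCP data length from the IP total-length field,
 * which never includes Ethernet padding. */
static uint32_t tcp_data_len_from_ip(uint32_t ip_total_len,
                                     uint32_t ip_hdr_len, uint32_t tcp_hdr_len)
{
    return ip_total_len - ip_hdr_len - tcp_hdr_len;
}

/* Buggy variant: frame length minus headers also counts the padding
 * added to reach the minimum Ethernet frame size. */
static uint32_t tcp_data_len_from_frame(uint32_t frame_len, uint32_t eth_hdr_len,
                                        uint32_t ip_hdr_len, uint32_t tcp_hdr_len)
{
    return frame_len - eth_hdr_len - ip_hdr_len - tcp_hdr_len;
}
```

For a zero-payload segment (14 B Ethernet + 20 B IP + 20 B TCP, padded to 60 B), the IP-based computation yields 0 while the frame-based one wrongly yields 6.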
From: Subrahmanyam Nilla
Currently only the base channel number is configured as the default
channel for all the SDP send queues. Due to this, packets
sent on different SQs land on the same output queue
on the host. The channel number in the send queue should be
configured according to the number of
From: Radha Mohan Chintakuntla
The SDP interfaces also need to be configured for NIX receive channel
backpressure for packet receive.
Signed-off-by: Radha Mohan Chintakuntla
---
drivers/common/cnxk/roc_nix_fc.c | 11 +--
drivers/net/cnxk/cnxk_ethdev.c | 3 +++
2 files changed, 8 ins
From: Vidya Sagar Velumuri
With Timestamp enabled, time stamp will be added to second pass packets
from CPT. NPC needs different configuration to parse second pass packets
with and without timestamp.
New pkind is defined for CPT when time stamp is enabled on NIX.
CPT should use this PKIND for sec
From: Satha Rao
Fix the SQ flush sequence to issue NIX RX SW sync after SMQ flush.
This sync ensures that all the packets that were in flight are
flushed out of memory.
This patch also fixes NULL return issues reported by a
static analysis tool in Traffic Manager, and syncs the mbox
to that of the kernel versi
From: Vidya Sagar Velumuri
Add new API to configure the SA table entries with new CPT PKIND
when timestamp is enabled.
Signed-off-by: Vidya Sagar Velumuri
---
drivers/common/cnxk/roc_nix_inl.c | 59 ++
drivers/common/cnxk/roc_nix_inl.h | 2 ++
drivers
From: Rakesh Kudurumalla
The SoC run platform file is not present on CN9K, so probing
is done only for CN10K devices.
Signed-off-by: Rakesh Kudurumalla
---
drivers/common/cnxk/roc_model.c | 9 +
1 file changed, 9 insertions(+)
diff --git a/drivers/common/cnxk/roc_model.c b/drivers/common/cnxk/ro
Fix issues in the mode where soft expiry is disabled in RoC.
When soft expiry support is not enabled in the inline device,
memory is not allocated for the ring base array and it must
not be accessed.
Fixes: bea5d990a93b ("net/cnxk: support outbound soft expiry notification")
Signed-off-by: Nithin Dabilpuram
From: Akhil Goyal
The inbound SA SPI, if not in the min-max range specified in devargs,
was marked as a warning. But this is now converted to a debug
print, because if the entry is found to be a duplicate in the mask,
it will give another error print. Hence, the warning print is not
needed and is now converted to
Use the aggregate level Round Robin Priority from the mbox response
instead of fixing it to a single macro. This is useful when the kernel
AF driver changes the constant.
Signed-off-by: Nithin Dabilpuram
---
drivers/common/cnxk/roc_nix_priv.h | 5 +++--
drivers/common/cnxk/roc_nix_tm.c | 3 ++-
driv
Support internal loopback mode on AF VFs using RoC by setting
the Tx channel the same as the Rx channel.
Signed-off-by: Nithin Dabilpuram
---
drivers/net/cnxk/cnxk_ethdev.c | 7 +++
1 file changed, 7 insertions(+)
diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index bd31a
Update the link info of LBK ethdev, i.e. AF's VFs, as always up
and 100G. This is because there is no PHY for the LBK interfaces
and we won't get a link update notification for them.
Signed-off-by: Nithin Dabilpuram
---
drivers/net/cnxk/cnxk_link.c | 11 +++
1 file changed, 11 insertions(+
Add a barrier after meta batch free in the scalar routine when
LMT lines are exactly full, to make sure that the next LMT line user
in Tx starts writing the lines only when the previous steorl's
are complete.
Fixes: 4382a7ccf781 ("net/cnxk: support Rx security offload on cn10k")
Cc: sta...@dpdk.org
Signed-
Disable default inner L3/L4 checksum generation for the outbound inline
path and enable it based on SA options or RTE_MBUF flags, as per
the spec. Though checksum generation does not impact performance
much, it overwrites a zero checksum for UDP packets, which is
not always desirable.
Signed-off-by: Nith
For transport mode, the roundup needs to be based on the L4 data
and shouldn't include the L3 length.
By including the L3 length, the rlen that is calculated and put in
the send header would exceed the final length of the packet in some
scenarios where padding is necessary.
Also when outer and inner checksum offload flags are
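The length arithmetic at issue can be sketched as follows. The 16-byte block size and the 2-byte ESP trailer (pad-length + next-header) are illustrative assumptions, not values taken from the driver:

```c
#include <assert.h>
#include <stdint.h>

/* For ESP transport mode, only the L4 portion plus the ESP trailer is
 * padded to the cipher block size; the L3 header sits outside the
 * encrypted payload and must not be rounded up with it. */
static uint32_t esp_transport_rlen(uint32_t l3_len, uint32_t l4_len,
                                   uint32_t block)
{
    uint32_t trailer = 2; /* pad_len + next_hdr bytes, assumed */
    uint32_t padded_l4 = ((l4_len + trailer + block - 1) / block) * block;

    return l3_len + padded_l4;
}

/* Buggy variant described above: rounding up L3 + L4 together can
 * overshoot the real final packet length. */
static uint32_t esp_transport_rlen_buggy(uint32_t l3_len, uint32_t l4_len,
                                         uint32_t block)
{
    uint32_t trailer = 2;

    return ((l3_len + l4_len + trailer + block - 1) / block) * block;
}
```

With a 20-byte IP header and 14 bytes of L4 data, the correct computation gives 20 + 16 = 36, while rounding the whole 36 bytes up to the block size gives 48.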
From: Rakesh Kudurumalla
The inline PF func is updated in ethdev_tel_handle_info
when an inline device is attached to any DPDK process.
---
drivers/net/cnxk/cnxk_ethdev_telemetry.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/drivers/net/cnxk/cnxk_ethdev_tel
From: Akhil Goyal
Changed environment variable name for specifying
debug IV for unit testing of inline IPsec offload
with known test vectors.
Signed-off-by: Akhil Goyal
---
drivers/net/cnxk/cn10k_ethdev_sec.c | 9 +
1 file changed, 5 insertions(+), 4 deletions(-)
diff --git a/drivers/
From: Akhil Goyal
The Rx offload flag needs to be reset if the IP reassembly flag
is not set while calling reassembly_conf_set.
Signed-off-by: Akhil Goyal
---
drivers/net/cnxk/cn10k_ethdev.c | 6 ++
1 file changed, 6 insertions(+)
diff --git a/drivers/net/cnxk/cn10k_ethdev.c b/drivers/net/cnxk
From: Akhil Goyal
Added support for decrementing TTL (IPv4) / hoplimit (IPv6)
while doing inline IPsec processing, if the security session's
SA options have dec_ttl enabled.
Signed-off-by: Akhil Goyal
---
drivers/net/cnxk/cn10k_ethdev.h | 3 ++-
drivers/net/cnxk/cn10k_ethdev_sec.c | 1 +
drive
Optimize the Rx fast path for security pkts by preprocessing
most of the operations, such as SA pointer computation,
inner WQE pointer fetch, and ucode completion translation,
before the pkt is characterized as an inbound inline pkt.
Preprocessed info will be discarded if the pkt is not
found to be security pkt. Als
From: Akhil Goyal
When the packet is processed with inline IPsec offload,
the ol_flags were updated only with RTE_MBUF_F_RX_SEC_OFFLOAD.
But the hardware can also update the L3/L4 csum offload flags.
Hence, ol_flags are updated with RTE_MBUF_F_RX_IP_CKSUM_GOOD,
RTE_MBUF_F_RX_L4_CKSUM_GOOD, etc ba
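The flag update being described can be sketched like this; the bit values are illustrative local stand-ins for the rte_mbuf constants, not the real definitions:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-ins for the rte_mbuf Rx offload flag bits. */
#define F_RX_SEC_OFFLOAD     (1ULL << 0)
#define F_RX_IP_CKSUM_GOOD   (1ULL << 1)
#define F_RX_L4_CKSUM_GOOD   (1ULL << 2)

/* Before the change, inline-IPsec packets only got the security flag.
 * After it, the checksum-good bits reported by the hardware completion
 * are folded into ol_flags as well. */
static uint64_t rx_sec_ol_flags(int ip_csum_ok, int l4_csum_ok)
{
    uint64_t ol = F_RX_SEC_OFFLOAD;

    if (ip_csum_ok)
        ol |= F_RX_IP_CKSUM_GOOD;
    if (l4_csum_ok)
        ol |= F_RX_L4_CKSUM_GOOD;
    return ol;
}
```

So a packet with both checksums verified carries all three bits, while a packet with no checksum information still carries the security flag alone.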
From: Akhil Goyal
Added supported crypto algorithms for inline IPsec
offload.
Signed-off-by: Akhil Goyal
---
drivers/net/cnxk/cn10k_ethdev_sec.c | 166
1 file changed, 166 insertions(+)
diff --git a/drivers/net/cnxk/cn10k_ethdev_sec.c
b/drivers/net/cnxk/c
From: Akhil Goyal
Added supported capabilities for various IPsec SA options.
Signed-off-by: Akhil Goyal
Signed-off-by: Vamsi Attunuru
---
drivers/net/cnxk/cn10k_ethdev_sec.c | 57 ++---
1 file changed, 53 insertions(+), 4 deletions(-)
diff --git a/drivers/net/
From: Akhil Goyal
Enabled rte_security stats operation based on the configuration
of SA options set while creating session.
Signed-off-by: Vamsi Attunuru
Signed-off-by: Akhil Goyal
---
drivers/net/cnxk/cn10k_ethdev_sec.c | 56 ++---
1 file changed, 52 insertion
Add support for flow control in outbound inline path using
fc updates from CPT.
Signed-off-by: Nithin Dabilpuram
---
drivers/net/cnxk/cn10k_ethdev.c | 3 +++
drivers/net/cnxk/cn10k_ethdev.h | 1 +
drivers/net/cnxk/cn10k_tx.h | 37 -
drivers/net/cnxk/cnxk
Perform early MTU setup for event mode path in order
to update the Rx/Tx offload flags before Rx adapter setup
starts.
Signed-off-by: Nithin Dabilpuram
---
drivers/net/cnxk/cn10k_ethdev.c | 11 +++
drivers/net/cnxk/cn9k_ethdev.c | 11 +++
2 files changed, 22 insertions(+)
diff
Restructure SA setup to allow lesser inbound SA sizes as opposed
to full Inbound SA size of 1024B with max possible Anti-Replay
window. Since inbound SA size is variable, move the memset logic
out of common code.
Signed-off-by: Nithin Dabilpuram
---
drivers/common/cnxk/roc_ie_ot.c | 4
d
Setup inline inbound SA assuming variable size defined
at compile time.
Signed-off-by: Nithin Dabilpuram
---
drivers/net/cnxk/cn10k_ethdev_sec.c | 22 --
1 file changed, 12 insertions(+), 10 deletions(-)
diff --git a/drivers/net/cnxk/cn10k_ethdev_sec.c
b/drivers/net/cnxk/cn
Fix multi-seg extraction in vwqe path to avoid updating mbuf[]
array until it is used via cq0 path.
Fixes: 7fbbc981d54f ("event/cnxk: support vectorized Rx event fast path")
Cc: pbhagavat...@marvell.com
Cc: sta...@dpdk.org
Signed-off-by: Nithin Dabilpuram
---
drivers/net/cnxk/cn10k_rx.h | 8 +++
> -Original Message-
> From: Nithin Dabilpuram
> Sent: Friday, April 22, 2022 4:17 PM
> To: Jerin Jacob Kollanukkaran ; Nithin Kumar
> Dabilpuram ; Kiran Kumar Kokkilagadda
> ; Sunil Kumar Kori ; Satha
> Koteswara Rao Kottidi
> Cc: dev@dpdk.org; Pavan Nikhilesh Bhagavatula
> ; sta...@d
We (at RH) have some issues with our email infrastructure, so I can't
reply inline of the patch.
Copy/pasting the code:
+static __rte_always_inline uint16_t
+async_poll_dequeue_completed_split(struct virtio_net *dev, uint16_t queue_id,
+ struct rte_mbuf **pkts, uint16_t count, uint16_t dma_id,
+
Hi,
On 4/22/22 06:55, lihuisong (C) wrote:
Hi, all.
The RTE_ETH_FLOW_XXX macros are used to display the supported flow types
for a PMD, based on rte_eth_dev_info.flow_type_rss_offloads in the
port_infos_display() of testpmd.
That's true and it is wrong in testpmd. RTE_ETH_RSS_* and
RTE_ETH_FL
> -Original Message-
> From: Wu, Wenjun1
> Sent: Friday, April 22, 2022 9:43 AM
> To: dev@dpdk.org; Wu, Jingjing ; Xing, Beilei
> ; Zhang, Qi Z
> Subject: [PATCH v6 0/3] Enable queue rate limit and quanta size configuration
>
> This patch set adds queue rate limit and quanta size conf
> From: Kevin Laatz [mailto:kevin.la...@intel.com]
> Sent: Friday, 22 April 2022 11.18
>
> On 20/04/2022 07:55, Morten Brørup wrote:
> >> From: Kevin Laatz [mailto:kevin.la...@intel.com]
> >> Sent: Tuesday, 19 April 2022 18.15
> >>
> >> During EAL init, all buses are probed and the devices found a
Hi Chenbo,
On 4/21/22 16:09, Xia, Chenbo wrote:
Hi Maxime,
-Original Message-
From: Maxime Coquelin
Sent: Thursday, January 27, 2022 10:57 PM
To: dev@dpdk.org; Xia, Chenbo ;
david.march...@redhat.com
Cc: Maxime Coquelin
Subject: [PATCH 2/5] vhost: add per-virtqueue statistics support
Previously, on lookup hit, the hit key had its timer automatically
rearmed with the same timeout in order to prevent its expiration. Now,
a broader set of actions is available on lookup hit, which has to be
managed explicitly: the key can have its timer rearmed with the same
or with a different tim
Added the rearm counter to the statistics. Updated the learner table
example to the new learner table timer operation.
Signed-off-by: Cristian Dumitrescu
---
examples/pipeline/cli.c | 2 ++
examples/pipeline/examples/learner.spec | 15 +--
2 files changed, 15 inserti
Enable the pipeline to use the improved learner table timer operation
through the new "rearm" instruction.
Signed-off-by: Cristian Dumitrescu
---
lib/pipeline/rte_swx_ctl.h | 3 +
lib/pipeline/rte_swx_pipeline.c | 166 ---
lib/pipeline/rte_swx_pipelin
Hi Xuan,
On 4/19/22 05:43, xuan.d...@intel.com wrote:
From: Xuan Ding
This patch extracts the descriptors to buffers filling from
copy_desc_to_mbuf() into a dedicated function. Besides, enqueue
and dequeue path are refactored to use the same function
sync_fill_seg() for preparing batch element
On 4/19/22 05:43, xuan.d...@intel.com wrote:
From: Xuan Ding
This patch refactors vhost async enqueue path and dequeue path to use
the same function async_fill_seg() for preparing batch elements,
which simplifies the code without performance degradation.
Signed-off-by: Xuan Ding
---
lib/
On 4/19/22 05:43, xuan.d...@intel.com wrote:
From: Xuan Ding
This patches refactors copy_desc_to_mbuf() used by the sync
path to support both sync and async descriptor to mbuf filling.
Signed-off-by: Xuan Ding
---
lib/vhost/vhost.h | 1 +
lib/vhost/virtio_net.c | 48 ++
On 4/22/22 13:06, David Marchand wrote:
We (at RH) have some issues with our email infrastructure, so I can't
reply inline of the patch.
Copy/pasting the code:
+static __rte_always_inline uint16_t
+async_poll_dequeue_completed_split(struct virtio_net *dev, uint16_t queue_id,
+ struct rte_mbu
During EAL init, all buses are probed and the devices found are
initialized. On eal_cleanup(), the inverse does not happen, meaning any
allocated memory and other configuration will not be cleaned up
appropriately on exit.
Currently, in order for device cleanup to take place, applications must
cal