> -Original Message-
> From: Stephen Hemminger [mailto:step...@networkplumber.org]
> Sent: Wednesday, August 21, 2019 12:33 AM
> To: Van Haaren, Harry
> Cc: dev@dpdk.org; Stephen Hemminger
> Subject: [PATCH] service: print errors to rte log
>
> EAL should always use rte_log instead of pu
From: Stephen Hemminger
EAL should always use rte_log instead of putting errors to
stderr (which may be redirected to /dev/null in a daemon).
Also, checks for NULL before rte_free are unnecessary.
Minor code consistency improvements.
Signed-off-by: Stephen Hemminger
Signed-off-by: Harry van Haar
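A minimal sketch of the pattern being applied (the function and message below are hypothetical, not the actual service-library code):

#include <rte_log.h>

/* Hypothetical error path: report the failure through rte_log instead of
 * stderr, so the message is not lost when stderr is redirected to /dev/null. */
static void
report_start_failure(unsigned int id)
{
	/* Before: fprintf(stderr, "service %u: start failed\n", id); */
	RTE_LOG(ERR, EAL, "service %u: start failed\n", id);
}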
Some update for this thread.
In the most critical datapath of the mlx5 PMD, some rte_cio_w/rmb barriers ('dmb
osh' on aarch64) are in use.
C11 atomics are a good replacement for rte_smp_r/wmb to relax the data
synchronization barriers between CPUs.
However, the mlx5 PMD also needs to write data back to the HW, so
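A rough sketch of the distinction, with illustrative variable names (this is not mlx5 code): a C11 release store can take the place of rte_smp_wmb() for CPU-to-CPU publication, while writes that must be observed by the HW still need rte_cio_wmb(), which is 'dmb osh' on aarch64.

#include <stdint.h>
#include <rte_atomic.h>

/* CPU-to-CPU publication: a release store replaces rte_smp_wmb() + store. */
static void
publish_index(uint16_t *idx_shared, uint16_t idx)
{
	__atomic_store_n(idx_shared, idx, __ATOMIC_RELEASE);
}

/* CPU-to-device publication: descriptor writes must be visible to the HW
 * before the doorbell write, hence the coherent-I/O write barrier. */
static void
ring_doorbell(volatile uint32_t *db_reg, uint32_t val)
{
	rte_cio_wmb();
	*db_reg = val;
}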
1/3: fix vfio unmap that fails unexpectedly
2/3: fix vfio unmap that succeeds unexpectedly
3/3: add unit tests for eal vfio
Signed-off-by: Chaitanya Babu Talluri
Chaitanya Babu Talluri (3):
lib/eal: fix vfio unmap that fails unexpectedly
lib/eal: fix vfio unmap that succeeds unexpectedly
a
Stephen Hemminger writes:
> The function rte_eal_init_alert ends up printing the same message
> twice: once via RTE_LOG and once to stderr. Remove the fprintf
> to stderr since it is redundant.
>
> Signed-off-by: Stephen Hemminger
> ---
This was originally added at your suggestion:
http://mail
Unmap of multiple pages fails after a sequence of partial map/unmaps.
The scenario is that multiple maps are created in user_mem_maps,
after multiple map/unmap/remap sequences.
For example:
Steps:
1. Map 3 pages together
2. Un-map page 1
3. Re-map page 1
4. Un-map page 2
5. Re-map page 2
6. Un-m
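A sketch of the failing sequence using the public VFIO container API; the addresses, page size and error handling are placeholders.

#include <stdint.h>
#include <rte_vfio.h>

static int
repro_partial_unmap(uint64_t va, uint64_t iova, uint64_t pgsz)
{
	int cfd = RTE_VFIO_DEFAULT_CONTAINER_FD;

	/* 1. Map 3 pages together. */
	if (rte_vfio_container_dma_map(cfd, va, iova, 3 * pgsz) != 0)
		return -1;
	/* 2./3. Un-map and re-map page 1. */
	rte_vfio_container_dma_unmap(cfd, va, iova, pgsz);
	rte_vfio_container_dma_map(cfd, va, iova, pgsz);
	/* 4./5. Un-map and re-map page 2. */
	rte_vfio_container_dma_unmap(cfd, va + pgsz, iova + pgsz, pgsz);
	rte_vfio_container_dma_map(cfd, va + pgsz, iova + pgsz, pgsz);
	/* 6. Un-mapping all 3 pages in one call is what fails unexpectedly. */
	return rte_vfio_container_dma_unmap(cfd, va, iova, 3 * pgsz);
}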
Un-map of a page with a valid virtual address and
another page's IOVA succeeds unexpectedly.
An entry in user_mem_maps can refer to multiple pages.
Currently, in such a case, to unmap a single page the VA
and IOVA of the entry in user_mem_maps are
checked, but not per page (based on the
page size); this is
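A sketch of the trigger for this second issue, again using the public VFIO container API with placeholder addresses: a valid VA combined with the IOVA of a different page should be rejected but currently is not.

#include <stdint.h>
#include <rte_vfio.h>

static int
repro_mismatched_unmap(uint64_t va, uint64_t iova, uint64_t pgsz)
{
	int cfd = RTE_VFIO_DEFAULT_CONTAINER_FD;

	if (rte_vfio_container_dma_map(cfd, va, iova, 2 * pgsz) != 0)
		return -1;
	/* VA of the first page, IOVA of the second page: expected to fail. */
	return rte_vfio_container_dma_unmap(cfd, va, iova + pgsz, pgsz);
}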
Unit test cases are added for the eal vfio library.
eal_vfio_autotest is added to the meson build file.
Signed-off-by: Chaitanya Babu Talluri
---
app/test/Makefile        |   1 +
app/test/meson.build     |   2 +
app/test/test_eal_vfio.c | 728 +++
3 files changed, 731
On 21-Aug-19 2:02 PM, Chaitanya Babu Talluri wrote:
Unmap of multiple pages fails after a sequence of partial map/unmaps.
The scenario is that multiple maps are created in user_mem_maps,
after multiple map/unmap/remap sequences.
For example:
Steps:
1. Map 3 pages together
2. Un-map page 1
3. R
On 21-Aug-19 2:02 PM, Chaitanya Babu Talluri wrote:
Un-map of a page with a valid virtual address and
another page's IOVA succeeds unexpectedly.
An entry in user_mem_maps can refer to multiple pages.
Currently, in such a case, to unmap a single page the VA
and IOVA of the entry in user_mem_maps are
checked bu
Chaitanya Babu Talluri writes:
> Unit test cases are added for eal vfio library.
> eal_vfio_autotest added to meson build file.
>
> Signed-off-by: Chaitanya Babu Talluri
> ---
Thanks for adding unit tests for the vfio library.
In this case, there seem to be some failures - can you help determ
When calling the v4 API to set up RSS, ESX expects
IPv4/6 TCP RSS to be set/requested mandatorily.
This patch will:
- Set IPv4/6 TCP RSS when these have not been set. A warning
message is logged to make sure we warn the application that we are
setting IPv4/6 TCP RSS when it was not set.
- An additional chec
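A sketch of the described behaviour (the helper name is hypothetical and this is not the actual vmxnet3 code): OR in the mandatory TCP RSS types and warn when the application did not request them.

#include <rte_ethdev.h>
#include <rte_log.h>

static void
fixup_rss_for_esx(struct rte_eth_rss_conf *rss_conf)
{
	const uint64_t mandatory = ETH_RSS_NONFRAG_IPV4_TCP |
				   ETH_RSS_NONFRAG_IPV6_TCP;

	if ((rss_conf->rss_hf & mandatory) != mandatory) {
		RTE_LOG(WARNING, PMD,
			"forcing IPv4/6 TCP RSS: required by the ESX v4 RSS API\n");
		rss_conf->rss_hf |= mandatory;
	}
}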
This code was added 7+ years ago (commit fb022b85ba),
presumably when variant TSCs were still somewhat
common? But this code doesn't do anything except print
a warning, and the warning doesn't give any kind of
advice to the user, so let's just remove it.
While the warning has no functional meanin
This code was added 7+ years ago:
commit fb022b85bae4 ("timer: check TSC reliability")
presumably when variant TSCs were still somewhat
common? But this code doesn't do anything except print
a warning, and the warning doesn't give any kind of
advice to the user, so let's just remove it.
While t
Ideally, get_tsc_freq_arch() is able to provide the
TSC rate using architecture-specific means. When that
is not possible, DPDK reverts to calculating the
TSC rate with a 100ms nanosleep or 1s sleep. The latter
occurs more frequently in VMs which often do not have
access to the data they need fro
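A rough sketch of the fallback estimation described above (not the actual EAL code; rounding and error handling omitted): sample the TSC around a 100ms nanosleep and scale to Hz.

#include <stdint.h>
#include <time.h>
#include <rte_cycles.h>

static uint64_t
estimate_tsc_hz(void)
{
	struct timespec t = { .tv_sec = 0, .tv_nsec = 100000000L }; /* 100 ms */
	uint64_t start = rte_rdtsc();

	nanosleep(&t, NULL);
	return (rte_rdtsc() - start) * 10; /* scale the 100 ms sample to 1 s */
}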
From: Pavan Nikhilesh
Add new Rx offload flags `DEV_RX_OFFLOAD_RSS_HASH` and
`DEV_RX_OFFLOAD_FLOW_MARK`. These flags can be used to
enable/disable PMD writes to the rte_mbuf fields `hash.rss` and `hash.fdir.hi`
and also `ol_flags:PKT_RX_RSS_HASH` and `ol_flags:PKT_RX_FDIR`.
Add new packet type set functi
From: Pavan Nikhilesh
Add `rte_eth_dev_set_supported_ptypes` function that will allow the
application to inform the PMD of the packet types it is interested in.
Based on the ptypes set, PMDs can optimize their Rx path.
- If the application doesn't want any ptype information, it can call
`rte_eth_dev_set_s
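A usage sketch, assuming the proposed function takes a port id and a ptype mask (the exact prototype is not shown in this excerpt):

#include <stdint.h>
#include <rte_ethdev.h>

/* Assumed prototype of the proposed API; not part of today's rte_ethdev.h. */
int rte_eth_dev_set_supported_ptypes(uint16_t port_id, uint32_t ptype_mask);

/* An application that never reads rte_mbuf::packet_type passes a mask of 0
 * so the PMD may skip packet type parsing in its Rx path. */
static int
disable_ptype_parsing(uint16_t port_id)
{
	return rte_eth_dev_set_supported_ptypes(port_id, 0);
}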
From: Pavan Nikhilesh
Add new Rx offload flag `DEV_RX_OFFLOAD_RSS_HASH` which can be used to
enable/disable PMD writes to `rte_mbuf::hash::rss`.
PMDs notify the validity of `rte_mbuf::hash::rss` to the application
by enabling the `PKT_RX_RSS_HASH` flag in `rte_mbuf::ol_flags`.
Signed-off-by: Pavan Ni
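A sketch of how an application would use this, assuming the flag is accepted as proposed; the mbuf fields and flags on the read side already exist.

#include <rte_ethdev.h>
#include <rte_mbuf.h>

static void
enable_rss_hash_delivery(struct rte_eth_conf *conf)
{
	conf->rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
}

static uint32_t
get_rss_hash(const struct rte_mbuf *m)
{
	/* The PMD reports validity of hash.rss via ol_flags. */
	return (m->ol_flags & PKT_RX_RSS_HASH) ? m->hash.rss : 0;
}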
From: Pavan Nikhilesh
Add DEV_RX_OFFLOAD_RSS_HASH flag for all PMDs that support RSS hash
delivery.
Signed-off-by: Pavan Nikhilesh
---
drivers/net/bnxt/bnxt_ethdev.c | 3 ++-
drivers/net/cxgbe/cxgbe.h| 3 ++-
drivers/net/dpaa/dpaa_ethdev.c | 3 ++-
drivers/net/dpaa2/
From: Pavan Nikhilesh
Add DEV_RX_OFFLOAD_FLOW_MARK flag for all PMDs that support flow action
flag and mark.
Signed-off-by: Pavan Nikhilesh
---
drivers/net/bnxt/bnxt_ethdev.c | 3 ++-
drivers/net/enic/enic_res.c | 3 ++-
drivers/net/i40e/i40e_ethdev.c | 3 ++-
dri
From: Pavan Nikhilesh
Since pipeline_generic uses `rte_mbuf::hash::rss`, add the new Rx offload
flag `DEV_RX_OFFLOAD_RSS_HASH` to inform PMD to copy the RSS hash result
into the mbuf.
Signed-off-by: Pavan Nikhilesh
---
Currently, there is no means to retrieve set configuration from an ethdev
w
From: Pavan Nikhilesh
Add new Rx offload flag `DEV_RX_OFFLOAD_FLOW_MARK` that can be used to
enable/disable PMD writes to `rte_mbuf::hash::fdir::hi` and
`rte_mbuf::ol_flags` when flow actions `RTE_FLOW_ACTION_MARK` and
`RTE_FLOW_ACTION_FLAG` are enabled.
PMDs notify the validity of `rte_mbuf::ha
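A read-side sketch, assuming the application enables the proposed DEV_RX_OFFLOAD_FLOW_MARK offload and installs a rule with RTE_FLOW_ACTION_TYPE_MARK; the mbuf fields and flags below already exist.

#include <rte_mbuf.h>

static uint32_t
get_flow_mark(const struct rte_mbuf *m, uint32_t no_mark)
{
	return (m->ol_flags & PKT_RX_FDIR_ID) ? m->hash.fdir.hi : no_mark;
}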
From: Pavan Nikhilesh
Disable packet type parsing in examples that don't use
`rte_mbuf::packet_type` by setting ptype_mask to 0 in
`rte_eth_dev_set_supported_ptypes`
Signed-off-by: Pavan Nikhilesh
---
examples/bbdev_app/main.c | 1 +
examples/bond/main.c
Matan has kindly accepted to replace me as maintainer of the mlx5 PMD.
Good luck!
Signed-off-by: Yongseok Koh
---
MAINTAINERS | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/MAINTAINERS b/MAINTAINERS
index 4100260861..30dbb8be55 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@
Ideally, get_tsc_freq_arch() is able to provide the
TSC rate using architecture-specific means. When that
is not possible, DPDK reverts to calculating the
TSC rate with a 100ms nanosleep or 1s sleep. The latter
occurs more frequently in VMs which often do not have
access to the data they need fro
Please disregard my last message. It was mistakenly sent to the wrong group.
Sorry about that.
Thanks,
Phil Yang
> -Original Message-
> From: dev On Behalf Of Phil Yang (Arm
> Technology China)
> Sent: Wednesday, August 21, 2019 5:58 PM
> To: Honnappa Nagarahalli
> Cc: dev@dpdk.org; nd
On 08/20, alvinx.zh...@intel.com wrote:
>From: Alvin Zhang
>
>If a VF driver in a VM continuously sends invalid messages via mailbox,
>it will waste CPU cycles on the PF driver and impact other VF drivers'
>configuration. The new feature can count the number of invalid and
>unsupported messages from VFs, when
Wednesday, August 21, 2019 11:56 PM, Yongseok Koh:
> Subject: [dpdk-dev] [PATCH] maintainers: update for Mellanox mlx5 PMD
>
> Matan has kindly accepted to replace me as maintainer of the mlx5 PMD.
> Good luck!
>
> Signed-off-by: Yongseok Koh
Thank you Koh for all the hard work and the mainte
From: Kalesh AP
Refactor the init and uninit functions so that the driver can fail
the eth_dev_ops callbacks and access to Tx and Rx queues
when the device is in reset or in an error state.
Transmit and receive queues are freed during reset cleanup and
reallocated during recovery, so we block all data path
From: Kalesh AP
Signed-off-by: Kalesh AP
Reviewed-by: Somnath Kotur
Reviewed-by: Ajit Khaparde
---
drivers/net/bnxt/hsi_struct_def_dpdk.h | 137 +
1 file changed, 137 insertions(+)
diff --git a/drivers/net/bnxt/hsi_struct_def_dpdk.h
b/drivers/net/bnxt/hsi_struct_def_
This patchset adds support to monitor the health of the firmware and the
underlying device and to recover to an operational state in case of error.
We can also detect if a FW upgrade is in progress, quiesce all
access to the device, and recover once FW indicates everything is ready.
Patchset against
From: Kalesh AP
When a FW upgrade is initiated, the current instance
of FW issues a HWRM_ASYNC_EVENT_CMPL_EVENT_ID_RESET_NOTIFY
async notification to the driver. On receiving this notification,
the PMD shall quiesce itself and poll on the HWRM_VER_GET FW
command at regular intervals.
Once the V
From: Kalesh AP
Use the latest firmware API to inform firmware about IF state changes.
Firmware has the option to clean up resources during IF down and
to require the driver to reserve resources again during IF up.
Signed-off-by: Kalesh AP
Reviewed-by: Santoshkumar Karanappa Rastapur
Reviewed-by:
From: Kalesh AP
1. Advertise the HWRM_FUNC_DRV_RGTR_INPUT_FLAGS_ERROR_RECOVERY_SUPPORT flag
in the FUNC_DRV_RGTR command.
2. Request the async event ASYNC_EVENT_CMPL_EVENT_ID_ERROR_RECOVERY
in the FUNC_DRV_RGTR command.
3. Handle the async event EVENT_ID_ERROR_RECOVERY from FW.
Error recov
From: Kalesh AP
Added code to perform FW_RESET. When the driver detects an error in FW,
it has to initiate recovery by resetting the cores. FW advertises
the method to do a core reset, and the reset register offsets and values
to perform the reset, in the response of the HWRM_ERROR_RECOVERY_QCFG command.
There are 2
From: Kalesh AP
When IOMMU is available, EAL picks IOVA as VA as the default IOVA mode.
This causes the bnxt driver to log warning messages saying
"Memzone physical address same as virtual." and "Using rte_mem_virt2iova()"
during load.
Reduce the verbosity of logs to DEBUG.
Signed-off-by: Kales
From: Kalesh AP
HWRM_ERROR_RECOVERY_QCFG command returns the FW status registers offset
for periodic firmware health check monitoring. Map them to GRC window 2.
Signed-off-by: Kalesh AP
Reviewed-by: Somnath Kotur
Signed-off-by: Ajit Khaparde
---
drivers/net/bnxt/bnxt.h| 22 ++
From: Kalesh AP
Use the BIT macro instead of bit fields.
Signed-off-by: Kalesh AP
Reviewed-by: Somnath Kotur
Signed-off-by: Ajit Khaparde
---
drivers/net/bnxt/bnxt.h | 73 ++--
drivers/net/bnxt/bnxt_util.h | 4 ++
2 files changed, 41 insertions(+), 36 deleti
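A small illustration of the conversion (flag names are invented for the example, not taken from bnxt.h):

#include <stdint.h>

#define BIT(n)		(1UL << (n))

#define FLAG_REGISTERED	BIT(0)
#define FLAG_RESETTING	BIT(1)

static inline int
is_resetting(uint32_t flags)
{
	/* Explicit flag bits replace the previous C bit-fields. */
	return (flags & FLAG_RESETTING) != 0;
}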
From: Kalesh AP
In the driver-initiated error recovery process, the driver has to know
the register offsets and values to initiate a FW reset. The HWRM command
HWRM_ERROR_RECOVERY_QCFG is used to obtain all the registers and values
required to initiate a FW reset. This command response includes
FW hear
From: Kalesh AP
When the driver receives the error recovery notify event from FW
for the first time, it has to read the heartbeat count register and
the recovery count register and schedule the FW health check task to
periodically monitor the FW health.
FW may send this event at a later time whe
From: Kalesh AP
When firmware hits some unrecoverable error condition, it initiates
recovery by sending an async event EVENT_CMPL_EVENT_ID_RESET_NOTIFY
with data1 set to RESET_NOTIFY_EVENT_DATA1_REASON_CODE_FW_EXCEPTION_FATAL
to all host drivers and then resets the chip.
The recovery pro
From: Kalesh AP
Periodically poll the FW heartbeat register and FW recovery counter
registers to check the FW health. The polling frequency is
advertised by the FW in the HWRM_ERROR_RECOVERY_QCFG response.
Schedule the task upon receiving the async event from FW.
Signed-off-by: Kalesh AP
Reviewed-
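A generic sketch of the polling scheme, not the bnxt implementation (the structure and register-read callback are placeholders): re-arm an EAL alarm at the FW-advertised interval and compare the heartbeat counter with the last value seen.

#include <stdint.h>
#include <rte_alarm.h>
#include <rte_log.h>

struct fw_health {
	uint32_t (*read_heartbeat)(void *ctx); /* hypothetical register read */
	void *ctx;
	uint32_t last_heartbeat;
	uint64_t poll_interval_us; /* from the HWRM_ERROR_RECOVERY_QCFG response */
};

static void
fw_health_check_cb(void *arg)
{
	struct fw_health *h = arg;
	uint32_t hb = h->read_heartbeat(h->ctx);

	if (hb == h->last_heartbeat)
		RTE_LOG(ERR, PMD, "FW heartbeat stalled, starting recovery\n");
	h->last_heartbeat = hb;

	/* Re-arm for the next poll. */
	rte_eal_alarm_set(h->poll_interval_us, fw_health_check_cb, h);
}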
On 08/21, Jim Harris wrote:
>Ideally, get_tsc_freq_arch() is able to provide the
>TSC rate using architecture-specific means. When that
>is not possible, DPDK reverts to calculating the
>TSC rate with a 100ms nanosleep or 1s sleep. The latter
>occurs more frequently in VMs which often do not have
While waiting for a ticket lock, cores repeatedly poll the lock variable.
This polling is replaced by the rte_wait_until_equal API.
Running ticketlock_autotest on ThunderX2, Ampere eMAG80, and Arm N1SDP[1],
there were variances between runs, but no notable performance gain or
degradation was seen with and without th
The rte_wait_until_equal_xx APIs abstract the functionality of 'polling
for a memory location to become equal to a given value'.
Signed-off-by: Gavin Hu
Reviewed-by: Ruifeng Wang
Reviewed-by: Steve Capper
Reviewed-by: Ola Liljedahl
Reviewed-by: Honnappa Nagarahalli
Reviewed-by: Phil Yang
Acke
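An intended-usage sketch, assuming the 32-bit variant with an explicit memory-ordering argument as in later revisions of this series; on aarch64 with the proposed WFE option enabled it maps to sevl/wfe instead of busy polling.

#include <stdint.h>
#include <rte_pause.h>

static void
wait_for_ready(volatile uint32_t *status)
{
	/* Previously:
	 * while (__atomic_load_n(status, __ATOMIC_ACQUIRE) != 1)
	 *	rte_pause();
	 */
	rte_wait_until_equal_32(status, 1, __ATOMIC_ACQUIRE);
}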
Instead of polling for tail to be updated, use wfe instruction.
Signed-off-by: Gavin Hu
Reviewed-by: Ruifeng Wang
Reviewed-by: Steve Capper
Reviewed-by: Ola Liljedahl
Reviewed-by: Honnappa Nagarahalli
---
lib/librte_ring/rte_ring_c11_mem.h | 4 ++--
lib/librte_ring/rte_ring_generic.h | 3 +--
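An illustrative shape of the change (the actual patch edits the tail-update helper inside the ring headers):

#include <stdint.h>
#include <rte_pause.h>

static inline void
update_tail_sketch(volatile uint32_t *tail, uint32_t old_val, uint32_t new_val)
{
	/* Wait for preceding enqueues/dequeues to publish their tail first. */
	rte_wait_until_equal_32(tail, old_val, __ATOMIC_RELAXED);
	__atomic_store_n(tail, new_val, __ATOMIC_RELEASE);
}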
DPDK has multiple use cases where the core repeatedly polls a location in
memory. This polling results in many cache and memory transactions.
Arm architecture provides WFE (Wait For Event) instruction, which allows
the CPU core to enter a low-power state until it is woken up by an update to the
memory
There are two conflicting definitions; for more
details, refer to [1].
include/rte_atomic_64.h:19: error: "dmb" redefined [-Werror]
drivers/bus/fslmc/mc/fsl_mc_sys.h:36: note: this is the location of the
previous definition
#define dmb() {__asm__ __volatile__("" : : : "memory"); }
The
While acquiring a spinlock, cores repeatedly poll the lock variable.
This polling is replaced by the rte_wait_until_equal API.
Running the micro benchmarks and the testpmd and l3fwd traffic tests
on ThunderX2, Ampere eMAG80 and Arm N1SDP, everything went well and no
notable performance gain or degradation was
Add the RTE_USE_WFE configuration entry for aarch64, disabled by default.
It can be enabled selectively based on the performance benchmarking.
Signed-off-by: Gavin Hu
Reviewed-by: Ruifeng Wang
Reviewed-by: Steve Capper
Reviewed-by: Honnappa Nagarahalli
Reviewed-by: Phil Yang
Acked-by: Pavan N
The af_packet driver leaves a stale socket after the device is removed.
Ring buffers are memory mapped when the device is added using rte_dev_probe,
but there is no corresponding munmap call when the device is removed/closed.
This commit fixes the issue by calling munmap
from rte_pmd_af_packet_remove().
Bugzilla ID
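A hypothetical illustration of the fix; the pointer and length come from whatever the probe path recorded when it mmap'ed the ring area, and are not the actual af_packet PMD field names.

#include <stddef.h>
#include <sys/mman.h>

static void
release_ring_mapping(void **map, size_t map_len)
{
	if (*map != NULL && *map != MAP_FAILED) {
		munmap(*map, map_len);
		*map = NULL;
	}
}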
From: Honnappa Nagarahalli
Add a section to describe a design to integrate QSBR RCU library
with other libraries in DPDK.
Signed-off-by: Honnappa Nagarahalli
Reviewed-by: Gavin Hu
Reviewed-by: Ruifeng Wang
---
doc/guides/prog_guide/rcu_lib.rst | 51 +++
1 file cha
Currently, the tbl8 group is freed even though the readers might be
using the tbl8 group entries. The freed tbl8 group can be reallocated
quickly. This results in incorrect lookup results.
The RCU QSBR process is integrated for safe tbl8 group reclamation.
Refer to RCU documentation to understand various
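A simplified reclaim sketch, not the actual LPM integration: instead of freeing a tbl8 group immediately, wait until every registered reader has reported a quiescent state, then return the group to the free pool.

#include <stdint.h>
#include <rte_rcu_qsbr.h>

static void
reclaim_tbl8_group(struct rte_rcu_qsbr *qsv, void (*free_group)(uint32_t),
		uint32_t group_idx)
{
	/* Blocks until all registered readers have passed through
	 * a quiescent state at least once. */
	rte_rcu_qsbr_synchronize(qsv, RTE_QSBR_THRID_INVALID);
	free_group(group_idx);
}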
The peek API allows fetching the next available object in the ring
without dequeuing it. This helps in scenarios where dequeuing of
objects depends on their value.
Signed-off-by: Dharmik Thakkar
Signed-off-by: Ruifeng Wang
Reviewed-by: Honnappa Nagarahalli
Reviewed-by: Gavin Hu
---
lib/librte_
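A usage sketch for the proposed API; the rte_ring_peek() prototype below is assumed from the description (return 0 on success, non-zero when the ring is empty) and is not in today's rte_ring.h.

#include <rte_ring.h>

int rte_ring_peek(struct rte_ring *r, void **obj_p); /* assumed prototype */

/* Look at the next object and only dequeue it once it is ready, e.g. a
 * token whose value indicates it can now be reclaimed. */
static void *
dequeue_if_ready(struct rte_ring *r, int (*is_ready)(void *))
{
	void *obj;

	if (rte_ring_peek(r, &obj) != 0)
		return NULL;		/* ring empty */
	if (!is_ready(obj))
		return NULL;		/* leave it on the ring */
	if (rte_ring_dequeue(r, &obj) != 0)
		return NULL;
	return obj;
}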
This patchset integrates RCU QSBR support with LPM library.
A document is added with the suggested design for integrating the RCU
library with other libraries in DPDK.
As an example, the LPM library adds the integration. RCU is used
to safely free tbl8 groups that can be recycled. The table will not
be reclaimed or