Re: 23.11.2 patches review and test

2024-08-28 Thread YangHang Liu
Red Hat QE tested the 18 scenarios below on RHEL 9.4 and didn't find any new
DPDK issues.

   - VM with device assignment (PF) throughput testing (1G hugepage size):
   PASS
   - VM with device assignment (PF) throughput testing (2M hugepage size):
   PASS
   - VM with device assignment (VF) throughput testing: PASS
   - PVP (host dpdk testpmd as vswitch) 1Q throughput testing: PASS
   - PVP vhost-user 2Q throughput testing: PASS
   - PVP vhost-user 1Q - cross numa node throughput testing: PASS
   - VM with vhost-user 2 queues throughput testing: PASS
   - vhost-user reconnect with dpdk-client, qemu-server qemu reconnect: PASS
   - vhost-user reconnect with dpdk-client, qemu-server ovs reconnect: PASS
   - PVP reconnect with dpdk-client, qemu-server: PASS
   - PVP 1Q live migration testing: PASS
   - PVP 1Q cross numa node live migration testing: PASS
   - VM with ovs+dpdk+vhost-user 1Q live migration testing: PASS
   - VM with ovs+dpdk+vhost-user 1Q live migration testing (2M): PASS
   - VM with ovs+dpdk+vhost-user 2Q live migration testing: PASS
   - VM with ovs+dpdk+vhost-user 4Q live migration testing: PASS
   - Host PF + DPDK testing: PASS
   - Host VF + DPDK testing: PASS

Test Versions:

   - qemu-kvm-8.2.0
   - kernel 5.14
   - libvirt 10.0
   - openvswitch 3.3
   - git log

commit 8401a3e84b878f69086a6f7feecd0526ea756a67
Author: Xueming Li 
Date:   Thu Aug 22 19:59:58 2024 +0800
version: 23.11.2-rc2
Signed-off-by: Xueming Li 


   - Test device : X540-AT2 NIC(ixgbe, 10G)

Tested-by: Yanghang Liu

On Thu, Aug 22, 2024 at 8:07 PM Xueming Li  wrote:

> Hi all,
>
> Here is a list of patches targeted for stable release 23.11.2.
>
> The planned date for the final release is 31st August.
>
> Please help with testing and validation of your use cases and report
> any issues/results with reply-all to this mail. For the final release
> the fixes and reported validations will be added to the release notes.
>
> A release candidate tarball can be found at:
>
> https://dpdk.org/browse/dpdk-stable/tag/?id=v23.11.2-rc2
>
> These patches are located at branch 23.11 of dpdk-stable repo:
> https://dpdk.org/browse/dpdk-stable/
>
> Thanks.
>
> Xueming Li 
>
> ---
> Abdullah Ömer Yamaç (1):
>   hash: fix RCU reclamation size
>
> Akhil Goyal (1):
>   test/crypto: fix enqueue/dequeue callback case
>
> Alex Vesker (1):
>   net/mlx5/hws: fix port ID on root item convert
>
> Alexander Kozyrev (2):
>   net/mlx5: break flow resource release loop
>   app/testpmd: add postpone option to async flow destroy
>
> Alexander Skorichenko (1):
>   net/netvsc: fix MTU set
>
> Amit Prakash Shukla (1):
>   doc: fix DMA performance test invocation
>
> Anatoly Burakov (7):
>   net/e1000/base: fix link power down
>   fbarray: fix incorrect lookahead behavior
>   fbarray: fix incorrect lookbehind behavior
>   fbarray: fix lookahead ignore mask handling
>   fbarray: fix lookbehind ignore mask handling
>   fbarray: fix finding for unaligned length
>   malloc: fix multi-process wait condition handling
>
> Andrew Boyer (1):
>   net/ionic: fix mbuf double-free when emptying array
>
> Ankur Dwivedi (1):
>   common/cnxk: fix integer overflow
>
> Anoob Joseph (1):
>   common/cnxk: fix segregation of logs based on module
>
> Apeksha Gupta (2):
>   bus/dpaa: fix memory leak in bus scan
>   common/dpaax: fix node array overrun
>
> Arkadiusz Kusztal (2):
>   test/crypto: fix RSA cases in QAT suite
>   crypto/qat: fix placement of OOP offset
>
> Bing Zhao (4):
>   app/testpmd: fix indirect action flush
>   net/mlx5: fix end condition of reading xstats
>   net/mlx5: fix uplink port probing in bonding mode
>   common/mlx5: remove unneeded field when modify RQ table
>
> Brian Dooley (1):
>   crypto/qat: fix GEN4 write
>
> Bruce Richardson (2):
>   net/cpfl: fix 32-bit build
>   ethdev: fix device init without socket-local memory
>
> Chaoyong He (10):
>   net/nfp: fix resource leak in secondary process
>   net/nfp: fix configuration BAR
>   net/nfp: fix xstats for multi PF firmware
>   app/testpmd: fix help string of BPF load command
>   net/nfp: fix IPv6 TTL and DSCP flow action
>   net/nfp: fix allocation of switch domain
>   net/nfp: fix flow mask table entry
>   net/nfp: remove redundant function call
>   net/nfp: forbid offload flow rules with empty action list
>   net/nfp: fix firmware abnormal cleanup
>
> Chengwen Feng (3):
>   ethdev: fix strict aliasing in link up
>   net/hns3: check Rx DMA address alignment
>   dma/hisilicon: remove support for HIP09 platform
>
> Chenming Chang (1):
>   hash: fix return code description in Doxygen
>
> Chinh Cao (1):
>   net/ice/base: fix return type of bitmap hamming weight
>
> Ciara Loftus (4):
>   net/af_xdp: fix port ID in Rx mbuf
>   net/af_xdp: count mbuf allocation failures
>   net/af_xdp: fix stats reset
>   net/af_xdp: rem

Re: [PATCH] net/ice: support for more flexible loading of DDP package

2024-08-28 Thread Bruce Richardson
On Wed, Aug 28, 2024 at 11:53:35AM +0800, Zhichao Zeng wrote:
> The "Dynamic Device Personalization" package is loaded at initialization
> time by the driver, but the specific package file loaded depends upon
> what package file is found first by searching through a hard-coded list
> of firmware paths.
> 
> To enable greater control over the package loading, this commit adds two
> ways to support custom DDP packages:
> 1. Add device option to choose a specific DDP package file to load.
>For example:
>-a 80:00.0,ddp_pkg_file=/path/to/ice-version.pkg
> 2. Read firmware search path from
>"/sys/module/firmware_class/parameters/path" like the kernel behavior.
> 
> Signed-off-by: Bruce Richardson 
> Signed-off-by: Zhichao Zeng 

Hi Zhichao,

since there are two different methods being supported for picking a DDP
package this patch would be better split into two, one for each method
added.

The support for #1 above is already on-list as a standalone patch[1], so you
really only need to do a new patch for #2 above. However, I'm ok for you to
take my patch and include it in a 2-patch set for this if you prefer, since
both patches will be related to choosing a DDP file. I'll leave it up to
you whether v2 is a single patch for the search path, or a 2-patch set
including [1].

Regards,
/Bruce

[1] 
https://patches.dpdk.org/project/dpdk/patch/20240812152815.1132697-5-bruce.richard...@intel.com/

> ---
>  doc/guides/nics/ice.rst  | 12 +++
>  drivers/net/ice/ice_ethdev.c | 61 
>  drivers/net/ice/ice_ethdev.h |  2 ++
>  3 files changed, 75 insertions(+)
> 
> diff --git a/doc/guides/nics/ice.rst b/doc/guides/nics/ice.rst
> index ae975d19ad..0484fafbc1 100644
> --- a/doc/guides/nics/ice.rst
> +++ b/doc/guides/nics/ice.rst
> @@ -108,6 +108,18 @@ Runtime Configuration
>  
>  -a 80:00.0,default-mac-disable=1
>  
> +- ``DDP Package File``
> +
> +  Rather than have the driver search for the DDP package to load,
> +  or to override what package is used,
> +  the ``ddp_pkg_file`` option can be used to provide the path to a specific 
> package file.
> +  For example::
> +
> +-a 80:00.0,ddp_pkg_file=/path/to/ice-version.pkg
> +
> +  There is also support for customizing the firmware search path: the driver
> +  will read the search path from "/sys/module/firmware_class/parameters/path"
> +  and try to load the DDP package from there.
> +
>  - ``Protocol extraction for per queue``
>  
>Configure the RX queues to do protocol extraction into mbuf for protocol
> diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
> index 304f959b7e..bd78c14000 100644
> --- a/drivers/net/ice/ice_ethdev.c
> +++ b/drivers/net/ice/ice_ethdev.c
> @@ -36,6 +36,7 @@
>  #define ICE_ONE_PPS_OUT_ARG   "pps_out"
>  #define ICE_RX_LOW_LATENCY_ARG"rx_low_latency"
>  #define ICE_MBUF_CHECK_ARG   "mbuf_check"
> +#define ICE_DDP_FILENAME  "ddp_pkg_file"
>  
>  #define ICE_CYCLECOUNTER_MASK  0xULL
>  
> @@ -52,6 +53,7 @@ static const char * const ice_valid_args[] = {
>   ICE_RX_LOW_LATENCY_ARG,
>   ICE_DEFAULT_MAC_DISABLE,
>   ICE_MBUF_CHECK_ARG,
> + ICE_DDP_FILENAME,
>   NULL
>  };
>  
> @@ -692,6 +694,18 @@ handle_field_name_arg(__rte_unused const char *key, 
> const char *value,
>   return 0;
>  }
>  
> +static int
> +handle_ddp_filename_arg(__rte_unused const char *key, const char *value, 
> void *name_args)
> +{
> + const char **filename = name_args;
> + if (strlen(value) >= ICE_MAX_PKG_FILENAME_SIZE) {
> + PMD_DRV_LOG(ERR, "The DDP package filename is too long : '%s'", 
> value);
> + return -1;
> + }
> + *filename = strdup(value);
> + return 0;
> +}
> +
>  static void
>  ice_check_proto_xtr_support(struct ice_hw *hw)
>  {
> @@ -1873,6 +1887,22 @@ ice_load_pkg_type(struct ice_hw *hw)
>   return package_type;
>  }
>  
> +static int ice_read_customized_path(char *pkg_file)
> +{
> + char buf[ICE_MAX_PKG_FILENAME_SIZE];
> + FILE *fp = fopen(ICE_PKG_FILE_CUSTOMIZED_PATH, "r");
> + if (fp == NULL) {
> + PMD_INIT_LOG(ERR, "Failed to read CUSTOMIZED_PATH");
> + return -EIO;
> + }
> + if (fscanf(fp, "%s\n", buf) > 0)
> + strncpy(pkg_file, buf, ICE_MAX_PKG_FILENAME_SIZE);
> + else
> + return -EIO;
> +
> + return 0;
> +}
> +
>  int ice_load_pkg(struct ice_adapter *adapter, bool use_dsn, uint64_t dsn)
>  {
>   struct ice_hw *hw = &adapter->hw;
> @@ -1882,12 +1912,28 @@ int ice_load_pkg(struct ice_adapter *adapter, bool 
> use_dsn, uint64_t dsn)
>   size_t bufsz;
>   int err;
>  
> + if (adapter->devargs.ddp_filename != NULL) {
> + strlcpy(pkg_file, adapter->devargs.ddp_filename, 
> sizeof(pkg_file));
> + if (rte_firmware_read(pkg_file, &buf, &bufsz) == 0) {
> + goto load_fw;
> + } else {
> + PMD_INIT_LOG(ERR, "Cannot load DDP file: %s\n", 
> pkg_fi

Re: 23.11.2 patches review and test

2024-08-28 Thread Xueming Li
Hi YangHang,

Thanks for the verification and feedback!

Best Regards,
Xueming

From: YangHang Liu 
Sent: Wednesday, August 28, 2024 3:23 PM
To: Xueming Li 
Cc: sta...@dpdk.org ; dev@dpdk.org ; Abhishek 
Marathe ; Ali Alnubani ; 
David Christensen ; Hemant Agrawal 
; Ian Stokes ; Jerin Jacob 
; John McNamara ; Ju-Hyoung Lee 
; Kevin Traynor ; Luca Boccassi 
; Pei Zhang ; Raslan Darawsheh 
; NBU-Contact-Thomas Monjalon (EXTERNAL) 
; benjamin.wal...@intel.com ; 
qian.q...@intel.com ; yuan.p...@intel.com 
; zhaoyan.c...@intel.com ; Chao 
Yang 
Subject: Re: 23.11.2 patches review and test

Red Hat QE tested the 18 scenarios below on RHEL 9.4 and didn't find any new
DPDK issues.

  *   VM with device assignment (PF) throughput testing (1G hugepage size): PASS
  *   VM with device assignment (PF) throughput testing (2M hugepage size): PASS
  *   VM with device assignment (VF) throughput testing: PASS
  *   PVP (host dpdk testpmd as vswitch) 1Q throughput testing: PASS
  *   PVP vhost-user 2Q throughput testing: PASS
  *   PVP vhost-user 1Q - cross numa node throughput testing: PASS
  *   VM with vhost-user 2 queues throughput testing: PASS
  *   vhost-user reconnect with dpdk-client, qemu-server qemu reconnect: PASS
  *   vhost-user reconnect with dpdk-client, qemu-server ovs reconnect: PASS
  *   PVP reconnect with dpdk-client, qemu-server: PASS
  *   PVP 1Q live migration testing: PASS
  *   PVP 1Q cross numa node live migration testing: PASS
  *   VM with ovs+dpdk+vhost-user 1Q live migration testing: PASS
  *   VM with ovs+dpdk+vhost-user 1Q live migration testing (2M): PASS
  *   VM with ovs+dpdk+vhost-user 2Q live migration testing: PASS
  *   VM with ovs+dpdk+vhost-user 4Q live migration testing: PASS
  *   Host PF + DPDK testing: PASS
  *   Host VF + DPDK testing: PASS

Test Versions:

  *   qemu-kvm-8.2.0
  *   kernel 5.14
  *   libvirt 10.0
  *   openvswitch 3.3
  *   git log

commit 8401a3e84b878f69086a6f7feecd0526ea756a67
Author: Xueming Li <xuemi...@nvidia.com>
Date:   Thu Aug 22 19:59:58 2024 +0800
version: 23.11.2-rc2
Signed-off-by: Xueming Li <xuemi...@nvidia.com>

  *   Test device : X540-AT2 NIC(ixgbe, 10G)

Tested-by: Yanghang Liu <yangh...@redhat.com>

On Thu, Aug 22, 2024 at 8:07 PM Xueming Li <xuemi...@nvidia.com> wrote:
Hi all,

Here is a list of patches targeted for stable release 23.11.2.

The planned date for the final release is 31st August.

Please help with testing and validation of your use cases and report
any issues/results with reply-all to this mail. For the final release
the fixes and reported validations will be added to the release notes.

A release candidate tarball can be found at:

https://dpdk.org/browse/dpdk-stable/tag/?id=v23.11.2-rc2

These patches are located at branch 23.11 of dpdk-stable repo:
https://dpdk.org/browse/dpdk-stable/

Thanks.

Xueming Li <xuemi...@nvidia.com>

---
Abdullah Ömer Yamaç (1):
  hash: fix RCU reclamation size

Akhil Goyal (1):
  test/crypto: fix enqueue/dequeue callback case

Alex Vesker (1):
  net/mlx5/hws: fix port ID on root item convert

Alexander Kozyrev (2):
  net/mlx5: break flow resource release loop
  app/testpmd: add postpone option to async flow destroy

Alexander Skorichenko (1):
  net/netvsc: fix MTU set

Amit Prakash Shukla (1):
  doc: fix DMA performance test invocation

Anatoly Burakov (7):
  net/e1000/base: fix link power down
  fbarray: fix incorrect lookahead behavior
  fbarray: fix incorrect lookbehind behavior
  fbarray: fix lookahead ignore mask handling
  fbarray: fix lookbehind ignore mask handling
  fbarray: fix finding for unaligned length
  malloc: fix multi-process wait condition handling

Andrew Boyer (1):
  net/ionic: fix mbuf double-free when emptying array

Ankur Dwivedi (1):
  common/cnxk: fix integer overflow

Anoob Joseph (1):
  common/cnxk: fix segregation of logs based on module

Apeksha Gupta (2):
  bus/dpaa: fix memory leak in bus scan
  common/dpaax: fix node array overrun

Arkadiusz Kusztal (2):
  test/crypto: fix RSA cases in QAT suite
  crypto/qat: fix placement of OOP offset

Bing Zhao (4):
  app/testpmd: fix indirect action flush
  net/mlx5: fix end condition of reading xstats
  net/mlx5: fix uplink port probing in bonding mode
  common/mlx5: remove unneeded field when modify RQ table

Brian Dooley (1):
  crypto/qat: fix GEN4 write

Bruce Richardson (2):
  net/cpfl: fix 32-bit build
  ethdev: fix device init without socket-local memory

Chaoyong He (10):
  net/nfp: fix resource leak in secondary process
  net/nfp: fix configuration BAR
  net/nfp: fix xstats for multi PF firmware
  app/testpmd: fix help string of BPF load command
  net/nfp: fix IPv6 TTL and DSCP flow action
  net/nfp: fix allocation of switch domain
  net/nfp: fix flow mask table entry
  net/nfp: remove redundant function 

RE: [PATCH] net/mana: support building the driver on arm64

2024-08-28 Thread Long Li
 
> I released this mail from the moderation queue, so the patch could make it to
> patchwork.
> 
> > The sender is lon...@linuxonhyperv.com, the author is
> lon...@microsoft.com. I hope that is okay.
> 
> The @linuxonhyperv.com address one is not registered to the dev ml, which is
> the reason why the mail got moderated.
> 
> Hope it is clear now..

Thank you, I have subscribed as lon...@linuxonhyperv.com. I don't know the 
reason why it got bounced and membership disabled in the past.

Long



Re: [RFC 0/2] introduce LLC aware functions

2024-08-28 Thread Burakov, Anatoly

On 8/27/2024 5:10 PM, Vipin Varghese wrote:

As core density continues to increase, chiplet-based
core packing has become a key trend. In AMD EPYC SoC
architectures, core complexes within the same chiplet
share a Last-Level Cache (LLC). By packing logical cores
within the same LLC, we can enhance pipeline processing
stages due to reduced latency and improved data locality.

To leverage these benefits, DPDK libraries and examples
can utilize localized lcores. This approach ensures more
consistent latencies by minimizing the dispersion of lcores
across different chiplet complexes and enhances packet
processing by ensuring that data for subsequent pipeline
stages is likely to reside within the LLC.

< Function: Purpose >
-
  - rte_get_llc_first_lcores: Retrieves all the first lcores in the shared LLC.
  - rte_get_llc_lcore: Retrieves all lcores that share the LLC.
  - rte_get_llc_n_lcore: Retrieves the first n or skips the first n lcores in 
the shared LLC.

< MACRO: Purpose >
--
RTE_LCORE_FOREACH_LLC_FIRST: iterates through the first lcore from each LLC.
RTE_LCORE_FOREACH_LLC_FIRST_WORKER: iterates through the first worker lcore
from each LLC.
RTE_LCORE_FOREACH_LLC_WORKER: iterates lcores from an LLC based on a hint (lcore id).
RTE_LCORE_FOREACH_LLC_SKIP_FIRST_WORKER: iterates lcores from an LLC while
skipping the first worker.
RTE_LCORE_FOREACH_LLC_FIRST_N_WORKER: iterates through `n` lcores from each LLC.
RTE_LCORE_FOREACH_LLC_SKIP_N_WORKER: skips the first `n` lcores, then iterates
through the remaining lcores in each LLC.



Hi Vipin,

I recently looked into how Intel's Sub-NUMA Clustering would work within 
DPDK, and found that I actually didn't have to do anything, because the 
SNC "clusters" present themselves as NUMA nodes, which DPDK already 
supports natively.


Does AMD's implementation of chiplets not report themselves as separate 
NUMA nodes? Because if it does, I don't really think any changes are 
required because NUMA nodes would give you the same thing, would it not?
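
For illustration, a minimal sketch of that approach using existing EAL APIs
(the calls are real; treating each cluster/chiplet as a NUMA node is the
assumption here):

#include <stdio.h>

#include <rte_lcore.h>

/* Group worker lcores by the NUMA node EAL reports for them. If SNC or
 * chiplet clusters are exposed as NUMA nodes, this alone is enough to
 * keep related pipeline stages on cores sharing an LLC. */
static void
print_lcores_per_numa_node(void)
{
	unsigned int lcore;

	RTE_LCORE_FOREACH_WORKER(lcore)
		printf("lcore %u -> NUMA node %u\n",
		       lcore, rte_lcore_to_socket_id(lcore));
}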


--
Thanks,
Anatoly



RE: [PATCH] net/ice: support for more flexible loading of DDP package

2024-08-28 Thread Zeng, ZhichaoX



> -Original Message-
> From: Richardson, Bruce 
> Sent: Wednesday, August 28, 2024 3:55 PM
> To: Zeng, ZhichaoX 
> Cc: dev@dpdk.org
> Subject: Re: [PATCH] net/ice: support for more flexible loading of DDP package
> 
> On Wed, Aug 28, 2024 at 11:53:35AM +0800, Zhichao Zeng wrote:
> > The "Dynamic Device Personalization" package is loaded at
> > initialization time by the driver, but the specific package file
> > loaded depends upon what package file is found first by searching
> > through a hard-coded list of firmware paths.
> >
> > To enable greater control over the package loading, this commit adds two
> > ways to support custom DDP packages:
> > 1. Add device option to choose a specific DDP package file to load.
> >For example:
> >-a 80:00.0,ddp_pkg_file=/path/to/ice-version.pkg
> > 2. Read firmware search path from
> >"/sys/module/firmware_class/parameters/path" like the kernel behavior.
> >
> > Signed-off-by: Bruce Richardson 
> > Signed-off-by: Zhichao Zeng 
> 
> Hi Zhichao,
> 
> since there are two different methods being supported for picking a DDP
> package this patch would be better split into two, one for each method added.
> 
> The support for #1 above is already on-list as a standalone patch[1], so you
> really only need to do a new patch for #2 above. However, I'm ok for you to
> take my patch and include it in a 2-patch set for this if you prefer, since 
> both
> patches will be related to choosing a DDP file. I'll leave it up to you 
> whether v2
> is a single patch for the search path, or a 2-patch set including [1].
> 
> Regards,
> /Bruce
> 
> [1]
> https://patches.dpdk.org/project/dpdk/patch/20240812152815.1132697-
> 5-bruce.richard...@intel.com/
> 
Hi Bruce,

Thanks for your comments. Sorry, I didn't check patchwork and didn't notice
that #1 had already been submitted. I'll rework the patch for #2 separately, thanks.

Regards
Zhichao
> > ---
> >  doc/guides/nics/ice.rst  | 12 +++
> >  drivers/net/ice/ice_ethdev.c | 61
> > 
> >  drivers/net/ice/ice_ethdev.h |  2 ++
> >  3 files changed, 75 insertions(+)
> >



RE: Bihash Support in DPDK

2024-08-28 Thread Medvedkin, Vladimir
Hi Rajesh,

rte_hash does not support per-bucket locks; instead it uses a global rwlock.
But you can try lock-free mode (see the documentation, in particular the
RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF flag).
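
For illustration, a minimal sketch of creating a table in lock-free mode (the
flag and APIs are real; the sizing values are examples only):

#include <stdint.h>

#include <rte_hash.h>
#include <rte_hash_crc.h>

/* Create a hash table in lock-free read/write concurrency mode, so
 * readers are never blocked behind the global rwlock. Writers still
 * need RTE_HASH_EXTRA_FLAGS_MULTI_WRITER_ADD (or external locking)
 * if more than one thread adds entries. */
static struct rte_hash *
create_lf_hash(void)
{
	struct rte_hash_parameters params = {
		.name = "lf_table",
		.entries = 1024,                /* example sizing */
		.key_len = sizeof(uint32_t),
		.hash_func = rte_hash_crc,
		.hash_func_init_val = 0,
		.socket_id = 0,
		.extra_flag = RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF,
	};

	return rte_hash_create(&params);
}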


From: rajesh goel 
Sent: Tuesday, August 27, 2024 4:57 PM
To: Medvedkin, Vladimir 
Cc: Ferruh Yigit ; Wang, Yipeng1 
; Gobriel, Sameh ; Richardson, 
Bruce ; dev@dpdk.org
Subject: Re: Bihash Support in DPDK

Thanks for the reply.

By bihash I mean the bounded-index hash that VPP supports.

I am looking for bucket-level lock support. Currently I am using a hash table
shared by multiple processes or multiple cores/threads, so I have to take the
write lock on a single core and then the read lock on multiple cores to read
the values written to this hash table. Multiple readers are getting blocked
because of this, and I want to avoid that to increase performance.

Let me know your thoughts on this.

Regards
Rajesh

On Tue, 27 Aug, 2024, 14:44 Medvedkin, Vladimir
<vladimir.medved...@intel.com> wrote:
Hi Rajesh,

Please clarify what you mean by “bihash”. Bidirectional? Bounded index?

As for concurrent lookup/updates, yes, DPDK hash table supports 
multi-process/multi-thread, please see the documentation:
https://doc.dpdk.org/guides/prog_guide/hash_lib.html#multi-process-support


From: rajesh goel <rgoel.bangal...@gmail.com>
Sent: Tuesday, August 27, 2024 7:04 AM
To: Ferruh Yigit <ferruh.yi...@amd.com>
Cc: Wang, Yipeng1 <yipeng1.w...@intel.com>; Gobriel, Sameh <sameh.gobr...@intel.com>;
Richardson, Bruce <bruce.richard...@intel.com>; Medvedkin, Vladimir
<vladimir.medved...@intel.com>; dev@dpdk.org
Subject: Re: Bihash Support in DPDK

Hi All,
Can we get some reply.

Thanks
Rajesh

On Thu, Aug 22, 2024 at 9:32 PM Ferruh Yigit <ferruh.yi...@amd.com> wrote:
On 8/22/2024 8:51 AM, rajesh goel wrote:
> Hi All,
> Need info if DPDK hash library supports bihash table where for multi-
> thread and multi-process we can update/del/lookup entries per bucket level.
>
>

+ hash library maintainers.


Re: [RFC PATCH v3 2/2] dts: Initial Implementation For Jumbo Frames Test Suite

2024-08-28 Thread Alex Chapman

Hi,
I've been looking into the MTU terminology and would just like to clarify
some naming conventions and docstrings.


On 7/26/24 15:13, Nicholas Pratte wrote:


+IP_HEADER_LEN = 20
+ETHER_STANDARD_FRAME = 1500
+ETHER_JUMBO_FRAME_MTU = 9000


For these constants, I am confused why one is "FRAME" and the other is
"MTU". The value of 'ETHER_STANDARD_FRAME' is 1500 (the standard MTU
size), so it would make sense to rename it to 'ETHER_STANDARD_MTU' to keep
the naming consistent.


If the value was 1518 instead of 1500, then `ETHER_STANDARD_FRAME` would 
be appropriate.




+def test_jumboframes_normal_nojumbo(self) -> None:
+"""Assess the boundaries of packets sent less than or equal to the 
standard MTU length.
+
+PMDs are set to the standard MTU length of 1518 to assess behavior of 
sent packets less than
+or equal to this size. Sends two packets: one that is less than 1518 
bytes, and another that
+is equal to 1518 bytes. The test case expects to receive both packets.
+
+Test:
+Start testpmd and send packets of sizes 1517 and 1518.
+"""
+with TestPmdShell(
+self.sut_node, tx_offloads=0x8000, mbuf_size=[9200], mbcache=200
+) as testpmd:
+testpmd.configure_port_mtu_all(ETHER_STANDARD_FRAME)
+testpmd.start()


Renaming 'ETHER_STANDARD_FRAME' to 'ETHER_STANDARD_MTU' would reduce 
confusion here too.

e.g.
`testpmd.configure_port_mtu_all(ETHER_STANDARD_MTU)`

Additionally, you state you are sending packets of sizes 1517 and 1518,
but you then call:

`self.send_packet_and_verify(ETHER_STANDARD_FRAME - 5)`
`self.send_packet_and_verify(ETHER_STANDARD_FRAME)`

Calculating to:
`self.send_packet_and_verify(1495)`
`self.send_packet_and_verify(1500)`

Which is confusing.
I believe this is because you are accounting for the 4 bytes of the VLAN tag
in your calculations, but you might want to explain this.



Overall very solid and clean test suite, just wanted to get 
clarification on a few areas 🙂.

Alex


[PATCH] bus/pci: don't open uio device in secondary process

2024-08-28 Thread Konrad Sztyber
The uio_pci_generic driver clears the bus master bit when the device
file is closed.  So, when the secondary process terminates after probing
a device, that device becomes unusable in the primary process.

To avoid that, the device file is now opened only in the primary
process.  The commit that introduced this regression, 847d78fb95
("bus/pci: fix FD in secondary process"), only mentioned enabling access
to config space from secondary process, which still works, as it doesn't
rely on the device file.

Fixes: 847d78fb95 ("bus/pci: fix FD in secondary process")

Signed-off-by: Konrad Sztyber 
---
 drivers/bus/pci/linux/pci_uio.c | 25 +
 1 file changed, 13 insertions(+), 12 deletions(-)

diff --git a/drivers/bus/pci/linux/pci_uio.c b/drivers/bus/pci/linux/pci_uio.c
index 4c1d3327a9..432316afcc 100644
--- a/drivers/bus/pci/linux/pci_uio.c
+++ b/drivers/bus/pci/linux/pci_uio.c
@@ -232,18 +232,6 @@ pci_uio_alloc_resource(struct rte_pci_device *dev,
loc->domain, loc->bus, loc->devid, loc->function);
return 1;
}
-   snprintf(devname, sizeof(devname), "/dev/uio%u", uio_num);
-
-   /* save fd */
-   fd = open(devname, O_RDWR);
-   if (fd < 0) {
-   PCI_LOG(ERR, "Cannot open %s: %s", devname, strerror(errno));
-   goto error;
-   }
-
-   if (rte_intr_fd_set(dev->intr_handle, fd))
-   goto error;
-
snprintf(cfgname, sizeof(cfgname),
"/sys/class/uio/uio%u/device/config", uio_num);
 
@@ -273,6 +261,19 @@ pci_uio_alloc_resource(struct rte_pci_device *dev,
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
return 0;
 
+   /* the uio_pci_generic driver clears the bus master enable bit when the 
device file is
+* closed, so open it only in the primary process */
+   snprintf(devname, sizeof(devname), "/dev/uio%u", uio_num);
+   /* save fd */
+   fd = open(devname, O_RDWR);
+   if (fd < 0) {
+   PCI_LOG(ERR, "Cannot open %s: %s", devname, strerror(errno));
+   goto error;
+   }
+
+   if (rte_intr_fd_set(dev->intr_handle, fd))
+   goto error;
+
/* allocate the mapping details for secondary processes*/
*uio_res = rte_zmalloc("UIO_RES", sizeof(**uio_res), 0);
if (*uio_res == NULL) {
-- 
2.45.0



Re: [PATCH v3 02/12] net/ice: updates for ptp init in E825C

2024-08-28 Thread Bruce Richardson
On Fri, Aug 23, 2024 at 09:56:40AM +, Soumyadeep Hore wrote:
> From: Norbert Zulinski 
> 
> The implementation was done incorrectly assuming
> the TS PLL parameters would be similar to E822/E823
> devices. Fix it by using proper values.
> 
> Define access to SB (sideband) for second PHY and
> CGU devices in case of E825C devices.
> 
> In E825C soft straps of CGU cannot be read from HW,
> and therefore it must be hard coded to default values.
> 

Minor FYI here that you don't need to wrap the lines in the commit log so
aggressively - up to 72 character line length is fine. Only the title
should be shorter. I'll be tweaking commit log messages on apply anyway, so
no need for further action on your part.

/Bruce


Re: Bihash Support in DPDK

2024-08-28 Thread rajesh goel
Thanks Vladimir for the confirmation.
Is there any plan to support bucket level lock support in dpdk hash.

Thanks
Rajesh

On Wed, Aug 28, 2024 at 2:33 PM Medvedkin, Vladimir <
vladimir.medved...@intel.com> wrote:

> Hi Rajesh,
>
>
>
> rte_hash does not support per-bucket locks; instead it uses a global rwlock.
>
> But you can try lock-free mode (see the documentation, in particular the
> RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF flag).
>
>
>
>
>
> *From:* rajesh goel 
> *Sent:* Tuesday, August 27, 2024 4:57 PM
> *To:* Medvedkin, Vladimir 
> *Cc:* Ferruh Yigit ; Wang, Yipeng1 <
> yipeng1.w...@intel.com>; Gobriel, Sameh ;
> Richardson, Bruce ; dev@dpdk.org
> *Subject:* Re: Bihash Support in DPDK
>
>
>
> Thanks for the reply.
>
>
>
> By bihash I mean the bounded-index hash that VPP supports.
>
>
>
> I am looking for bucket-level lock support. Currently I am using a hash
> table shared by multiple processes or multiple cores/threads, so I have to
> take the write lock on a single core and then the read lock on multiple
> cores to read the values written to this hash table. Multiple readers are
> getting blocked because of this, and I want to avoid that to increase
> performance.
>
>
>
> Let me know your thoughts on this.
>
>
>
> Regards
>
> Rajesh
>
>
>
> On Tue, 27 Aug, 2024, 14:44 Medvedkin, Vladimir, <
> vladimir.medved...@intel.com> wrote:
>
> Hi Rajesh,
>
>
>
> Please clarify what you mean by “bihash”. Bidirectional? Bounded index?
>
>
>
> As for concurrent lookup/updates, yes, DPDK hash table supports
> multi-process/multi-thread, please see the documentation:
>
> https://doc.dpdk.org/guides/prog_guide/hash_lib.html#multi-process-support
>
>
>
>
>
> *From:* rajesh goel 
> *Sent:* Tuesday, August 27, 2024 7:04 AM
> *To:* Ferruh Yigit 
> *Cc:* Wang, Yipeng1 ; Gobriel, Sameh <
> sameh.gobr...@intel.com>; Richardson, Bruce ;
> Medvedkin, Vladimir ; dev@dpdk.org
> *Subject:* Re: Bihash Support in DPDK
>
>
>
> Hi All,
>
> Can we get some reply.
>
>
>
> Thanks
>
> Rajesh
>
>
>
> On Thu, Aug 22, 2024 at 9:32 PM Ferruh Yigit  wrote:
>
> On 8/22/2024 8:51 AM, rajesh goel wrote:
> > Hi All,
> > Need info if DPDK hash library supports bihash table where for multi-
> > thread and multi-process we can update/del/lookup entries per bucket
> level.
> >
> >
>
> + hash library maintainers.
>
>


[PATCH dpdk] graph: make graphviz export more readable

2024-08-28 Thread Robin Jarry
Change the color of arrows leading to sink nodes to dark orange. Remove
the node oval shape around the sink nodes and make their text dark
orange. This results in a much more readable output for large graphs.
See the link below for an example.

Link: https://f.jarry.cc/rte-graph-dot/ipv6.svg
Signed-off-by: Robin Jarry 
---
 lib/graph/graph.c | 7 +--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/lib/graph/graph.c b/lib/graph/graph.c
index d5b8c9f918cf..dff8e690a80d 100644
--- a/lib/graph/graph.c
+++ b/lib/graph/graph.c
@@ -745,7 +745,7 @@ graph_to_dot(FILE *f, struct graph *graph)
if (rc < 0)
goto end;
} else if (graph_node->node->nb_edges == 0) {
-   rc = fprintf(f, " [color=darkorange]");
+   rc = fprintf(f, " [fontcolor=darkorange shape=plain]");
if (rc < 0)
goto end;
}
@@ -753,9 +753,12 @@ graph_to_dot(FILE *f, struct graph *graph)
if (rc < 0)
goto end;
for (i = 0; i < graph_node->node->nb_edges; i++) {
+   const char *node_attrs = attrs;
+   if (graph_node->adjacency_list[i]->node->nb_edges == 0)
+   node_attrs = " [color=darkorange]";
rc = fprintf(f, "\t\"%s\" -> \"%s\"%s;\n", node_name,
 graph_node->adjacency_list[i]->node->name,
-attrs);
+node_attrs);
if (rc < 0)
goto end;
}
-- 
2.46.0



Re: [PATCH v3 04/12] net/ice: avoid reading past end of PFA

2024-08-28 Thread Bruce Richardson
On Fri, Aug 23, 2024 at 09:56:42AM +, Soumyadeep Hore wrote:
> From: Jacob Keller 
> 
> The ice_get_pfa_module_tlv() function iterates over the Preserved Fields
> Area to read data from the Shadow RAM, including the Part Board Assembly
> data, among others.
> 
> If the specific TLV being requested is not found in the current NVM, the
> code will read past the end of the PFA, misinterpreting the last word of
> the PFA and the word just after the PFA as another TLV. This typically
> results in one extra iteration before the length check of the while loop is
> triggered.
> 
> Signed-off-by: Jacob Keller 
> Signed-off-by: Soumyadeep Hore 
> ---
>  drivers/net/ice/base/ice_nvm.c | 9 +++--
>  1 file changed, 7 insertions(+), 2 deletions(-)
> 

Took a bit of digging, but I believe this fixes this previous commit (with
code being edited and moved a bit since then):

Fixes: 5d0b7b5fc491 ("net/ice/base: add read PBA module function")


RE: Bihash Support in DPDK

2024-08-28 Thread Medvedkin, Vladimir
I am not aware of such plans.

From: rajesh goel 
Sent: Wednesday, August 28, 2024 11:22 AM
To: Medvedkin, Vladimir 
Cc: Ferruh Yigit ; Wang, Yipeng1 
; Gobriel, Sameh ; Richardson, 
Bruce ; dev@dpdk.org
Subject: Re: Bihash Support in DPDK

Thanks Vladimir for the confirmation.
Is there any plan to support bucket level lock support in dpdk hash.

Thanks
Rajesh

On Wed, Aug 28, 2024 at 2:33 PM Medvedkin, Vladimir
<vladimir.medved...@intel.com> wrote:
Hi Rajesh,

rte_hash does not support per-bucket locks; instead it uses a global rwlock.
But you can try lock-free mode (see the documentation, in particular the
RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF flag).


From: rajesh goel <rgoel.bangal...@gmail.com>
Sent: Tuesday, August 27, 2024 4:57 PM
To: Medvedkin, Vladimir <vladimir.medved...@intel.com>
Cc: Ferruh Yigit <ferruh.yi...@amd.com>; Wang, Yipeng1 <yipeng1.w...@intel.com>;
Gobriel, Sameh <sameh.gobr...@intel.com>; Richardson, Bruce
<bruce.richard...@intel.com>; dev@dpdk.org
Subject: Re: Bihash Support in DPDK

Thanks for the reply.

By bihash I mean the bounded-index hash that VPP supports.

I am looking for bucket-level lock support. Currently I am using a hash table
shared by multiple processes or multiple cores/threads, so I have to take the
write lock on a single core and then the read lock on multiple cores to read
the values written to this hash table. Multiple readers are getting blocked
because of this, and I want to avoid that to increase performance.

Let me know your thoughts on this.

Regards
Rajesh

On Tue, 27 Aug, 2024, 14:44 Medvedkin, Vladimir
<vladimir.medved...@intel.com> wrote:
Hi Rajesh,

Please clarify what you mean by “bihash”. Bidirectional? Bounded index?

As for concurrent lookup/updates, yes, DPDK hash table supports 
multi-process/multi-thread, please see the documentation:
https://doc.dpdk.org/guides/prog_guide/hash_lib.html#multi-process-support


From: rajesh goel <rgoel.bangal...@gmail.com>
Sent: Tuesday, August 27, 2024 7:04 AM
To: Ferruh Yigit <ferruh.yi...@amd.com>
Cc: Wang, Yipeng1 <yipeng1.w...@intel.com>; Gobriel, Sameh <sameh.gobr...@intel.com>;
Richardson, Bruce <bruce.richard...@intel.com>; Medvedkin, Vladimir
<vladimir.medved...@intel.com>; dev@dpdk.org
Subject: Re: Bihash Support in DPDK

Hi All,
Can we get some reply.

Thanks
Rajesh

On Thu, Aug 22, 2024 at 9:32 PM Ferruh Yigit
<ferruh.yi...@amd.com> wrote:
On 8/22/2024 8:51 AM, rajesh goel wrote:
> Hi All,
> Need info if DPDK hash library supports bihash table where for multi-
> thread and multi-process we can update/del/lookup entries per bucket level.
>
>

+ hash library maintainers.


Re: [PATCH v3 08/12] net/ice: update iteration of TLVs in Preserved Fields Area

2024-08-28 Thread Bruce Richardson
On Fri, Aug 23, 2024 at 09:56:46AM +, Soumyadeep Hore wrote:
> From: Fabio Pricoco 
> 
> Correct the logic for determining the maximum PFA offset to include the
> extra last word. Additionally, make the driver robust against overflows
> by using check_add_overflow. This ensures that even if the NVM
> provides bogus data, the driver will not overflow, and will instead log
> a useful warning message. The check for whether the TLV length exceeds the
> PFA length is also removed, in favor of relying on the overflow warning
> instead.
> 
> Signed-off-by: Fabio Pricoco 
> Signed-off-by: Soumyadeep Hore 
> ---
>  drivers/net/ice/base/ice_nvm.c | 29 ++---
>  1 file changed, 18 insertions(+), 11 deletions(-)
> 
> diff --git a/drivers/net/ice/base/ice_nvm.c b/drivers/net/ice/base/ice_nvm.c
> index 0124cef04c..56c6c96a95 100644
> --- a/drivers/net/ice/base/ice_nvm.c
> +++ b/drivers/net/ice/base/ice_nvm.c
> @@ -469,6 +469,8 @@ int ice_read_sr_word(struct ice_hw *hw, u16 offset, u16 
> *data)
>   return status;
>  }
>  
> +#define check_add_overflow __builtin_add_overflow
> +
>  /**
>   * ice_get_pfa_module_tlv - Reads sub module TLV from NVM PFA
>   * @hw: pointer to hardware structure
> @@ -484,8 +486,7 @@ int
>  ice_get_pfa_module_tlv(struct ice_hw *hw, u16 *module_tlv, u16 
> *module_tlv_len,
>  u16 module_type)
>  {
> - u16 pfa_len, pfa_ptr;
> - u32 next_tlv;
> + u16 pfa_len, pfa_ptr, next_tlv, max_tlv;
>   int status;
>  
>   status = ice_read_sr_word(hw, ICE_SR_PFA_PTR, &pfa_ptr);
> @@ -498,6 +499,13 @@ ice_get_pfa_module_tlv(struct ice_hw *hw, u16 
> *module_tlv, u16 *module_tlv_len,
>   ice_debug(hw, ICE_DBG_INIT, "Failed to read PFA length.\n");
>   return status;
>   }
> +
> + if (check_add_overflow(pfa_ptr, (u16)(pfa_len - 1), &max_tlv)) {
> + ice_debug(hw, ICE_DBG_INIT, "PFA starts at offset %u. PFA 
> length of %u caused 16-bit arithmetic overflow.\n",
> +   pfa_ptr, pfa_len);
> + return ICE_ERR_INVAL_SIZE;
> + }
> +
>   /* The Preserved Fields Area contains a sequence of TLVs which define
>* its contents. The PFA length includes all of the TLVs, plus its
>* initial length word itself, *and* one final word at the end of all
> @@ -507,7 +515,7 @@ ice_get_pfa_module_tlv(struct ice_hw *hw, u16 
> *module_tlv, u16 *module_tlv_len,
>* of TLVs to find the requested one.
>*/
>   next_tlv = pfa_ptr + 1;
> - while (next_tlv < ((u32)pfa_ptr + pfa_len - 1)) {
> + while (next_tlv < max_tlv) {

This is essentially overwriting the change made in patch 4 of this set -
except for the comment change. Therefore, I believe patches 4 and 8 should
be merged to avoid touching this line of code multiple times.
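
As an aside for readers unfamiliar with the builtin: a minimal standalone
sketch of the overflow-check pattern used above (the values here are examples
only):

#include <stdint.h>
#include <stdio.h>

/* __builtin_add_overflow returns true when the sum does not fit in the
 * destination type; the destination holds the wrapped result either way. */
int
main(void)
{
	uint16_t pfa_ptr = 0xfff0, pfa_len = 0x20, max_tlv;

	if (__builtin_add_overflow(pfa_ptr, (uint16_t)(pfa_len - 1), &max_tlv))
		printf("16-bit overflow: 0x%x + 0x%x\n", pfa_ptr, pfa_len - 1u);
	else
		printf("max_tlv = 0x%x\n", max_tlv);
	return 0;
}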

/Bruce


Re: [PATCH v3 00/12] Align ICE shared code with Base driver

2024-08-28 Thread Bruce Richardson
On Fri, Aug 23, 2024 at 09:56:38AM +, Soumyadeep Hore wrote:
> Updating the latest shared code patches to ICE base driver.
> 
> ---
> v3:
> - Addressed comments givn by reviewer
> ---
> v2:
> - Addressed comments given by reviewer
> - Corrected errors in Camel Case
> ---
> 
I've performed on apply the few cleanups I highlighted in emails in this
thread. Please review carefully the resulting 11 commits on
dpdk-next-net-intel tree.

Series-acked-by: Bruce Richardson 

Applied to dpdk-next-net-intel

Thanks,
/Bruce


Re: [PATCH v8 2/3] eventdev: add support for independent enqueue

2024-08-28 Thread Mattias Rönnblom

On 2024-08-24 22:41, Pathak, Pravin wrote:




-Original Message-
From: Mattias Rönnblom 
Sent: Friday, August 23, 2024 7:03 AM
To: Sevincer, Abdullah ; dev@dpdk.org
Cc: jer...@marvell.com; Richardson, Bruce ;
Pathak, Pravin ; mattias.ronnb...@ericsson.com;
Aggarwal, Manish 
Subject: Re: [PATCH v8 2/3] eventdev: add support for independent enqueue

On 2024-08-12 22:00, Abdullah Sevincer wrote:

This commit adds support for independent enqueue feature and updates
Event Device and PMD feature list.

A new capability RTE_EVENT_DEV_CAP_INDEPENDENT_ENQ is introduced to
support independent enqueue to support PMD to enqueue in any order
even the underlined hardware device needs enqueues in a strict dequeue


This sentence needs to be rephrased.

My attempt:
"A new capability RTE_EVENT_DEV_CAP_INDEPENDENT_ENQ is introduced. An
application may, on an event device where independent enqueue is supported,
using an event port where it is enabled, enqueue RTE_EVENT_OP_FORWARD or
RELEASE type events in any order."


order.


Will this work:
A new capability, RTE_EVENT_DEV_CAP_INDEPENDENT_ENQ, is introduced. It
allows out-of-order enqueuing of RTE_EVENT_OP_FORWARD or RELEASE type
events on an event port where this capability is enabled.



Sounds good and better than my attempt.



To use this capability applications need to set flag
RTE_EVENT_PORT_CFG_INDEPENDENT_ENQ during port setup only if the
capability RTE_EVENT_DEV_CAP_INDEPENDENT_ENQ exists.

Signed-off-by: Abdullah Sevincer 
---
   doc/guides/eventdevs/features/default.ini |  1 +
   doc/guides/eventdevs/features/dlb2.ini|  1 +
   doc/guides/rel_notes/release_24_11.rst|  5 +++
   lib/eventdev/rte_eventdev.h   | 37 +++
   4 files changed, 44 insertions(+)

diff --git a/doc/guides/eventdevs/features/default.ini
b/doc/guides/eventdevs/features/default.ini
index 1cc4303fe5..7c4ee99238 100644
--- a/doc/guides/eventdevs/features/default.ini
+++ b/doc/guides/eventdevs/features/default.ini
@@ -22,6 +22,7 @@ carry_flow_id  =
   maintenance_free   =
   runtime_queue_attr =
   profile_links  =
+independent_enq=

   ;
   ; Features of a default Ethernet Rx adapter.
diff --git a/doc/guides/eventdevs/features/dlb2.ini
b/doc/guides/eventdevs/features/dlb2.ini
index 7b80286927..c7193b47c1 100644
--- a/doc/guides/eventdevs/features/dlb2.ini
+++ b/doc/guides/eventdevs/features/dlb2.ini
@@ -15,6 +15,7 @@ implicit_release_disable   = Y
   runtime_port_link  = Y
   multiple_queue_port= Y
   maintenance_free   = Y
+independent_enq= Y

   [Eth Rx adapter Features]

diff --git a/doc/guides/rel_notes/release_24_11.rst
b/doc/guides/rel_notes/release_24_11.rst
index f0ec07c263..04f389876a 100644
--- a/doc/guides/rel_notes/release_24_11.rst
+++ b/doc/guides/rel_notes/release_24_11.rst
@@ -30,6 +30,11 @@ New Features
 ``RTE_EVENT_PORT_CFG_INDEPENDENT_ENQ`` to enable the feature if the

capability

 ``RTE_EVENT_DEV_CAP_INDEPENDENT_ENQ`` exists.

+* **Updated Event Device Library for independent enqueue feature**
+
+  * Added support for independent enqueue feature. Updated Event Device

and

+PMD feature list.
+

   Removed Items
   -
diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index 08e5f9320b..48e6eadda9 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -446,6 +446,31 @@ struct rte_event;
* @see RTE_SCHED_TYPE_PARALLEL
*/

+#define RTE_EVENT_DEV_CAP_INDEPENDENT_ENQ  (1ULL << 16) /**< Event
+device is capable of independent enqueue.
+ * A new capability, RTE_EVENT_DEV_CAP_INDEPENDENT_ENQ, will indicate
+that Eventdev
+ * supports the enqueue in any order or specifically in a different
+order than the
+ * dequeue. Eventdev PMD can either transmit events in the changed
+order in which
+ * they are enqueued or restore the original order before sending
+them to the
+ * underlying hardware device. A flag is provided during the port
+configuration to
+ * inform Eventdev PMD that the application intends to use an
+independent enqueue
+ * order on a particular port. Note that this capability only matters
+for Eventdevs
+ * supporting burst mode.
+ *
+ * To Inform PMD that the application plans to use independent
+enqueue order on a port
+ * this code example can be used:
+ *
+ *  if (capability & RTE_EVENT_DEV_CAP_INDEPENDENT_ENQ)
+ * port_config = port_config |

RTE_EVENT_PORT_CFG_INDEPENDENT_ENQ;

+ *
+ * When an implicit release is enabled on a port, Eventdev PMD will
+also handle
+ * the insertion of RELEASE events in place of dropped events. The
+independent enqueue
+ * feature only applies to FORWARD and RELEASE events. New events
+(op=RTE_EVENT_OP_NEW)
+ * will be transmitted in the order the application enqueues them and
+do not maintain
+ * any order relative to FORWARD/RELEASE events. FORWARD vs NEW
+relaxed ordering
+ * only applies to ports that have enabled indepe

RE: [PATCH v8 2/3] eventdev: add support for independent enqueue

2024-08-28 Thread Sevincer, Abdullah
Thanks Mattias,

Hi Jerin,

Are you okay with the changes so far? 


Re: Bihash Support in DPDK

2024-08-28 Thread Stephen Hemminger
On Wed, 28 Aug 2024 15:54:27 +
"Medvedkin, Vladimir"  wrote:

> Thanks for the reply.
> 
> By bihash I mean the bounded-index hash that VPP supports.
> 
> I am looking for bucket-level lock support. Currently I am using a hash table
> shared by multiple processes or multiple cores/threads, so I have to take the
> write lock on a single core and then the read lock on multiple cores to read
> the values written to this hash table. Multiple readers are getting blocked
> because of this, and I want to avoid that to increase performance.
> 
> Let me know your thoughts on this.
> 
> Regards
> Rajesh

RCU is always faster than reader/writer locks.
Reader/writer locks are slower than a simple spinlock unless readers hold the
lock for a long time.
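
For illustration, a minimal sketch of the QSBR reader-side pattern from the
rte_rcu library (the APIs are real; sizing and thread IDs are example values):

#include <rte_common.h>
#include <rte_malloc.h>
#include <rte_rcu_qsbr.h>

/* One QSBR variable shared by all reader lcores; a writer calls
 * rte_rcu_qsbr_synchronize() (or uses a defer queue) before freeing
 * entries that readers may still reference. */
static struct rte_rcu_qsbr *
create_qsbr(uint32_t max_readers)
{
	size_t sz = rte_rcu_qsbr_get_memsize(max_readers);
	struct rte_rcu_qsbr *v = rte_zmalloc(NULL, sz, RTE_CACHE_LINE_SIZE);

	if (v != NULL)
		rte_rcu_qsbr_init(v, max_readers);
	return v;
}

static void
reader_loop(struct rte_rcu_qsbr *v, unsigned int thread_id)
{
	rte_rcu_qsbr_thread_register(v, thread_id);
	rte_rcu_qsbr_thread_online(v, thread_id);
	for (;;) {
		/* lock-free lookups happen here */
		/* report a quiescent state once per loop iteration */
		rte_rcu_qsbr_quiescent(v, thread_id);
	}
}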


Re: [PATCH v3 11/12] dts: add Rx offload capabilities

2024-08-28 Thread Jeremy Spewock
On Wed, Aug 21, 2024 at 10:53 AM Juraj Linkeš
 wrote:

> diff --git a/dts/framework/remote_session/testpmd_shell.py 
> b/dts/framework/remote_session/testpmd_shell.py
> index 48c31124d1..f83569669e 100644
> --- a/dts/framework/remote_session/testpmd_shell.py
> +++ b/dts/framework/remote_session/testpmd_shell.py
> @@ -659,6 +659,103 @@ class TestPmdPortStats(TextParser):
>  tx_bps: int = field(metadata=TextParser.find_int(r"Tx-bps:\s+(\d+)"))
>
>
> +class RxOffloadCapability(Flag):
> +"""Rx offload capabilities of a device."""
> +
> +#:
> +RX_OFFLOAD_VLAN_STRIP = auto()

One other thought that I had about this; was there a specific reason
that you decided to prefix all of these with `RX_OFFLOAD_`? I am
working on a test suite right now that uses both RX and TX offloads
and thought that it would be a great use of capabilities, so I am
working on adding a TxOffloadCapability flag as well and, since the
output is essentially the same, it made a lot of sense to make it a
sibling class of this one with similar parsing functionality. In what
I was writing, I found it much easier to remove this prefix so that
the parsing method can be the same for both RX and TX, and I didn't
have to restate some options that are shared between both (like
IPv4_CKSUM, UDP_CKSUM, etc.). Is there a reason you can think of why
removing this prefix is a bad idea? Hopefully I will have a patch out
soon that shows this extension that I've made so that you can see
in-code what I was thinking.

> +#: Device supports L3 checksum offload.
> +RX_OFFLOAD_IPV4_CKSUM = auto()
> +#: Device supports L4 checksum offload.
> +RX_OFFLOAD_UDP_CKSUM = auto()
> +#: Device supports L4 checksum offload.
> +RX_OFFLOAD_TCP_CKSUM = auto()
> +#: Device supports Large Receive Offload.
> +RX_OFFLOAD_TCP_LRO = auto()
> +#: Device supports QinQ (queue in queue) offload.
> +RX_OFFLOAD_QINQ_STRIP = auto()
> +#: Device supports inner packet L3 checksum.
> +RX_OFFLOAD_OUTER_IPV4_CKSUM = auto()
> +#: Device supports MACsec.
> +RX_OFFLOAD_MACSEC_STRIP = auto()
> +#: Device supports filtering of a VLAN Tag identifier.
> +RX_OFFLOAD_VLAN_FILTER = 1 << 9
> +#: Device supports VLAN offload.
> +RX_OFFLOAD_VLAN_EXTEND = auto()
> +#: Device supports receiving segmented mbufs.
> +RX_OFFLOAD_SCATTER = 1 << 13
> +#: Device supports Timestamp.
> +RX_OFFLOAD_TIMESTAMP = auto()
> +#: Device supports crypto processing while packet is received in NIC.
> +RX_OFFLOAD_SECURITY = auto()
> +#: Device supports CRC stripping.
> +RX_OFFLOAD_KEEP_CRC = auto()
> +#: Device supports L4 checksum offload.
> +RX_OFFLOAD_SCTP_CKSUM = auto()
> +#: Device supports inner packet L4 checksum.
> +RX_OFFLOAD_OUTER_UDP_CKSUM = auto()
> +#: Device supports RSS hashing.
> +RX_OFFLOAD_RSS_HASH = auto()
> +#: Device supports
> +RX_OFFLOAD_BUFFER_SPLIT = auto()
> +#: Device supports all checksum capabilities.
> +RX_OFFLOAD_CHECKSUM = RX_OFFLOAD_IPV4_CKSUM | RX_OFFLOAD_UDP_CKSUM | 
> RX_OFFLOAD_TCP_CKSUM
> +#: Device supports all VLAN capabilities.
> +RX_OFFLOAD_VLAN = (
> +RX_OFFLOAD_VLAN_STRIP
> +| RX_OFFLOAD_VLAN_FILTER
> +| RX_OFFLOAD_VLAN_EXTEND
> +| RX_OFFLOAD_QINQ_STRIP
> +)

>


Re: [PATCH v3 03/12] dts: add test case decorators

2024-08-28 Thread Dean Marx
On Wed, Aug 21, 2024 at 10:53 AM Juraj Linkeš 
wrote:

> Add decorators for functional and performance test cases. These
> decorators add attributes to the decorated test cases.
>
> With the addition of decorators, we change the test case discovery
> mechanism from looking at test case names according to a regex to simply
> checking an attribute of the function added with one of the decorators.
>
> The decorators allow us to add further variables to test cases.
>
> Also move the test case filtering to TestSuite while changing the
> mechanism to separate the logic in a more sensible manner.
>
> Bugzilla ID: 1460
>
> Signed-off-by: Juraj Linkeš 
>

Reviewed-by: Dean Marx 


Re: [PATCH v3 04/12] dts: add mechanism to skip test cases or suites

2024-08-28 Thread Dean Marx
On Wed, Aug 21, 2024 at 10:53 AM Juraj Linkeš 
wrote:

> If a test case is not relevant to the testing environment (such as when
> a NIC doesn't support a tested feature), the framework should skip it.
> The mechanism is a skeleton without actual logic that would set a test
> case or suite to be skipped.
>
> The mechanism uses a protocol to extend test suites and test cases with
> additional attributes that track whether the test case or suite should
> be skipped and the reason for skipping it.
>
> Also update the results module with the new SKIP result.
>
> Signed-off-by: Juraj Linkeš 
>

Reviewed-by: Dean Marx 


Re: [PATCH v3 05/12] dts: add support for simpler topologies

2024-08-28 Thread Dean Marx
On Wed, Aug 21, 2024 at 10:53 AM Juraj Linkeš 
wrote:

> We currently assume there are two links between the SUT and TG nodes,
> but that's too strict, even for some of the already existing test cases.
> Add support for topologies with less than two links.
>
> For topologies with no links, dummy ports are used. The expectation is
> that test suites or cases that don't require any links won't be using
> methods that use ports. Any test suites or cases requiring links will be
> skipped in topologies with no links, but this feature is not implemented
> in this commit.
>
> Signed-off-by: Juraj Linkeš 
>

Reviewed-by: Dean Marx 


memif insufficient padding

2024-08-28 Thread Morten Brørup
Jakub,

While browsing virtual interfaces in DPDK, I noticed a possible performance 
issue in the memif driver:

If "head" and "tail" are accessed by different lcores, they are not 
sufficiently far away from each other (and other hot fields) to prevent false 
sharing-like effects on systems with a next-N-lines hardware prefetcher, which 
will prefetch "tail" when fetching "head", and prefetch "head" when fetching 
"flags".

I suggest updating the structure somewhat like this:

-#define MEMIF_CACHELINE_ALIGN_MARK(mark) \
-   alignas(RTE_CACHE_LINE_SIZE) RTE_MARKER mark;
-
-typedef struct {
-   MEMIF_CACHELINE_ALIGN_MARK(cacheline0);
+typedef struct __rte_cache_aligned {
uint32_t cookie;/**< MEMIF_COOKIE */
uint16_t flags; /**< flags */
#define MEMIF_RING_FLAG_MASK_INT 1  /**< disable interrupt mode */
+   RTE_CACHE_GUARD; /* isolate head from flags */
RTE_ATOMIC(uint16_t) head;  /**< pointer to ring 
buffer head */
-   MEMIF_CACHELINE_ALIGN_MARK(cacheline1);
+   RTE_CACHE_GUARD; /* isolate tail from head */
RTE_ATOMIC(uint16_t) tail;  /**< pointer to ring 
buffer tail */
-   MEMIF_CACHELINE_ALIGN_MARK(cacheline2);
+   RTE_CACHE_GUARD; /* isolate descriptors from tail */
-   memif_desc_t desc[0];   /**< buffer descriptors */
+   memif_desc_t desc[];/**< buffer descriptors */
} memif_ring_t;
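
A minimal standalone sketch of the same pattern, assuming DPDK >= 23.07 where
RTE_CACHE_GUARD was introduced (the struct itself is illustrative only):

#include <stdint.h>

#include <rte_common.h>

/* Fields written by different lcores are separated by guard padding so
 * they never share a cache line, and so a next-N-lines prefetcher on one
 * core does not drag the neighbour's line into its cache. */
struct shared_ring_ctrl {
	uint32_t cookie;
	uint16_t flags;
	RTE_CACHE_GUARD;  /* isolate head from flags */
	uint16_t head;    /* written by the producer */
	RTE_CACHE_GUARD;  /* isolate tail from head */
	uint16_t tail;    /* written by the consumer */
};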


Med venlig hilsen / Kind regards,
-Morten Brørup



32-bit virtio failing on DPDK v23.11.1 (and tags)

2024-08-28 Thread Chris Brezovec (cbrezove)
Hi Maxime,

My name is Chris Brezovec; we met and talked about some 32-bit virtio issues we
were seeing at Cisco during the DPDK summit last year.  There was also a back 
and forth between you and Dave Johnson at Cisco last September regarding the 
same issue.  I have attached some of the email chain from that conversation 
that resulted in this commit being made to dpdk v23.11 
(https://github.com/DPDK/dpdk/commit/8c41645be010ec7fa0df4f6c3790b167945154b4).

We recently picked up the v23.11.1 DPDK release and saw that 32-bit virtio is
not working again, but 64-bit virtio is working. We are noticing CVQ timeouts
- the PMD receives no response from the host, which leads to failure of the
port to start. We were able to recreate this issue using testpmd. We have done
some tracing through the virtio changes made during the development of the
v23.xx DPDK releases, and believe we have identified the following rework
commit as the cause
(https://github.com/DPDK/dpdk/commit/a632f0f64ffba3553a18bdb51a670c1b603c0ce6).

We have also tested v23.07, v23.11, v23.11.2-rc2, v24.07 and they all seem to 
see the same issue when running in 32-bit mode using testpmd.

We were hoping you might be able to take a quick look at the two commits and
see if there is something obvious missing in the refactor work that might
have caused this issue.  I am thinking there might be a location or two in the
code that should be using the VIRTIO_MBUF_ADDR() or similar macro but was
missed.

Regards,
ChrisB

This is some of the testpmd output seen on v23.11.2-rc2:

LD_LIBRARY_PATH=/home/rmelton/scratch/dpdk-v23.11.2-rc2.git/build/lib 
/home/rmelton/scratch/dpdk-v23.11.2-rc2.git/build/app/dpdk-testpmd -l 2-3 -a 
:07:00.0 --log-level pmd.net.iavf.*,8 --log-level lib.eal.*,8 
--log-level=lib.eal:info --log-level=lib.eal:debug --log-level=lib.ethdev:info 
--log-level=lib.ethdev:debug --log-level=lib.virtio:warning 
--log-level=lib.virtio:info --log-level=lib.virtio:debug 
--log-level=pmd.*:debug --iova-mode=pa -- -i

— snip —

virtio_send_command(): vq->vq_desc_head_idx = 0, status = 255, vq->hw->cvq = 
0x76d9acc0 vq = 0x76d9ac80
virtio_send_command_split(): vq->vq_queue_index = 2
virtio_send_command_split(): vq->vq_free_cnt=64
vq->vq_desc_head_idx=0
virtio_dev_promiscuous_disable(): Failed to disable promisc
Failed to disable promiscuous mode for device (port 0): Resource temporarily 
unavailable
Error during restoring configuration for device (port 0): Resource temporarily 
unavailable
virtio_dev_stop(): stop
Fail to start port 0: Resource temporarily unavailable
Done
virtio_send_command(): vq->vq_desc_head_idx = 0, status = 255, vq->hw->cvq = 
0x76d9acc0 vq = 0x76d9ac80
virtio_send_command_split(): vq->vq_queue_index = 2
virtio_send_command_split(): vq->vq_free_cnt=64
vq->vq_desc_head_idx=0
virtio_dev_promiscuous_enable(): Failed to enable promisc
Error during enabling promiscuous mode for port 0: Resource temporarily 
unavailable - ignore




[Attachment: Re- Commit broke 32-bit testpmd app.eml]


[PATCH v2] net/ice: support customized search path for DDP package

2024-08-28 Thread Zhichao Zeng
This patch adds support for customizing the firmware search path for
the DDP package, matching the kernel behavior: the driver will read the
search path from "/sys/module/firmware_class/parameters/path"
and try to load the DDP package from there.

Signed-off-by: Zhichao Zeng 

---
v2: separate the patch and rewrite the log
---
 doc/guides/nics/ice.rst  |  5 +
 drivers/net/ice/ice_ethdev.c | 27 +++
 drivers/net/ice/ice_ethdev.h |  1 +
 3 files changed, 33 insertions(+)

diff --git a/doc/guides/nics/ice.rst b/doc/guides/nics/ice.rst
index ae975d19ad..741cd42cb7 100644
--- a/doc/guides/nics/ice.rst
+++ b/doc/guides/nics/ice.rst
@@ -108,6 +108,11 @@ Runtime Configuration
 
 -a 80:00.0,default-mac-disable=1
 
+- ``DDP Package File``
+
+  Support for customizing the firmware search path: the driver will read the search path
+  from "/sys/module/firmware_class/parameters/path" and try to load the DDP package.
+
 - ``Protocol extraction for per queue``
 
   Configure the RX queues to do protocol extraction into mbuf for protocol
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index 304f959b7e..fc0954ff34 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -1873,6 +1873,22 @@ ice_load_pkg_type(struct ice_hw *hw)
return package_type;
 }
 
+static int ice_read_customized_path(char *pkg_file)
+{
+   char buf[ICE_MAX_PKG_FILENAME_SIZE];
+   FILE *fp = fopen(ICE_PKG_FILE_CUSTOMIZED_PATH, "r");
+   if (fp == NULL) {
+   PMD_INIT_LOG(ERR, "Failed to read CUSTOMIZED_PATH");
+   return -EIO;
+   }
+   if (fscanf(fp, "%s\n", buf) > 0)
+   strncpy(pkg_file, buf, ICE_MAX_PKG_FILENAME_SIZE);
+   else
+   return -EIO;
+
+   return 0;
+}
+
 int ice_load_pkg(struct ice_adapter *adapter, bool use_dsn, uint64_t dsn)
 {
struct ice_hw *hw = &adapter->hw;
@@ -1888,6 +1904,12 @@ int ice_load_pkg(struct ice_adapter *adapter, bool 
use_dsn, uint64_t dsn)
memset(opt_ddp_filename, 0, ICE_MAX_PKG_FILENAME_SIZE);
snprintf(opt_ddp_filename, ICE_MAX_PKG_FILENAME_SIZE,
"ice-%016" PRIx64 ".pkg", dsn);
+
+   ice_read_customized_path(pkg_file);
+   strcat(pkg_file, opt_ddp_filename);
+   if (rte_firmware_read(pkg_file, &buf, &bufsz) == 0)
+   goto load_fw;
+
strncpy(pkg_file, ICE_PKG_FILE_SEARCH_PATH_UPDATES,
ICE_MAX_PKG_FILENAME_SIZE);
strcat(pkg_file, opt_ddp_filename);
@@ -1901,6 +1923,10 @@ int ice_load_pkg(struct ice_adapter *adapter, bool 
use_dsn, uint64_t dsn)
goto load_fw;
 
 no_dsn:
+   ice_read_customized_path(pkg_file);
+   if (rte_firmware_read(pkg_file, &buf, &bufsz) == 0)
+   goto load_fw;
+
strncpy(pkg_file, ICE_PKG_FILE_UPDATES, ICE_MAX_PKG_FILENAME_SIZE);
if (rte_firmware_read(pkg_file, &buf, &bufsz) == 0)
goto load_fw;
@@ -6981,6 +7007,7 @@ RTE_PMD_REGISTER_PARAM_STRING(net_ice,
  ICE_PROTO_XTR_ARG 
"=[queue:]"
  ICE_SAFE_MODE_SUPPORT_ARG "=<0|1>"
  ICE_DEFAULT_MAC_DISABLE "=<0|1>"
+ ICE_DDP_FILENAME "="
  ICE_RX_LOW_LATENCY_ARG "=<0|1>");
 
 RTE_LOG_REGISTER_SUFFIX(ice_logtype_init, init, NOTICE);
diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
index 3ea9f37dc8..8b644ed700 100644
--- a/drivers/net/ice/ice_ethdev.h
+++ b/drivers/net/ice/ice_ethdev.h
@@ -51,6 +51,7 @@
 #define ICE_PKG_FILE_UPDATES "/lib/firmware/updates/intel/ice/ddp/ice.pkg"
 #define ICE_PKG_FILE_SEARCH_PATH_DEFAULT "/lib/firmware/intel/ice/ddp/"
 #define ICE_PKG_FILE_SEARCH_PATH_UPDATES "/lib/firmware/updates/intel/ice/ddp/"
+#define ICE_PKG_FILE_CUSTOMIZED_PATH 
"/sys/module/firmware_class/parameters/path"
 #define ICE_MAX_PKG_FILENAME_SIZE   256
 
 #define MAX_ACL_NORMAL_ENTRIES256
-- 
2.34.1
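
A few notes on ice_read_customized_path() as posted: fp is never closed (it
leaks on both the success and the fscanf-failure paths), strncpy() can leave
pkg_file unterminated when the sysfs value fills the whole buffer, and the
callers strcat() the filename onto pkg_file without checking the return
value, so a failed read appends to stale contents. A hardened variant might
look like the sketch below (same signature as in the patch; strlcpy() comes
from rte_string_fns.h, and the "%255s" width assumes
ICE_MAX_PKG_FILENAME_SIZE stays at 256):

	static int ice_read_customized_path(char *pkg_file)
	{
		char buf[ICE_MAX_PKG_FILENAME_SIZE];
		FILE *fp = fopen(ICE_PKG_FILE_CUSTOMIZED_PATH, "r");
		int ret = -EIO;

		if (fp == NULL) {
			PMD_INIT_LOG(ERR, "Failed to open %s",
				     ICE_PKG_FILE_CUSTOMIZED_PATH);
			return -EIO;
		}
		/* bound the read one byte short of the buffer so it
		 * always stays NUL-terminated */
		if (fscanf(fp, "%255s", buf) == 1) {
			strlcpy(pkg_file, buf, ICE_MAX_PKG_FILENAME_SIZE);
			ret = 0;
		}
		fclose(fp);
		return ret;
	}

Note the sysfs value is expected to end with '/', since the callers strcat()
the package filename directly onto it.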



RE: [RFC 1/2] eal: add llc aware functions

2024-08-28 Thread Feifei Wang
Hi,

> -----Original Message-----
> From: Wathsala Wathawana Vithanage 
> Sent: August 28, 2024 4:56
> To: Vipin Varghese ; ferruh.yi...@amd.com;
> dev@dpdk.org
> Cc: nd ; nd 
> Subject: RE: [RFC 1/2] eal: add llc aware functions
> 
> > -unsigned int rte_get_next_lcore(unsigned int i, int skip_main, int wrap)
> > +#define LCORE_GET_LLC   \
> > +   "ls -d /sys/bus/cpu/devices/cpu%u/cache/index[0-9] | sort  -r
> > | grep -m1 index[0-9] | awk -F '[x]' '{print $2}' "
> >
> 
> This won't work for some SOCs.
> How do you ensure the index you got is for an LLC? Some SOCs may only show
> upper-level caches here, so it cannot be used blindly without knowing the
> SOC.
> Also, executing a shell script is unacceptable; consider implementing this in C.

Maybe:
For arm, we can read the MPIDR_EL1 register to obtain the CPU cluster topology.
MPIDR_EL1 affinity fields:
[39:32] Aff3 (level 3 affinity)
[23:16] Aff2 (level 2 affinity)
[15:8]  Aff1 (level 1 affinity)
[7:0]   Aff0 (level 0 affinity)

For x86, we can use the APIC ID, which encodes the cluster ID, die ID, SMT ID
and core ID.

This avoids executing a shell script, and for arm and x86 we take different
paths to implement it.
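
For reference, the sysfs lookup quoted above can also be done directly in C,
which addresses the shell-script concern. A minimal sketch, assuming the
standard Linux sysfs cache layout (cpu_max_cache_level() is an invented name,
and whether the highest reported level really is the LLC remains
SoC-dependent, as noted above):

	#include <stdio.h>

	/* Return the highest cache level sysfs reports for a CPU, or -1 if
	 * none. Scans /sys/bus/cpu/devices/cpuN/cache/index<i>/level
	 * without forking a shell. */
	int cpu_max_cache_level(unsigned int cpu)
	{
		char path[128];
		int max_level = -1;

		for (unsigned int idx = 0; ; idx++) {
			FILE *f;
			int level;

			snprintf(path, sizeof(path),
				 "/sys/bus/cpu/devices/cpu%u/cache/index%u/level",
				 cpu, idx);
			f = fopen(path, "r");
			if (f == NULL)
				break; /* no more index directories */
			if (fscanf(f, "%d", &level) == 1 && level > max_level)
				max_level = level;
			fclose(f);
		}
		return max_level;
	}

	int main(void)
	{
		printf("cpu0 max cache level: %d\n", cpu_max_cache_level(0));
		return 0;
	}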

Best Regards
Feifei
> --wathsala
> 



RE: [EXTERNAL] [PATCH dpdk] graph: make graphviz export more readable

2024-08-28 Thread Kiran Kumar Kokkilagadda



> -Original Message-
> From: Robin Jarry 
> Sent: Wednesday, August 28, 2024 7:12 PM
> To: dev@dpdk.org; Jerin Jacob ; Kiran Kumar Kokkilagadda
> ; Nithin Kumar Dabilpuram
> ; Zhirun Yan 
> Subject: [EXTERNAL] [PATCH dpdk] graph: make graphviz export more readable
> 
> Change the color of arrows leading to sink nodes to dark orange. Remove the
> node oval shape around the sink nodes and make their text dark orange. This
> results in a much more readable output for large graphs. See the link below
> for an example.
> 
> Link: https://f.jarry.cc/rte-graph-dot/ipv6.svg
> Signed-off-by: Robin Jarry 
> ---

Acked-by: Kiran Kumar Kokkilagadda 

>  lib/graph/graph.c | 7 +--
>  1 file changed, 5 insertions(+), 2 deletions(-)
> 
> diff --git a/lib/graph/graph.c b/lib/graph/graph.c index
> d5b8c9f918cf..dff8e690a80d 100644
> --- a/lib/graph/graph.c
> +++ b/lib/graph/graph.c
> @@ -745,7 +745,7 @@ graph_to_dot(FILE *f, struct graph *graph)
>   if (rc < 0)
>   goto end;
>   } else if (graph_node->node->nb_edges == 0) {
> - rc = fprintf(f, " [color=darkorange]");
> + rc = fprintf(f, " [fontcolor=darkorange shape=plain]");
>   if (rc < 0)
>   goto end;
>   }
> @@ -753,9 +753,12 @@ graph_to_dot(FILE *f, struct graph *graph)
>   if (rc < 0)
>   goto end;
>   for (i = 0; i < graph_node->node->nb_edges; i++) {
> + const char *node_attrs = attrs;
> + if (graph_node->adjacency_list[i]->node->nb_edges == 0)
> + node_attrs = " [color=darkorange]";
>   rc = fprintf(f, "\t\"%s\" -> \"%s\"%s;\n", node_name,
>graph_node->adjacency_list[i]->node->name,
> -  attrs);
> +  node_attrs);
>   if (rc < 0)
>   goto end;
>   }
> --
> 2.46.0
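
To make the effect concrete, here is a throwaway C program that applies the
same styling rules the patch emits, for a hypothetical two-node graph (the
node names are invented):

	#include <stdio.h>

	int main(void)
	{
		printf("digraph example {\n");
		/* ordinary node: keeps the default oval shape */
		printf("\t\"ethdev_rx\";\n");
		/* sink node (no out-edges): drop the oval, dark orange text */
		printf("\t\"pkt_drop\" [fontcolor=darkorange shape=plain];\n");
		/* an edge leading to a sink node is colored to match */
		printf("\t\"ethdev_rx\" -> \"pkt_drop\" [color=darkorange];\n");
		printf("}\n");
		return 0;
	}

Piping the output through "dot -Tsvg" renders the sink in the new style.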



RE: [PATCH] bus/pci: don't open uio device in secondary process

2024-08-28 Thread Chaoyong He
> 
> The uio_pci_generic driver clears the bus master bit when the device file is
> closed.  So, when the secondary process terminates after probing a device,
> that device becomes unusable in the primary process.
> 
> To avoid that, the device file is now opened only in the primary process.  The
> commit that introduced this regression, 847d78fb95
> ("bus/pci: fix FD in secondary process"), only mentioned enabling access to
> config space from secondary process, which still works, as it doesn't rely on
> the device file.

Yes, we can still access the config space from the secondary process.

> 
> Fixes: 847d78fb95 ("bus/pci: fix FD in secondary process")

Maybe we also need a 'Cc: sta...@dpdk.org' here?

With this, it looks good to me, thanks.
Acked-by: Chaoyong He 

> 
> Signed-off-by: Konrad Sztyber 
> ---
>  drivers/bus/pci/linux/pci_uio.c | 25 +
>  1 file changed, 13 insertions(+), 12 deletions(-)
> 
> diff --git a/drivers/bus/pci/linux/pci_uio.c b/drivers/bus/pci/linux/pci_uio.c
> index 4c1d3327a9..432316afcc 100644
> --- a/drivers/bus/pci/linux/pci_uio.c
> +++ b/drivers/bus/pci/linux/pci_uio.c
> @@ -232,18 +232,6 @@ pci_uio_alloc_resource(struct rte_pci_device *dev,
> loc->domain, loc->bus, loc->devid, loc->function);
> return 1;
> }
> -   snprintf(devname, sizeof(devname), "/dev/uio%u", uio_num);
> -
> -   /* save fd */
> -   fd = open(devname, O_RDWR);
> -   if (fd < 0) {
> -   PCI_LOG(ERR, "Cannot open %s: %s", devname, strerror(errno));
> -   goto error;
> -   }
> -
> -   if (rte_intr_fd_set(dev->intr_handle, fd))
> -   goto error;
> -
> snprintf(cfgname, sizeof(cfgname),
> "/sys/class/uio/uio%u/device/config", uio_num);
> 
> @@ -273,6 +261,19 @@ pci_uio_alloc_resource(struct rte_pci_device *dev,
> if (rte_eal_process_type() != RTE_PROC_PRIMARY)
> return 0;
> 
> +   /* the uio_pci_generic driver clears the bus master enable bit when the
> +    * device file is closed, so open it only in the primary process */
> +   snprintf(devname, sizeof(devname), "/dev/uio%u", uio_num);
> +   /* save fd */
> +   fd = open(devname, O_RDWR);
> +   if (fd < 0) {
> +   PCI_LOG(ERR, "Cannot open %s: %s", devname, strerror(errno));
> +   goto error;
> +   }
> +
> +   if (rte_intr_fd_set(dev->intr_handle, fd))
> +   goto error;
> +
> /* allocate the mapping details for secondary processes*/
> *uio_res = rte_zmalloc("UIO_RES", sizeof(**uio_res), 0);
> if (*uio_res == NULL) {
> --
> 2.45.0
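
For anyone who wants to observe the symptom directly: the Bus Master Enable
bit is bit 2 of the PCI command register at config-space offset 0x04, and it
can be read from the sysfs config file without any driver involvement. A
minimal sketch (the BDF is an example; substitute the device under test):

	#include <fcntl.h>
	#include <stdint.h>
	#include <stdio.h>
	#include <unistd.h>

	int main(void)
	{
		/* example BDF; point this at the device under test */
		const char *cfg = "/sys/bus/pci/devices/0000:80:00.0/config";
		uint16_t cmd;
		int fd = open(cfg, O_RDONLY);

		/* command register lives at offset 0x04 in config space */
		if (fd < 0 || pread(fd, &cmd, sizeof(cmd), 0x04) != sizeof(cmd)) {
			perror(cfg);
			return 1;
		}
		close(fd);
		/* config space is little-endian; add le16toh() on big-endian hosts */
		printf("bus master %s\n", (cmd & 0x4) ? "enabled" : "cleared");
		return 0;
	}

Running it before and after a secondary process exits shows the bit being
cleared by uio_pci_generic when the device file is closed.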