Delete the driver CPU affinity info and use the core's napi config
instead.
Signed-off-by: Ahmed Zaki
---
drivers/net/ethernet/intel/idpf/idpf_lib.c | 1 +
drivers/net/ethernet/intel/idpf/idpf_txrx.c | 22 +++--
drivers/net/ethernet/intel/idpf/idpf_txrx.h | 6 ++
3 files c
Add a new netdev flag "rx_cpu_rmap_auto". Drivers supporting ARFS should
set the flag via netif_enable_cpu_rmap() and the core will allocate and
manage the ARFS rmap. Freeing the rmap is also done by the core when the
netdev is freed.
For better IRQ affinity management, move the IRQ rmap notifier inside t
Delete the driver CPU affinity info and use the core's napi config
instead.
Signed-off-by: Ahmed Zaki
---
drivers/net/ethernet/intel/ice/ice.h | 3 --
drivers/net/ethernet/intel/ice/ice_base.c | 7 +---
drivers/net/ethernet/intel/ice/ice_lib.c | 6 ---
drivers/net/ethernet/intel/ice/ice
Delete the driver CPU affinity info and use the core's napi config
instead.
Signed-off-by: Ahmed Zaki
---
drivers/net/ethernet/broadcom/bnxt/bnxt.c | 25 +++
drivers/net/ethernet/broadcom/bnxt/bnxt.h | 2 --
2 files changed, 3 insertions(+), 24 deletions(-)
diff --git a/dri
A common task for most drivers is to remember the user-set CPU affinity
of their IRQs. On each netdev reset, the driver should re-assign the
user's settings to the IRQs.
Add CPU affinity mask to napi_config. To delegate the CPU affinity
management to the core, drivers must:
1 - set the new netdev f
Drivers usually need to re-apply the user-set IRQ affinity to their IRQs
after reset. However, since there can be only one IRQ affinity notifier
for each IRQ, registering IRQ notifiers conflicts with the ARFS rmap
management in the core (which also registers separate IRQ affinity
notifiers).
Mo
configs may be tested in the coming days.
tested configs:
arc randconfig-001-20250117 gcc-13.2.0
arc randconfig-002-20250117 gcc-13.2.0
arm randconfig-001-20250117 clang-18
arm randconfig-002-20250117 gcc-14.2.0
arm
have been built successfully.
More configs may be tested in the coming days.
tested configs:
alpha allnoconfig gcc-14.2.0
arc randconfig-001-20250117 gcc-13.2.0
arc randconfig-002-20250117 gcc-13.2.0
arm
On Fri, 17 Jan 2025 16:18:57 +0100 Maciej Fijalkowski wrote:
> Subject: [PATCH v2 intel-net 0/3] ice: fix Rx data path for heavy 9k MTU
> traffic
nit: could you use iwl-net and iwl-next as the tree names?
That's what we match on in NIPA to categorize Intel patches.
successfully.
More configs may be tested in the coming days.
tested configs:
arc allmodconfig gcc-13.2.0
arc allyesconfig gcc-13.2.0
arc randconfig-001-20250117 gcc-13.2.0
arc randconfig-002-20250117
On Fri, Jan 17, 2025 at 11:01:22AM +0100, Przemek Kitszel wrote:
> On 1/16/25 17:21, Simon Horman wrote:
> > On Wed, Jan 15, 2025 at 09:11:17AM +0530, Dheeraj Reddy Jonnalagadda wrote:
> > > The ixgbe driver was missing proper endian conversion for ACI descriptor
> > > register operations. Add the
gcc-14.2.0
arc allyesconfig gcc-13.2.0
arc randconfig-001-20250117 clang-20
arc randconfig-001-20250117 gcc-13.2.0
arc randconfig-002-20250117 clang-20
arc randconfig-002-20250117
successfully.
More configs may be tested in the coming days.
tested configs:
alpha allnoconfig gcc-14.2.0
arc allnoconfig gcc-13.2.0
arc randconfig-001-20250117 gcc-13.2.0
arc randconfig-002-20250117
13.2.0
arc nsimosci_defconfig gcc-13.2.0
arc randconfig-001-20250117 gcc-13.2.0
arc randconfig-002-20250117 gcc-13.2.0
arm allmodconfig gcc-14.2.0
arm allyesconfig gcc-
The commit c824125cbb18 ("ixgbe: Fix passing 0 to ERR_PTR in
ixgbe_run_xdp()") stopped utilizing the ERR-like macros for xdp status
encoding. Propagate this logic to the ixgbe_put_rx_buffer().
The commit also relaxed the skb NULL pointer check - caught by Smatch.
Restore this check.
Fixes: c82412
If we store the pgcnt on a few fragments while being in the middle of
gathering the whole frame and we stumble upon the DD bit not being set, we
terminate the NAPI Rx processing loop and come back later on. Then, on the
next NAPI execution, we work on the previously stored pgcnt.
Imagine that second half of page
The idea behind having ice_rx_buf::act was to simplify and speed up the Rx
data path by walking through buffers that represented cleaned HW
Rx descriptors. Since it caused us a major headache recently, we
rolled back to the old approach that 'puts' Rx buffers right after running the
XDP prog/creating
Introduce a new helper ice_put_rx_mbuf() that will go through the gathered
frags from the current frame and call ice_put_rx_buf() on them. The current
logic, which was supposed to simplify and optimize the driver by going
through a batch of all buffers processed in the current NAPI instance,
turned out to be
v1->v2:
* pass ntc to ice_put_rx_mbuf() (pointed out by Petr Oros) in patch 1
* add review tags from Przemek Kitszel (thanks!)
* make sure patches compile and work ;)
Hello in 2025,
this patchset fixes a pretty nasty issue reported by Red Hat
folks, which occurred after ~30 minutes (this v
On Thu, Jan 09, 2025 at 06:45:12PM +0100, Sebastian Andrzej Siewior wrote:
> On 2025-01-09 13:46:47 [-0300], Wander Lairson Costa wrote:
> > > If the issue is indeed the use of threaded interrupts then the fix
> > > should not be limited to be PREEMPT_RT only.
> > >
> > Although I was not aware of
This patch series introduces support for Precision Time Protocol (PTP) to the
Intel(R) Infrastructure Data Path Function (IDPF) driver. The PTP feature is
supported when the PTP capability is negotiated with the Control
Plane (CP). IDPF creates a PTP clock and sets a set of supported
functions.
During the
>-Original Message-
>From: Fijalkowski, Maciej
>Sent: Thursday, January 16, 2025 5:31 PM
>To: Kwapulinski, Piotr
>Cc: intel-wired-...@lists.osuosl.org; net...@vger.kernel.org;
>dan.carpen...@linaro.org; yuehaib...@huawei.com; Kitszel, Przemyslaw
>
>Subject: Re: [PATCH iwl-next] ixgbe:
Add functions to request a Tx timestamp for PTP packets, read the Tx
timestamp when the completion tag for that packet is received,
extend the Tx timestamp value and set the supported timestamping modes.
A Tx timestamp is requested for PTP packets by setting a TSYN bit and
an index value in
Since workqueues are created per CPU, the work items scheduled to these
workqueues run on the CPU they were assigned to. This may result in an
overloaded CPU that is not able to handle virtchnl messages in a
relatively short time. Allocating the workqueue with the WQ_UNBOUND and
WQ_HIGHPRI flags allows the scheduler to qu
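The change described above amounts to a different set of flags at workqueue allocation time. A hedged kernel-side fragment (not compilable standalone; the workqueue name is illustrative, not necessarily the driver's):

```c
/* Unbound + high-priority: work items are no longer pinned to the CPU
 * that queued them, so virtchnl handling is not starved by one busy CPU. */
wq = alloc_workqueue("idpf_vc", WQ_UNBOUND | WQ_HIGHPRI, 0);
```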
Add an Rx timestamp function for when the Rx timestamp value is read directly
from the Rx descriptor. In order to extend the Rx timestamp value to 64
bits in the hot path, the PHC time is cached in the receive groups.
Add the supported Rx timestamp modes.
Reviewed-by: Willem de Bruijn
Signed-off-by: Milena Olech
Tx timestamp capabilities are negotiated for the uplink vport.
The driver receives information about the number of available Tx timestamp
latches, the size of the Tx timestamp value and the set of indexes used
for Tx timestamping.
Add a function to get the Tx timestamp capabilities and parse the uplink
vpor
PTP clock configuration operations (set time, adjust time and adjust
frequency) are required to control the clock and maintain the synchronization
process.
Extend the get PTP capabilities function to request the clock adjustments
and add functions to enable these actions using dedicated virtchnl
messa
When the access to read the PTP clock is specified as mailbox, the driver
needs to send a virtchnl message to perform PTP actions. The message is sent
using the idpf_mbq_opc_send_msg_to_peer_drv mailbox opcode, with the
parameters received during PTP capabilities negotiation.
Add functions to recognize PTP mess
PTP capabilities are negotiated using a virtchnl command. Add a get
capabilities function, direct access to read the PTP clock time and
direct access to read the cross timestamp (system time and PTP clock
time). Set the initial PTP capabilities exposed to the stack.
Reviewed-by: Alexander Lobakin
Reviewe
Move virtchnl structures to the header file to expose them for the PTP
virtchnl file.
Reviewed-by: Alexander Lobakin
Reviewed-by: Willem de Bruijn
Signed-off-by: Milena Olech
---
v1 -> v2: fix commit message title
.../net/ethernet/intel/idpf/idpf_virtchnl.c | 86 +--
.../net
PTP capabilities are negotiated using virtchnl commands. There are two
available modes of PTP support: direct and mailbox. When direct
access to PTP resources is negotiated, virtchnl messages return a set
of registers that allow direct read/write. When the mailbox access to
PTP resources
The PTP feature is supported if VIRTCHNL2_CAP_PTP is negotiated during
capabilities recognition. Initial PTP support includes PTP initialization
and registration of the clock.
Reviewed-by: Alexander Lobakin
Reviewed-by: Vadim Fedorenko
Reviewed-by: Willem de Bruijn
Signed-off-by: Milena Ole
On 2025/1/17 2:02, Jesper Dangaard Brouer wrote:
>
> Benchmark (bench_page_pool_simple) results from before and after
> patchset with patches 1-5m and rcu lock removal as requested.
>
> | Test name |Cycles | 1-5 | | Nanosec | 1-5 | | % |
> | (tasklet_*)|Before | After |diff|
On 1/16/25 17:21, Simon Horman wrote:
> On Wed, Jan 15, 2025 at 09:11:17AM +0530, Dheeraj Reddy Jonnalagadda wrote:
>> The ixgbe driver was missing proper endian conversion for ACI descriptor
>> register operations. Add the necessary conversions when reading and
>> writing to the registers.
>> Fixes: 46761fd
On Thu, Jan 16, 2025 at 06:55:30AM -0700, Ahmed Zaki wrote:
> The vport config lock protects the vports queues and config data. These
> mainly change in soft reset path. Since there is no dependency across
> vports, there is no need for this lock to be global.
>
> Move the lock to be per-vport and
The Flow Director function ice_fdir_create_dflt_rules() calls the
function ice_create_init_fdir_rule() a few times, each time with a different
enum ice_fltr_ptype parameter. The next step is to return an error code if
an error occurred.
Change the code to store all necessary default rules in a constant array
and call