g->flags);
-	igc_set_queue_napi(adapter, queue_id, napi);
 	if (needs_reset) {
 		napi_enable(napi);
base-commit: 3c9231ea6497dfc50ac0ef69fff484da27d0df66
igc_set_queue_napi() could be made static, as it is only used within
igc_main.c after this change.
Reviewed-by: Gerhard Engleder
On 10.02.25 10:19, Kurt Kanzenbach wrote:
When running igc with XDP/ZC in busy-polling mode with deferral of hard
interrupts, interrupts still happen from time to time. That is caused by
the igc task watchdog, which triggers Rx interrupts periodically.
igc or igb?
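For context, busy polling with deferred hard interrupts is driven from user
space: the per-device sysfs knobs napi_defer_hard_irqs and gro_flush_timeout
hold interrupts off while an application polls. A minimal sketch of the
socket-side setup, assuming a generic socket fd on a recent kernel (the
option values below are illustrative, not taken from this thread):

	#include <sys/socket.h>

	/* Sketch: ask the kernel to prefer busy polling over interrupt
	 * delivery for this socket. Values are illustrative assumptions.
	 */
	static int enable_busy_poll(int fd)
	{
		int prefer = 1;   /* SO_PREFER_BUSY_POLL: poll before irqs */
		int usecs = 64;   /* SO_BUSY_POLL: busy-wait time in us */
		int budget = 64;  /* SO_BUSY_POLL_BUDGET: packets per poll */

		if (setsockopt(fd, SOL_SOCKET, SO_PREFER_BUSY_POLL,
			       &prefer, sizeof(prefer)))
			return -1;
		if (setsockopt(fd, SOL_SOCKET, SO_BUSY_POLL,
			       &usecs, sizeof(usecs)))
			return -1;
		return setsockopt(fd, SOL_SOCKET, SO_BUSY_POLL_BUDGET,
				  &budget, sizeof(budget));
	}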
On 06.01.25 12:17, Simon Horman wrote:
On Thu, Dec 19, 2024 at 08:27:43PM +0100, Gerhard Engleder wrote:
From: Gerhard Engleder
Link down and up triggers update of MTA table. This update executes many
PCIe writes and a final flush. Thus, PCIe will be blocked until all
writes are flushed. As a result, DMA transfers of other targets suffer
from delay in the range of 50us. This results in timing [...]
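For reference, the code path being described is the MTA update loop in
e1000e_update_mc_addr_list_generic(): a burst of posted register writes
followed by a single flush. Sketch of the pre-patch shape (simplified
from the upstream driver):

	/* replace the entire MTA table: every write is posted, and only
	 * the final read back (e1e_flush) forces them out, so the whole
	 * burst occupies the PCIe link at once
	 */
	for (i = hw->mac.mta_reg_count - 1; i >= 0; i--)
		E1000_WRITE_REG_ARRAY(hw, E1000_MTA, i, hw->mac.mta_shadow[i]);
	e1e_flush();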
From: Gerhard Engleder
Link down and up triggers update of MTA table. [...]
On 18.12.24 16:23, Alexander Lobakin wrote:
From: Gerhard Engleder
Date: Sat, 14 Dec 2024 20:16:23 +0100
From: Gerhard Engleder
Link down and up triggers update of MTA table. [...]
On 18.12.24 16:08, Avigail Dahan wrote:
On 14/12/2024 21:16, Gerhard Engleder wrote:
From: Gerhard Engleder
Link down and up triggers update of MTA table. [...]
On 18.12.24 09:36, Przemek Kitszel wrote:
On 12/16/24 20:23, Gerhard Engleder wrote:
@@ -331,8 +331,15 @@ void e1000e_update_mc_addr_list_generic(struct e1000_hw *hw,
 	}

 	/* replace the entire MTA table */
-	for (i = hw->mac.mta_reg_count - 1; i >= 0; i--)
+	for (i = hw->mac.mta_reg_count - 1; i >= 0; i--) {
 		E1000_WRITE_REG_ARRAY(hw, E1000_MTA, i, hw->mac.mta_shadow[i]);
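The direction discussed in the thread is to bound how many posted writes
can pile up by flushing inside the loop rather than once at the end. A
minimal sketch of that pattern (the flush interval is an illustrative
assumption, not the value from the final patch):

	#define MTA_FLUSH_INTERVAL	8	/* assumption: flush every 8 writes */

	for (i = hw->mac.mta_reg_count - 1; i >= 0; i--) {
		E1000_WRITE_REG_ARRAY(hw, E1000_MTA, i, hw->mac.mta_shadow[i]);

		/* a read drains the posted writes, giving other PCIe
		 * targets a chance to transfer in between
		 */
		if ((i % MTA_FLUSH_INTERVAL) == 0)
			e1e_flush();
	}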
From: Gerhard Engleder
Link down and up triggers update of MTA table. [...]
On 10.12.24 16:27, Bjorn Helgaas wrote:
On Sun, Dec 08, 2024 at 07:49:50PM +0100, Gerhard Engleder wrote:
Link down and up triggers update of MTA table. [...]
On 09.12.24 12:34, Paul Menzel wrote:
[Cc: +PCI folks]
Dear Gerhard,
Thank you for your patch.
Am 08.12.24 um 19:49 schrieb Gerhard Engleder:
From: Gerhard Engleder
From: Gerhard Engleder
The From: line is present twice. No idea if git is going to remove both.
It seems git send
From: Gerhard Engleder
From: Gerhard Engleder
Link down and up triggers update of MTA table. [...]
On 04.12.24 11:10, Przemek Kitszel wrote:
On 12/3/24 21:28, Gerhard Engleder wrote:
From: Gerhard Engleder
From: Gerhard Engleder
duplicated From: line
Nervous fingers, sorry, will be fixed.
Link down and up triggers update of MTA table. [...]
On 12.10.24 20:42, Andrew Lunn wrote:
On Fri, Oct 11, 2024 at 09:54:12PM +0200, Gerhard Engleder wrote:
From: Gerhard Engleder
Link down and up triggers update of MTA table. [...]
From: Gerhard Engleder
Link down and up triggers update of MTA table. [...]
f->flags) :
-		clear_bit(ICE_FLAG_CLS_FLOWER, pf->flags);
+		assign_bit(ICE_FLAG_CLS_FLOWER, pf->flags, ena);
 	}

 	if (changed & NETIF_F_LOOPBACK)
Reviewed-by: Gerhard Engleder
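For reference, assign_bit() from <linux/bitops.h> folds the open-coded
set/clear ternary into a single call; the boolean argument decides between
set_bit() and clear_bit(). A minimal usage sketch mirroring the hunk above:

	/* before: ena ? set_bit(ICE_FLAG_CLS_FLOWER, pf->flags) :
	 *               clear_bit(ICE_FLAG_CLS_FLOWER, pf->flags);
	 * after: one call, same effect
	 */
	assign_bit(ICE_FLAG_CLS_FLOWER, pf->flags, ena);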
_dev *pdev)
 	rtnl_lock();
 	if (netif_running(netdev)) {
 		if (igc_open(netdev)) {
+			rtnl_unlock();
 			netdev_err(netdev, "igc_open failed after reset\n");
 			return;
 		}
Reviewed-by: Gerhard Engleder
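The bug fixed above is an early return that leaves rtnl_lock() held,
deadlocking the next path that takes the lock. A sketch of the balanced
shape of the resume handler after the fix (abbreviated; the surrounding
details are assumptions based on the igc PCI error-recovery path):

	rtnl_lock();
	if (netif_running(netdev)) {
		if (igc_open(netdev)) {
			rtnl_unlock();	/* drop the lock on the error path too */
			netdev_err(netdev, "igc_open failed after reset\n");
			return;
		}
	}
	netif_device_attach(netdev);
	rtnl_unlock();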