There is no need to take the tx lock if the queue is already netif stopped
(the transmit path will never be entered).
With this change, under high speed forwarding I see anywhere
between a 2-4 Kpps improvement on a 2-CPU environment with two e1000s tied
to different CPUs forwarding between each other. Most of the
performance improvement should actually be attributed to the use of
TX_WAKE_THRESHOLD - more drivers should use that technique.
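
For reference, the general shape of that technique (a minimal sketch, not
e1000 code; my_ring, my_desc_unused() and MY_TX_WAKE_THRESHOLD are made-up
names standing in for a driver's own ring bookkeeping):

#include <linux/netdevice.h>

#define MY_TX_WAKE_THRESHOLD 32

/* Hypothetical ring state, standing in for the driver's own. */
struct my_ring {
	unsigned int count;		/* total descriptors in the ring */
	unsigned int next_to_use;	/* next descriptor the xmit path fills */
	unsigned int next_to_clean;	/* next descriptor the IRQ path reclaims */
};

static unsigned int my_desc_unused(const struct my_ring *ring)
{
	if (ring->next_to_clean > ring->next_to_use)
		return ring->next_to_clean - ring->next_to_use - 1;
	return ring->count + ring->next_to_clean - ring->next_to_use - 1;
}

static void my_clean_tx_irq(struct net_device *netdev, struct my_ring *ring,
			    bool cleaned)
{
	/*
	 * Only restart the queue once a comfortable number of descriptors
	 * is free again; waking on every reclaimed descriptor would just
	 * bounce the queue between stopped and running.
	 */
	if (unlikely(cleaned && netif_queue_stopped(netdev) &&
		     netif_carrier_ok(netdev))) {
		if (my_desc_unused(ring) >= MY_TX_WAKE_THRESHOLD)
			netif_wake_queue(netdev);
	}
}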

cheers,
jamal

Signed-off-by: Jamal Hadi Salim <[EMAIL PROTECTED]>

-----

diff --git a/drivers/net/e1000/e1000_main.c b/drivers/net/e1000/e1000_main.c
index da62db8..559e334 100644
--- a/drivers/net/e1000/e1000_main.c
+++ b/drivers/net/e1000/e1000_main.c
@@ -3517,11 +3517,8 @@ #endif
 #define TX_WAKE_THRESHOLD 32
 	if (unlikely(cleaned && netif_queue_stopped(netdev) &&
 	             netif_carrier_ok(netdev))) {
-		spin_lock(&tx_ring->tx_lock);
-		if (netif_queue_stopped(netdev) &&
-		    (E1000_DESC_UNUSED(tx_ring) >= TX_WAKE_THRESHOLD))
+		if (E1000_DESC_UNUSED(tx_ring) >= TX_WAKE_THRESHOLD)
 			netif_wake_queue(netdev);
-		spin_unlock(&tx_ring->tx_lock);
 	}
 
 	if (adapter->detect_tx_hung) {
