On 2019/1/8 20:45, Jia-Ju Bai wrote:
In drivers/net/ethernet/nvidia/forcedeth.c, the functions
nv_start_xmit() and nv_start_xmit_optimized() can be concurrently
executed with nv_poll_controller().

nv_start_xmit
   line 2321: prev_tx_ctx->skb = skb;

nv_start_xmit_optimized
   line 2479: prev_tx_ctx->skb = skb;

nv_poll_controller
   nv_do_nic_poll
     line 4134: spin_lock(&np->lock);
     nv_drain_rxtx
       nv_drain_tx
         nv_release_txskb
           line 2004: dev_kfree_skb_any(tx_skb->skb);

Thus, two possible use-after-free bugs may occur due to this concurrency.

To fix these possible bugs,


Does this really occur? Can you reproduce it?


  the calls to spin_lock_irqsave() in
nv_start_xmit() and nv_start_xmit_optimized() are moved to the
front of "prev_tx_ctx->skb = skb;"

Signed-off-by: Jia-Ju Bai <baijiaju1...@gmail.com>
---
  drivers/net/ethernet/nvidia/forcedeth.c | 8 ++++----
  1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/net/ethernet/nvidia/forcedeth.c b/drivers/net/ethernet/nvidia/forcedeth.c
index 1d9b0d44ddb6..48fa5a0bd2cb 100644
--- a/drivers/net/ethernet/nvidia/forcedeth.c
+++ b/drivers/net/ethernet/nvidia/forcedeth.c
@@ -2317,6 +2317,8 @@ static netdev_tx_t nv_start_xmit(struct sk_buff *skb, struct net_device *dev)
        /* set last fragment flag  */
        prev_tx->flaglen |= cpu_to_le32(tx_flags_extra);
+       spin_lock_irqsave(&np->lock, flags);
+
        /* save skb in this slot's context area */
        prev_tx_ctx->skb = skb;
@@ -2326,8 +2328,6 @@ static netdev_tx_t nv_start_xmit(struct sk_buff *skb, struct net_device *dev)
                tx_flags_extra = skb->ip_summed == CHECKSUM_PARTIAL ?
                         NV_TX2_CHECKSUM_L3 | NV_TX2_CHECKSUM_L4 : 0;
-       spin_lock_irqsave(&np->lock, flags);
-
        /* set tx flags */
        start_tx->flaglen |= cpu_to_le32(tx_flags | tx_flags_extra);
@@ -2475,6 +2475,8 @@ static netdev_tx_t nv_start_xmit_optimized(struct sk_buff *skb,
        /* set last fragment flag  */
        prev_tx->flaglen |= cpu_to_le32(NV_TX2_LASTPACKET);
+       spin_lock_irqsave(&np->lock, flags);
+
        /* save skb in this slot's context area */
        prev_tx_ctx->skb = skb;
@@ -2491,8 +2493,6 @@ static netdev_tx_t nv_start_xmit_optimized(struct sk_buff *skb,
        else
                start_tx->txvlan = 0;
-       spin_lock_irqsave(&np->lock, flags);
-
        if (np->tx_limit) {
                /* Limit the number of outstanding tx. Setup all fragments, but
                 * do not set the VALID bit on the first descriptor. Save a pointer
