Hi,

sorry for the short followup; I replied to the previous commit without
reading through the whole mailbox. Given the fallout of r348303,
I'd suggest considering my one-liner as the proper solution to
the problem.

On Tue, May 28, 2019 at 11:45:00AM +0000, Andrey V. Elsukov wrote:
A> Author: ae
A> Date: Tue May 28 11:45:00 2019
A> New Revision: 348324
A> URL: https://svnweb.freebsd.org/changeset/base/348324
A> 
A> Log:
A>   Rework r348303 to reduce the time of holding global BPF lock.
A>   
A>   It appeared that using NET_EPOCH_WAIT() while holding global BPF lock
A>   can lead to another panic:
A>   
A>   spin lock 0xfffff800183c9840 (turnstile lock) held by 0xfffff80018e2c5a0 (tid 100325) too long
A>   panic: spin lock held too long
A>   ...
A>   #0  sched_switch (td=0xfffff80018e2c5a0, newtd=0xfffff8000389e000, flags=<optimized out>) at /usr/src/sys/kern/sched_ule.c:2133
A>   #1  0xffffffff80bf9912 in mi_switch (flags=256, newtd=0x0) at /usr/src/sys/kern/kern_synch.c:439
A>   #2  0xffffffff80c21db7 in sched_bind (td=<optimized out>, cpu=<optimized out>) at /usr/src/sys/kern/sched_ule.c:2704
A>   #3  0xffffffff80c34c33 in epoch_block_handler_preempt (global=<optimized out>, cr=0xfffffe00005a1a00, arg=<optimized out>)
A>       at /usr/src/sys/kern/subr_epoch.c:394
A>   #4  0xffffffff803c741b in epoch_block (global=<optimized out>, cr=<optimized out>, cb=<optimized out>, ct=<optimized out>)
A>       at /usr/src/sys/contrib/ck/src/ck_epoch.c:416
A>   #5  ck_epoch_synchronize_wait (global=0xfffff8000380cd80, cb=<optimized out>, ct=<optimized out>) at /usr/src/sys/contrib/ck/src/ck_epoch.c:465
A>   #6  0xffffffff80c3475e in epoch_wait_preempt (epoch=0xfffff8000380cd80) at /usr/src/sys/kern/subr_epoch.c:513
A>   #7  0xffffffff80ce970b in bpf_detachd_locked (d=0xfffff801d309cc00, detached_ifp=<optimized out>) at /usr/src/sys/net/bpf.c:856
A>   #8  0xffffffff80ced166 in bpf_detachd (d=<optimized out>) at /usr/src/sys/net/bpf.c:836
A>   #9  bpf_dtor (data=0xfffff801d309cc00) at /usr/src/sys/net/bpf.c:914
A>   
A>   To fix this, add a check to catchpacket() that the BPF descriptor
A>   was not detached just before we acquired BPFD_LOCK().
A>   
A>   Reported by:       slavash
A>   Tested by:         slavash
A>   MFC after:         1 week
A> 
A> Modified:
A>   head/sys/net/bpf.c
A> 
A> Modified: head/sys/net/bpf.c
A> 
A> ==============================================================================
A> --- head/sys/net/bpf.c       Tue May 28 10:55:59 2019        (r348323)
A> +++ head/sys/net/bpf.c       Tue May 28 11:45:00 2019        (r348324)
A> @@ -850,15 +850,10 @@ bpf_detachd_locked(struct bpf_d *d, bool detached_ifp)
A>      /* Check if descriptor is attached */
A>      if ((bp = d->bd_bif) == NULL)
A>              return;
A> -    /*
A> -     * Remove d from the interface's descriptor list.
A> -     * And wait until bpf_[m]tap*() will finish their possible work
A> -     * with descriptor.
A> -     */
A> -    CK_LIST_REMOVE(d, bd_next);
A> -    NET_EPOCH_WAIT();
A>  
A>      BPFD_LOCK(d);
A> +    /* Remove d from the interface's descriptor list. */
A> +    CK_LIST_REMOVE(d, bd_next);
A>      /* Save bd_writer value */
A>      error = d->bd_writer;
A>      ifp = bp->bif_ifp;
A> @@ -2494,6 +2489,11 @@ catchpacket(struct bpf_d *d, u_char *pkt, u_int pktlen
A>      int tstype;
A>  
A>      BPFD_LOCK_ASSERT(d);
A> +    if (d->bd_bif == NULL) {
A> +            /* Descriptor was detached in concurrent thread */
A> +            counter_u64_add(d->bd_dcount, 1);
A> +            return;
A> +    }
A>  
A>      /*
A>       * Detect whether user space has released a buffer back to us, and if
A> 

-- 
Gleb Smirnoff
_______________________________________________
svn-src-head@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/svn-src-head
To unsubscribe, send any mail to "svn-src-head-unsubscr...@freebsd.org"