On 2020-05-22 16:57:07 [+0200], Peter Zijlstra wrote:
> > @@ -725,21 +735,48 @@ void lru_add_drain_all(void)
> >     if (WARN_ON(!mm_percpu_wq))
> >             return;
> >  
> 
> > +   this_gen = READ_ONCE(lru_drain_gen);
> > +   smp_rmb();
> 
>       this_gen = smp_load_acquire(&lru_drain_gen);
> >  
> >     mutex_lock(&lock);
> >  
> >     /*
> > +    * (C) Exit the draining operation if a newer generation, from another
> > +    * lru_add_drain_all(), was already scheduled for draining. Check (A).
> >      */
> > +   if (unlikely(this_gen != lru_drain_gen))
> >             goto done;
> >  
> 
> > +   WRITE_ONCE(lru_drain_gen, lru_drain_gen + 1);
> > +   smp_wmb();
> 
> You can leave this smp_wmb() out and rely on the smp_mb() implied by
> queue_work_on()'s test_and_set_bit().

This is to avoid smp_store_release() ?

Sebastian
