On Fri 04-10-19 15:32:01, Konstantin Khlebnikov wrote:
> 
> 
> On 04/10/2019 15.27, Michal Hocko wrote:
> > On Fri 04-10-19 05:10:17, Matthew Wilcox wrote:
> > > On Fri, Oct 04, 2019 at 01:11:06PM +0300, Konstantin Khlebnikov wrote:
> > > > This is a very slow operation. There is no reason to do it again if
> > > > somebody else has already drained all per-cpu vectors while we waited
> > > > for the lock.
> > > > +       seq = raw_read_seqcount_latch(&seqcount);
> > > > +
> > > >         mutex_lock(&lock);
> > > > +
> > > > +       /* Piggyback on drain done by somebody else. */
> > > > +       if (__read_seqcount_retry(&seqcount, seq))
> > > > +               goto done;
> > > > +
> > > > +       raw_write_seqcount_latch(&seqcount);
> > > > +
> > > 
> > > Do we really need the seqcount to do this?  Wouldn't a mutex_trylock()
> > > have the same effect?
> > 
> > Yeah, this makes sense. From a correctness point of view it should be
> > OK because no caller can expect that the per-cpu pvecs are empty on
> > return. This might have some runtime effects in that some paths might
> > retry more often - e.g. the offlining path drains pcp pvecs before
> > migrating the range away, and if there are pages still waiting for a
> > worker to drain them then the migration would fail and we would
> > retry. But this is not a correctness issue.
> > 
> 
> A caller might expect that the pages it added beforehand have been
> drained. Exiting early when mutex_trylock() fails would not guarantee
> that.
> 
> For example, POSIX_FADV_DONTNEED relies on this.
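
For reference, a minimal sketch of the scheme under discussion, and of
why a bare mutex_trylock() would lose the guarantee Konstantin
describes. This is simplified from the patch: the direct
lru_add_drain_cpu() loop is only a stand-in for the real per-cpu work
queueing and flushing.

	void lru_add_drain_all(void)
	{
		static seqcount_t seqcount = SEQCNT_ZERO(seqcount);
		static DEFINE_MUTEX(lock);
		int cpu, seq;

		/* Snapshot the drain generation BEFORE waiting for the mutex. */
		seq = raw_read_seqcount_latch(&seqcount);

		mutex_lock(&lock);

		/*
		 * If the generation moved while we waited, a full drain
		 * started after our snapshot, i.e. after the caller's pages
		 * were already sitting in the per-cpu vectors, so they are
		 * on the LRU by now. A failed mutex_trylock() cannot tell
		 * us that: the drain currently holding the lock may have
		 * started before the caller added its pages.
		 */
		if (__read_seqcount_retry(&seqcount, seq))
			goto done;

		raw_write_seqcount_latch(&seqcount);

		for_each_online_cpu(cpu)
			lru_add_drain_cpu(cpu);	/* sketch only */

	done:
		mutex_unlock(&lock);
	}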

OK, I was not aware of this case. Please make sure to document that in
the changelog, and a comment in the code wouldn't hurt either. It would
certainly explain more than "Piggyback on drain done by somebody
else.".

Thanks!
-- 
Michal Hocko
SUSE Labs
