On Wed, 2018-06-27 at 11:10 -0700, Andy Lutomirski wrote:
>
> You left this comment:
>
>         /*
>          * We don't currently support having a real mm loaded without
>          * our cpu set in mm_cpumask().  We have all the bookkeeping
>          * in place to figure out whether we would need to flush
>          * if our cpu were cleared in mm_cpumask(), but we don't
>          * currently use it.
>          */
>
> Presumably you should either clear the cpu from mm_cpumask when lazy
> or you should update the comment.
The lazy TLB mode leaves the mm loaded, AND the cpu set in
mm_cpumask(). However, while the comment is technically accurate,
I guess it is no longer relevant, so I will update it :)

> > +	/*
> > +	 * Switching straight from one thread in a process to another
> > +	 * thread in the same process requires no TLB flush at all.
> > +	 */
> > +	if (!was_lazy)
> > +		return;
>
> Comment doesn't match code.  Maybe add "... if we weren't lazy"?

Done.

> > +
> > +	/*
> > +	 * The code below checks whether there was a TLB flush while
> > +	 * this CPU was in lazy TLB mode. The barrier ensures ordering
> > +	 * with the TLB invalidation code advancing .tlb_gen.
> > +	 */
> > +	smp_rmb();
>
> I think it may need to be smp_mb().  You're trying to order
> this_cpu_write() against subsequent reads.

I have updated the barrier to an smp_mb().

> In general, the changes to this function are very hard to review
> because you're mixing semantic changes and restructuring the
> function.  Is there any way you could avoid that?  Or maybe just
> open-code a tlb_gen check in the unlazying path?

> > +	/*
> > +	 * Instead of sending IPIs to CPUs in lazy TLB mode, move that
> > +	 * CPU's TLB state to TLBSTATE_FLUSH, causing the TLB to be
> > +	 * flushed at the next context switch, or at page table free
> > +	 * time.
> > +	 */
>
> Stale comment?

Will fix.

I am running some last tests now, and will send v3 soon.
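For reference, here is a minimal illustrative sketch of what the
unlazying path discussed above roughly does once the barrier is an
smp_mb(). It is conceptually part of arch/x86/mm/tlb.c, so the usual
tlbflush.h / mmu_context.h definitions are assumed to be in scope.
The names (cpu_tlbstate, is_lazy, ctxs[], tlb_gen, flush_tlb_local())
follow the x86 mm code, but the function itself, the fixed ASID index,
and the exact flush call are simplifying assumptions, not the actual
patch:

/*
 * Illustrative sketch only, not the real switch_mm_irqs_off() path.
 */
static void unlazy_and_check_tlb_gen(struct mm_struct *mm)
{
	u64 mm_tlb_gen, local_tlb_gen;

	/* Leave lazy TLB mode; this CPU uses mm's page tables again. */
	this_cpu_write(cpu_tlbstate.is_lazy, false);

	/*
	 * Full barrier, not smp_rmb(): order the this_cpu_write() above
	 * against the tlb_gen reads below, so a flush that advanced
	 * .tlb_gen while this CPU was lazy cannot be missed.
	 */
	smp_mb();

	mm_tlb_gen = atomic64_read(&mm->context.tlb_gen);
	/* Assume ASID slot 0; the real code tracks the active ASID. */
	local_tlb_gen = this_cpu_read(cpu_tlbstate.ctxs[0].tlb_gen);

	if (local_tlb_gen < mm_tlb_gen) {
		/* A flush was skipped while lazy; catch up locally. */
		flush_tlb_local();
		this_cpu_write(cpu_tlbstate.ctxs[0].tlb_gen, mm_tlb_gen);
	}
}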
-- 
All Rights Reversed.