On Wed, Mar 03, 2021 at 01:57:02AM -0800, Nadav Amit wrote:
> From: Nadav Amit <na...@vmware.com>
> 
> Userfaultfd self-test fails occasionally, indicating a memory
> corruption.

It's now failing very consistently for me after I ran it on a 40-core
system...  while it's indeed hard to make it fail on my laptop.

[...]

> Fixes: 292924b26024 ("userfaultfd: wp: apply _PAGE_UFFD_WP bit")
> Signed-off-by: Nadav Amit <na...@vmware.com>
> 
> ---
> v2->v3:
> * Do not acquire mmap_lock for write, flush conditionally instead [Yu]
> * Change the fixes tag to the patch that made the race apparent [Yu]

Did you forget about this one?  It would still be good to point to 09854ba94c6a
just to show that the 5.7/5.8 stable branches shouldn't need this patch, as
they're not prone to the TLB data corruption.  Maybe also cc stable for 5.9+?

> * Removing patch to avoid write-protect on uffd unprotect. More
>   comprehensive solution to follow (and avoid the TLB flush as well).
> ---
>  mm/memory.c | 7 +++++++
>  1 file changed, 7 insertions(+)
> 
> diff --git a/mm/memory.c b/mm/memory.c
> index 9e8576a83147..06da04f98936 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -3092,6 +3092,13 @@ static vm_fault_t do_wp_page(struct vm_fault *vmf)
>               return handle_userfault(vmf, VM_UFFD_WP);
>       }
>  
> +     /*
> +      * Userfaultfd write-protect can defer flushes. Ensure the TLB
> +      * is flushed in this case before copying.
> +      */
> +     if (userfaultfd_wp(vmf->vma) && mm_tlb_flush_pending(vmf->vma->vm_mm))
> +             flush_tlb_page(vmf->vma, vmf->address);
> +
>       vmf->page = vm_normal_page(vma, vmf->address, vmf->orig_pte);
>       if (!vmf->page) {
>               /*
> -- 
> 2.25.1
> 

Thanks for being persistent in fixing this problem.

Maybe it would even be better to wrap that check in unlikely() (roughly as
sketched below) to reduce the impact on the normal do_wp_page() path as much
as possible?  But I'll leave that to others.

With the Fixes tag modified as mentioned above:

Reviewed-by: Peter Xu <pet...@redhat.com>
Tested-by: Peter Xu <pet...@redhat.com>

Thanks,

-- 
Peter Xu
