On Mon, Jan 21, 2019 at 12:20:35PM +0200, Mike Rapoport wrote:
> On Mon, Jan 21, 2019 at 03:57:04PM +0800, Peter Xu wrote:
> > From: Shaohua Li <s...@fb.com>
> > 
> > Add an API to enable/disable write protection on a vma range. Unlike
> > mprotect, this doesn't split/merge vmas.
> > 
> > Cc: Andrea Arcangeli <aarca...@redhat.com>
> > Cc: Pavel Emelyanov <xe...@parallels.com>
> > Cc: Rik van Riel <r...@redhat.com>
> > Cc: Kirill A. Shutemov <kir...@shutemov.name>
> > Cc: Mel Gorman <mgor...@suse.de>
> > Cc: Hugh Dickins <hu...@google.com>
> > Cc: Johannes Weiner <han...@cmpxchg.org>
> > Signed-off-by: Shaohua Li <s...@fb.com>
> > Signed-off-by: Andrea Arcangeli <aarca...@redhat.com>
> > Signed-off-by: Peter Xu <pet...@redhat.com>
> > ---
> >  include/linux/userfaultfd_k.h |  2 ++
> >  mm/userfaultfd.c              | 52 +++++++++++++++++++++++++++++++++++
> >  2 files changed, 54 insertions(+)
> > 
> > diff --git a/include/linux/userfaultfd_k.h b/include/linux/userfaultfd_k.h
> > index 38f748e7186e..e82f3156f4e9 100644
> > --- a/include/linux/userfaultfd_k.h
> > +++ b/include/linux/userfaultfd_k.h
> > @@ -37,6 +37,8 @@ extern ssize_t mfill_zeropage(struct mm_struct *dst_mm,
> >                           unsigned long dst_start,
> >                           unsigned long len,
> >                           bool *mmap_changing);
> > +extern int mwriteprotect_range(struct mm_struct *dst_mm,
> > +           unsigned long start, unsigned long len, bool enable_wp);
> > 
> >  /* mm helpers */
> >  static inline bool is_mergeable_vm_userfaultfd_ctx(struct vm_area_struct *vma,
> > diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
> > index 458acda96f20..c38903f501c7 100644
> > --- a/mm/userfaultfd.c
> > +++ b/mm/userfaultfd.c
> > @@ -615,3 +615,55 @@ ssize_t mfill_zeropage(struct mm_struct *dst_mm, unsigned long start,
> >  {
> >     return __mcopy_atomic(dst_mm, start, 0, len, true, mmap_changing);
> >  }
> > +
> > +int mwriteprotect_range(struct mm_struct *dst_mm, unsigned long start,
> > +   unsigned long len, bool enable_wp)
> > +{
> > +   struct vm_area_struct *dst_vma;
> > +   pgprot_t newprot;
> > +   int err;
> > +
> > +   /*
> > +    * Sanitize the command parameters:
> > +    */
> > +   BUG_ON(start & ~PAGE_MASK);
> > +   BUG_ON(len & ~PAGE_MASK);
> > +
> > +   /* Does the address range wrap, or is the span zero-sized? */
> > +   BUG_ON(start + len <= start);
> > +
> > +   down_read(&dst_mm->mmap_sem);
> > +
> > +   /*
> > +    * Make sure the vma is not shared, that the dst range is
> > +    * both valid and fully within a single existing vma.
> > +    */
> > +   err = -EINVAL;
> 
> In non-cooperative mode, there can be a race between VM layout changes and
> mcopy_atomic [1]. I believe the same races are possible here, so can we
> please make err = -ENOENT for consistency with mcopy?

Sure.
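
Something along these lines (untested sketch), so that a vma that changed
or disappeared under us is reported the same way as in __mcopy_atomic():

	/*
	 * A non-cooperative manager can change the VM layout under us,
	 * so a missing or unsuitable vma is a retryable condition
	 * rather than an invalid request.
	 */
	err = -ENOENT;
	dst_vma = find_vma(dst_mm, start);
	if (!dst_vma || (dst_vma->vm_flags & VM_SHARED))
		goto out_unlock;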

> 
> > +   dst_vma = find_vma(dst_mm, start);
> > +   if (!dst_vma || (dst_vma->vm_flags & VM_SHARED))
> > +           goto out_unlock;
> > +   if (start < dst_vma->vm_start ||
> > +       start + len > dst_vma->vm_end)
> > +           goto out_unlock;
> > +
> > +   if (!dst_vma->vm_userfaultfd_ctx.ctx)
> > +           goto out_unlock;
> > +   if (!userfaultfd_wp(dst_vma))
> > +           goto out_unlock;
> > +
> > +   if (!vma_is_anonymous(dst_vma))
> > +           goto out_unlock;
> 
> The sanity checks here seem to repeat those in mcopy_atomic(). I'd suggest
> splitting them out to a helper function.

It's a good suggestion.  Thanks!
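
Maybe something like the below (untested sketch, the helper name is only a
placeholder), keeping the caller-specific bits such as the anonymous-only
check for write protection in the callers:

	static struct vm_area_struct *find_dst_vma(struct mm_struct *dst_mm,
						   unsigned long dst_start,
						   unsigned long len)
	{
		struct vm_area_struct *dst_vma;

		/* The range must be fully covered by one existing vma... */
		dst_vma = find_vma(dst_mm, dst_start);
		if (!dst_vma)
			return NULL;
		if (dst_start < dst_vma->vm_start ||
		    dst_start + len > dst_vma->vm_end)
			return NULL;

		/* ...and that vma must be registered with userfaultfd */
		if (!dst_vma->vm_userfaultfd_ctx.ctx)
			return NULL;

		return dst_vma;
	}

Then both __mcopy_atomic() and mwriteprotect_range() could return -ENOENT
when the helper gives back NULL.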

> 
> > +   if (enable_wp)
> > +           newprot = vm_get_page_prot(dst_vma->vm_flags & ~(VM_WRITE));
> > +   else
> > +           newprot = vm_get_page_prot(dst_vma->vm_flags);
> > +
> > +   change_protection(dst_vma, start, start + len, newprot,
> > +                           !enable_wp, 0);
> > +
> > +   err = 0;
> > +out_unlock:
> > +   up_read(&dst_mm->mmap_sem);
> > +   return err;
> > +}
> > -- 
> > 2.17.1
> 
> [1] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=27d02568f529e908399514dfbee8ee43bdfd5299
> 
> -- 
> Sincerely yours,
> Mike.
> 

-- 
Peter Xu
