On Fri, Sep 18, 2020 at 12:40:32PM -0400, Peter Xu wrote:

> Firstly, in the draft patch mm->has_pinned is introduced and it's written to 1
> as soon as FOLL_PIN is used once.  It's never reset after being set.

Worth thinking about also adding FOLL_LONGTERM here, at least as long
as it is not a counter. That further limits the impact.
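
For example (a sketch of that narrowing, not the posted patch; the
setting site is the one shown further down in the gup.c hunk):

	/* Sketch: only long-term pins mark the mm, further limiting
	 * how many address spaces ever pay the COW penalty. */
	if ((flags & FOLL_PIN) && (flags & FOLL_LONGTERM))
		WRITE_ONCE(mm->has_pinned, 1);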

> One more thing (I think) we need to do is to pass the new vma from
> copy_page_range() all the way down, because if we want to start COW during
> fork() then we need to operate on that new vma too when a new page is
> linked to it, rather than on the parent's.

Makes sense to me

> One issue is when we charge for cgroup we probably can't do that onto the new
> mm/task, since copy_namespaces() is called after copy_mm().  I don't know
> enough about cgroup; I thought the child would inherit the parent's, but I'm
> not sure.  Or, can we change the order of copy_namespaces() and copy_mm()?  I
> don't see a problem so far but I'd like to ask first..

Know nothing about cgroups, but I would have guessed that the page
table allocations would want to be in the cgroup too; is the struct
page a different bucket?

> The other thing is how to fail.  E.g., when COW fails due to either cgroup
> charging or ENOMEM, ideally we should fail fork() too.  Though that might
> need more changes - the current patch silently keeps the shared page for
> simplicity.

I didn't notice anything tricky here.. Something a bit gross but
simple seemed workable:

@@ -852,7 +852,7 @@ static int copy_pte_range(struct mm_struct *dst_mm, struct mm_struct *src_mm,
                        continue;
                }
                entry.val = copy_one_pte(dst_mm, src_mm, dst_pte, src_pte,
-                                                       vma, addr, rss);
+                                                       vma, addr, rss, &err);
                if (entry.val)
                        break;
                progress += 8;
@@ -865,6 +865,9 @@ static int copy_pte_range(struct mm_struct *dst_mm, struct mm_struct *src_mm,
        pte_unmap_unlock(orig_dst_pte, dst_ptl);
        cond_resched();
 
+       if (err)
+               return err;
+
        if (entry.val) {
                if (add_swap_count_continuation(entry, GFP_KERNEL) < 0)
                        return -ENOMEM;

It is not really any different from add_swap_count_continuation()
failure, which already works..
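
For illustration, the failing side could look something like this inside
copy_one_pte() (sketch only; the exact plumbing of the new "err" argument
is my assumption, not the posted patch):

	if (READ_ONCE(src_mm->has_pinned) && page_maybe_dma_pinned(page) &&
	    !duplicate_page(dst_mm, vma, addr, page, dst_pte)) {
		/* Early COW failed: report it instead of silently sharing */
		*err = -ENOMEM;
		return 0;	/* err is checked after the locked loop */
	}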

> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> index 496c3ff97cce..b3812fa6383f 100644
> +++ b/include/linux/mm_types.h
> @@ -458,6 +458,7 @@ struct mm_struct {
>  
>               unsigned long total_vm;    /* Total pages mapped */
>               unsigned long locked_vm;   /* Pages that have PG_mlocked set */
> +             unsigned long has_pinned;  /* Whether this mm has pinned any page */

Can be unsigned int or smaller, if there is a better hole in mm_struct
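
E.g. something like this, next to another 32-bit field (illustrative
placement only; the actual hole depends on the config):

	int map_count;			/* number of VMAs */
	unsigned int has_pinned;	/* set once if FOLL_PIN was ever used */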

> diff --git a/mm/gup.c b/mm/gup.c
> index e5739a1974d5..cab10cefefe4 100644
> +++ b/mm/gup.c
> @@ -1255,6 +1255,17 @@ static __always_inline long __get_user_pages_locked(struct mm_struct *mm,
>               BUG_ON(*locked != 1);
>       }
>  
> +     /*
> +      * Mark the mm struct if there's any page pinning attempt.  We're
> +      * aggressive on this bit: even if the pinned pages were unpinned
> +      * later on, we'll still keep this bit set for this address space just
> +      * to make everything easy.
> +      *
> +      * TODO: Ideally we can use mm->pinned_vm but only until it's stable.
> +      */
> +     if (flags & FOLL_PIN)
> +             WRITE_ONCE(mm->has_pinned, 1);

This should probably be its own commit, here is a stab at a commit
message:

Reduce the chance of false positives from page_maybe_dma_pinned() by
keeping track of whether the mm_struct has ever been used with
pin_user_pages(). mm_structs that have never been passed to
pin_user_pages() cannot have a positive page_maybe_dma_pinned() by
definition. This allows cases that might drive up the page ref_count
to avoid any penalty from handling dma_pinned pages.

Due to complexities with unpinning, this trivial version is a permanent
sticky bit; future work will be needed to make this a counter.
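
To make the rationale concrete, a consumer could be shaped like this (a
sketch; the helper name "page_needs_cow" is mine, not from the patch):

	static bool page_needs_cow(struct mm_struct *src_mm, struct page *page)
	{
		/* Never pinned: cannot be dma-pinned, take the cheap exit */
		if (!READ_ONCE(src_mm->has_pinned))
			return false;
		return page_maybe_dma_pinned(page);
	}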

> +/*
> + * Do early COW for the page and the pte.  Return true if the page was
> + * duplicated successfully, false otherwise.
> + */
> +static bool duplicate_page(struct mm_struct *mm, struct vm_area_struct *vma,

Suggest calling 'vma' 'new' here for consistency

> +                        unsigned long address, struct page *page,
> +                        pte_t *newpte)
> +{
> +       struct page *new_page;
> +       pte_t entry;
> +
> +       new_page = alloc_page_vma(GFP_HIGHUSER_MOVABLE, vma, address);
> +       if (!new_page)
> +               return false;
> +
> +       copy_user_highpage(new_page, page, address, vma);
> +       if (mem_cgroup_charge(new_page, mm, GFP_KERNEL)) {
> +               put_page(new_page);
> +               return false;
> +       }
> +       cgroup_throttle_swaprate(new_page, GFP_KERNEL);
> +       __SetPageUptodate(new_page);

It looks like these GFP flags can't be GFP_KERNEL; this is called
inside pte_alloc_map_lock(), which takes a spinlock

One thought is to lift this logic out to around
add_swap_count_continuation()? Would need some serious rework to be
able to store the dst_pte though.
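
Roughly like this, after the pte_unmap_unlock()/cond_resched() in
copy_pte_range() (sketch only; "need_early_cow", "prealloc" and the
"again" label are assumptions about how the rework could look, and "new"
is the renamed vma suggested above):

	if (need_early_cow) {
		prealloc = alloc_page_vma(GFP_HIGHUSER_MOVABLE, new, addr);
		if (!prealloc)
			return -ENOMEM;
		if (mem_cgroup_charge(prealloc, dst_mm, GFP_KERNEL)) {
			put_page(prealloc);
			return -ENOMEM;
		}
		goto again;	/* retake the lock; the loop consumes prealloc */
	}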

Can't help about the cgroup question

Jason
