On Thu, Dec 17, 2020 at 08:12:56PM +0800, Muchun Song wrote:
> In the subsequent patch, we will allocate the vmemmap pages when free
> HugeTLB pages. But update_and_free_page() is called from a non-task
> context(and hold hugetlb_lock), so we can defer the actual freeing in
> a workqueue to prevent from using GFP_ATOMIC to allocate the vmemmap
> pages.

I think we would benefit from a more complete changelog; at least I
had to stare at the code for a while in order to grasp what we are
trying to do and the reasons behind it.
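
For instance, spelling out the overall flow would already help. My
reading of the producer side (not quoted here, so take this as a
sketch rather than the actual diff) is roughly:

	static void update_and_free_page(struct hstate *h, struct page *page)
	{
		/*
		 * Defer the actual freeing: stash the page on a lockless
		 * list, reusing page->mapping as the llist_node, and only
		 * kick the worker if the list was previously empty.
		 */
		if (llist_add((struct llist_node *)&page->mapping,
			      &hpage_update_freelist))
			schedule_work(&hpage_update_work);
	}

i.e. the hot path only queues the page, and update_hpage_vmemmap_workfn()
does the vmemmap allocation and the freeing from task context. Having
that spelled out in the changelog would have saved me some time.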

> +static void __free_hugepage(struct hstate *h, struct page *page);
> +
> +/*
> + * As update_and_free_page() is be called from a non-task context(and hold
> + * hugetlb_lock), we can defer the actual freeing in a workqueue to prevent
> + * use GFP_ATOMIC to allocate a lot of vmemmap pages.

The above implies that update_and_free_page() is __always__ called from a
non-task context, but that is not always the case? E.g. the calls coming
from set_max_huge_pages() run in task context.

> +static void update_hpage_vmemmap_workfn(struct work_struct *work)
>  {
> -     int i;
> +     struct llist_node *node;
> +     struct page *page;
>  
> +     node = llist_del_all(&hpage_update_freelist);
> +
> +     while (node) {
> +             page = container_of((struct address_space **)node,
> +                                  struct page, mapping);
> +             node = node->next;
> +             page->mapping = NULL;
> +             __free_hugepage(page_hstate(page), page);
> +
> +             cond_resched();
> +     }
> +}
> +static DECLARE_WORK(hpage_update_work, update_hpage_vmemmap_workfn);

I wonder if this should be moved to hugetlb_vmemmap.c
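
That way hugetlb.c would not need to know about the llist/workqueue
machinery at all; roughly something like (just a sketch, the helper
name is made up):

	/* hugetlb_vmemmap.h */
	void hugetlb_vmemmap_defer_free(struct hstate *h, struct page *page);

	/* hugetlb.c */
	static void update_and_free_page(struct hstate *h, struct page *page)
	{
		/* name made up; whatever the "hstate has vmemmap pages" check is */
		if (free_vmemmap_pages_per_hpage(h))
			hugetlb_vmemmap_defer_free(h, page);
		else
			__free_hugepage(h, page);
	}

with the freelist, the work item and update_hpage_vmemmap_workfn() living
in hugetlb_vmemmap.c (__free_hugepage() would then need to be visible to
that file, of course).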

> +/*
> + * This is where the call to allocate vmemmmap pages will be inserted.
> + */

I think this should go in the changelog.

> +static void __free_hugepage(struct hstate *h, struct page *page)
> +{
> +     int i;
> +
>       for (i = 0; i < pages_per_huge_page(h); i++) {
>               page[i].flags &= ~(1 << PG_locked | 1 << PG_error |
>                               1 << PG_referenced | 1 << PG_dirty |
> @@ -1313,13 +1377,17 @@ static void update_and_free_page(struct hstate *h, struct page *page)
>       set_page_refcounted(page);
>       if (hstate_is_gigantic(h)) {
>               /*
> -              * Temporarily drop the hugetlb_lock, because
> -              * we might block in free_gigantic_page().
> +              * Temporarily drop the hugetlb_lock only when this type of
> +              * HugeTLB page does not support vmemmap optimization (which
> +              * context do not hold the hugetlb_lock), because we might
> +              * block in free_gigantic_page().

"
 /*
  * Temporarily drop the hugetlb_lock, because we might block
  * in free_gigantic_page(). Only drop it in case the vmemmap
  * optimization is disabled, since that context does not hold
  * the lock.
  */
" ?

 
Oscar Salvador
SUSE L3
