On Thu, 20 Dec 2018 14:21:32 -0800
Jonathan Lemon <jonathan.le...@gmail.com> wrote:

> Return pfmemalloc pages back to the page allocator, instead of holding them
> in the page pool.
> 
> Signed-off-by: Jonathan Lemon <jonathan.le...@gmail.com>
> ---
>  net/core/page_pool.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> index 43a932cb609b..364b893be66f 100644
> --- a/net/core/page_pool.c
> +++ b/net/core/page_pool.c
> @@ -233,7 +233,7 @@ void __page_pool_put_page(struct page_pool *pool,
>        *
>        * refcnt == 1 means page_pool owns page, and can recycle it.
>        */
> -     if (likely(page_ref_count(page) == 1)) {
> +     if (likely(page_ref_count(page) == 1 && !page_is_pfmemalloc(page))) {
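For context, the recycle decision this patch changes sits in
__page_pool_put_page(); paraphrased below (my sketch, not an exact
quote of the tree), a page that fails the test falls through to the
put_page() path and is handed back to the page allocator:

if (likely(page_ref_count(page) == 1 && !page_is_pfmemalloc(page))) {
	/* Fast paths: lockless per-CPU cache, then the ptr_ring */
	if (allow_direct && in_serving_softirq() &&
	    __page_pool_recycle_direct(page, pool))
		return;

	if (!__page_pool_recycle_into_ring(pool, page))
		__page_pool_return_page(pool, page); /* ring full */
	return;
}
/* Not recyclable: release the DMA mapping (if page_pool owns it)
 * and return the page to the page allocator.
 */
__page_pool_clean_page(pool, page);
put_page(page);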

I took a closer look at the page_pool issue of recycling pages from the
emergency reserve (pfmemalloc), and it actually cannot happen, because
page_pool does not use the __GFP_MEMALLOC gfp_t flag. Thus, page_pool
is not allowed to get pages from the emergency reserve in the first
place (unless ksoftirqd's current->flags has PF_MEMALLOC set, which I
don't think it does).

See: page_pool_dev_alloc_pages() compared to __dev_alloc_pages().
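
Concretely, the gfp masks differ roughly like this (simplified from
include/net/page_pool.h and include/linux/skbuff.h; the exact flags
vary a bit between kernel versions, so treat this as a sketch):

/* page_pool: no __GFP_MEMALLOC, so the emergency reserve is
 * off-limits (unless the calling task itself has PF_MEMALLOC).
 */
static inline struct page *page_pool_dev_alloc_pages(struct page_pool *pool)
{
	gfp_t gfp = (GFP_ATOMIC | __GFP_NOWARN);

	return page_pool_alloc_pages(pool, gfp);
}

/* __dev_alloc_pages(): explicitly adds __GFP_MEMALLOC, so drivers
 * using this helper can be handed pfmemalloc pages under pressure.
 */
static inline struct page *__dev_alloc_pages(gfp_t gfp_mask,
					     unsigned int order)
{
	gfp_mask |= __GFP_COMP | __GFP_MEMALLOC | __GFP_NOWARN;

	return alloc_pages_node(NUMA_NO_NODE, gfp_mask, order);
}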

The doc for __GFP_MEMALLOC says:
/* %__GFP_MEMALLOC allows access to all memory. This should only be used when
 * the caller guarantees the allocation will allow more memory to be freed
 * very shortly e.g. process exiting or swapping. Users either should
 * be the MM or co-ordinating closely with the VM (e.g. swap over NFS).
 */

With that description, I don't understand why we allow dev_alloc_pages()
to get emergency reserve (pfmemalloc) pages at all, as we store these in
an RX-ring queue (usual size 512-1024) that isn't used until N packets
later... Even if the pfmemalloc page is used as a signal to the network
stack to free other resources, that happens at a later point in time,
not "very shortly".
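
(To be concrete about the "signal": the driver marks the skb via
skb_propagate_pfmemalloc() when building it from such a page, and the
stack later drops the packet for any socket that isn't SOCK_MEMALLOC.
Roughly, as a paraphrase and not an exact quote:)

/* Driver Rx path: mark the skb if its page came from the reserves */
static inline void skb_propagate_pfmemalloc(struct page *page,
					    struct sk_buff *skb)
{
	if (page_is_pfmemalloc(page))
		skb->pfmemalloc = true;
}

/* Later, in sk_filter_trim_cap(): only SOCK_MEMALLOC sockets (the ones
 * helping to free memory, e.g. swap over NFS) may consume such skbs;
 * for every other socket the packet is dropped.
 */
if (skb_pfmemalloc(skb) && !sock_flag(sk, SOCK_MEMALLOC))
	return -ENOMEM;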

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer
