On Fri 02-12-16 00:22:43, Mel Gorman wrote:
> Vlastimil Babka pointed out that commit 479f854a207c ("mm, page_alloc:
> defer debugging checks of pages allocated from the PCP") will allow the
> per-cpu list counter to be out of sync with the per-cpu list contents
> if a struct page is corrupted. This patch keeps the accounting in sync.
>
> Fixes: 479f854a207c ("mm, page_alloc: defer debugging checks of pages allocated from the PCP")
> Signed-off-by: Mel Gorman <mgor...@suse.de>
> cc: sta...@vger.kernel.org [4.7+]

I am trying to think about what would happen if we went out of sync,
and I cannot spot a problem. Vlastimil has mentioned something about
free_pcppages_bulk looping forever, but I cannot see that happening
right now. So why is this worth a stable backport?
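
For reference, the way I read the bug: a page failing
check_pcp_refill() is skipped via continue, but 'i' still counts it,
and the caller does pcp->count += rmqueue_bulk(...), so pcp->count can
drift above the real list length. The loop Vlastimil presumably means
is the round-robin list hunt in free_pcppages_bulk(), roughly this
(paraphrased from memory, not verbatim):

	while (count) {
		struct page *page;
		struct list_head *list;

		/*
		 * Pick the next non-empty pcp list. If pcp->count
		 * over-reports what is actually on the lists, count
		 * stays non-zero after they are drained and this
		 * do/while spins on empty lists forever.
		 */
		do {
			batch_free++;
			if (++migratetype == MIGRATE_PCPTYPES)
				migratetype = 0;
			list = &pcp->lists[migratetype];
		} while (list_empty(list));
		...
	}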

Anyway, the patch looks correct.
Acked-by: Michal Hocko <mho...@suse.com>

> ---
>  mm/page_alloc.c | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 6de9440e3ae2..777ed59570df 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -2192,7 +2192,7 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
>                       unsigned long count, struct list_head *list,
>                       int migratetype, bool cold)
>  {
> -     int i;
> +     int i, alloced = 0;
>  
>       spin_lock(&zone->lock);
>       for (i = 0; i < count; ++i) {
> @@ -2217,13 +2217,14 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
>               else
>                       list_add_tail(&page->lru, list);
>               list = &page->lru;
> +             alloced++;
>               if (is_migrate_cma(get_pcppage_migratetype(page)))
>                       __mod_zone_page_state(zone, NR_FREE_CMA_PAGES,
>                                             -(1 << order));
>       }
>       __mod_zone_page_state(zone, NR_FREE_PAGES, -(i << order));

I guess this deserves a comment (i vs. alloced is confusing and I bet
somebody will come up with a cleanup...). We leak corrupted pages
intentionally, so we should uncharge them from NR_FREE_PAGES.
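
E.g. something along these lines, perhaps (the wording is just a
suggestion):

	/*
	 * 'i' pages were removed from the buddy list even if some of
	 * them leaked due to failed debugging checks, so all 'i' must
	 * be subtracted from NR_FREE_PAGES here. 'alloced' is only the
	 * number of pages actually put on the pcp list.
	 */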

>       spin_unlock(&zone->lock);
> -     return i;
> +     return alloced;
>  }
>  
>  #ifdef CONFIG_NUMA
> -- 
> 2.10.2
> 

-- 
Michal Hocko
SUSE Labs
