4.9-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Mel Gorman <mgor...@techsingularity.net>

commit a6de734bc002fe2027ccc074fbbd87d72957b7a4 upstream.

Vlastimil Babka pointed out that commit 479f854a207c ("mm, page_alloc:
defer debugging checks of pages allocated from the PCP") will allow the
per-cpu list counter to be out of sync with the per-cpu list contents if
a struct page is corrupted.

The consequence is an infinite loop when free_pcppages_bulk() tries to fully
drain the per-cpu lists: every list is already empty, but the count is still
positive, so the drain never terminates.  The infinite loop occurs here:

                do {
                        batch_free++;
                        if (++migratetype == MIGRATE_PCPTYPES)
                                migratetype = 0;
                        list = &pcp->lists[migratetype];
                } while (list_empty(list));

What the user sees is a bad page warning followed by a soft lockup with
interrupts disabled in free_pcppages_bulk().

This patch keeps the accounting in sync.
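
As a minimal, standalone sketch (ordinary userspace C, not kernel code) of
the accounting problem: a refill step pulls a requested number of pages from
a source, a debugging check rejects some of them, and the caller bumps the
per-cpu count by the requested number instead of by the number that actually
made it onto the list.  The names refill, page_looks_bad, pcp_count and
list_len below are illustrative only; just check_pcp_refill, rmqueue_bulk and
the i/alloced distinction come from the patch itself.

  #include <stdio.h>
  #include <stdbool.h>

  static bool page_looks_bad(int page)
  {
          /* Models check_pcp_refill(): reject every third "page". */
          return (page % 3) == 0;
  }

  static int refill(int want, int *list_len)
  {
          int i, alloced = 0;

          for (i = 0; i < want; i++) {
                  if (page_looks_bad(i))
                          continue;  /* removed from the source, never listed */
                  (*list_len)++;
                  alloced++;
          }
          /* The pre-fix behaviour would be "return i;" here, which
           * overstates how many pages reached the list. */
          return alloced;
  }

  int main(void)
  {
          int list_len = 0;
          int pcp_count = refill(9, &list_len);

          /* With the fix the counter matches the list, so a drain terminates. */
          printf("pcp_count=%d list_len=%d\n", pcp_count, list_len);
          return 0;
  }

With the pre-fix "return i;", pcp_count would read 9 while only 6 pages sit
on the list, which is exactly the state that makes free_pcppages_bulk() spin.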

Fixes: 479f854a207c ("mm, page_alloc: defer debugging checks of pages allocated from the PCP")
Link: http://lkml.kernel.org/r/20161202112951.23346-2-mgor...@techsingularity.net
Signed-off-by: Mel Gorman <mgor...@suse.de>
Acked-by: Vlastimil Babka <vba...@suse.cz>
Acked-by: Michal Hocko <mho...@suse.com>
Acked-by: Hillf Danton <hillf...@alibaba-inc.com>
Cc: Christoph Lameter <c...@linux.com>
Cc: Johannes Weiner <han...@cmpxchg.org>
Cc: Jesper Dangaard Brouer <bro...@redhat.com>
Cc: Joonsoo Kim <iamjoonsoo....@lge.com>
Signed-off-by: Andrew Morton <a...@linux-foundation.org>
Signed-off-by: Linus Torvalds <torva...@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gre...@linuxfoundation.org>

---
 mm/page_alloc.c |   12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2192,7 +2192,7 @@ static int rmqueue_bulk(struct zone *zon
                        unsigned long count, struct list_head *list,
                        int migratetype, bool cold)
 {
-       int i;
+       int i, alloced = 0;
 
        spin_lock(&zone->lock);
        for (i = 0; i < count; ++i) {
@@ -2217,13 +2217,21 @@ static int rmqueue_bulk(struct zone *zon
                else
                        list_add_tail(&page->lru, list);
                list = &page->lru;
+               alloced++;
                if (is_migrate_cma(get_pcppage_migratetype(page)))
                        __mod_zone_page_state(zone, NR_FREE_CMA_PAGES,
                                              -(1 << order));
        }
+
+       /*
+        * i pages were removed from the buddy list even if some leak due
+        * to check_pcp_refill failing so adjust NR_FREE_PAGES based
+        * on i. Do not confuse with 'alloced' which is the number of
+        * pages added to the pcp list.
+        */
        __mod_zone_page_state(zone, NR_FREE_PAGES, -(i << order));
        spin_unlock(&zone->lock);
-       return i;
+       return alloced;
 }
 
 #ifdef CONFIG_NUMA

