Updated for the comment change from Johannes.


From 2fd278b1ca6c3e260ad249808b62f671d8db5a7b Mon Sep 17 00:00:00 2001
From: Alex Shi <alex....@linux.alibaba.com>
Date: Thu, 5 Nov 2020 11:38:24 +0800
Subject: [PATCH v21 06/19] mm/rmap: stop store reordering issue on
 page->mapping

Hugh Dickins and Minchan Kim observed a long-standing issue that was
discussed here, but the fix mentioned there was never applied:
https://lore.kernel.org/lkml/20150504031722.GA2768@blaptop/
Store reordering may cause a problem in the following scenario:

        CPU 0                                           CPU1
   do_anonymous_page
        page_add_new_anon_rmap()
          page->mapping = anon_vma + PAGE_MAPPING_ANON
        lru_cache_add_inactive_or_unevictable()
          spin_lock(lruvec->lock)
          SetPageLRU()
          spin_unlock(lruvec->lock)
                                                /* page idle tracking judged
                                                 * it an LRU page, so pass the
                                                 * page to
                                                 * page_idle_clear_pte_refs
                                                 */
                                                page_idle_clear_pte_refs
                                                  rmap_walk
                                                    if PageAnon(page)
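
For reference, the reader side keys entirely off the low bit of
page->mapping: PageAnon() tests PAGE_MAPPING_ANON to tell an anon_vma
from a struct address_space. A minimal userspace sketch of that check
(struct page, PAGE_MAPPING_ANON and mapping_is_anon() are simplified
stand-ins here, not the kernel definitions):

  #include <stdint.h>
  #include <stdio.h>

  #define PAGE_MAPPING_ANON 0x1UL

  /* Simplified stand-in for the kernel's struct page. */
  struct page { uintptr_t mapping; };

  /* Modeled on PageAnon(): bit 0 set means page->mapping points to an
   * anon_vma, bit 0 clear means a struct address_space.  A reader that
   * observes an anon_vma with the bit clear would walk it as if it
   * were an address_space. */
  static int mapping_is_anon(const struct page *page)
  {
          uintptr_t m = *(const volatile uintptr_t *)&page->mapping;

          return (m & PAGE_MAPPING_ANON) != 0;
  }

  int main(void)
  {
          static long fake_anon_vma[4];   /* aligned, so bit 0 is clear */
          struct page page;

          page.mapping = (uintptr_t)fake_anon_vma | PAGE_MAPPING_ANON;
          printf("anon? %d\n", mapping_is_anon(&page));   /* anon? 1 */
          return 0;
  }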

Johannes gave a detailed example of how the store reordering could
cause trouble:
"The concern is that SetPageLRU may get reordered before the
'page->mapping' store. That would make CPU 1 observe page->mapping
after observing PageLRU set on the page.

1. anon_vma + PAGE_MAPPING_ANON

   That's the in-order scenario and is fine.

2. NULL

   That's possible if the page->mapping store gets reordered to occur
   after SetPageLRU. That's fine too because we check for it.

3. anon_vma without the PAGE_MAPPING_ANON bit

   That would be a problem and could lead to all kinds of undesirable
   behavior including crashes and data corruption.

   Is it possible? AFAICT the compiler is allowed to tear the store to
   page->mapping and I don't see anything that would prevent it.

That said, I also don't see how the reader testing PageLRU under the
lru_lock would prevent that in the first place. AFAICT we need that
WRITE_ONCE() around the page->mapping assignment."
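
To make case 3 concrete: for a plain (non-volatile) store, the
compiler is allowed to emit the tagged-pointer assignment in pieces.
Below is a sketch of the legal-but-torn code generation next to what
WRITE_ONCE() boils down to (again simplified stand-ins, not the kernel
macros):

  #include <stdint.h>

  #define PAGE_MAPPING_ANON 0x1UL

  struct page { uintptr_t mapping; };

  /* Plain store: the compiler may legally tear this into two stores,
   * e.g. the bare pointer first and the tag bit afterwards.  A reader
   * running between the two sees case 3 above: an anon_vma without
   * PAGE_MAPPING_ANON. */
  void plain_store_may_tear(struct page *page, uintptr_t anon_vma)
  {
          page->mapping = anon_vma;               /* torn step 1 */
          page->mapping |= PAGE_MAPPING_ANON;     /* torn step 2 */
  }

  /* WRITE_ONCE() expands to a volatile store in the kernel; a volatile
   * access must be performed exactly once and in full, so the reader
   * can only observe NULL or the complete tagged pointer. */
  void write_once_store(struct page *page, uintptr_t anon_vma)
  {
          *(volatile uintptr_t *)&page->mapping =
                  anon_vma | PAGE_MAPPING_ANON;
  }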

Signed-off-by: Alex Shi <alex....@linux.alibaba.com>
Cc: Johannes Weiner <han...@cmpxchg.org>
Cc: Andrew Morton <a...@linux-foundation.org>
Cc: Hugh Dickins <hu...@google.com>
Cc: Matthew Wilcox <wi...@infradead.org>
Cc: Minchan Kim <minc...@kernel.org>
Cc: Vladimir Davydov <vdavydov....@gmail.com>
Cc: linux-kernel@vger.kernel.org
Cc: linux...@kvack.org
---
 mm/rmap.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index 1b84945d655c..380c6b9956c2 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1054,8 +1054,14 @@ static void __page_set_anon_rmap(struct page *page,
        if (!exclusive)
                anon_vma = anon_vma->root;
 
+       /*
+        * page_idle does a lockless/optimistic rmap scan on page->mapping.
+        * Make sure the compiler doesn't split the stores of anon_vma and
+        * the PAGE_MAPPING_ANON type identifier, otherwise the rmap code
+        * could mistake the mapping for a struct address_space and crash.
+        */
        anon_vma = (void *) anon_vma + PAGE_MAPPING_ANON;
-       page->mapping = (struct address_space *) anon_vma;
+       WRITE_ONCE(page->mapping, (struct address_space *) anon_vma);
        page->index = linear_page_index(vma, address);
 }
 
-- 
1.8.3.1
