On 6/8/21 3:12 PM, Kirill A. Shutemov wrote:
On Tue, Jun 08, 2021 at 01:22:23PM +0530, Aneesh Kumar K.V wrote:

Hi Hugh,

Hugh Dickins <hu...@google.com> writes:

On Mon, 7 Jun 2021, Aneesh Kumar K.V wrote:

CPU 1                           CPU 2                                   CPU 3

mremap(old_addr, new_addr)      page_shrinker/try_to_unmap_one

mmap_write_lock_killable()

                                addr = old_addr
                                lock(pte_ptl)
lock(pmd_ptl)
pmd = *old_pmd
pmd_clear(old_pmd)
flush_tlb_range(old_addr)

*new_pmd = pmd
                                                                        *new_addr = 10; and fills
                                                                        TLB with new addr and old pfn

unlock(pmd_ptl)
                                ptep_clear_flush()
                                old pfn is free.
                                                                        Stale TLB entry

Fix this race by holding the pmd lock during pageout. This still doesn't handle
the race between MOVE_PUD and pageout.

Fixes: 2c91bd4a4e2e ("mm: speed up mremap by 20x on large regions")
Link: https://lore.kernel.org/linux-mm/CAHk-=wgxvr04ebntxqfevontwnp6fdm+oj5vauqxp3s-huw...@mail.gmail.com
Signed-off-by: Aneesh Kumar K.V <aneesh.ku...@linux.ibm.com>

This seems very wrong to me, to require another level of locking in the
rmap lookup, just to fix some new pagetable games in mremap.

But Linus asked "Am I missing something?": neither of you have mentioned
mremap's take_rmap_locks(), so I hope that already meets your need.  And
if it needs to be called more often than before (see "need_rmap_locks"),
that's probably okay.
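
For reference, take_rmap_locks() in mm/mremap.c is, roughly, as of
kernels in this timeframe (treat this as a sketch rather than an exact
quote):

static void take_rmap_locks(struct vm_area_struct *vma)
{
	if (vma->vm_file)
		i_mmap_lock_write(vma->vm_file->f_mapping);
	if (vma->anon_vma)
		anon_vma_lock_write(vma->anon_vma);
}

and "need_rmap_locks" is what copy_vma()/move_vma() use to record
whether an rmap walk could visit the new vma before the old one.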

Hugh


Thanks for reviewing the change. I missed the rmap lock in the code
path. How about the below change?

    mm/mremap: hold the rmap lock in write mode when moving page table entries.

    To avoid a race between the rmap walk and mremap, mremap does
    take_rmap_locks(). The lock is taken to ensure that the rmap walk
    doesn't miss a page table entry due to PTE moves via move_pagetables().
    The kernel further optimizes this locking: if the newly added vma will
    be found after the old vma during an rmap walk, the rmap lock is not
    taken, because the rmap walk finds the vmas in the same order, and a
    page table entry not found attached to the older vma will be found
    attached to the new vma, which is iterated later. The actual lifetime
    of the page is still controlled by the PTE lock.

    This patch updates the locking requirement to handle another race
    condition, explained below with optimized mremap::
    Optimized PMD move

        CPU 1                           CPU 2                                   CPU 3

        mremap(old_addr, new_addr)      page_shrinker/try_to_unmap_one

        mmap_write_lock_killable()

                                        addr = old_addr
                                        lock(pte_ptl)
        lock(pmd_ptl)
        pmd = *old_pmd
        pmd_clear(old_pmd)
        flush_tlb_range(old_addr)

        *new_pmd = pmd
                                                                                *new_addr = 10; and fills
                                                                                TLB with new addr and old pfn

        unlock(pmd_ptl)
                                        ptep_clear_flush()
                                        old pfn is free.
                                                                                Stale TLB entry
    Optimized PUD move

        CPU 1                           CPU 2                                   CPU 3

        mremap(old_addr, new_addr)      page_shrinker/try_to_unmap_one

        mmap_write_lock_killable()

                                        addr = old_addr
                                        lock(pte_ptl)
        lock(pud_ptl)
        pud = *old_pud
        pud_clear(old_pud)
        flush_tlb_range(old_addr)

        *new_pud = pud
                                                                                *new_addr = 10; and fills
                                                                                TLB with new addr and old pfn

        unlock(pud_ptl)
                                        ptep_clear_flush()
                                        old pfn is free.
                                                                                Stale TLB entry
    Both the above race conditions can be fixed if we force the mremap
    path to take the rmap lock.

Signed-off-by: Aneesh Kumar K.V <aneesh.ku...@linux.ibm.com>
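
The corresponding change in mm/mremap.c is essentially to stop honouring
the need_rmap_locks optimization for the optimized PMD/PUD moves and
always take the rmap locks around them. A sketch of the intent (not the
exact hunks posted; the surrounding context is illustrative):

--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ ... @@ unsigned long move_page_tables(...)
-			moved = move_pgt_entry(NORMAL_PMD, vma, old_addr,
-					       new_addr, old_pmd, new_pmd,
-					       need_rmap_locks);
+			/*
+			 * The rmap walk can race with the optimized PMD
+			 * move unless the rmap locks are held across it,
+			 * so pass true unconditionally.
+			 */
+			moved = move_pgt_entry(NORMAL_PMD, vma, old_addr,
+					       new_addr, old_pmd, new_pmd,
+					       true);

and similarly for the NORMAL_PUD and HPAGE_PMD cases.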

Looks like it should be enough to address the race.

It would be nice to understand the performance overhead of the
additional locking. Is it still faster to move a single PMD page table
under these locks compared to moving PTE page table entries without the
locks?


The improvements provided by optimized mremap, as captured in patch 11,
are large.

mremap HAVE_MOVE_PMD/PUD optimization time comparison for 1GB region:
1GB mremap - Source PTE-aligned, Destination PTE-aligned
  mremap time:      2292772ns
1GB mremap - Source PMD-aligned, Destination PMD-aligned
  mremap time:      1158928ns
1GB mremap - Source PUD-aligned, Destination PUD-aligned
  mremap time:        63886ns

With the additional locking, I haven't observed much change in those
numbers. But that could also be because there is no contention on these
locks when this test is run?
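
For reference, the numbers above come from the test referenced in patch
11. A minimal userspace sketch of the same kind of measurement is below;
it is hypothetical and, unlike that test, does not force PMD/PUD
alignment of the source and destination addresses:

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <sys/mman.h>

#define SIZE	(1024UL * 1024 * 1024)	/* 1GB region */

int main(void)
{
	struct timespec t0, t1;
	void *src, *dst_area, *dst;

	/* Source region; fault it in so the page tables being moved exist. */
	src = mmap(NULL, SIZE, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (src == MAP_FAILED)
		return 1;
	memset(src, 1, SIZE);

	/* Reserve a destination range; MREMAP_FIXED will replace it. */
	dst_area = mmap(NULL, SIZE, PROT_NONE,
			MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (dst_area == MAP_FAILED)
		return 1;

	clock_gettime(CLOCK_MONOTONIC, &t0);
	dst = mremap(src, SIZE, SIZE, MREMAP_MAYMOVE | MREMAP_FIXED, dst_area);
	clock_gettime(CLOCK_MONOTONIC, &t1);
	if (dst == MAP_FAILED)
		return 1;

	printf("mremap time: %ldns\n",
	       (t1.tv_sec - t0.tv_sec) * 1000000000L +
	       (t1.tv_nsec - t0.tv_nsec));
	return 0;
}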

-aneesh

