On Fri, Jun 13, 2014 at 05:26:17PM +0100, Chris Wilson wrote:
> When using remap_pfn_range() from a fault handler, we are exposed to
> races between concurrent faults. Rather than hitting a BUG, report the
> error back to the caller, like vm_insert_pfn().
> 
> Signed-off-by: Chris Wilson <ch...@chris-wilson.co.uk>
> Cc: Andrew Morton <a...@linux-foundation.org>
> Cc: "Kirill A. Shutemov" <kirill.shute...@linux.intel.com>
> Cc: Peter Zijlstra <pet...@infradead.org>
> Cc: Rik van Riel <r...@redhat.com>
> Cc: Mel Gorman <mgor...@suse.de>
> Cc: Cyrill Gorcunov <gorcu...@gmail.com>
> Cc: Johannes Weiner <han...@cmpxchg.org>
> Cc: linux...@kvack.org
> ---
>  mm/memory.c | 8 ++++++--
>  1 file changed, 6 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/memory.c b/mm/memory.c
> index 037b812a9531..6603a9e6a731 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -2306,19 +2306,23 @@ static int remap_pte_range(struct mm_struct *mm, pmd_t *pmd,
>  {
>       pte_t *pte;
>       spinlock_t *ptl;
> +     int ret = 0;
>  
>       pte = pte_alloc_map_lock(mm, pmd, addr, &ptl);
>       if (!pte)
>               return -ENOMEM;
>       arch_enter_lazy_mmu_mode();
>       do {
> -             BUG_ON(!pte_none(*pte));
> +             if (!pte_none(*pte)) {
> +                     ret = -EBUSY;
> +                     break;
> +             }
>               set_pte_at(mm, addr, pte, pte_mkspecial(pfn_pte(pfn, prot)));
>               pfn++;
>       } while (pte++, addr += PAGE_SIZE, addr != end);
>       arch_leave_lazy_mmu_mode();
>       pte_unmap_unlock(pte - 1, ptl);

Oh. The -EBUSY path also needs to increment pte before breaking out, or
pte_unmap_unlock(pte - 1, ptl) will unmap the wrong pte.
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/intel-gfx