All page table buffers are pre-mapped and can be accessed through __va(). Remove the check that is no longer needed.
Signed-off-by: Yinghai Lu <ying...@kernel.org>
Cc: Konrad Rzeszutek Wilk <konrad.w...@oracle.com>
Cc: Jeremy Fitzhardinge <jer...@goop.org>
---
 arch/x86/xen/mmu.c |    8 ++------
 1 files changed, 2 insertions(+), 6 deletions(-)

diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index 7a769b7..9c0956c 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -1412,13 +1412,9 @@ static pte_t __init mask_rw_pte(pte_t *ptep, pte_t pte)
 
 	/*
 	 * If the new pfn is within the range of the newly allocated
-	 * kernel pagetable, and it isn't being mapped into an
-	 * early_ioremap fixmap slot as a freshly allocated page, make sure
-	 * it is RO.
+	 * kernel pagetable, make sure it is RO.
 	 */
-	if (((!is_early_ioremap_ptep(ptep) &&
-	      pfn >= pgt_buf_start && pfn < pgt_buf_top)) ||
-	    (is_early_ioremap_ptep(ptep) && pfn != (pgt_buf_end - 1)))
+	if (pfn >= pgt_buf_start && pfn < pgt_buf_top)
 		pte = pte_wrprotect(pte);
 
 	return pte;
-- 
1.7.7