On Wed, 3 Jan 2018, Borislav Petkov wrote:
> On Wed, Jan 03, 2018 at 11:16:48AM +0200, Meelis Roos wrote:
> > ---[ ESPfix Area ]---
> > 0xffffff0000000000-0xffffff1800000000   96G                  pud
> > 0xffffff1800000000-0xffffff1800009000   36K                  pte
> > 0xffffff1800009000-0xffffff180000a000    4K  ro           NX pte
> > 0xffffff180000a000-0xffffff1800019000   60K                  pte
> > 0xffffff1800019000-0xffffff180001a000    4K  ro           NX pte
> > 0xffffff180001a000-0xffffff1800029000   60K                  pte
> > 0xffffff1800029000-0xffffff180002a000    4K  ro           NX pte
> > 0xffffff180002a000-0xffffff1800039000   60K                  pte
> > 0xffffff1800039000-0xffffff180003a000    4K  ro           NX pte
> > 0xffffff180003a000-0xffffff1800049000   60K                  pte
> > 0xffffff1800049000-0xffffff180004a000    4K  ro           NX pte
> > 0xffffff180004a000-0xffffff1800059000   60K                  pte
> > 0xffffff1800059000-0xffffff180005a000    4K  ro           NX pte
> > 0xffffff180005a000-0xffffff1800069000   60K                  pte
> > 0xffffff1800069000-0xffffff180006a000    4K  ro           NX pte
> > 0xffffff180006a000-0xffffff1800079000   60K                  pte
> > ... 131059 entries skipped ...
> > ---[ High Kernel Mapping ]---
> > 0xffffffff80000000-0xffffffff81e00000   30M                  pmd
> > 0xffffffff81e00000-0xffffffff82000000    2M  ro  PSE  GLB x  pmd
>
> Ha, this must be it. What I said yesterday about the guard hole
> hypervisor range was wrong because we're looking at VA slice [47:12] and
> this one matches:
That's the entry area, which is mapped into kernel _AND_ user space. Now
that's special because we switch CR3 while we are executing there. And
this one is:

   0xffffffff81e00000-0xffffffff82000000    2M  ro  PSE  GLB x  pmd

and the one we switch to is:

   0xffffffff81000000-0xffffffff82000000   16M  ro  PSE       x pmd

Meelis, does the patch below fix it for you?

Thanks,

	tglx

8<-------------------

--- a/arch/x86/mm/pti.c
+++ b/arch/x86/mm/pti.c
@@ -367,7 +367,8 @@ static void __init pti_setup_espfix64(vo
 static void __init pti_clone_entry_text(void)
 {
 	pti_clone_pmds((unsigned long) __entry_text_start,
-		       (unsigned long) __irqentry_text_end, _PAGE_RW);
+		       (unsigned long) __irqentry_text_end,
+		       _PAGE_RW | _PAGE_GLOBAL);
 }

 /*