On Mon, Mar 14, 2016 at 10:33:01AM +0000, Matt Fleming wrote:
> Scott reports that with the new separate EFI page tables he's seeing
> the following error on boot, caused by setting reserved bits in the
> page table structures (fault code is PF_RSVD | PF_PROT),
> 
>   swapper/0: Corrupted page table at address 17b102020
>   PGD 17b0e5063 PUD 1400000e3
>   Bad pagetable: 0009 [#1] SMP
> 
> On first inspection the PUD is using a 1GB page size (_PAGE_PSE) and
> looks fine but that's only true if support for 1GB PUD pages
> ("pdpe1gb") is present in the cpu.
> 
> Scott's Intel Celeron N2820 does not have that feature and so the
> _PAGE_PSE bit is reserved. Fix this issue by making the 1GB mapping
> code conditional on "cpu_has_gbpages".
> 
> This issue didn't come up in the past because the required mapping for
> the faulting address (0x17b102020) will already have been setup by the
> kernel in early boot before we got to efi_map_regions(), but we no
> longer use the standard kernel page tables during EFI calls.
> 
> Reported-by: Scott Ashcroft <scott.ashcr...@talk21.com>
> Tested-by: Scott Ashcroft <scott.ashcr...@talk21.com>
> Cc: Ard Biesheuvel <ard.biesheu...@linaro.org>
> Cc: Ben Hutchings <b...@decadent.org.uk>
> Cc: Borislav Petkov <b...@alien8.de>
> Cc: Brian Gerst <brge...@gmail.com>
> Cc: Denys Vlasenko <dvlas...@redhat.com>
> Cc: "H. Peter Anvin" <h...@zytor.com>
> Cc: Linus Torvalds <torva...@linux-foundation.org>
> Cc: Maarten Lankhorst <maarten.lankho...@linux.intel.com>
> Cc: Matthew Garrett <mj...@srcf.ucam.org>
> Cc: Peter Zijlstra <pet...@infradead.org>
> Cc: Raphael Hertzog <hert...@debian.org>
> Cc: Roger Shimizu <rogershim...@gmail.com>
> Cc: Thomas Gleixner <t...@linutronix.de>
> Cc: linux-...@vger.kernel.org
> Signed-off-by: Matt Fleming <m...@codeblueprint.co.uk>
> ---
>  arch/x86/mm/pageattr.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
> index 14c38ae80409..fcf8e290740a 100644
> --- a/arch/x86/mm/pageattr.c
> +++ b/arch/x86/mm/pageattr.c
> @@ -1055,7 +1055,7 @@ static int populate_pud(struct cpa_data *cpa, unsigned long start, pgd_t *pgd,
>       /*
>        * Map everything starting from the Gb boundary, possibly with 1G pages
>        */
> -     while (end - start >= PUD_SIZE) {
> +     while (cpu_has_gbpages && end - start >= PUD_SIZE) {
>               set_pud(pud, __pud(cpa->pfn << PAGE_SHIFT | _PAGE_PSE |
>                                  massage_pgprot(pud_pgprot)));
>  
> --

Yap, looks ok to me as a minimal fix:

Acked-by: Borislav Petkov <b...@suse.de>

As a future cleanup, I'd carve out the sections of populate_pud() which
map the stuff up to the Gb boundary and the trailing leftover into a
helper, say, __populate_pud_chunk() or so which goes and populates with
smaller sizes, i.e., 2M and 4K and the lower levels.

This'll make populate_pud() more readable too.
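To illustrate the idea, the restructured function might look something
like the sketch below. This is untested pseudocode, not a patch:
__populate_pud_chunk() is the hypothetical helper name suggested above,
its signature is invented for illustration, and the elided parts would
need to be filled in against the real arch/x86/mm/pageattr.c:

```c
/*
 * Sketch only (hypothetical helper, untested): populate [start, end)
 * with 2M/4K mappings via the lower page-table levels, for ranges
 * that cannot (or should not) use 1G pages.
 */
static long __populate_pud_chunk(struct cpa_data *cpa, unsigned long start,
				 unsigned long end, pud_t *pud,
				 pgprot_t pgprot);

static long populate_pud(struct cpa_data *cpa, unsigned long start,
			 pgd_t *pgd, pgprot_t pgprot)
{
	...
	/* Head up to the Gb boundary: smaller page sizes via the helper */
	cur_pages = __populate_pud_chunk(cpa, start, pre_end, pud, pgprot);

	/* Middle: 1G pages, but only if the CPU actually supports them */
	while (cpu_has_gbpages && end - start >= PUD_SIZE) {
		set_pud(pud, __pud(cpa->pfn << PAGE_SHIFT | _PAGE_PSE |
				   massage_pgprot(pud_pgprot)));
		start	  += PUD_SIZE;
		cpa->pfn  += PUD_SIZE >> PAGE_SHIFT;
		cur_pages += PUD_SIZE >> PAGE_SHIFT;
		pud++;
	}

	/* Trailing leftover: again 2M/4K via the helper */
	cur_pages += __populate_pud_chunk(cpa, start, end, pud, pgprot);
	...
}
```

That way the "map with smaller sizes" logic lives in one place instead of
being duplicated for the head and tail of the range, and the 1G loop in
the middle carries the cpu_has_gbpages check visibly.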

Thanks.

-- 
Regards/Gruss,
    Boris.

ECO tip #101: Trim your mails when you reply.
