Nicholas Piggin <npig...@gmail.com> writes:
> pfn_pte is never given a pte above the addressable physical memory
> limit, so the masking is redundant. In case of a software bug, it
> is not obviously better to silently truncate the pfn than to corrupt
> the pte (either one will result in memory corruption or crashes),
> so there is no reason to add this to the fast path.
>
> Add VM_BUG_ON to catch cases where the pfn is invalid. These would
> catch the create_section_mapping bug fixed by a previous commit.
>
> [16885.256466] ------------[ cut here ]------------
> [16885.256492] kernel BUG at arch/powerpc/include/asm/book3s/64/pgtable.h:612!
> cpu 0x0: Vector: 700 (Program Check) at [c0000000ee0a36d0]
>     pc: c000000000080738: __map_kernel_page+0x248/0x6f0
>     lr: c000000000080ac0: __map_kernel_page+0x5d0/0x6f0
>     sp: c0000000ee0a3960
>    msr: 9000000000029033
>   current = 0xc0000000ec63b400
>   paca    = 0xc0000000017f0000   irqmask: 0x03   irq_happened: 0x01
>     pid   = 85, comm = sh
> kernel BUG at arch/powerpc/include/asm/book3s/64/pgtable.h:612!
> Linux version 5.3.0-rc1-00001-g0fe93e5f3394
> enter ? for help
> [c0000000ee0a3a00] c000000000d37378 create_physical_mapping+0x260/0x360
> [c0000000ee0a3b10] c000000000d370bc create_section_mapping+0x1c/0x3c
> [c0000000ee0a3b30] c000000000071f54 arch_add_memory+0x74/0x130
>
> Reviewed-by: Aneesh Kumar K.V <aneesh.ku...@linux.ibm.com>
> Signed-off-by: Nicholas Piggin <npig...@gmail.com>
> ---
>  arch/powerpc/include/asm/book3s/64/pgtable.h | 6 ++++--
>  1 file changed, 4 insertions(+), 2 deletions(-)
>
> diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
> index 8308f32e9782..8e47fb85dfa6 100644
> --- a/arch/powerpc/include/asm/book3s/64/pgtable.h
> +++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
> @@ -608,8 +608,10 @@ static inline bool pte_access_permitted(pte_t pte, bool write)
>   */
>  static inline pte_t pfn_pte(unsigned long pfn, pgprot_t pgprot)
>  {
> -	return __pte((((pte_basic_t)(pfn) << PAGE_SHIFT) & PTE_RPN_MASK) |
> -		     pgprot_val(pgprot));
> +	VM_BUG_ON(pfn >> (64 - PAGE_SHIFT));
> +	VM_BUG_ON((pfn << PAGE_SHIFT) & ~PTE_RPN_MASK);
> +
> +	return __pte(((pte_basic_t)pfn << PAGE_SHIFT) | pgprot_val(pgprot));
>  }
>
>  static inline unsigned long pte_pfn(pte_t pte)
> --
> 2.22.0
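For anyone following along outside the kernel tree, the two VM_BUG_ON conditions can be sketched as a plain userspace predicate. This is a minimal illustration only: `PAGE_SHIFT` and `PTE_RPN_MASK` below are assumed stand-in values (64K pages, an invented RPN field), not the real book3s/64 definitions, and `pfn_is_valid` is a hypothetical helper name.

```c
#include <stdint.h>

/* Assumed stand-ins for the kernel constants, for illustration only.
 * The real values come from arch/powerpc/include/asm/book3s/64/. */
#define PAGE_SHIFT   16                      /* 64K pages */
#define PTE_RPN_MASK 0x01fffffffffff000ULL   /* invented RPN field: bits 12..56 */

/* Mirrors the patch's two checks: a pfn is rejected if shifting it by
 * PAGE_SHIFT would overflow 64 bits, or if the resulting physical
 * address sets bits outside the pte's RPN field. */
static int pfn_is_valid(uint64_t pfn)
{
	if (pfn >> (64 - PAGE_SHIFT))
		return 0;	/* pfn << PAGE_SHIFT would lose high bits */
	if ((pfn << PAGE_SHIFT) & ~PTE_RPN_MASK)
		return 0;	/* address falls outside the RPN field */
	return 1;
}
```

The point of the patch is exactly this ordering of outcomes: instead of silently masking an out-of-range pfn into a wrong-but-plausible pte (which corrupts memory later, far from the bug), an invalid pfn trips a VM_BUG_ON at the call site, as in the create_section_mapping oops quoted above.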