On Fri, 2019-05-17 at 13:29:58 UTC, Michael Ellerman wrote:
> From: "Aneesh Kumar K.V" <aneesh.ku...@linux.ibm.com>
>
> Accesses by userspace to random addresses outside the user or kernel
> address range will generate an SLB fault. When we handle that fault we
> classify the effective address into several classes, eg. user, kernel
> linear, kernel virtual etc.
>
> For addresses that are completely outside of any valid range, we
> should not insert an SLB entry at all, and instead immediately take an
> exception.
>
> In the past this was handled in two ways. Firstly we would check the
> top nibble of the address (using REGION_ID(ea)) and that would tell us
> if the address was user (0), kernel linear (c), kernel virtual (d), or
> vmemmap (f). If the address didn't match any of these it was invalid.
>
> Then for each type of address we would do a secondary check. For the
> user region we would check against H_PGTABLE_RANGE, for kernel linear
> we would mask the top nibble of the address and then check the address
> against MAX_PHYSMEM_BITS.
>
> As part of commit 0034d395f89d ("powerpc/mm/hash64: Map all the kernel
> regions in the same 0xc range") we replaced REGION_ID() with
> get_region_id() and changed the masking of the top nibble to only mask
> the top two bits, which introduced a bug.
>
> Addresses less than (4 << 60) are still handled correctly, they are
> either less than (1 << 60), in which case they are subject to the
> H_PGTABLE_RANGE check, or they are correctly checked against
> MAX_PHYSMEM_BITS.
>
> However, addresses from (4 << 60) to ((0xc << 60) - 1) are incorrectly
> treated as kernel linear addresses in get_region_id(). Then the top
> two bits are cleared by EA_MASK in slb_allocate_kernel() and the
> address is checked against MAX_PHYSMEM_BITS, which it passes due to
> the masking. The end result is we incorrectly insert SLB entries for
> those addresses.
>
> That is not actually catastrophic; having inserted the SLB entry we
> will then go on to take a page fault for the address, and at that
> point we detect the problem and report it as a bad fault.
>
> Still, we should not be inserting those entries, or treating them as
> kernel linear addresses in the first place. So fix get_region_id() to
> detect addresses in that range and return an invalid region id, which
> we can use to not insert an SLB entry and directly report an
> exception.
>
> Fixes: 0034d395f89d ("powerpc/mm/hash64: Map all the kernel regions in
> the same 0xc range")
> Signed-off-by: Aneesh Kumar K.V <aneesh.ku...@linux.ibm.com>
> [mpe: Drop change to EA_MASK for now, rewrite change log]
> Signed-off-by: Michael Ellerman <m...@ellerman.id.au>
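For anyone following along without the tree handy, here is a rough
userspace sketch of the classification the change log describes. It is
illustrative only: the region ID constants, the helper's structure and
the name get_region_id_sketch() are simplified assumptions, not the
actual powerpc code; only the idea that the fixed get_region_id()
rejects top nibbles outside the user and 0xc kernel ranges comes from
the commit.

#include <stdio.h>

/* Simplified stand-ins, not the real powerpc region IDs. */
#define USER_REGION_ID        0
#define LINEAR_MAP_REGION_ID  1
#define INVALID_REGION_ID    -1

static int get_region_id_sketch(unsigned long long ea)
{
	int id = ea >> 60;	/* top nibble selects the region */

	if (id == 0x0)
		return USER_REGION_ID;	/* user, still subject to H_PGTABLE_RANGE */

	/*
	 * The fix in a nutshell: any top nibble that is not the kernel
	 * 0xc range is invalid, so addresses from (4 << 60) up to
	 * ((0xc << 60) - 1) no longer fall through to the kernel linear
	 * checks and no SLB entry is inserted for them.
	 */
	if (id != 0xc)
		return INVALID_REGION_ID;

	/*
	 * The real code would further split the 0xc region into the
	 * linear map, vmalloc, IO and vmemmap sub-ranges here.
	 */
	return LINEAR_MAP_REGION_ID;
}

int main(void)
{
	/* Previously misclassified as kernel linear, now reported invalid. */
	unsigned long long bogus = 5ULL << 60;

	printf("region id for 0x%llx: %d\n", bogus, get_region_id_sketch(bogus));
	return 0;
}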
Applied to powerpc fixes.

https://git.kernel.org/powerpc/c/c179976cf4cbd2e65f29741d5bc07ccf

cheers