Christophe Leroy <christophe.le...@c-s.fr> writes:
> On 08/03/2019 at 02:16, Michael Ellerman wrote:
>> When KUAP is enabled we have logic to detect page faults that occur
>> outside of a valid user access region and are blocked by the AMR.
>>
>> What we don't have at the moment is logic to detect a fault *within* a
>> valid user access region that has been incorrectly blocked by the AMR.
>> This should never happen, but it can if we incorrectly save/restore
>> the AMR, or if the AMR was overwritten for some other reason.
>>
>> Currently if that happens we assume it's just a regular fault that
>> will be corrected by handling the fault normally, so we just return.
>> But there is nothing the fault handling code can do to fix it, so the
>> fault just happens again and we spin forever, leading to soft lockups.
>>
>> So add some logic to detect that case and WARN() if we ever see it.
>> Arguably it should be a BUG(), but it's more polite to fail the access
>> and let the kernel continue, rather than taking down the box. There
>> should be no data integrity issue with failing the fault rather than
>> BUG'ing, as we're just going to disallow an access that should have
>> been allowed.
>>
>> To make the code a little easier to follow, unroll the condition at
>> the end of bad_kernel_fault() and comment each case, before adding the
>> call to bad_kuap_fault().
>>
>> Signed-off-by: Michael Ellerman <m...@ellerman.id.au>
>> ---
>>
>> v5: New.
>>
>>  .../powerpc/include/asm/book3s/64/kup-radix.h | 12 +++++++++
>>  arch/powerpc/include/asm/kup.h                |  1 +
>>  arch/powerpc/mm/fault.c                       | 25 ++++++++++++++++---
>>  3 files changed, 35 insertions(+), 3 deletions(-)
>>
>> diff --git a/arch/powerpc/include/asm/book3s/64/kup-radix.h b/arch/powerpc/include/asm/book3s/64/kup-radix.h
>> index 3d60b04fc3f6..8d2ddc61e92e 100644
>> --- a/arch/powerpc/include/asm/book3s/64/kup-radix.h
>> +++ b/arch/powerpc/include/asm/book3s/64/kup-radix.h
>> @@ -100,6 +100,18 @@ static inline void prevent_user_access(void __user *to, const void __user *from,
>>  	set_kuap(AMR_KUAP_BLOCKED);
>>  }
>>
>> +static inline bool bad_kuap_fault(struct pt_regs *regs, bool is_write)
>> +{
>> +	if (mmu_has_feature(MMU_FTR_RADIX_KUAP) &&
>> +	    ((is_write && (regs->kuap & AMR_KUAP_BLOCK_WRITE)) ||
>> +	     (!is_write && (regs->kuap & AMR_KUAP_BLOCK_READ)))) {
>
> Should this { go on the previous line ?
Yeah I guess.

>> +		WARN(true, "Bug: %s fault blocked by AMR!", is_write ? "Write" : "Read");
>> +		return true;
>
> Could just be
> 	return WARN(true, ....)
>
> Or even
> 	return WARN(mmu_has_feature(MMU_FTR_RADIX_KUAP) &&
> 		    ((is_write && (regs->kuap & AMR_KUAP_BLOCK_WRITE)) ||
> 		     (!is_write && (regs->kuap & AMR_KUAP_BLOCK_READ))), ...);

That's not any more readable IMO.

>> diff --git a/arch/powerpc/include/asm/kup.h b/arch/powerpc/include/asm/kup.h
>> index f79d4d970852..ccbd2a249575 100644
>> --- a/arch/powerpc/include/asm/kup.h
>> +++ b/arch/powerpc/include/asm/kup.h
>> @@ -28,6 +28,7 @@ static inline void prevent_user_access(void __user *to, const void __user *from,
>>  					    unsigned long size) { }
>>  static inline void allow_read_from_user(const void __user *from, unsigned long size) {}
>>  static inline void allow_write_to_user(void __user *to, unsigned long size) {}
>> +static inline bool bad_kuap_fault(struct pt_regs *regs, bool is_write) { return false; }
>>  #endif /* CONFIG_PPC_KUAP */
>>
>>  static inline void prevent_read_from_user(const void __user *from, unsigned long size)
>> diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
>> index 463d1e9d026e..b5d3578d9f65 100644
>> --- a/arch/powerpc/mm/fault.c
>> +++ b/arch/powerpc/mm/fault.c
>> @@ -224,7 +225,7 @@ static int mm_fault_error(struct pt_regs *regs, unsigned long addr,
>>
>>  /* Is this a bad kernel fault ? */
>>  static bool bad_kernel_fault(struct pt_regs *regs, unsigned long error_code,
>> -			     unsigned long address)
>> +			     unsigned long address, bool is_write)
>
> We have regs, do we need is_write in addition ?

It comes from error_code, which we also have. But I don't see any harm in
passing it, as we already have it calculated and sitting in a GPR.

cheers
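
For reference, the derivation Michael alludes to is fault.c's existing
page_fault_is_write() helper, which tests the DSISR_ISSTORE ("access was a
store") bit of error_code. Below is a minimal standalone sketch of that
derivation, not the patch itself: the 0x02000000 value mirrors the kernel's
reg.h definition of the era but is reproduced here from memory, and the
main() harness exists only so the snippet compiles and runs on its own.

#include <stdbool.h>
#include <stdio.h>

/* Illustrative: the kernel defines DSISR_ISSTORE in
 * arch/powerpc/include/asm/reg.h; this bit marks a store-type fault. */
#define DSISR_ISSTORE	0x02000000

/* Same shape as the helper arch/powerpc/mm/fault.c already has. */
#define page_fault_is_write(err)	((err) & DSISR_ISSTORE)

int main(void)
{
	unsigned long error_code = DSISR_ISSTORE;	/* pretend a store faulted */
	bool is_write = page_fault_is_write(error_code) != 0;

	/* The patch passes this precomputed value into bad_kernel_fault()
	 * (and on to bad_kuap_fault()) rather than re-deriving it there. */
	printf("is_write = %d\n", is_write);
	return 0;
}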