We have a might_fault() check in __unsafe_put_user_goto(), but that is dangerous as it potentially calls lots of code while user access is enabled.
It also triggers the check Alexey added to the irq restore path to
catch cases like that:

  WARNING: CPU: 30 PID: 1 at arch/powerpc/include/asm/book3s/64/kup.h:324 arch_local_irq_restore+0x160/0x190
  NIP arch_local_irq_restore+0x160/0x190
  LR  lock_is_held_type+0x140/0x200
  Call Trace:
    0xc00000007f392ff8 (unreliable)
    ___might_sleep+0x180/0x320
    __might_fault+0x50/0xe0
    filldir64+0x2d0/0x5d0
    call_filldir+0xc8/0x180
    ext4_readdir+0x948/0xb40
    iterate_dir+0x1ec/0x240
    sys_getdents64+0x80/0x290
    system_call_exception+0x160/0x280
    system_call_common+0xf0/0x27c

So remove the might_fault() check from unsafe_put_user(). Any call to
unsafe_put_user() must be inside a region that has had user access
enabled with user_access_begin(), so move the might_fault() in there.
That also allows us to drop the is_kernel_addr() test, because there
should be no code using user_access_begin() in order to access a
kernel address.

Signed-off-by: Michael Ellerman <m...@ellerman.id.au>
---
 arch/powerpc/include/asm/uaccess.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
index 70347ee34c94..71640eca7341 100644
--- a/arch/powerpc/include/asm/uaccess.h
+++ b/arch/powerpc/include/asm/uaccess.h
@@ -214,8 +214,6 @@ do {								\
 #define __unsafe_put_user_goto(x, ptr, size, label)			\
 do {									\
 	__typeof__(*(ptr)) __user *__pu_addr = (ptr);			\
-	if (!is_kernel_addr((unsigned long)__pu_addr))			\
-		might_fault();						\
 	__chk_user_ptr(ptr);						\
 	__put_user_size_goto((x), __pu_addr, (size), label);		\
 } while (0)
@@ -494,6 +492,8 @@ extern void memcpy_page_flushcache(char *to, struct page *page, size_t offset,
 static __must_check inline bool user_access_begin(const void __user *ptr, size_t len)
 {
+	might_fault();
+
 	if (unlikely(!access_ok(ptr, len)))
 		return false;
 	allow_read_write_user((void __user *)ptr, ptr, len);
-- 
2.25.1
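
As a minimal illustration (not part of the patch; the function and
variable names below are made up), the calling pattern this change
relies on looks roughly like:

	/*
	 * Sketch of the expected pattern: the user access window is opened
	 * with user_access_begin(), which now carries the might_fault()
	 * annotation, so the unsafe_put_user() inside the block needs no
	 * per-store check. user_access_end() must run on both paths.
	 */
	static int example_store_flag(unsigned int __user *uptr, unsigned int val)
	{
		if (!user_access_begin(uptr, sizeof(*uptr)))
			return -EFAULT;

		unsafe_put_user(val, uptr, efault);

		user_access_end();
		return 0;

	efault:
		user_access_end();
		return -EFAULT;
	}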