On 12/20/24 18:41, Sebastian Andrzej Siewior wrote:
> __module_address() can be invoked within a RCU section, there is no
> requirement to have preemption disabled.
> 
> I'm not sure if using rcu_read_lock() will introduce the regression that
> has been fixed in commit 14c4c8e41511a ("cfi: Use
> rcu_read_{un}lock_sched_notrace").
> 
> Cc: Elliot Berman <quic_eber...@quicinc.com>
> Cc: Kees Cook <k...@kernel.org>
> Cc: Nathan Chancellor <nat...@kernel.org>
> Cc: Sami Tolvanen <samitolva...@google.com>
> Cc: Steven Rostedt <rost...@goodmis.org>
> Cc: l...@lists.linux.dev
> Signed-off-by: Sebastian Andrzej Siewior <bige...@linutronix.de>
> ---
>  kernel/cfi.c | 4 ++++
>  1 file changed, 4 insertions(+)
> 
> diff --git a/kernel/cfi.c b/kernel/cfi.c
> index 08caad7767176..c8f2b5a51b2e6 100644
> --- a/kernel/cfi.c
> +++ b/kernel/cfi.c
> @@ -71,6 +71,10 @@ static bool is_module_cfi_trap(unsigned long addr)
>       struct module *mod;
>       bool found = false;
>  
> +     /*
> +      * XXX this could be RCU protected but would it introduce the regression
> +      * fixed in 14c4c8e41511a ("cfi: Use rcu_read_{un}lock_sched_notrace")
> +      */
>       rcu_read_lock_sched_notrace();
>  
>       mod = __module_address(addr);

I think that since 89245600941e ("cfi: Switch to -fsanitize=kcfi"), this
can be a plain rcu_read_lock_sched(), or in your case rcu_read_lock().
The recursive case where __cfi_slowpath_diag() could end up calling
itself is no longer present, as all of that logic is gone, and I don't
see any other reason why this would have to use the notrace variant.
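For illustration, a minimal sketch of what that could look like with
plain RCU, assuming the rest of is_module_cfi_trap() stays as in the
hunk quoted above (the actual trap lookup is elided here because it is
not part of the quoted context):

	/* Untested sketch, not the proposed patch. */
	static bool is_module_cfi_trap(unsigned long addr)
	{
		struct module *mod;
		bool found = false;

		/* plain RCU instead of rcu_read_lock_sched_notrace() */
		rcu_read_lock();

		mod = __module_address(addr);
		/* ... look up addr in the module's CFI traps, set found ... */

		rcu_read_unlock();

		return found;
	}

As the patch description notes, __module_address() itself only needs an
RCU read-side critical section, so the locking above should be enough
unless the tracing concern from 14c4c8e41511a still applies.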

@Sami, could you please confirm this?

-- 
Thanks,
Petr
