Hi,

Thanks for the updated version.

On Fri, 9 May 2025, Sebastian Andrzej Siewior wrote:

> From: Peter Zijlstra <pet...@infradead.org>
> 
> With the goal of deprecating / removing VOLUNTARY preempt, live-patch
> needs to stop relying on cond_resched() to make forward progress.
> 
> Instead, rely on schedule() with TASK_FREEZABLE set. Just like
> live-patching, the freezer needs to be able to stop tasks in a safe /
> known state.
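
Just to illustrate for anyone following along (my own sketch, not part of the
patch, and the function name is made up): the kind of sleep referred to here,
using the TASK_FREEZABLE state flag, would look roughly like

  #include <linux/sched.h>

  /*
   * Illustration only: a task blocked in a freezable sleep. It went
   * through schedule() in a known state, so the freezer (and, with this
   * patch, livepatch) can deal with it without relying on cond_resched().
   */
  static void example_freezable_wait(void)
  {
          set_current_state(TASK_INTERRUPTIBLE | TASK_FREEZABLE);
          schedule();
          __set_current_state(TASK_RUNNING);
  }
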
> 
> Compile tested only.

The livepatch selftests pass, and I also ran some additional tests.
 
> [bigeasy: use likely() in __klp_sched_try_switch() and update comments]
> 
> Signed-off-by: Peter Zijlstra (Intel) <pet...@infradead.org>
> Signed-off-by: Sebastian Andrzej Siewior <bige...@linutronix.de>

Acked-by: Miroslav Benes <mbe...@suse.cz>

A nit below in case there is another version; otherwise Petr might fix it
when merging.

> @@ -365,27 +356,20 @@ static bool klp_try_switch_task(struct task_struct *task)
>  
>  void __klp_sched_try_switch(void)
>  {
> +     /*
> +      * This function is called from __schedule() while a context switch is
> +      * about to happen. Preemption is already disabled and klp_mutex
> +      * can't be acquired.
> +      * Disabled preemption is used to prevent racing with other callers of
> +      * klp_try_switch_task(). Thanks to task_call_func() they won't be
> +      * able to switch to this task while it's running.
> +      */
> +     lockdep_assert_preemption_disabled();
> +
> +     /* Make sure current didn't get patched */
>       if (likely(!klp_patch_pending(current)))
>                return;

This last comment is not precise. If !klp_patch_pending(), there is nothing
to do, so we take the fast way out. If it were up to me, I would remove the
line altogether.
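
That is, something like this (the big locking comment above stays, only the
one-liner goes away):

        lockdep_assert_preemption_disabled();

        if (likely(!klp_patch_pending(current)))
                return;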

Miroslav
