Since clearing a bit in thread_info is an atomic operation, the spinlock
is redundant and can be removed. Reducing lock contention also improves
performance.

Acked-by: Masami Hiramatsu (Google) <mhira...@kernel.org>
Acked-by: Oleg Nesterov <o...@redhat.com>
Signed-off-by: Liao Chang <liaocha...@huawei.com>
---
 kernel/events/uprobes.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index 2a0059464383..196366c013f2 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -1986,9 +1986,7 @@ bool uprobe_deny_signal(void)
        WARN_ON_ONCE(utask->state != UTASK_SSTEP);
 
        if (task_sigpending(t)) {
-               spin_lock_irq(&t->sighand->siglock);
                clear_tsk_thread_flag(t, TIF_SIGPENDING);
-               spin_unlock_irq(&t->sighand->siglock);
 
                if (__fatal_signal_pending(t) || arch_uprobe_xol_was_trapped(t)) {
                        utask->state = UTASK_SSTEP_TRAPPED;
-- 
2.34.1
