[ Upstream commit a1247d06d01045d7ab2882a9c074fbf21137c690 ]

Even though the atomic_dec_and_mutex_lock() in
__static_key_slow_dec_cpuslocked() can never see a negative value in
key->enabled, the subsequent sanity check re-reads key->enabled, which may
have been set to -1 in the meantime by static_key_slow_inc_cpuslocked().

                CPU  A                               CPU B

 __static_key_slow_dec_cpuslocked():          static_key_slow_inc_cpuslocked():
                               # enabled = 1
   atomic_dec_and_mutex_lock()
                               # enabled = 0
                                              atomic_read() == 0
                                              atomic_set(-1)
                               # enabled = -1
   val = atomic_read()
   # Oops - val == -1!

The test case is TCP's clean_acked_data_enable() / clean_acked_data_disable()
as tickled by KTLS (net/ktls).
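
For context: atomic_fetch_add_unless(v, a, u) atomically adds @a to @v unless
the value is already @u, and returns the value it observed, so the fix can
decide on what the atomic operation itself returned instead of re-reading
key->enabled. Below is a minimal userspace sketch of those semantics using
C11 atomics; it is not the kernel implementation, and the helper name is
reused purely for illustration:

  /*
   * Userspace approximation of atomic_fetch_add_unless(): add @a to @v
   * unless the current value is @u, returning the value observed.
   */
  #include <stdatomic.h>
  #include <stdio.h>

  static int fetch_add_unless(atomic_int *v, int a, int u)
  {
          int old = atomic_load(v);

          do {
                  if (old == u)
                          break;
          } while (!atomic_compare_exchange_weak(v, &old, old + a));

          return old;
  }

  int main(void)
  {
          atomic_int enabled = 1;

          /*
           * With enabled == 1 the add is refused and the observed value
           * (1) is returned; in the patch this is the val == 1 case that
           * falls through to the locked slow path.  A transient -1
           * published by static_key_slow_inc_cpuslocked() can no longer
           * be mistaken for an unbalanced decrement.
           */
          printf("old = %d\n", fetch_add_unless(&enabled, -1, 1));
          return 0;
  }

Built with -std=c11 this prints "old = 1", i.e. the refused decrement that
sends the caller into the jump_label_lock() slow path.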

Suggested-by: Jakub Kicinski <jakub.kicin...@netronome.com>
Reported-by: Jakub Kicinski <jakub.kicin...@netronome.com>
Tested-by: Jakub Kicinski <jakub.kicin...@netronome.com>
Signed-off-by: Peter Zijlstra (Intel) <pet...@infradead.org>
Cc: Andrew Morton <a...@linux-foundation.org>
Cc: Linus Torvalds <torva...@linux-foundation.org>
Cc: Paul E. McKenney <paul...@linux.vnet.ibm.com>
Cc: Peter Zijlstra <pet...@infradead.org>
Cc: Thomas Gleixner <t...@linutronix.de>
Cc: Will Deacon <will.dea...@arm.com>
Cc: ard.biesheu...@linaro.org
Cc: oss-driv...@netronome.com
Cc: pbonz...@redhat.com
Signed-off-by: Ingo Molnar <mi...@kernel.org>
Signed-off-by: Sasha Levin <sas...@kernel.org>
---
 kernel/jump_label.c | 21 +++++++++++++--------
 1 file changed, 13 insertions(+), 8 deletions(-)

diff --git a/kernel/jump_label.c b/kernel/jump_label.c
index bad96b476eb6e..a799b1ac6b2fe 100644
--- a/kernel/jump_label.c
+++ b/kernel/jump_label.c
@@ -206,6 +206,8 @@ static void __static_key_slow_dec_cpuslocked(struct static_key *key,
                                           unsigned long rate_limit,
                                           struct delayed_work *work)
 {
+       int val;
+
        lockdep_assert_cpus_held();
 
        /*
@@ -215,17 +217,20 @@ static void __static_key_slow_dec_cpuslocked(struct static_key *key,
         * returns is unbalanced, because all other static_key_slow_inc()
         * instances block while the update is in progress.
         */
-       if (!atomic_dec_and_mutex_lock(&key->enabled, &jump_label_mutex)) {
-               WARN(atomic_read(&key->enabled) < 0,
-                    "jump label: negative count!\n");
+       val = atomic_fetch_add_unless(&key->enabled, -1, 1);
+       if (val != 1) {
+               WARN(val < 0, "jump label: negative count!\n");
                return;
        }
 
-       if (rate_limit) {
-               atomic_inc(&key->enabled);
-               schedule_delayed_work(work, rate_limit);
-       } else {
-               jump_label_update(key);
+       jump_label_lock();
+       if (atomic_dec_and_test(&key->enabled)) {
+               if (rate_limit) {
+                       atomic_inc(&key->enabled);
+                       schedule_delayed_work(work, rate_limit);
+               } else {
+                       jump_label_update(key);
+               }
        }
        jump_label_unlock();
 }
-- 
2.20.1


