On 04/18/2016 02:31 AM, Davidlohr Bueso wrote:
... remove the redundant second iteration; this is most
likely a copy/paste buglet.

Signed-off-by: Davidlohr Bueso <dbu...@suse.de>
---
  kernel/locking/qspinlock_stat.h | 2 --
  1 file changed, 2 deletions(-)

diff --git a/kernel/locking/qspinlock_stat.h b/kernel/locking/qspinlock_stat.h
index d734b7502001..72722334237a 100644
--- a/kernel/locking/qspinlock_stat.h
+++ b/kernel/locking/qspinlock_stat.h
@@ -191,8 +191,6 @@ static ssize_t qstat_write(struct file *file, const char __user *user_buf,

                for (i = 0 ; i < qstat_num; i++)
                        WRITE_ONCE(ptr[i], 0);
-               for (i = 0 ; i < qstat_num; i++)
-                       WRITE_ONCE(ptr[i], 0);
        }
        return count;
  }

The double write is done on purpose. Since the statistics count update isn't atomic, there is a very small chance (p) that clearing the count lands in the middle of a read-modify-write transaction, in which case the updater's write-back resurrects the stale value and the reset is lost. Doing a double write reduces that chance further, to roughly p^2. This isn't failsafe, but I think it is good enough.

However, I don't mind eliminating the double write either, as we can always view the statistics counts after a reset to make sure that they were properly cleared.

Cheers,
Longman
