On 09/14/2015 09:57 AM, Peter Zijlstra wrote:
> On Fri, Sep 11, 2015 at 02:37:37PM -0400, Waiman Long wrote:
>> +#define queued_spin_trylock(l) pv_queued_spin_trylock_unfair(l)
>> +static inline bool pv_queued_spin_trylock_unfair(struct qspinlock *lock)
>> +{
>> +       struct __qspinlock *l = (void *)lock;
>> +
>> +       if (READ_ONCE(l->locked))
>> +               return 0;
>> +       /*
>> +        * Wait a bit here to ensure that an actively spinning vCPU has a fair
>> +        * chance of getting the lock.
>> +        */
>> +       cpu_relax();
>> +
>> +       return cmpxchg(&l->locked, 0, _Q_LOCKED_VAL) == 0;
>> +}
>> +static inline int pvstat_trylock_unfair(struct qspinlock *lock)
>> +{
>> +       int ret = pv_queued_spin_trylock_unfair(lock);
>> +
>> +       if (ret)
>> +               pvstat_inc(pvstat_utrylock);
>> +       return ret;
>> +}
>> +#undef  queued_spin_trylock
>> +#define queued_spin_trylock(l) pvstat_trylock_unfair(l)
> These aren't actually ever used...

pvstat_trylock_unfair() is within the CONFIG_QUEUED_LOCK_STAT block, so it is only activated when that config parameter is set. Otherwise, pv_queued_spin_trylock_unfair() is used without any counting.

It is used to provide a count of how many unfair trylocks have successfully gotten the lock.
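
Roughly, the layout is as follows (a sketch of where the #ifdef falls, reconstructed from the patch hunks quoted above, not the verbatim diff):

/* Unfair trylock, always available under paravirt. */
static inline bool pv_queued_spin_trylock_unfair(struct qspinlock *lock);
#define queued_spin_trylock(l)	pv_queued_spin_trylock_unfair(l)

#ifdef CONFIG_QUEUED_LOCK_STAT
/* Stat-counting wrapper, compiled in only when the config is set. */
static inline int pvstat_trylock_unfair(struct qspinlock *lock)
{
	int ret = pv_queued_spin_trylock_unfair(lock);

	if (ret)
		pvstat_inc(pvstat_utrylock);	/* successful unfair trylock */
	return ret;
}

/* Redirect queued_spin_trylock() to the counting version. */
#undef  queued_spin_trylock
#define queued_spin_trylock(l)	pvstat_trylock_unfair(l)
#endif /* CONFIG_QUEUED_LOCK_STAT */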

Cheers,
Longman