This patch optimizes the "quick" RCU update-side fastpath, so that in the
absence of readers, synchronize_qrcu() does four non-atomic comparisons
and three memory barriers, eliminating the need to acquire the global
lock in this case.  Lightly tested.  The algorithm has been validated for
the 3-reader-2-updater and 2-reader-3-updater cases -- the 3-reader-3-updater
case is still to be done (I expect to get access to a large-memory machine
in the next few weeks -- validating it will need well over 20GB of memory).
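
For reference, here is a sketch of the QRCU read-side primitives that this
fastpath must cooperate with, reconstructed from memory of Oleg's original
patch (the ->completed, ->ctr[], and ->wq fields match the update-side code
in the diff below; consult Oleg's patch for the authoritative reader side):

int qrcu_read_lock(struct qrcu_struct *qp)
{
	for (;;) {
		int idx = qp->completed & 0x1;

		/* Succeeds only if this counter is still in use. */
		if (likely(atomic_inc_not_zero(qp->ctr + idx)))
			return idx;
		/* Otherwise we raced with a counter switch; retry. */
	}
}

void qrcu_read_unlock(struct qrcu_struct *qp, int idx)
{
	/* Last reader out wakes up a waiting synchronize_qrcu(). */
	if (atomic_dec_and_test(qp->ctr + idx))
		wake_up(&qp->wq);
}

The invariant to keep in mind below: the active counter is biased by 1, so
the two counters sum to exactly 1 iff no readers are present, and each
reader holds its counter elevated from qrcu_read_lock() through
qrcu_read_unlock().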

Not for inclusion.  This patch is against Oleg's original patch, and likely
needs to be rediffed against Jens' patchstack.  I will do this rediffing
later; first I want an easy-to-test and easy-to-inspect version.

Signed-off-by: Paul E. McKenney <[EMAIL PROTECTED]>
---

 srcu.c |   42 +++++++++++++++++++++++++++++++++++++-----
 1 file changed, 37 insertions(+), 5 deletions(-)

diff -urpNa -X dontdiff linux-2.6.19-qrcu/kernel/srcu.c linux-2.6.19-qrcu-fp/kernel/srcu.c
--- linux-2.6.19-qrcu/kernel/srcu.c     2007-02-05 16:27:50.000000000 -0800
+++ linux-2.6.19-qrcu-fp/kernel/srcu.c  2007-02-11 18:10:35.000000000 -0800
@@ -287,23 +287,55 @@ void synchronize_qrcu(struct qrcu_struct
 {
        int idx;
 
-       smp_mb();
+       smp_mb();  /* Force preceding change to happen before fastpath check. */
+
+       /*
+        * Fastpath: If the two counters sum to "1" at a given point in
+        * time, there are no readers.  However, it takes two separate
+        * loads to sample both counters, which won't occur simultaneously.
+        * We might therefore race with a counter switch: we might see
+        * ctr[0]==0, then the counter might switch, then we might see
+        * ctr[1]==1, unbeknownst to us because a reader is still
+        * there.  So we do a read memory barrier and recheck.  If the
+        * same race happens again, there must have been a second counter
+        * switch.  This second counter switch could not have happened
+        * until all preceding readers finished, so if the condition
+        * is true both times, we may safely proceed.
+        *
+        * This relies critically on the atomic increment and atomic
+        * decrement being seen as executing in order.
+        */
+
+       if (atomic_read(&qp->ctr[0]) + atomic_read(&qp->ctr[1]) <= 1) {
+               smp_rmb();  /* Keep two checks independent. */
+               if (atomic_read(&qp->ctr[0]) + atomic_read(&qp->ctr[1]) <= 1)
+                       goto out;
+       }
+
        mutex_lock(&qp->mutex);
 
        idx = qp->completed & 0x1;
        if (atomic_read(qp->ctr + idx) == 1)
-               goto out;
+               goto out_unlock;
 
        atomic_inc(qp->ctr + (idx ^ 0x1));
-       /* Reduce the likelihood that qrcu_read_lock() will loop */
+
+       /*
+        * Prevent subsequent decrement from being seen before previous
+        * increment -- such an inversion could cause the fastpath
+        * above to falsely conclude that there were no readers.  Also,
+        * reduce the likelihood that qrcu_read_lock() will loop.
+        */
+
        smp_mb__after_atomic_inc();
        qp->completed++;
 
        atomic_dec(qp->ctr + idx);
        __wait_event(qp->wq, !atomic_read(qp->ctr + idx));
-out:
+out_unlock:
        mutex_unlock(&qp->mutex);
-       smp_mb();
+out:
+       smp_mb();  /* Force subsequent free after qrcu_read_unlock(). */
 }
 
 EXPORT_SYMBOL_GPL(init_qrcu_struct);
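
For easier review, this is how synchronize_qrcu() reads with the above
patch applied (assembled by hand from the diff, so trust the diff if the
two disagree):

void synchronize_qrcu(struct qrcu_struct *qp)
{
	int idx;

	smp_mb();  /* Force preceding change to happen before fastpath check. */

	/*
	 * Fastpath: if the two counters sum to "1", there are no readers.
	 * See the comment block in the diff above for why sampling twice,
	 * separated by smp_rmb(), suffices.
	 */
	if (atomic_read(&qp->ctr[0]) + atomic_read(&qp->ctr[1]) <= 1) {
		smp_rmb();  /* Keep two checks independent. */
		if (atomic_read(&qp->ctr[0]) + atomic_read(&qp->ctr[1]) <= 1)
			goto out;
	}

	mutex_lock(&qp->mutex);

	idx = qp->completed & 0x1;
	if (atomic_read(qp->ctr + idx) == 1)
		goto out_unlock;

	atomic_inc(qp->ctr + (idx ^ 0x1));

	/*
	 * Prevent the subsequent decrement from being seen before the
	 * increment, which could let the fastpath above falsely conclude
	 * that there were no readers.
	 */
	smp_mb__after_atomic_inc();
	qp->completed++;

	atomic_dec(qp->ctr + idx);
	__wait_event(qp->wq, !atomic_read(qp->ctr + idx));
out_unlock:
	mutex_unlock(&qp->mutex);
out:
	smp_mb();  /* Force subsequent free after qrcu_read_unlock(). */
}

Note the design choice in the fastpath: the two samples are separated by
smp_rmb() rather than a full smp_mb(), because the updater performs no
stores between the two checks -- only load-load ordering is needed there.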