On 11/08/2013 04:21 PM, Paul E. McKenney wrote:
On Mon, Nov 04, 2013 at 12:17:20PM -0500, Waiman Long wrote:
There is a pending patch in the rwsem patch series that adds generic
MCS locking helper functions for doing MCS-style locking. This patch
enables the queue rwlock to use those generic MCS lock/unlock
primitives for its internal queuing. This patch should only be merged
after that generic MCS locking patch has been merged.

Signed-off-by: Waiman Long <[email protected]>
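
For reference, a rough sketch (not the actual patch) of the queuing
pattern that changelog describes, with the body of the slowpath elided;
mcs_spin_lock()/mcs_spin_unlock() are assumed to have the signatures
from the pending generic MCS helper patch, and lock->waitq plus the
on-stack node follow the hunk quoted below:

void queue_read_lock_slowpath(struct qrwlock *lock)
{
        struct mcs_spinlock node;

        /* Queue up behind any existing waiters, spinning on our own node. */
        mcs_spin_lock(&lock->waitq, &node);

        /* ... wait for the writer to go away and take a reader count ... */

        /* Hand the head of the queue off to the next waiter, if any. */
        mcs_spin_unlock(&lock->waitq, &node);
}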
This one might address at least some of the earlier memory-barrier
issues, assuming that the MCS lock is properly memory-barriered.

Then again, maybe not.  Please see below.

                                                        Thanx, Paul

        /*
         * At the head of the wait queue now, try to increment the reader
@@ -172,12 +103,36 @@ void queue_read_lock_slowpath(struct qrwlock *lock)
                while (ACCESS_ONCE(lock->cnts.writer))
                        cpu_relax();
        }
-       rspin_until_writer_unlock(lock, 1);
-       signal_next(lock, &node);
+       /*
+        * Increment reader count & wait until writer unlock
+        */
+       cnts.rw = xadd(&lock->cnts.rw, QRW_READER_BIAS);
+       rspin_until_writer_unlock(lock, cnts);
+       mcs_spin_unlock(&lock->waitq, &node);
But mcs_spin_unlock() is only required to do a RELEASE barrier, which
could still allow critical-section leakage.

Yes, that is a problem. I will try to add an ACQUIRE barrier when reading the writer byte.
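
As a rough illustration of that idea (the union type name and the
smp_load_acquire()-style primitive are assumptions here, not taken from
the actual series), the re-read of the writer byte in
rspin_until_writer_unlock() could be made an ACQUIRE, so the reader's
critical section cannot be speculated ahead of observing the writer's
release:

static inline void rspin_until_writer_unlock(struct qrwlock *lock,
                                             union qrwcnts cnts)
{
        while (cnts.writer) {
                cpu_relax();
                /* ACQUIRE; pairs with the writer's RELEASE on unlock. */
                cnts.rw = smp_load_acquire(&lock->cnts.rw);
        }
}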

-Longman
