On 02/13/2014 12:26 PM, Peter Zijlstra wrote:
> On Thu, Feb 13, 2014 at 05:35:46PM +0100, Peter Zijlstra wrote:
>> On Tue, Feb 11, 2014 at 03:12:59PM -0500, Waiman Long wrote:
>>> I used the same locktest program to repeatedly take a single rwlock
>>> with a programmable number of threads and count their execution
>>> times. Each thread takes the lock 5M times on a 4-socket 40-core
>>> Westmere-EX system. I bound all the threads to different CPUs with
>>> the following 3 configurations:
>>> 1) Both the CPUs and the lock are in the same node
>>> 2) The CPUs and the lock are in different nodes
>>> 3) Half of the CPUs are in the same node as the lock & the other
>>>    half are remote
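The locktest source is not part of this thread; a rough userspace
sketch of that kind of harness, with a pthreads rwlock standing in for
the kernel lock and all names invented here, might look like this:

/*
 * Sketch of a locktest-style harness (not the actual program): each
 * thread is pinned to one CPU and takes the same rwlock 5M times as
 * a writer, timing itself.  Build with: gcc -O2 -pthread bench.c
 */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define ITERATIONS	5000000

static pthread_rwlock_t lock = PTHREAD_RWLOCK_INITIALIZER;

struct worker {
	pthread_t	thread;
	int		cpu;	/* CPU this thread is bound to */
	double		secs;	/* measured execution time */
};

static void *worker_run(void *arg)
{
	struct worker *w = arg;
	struct timespec t0, t1;
	cpu_set_t set;
	int i;

	/* Bind the thread to its assigned CPU. */
	CPU_ZERO(&set);
	CPU_SET(w->cpu, &set);
	pthread_setaffinity_np(pthread_self(), sizeof(set), &set);

	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (i = 0; i < ITERATIONS; i++) {
		pthread_rwlock_wrlock(&lock);
		pthread_rwlock_unlock(&lock);
	}
	clock_gettime(CLOCK_MONOTONIC, &t1);

	w->secs = (t1.tv_sec - t0.tv_sec) +
		  (t1.tv_nsec - t0.tv_nsec) / 1e9;
	return NULL;
}

int main(int argc, char **argv)
{
	int i, nthreads = argc > 1 ? atoi(argv[1]) : 4;
	struct worker *w = calloc(nthreads, sizeof(*w));

	for (i = 0; i < nthreads; i++) {
		/* Pick w[i].cpu to match configurations 1-3 above. */
		w[i].cpu = i;
		pthread_create(&w[i].thread, NULL, worker_run, &w[i]);
	}
	for (i = 0; i < nthreads; i++) {
		pthread_join(w[i].thread, NULL);
		printf("thread %d (cpu %d): %.3f s\n", i, w[i].cpu,
		       w[i].secs);
	}
	return 0;
}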
>> I can't find these configurations in the numbers below; especially
>> the first is interesting because most computers out there have no
>> nodes.
>>> Two types of qrwlock are tested:
>>> 1) Use MCS lock
>>> 2) Use ticket lock
>> arch_spinlock_t; you forget that if you change that to an MCS style
>> lock this one goes along for free.
>>
>> Furthermore, comparing the current rwlock to the ticket-rwlock
>> already shows an improvement, so on that aspect it's worth it as
>> well.
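To make the "MCS style" point concrete: in an MCS lock each waiter
spins on a flag in its own queue node instead of on the shared lock
word, so handing the lock over touches only one remote cache line.
A minimal userspace illustration (C11 atomics, simplified; not kernel
code):

#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

struct mcs_node {
	struct mcs_node *_Atomic next;
	atomic_bool locked;		/* each waiter spins here */
};

struct mcs_lock {
	struct mcs_node *_Atomic tail;	/* last waiter in the queue */
};

static void mcs_acquire(struct mcs_lock *l, struct mcs_node *me)
{
	struct mcs_node *prev;

	atomic_store(&me->next, NULL);
	atomic_store(&me->locked, true);

	/* Swap ourselves in as the new tail of the wait queue. */
	prev = atomic_exchange(&l->tail, me);
	if (!prev)
		return;			/* queue was empty: lock is ours */

	/* Link behind the previous waiter and spin on our own node. */
	atomic_store(&prev->next, me);
	while (atomic_load(&me->locked))
		;			/* cpu_relax() in kernel code */
}

static void mcs_release(struct mcs_lock *l, struct mcs_node *me)
{
	struct mcs_node *next = atomic_load(&me->next);

	if (!next) {
		/* No visible successor: try to mark the queue empty. */
		struct mcs_node *old = me;

		if (atomic_compare_exchange_strong(&l->tail, &old, NULL))
			return;
		/* A successor is mid-enqueue; wait for its link. */
		while (!(next = atomic_load(&me->next)))
			;
	}
	atomic_store(&next->locked, false);	/* hand the lock over */
}

The qrwlock only needs its internal wait lock to behave like a
spinlock, which is why an MCS-based arch_spinlock_t would carry over
automatically.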
As I said in my previous email, I am not against your change.
>> And there's also the paravirt people to consider; a fair rwlock will
>> make them unhappy, and I'm hoping that their current paravirt ticket
>> stuff is sufficient to deal with the ticket-rwlock without them
>> having to come and wreck things again.
Actually, my original qrwlock patch has an unfair option. With some
minor changes, it can be made unfair pretty easily. So we can use the
paravirt config macro to change it to unfair if that is what the
virtualization people want.
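For illustration, a compile-time switch along those lines might look
like this toy (the macro name and lock-word layout are invented here,
not taken from the patch): unfair readers simply retry on the lock
word and never queue, so a preempted queue-head vCPU cannot stall
them.

#include <stdatomic.h>

#define _QW_LOCKED	0x0ffU		/* writer byte of the lock word */
#define _QR_BIAS	0x100U		/* one reader reference */

struct toy_qrwlock {
	atomic_uint cnts;		/* reader count + writer byte */
	atomic_flag wait_lock;		/* stand-in for the wait queue */
};

static void toy_read_lock(struct toy_qrwlock *l)
{
#ifdef TOY_PARAVIRT_UNFAIR
	/* Unfair: retry on the lock word directly, never queue. */
	for (;;) {
		if (!(atomic_fetch_add(&l->cnts, _QR_BIAS) & _QW_LOCKED))
			return;
		atomic_fetch_sub(&l->cnts, _QR_BIAS);	/* undo, retry */
	}
#else
	/* Fair: go through the wait queue (reduced to one flag here),
	 * so readers and writers take the lock in arrival order. */
	while (atomic_flag_test_and_set(&l->wait_lock))
		;
	while (atomic_fetch_add(&l->cnts, _QR_BIAS) & _QW_LOCKED)
		atomic_fetch_sub(&l->cnts, _QR_BIAS);
	atomic_flag_clear(&l->wait_lock);
#endif
}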
> Similarly, qspinlock needs paravirt support.
The current paravirt code hard-codes the use of the ticket spinlock.
That is why I have to disable my qspinlock code when paravirt is
enabled.
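A guard expressing that constraint at build time could look something
like this (illustrative only; the queue_spin_lock/ticket_spin_lock
names are placeholders, not the actual patch):

/* Illustrative only: the paravirt code patches the ticket-lock
 * slowpaths, so the queue spinlock can only be selected when
 * paravirt spinlocks are not configured. */
#if defined(CONFIG_QUEUE_SPINLOCK) && !defined(CONFIG_PARAVIRT_SPINLOCKS)
#define arch_spin_lock(l)	queue_spin_lock(l)
#else
#define arch_spin_lock(l)	ticket_spin_lock(l)
#endif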
I have been thinking about that paravirt support. Since the waiting
tasks are queued up, it should be possible, by maintaining some kind of
heartbeat signal, to let a waiting task jump the queue when the
previous one in the queue doesn't seem to be alive. I will work on that
next, once I am done with the current qspinlock patch.
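Purely as a sketch of that idea (nothing like this exists in the
patches yet; all names and the timeout value are invented): each
queued waiter refreshes a per-node timestamp while it spins, and its
successor treats a stale timestamp as a sign that the waiter's vCPU
has been preempted and may be bypassed.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>
#include <time.h>

#define HEARTBEAT_TIMEOUT_NS	(10 * 1000 * 1000)	/* 10ms, arbitrary */

struct pv_node {
	struct pv_node *_Atomic next;
	atomic_bool locked;
	_Atomic uint64_t heartbeat;	/* last time this waiter ran */
};

static uint64_t now_ns(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;
}

/* Spin for the lock, proving liveness on every iteration. */
static void pv_wait(struct pv_node *me)
{
	while (atomic_load(&me->locked))
		atomic_store(&me->heartbeat, now_ns());
}

/* A successor may consider jumping the queue when this is false. */
static bool pv_prev_alive(struct pv_node *prev)
{
	return now_ns() - atomic_load(&prev->heartbeat) <
	       HEARTBEAT_TIMEOUT_NS;
}

The hard part, which this sketch ignores, is doing the actual bypass
safely: unlinking a possibly-dead node races with that node waking up
again.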
-Longman