On Tue, 2013-01-22 at 15:13 -0800, Michel Lespinasse wrote:
> {
> -	__q_spin_unlock(lock, node);
> -	preempt_enable_no_resched();
> -	local_bh_enable_ip((unsigned long)__builtin_return_address(0));
> +	unsigned int cpu, i;
> +	struct q_spinlock_token *token;
> +	for_each_possible_cpu(cpu)
[...]
On Wed, Jan 23, 2013 at 1:55 PM, Rik van Riel wrote:
> There is one thing I do not understand about these locks.

Ah, I need to explain it better then :)

> On 01/22/2013 06:13 PM, Michel Lespinasse wrote:
>> +static inline void
>> +q_spin_unlock(struct q_spinlock *lock, struct q_spinlock_node *node)
[...]
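The explanation itself is snipped in this excerpt, but the role of the node
argument can be shown with a standard MCS sketch. The following is a minimal
userspace rendering using C11 atomics; it illustrates the classic MCS
algorithm, not the kernel patch itself. The node a CPU passes to
q_spin_unlock is its own queue entry, which is how the lock holder finds its
successor and hands the lock over:

#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

struct mcs_node {
        struct mcs_node *_Atomic next;
        atomic_bool locked;                /* true while this waiter spins */
};

struct mcs_lock {
        struct mcs_node *_Atomic tail;     /* last node in the queue */
};

static void mcs_acquire(struct mcs_lock *lock, struct mcs_node *node)
{
        struct mcs_node *prev;

        atomic_store_explicit(&node->next, NULL, memory_order_relaxed);
        atomic_store_explicit(&node->locked, true, memory_order_relaxed);

        /* Atomic op #1: swap ourselves in as the new tail. */
        prev = atomic_exchange_explicit(&lock->tail, node,
                                        memory_order_acq_rel);
        if (!prev)
                return;                    /* queue was empty: lock acquired */

        /* Link in behind the previous tail and spin on our own node. */
        atomic_store_explicit(&prev->next, node, memory_order_release);
        while (atomic_load_explicit(&node->locked, memory_order_acquire))
                ;                          /* spin */
}

static void mcs_release(struct mcs_lock *lock, struct mcs_node *node)
{
        struct mcs_node *next =
                atomic_load_explicit(&node->next, memory_order_acquire);

        if (!next) {
                /* Atomic op #2: no visible successor; try to clear tail. */
                struct mcs_node *expected = node;
                if (atomic_compare_exchange_strong_explicit(
                                &lock->tail, &expected, NULL,
                                memory_order_acq_rel, memory_order_acquire))
                        return;            /* nobody was waiting */

                /* A successor is mid-enqueue; wait for it to link in. */
                while (!(next = atomic_load_explicit(&node->next,
                                                     memory_order_acquire)))
                        ;
        }

        /* Hand the lock directly to the next waiter. */
        atomic_store_explicit(&next->locked, false, memory_order_release);
}

Note that the release path still needs an atomic cmpxchg whenever no
successor is visible; that cost is one of the limitations discussed in the
excerpts below.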
On 01/22/2013 06:13 PM, Michel Lespinasse wrote:

Because of these limitations, the MCS queue spinlock implementation does
not always compare favorably to ticket spinlocks under moderate contention.

This alternative queue spinlock implementation has some nice properties:

- One single atomic operation [...]
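The rest of the property list is cut off in this excerpt. For reference,
the headline property (a single atomic operation on the acquire path and a
plain store on release) is also what the classic CLH queue lock provides.
The sketch below uses C11 atomics and is a generic CLH illustration, not
Michel's token-based implementation:

#include <stdatomic.h>
#include <stdbool.h>

struct clh_node {
        atomic_bool busy;              /* true while the owner holds the lock */
};

struct clh_lock {
        struct clh_node *_Atomic tail; /* most recently enqueued node */
};

/* The lock starts out pointing at a dummy node that reads as "free". */
static void clh_init(struct clh_lock *lock, struct clh_node *dummy)
{
        atomic_store_explicit(&dummy->busy, false, memory_order_relaxed);
        atomic_store_explicit(&lock->tail, dummy, memory_order_relaxed);
}

/* Acquire: exactly one atomic op (the exchange), then a local spin. */
static struct clh_node *clh_acquire(struct clh_lock *lock,
                                    struct clh_node *node)
{
        struct clh_node *prev;

        atomic_store_explicit(&node->busy, true, memory_order_relaxed);
        prev = atomic_exchange_explicit(&lock->tail, node,
                                        memory_order_acq_rel);
        while (atomic_load_explicit(&prev->busy, memory_order_acquire))
                ;                      /* spin on the predecessor's node */

        /* The predecessor's node is now ours to reuse on the next acquire. */
        return prev;
}

/* Release: a single plain store, no atomic read-modify-write. */
static void clh_release(struct clh_node *node)
{
        atomic_store_explicit(&node->busy, false, memory_order_release);
}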
The MCS based queue spinlock implementation was easy to use, but it has
a few annoying performance limitations under moderate contention.

- In the uncontended case, it uses atomic operations on both the acquire
  and the release paths, as opposed to the ticket spinlock which can use
  (on x86) a simple non-atomic add on the release path. [...]
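For comparison, here is a ticket lock sketch in the same userspace C11
style; the kernel's x86 arch code differs in detail, but the cost structure
quoted above is visible: one atomic fetch-add on acquire, and a release
that only the owner writes, so no atomic read-modify-write is needed there.

#include <stdatomic.h>

struct ticket_lock {
        atomic_uint next;              /* ticket dispenser */
        atomic_uint owner;             /* ticket currently being served */
};

static void ticket_acquire(struct ticket_lock *lock)
{
        /* The only atomic read-modify-write: take a ticket. */
        unsigned int me = atomic_fetch_add_explicit(&lock->next, 1,
                                                    memory_order_relaxed);
        while (atomic_load_explicit(&lock->owner,
                                    memory_order_acquire) != me)
                ;                      /* spin until our number comes up */
}

static void ticket_release(struct ticket_lock *lock)
{
        /* Only the lock holder writes owner, so load + plain store is safe. */
        unsigned int served = atomic_load_explicit(&lock->owner,
                                                   memory_order_relaxed);
        atomic_store_explicit(&lock->owner, served + 1, memory_order_release);
}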