>>> On 03.02.16 at 12:57, wrote:
> On 03/02/16 11:28, Jan Beulich wrote:
>> On 01.02.16 at 12:31, wrote:
>>> +void queue_write_lock_slowpath(rwlock_t *lock)
>>> +{
>>> +    u32 cnts;
>>> +
>>> +    /* Put the writer into the wait queue. */
>>> +    spin_lock(&lock->lock);
>>> +
>>> +    /* Try to acquire the lock directly if no reader is present. */
>>> +    if ( !atomic_read(&lock->cnts) &&
>>> +         (atomic_cmpxchg(&lock->cnts, 0, _QW_LOCKED) == 0) )
>>> +        goto unlock;
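For context, the remainder of this slowpath in the Linux qrwlock code it is derived from first announces the pending writer and then waits for existing readers to drain before taking the lock. A minimal sketch, assuming the Linux qrwlock flag names (_QW_WAITING, _QW_WMASK, _QW_LOCKED from asm-generic/qrwlock.h) carry over unchanged:

    /*
     * Set the waiting flag to notify readers that a writer is pending,
     * or wait for a previous writer to go away.
     */
    for ( ; ; )
    {
        cnts = atomic_read(&lock->cnts);
        if ( !(cnts & _QW_WMASK) &&
             (atomic_cmpxchg(&lock->cnts, cnts,
                             cnts | _QW_WAITING) == cnts) )
            break;

        cpu_relax();
    }

    /* When no readers remain, atomically switch WAITING to LOCKED. */
    for ( ; ; )
    {
        cnts = atomic_read(&lock->cnts);
        if ( (cnts == _QW_WAITING) &&
             (atomic_cmpxchg(&lock->cnts, _QW_WAITING,
                             _QW_LOCKED) == _QW_WAITING) )
            break;

        cpu_relax();
    }

 unlock:
    spin_unlock(&lock->lock);
    }

Because every contending writer first takes lock->lock, writers are serviced in the order the spinlock grants it, which is where the fairness comes from.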
From: Jennifer Herbert

The current rwlocks are write-biased and unfair. This allows writers
to starve readers in situations where there are many writers (e.g.,
p2m type changes from log dirty updates during domain save).

Replace the current implementation with queued read-write locks, which
use a fair spinlock to order waiting readers and writers, so that a
stream of writers can no longer starve readers.
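To illustrate how readers interlock with that queue, here is a sketch of the reader fast path in this style of lock; the names (_read_lock, _QR_BIAS, _QW_WMASK, queue_read_lock_slowpath) follow Linux's qrwlock and are assumptions about the eventual Xen interface, not a quote from the patch:

    /* Reader fast path for a Linux-style queued rwlock (sketch). */
    static inline void _read_lock(rwlock_t *lock)
    {
        u32 cnts = atomic_add_return(_QR_BIAS, &lock->cnts);

        /* No writer holds or is waiting for the lock: we have it. */
        if ( likely(!(cnts & _QW_WMASK)) )
            return;

        /*
         * A writer is active or queued: the slowpath backs out the
         * reader count and waits on the same fair spinlock writers
         * queue on, so neither side can starve the other.
         */
        queue_read_lock_slowpath(lock);
    }

The key design point is that a reader arriving after a queued writer no longer overtakes it, which directly addresses the log-dirty starvation case described above.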