On Sun, May 24, 2009 at 05:04:52PM +0300, Avi Kivity wrote:
> Marcelo Tosatti wrote:
>> Get rid of kvm->lock dependency on coalesced_mmio methods. Use an  
>> atomic variable instead to guarantee only one vcpu is batching
>> data into the ring at a given time.
>>
>> Signed-off-by: Marcelo Tosatti <mtosa...@redhat.com>
>>
>> Index: kvm-irqlock/virt/kvm/coalesced_mmio.c
>> ===================================================================
>> --- kvm-irqlock.orig/virt/kvm/coalesced_mmio.c
>> +++ kvm-irqlock/virt/kvm/coalesced_mmio.c
>> @@ -26,9 +26,12 @@ static int coalesced_mmio_in_range(struc
>>      if (!is_write)
>>              return 0;
>>  -   /* kvm->lock is taken by the caller and must be not released before
>> -         * dev.read/write
>> -         */
>> +    /*
>> +     * Some other vcpu might be batching data into the ring,
>> +     * fallback to userspace. Ordering not our problem.
>> +     */
>> +    if (!atomic_add_unless(&dev->in_use, 1, 1))
>> +            return 0;
>>    
>
> Ordering with simultaneous writes is indeed not our problem, but the  
> ring may contain ordered writes (even by the same vcpu!)

You mean writes by a vcpu should maintain ordering even when that vcpu
migrates to a different pcpu? 

The smp_wmb() in coalesced_mmio_write() guarantees that.
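
For reference, the write path looks roughly like the sketch below. This is
simplified from memory, not a verbatim copy of virt/kvm/coalesced_mmio.c:
the device lookup and the in_use release added by this patch are
approximated.

static void coalesced_mmio_write(struct kvm_io_device *this,
				 gpa_t addr, int len, const void *val)
{
	/* Device lookup; the exact form may differ. */
	struct kvm_coalesced_mmio_dev *dev = this->private;
	struct kvm_coalesced_mmio_ring *ring = dev->kvm->coalesced_mmio_ring;

	/* Fill the next free slot of the ring... */
	ring->coalesced_mmio[ring->last].phys_addr = addr;
	ring->coalesced_mmio[ring->last].len = len;
	memcpy(ring->coalesced_mmio[ring->last].data, val, len);

	/*
	 * ...and only then publish it: the barrier orders the entry data
	 * before the ring->last update, so a consumer that sees the new
	 * 'last' also sees a complete entry, regardless of which pcpu the
	 * vcpu was running on.
	 */
	smp_wmb();
	ring->last = (ring->last + 1) % KVM_COALESCED_MMIO_MAX;

	/* With this patch, the batching slot taken in in_range() would be
	 * released here (approximate). */
	atomic_set(&dev->in_use, 0);
}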

> Suggest our own lock here.  in_use is basically a homemade lock, better  
> to use the factory made ones which come with a warranty.

The first submission used a mutex, but that was considered unorthodox.
Although I agree with the argument you made then, I don't see a way
around it.

Some pseudocode, please?
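
To make the question concrete, here is one guess at what a lock-based
variant could look like. A sketch only, not a proposal: the struct layout
and zone checks are abbreviated, and "ring_lock" is a made-up name.

/*
 * Hypothetical variant: a per-device spinlock instead of the in_use
 * atomic.  spin_trylock() in in_range() plays the same role as
 * atomic_add_unless(); contention means another vcpu is batching,
 * so fall back to userspace.
 */
struct kvm_coalesced_mmio_dev {
	struct kvm_io_device dev;
	struct kvm *kvm;
	spinlock_t ring_lock;		/* would replace atomic_t in_use */
	/* ... zones ... */
};

static int coalesced_mmio_in_range(struct kvm_io_device *this,
				   gpa_t addr, int len, int is_write)
{
	struct kvm_coalesced_mmio_dev *dev = this->private;

	if (!is_write)
		return 0;

	/* Some other vcpu is batching data into the ring,
	 * fall back to userspace. */
	if (!spin_trylock(&dev->ring_lock))
		return 0;

	/* ... zone checks as before; any "not in range" exit must
	 * spin_unlock() before returning 0 ... */
	return 1;
}

static void coalesced_mmio_write(struct kvm_io_device *this,
				 gpa_t addr, int len, const void *val)
{
	struct kvm_coalesced_mmio_dev *dev = this->private;

	/* ... copy the entry, smp_wmb(), bump ring->last ... */

	spin_unlock(&dev->ring_lock);	/* taken in in_range() */
}

That still has the lock acquired in in_range() and released in write(),
which is presumably the same cross-function pattern that made the mutex
version look unorthodox in the first place, hence the question.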

Agree with the suggestion on patch 3; will fix the comment in
kvm_host.h.
