On Wed, Oct 03, 2018 at 19:05:51 +0200, Paolo Bonzini wrote:

On 03/10/2018 19:02, Emilio G. Cota wrote:
>> For reads I agree, but you may actually get a torn read if the writer
>> doesn't use atomic_set.
>
> But you cannot get a torn read if all reads that don't hold the lock
> are coming from the same thread that performed the write.
Ah, so you are relying on all of the lock-free reads coming from the
same thread that performed the write.
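To make the torn-read point concrete, here is a minimal standalone
sketch of the invariant being discussed, using C11 atomics and a
pthread mutex in place of QEMU's atomic_set/atomic_read macros and
tlb_lock; the struct and function names are illustrative, not taken
from the patch:

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdint.h>

    struct entry {
        pthread_mutex_t lock;          /* stands in for tlb_lock */
        _Atomic uintptr_t addr_write;  /* word-sized, also read lock-free */
    };

    /* Updater: any thread may call this; the lock serializes writers.
     * The non-tearing atomic store is needed only because *other*
     * threads may read addr_write without taking the lock. */
    static void entry_set(struct entry *e, uintptr_t val)
    {
        pthread_mutex_lock(&e->lock);
        atomic_store_explicit(&e->addr_write, val, memory_order_relaxed);
        pthread_mutex_unlock(&e->lock);
    }

    /* Lock-free read: an atomic load cannot be torn. If every lock-free
     * read came from the same thread that performed the write, as Emilio
     * argues above, a plain load would be safe as well, since a thread
     * cannot race with itself. */
    static uintptr_t entry_get(struct entry *e)
    {
        return atomic_load_explicit(&e->addr_write, memory_order_relaxed);
    }
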
On Wed, Oct 03, 2018 at 17:52:32 +0200, Paolo Bonzini wrote:

On 03/10/2018 17:48, Emilio G. Cota wrote:
>> it's probably best to do all atomic_set instead of just the memberwise copy.
> Atomics aren't necessary here, as long as the copy is protected by the
> lock. This allows other vCPUs to see a consistent view of the data (since
> they always acquire the tlb_lock).

For reads I agree, but you may actually get a torn read if the writer
doesn't use atomic_set.
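Sketch of the memberwise-copy argument (fields loosely modeled on
QEMU's CPUTLBEntry; the helper name is hypothetical): with both the
copier and the readers of whole entries holding the lock, plain
assignments already give a consistent view, and only the one field
that is also read without the lock needs an atomic store.

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdint.h>

    struct tlb_entry {
        uintptr_t addr_read;
        _Atomic uintptr_t addr_write;  /* also read lock-free elsewhere */
        uintptr_t addr_code;
        uintptr_t addend;
    };

    /* Hypothetical helper; caller holds tlb_lock. Plain stores suffice
     * for fields that are only ever accessed under the lock; addr_write
     * may be loaded concurrently by lock-free readers, so it alone is
     * stored atomically to rule out torn reads. */
    static void copy_tlb_entry_locked(struct tlb_entry *to,
                                      struct tlb_entry *from)
    {
        to->addr_read = from->addr_read;
        atomic_store_explicit(&to->addr_write,
                              atomic_load_explicit(&from->addr_write,
                                                   memory_order_relaxed),
                              memory_order_relaxed);
        to->addr_code = from->addr_code;
        to->addend = from->addend;
    }
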
On Wed, Oct 03, 2018 at 12:02:19 +0200, Paolo Bonzini wrote:

On 03/10/2018 11:19, Alex Bennée wrote:
>> Fix it by using tlb_lock, a per-vCPU lock. All updaters of tlb_table
>> and the corresponding victim cache now hold the lock.
>> The readers that do not hold tlb_lock must use atomic reads when
>> reading .addr_write, since this field can be updated by other
>> threads.

it's probably best to do all atomic_set instead of just the memberwise
copy.
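On the reader side, the requirement stated in the patch description
looks roughly like this (a sketch, not QEMU's code; QEMU expresses the
load with its atomic_read macro, and the function name here is made
up):

    #include <stdatomic.h>
    #include <stdint.h>

    /* Lock-free fast path: check whether addr falls on the page cached
     * in addr_write, without taking tlb_lock. The load must be atomic
     * because another vCPU may rewrite addr_write concurrently while
     * holding the lock. */
    static int tlb_writable_hit(_Atomic uintptr_t *addr_write_p,
                                uintptr_t addr, uintptr_t page_mask)
    {
        uintptr_t tlb_addr = atomic_load_explicit(addr_write_p,
                                                  memory_order_relaxed);
        return (addr & page_mask) == (tlb_addr & page_mask);
    }
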
Emilio G. Cota writes:
> Currently we rely on atomic operations for cross-CPU invalidations.
> There are two cases that these atomics miss: cross-CPU invalidations
> can race with either (1) vCPU threads flushing their TLB, which
> happens via memset, or (2) vCPUs calling tlb_reset_dirty on their
> TLB, which updates .addr_write with a regular store.
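A sketch of the two racy updaters named above, serialized by the lock
the patch introduces (standalone C with a pthread mutex; QEMU's
tlb_lock and table layout differ, and these function names are
illustrative):

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdint.h>
    #include <string.h>

    #define TLB_SIZE 256

    struct tlb {
        pthread_mutex_t lock;  /* the per-vCPU tlb_lock */
        _Atomic uintptr_t addr_write[TLB_SIZE];
    };

    /* (1) A vCPU flushing its own TLB: memset is a non-atomic bulk
     * write, so it must not run concurrently with a cross-CPU
     * invalidation; taking the lock serializes the two. */
    static void tlb_flush_locked(struct tlb *tlb)
    {
        pthread_mutex_lock(&tlb->lock);
        memset(tlb->addr_write, 0, sizeof(tlb->addr_write));
        pthread_mutex_unlock(&tlb->lock);
    }

    /* (2) A tlb_reset_dirty-style update: formerly a regular store that
     * could race with remote invalidations; now done under the lock,
     * with an atomic store for the benefit of lock-free readers. */
    static void tlb_reset_dirty_locked(struct tlb *tlb, int idx,
                                       uintptr_t dirty_bit)
    {
        pthread_mutex_lock(&tlb->lock);
        uintptr_t v = atomic_load_explicit(&tlb->addr_write[idx],
                                           memory_order_relaxed);
        atomic_store_explicit(&tlb->addr_write[idx], v & ~dirty_bit,
                              memory_order_relaxed);
        pthread_mutex_unlock(&tlb->lock);
    }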