On 09/27/2016 04:32 PM, Alex Bennée wrote:
> Richard Henderson writes:
>> On 09/27/2016 03:29 PM, Emilio G. Cota wrote:
>>> What's a quick-and-dirty way to disable the fast-path TLB lookups?
>>> Alex: you told me the monitor has an option for this, but I can't
>>> find it. I'm looking for something that'd go in tcg/i386 to simply
>>> bypass the fast path.

Richard Henderson writes:
> On 09/27/2016 03:29 PM, Emilio G. Cota wrote:
>> What's a quick-and-dirty way to disable the fast-path TLB lookups?
>> Alex: you told me the monitor has an option for this, but I can't
>> find it. I'm looking for something that'd go in tcg/i386 to simply
>> bypass the fast path.

On 09/27/2016 03:29 PM, Emilio G. Cota wrote:
> What's a quick-and-dirty way to disable the fast-path TLB lookups?
> Alex: you told me the monitor has an option for this, but I can't
> find it. I'm looking for something that'd go in tcg/i386 to simply
> bypass the fast path.

There is no easy way. If you
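
Conceptually, "bypassing the fast path" just means making the inline TLB
compare always miss so that every access funnels through the slow-path MMU
helper. A toy sketch of the shape of that lookup, with a made-up
BYPASS_TLB_FAST_PATH guard marking where the hack would go (none of these
names are QEMU's):

    #include <stdint.h>

    #define TLB_BITS   8
    #define PAGE_BITS  12
    #define PAGE_MASK  (~(((uintptr_t)1 << PAGE_BITS) - 1))

    typedef struct {
        uintptr_t addr;     /* guest page tag */
        uintptr_t addend;   /* guest-to-host offset for direct access */
    } TlbEntry;

    static TlbEntry tlb[1 << TLB_BITS];

    static uint8_t slow_path_ldb(uintptr_t vaddr)
    {
        return 0;           /* stand-in for the full MMU walk + refill */
    }

    static uint8_t ldb(uintptr_t vaddr)
    {
        TlbEntry *e = &tlb[(vaddr >> PAGE_BITS) & ((1 << TLB_BITS) - 1)];

    #ifdef BYPASS_TLB_FAST_PATH
        if (1) {            /* pretend every lookup misses */
    #else
        if (e->addr != (vaddr & PAGE_MASK)) {
    #endif
            return slow_path_ldb(vaddr);
        }
        return *(uint8_t *)(vaddr + e->addend);   /* fast path */
    }

In the real i386 backend this compare is emitted as generated code by
tcg_out_tlb_load() in tcg/i386/tcg-target.inc.c, so the equivalent hack
would be to emit an always-taken branch to the slow-path label there.
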
Emilio G. Cota writes:
> On Tue, Sep 27, 2016 at 18:16:45 +0200, Paolo Bonzini wrote:
>> Anyhow, the next step is to merge either cmpxchg-based atomics
>> or iothread-free single-threaded TCG. Either will do. :)
>>
>> I think that even iothread-free single-threaded TCG requires this
>> TLB stuff, because the iothread's address

On Tue, Sep 27, 2016 at 18:16:45 +0200, Paolo Bonzini wrote:
> Anyhow, the next step is to merge either cmpxchg-based atomics
> or iothread-free single-threaded TCG. Either will do. :)
>
> I think that even iothread-free single-threaded TCG requires this
> TLB stuff, because the iothread's address
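
For reference, the core idea of the cmpxchg-based series is that a guest
atomic read-modify-write runs as a host compare-and-swap loop on the
translated host address. A minimal standalone sketch using the GCC
__atomic builtins (illustrative only, not the actual patches):

    #include <stdint.h>
    #include <stdbool.h>

    /* Emulate a guest 32-bit atomic add: CAS loop on the host address
     * the softmmu lookup produced; returns the old value. */
    static uint32_t guest_atomic_add_u32(uint32_t *haddr, uint32_t val)
    {
        uint32_t old = __atomic_load_n(haddr, __ATOMIC_RELAXED);

        while (!__atomic_compare_exchange_n(haddr, &old, old + val,
                                            false, __ATOMIC_SEQ_CST,
                                            __ATOMIC_RELAXED)) {
            /* 'old' was refreshed by the failed CAS; just retry. */
        }
        return old;
    }
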
Paolo Bonzini writes:
> On 02/08/2016 08:37, Alex Bennée wrote:
>>> - in notdirty_mem_write, care must be put in the ordering of
>>> tb_invalidate_phys_page_fast (which itself calls tlb_unprotect_code and
>>> takes the tb_lock in tb_invalidate_phys_page_range) and tlb_set_dirty.
>>> At least it seems to me that the call to

On 02/08/2016 08:37, Alex Bennée wrote:
>> - in notdirty_mem_write, care must be put in the ordering of
>> tb_invalidate_phys_page_fast (which itself calls tlb_unprotect_code and
>> takes the tb_lock in tb_invalidate_phys_page_range) and tlb_set_dirty.
>> At least it seems to me that the call to
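
To spell out the ordering constraint under discussion, a simplified sketch
of the notdirty_mem_write flow (the guest store itself is elided, and this
is an annotated approximation of the tree at the time, not a verbatim
copy):

    static void notdirty_mem_write(void *opaque, hwaddr ram_addr,
                                   uint64_t val, unsigned size)
    {
        /* (1) While the page still looks clean for code, invalidate the
         *     TBs covering it. tb_invalidate_phys_page_fast reaches
         *     tb_invalidate_phys_page_range, which takes tb_lock and
         *     calls tlb_unprotect_code. */
        if (!cpu_physical_memory_get_dirty_flag(ram_addr,
                                                DIRTY_MEMORY_CODE)) {
            tb_invalidate_phys_page_fast(ram_addr, size);
        }

        /* (2) The guest store itself goes here (elided). */

        /* (3) Only afterwards mark the range dirty and let tlb_set_dirty
         *     clear TLB_NOTDIRTY, so no vCPU regains the fast write path
         *     while stale TBs could still be executed. Reordering (3)
         *     before (1) is the race being worried about. */
        cpu_physical_memory_set_dirty_range(ram_addr, size,
                                            DIRTY_CLIENTS_NOCODE);
        tlb_set_dirty(current_cpu, current_cpu->mem_io_vaddr);
    }
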
> On 08/02/2016 12:07 PM, Alex Bennée wrote:
> > This will work but I wonder if it is time to call it a day for 32 on 64
> > support? I mean all this can be worked around but I wonder if it is
> > worth the effort if no one actually uses this combination.
>
> I've been meaning to bring up exactly this question

On 08/02/2016 12:07 PM, Alex Bennée wrote:
> This will work but I wonder if it is time to call it a day for 32 on 64
> support? I mean all this can be worked around but I wonder if it is
> worth the effort if no one actually uses this combination.

I've been meaning to bring up exactly this question
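
The technical crux for the atomics work: a 64-bit guest compare-and-swap
needs a 64-bit host one, which a 32-bit host may only offer in awkward
forms (cmpxchg8b on i686) or via library locks. A standalone probe:

    #include <stdatomic.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        _Atomic uint64_t v = 0;

        /* 0 here means a 64-bit CAS falls back to locks on this host,
         * which is useless for emulating guest atomics. */
        printf("64-bit CAS lock-free: %d\n", atomic_is_lock_free(&v));
        return 0;
    }
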
> > - tlb_set_page_with_attrs is also hard-ish to get right, but perhaps the
> >   same idea of adding the callback last would work:
> >
> >     /* First set addr_write so that concurrent tlb_reset_dirty_range
> >      * finds a match.
> >      */
> >     te->addr_write = address;
> >     if (memory
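
A sketch of where that quoted fragment seems to be heading: publish the
plain address first, then add TLB_NOTDIRTY with an atomic OR if the page
is clean, so a concurrent tlb_reset_dirty_range sees either value but
never a torn one. The condition below is a guess standing in for the
truncated "if (memory" line, using real QEMU predicates but not claiming
to be the original text:

    /* First set addr_write so that concurrent tlb_reset_dirty_range
     * finds a match.
     */
    te->addr_write = address;
    if (memory_region_is_ram(section->mr)
        && cpu_physical_memory_is_clean(
               memory_region_get_ram_addr(section->mr) + xlat)) {
        /* Clean page: set the flag afterwards, atomically, so a racing
         * reader never observes a half-written address. */
        atomic_or(&te->addr_write, TLB_NOTDIRTY);
    }

Note this is exactly where the 32-on-64 question above bites: with a
64-bit target_ulong on a 32-bit host there is no cheap atomic OR of that
width.
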
Paolo Bonzini writes:
> On 26/07/2016 14:09, Alex Bennée wrote:
>>
>> As the eventual operation is the setting of a flag I'm wondering if we
>> can simply use atomic primitives to ensure we don't corrupt the lookup
>> address when setting the TLB_NOTDIRTY flag?
>
> In theory tlb_reset_dirty and tlb_set_dirty1 can use atomic_*

On 26/07/2016 14:09, Alex Bennée wrote:
>
> As the eventual operation is the setting of a flag I'm wondering if we
> can simply use atomic primitives to ensure we don't corrupt the lookup
> address when setting the TLB_NOTDIRTY flag?

In theory tlb_reset_dirty and tlb_set_dirty1 can use atomic_*
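
Concretely, the pair named above could look something like this (a sketch
of the direction, not committed code; atomic_or/atomic_read/atomic_set are
the qemu/atomic.h wrappers):

    /* Writer side: when write-protecting for dirty tracking, OR in
     * TLB_NOTDIRTY atomically so a racing fast-path lookup reads either
     * the old or the new word, never a mix. */
    static void tlb_reset_dirty_range(CPUTLBEntry *tlb_entry,
                                      uintptr_t start, uintptr_t length)
    {
        uintptr_t addr = tlb_entry->addr_write;

        if ((addr & (TLB_INVALID_MASK | TLB_MMIO | TLB_NOTDIRTY)) == 0) {
            addr &= TARGET_PAGE_MASK;
            addr += tlb_entry->addend;
            if ((addr - start) < length) {
                atomic_or(&tlb_entry->addr_write, TLB_NOTDIRTY);
            }
        }
    }

    /* ...and the companion clear: drop TLB_NOTDIRTY with a plain atomic
     * store once the page has actually been dirtied. */
    static void tlb_set_dirty1(CPUTLBEntry *tlb_entry, target_ulong vaddr)
    {
        if (atomic_read(&tlb_entry->addr_write) == (vaddr | TLB_NOTDIRTY)) {
            atomic_set(&tlb_entry->addr_write, vaddr);
        }
    }

Both still assume the host can do atomic accesses at target_ulong width,
which circles back to the 32-on-64 discussion.
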
Hi,

While I've been re-spinning the base patches I've brought forward some
of the async work for cputlb done on the ARM enabling set. Thanks to
Sergey's consolidation work we have a robust mechanism for halting all
vCPUs to get work done if we need to. The cputlb changes are actually
independent of
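
For context, the "halting all vCPUs to get work done" mechanism referred
to is the async safe work API from the cpus-common consolidation; a sketch
of how cputlb can lean on it for a cross-vCPU flush (signatures
approximate for this point in the tree; tlb_flush_all_cpus_safe is a
made-up wrapper name):

    /* Instead of touching another vCPU's TLB directly (racy), queue the
     * flush as safe work: it runs once every vCPU has stopped outside
     * its execution loop. */
    static void flush_work(CPUState *cpu, void *opaque)
    {
        tlb_flush(cpu, 1);              /* flush_global */
    }

    void tlb_flush_all_cpus_safe(void)
    {
        CPUState *cpu;

        CPU_FOREACH(cpu) {
            async_safe_run_on_cpu(cpu, flush_work, NULL);
        }
    }
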