> On 16 Jan 2015, at 09:07, Jan Kiszka wrote:
>
> On 2015-01-16 08:25, Mark Burton wrote:
>>
>>> On 15 Jan 2015, at 22:41, Paolo Bonzini wrote:
>>>
>>> On 15/01/2015 21:53, Mark Burton wrote:
>>>>> Jan said he had it working at least on ARM (MusicPal).
>>>>
>>>> yeah - our problem is when we enable multi-threads - which I don't believe
>>>> Jan did…
On 2015-01-16 08:25, Mark Burton wrote:
>
>> On 15 Jan 2015, at 22:41, Paolo Bonzini wrote:
>>
>> On 15/01/2015 21:53, Mark Burton wrote:
>>>> Jan said he had it working at least on ARM (MusicPal).
>>>
>>> yeah - our problem is when we enable multi-threads - which I don't believe
>>> Jan did…
> On 15 Jan 2015, at 22:41, Paolo Bonzini wrote:
>
> On 15/01/2015 21:53, Mark Burton wrote:
>>> Jan said he had it working at least on ARM (MusicPal).
>>
>> yeah - our problem is when we enable multi-threads - which I don't believe
>> Jan did…
>
> Multithreaded TCG, or single-threaded TCG with SMP?
On 15/01/2015 21:53, Mark Burton wrote:
>> Jan said he had it working at least on ARM (MusicPal).
>
> yeah - our problem is when we enable multi-threads - which I don't believe Jan
> did…

Multithreaded TCG, or single-threaded TCG with SMP?

> One thing I wonder - why do we need to go to the extent of mutex
> On 15 Jan 2015, at 21:27, Paolo Bonzini wrote:
>
> On 15/01/2015 20:07, Mark Burton wrote:
>> However - if we go this route - the current patch is only for x86.
>> (apart from the fact that we still seem to land in a deadlock…)
>
> Jan said he had it working at least on ARM (MusicPal).
On 15/01/2015 20:07, Mark Burton wrote:
> However - if we go this route - the current patch is only for x86.
> (apart from the fact that we still seem to land in a deadlock…)

Jan said he had it working at least on ARM (MusicPal).

> One thing I wonder - why do we need to go to the extent of mutex
Still in agony on this issue - I've CC'd Jan as his patch looks important…
The patch below would seem to offer by far and away the best result here. (If
only we could get it working ;-) )
It allows threads to proceed as we want them to, and it means we don't have
to 'count' the number of CPUs…

I think we call that flag "please don't reallocate this TB until at least after
a CPU has exited and we do a global flush"… So if we sync and get all CPUs to
exit on a global flush, this flag is only there as a figment of our imagination…
e.g. we're safe without it?
Wish I could say the same of…
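To make the flush reasoning concrete, a rough sketch (not actual QEMU code:
the barrier and resume helpers are invented, and tb_flush()'s exact signature
varies between QEMU versions):

    /* If every vCPU is forced out of cpu_exec() before the flush runs, no
     * vCPU can still be executing a TB while its memory is recycled, so no
     * per-TB "don't reallocate yet" flag is needed. */
    static void global_tb_flush(void)
    {
        CPUState *cpu;

        CPU_FOREACH(cpu) {
            cpu_exit(cpu);               /* ask each vCPU to leave the TB loop */
        }
        wait_for_all_vcpus_parked();     /* invented barrier: all threads out */

        tb_flush(first_cpu);             /* safe: nobody runs translated code */

        resume_all_vcpus_parked();       /* invented: let the threads continue */
    }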
On 15/01/2015 12:14, Alexander Graf wrote:
> On 15.01.15 12:12, Paolo Bonzini wrote:
>> [now with correct listserver address]
>>
>> On 15/01/2015 11:25, Frederic Konrad wrote:
>>> Hi everybody,
>>>
>>> In case of multithread TCG what is the best way to handle
>>> qemu_global_mutex?
>>> We thought to have one mutex per vcpu and then synchronize vcpu threads
>>> when they exit (e.g. in tcg_exec_all).
On 15 January 2015 at 13:27, Frederic Konrad wrote:
> PS: Any idea why listserver is dropped from listserver.greensocs.com?
Paolo's mail client apparently has a bizarre allergy to the correct
address...
-- PMM
On 15/01/2015 13:51, Frederic Konrad wrote:
>
> Thanks for the reply.
>
> As I understand it, Jan's idea is to unlock the global_mutex during tcg
> execution.
> Is that right?
> So that means it's currently not the case and we won't be able to run
> two TCG threads at the same time?

Yes.
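Concretely, the pattern under discussion looks roughly like this (a sketch
only, not Jan's actual patch; the shutdown check is invented, cpu_exec()'s
exact signature differs between QEMU versions, and today this whole loop runs
with the global mutex held):

    static void *tcg_vcpu_thread_fn(void *arg)
    {
        CPUState *cpu = arg;

        qemu_mutex_lock_iothread();         /* per-vCPU setup under the lock */

        while (!vcpu_should_quit(cpu)) {    /* invented shutdown check */
            qemu_mutex_unlock_iothread();   /* drop the lock: several TCG
                                             * threads may now run guest
                                             * code at the same time */
            cpu_exec(cpu);                  /* execute translated code */
            qemu_mutex_lock_iothread();     /* retake it for MMIO, interrupts
                                             * and other shared state */
        }

        qemu_mutex_unlock_iothread();
        return NULL;
    }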
On 15/01/2015 12:14, Alexander Graf wrote:
>>> Once you have >1 VCPU thread you'll need the RCU work that I am slowly
>>> polishing and sending out. That's because one device can change the
>>> memory map, and that will cause a tlb_flush for all CPUs in tcg_commit,
>>> and that's not thread-safe
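Sketched, the RCU point is this (illustrative only: the load helper and the
dispatch-free callback are made up, and the real API is whatever Paolo's
series defines):

    static uint64_t do_address_space_load(AddressSpace *as, hwaddr addr); /* made up */
    static void dispatch_free(AddressSpaceDispatch *d);                   /* made up */

    /* Readers walk the memory map inside an RCU read-side critical section;
     * an updater publishes a new map and frees the old one only after a
     * grace period, so a vCPU in the middle of a lookup never sees the map
     * freed under its feet. */
    uint64_t guest_load(AddressSpace *as, hwaddr addr)
    {
        uint64_t val;

        rcu_read_lock();
        val = do_address_space_load(as, addr);
        rcu_read_unlock();
        return val;
    }

    void set_memory_map(AddressSpaceDispatch **p, AddressSpaceDispatch *new)
    {
        AddressSpaceDispatch *old = *p;

        atomic_rcu_set(p, new);              /* publish the new map */
        call_rcu(old, dispatch_free, rcu);   /* reclaim after a grace period */
    }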
On 15.01.15 12:12, Paolo Bonzini wrote:
> [now with correct listserver address]
>
> On 15/01/2015 11:25, Frederic Konrad wrote:
>> Hi everybody,
>>
>> In case of multithread TCG what is the best way to handle
>> qemu_global_mutex?
>> We thought to have one mutex per vcpu and then synchronize vcpu threads
>> when they exit (e.g. in tcg_exec_all).
[now with correct listserver address]

On 15/01/2015 11:25, Frederic Konrad wrote:
> Hi everybody,
>
> In case of multithread TCG what is the best way to handle
> qemu_global_mutex?
> We thought to have one mutex per vcpu and then synchronize vcpu threads when
> they exit (e.g. in tcg_exec_all).

The basic ideas from Jan's patch in htt
On 15 January 2015 at 10:25, Frederic Konrad wrote:
> Hi everybody,
>
> In case of multithread TCG what is the best way to handle qemu_global_mutex?

It shouldn't need any changes, I think. You're basically bringing
TCG into line with what KVM already has -- one thread per guest
CPU; and qemu_global_mutex
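That is, the same shape the KVM accelerator already uses (compare
qemu_kvm_start_vcpu() in cpus.c); a hypothetical TCG counterpart would be
along these lines, with tcg_vcpu_thread_fn being the per-vCPU loop sketched
earlier:

    static void qemu_tcg_start_vcpu(CPUState *cpu)
    {
        char name[16];

        /* one host thread per guest CPU, as KVM does today */
        snprintf(name, sizeof(name), "CPU %d/TCG", cpu->cpu_index);
        qemu_thread_create(cpu->thread, name, tcg_vcpu_thread_fn,
                           cpu, QEMU_THREAD_JOINABLE);
    }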
Hi everybody,
In case of multithread TCG what is the best way to handle qemu_global_mutex?
We thought to have one mutex per vcpu and then synchronize vcpu threads when
they exit (e.g. in tcg_exec_all).
Does that make sense?
Thanks,
Fred
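A rough reading of that proposal in code (every name below is invented for
illustration, not the eventual design): each vCPU thread executes under its
own mutex, and a global operation synchronizes with the threads "when they
exit" by kicking them and then taking every per-vCPU mutex.

    static QemuMutex vcpu_lock[MAX_VCPUS];      /* one mutex per vCPU */

    static void *vcpu_thread(void *arg)
    {
        CPUState *cpu = arg;

        while (!vcpu_should_stop(cpu)) {        /* invented shutdown check */
            qemu_mutex_lock(&vcpu_lock[cpu->cpu_index]);
            tcg_cpu_exec(cpu);                  /* cpu_exit() makes this return */
            qemu_mutex_unlock(&vcpu_lock[cpu->cpu_index]);
        }
        return NULL;
    }

    static void sync_all_vcpus(void (*fn)(void))
    {
        int i;

        for (i = 0; i < smp_cpus; i++) {
            cpu_exit(qemu_get_cpu(i));          /* kick vCPU i out of its slice */
            qemu_mutex_lock(&vcpu_lock[i]);     /* blocks until it has exited */
        }
        fn();                                   /* e.g. a global TB flush */
        for (i = 0; i < smp_cpus; i++) {
            qemu_mutex_unlock(&vcpu_lock[i]);
        }
    }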