On 07/07/2015 14:27, Alex Bennée wrote:
Frederic Konrad <fred.kon...@greensocs.com> writes:
On 26/06/2015 17:02, Paolo Bonzini wrote:
On 26/06/2015 16:47, fred.kon...@greensocs.com wrote:
From: KONRAD Frederic <fred.kon...@greensocs.com>
This removes the tcg_halt_cond global variable.
We need one QemuCond per virtual CPU for multithreaded TCG.
Signed-off-by: KONRAD Frederic <fred.kon...@greensocs.com>
<snip>
@@ -1068,7 +1065,7 @@ static void *qemu_tcg_cpu_thread_fn(void *arg)
qemu_clock_notify(QEMU_CLOCK_VIRTUAL);
}
}
- qemu_tcg_wait_io_event();
+ qemu_tcg_wait_io_event(QTAILQ_FIRST(&cpus));
Does this work (for non-multithreaded TCG) if tcg_thread_fn is waiting
on the "wrong" condition variable? For example if all CPUs are idle and
the second CPU wakes up, qemu_tcg_wait_io_event won't be kicked out of
the wait.
I think you need to have a CPUThread struct like this:
struct CPUThread {
    QemuThread thread;
    QemuCond halt_cond;
};
and in CPUState have a CPUThread * field instead of the thread and
halt_cond fields.
Then single-threaded TCG can point all CPUStates to the same instance of
the struct, while multi-threaded TCG can point each CPUState to a
different struct.
Paolo
Hmm, probably not, though we didn't pay attention to keeping non-MTTCG
working (which is probably not good).
<snip>
You may want to consider pushing a branch up to a GitHub mirror and
enabling travis-ci on the repo. That way you'll at least know how broken
the rest of the tree is.
I appreciate we are still at the RFC stage here but it will probably pay
off in the long run to try and avoid breaking the rest of the tree ;-)
Good point :)
Fred