Richard Henderson <r...@twiddle.net> writes:

> On 01/24/2017 12:34 PM, Alex Bennée wrote:
>>
>> Richard Henderson <r...@twiddle.net> writes:
>>
>>> On 01/19/2017 09:04 AM, Alex Bennée wrote:
>>>> +/* flush_all_helper: run fn across all cpus
>>>> + *
>>>> + * If the wait flag is set then the src cpu's helper will be queued as
>>>> + * "safe" work and the loop exited, creating a synchronisation point
>>>> + * where all queued work will be finished before execution starts
>>>> + * again.
>>>> + */
>>>> +static void flush_all_helper(CPUState *src, bool wait,
>>>> +                             run_on_cpu_func fn, run_on_cpu_data d)
>>>> +{
>>>> +    CPUState *cpu;
>>>> +
>>>> +    if (!wait) {
>>>> +        CPU_FOREACH(cpu) {
>>>> +            if (cpu != src) {
>>>> +                async_run_on_cpu(cpu, fn, d);
>>>> +            } else {
>>>> +                g_assert(qemu_cpu_is_self(src));
>>>> +                fn(src, d);
>>>> +            }
>>>> +        }
>>>> +    } else {
>>>> +        CPU_FOREACH(cpu) {
>>>> +            if (cpu != src) {
>>>> +                async_run_on_cpu(cpu, fn, d);
>>>> +            } else {
>>>> +                async_safe_run_on_cpu(cpu, fn, d);
>>>> +            }
>>>> +        }
>>>> +        cpu_loop_exit(src);
>>>> +    }
>>>> +}
>>>
>>> What's the rationale for not having the target do the exit itself?
>>> Surely it can tell, and simply end the TB after the insn.
>>
>> It's more for the global sync functionality. I wanted to keep all the
>> guts of re-starting the loop with the correct async_safe_work all in one
>> place with a defined API for the guests rather than have them all do it
>> themselves.
>>
>> For the common case of not needing to sync across the cores I agree the
>> guest is perfectly able to end the TB so its safe work completes next.
>> In fact the ARM helper calls do exactly that.
>
> Hmm. Would it make more sense to have two functions then, for wait and
> !wait? That would allow the wait function to be QEMU_NORETURN, which
> might make it a bit more obvious about the interface contract.
Seems fair. I was worried about multiplying out too many variants in the
API, but this seems like a good reason to.
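Roughly, I imagine the split looking something like the sketch below. The
names (flush_all_helper_async and flush_all_helper_synced) are just
placeholders for illustration and this is untested, but it shows the
synced variant carrying the QEMU_NORETURN annotation as you suggest:

/* Run fn across all cpus other than src asynchronously; src runs it
 * immediately on its own thread.
 */
static void flush_all_helper_async(CPUState *src, run_on_cpu_func fn,
                                   run_on_cpu_data d)
{
    CPUState *cpu;

    CPU_FOREACH(cpu) {
        if (cpu != src) {
            async_run_on_cpu(cpu, fn, d);
        } else {
            g_assert(qemu_cpu_is_self(src));
            fn(src, d);
        }
    }
}

/* Queue fn as "safe" work on every cpu (including src) and exit the
 * cpu loop, so all queued work is finished before execution starts
 * again. Never returns, which QEMU_NORETURN makes explicit in the
 * interface contract.
 */
static void QEMU_NORETURN flush_all_helper_synced(CPUState *src,
                                                  run_on_cpu_func fn,
                                                  run_on_cpu_data d)
{
    CPUState *cpu;

    CPU_FOREACH(cpu) {
        if (cpu != src) {
            async_run_on_cpu(cpu, fn, d);
        } else {
            async_safe_run_on_cpu(cpu, fn, d);
        }
    }
    cpu_loop_exit(src);
}

Callers that currently pass wait=true would then call the synced variant
and rely on it not returning, rather than checking a flag.

--
Alex Bennée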