Pranith Kumar <bobby.pr...@gmail.com> writes:

> The current method of executing atomic code in a guest uses
> cpu_exec_step_atomic() from the outermost loop. This causes an abort()
> when single stepping over atomic code since the debug exception longjmp
> will point to the sigsetjmp in cpu_exec(). Another issue with this
> mechanism is that the flags which were set in atomic execution will be
> lost since we do not call cpu_exec_enter().
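
To make the control flow concrete: cpu_exec() arms cpu->jmp_env with
sigsetjmp() and cpu_loop_exit() siglongjmp()s back to it, so anything
that can raise an exception has to run while that frame is still live.
Here is a toy model (stand-alone, all names invented, nothing like the
real cpu_exec()) of why dispatching EXCP_ATOMIC inside the sigsetjmp
scope keeps the longjmp target valid:

  /* Toy model only: all names here (toy_cpu, toy_exec, EXCP_* values)
   * are invented for illustration and are not the real QEMU code. */
  #include <setjmp.h>
  #include <stdio.h>

  enum { EXCP_NONE = -1, EXCP_DEBUG = 1, EXCP_ATOMIC = 2 };

  struct toy_cpu {
      sigjmp_buf jmp_env;          /* set up once per toy_exec() call */
      int exception_index;
  };

  /* Guest execution reports exceptions by jumping back to jmp_env,
   * so jmp_env must belong to a frame that is still on the stack. */
  static void toy_loop_exit(struct toy_cpu *cpu, int excp)
  {
      cpu->exception_index = excp;
      siglongjmp(cpu->jmp_env, 1);
  }

  static void toy_exec_step_atomic(struct toy_cpu *cpu)
  {
      printf("stepping one atomic instruction exclusively\n");
  }

  static void toy_exec(struct toy_cpu *cpu)
  {
      volatile int steps = 0;      /* survives the siglongjmp */

      sigsetjmp(cpu->jmp_env, 0);  /* every exception lands here */

      while (steps++ < 3) {
          switch (cpu->exception_index) {
          case EXCP_DEBUG:
              printf("debug exception handled\n");
              break;
          case EXCP_ATOMIC:
              /* Handled inside the sigsetjmp scope: if the atomic step
               * raises a debug exception, jmp_env is still valid. */
              toy_exec_step_atomic(cpu);
              break;
          }
          cpu->exception_index = EXCP_NONE;
          /* Pretend the next translation block ends in an atomic op. */
          toy_loop_exit(cpu, EXCP_ATOMIC);
      }
  }

  int main(void)
  {
      struct toy_cpu cpu = { .exception_index = EXCP_NONE };
      toy_exec(&cpu);
      return 0;
  }
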
I should note that the original patch (which is still in my tree so I
guess I should squash it) says:

  The patch enables handling atomic code in the guest. This should
  preferably be done in cpu_handle_exception(), but the current
  assumptions regarding when we can execute atomic sections cause a
  deadlock.

> The following patch moves atomic exception handling to the exception
> handler where all these issues are taken care of. The change in
> start_exclusive() is necessary since now the cpu in atomic execution
> will have its running flag set, but we do not want to count it as
> pending.
>
> Thanks to Alex for helping me debug the issue.
>
> CC: Alex Bennée <alex.ben...@linaro.org>
> CC: Richard Henderson <r...@twiddle.net>
> CC: Paolo Bonzini <pbonz...@redhat.com>
> Signed-off-by: Pranith Kumar <bobby.pr...@gmail.com>
> ---
>  cpu-exec.c    | 2 ++
>  cpus-common.c | 2 +-
>  cpus.c        | 4 ----
>  3 files changed, 3 insertions(+), 5 deletions(-)
>
> diff --git a/cpu-exec.c b/cpu-exec.c
> index b0ddada8c1..dceacfc5dd 100644
> --- a/cpu-exec.c
> +++ b/cpu-exec.c
> @@ -427,6 +427,8 @@ static inline bool cpu_handle_exception(CPUState *cpu, int *ret)
>          *ret = cpu->exception_index;
>          if (*ret == EXCP_DEBUG) {
>              cpu_handle_debug_exception(cpu);
> +        } else if (*ret == EXCP_ATOMIC) {
> +            cpu_exec_step_atomic(cpu);
>          }
>          cpu->exception_index = -1;
>          return true;
> diff --git a/cpus-common.c b/cpus-common.c
> index 59f751ecf9..7b859752ea 100644
> --- a/cpus-common.c
> +++ b/cpus-common.c
> @@ -192,7 +192,7 @@ void start_exclusive(void)
>      smp_mb();
>      running_cpus = 0;
>      CPU_FOREACH(other_cpu) {
> -        if (atomic_read(&other_cpu->running)) {
> +        if (atomic_read(&other_cpu->running) && !qemu_cpu_is_self(other_cpu)) {

The comment above start_exclusive() reads:

  Must only be called from outside cpu_exec.

So we need to revise this comment. Is this really a limitation or was it
originally the design goal? (See the P.S. below for a toy model of the
revised counting.)

>              other_cpu->has_waiter = true;
>              running_cpus++;
>              qemu_cpu_kick(other_cpu);
> diff --git a/cpus.c b/cpus.c
> index e1b82bcd49..981f23d52b 100644
> --- a/cpus.c
> +++ b/cpus.c
> @@ -1461,10 +1461,6 @@ static void *qemu_tcg_cpu_thread_fn(void *arg)
>                   */
>                  g_assert(cpu->halted);
>                  break;
> -            case EXCP_ATOMIC:
> -                qemu_mutex_unlock_iothread();
> -                cpu_exec_step_atomic(cpu);
> -                qemu_mutex_lock_iothread();
>              default:
>                  /* Ignore everything else? */
>                  break;

--
Alex Bennée
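
P.S. On the start_exclusive() change: a minimal stand-alone model of the
counting (invented names, not the real cpus-common.c) may help. The
requesting CPU is now itself inside cpu_exec with its running flag set,
so if it counted itself it would wait for a CPU that never halts:

  /* Toy model only: names (toy_cpu, count_pending, NR_CPUS) are
   * invented and this is not the real cpus-common.c logic. */
  #include <stdbool.h>
  #include <stdio.h>

  #define NR_CPUS 4

  struct toy_cpu {
      bool running;                /* "inside cpu_exec" */
  };

  static struct toy_cpu cpus[NR_CPUS];

  /* The CPU asking for exclusive execution counts how many *other*
   * running CPUs it must kick and wait for.  Counting itself as well
   * would make it wait for a CPU that never stops: a deadlock. */
  static int count_pending(const struct toy_cpu *self)
  {
      int pending = 0;

      for (int i = 0; i < NR_CPUS; i++) {
          if (cpus[i].running && &cpus[i] != self) {
              pending++;
          }
      }
      return pending;
  }

  int main(void)
  {
      for (int i = 0; i < NR_CPUS; i++) {
          cpus[i].running = true;
      }

      /* CPU 0 hits an atomic instruction and wants exclusive access. */
      printf("CPUs to wait for: %d (not %d)\n",
             count_pending(&cpus[0]), NR_CPUS);
      return 0;
  }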