On Sun, Oct 21, 2018 at 14:34:25 +0100, Richard Henderson wrote:
> On 10/19/18 2:06 AM, Emilio G. Cota wrote:
> > @@ -540,16 +540,16 @@ static inline bool cpu_handle_interrupt(CPUState *cpu,
> >           */
> >          atomic_mb_set(&cpu->icount_decr.u16.high, 0);
> >  
> > -        if (unlikely(atomic_read(&cpu->interrupt_request))) {
> > +        if (unlikely(cpu_interrupt_request(cpu))) {
> >              int interrupt_request;
> >              qemu_mutex_lock_iothread();
> > -            interrupt_request = cpu->interrupt_request;
> > +            interrupt_request = cpu_interrupt_request(cpu);
> >              if (unlikely(cpu->singlestep_enabled & SSTEP_NOIRQ)) {
> >                  /* Mask out external interrupts for this step. */
> >                  interrupt_request &= ~CPU_INTERRUPT_SSTEP_MASK;
> >              }
> >              if (interrupt_request & CPU_INTERRUPT_DEBUG) {
> > -                cpu->interrupt_request &= ~CPU_INTERRUPT_DEBUG;
> > +                cpu_reset_interrupt(cpu, CPU_INTERRUPT_DEBUG);
> >                  cpu->exception_index = EXCP_DEBUG;
> >                  qemu_mutex_unlock_iothread();
> >                  return true;
> 
> Multiple calls.
I'd rather keep it as is.

The first read takes the lock, and that has to stay unless we want to use
atomic_set on interrupt_request everywhere. The second read happens after
the BQL has been acquired; note that to avoid deadlock we never acquire
the BQL after a CPU lock, so the second (locked) read has to stay as well.
Subsequent accesses are all via cpu_reset_interrupt.

If we wanted to avoid reacquiring the lock, we'd have to explicitly
acquire it before the second read and add unlocks everywhere (mirroring
the many qemu_mutex_unlock_iothread calls), which would be ugly. We'd
also have to be careful not to longjmp with the CPU mutex held, so we'd
have to unlock/lock around cc->cpu_exec_interrupt.

Given that the CPU lock is uncontended (so it's cheap to acquire) and
that the cases where we call cpu_reset_interrupt are not that frequent
(CPU_INTERRUPT_{DEBUG,HALT,EXITTB}), I'd rather just keep the patch as is.

Thanks,

		Emilio