On Sun, Oct 21, 2018 at 13:56:59 +0100, Richard Henderson wrote:
> On 10/19/18 2:05 AM, Emilio G. Cota wrote:
> > @@ -1088,11 +1088,13 @@ static target_ulong h_cede(PowerPCCPU *cpu, sPAPRMachineState *spapr,
> >  
> >      env->msr |= (1ULL << MSR_EE);
> >      hreg_compute_hflags(env);
> > +    cpu_mutex_lock(cs);
> >      if (!cpu_has_work(cs)) {
> > -        cs->halted = 1;
> > +        cpu_halted_set(cs, 1);
> >          cs->exception_index = EXCP_HLT;
> >          cs->exit_request = 1;
> >      }
> > +    cpu_mutex_unlock(cs);
> >      return H_SUCCESS;
> 
> Why does this one get extra locking?
It's taking into account that later in the series we expand the CPU lock
to cover cpu_has_work. I've added the following note to this patch's
commit log:

> In hw/ppc/spapr_hcall.c, acquire the lock just once to
> update cpu->halted and call cpu_has_work, since later
> in the series we'll acquire the BQL (if not already held)
> from cpu_has_work.

Thanks,

		Emilio
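
P.S. For what it's worth, a tiny stand-alone sketch of the locking pattern
(plain pthreads, made-up names such as FakeCPU, fake_cpu_halted_set and
fake_cpu_has_work; the conditional locking inside the helper is just an
assumption for illustration, not necessarily how the series implements it):

/*
 * Stand-alone sketch of the pattern above: take the per-CPU lock once,
 * then do both the work check and the halted update under it, instead
 * of letting each helper lock on its own. All names here are invented
 * for illustration; this is not QEMU code.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

typedef struct FakeCPU {
    pthread_mutex_t lock;     /* stands in for the per-CPU mutex */
    int halted;
    int pending_work;
} FakeCPU;

/* Does the current thread hold this CPU's lock? (single CPU in this sketch) */
static _Thread_local bool fake_cpu_lock_held;

static void fake_cpu_mutex_lock(FakeCPU *cpu)
{
    pthread_mutex_lock(&cpu->lock);
    fake_cpu_lock_held = true;
}

static void fake_cpu_mutex_unlock(FakeCPU *cpu)
{
    fake_cpu_lock_held = false;
    pthread_mutex_unlock(&cpu->lock);
}

/* Locks on its own only if the caller does not already hold the lock
 * (an assumption made for this sketch). */
static void fake_cpu_halted_set(FakeCPU *cpu, int val)
{
    if (fake_cpu_lock_held) {
        cpu->halted = val;
        return;
    }
    fake_cpu_mutex_lock(cpu);
    cpu->halted = val;
    fake_cpu_mutex_unlock(cpu);
}

static bool fake_cpu_has_work(FakeCPU *cpu)
{
    /* This is where heavier locking (e.g. the BQL) would eventually be
     * acquired if not already held, per the commit-log note above. */
    return cpu->pending_work != 0;
}

int main(void)
{
    FakeCPU cpu = { .lock = PTHREAD_MUTEX_INITIALIZER };

    /* Mirrors the h_cede() hunk: one lock acquisition covers both the
     * work check and the halted update. */
    fake_cpu_mutex_lock(&cpu);
    if (!fake_cpu_has_work(&cpu)) {
        fake_cpu_halted_set(&cpu, 1);
    }
    fake_cpu_mutex_unlock(&cpu);

    printf("halted=%d\n", cpu.halted);
    return 0;
}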