On Fri, Dec 04, 2020 at 12:48:52AM +0100, Alexander Graf wrote:
> The hooks we have that call us after reset, init and loadvm really all
> just want to say "The reference of all register state is in the QEMU
> vcpu struct, please push it".
>
> We already have a working pushing mechanism though called cpu->vcpu_dirty,
> so we can just reuse that for all of the above, syncing state properly the
> next time we actually execute a vCPU.
>
> This fixes PSCI resets on ARM, as they modify CPU state even after the
> post init call has completed, but before we execute the vCPU again.
>
> To also make the scheme work for x86, we have to make sure we don't
> move stale eflags into our env when the vcpu state is dirty.
>
> Signed-off-by: Alexander Graf <ag...@csgraf.de>
> ---
>  accel/hvf/hvf-cpus.c     | 27 +++++++--------------------
>  target/i386/hvf/x86hvf.c |  5 ++++-
>  2 files changed, 11 insertions(+), 21 deletions(-)
>
> diff --git a/accel/hvf/hvf-cpus.c b/accel/hvf/hvf-cpus.c
> index 1b0c868944..71721e17de 100644
> --- a/accel/hvf/hvf-cpus.c
> +++ b/accel/hvf/hvf-cpus.c
> @@ -275,39 +275,26 @@ static void hvf_cpu_synchronize_state(CPUState *cpu)
>      }
>  }
>
> -static void do_hvf_cpu_synchronize_post_reset(CPUState *cpu,
> -                                              run_on_cpu_data arg)
> +static void do_hvf_cpu_synchronize_set_dirty(CPUState *cpu,
> +                                             run_on_cpu_data arg)
>  {
> -    hvf_put_registers(cpu);
> -    cpu->vcpu_dirty = false;
> +    /* QEMU state is the reference, push it to HVF now and on next entry */

It's only signalling now. The actual push is delayed until the next
entry.

It'd be good if Paolo or Eduardo would also take a look at this change,
because it makes HVF a bit different from the other accelerators: HVF's
post_reset, post_init and pre_loadvm no longer result in QEMU state
being pushed to HVF immediately. I'm not sure I can fully grasp whether
there are undesired side effects of this, so it's something worth
broader review.

If nobody raises objections:

Reviewed-by: Roman Bolshakov <r.bolsha...@yadro.com>
Tested-by: Roman Bolshakov <r.bolsha...@yadro.com>

Thanks,
Roman

> +    cpu->vcpu_dirty = true;
>  }
>
>  static void hvf_cpu_synchronize_post_reset(CPUState *cpu)
>  {
> -    run_on_cpu(cpu, do_hvf_cpu_synchronize_post_reset, RUN_ON_CPU_NULL);
> -}
> -
> -static void do_hvf_cpu_synchronize_post_init(CPUState *cpu,
> -                                             run_on_cpu_data arg)
> -{
> -    hvf_put_registers(cpu);
> -    cpu->vcpu_dirty = false;
> +    run_on_cpu(cpu, do_hvf_cpu_synchronize_set_dirty, RUN_ON_CPU_NULL);
>  }
>
>  static void hvf_cpu_synchronize_post_init(CPUState *cpu)
>  {
> -    run_on_cpu(cpu, do_hvf_cpu_synchronize_post_init, RUN_ON_CPU_NULL);
> -}
> -
> -static void do_hvf_cpu_synchronize_pre_loadvm(CPUState *cpu,
> -                                              run_on_cpu_data arg)
> -{
> -    cpu->vcpu_dirty = true;
> +    run_on_cpu(cpu, do_hvf_cpu_synchronize_set_dirty, RUN_ON_CPU_NULL);
>  }
>
>  static void hvf_cpu_synchronize_pre_loadvm(CPUState *cpu)
>  {
> -    run_on_cpu(cpu, do_hvf_cpu_synchronize_pre_loadvm, RUN_ON_CPU_NULL);
> +    run_on_cpu(cpu, do_hvf_cpu_synchronize_set_dirty, RUN_ON_CPU_NULL);
>  }
>
>  static void hvf_vcpu_destroy(CPUState *cpu)
> diff --git a/target/i386/hvf/x86hvf.c b/target/i386/hvf/x86hvf.c
> index 0f2aeb1cf8..3111c0be4c 100644
> --- a/target/i386/hvf/x86hvf.c
> +++ b/target/i386/hvf/x86hvf.c
> @@ -435,7 +435,10 @@ int hvf_process_events(CPUState *cpu_state)
>      X86CPU *cpu = X86_CPU(cpu_state);
>      CPUX86State *env = &cpu->env;
>
> -    env->eflags = rreg(cpu_state->hvf->fd, HV_X86_RFLAGS);
> +    if (!cpu_state->vcpu_dirty) {
> +        /* light weight sync for CPU_INTERRUPT_HARD and IF_MASK */
> +        env->eflags = rreg(cpu_state->hvf->fd, HV_X86_RFLAGS);
> +    }
>
>      if (cpu_state->interrupt_request & CPU_INTERRUPT_INIT) {
>          cpu_synchronize_state(cpu_state);
> --
> 2.24.3 (Apple Git-128)
>