On Tue, Dec 01, 2015 at 11:55:58AM +1100, David Gibson wrote:
> On Fri, Nov 20, 2015 at 06:24:33PM +0530, Bharata B Rao wrote:
> > From: Gu Zheng <guz.f...@cn.fujitsu.com>
> > 
> > In order to deal well with the kvm vcpus (which can not be removed without any
> > protection), we do not close KVM vcpu fd, just record and mark it as stopped
> > into a list, so that we can reuse it for the appending cpu hot-add request if
> > possible. It is also the approach that kvm guys suggested:
> > https://www.mail-archive.com/kvm@vger.kernel.org/msg102839.html
> > 
> > Signed-off-by: Chen Fan <chen.fan.f...@cn.fujitsu.com>
> > Signed-off-by: Gu Zheng <guz.f...@cn.fujitsu.com>
> > Signed-off-by: Zhu Guihua <zhugh.f...@cn.fujitsu.com>
> > Signed-off-by: Bharata B Rao <bhar...@linux.vnet.ibm.com>
> >                [- Explicit CPU_REMOVE() from qemu_kvm/tcg_destroy_vcpu()
> >                   isn't needed as it is done from cpu_exec_exit()]
> > ---
> >  cpus.c               | 41 ++++++++++++++++++++++++++++++++++++++
> >  include/qom/cpu.h    | 10 ++++++++++
> >  include/sysemu/kvm.h |  1 +
> >  kvm-all.c            | 57 ++++++++++++++++++++++++++++++++++++++++++++++++++-
> >  kvm-stub.c           |  5 +++++
> >  5 files changed, 113 insertions(+), 1 deletion(-)
> > 
> > diff --git a/cpus.c b/cpus.c
> > index 877bd70..af2b274 100644
> > --- a/cpus.c
> > +++ b/cpus.c
> > @@ -953,6 +953,21 @@ void async_run_on_cpu(CPUState *cpu, void (*func)(void *data), void *data)
> >      qemu_cpu_kick(cpu);
> >  }
> >  
> > +static void qemu_kvm_destroy_vcpu(CPUState *cpu)
> > +{
> > +    if (kvm_destroy_vcpu(cpu) < 0) {
> > +        error_report("kvm_destroy_vcpu failed.\n");
> > +        exit(EXIT_FAILURE);
> > +    }
> > +
> > +    object_unparent(OBJECT(cpu));
> > +}
> > +
> > +static void qemu_tcg_destroy_vcpu(CPUState *cpu)
> > +{
> > +    object_unparent(OBJECT(cpu));
> > +}
> > +
> >  static void flush_queued_work(CPUState *cpu)
> >  {
> >      struct qemu_work_item *wi;
> > @@ -1053,6 +1068,11 @@ static void *qemu_kvm_cpu_thread_fn(void *arg)
> >              }
> >          }
> >          qemu_kvm_wait_io_event(cpu);
> > +        if (cpu->exit && !cpu_can_run(cpu)) {
> > +            qemu_kvm_destroy_vcpu(cpu);
> > +            qemu_mutex_unlock(&qemu_global_mutex);
> 
> This looks like a change to locking semantics, and I can't see the
> connection to the described purpose of the patch.
As I replied in another thread to Alexey, this needs fixing.

> > +            return NULL;
> > +        }
> >      }
> > 
> >      return NULL;
> > @@ -1108,6 +1128,7 @@ static void tcg_exec_all(void);
> >  static void *qemu_tcg_cpu_thread_fn(void *arg)
> >  {
> >      CPUState *cpu = arg;
> > +    CPUState *remove_cpu = NULL;
> > 
> >      rcu_register_thread();
> > 
> > @@ -1145,6 +1166,16 @@ static void *qemu_tcg_cpu_thread_fn(void *arg)
> >              }
> >          }
> >          qemu_tcg_wait_io_event(QTAILQ_FIRST(&cpus));
> > +        CPU_FOREACH(cpu) {
> > +            if (cpu->exit && !cpu_can_run(cpu)) {
> > +                remove_cpu = cpu;
> > +                break;
> > +            }
> > +        }
> > +        if (remove_cpu) {
> > +            qemu_tcg_destroy_vcpu(remove_cpu);
> > +            remove_cpu = NULL;
> > +        }
> 
> Any particular reason to only cleanup one cpu per iteration?

Not sure, this is borrowed from the x86 CPU hotplug patchset. Zhu - do you know why?

> Also, any particular reason this isn't folded into tcg_exec_all with
> the other cpu->exit logic?

Looks like it can be done. Will give it a try in the next iteration.

Regards,
Bharata.