On Thu, Jul 09, 2015 at 03:23:55PM +0200, Andreas Färber wrote:
> From: Bharata B Rao <bhar...@linux.vnet.ibm.com>
>
> Currently CPUState::cpu_index is monotonically increasing and a newly
> created CPU always gets the next higher index. The next available
> index is calculated by counting the existing number of CPUs. This is
> fine as long as we only add CPUs, but there are architectures which
> are starting to support CPU removal, too. For an architecture like PowerPC
> which derives its CPU identifier (device tree ID) from cpu_index, the
> existing logic of generating cpu_index values causes problems.
>
> With the currently proposed method of handling vCPU removal by parking
> the vCPU fd in QEMU
> (Ref: http://lists.gnu.org/archive/html/qemu-devel/2015-02/msg02604.html),
> generating cpu_index this way will not work for PowerPC.
>
> This patch changes the way cpu_index is handed out by maintaining
> a bit map of the CPUs that tracks both addition and removal of CPUs.
>
> The CPU bitmap allocation logic is part of cpu_exec_init(), which is
> called by instance_init routines of various CPU targets. Newly added
> cpu_exec_exit() API handles the deallocation part and this routine is
> called from generic CPU instance_finalize.
>
> Note: This new CPU enumeration is for !CONFIG_USER_ONLY only.
> CONFIG_USER_ONLY continues to have the old enumeration logic.
>
> Signed-off-by: Bharata B Rao <bhar...@linux.vnet.ibm.com>
> Reviewed-by: Eduardo Habkost <ehabk...@redhat.com>
> Reviewed-by: Igor Mammedov <imamm...@redhat.com>
> Reviewed-by: David Gibson <da...@gibson.dropbear.id.au>
> Reviewed-by: Peter Crosthwaite <peter.crosthwa...@xilinx.com>
> Acked-by: Paolo Bonzini <pbonz...@redhat.com>
> Signed-off-by: Peter Crosthwaite <crosthwaite.pe...@gmail.com>
> [AF: max_cpus -> MAX_CPUMASK_BITS]
> Signed-off-by: Andreas Färber <afaer...@suse.de>
> ---
>  exec.c            | 58 ++++++++++++++++++++++++++++++++++++++++++++++++++-----
>  include/qom/cpu.h |  1 +
>  qom/cpu.c         |  7 +++++++
>  3 files changed, 61 insertions(+), 5 deletions(-)
>
> diff --git a/exec.c b/exec.c
> index ce5fadd..d817e5f 100644
> --- a/exec.c
> +++ b/exec.c
> @@ -526,12 +526,57 @@ void tcg_cpu_address_space_init(CPUState *cpu, AddressSpace *as)
>  }
>  #endif
>
> +#ifndef CONFIG_USER_ONLY
> +static DECLARE_BITMAP(cpu_index_map, MAX_CPUMASK_BITS);
> +
> +static int cpu_get_free_index(Error **errp)
> +{
> +    int cpu = find_first_zero_bit(cpu_index_map, MAX_CPUMASK_BITS);
> +
> +    if (cpu >= MAX_CPUMASK_BITS) {
> +        error_setg(errp, "Trying to use more CPUs than max of %d",
> +                   MAX_CPUMASK_BITS);
> +        return -1;
> +    }
If this routine, and hence cpu_exec_init() (which is called from the realize routine), doesn't error out when max_cpus is reached, architectures supporting CPU hotplug via device_add will find it difficult to fail the realization of a CPU when hotplugging more than max_cpus is attempted.

An alternative is to explicitly check the returned cpu_index in each architecture's realize call and fail if the cpu_index obtained is >= max_cpus. For ppc I could put such a check in target-ppc/translate_init.c:ppc_cpu_realizefn(), but ppc_cpu_realizefn() is a common routine for all targets under ppc, and some targets like ppc64abi32-linux-user don't have visibility of max_cpus, which lives in vl.c. (A rough sketch of such a check is appended below my sign-off.)

Any thoughts on the above problem?

Also, is it possible to revisit the problem that the use of max_cpus instead of MAX_CPUMASK_BITS caused for xlnx-ep108?

Regards,
Bharata.
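
To make the alternative concrete, here is a rough, untested sketch (not part of the patch) of what such a check in ppc_cpu_realizefn() could look like. The exact placement and the visibility of max_cpus (declared in include/sysemu/sysemu.h) are assumptions on my part, and as noted above this would not even build for the *-linux-user targets:

/* Hypothetical sketch only, not the actual patch. */
#include "qapi/error.h"     /* error_setg() */
#include "qom/cpu.h"        /* CPUState, CPU() */
#include "sysemu/sysemu.h"  /* max_cpus -- softmmu only */

static void ppc_cpu_realizefn(DeviceState *dev, Error **errp)
{
    CPUState *cs = CPU(dev);

    /* Refuse to realize a CPU whose index was handed out beyond the
     * machine's max_cpus limit, e.g. via device_add hotplug.
     */
    if (cs->cpu_index >= max_cpus) {
        error_setg(errp, "Cannot realize CPU %d, machine supports only %d CPUs",
                   cs->cpu_index, max_cpus);
        return;
    }

    /* ... the existing realize code would continue unchanged here ... */
}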