On 05/12/2015 03:06 PM, Dario Faggioli wrote:
> In fact, printing the cpupool's CPU online mask
> for each vCPU is just redundant, as that is the
> same for all the vCPUs of all the domains in the
> same cpupool, while hard affinity is already part
> of the output of dumping domains info.
>
> Instead, print the intersection between hard
> affinity and online CPUs, which is --in case of this
> scheduler-- the effective affinity always used for
> the vCPUs.
>
> This change also takes the chance to add a scratch
> cpumask area, to avoid having to either put one
> (more) cpumask_t on the stack, or dynamically
> allocate it within the dumping routine. (The former
> being bad because hypervisor stack size is limited,
> the latter because dynamic allocations can fail, if
> the hypervisor was built for a large enough number
> of CPUs.) We allocate such a scratch area, for all pCPUs,
> when the first instance of the RTDS scheduler is
> activated and, in order not to lose track of it or leak it
> if other instances are activated in new cpupools,
> and when the last instance is deactivated, we (sort
> of) refcount it.
>
> Such a scratch area can be used to kill most of the
> cpumask{_var}_t local variables in other functions
> in the file, but that is *NOT* done in this change.
>
> Finally, convert the file to use keyhandler scratch,
> instead of open coded string buffers.
>
> Signed-off-by: Dario Faggioli <dario.faggi...@citrix.com>
> Cc: George Dunlap <george.dun...@eu.citrix.com>
> Cc: Meng Xu <xumengpa...@gmail.com>
> Cc: Jan Beulich <jbeul...@suse.com>
> Cc: Keir Fraser <k...@xen.org>
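[For readers not familiar with the code in question, here is a minimal, standalone C sketch of the two ideas described above: a per-pCPU scratch cpumask area that is allocated when the first scheduler instance comes up, freed when the last one goes away, and protected by a simple instance refcount; and a dump helper that prints only the effective affinity, i.e. hard affinity ANDed with the online CPUs. This is not the actual sched_rt.c code: the names (scratch_mask, nr_instances, sched_init/sched_deinit, dump_vcpu) and the fixed-size cpumask_t are illustrative assumptions only.]

    /*
     * Standalone sketch (plain C, not Xen code) of a refcounted,
     * per-pCPU scratch cpumask area and of dumping the effective
     * (hard AND online) affinity of a vCPU.
     */
    #include <stdio.h>
    #include <stdlib.h>

    #define NR_CPUS 64          /* assumed build-time CPU limit */

    typedef struct {
        unsigned long bits[NR_CPUS / (8 * sizeof(unsigned long))];
    } cpumask_t;

    static void cpumask_and(cpumask_t *dst, const cpumask_t *a,
                            const cpumask_t *b)
    {
        for (size_t i = 0; i < sizeof(dst->bits) / sizeof(dst->bits[0]); i++)
            dst->bits[i] = a->bits[i] & b->bits[i];
    }

    /*
     * One scratch mask per pCPU, shared by every instance of the
     * scheduler.  nr_instances counts how many instances exist, so
     * the area is allocated on the first init and freed on the last
     * deinit: it is neither leaked nor freed too early when other
     * cpupools using this scheduler come and go.
     */
    static cpumask_t *scratch_mask;     /* NR_CPUS entries */
    static unsigned int nr_instances;

    static int sched_init(void)
    {
        if (nr_instances == 0) {
            scratch_mask = calloc(NR_CPUS, sizeof(*scratch_mask));
            if (!scratch_mask)
                return -1;              /* allocation can fail: report it */
        }
        nr_instances++;
        return 0;
    }

    static void sched_deinit(void)
    {
        if (--nr_instances == 0) {
            free(scratch_mask);
            scratch_mask = NULL;
        }
    }

    /*
     * Dump helper: instead of printing the cpupool's online mask for
     * each vCPU (it is identical for all of them), print only the
     * effective affinity, i.e. hard affinity AND online CPUs, using
     * this pCPU's scratch mask rather than a cpumask_t on the (small)
     * hypervisor stack or a fresh allocation.
     */
    static void dump_vcpu(unsigned int this_cpu,
                          const cpumask_t *hard_affinity,
                          const cpumask_t *online)
    {
        cpumask_t *scratch = &scratch_mask[this_cpu];

        cpumask_and(scratch, hard_affinity, online);
        /* First word is the whole mask for NR_CPUS <= 64 here. */
        printf("effective affinity: %016lx\n", scratch->bits[0]);
    }

[A real Xen version would of course use cpumask_var_t, the xmalloc family and the scheduler's init/deinit hooks; the point of the sketch is just the refcount around the shared scratch area and the hard-affinity/online intersection in the dump path.]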
I haven't reviewed this, but since I gave a Reviewed-by for the previous series, and it's been reviewed by Meng:

Acked-by: George Dunlap <george.dun...@eu.citrix.com>