A shared LPAR is entitled to a certain number of cores, i.e. the number of cores that the PowerVM hypervisor is committed to provide at any point in time. Hence, based on steal metrics, soft-offline cores such that at least the entitled number of cores remains available.
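For illustration only (not part of the patch): below is a minimal userspace sketch of the intended hysteresis. The threshold values, the lpar_state struct and the adjust_available_cores() helper are hypothetical stand-ins that mirror the policy described above and implemented in the diff.

/*
 * Illustrative userspace sketch of the soft-offline policy; not the kernel
 * code. STEAL_RATIO_HIGH/LOW values, lpar_state and adjust_available_cores
 * are assumptions made for this example.
 */
#include <stdio.h>

#define STEAL_RATIO_HIGH 20	/* assumed threshold, percent of runtime */
#define STEAL_RATIO_LOW   5	/* assumed threshold, percent of runtime */

struct lpar_state {
	int entitled_cores;	/* EC: cores PowerVM is committed to provide */
	int max_virtual_cores;	/* VP: configured virtual processors */
	int available_cores;	/* AC: cores currently kept soft-online */
	int prev_direction;	/* >0: last step shrank AC, <0: last step grew AC */
};

/* Shrink or grow AC based on the steal ratio, clamped to [EC, VP]. */
static void adjust_available_cores(struct lpar_state *s, unsigned long steal_ratio)
{
	if (steal_ratio >= STEAL_RATIO_HIGH && s->prev_direction > 0) {
		/* Still seeing high steal: soft-offline one more core,
		 * but never drop below the entitlement. */
		if (s->available_cores > s->entitled_cores)
			s->available_cores--;
	} else if (steal_ratio <= STEAL_RATIO_LOW && s->prev_direction < 0) {
		/* Steal stays low: soft-online one more core, but never
		 * exceed the configured virtual processors. */
		if (s->available_cores < s->max_virtual_cores)
			s->available_cores++;
	}

	if (steal_ratio >= STEAL_RATIO_HIGH)
		s->prev_direction = 1;
	else if (steal_ratio <= STEAL_RATIO_LOW)
		s->prev_direction = -1;
}

int main(void)
{
	struct lpar_state s = { .entitled_cores = 4, .max_virtual_cores = 8,
				.available_cores = 8, .prev_direction = 0 };
	unsigned long samples[] = { 30, 25, 22, 3, 2, 1 }; /* steal ratio per interval */

	for (unsigned i = 0; i < sizeof(samples) / sizeof(samples[0]); i++) {
		adjust_available_cores(&s, samples[i]);
		printf("steal=%lu%% -> available_cores=%d\n", samples[i], s.available_cores);
	}
	return 0;
}

With entitled_cores=4, max_virtual_cores=8 and steal samples of 30, 25, 22, 3, 2, 1 percent, the available core count shrinks from 8 to 6 while steal stays high and grows back to 8 once it stays low; a single high or low sample alone changes nothing because of the prev_direction check.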
Also, when soft-onlining cores, unless changed via DLPAR, ensure the system only onlines up to the maximum number of virtual cores.

Signed-off-by: Srikar Dronamraju <[email protected]>
---
 arch/powerpc/platforms/pseries/smp.c | 27 +++++++++++++++++++++++++--
 1 file changed, 25 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/platforms/pseries/smp.c b/arch/powerpc/platforms/pseries/smp.c
index 4c83749018d0..69e209880b6f 100644
--- a/arch/powerpc/platforms/pseries/smp.c
+++ b/arch/powerpc/platforms/pseries/smp.c
@@ -327,25 +327,45 @@ void trigger_softoffline(unsigned long steal_ratio)
 {
 	int currcpu = smp_processor_id();
 	static int prev_direction;
+	int success = 0;
 	int cpu, i;
 
+	/*
+	 * Compare delta runtime versus delta steal time.
+	 * [0]<----------->[EC]--------->[VP]
+	 * [0]<------------------>{AC}-->[VP]
+	 * EC == Entitled Cores
+	 * VP == Virtual Processors
+	 * AC == Available Cores, varies between EC and VP.
+	 * If steal time is high, then reduce Available Cores.
+	 * If steal time is low, increase Available Cores.
+	 */
 	if (steal_ratio >= STEAL_RATIO_HIGH && prev_direction > 0) {
 		/*
 		 * System entitlement was reduced earlier but we continue to
-		 * see steal time. Reduce entitlement further.
+		 * see steal time. Reduce entitlement further if possible.
 		 */
+		if (available_cores <= entitled_cores)
+			return;
+
 		cpu = cpumask_last(cpu_active_mask);
 		for_each_cpu_andnot(i, cpu_sibling_mask(cpu), cpu_sibling_mask(currcpu)) {
 			struct offline_worker *worker = &per_cpu(offline_workers, i);
 
 			worker->offline = 1;
 			schedule_work_on(i, &worker->work);
+			success = 1;
 		}
+		if (success)
+			available_cores--;
 	} else if (steal_ratio <= STEAL_RATIO_LOW && prev_direction < 0) {
 		/*
 		 * System entitlement was increased but we continue to see
-		 * less steal time. Increase entitlement further.
+		 * less steal time. Increase entitlement further if possible.
 		 */
+		if (available_cores >= max_virtual_cores)
+			return;
+
 		cpumask_andnot(cpus, cpu_online_mask, cpu_active_mask);
 		if (cpumask_empty(cpus))
 			return;
@@ -356,7 +376,10 @@ void trigger_softoffline(unsigned long steal_ratio)
 
 			worker->offline = 0;
 			schedule_work_on(i, &worker->work);
+			success = 1;
 		}
+		if (success)
+			available_cores++;
 	}
 	if (steal_ratio >= STEAL_RATIO_HIGH)
 		prev_direction = 1;
-- 
2.43.7
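
Aside (not part of the patch, and not how the kernel measures steal internally): to experiment with candidate STEAL_RATIO_HIGH/STEAL_RATIO_LOW thresholds, a steal ratio can be approximated from userspace by sampling the aggregate steal ticks in /proc/stat over an interval, as in this sketch.

/*
 * Userspace approximation of a steal ratio, for experimenting with
 * thresholds. This is only an illustration; the kernel patch relies on
 * in-kernel accounting, not /proc/stat.
 */
#include <stdio.h>
#include <unistd.h>

struct cpu_ticks {
	unsigned long long user, nice, system, idle, iowait, irq, softirq, steal;
};

static int read_cpu_ticks(struct cpu_ticks *t)
{
	FILE *f = fopen("/proc/stat", "r");
	int n;

	if (!f)
		return -1;
	/* Aggregate "cpu" line: user nice system idle iowait irq softirq steal ... */
	n = fscanf(f, "cpu %llu %llu %llu %llu %llu %llu %llu %llu",
		   &t->user, &t->nice, &t->system, &t->idle,
		   &t->iowait, &t->irq, &t->softirq, &t->steal);
	fclose(f);
	return n == 8 ? 0 : -1;
}

int main(void)
{
	struct cpu_ticks a, b;
	unsigned long long total_a, total_b, dsteal, dtotal;

	if (read_cpu_ticks(&a))
		return 1;
	sleep(5);			/* sampling interval */
	if (read_cpu_ticks(&b))
		return 1;

	total_a = a.user + a.nice + a.system + a.idle + a.iowait + a.irq + a.softirq + a.steal;
	total_b = b.user + b.nice + b.system + b.idle + b.iowait + b.irq + b.softirq + b.steal;
	dsteal = b.steal - a.steal;
	dtotal = total_b - total_a;

	printf("steal ratio over interval: %llu%%\n",
	       dtotal ? (100 * dsteal) / dtotal : 0);
	return 0;
}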
