When a CPU is about to go down, all of its sleeping tasks are moved to an active CPU, and the nr_uninterruptible count is moved as well. Currently, the nr_uninterruptible count is folded into a randomly picked CPU from the active CPU mask, which keeps the global nr_uninterruptible sum intact. However, it would be more precise to move the count to the same CPU that received the sleeping tasks, and doing so may also have a subtle impact on that rq's load calculation. This patch addresses the issue.
Signed-off-by: Rakib Mullick <rakib.mull...@gmail.com>
---
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 82ad284..5839796 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -5304,9 +5304,9 @@ void idle_task_exit(void)
  * their home CPUs. So we just add the counter to another CPU's counter,
  * to keep the global sum constant after CPU-down:
  */
-static void migrate_nr_uninterruptible(struct rq *rq_src)
+static void migrate_nr_uninterruptible(struct rq *rq_src, unsigned int dest_cpu)
 {
-	struct rq *rq_dest = cpu_rq(cpumask_any(cpu_active_mask));
+	struct rq *rq_dest = cpu_rq(dest_cpu);
 
 	rq_dest->nr_uninterruptible += rq_src->nr_uninterruptible;
 	rq_src->nr_uninterruptible = 0;
@@ -5371,6 +5371,7 @@ static void migrate_tasks(unsigned int dead_cpu)
 	}
 
 	rq->stop = stop;
+	migrate_nr_uninterruptible(rq, dest_cpu);
 }
 
 #endif /* CONFIG_HOTPLUG_CPU */
@@ -5612,7 +5613,6 @@ migration_call(struct notifier_block *nfb, unsigned long action, void *hcpu)
 		BUG_ON(rq->nr_running != 1); /* the migration thread */
 		raw_spin_unlock_irqrestore(&rq->lock, flags);
 
-		migrate_nr_uninterruptible(rq);
 		calc_global_load_remove(rq);
 		break;
 #endif
--