* Mel Gorman <mgor...@suse.de> [2013-07-03 15:21:33]:

> 
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 2a0bbc2..b9139be 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -800,6 +800,37 @@ unsigned int sysctl_numa_balancing_scan_delay = 1000;
>   */
>  unsigned int sysctl_numa_balancing_settle_count __read_mostly = 3;
> 
> +static unsigned long weighted_cpuload(const int cpu);
> +
> +static int
> +find_idlest_cpu_node(int this_cpu, int nid)
> +{
> +     unsigned long load, min_load = ULONG_MAX;
> +     int i, idlest_cpu = this_cpu;
> +
> +     BUG_ON(cpu_to_node(this_cpu) == nid);
> +
> +     for_each_cpu(i, cpumask_of_node(nid)) {
> +             load = weighted_cpuload(i);
> +
> +             if (load < min_load) {
> +                     struct task_struct *p;
> +
> +             /* Do not preempt a task running on its preferred node */
> +                     struct rq *rq = cpu_rq(i);
> +                     raw_spin_lock_irq(&rq->lock);

I'm not sure why we need this spin_lock here. Can't this be done in an RCU
read-side critical section instead?
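
For reference, the RCU-based alternative might look roughly like this (a
minimal, untested sketch, not a patch; it relies on task_struct being freed
via call_rcu(), so rq->curr can be inspected under rcu_read_lock() as long
as only fields tolerating a stale snapshot are read; the numa_preferred_nid
field name is assumed from this series):

```c
	rcu_read_lock();
	p = ACCESS_ONCE(rq->curr);
	/* Do not preempt a task running on its preferred node */
	if (cpu_to_node(i) != p->numa_preferred_nid) {
		min_load = load;
		idlest_cpu = i;
	}
	rcu_read_unlock();
```

The trade-off is that the check races with task migration, but since this is
only a placement heuristic, acting on a slightly stale rq->curr seems
harmless, and it avoids taking rq->lock on every CPU in the node.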


-- 
Thanks and Regards
Srikar Dronamraju

