On 2013-8-6, at 3:29 PM, Mike Galbraith <bitbuc...@online.de> wrote:

> +int sched_needs_cpu(int cpu)
> +{
> +     return  cpu_rq(cpu)->avg_idle < sysctl_sched_migration_cost;
> +}
> +
> #else /* CONFIG_NO_HZ_COMMON */
> 
> static inline bool got_nohz_idle_kick(void)
> --- a/kernel/time/tick-sched.c
> +++ b/kernel/time/tick-sched.c
> @@ -548,7 +548,7 @@ static ktime_t tick_nohz_stop_sched_tick
>               time_delta = timekeeping_max_deferment();
>       } while (read_seqretry(&jiffies_lock, seq));
> 
> -     if (rcu_needs_cpu(cpu, &rcu_delta_jiffies) ||
> +     if (sched_needs_cpu(cpu) || rcu_needs_cpu(cpu, &rcu_delta_jiffies) ||
>           arch_needs_cpu(cpu) || irq_work_needs_cpu()) {
>               next_jiffies = last_jiffies + 1;
>               delta_jiffies = 1;

If the performance regression was caused by expensive clock device
reprogramming and too frequent entering/exiting of C-states, this patch
should work. The problem is that the following condition is almost always
false under the 3.11-rc3 code:

> return  cpu_rq(cpu)->avg_idle < sysctl_sched_migration_cost;
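For reference, a likely reason (sketched from memory of the 3.11-era wakeup
path in kernel/sched/core.c, ttwu_do_wakeup(); the exact placement is my
assumption and not checked against -rc3): avg_idle is clamped to twice
sysctl_sched_migration_cost whenever a CPU wakes up after a long idle
period, so on a mostly idle box it saturates at 2x the threshold and the
comparison above stays false:

	/* sketch of the avg_idle update on wakeup from idle,
	 * roughly as in the 3.11-era scheduler code
	 */
	if (rq->idle_stamp) {
		u64 delta = rq_clock(rq) - rq->idle_stamp;
		u64 max = 2*sysctl_sched_migration_cost;

		if (delta > max)
			rq->avg_idle = max;	/* long idles saturate at 2x the threshold */
		else
			update_avg(&rq->avg_idle, delta);
		rq->idle_stamp = 0;
	}

So avg_idle only drops below sysctl_sched_migration_cost after a run of
very short idle periods, which is exactly when the box is not idle enough
for the nohz path to matter much.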



Ethan

