Hi Peter,

On 15/08/15 14:05, Peter Zijlstra wrote:
> On Tue, Jul 07, 2015 at 07:24:21PM +0100, Morten Rasmussen wrote:
>> +void cpufreq_sched_set_cap(int cpu, unsigned long capacity)
>> +{
>> +	unsigned int freq_new, cpu_tmp;
>> +	struct cpufreq_policy *policy;
>> +	struct gov_data *gd;
>> +	unsigned long capacity_max = 0;
>> +
>> +	/* update per-cpu capacity request */
>> +	__this_cpu_write(pcpu_capacity, capacity);
>> +
>> +	policy = cpufreq_cpu_get(cpu);
>> +	if (IS_ERR_OR_NULL(policy)) {
>> +		return;
>> +	}
>> +
>> +	if (!policy->governor_data)
>> +		goto out;
>> +
>> +	gd = policy->governor_data;
>> +
>> +	/* bail early if we are throttled */
>> +	if (ktime_before(ktime_get(), gd->throttle))
>> +		goto out;
>
> Isn't this the wrong place to throttle? Suppose you're getting multiple
> new tasks placed on this CPU, the first one would trigger this callback
> and start increasing freq..
>
> While we're still changing freq. (and therefore throttled), another task
> comes in which would again raise the freq.
>
> With this scheme you loose the latter freq. change and will not
> re-evaluate.
>
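(For context: the throttle window checked in the quoted hunk gets armed
when we actually kick the driver, roughly like the sketch below. I'm
writing this from memory rather than from the exact patch, so the field
names and the omitted locking/error handling are approximations.)

static void cpufreq_sched_try_driver_target(struct cpufreq_policy *policy,
					    unsigned int freq)
{
	struct gov_data *gd = policy->governor_data;

	/* ask the driver for the new frequency */
	__cpufreq_driver_target(policy, freq, CPUFREQ_RELATION_L);

	/*
	 * Any request arriving within throttle_nsec from now hits the
	 * "bail early if we are throttled" check above.
	 */
	gd->throttle = ktime_add_ns(ktime_get(), gd->throttle_nsec);
}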
The way the policy is implemented, you should not have this problem. For
new tasks we actually jump to max freq, as a new task's util gets
initialized to 1024. For load-balancing migrations we wait until all the
tasks have been migrated and then we trigger an update.

> Any scheme that limits the callbacks to the actual hardware will have to
> buffer requests and once the hardware returns (be it through an
> interrupt or timeout) issue the latest request.

But, it is true that if the above events happened the other way around
(we trigger an update after load balancing and a new task arrives), we
may miss the opportunity to jump to max with the new task. In my mind
this is probably not a big deal, as we'll have a tick pretty soon that
will fix things anyway (saving us some complexity in the backend). If we
ever decide we do want the buffering you describe, I've appended a rough
sketch of what it could look like.

What do you think?

Thanks,

- Juri
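FWIW, the sketch mentioned above. Purely illustrative: it reuses the
per-cpu pcpu_capacity requests and, from memory, the series'
cpufreq_sched_try_driver_target() helper, so names may well be off. The
idea is just to replay a request that was deferred by the throttle
instead of waiting for the next tick:

static void cpufreq_sched_replay_request(struct cpufreq_policy *policy)
{
	struct gov_data *gd = policy->governor_data;
	unsigned long cap, cap_max = 0;
	unsigned int cpu, freq_new;

	/* still inside the throttle window, nothing to do yet */
	if (ktime_before(ktime_get(), gd->throttle))
		return;

	/* same max-of-requests policy as cpufreq_sched_set_cap() */
	for_each_cpu(cpu, policy->cpus) {
		cap = per_cpu(pcpu_capacity, cpu);
		cap_max = max(cap, cap_max);
	}

	freq_new = cap_max * policy->max >> SCHED_CAPACITY_SHIFT;
	if (freq_new != policy->cur)
		cpufreq_sched_try_driver_target(policy, freq_new);
}

This would be called from the governor kthread (or a timer) once the
previous transition has completed, rather than from the hot path.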