* Srivatsa Vaddagiri <[EMAIL PROTECTED]> wrote:

> On Mon, May 14, 2007 at 01:10:51PM +0200, Ingo Molnar wrote:
> > but let me give you some more CFS design background:
>
> Thanks for this excellent explanation. Things are much clearer now to
> me. I just want to clarify one thing below:
>
> > > 2. Preemption granularity - sysctl_sched_granularity
>
> [snip]
>
> > This granularity value does not depend on the number of tasks running.
>
> Hmm .. so does sysctl_sched_granularity represent granularity on a
> real/wall-clock time scale then? AFAICS that doesn't seem to be the
> case.
there's only this small detail i mentioned:

> > ( small detail: the granularity value is currently dependent on the
> > nice level, making it easier for higher-prio tasks to preempt
> > lower-prio tasks. )
>
> __check_preempt_curr_fair() compares the distance between the two
> tasks' (current and next-to-be-run task) fair_key values for deciding
> the preemption point.
>
> Let's say that to begin with, at real time t0, both current task Tc
> and next task Tn's fair_key values are the same, at value K. Tc will
> keep running until its fair_key value reaches at least K + 2000000.
> The *real/wall-clock* time taken for Tc's fair_key value to reach
> K + 2000000 is surely dependent on N, the number of tasks on the
> queue (the more the load, the more slowly the fair clock advances)?

well, it's somewhere in the [ granularity .. granularity*2 ] wall-clock
scale. Basically the slowest way it can reach it is at 'half speed'
(two tasks running), the fastest way is at 'near full speed' (lots of
tasks running).

> This is what I meant by my earlier remark: "If there are a million cpu
> hungry tasks, then the (real/wall-clock) time taken to switch between
> two tasks is more compared to the case where just two cpu hungry tasks
> are running".

the current task is recalculated at scheduler tick time and put into
the tree at its new position. At a million tasks the fair-clock will
advance little (or not at all - which at these load levels is our
smallest problem anyway), so during a scheduling tick in
kernel/sched_fair.c's update_curr() we will have a 'delta_mine' and
'delta_fair' of near zero and a 'delta_exec' of ~1 million
(nanoseconds), so curr->wait_runtime will be decreased at 'full speed':
by delta_exec-delta_mine, i.e. by almost a full tick. So preemption
will occur every sched_granularity (rounded up to the next tick) points
in time, in wall-clock time.

with 2 tasks running, delta_exec-delta_mine is 0.5 million, so
preemption will occur after 2*sched_granularity (rounded up to the next
timer tick) of wall-clock time.

	Ingo
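[Editor's note: the small standalone C sketch below is an illustration
of the per-tick arithmetic described in the mail, not the actual
kernel/sched_fair.c code. It assumes nr_running equal-weight,
always-runnable tasks, a 1ms scheduler tick and the 2000000 ns
granularity quoted above; the names preempt_wallclock, TICK_NSEC and
GRANULARITY_NSEC are made up for the example. Each tick the current
task is charged delta_exec wall-clock nanoseconds, gets back its fair
share delta_mine = delta_exec / nr_running, and is preempted once the
accumulated difference reaches the granularity.]

/*
 * fair_preempt_sketch.c - simplified userspace model of the per-tick
 * accounting described in the mail above (illustration only, under the
 * assumptions stated in the note; not kernel code).
 */
#include <stdio.h>

#define TICK_NSEC        1000000ULL  /* 1ms tick: 'delta_exec' of ~1 million ns */
#define GRANULARITY_NSEC 2000000ULL  /* the 2000000 granularity quoted above    */

/* wall-clock nanoseconds until the current task gets preempted */
static unsigned long long preempt_wallclock(unsigned long long nr_running)
{
	unsigned long long deficit = 0;  /* how far curr has fallen behind the next task's key */
	unsigned long long wall = 0;

	while (deficit < GRANULARITY_NSEC) {
		unsigned long long delta_exec = TICK_NSEC;                /* CPU time used this tick */
		unsigned long long delta_mine = delta_exec / nr_running;  /* curr's fair share of it */

		deficit += delta_exec - delta_mine; /* wait_runtime decreased by delta_exec-delta_mine */
		wall    += delta_exec;              /* preemption is rounded up to the next tick       */
	}
	return wall;
}

int main(void)
{
	unsigned long long nr[] = { 2, 3, 10, 1000000 };

	for (unsigned int i = 0; i < sizeof(nr) / sizeof(nr[0]); i++)
		printf("%7llu tasks -> preemption after ~%llu ms wall-clock\n",
		       nr[i], preempt_wallclock(nr[i]) / 1000000ULL);
	return 0;
}

[With 2 tasks the deficit grows by half a tick per tick, so the model
reaches the granularity after 4 ticks (2*sched_granularity = 4ms); with
lots of tasks it grows by nearly a full tick per tick and preemption
happens just past the 2ms granularity, rounded up to the next tick -
matching the [ granularity .. granularity*2 ] range given in the mail.]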