On Sun, 2015-07-19 at 18:11 +0900, byungchul.p...@lge.com wrote:

> @@ -3226,6 +3226,12 @@ check_preempt_tick(struct cfs_rq *cfs_rq, struct sched_entity *curr)
> 	struct sched_entity *se;
> 	s64 delta;
> 
> +	/*
> +	 * Ensure that a task executes at least for sysctl_sched_min_granularity
> +	 */
> +	if (delta_exec < sysctl_sched_min_granularity)
> +		return;
> +
Think about what this does to a low-weight task, or any task in a low-weight group. The scheduler equalizes runtimes for a living; there is no free lunch. Any runtime larger than fair share that you graciously grant to random task foo doesn't magically appear out of the vacuum, it comes out of task foo's wallet. If you drag that hard-coded minimum down into the depths of group scheduling, yeah, every task will get a nice juicy slice of CPU... eventually, though you may not live to see it. (yeah, overrun can and will happen at all depths due to tick granularity, but you guaranteed it, so I inflated severity a bit ;)

> 	ideal_runtime = sched_slice(cfs_rq, curr);
> 	delta_exec = curr->sum_exec_runtime - curr->prev_sum_exec_runtime;
> 	if (delta_exec > ideal_runtime) {

> @@ -3243,9 +3249,6 @@ check_preempt_tick(struct cfs_rq *cfs_rq, struct sched_entity *curr)
> 	 * narrow margin doesn't have to wait for a full slice.
> 	 * This also mitigates buddy induced latencies under load.
> 	 */
> -	if (delta_exec < sysctl_sched_min_granularity)
> -		return;
> -

That was about something entirely different. Feel free to remove it after verifying that it has outlived its original purpose, but please don't just move it about at random.

	-Mike