sched_slice() computes the ideal runtime slice for a task. If there are many tasks on a cfs_rq, the period for that cfs_rq is extended to guarantee that each task gets a time slice of at least sysctl_sched_min_granularity, and each task then receives a weighted portion of this period. If one task has a much larger load weight than the others, its portion of the period can far exceed sysctl_sched_latency.
For example, imagine one task with nice -20 and 9 tasks with nice 0 on one cfs_rq. In this case, the load weight sum for this cfs_rq is 88761 + 9 * 1024 = 97977, so the slice for the nice -20 task is sysctl_sched_min_granularity * 10 * (88761 / 97977), that is, approximately sysctl_sched_min_granularity * 9. Since sysctl_sched_latency defaults to 8 * sysctl_sched_min_granularity, this already exceeds the target latency, and it grows even larger as more nice-0 tasks are added. So we should limit this possible weird situation. (A standalone sketch of this arithmetic follows the patch.)

Signed-off-by: Joonsoo Kim <iamjoonsoo....@lge.com>

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index e232421..6ceffbc 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -645,6 +645,9 @@ static u64 sched_slice(struct cfs_rq *cfs_rq, struct sched_entity *se)
 		}
 		slice = calc_delta_mine(slice, se->load.weight, load);
 	}
+	if (unlikely(slice > sysctl_sched_latency))
+		slice = sysctl_sched_latency;
+
 	return slice;
 }
-- 
1.7.9.5
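
To make the numbers concrete, here is a minimal userspace sketch of the arithmetic described in the changelog. It is an illustration, not the kernel code: the tunable values (0.75 ms sysctl_sched_min_granularity, 6 ms sysctl_sched_latency), the prio_to_weight[] entries (88761 for nice -20, 1024 for nice 0), and the variable names are assumptions based on the defaults of that era, and plain 64-bit integer division stands in for the kernel's calc_delta_mine() fixed-point helper.

#include <stdio.h>

/*
 * Sketch of the sched_slice() arithmetic from the changelog above.
 * Assumed base defaults (not read from a running kernel):
 * sysctl_sched_min_granularity = 0.75 ms, sysctl_sched_latency = 6 ms.
 */
int main(void)
{
	const unsigned long long min_gran_ns = 750000;	/* 0.75 ms */
	const unsigned long long latency_ns = 6000000;	/* 6 ms */
	const unsigned long long w_heavy = 88761;	/* nice -20 weight */
	const unsigned long long w_nice0 = 1024;	/* nice 0 weight */

	for (unsigned long long nr0 = 9; nr0 <= 99; nr0 += 30) {
		unsigned long long nr = nr0 + 1;	/* total tasks */
		unsigned long long load = w_heavy + nr0 * w_nice0;

		/* nr > sched_nr_latency (8), so the period is stretched. */
		unsigned long long period = min_gran_ns * nr;

		/* The heavy task's proportional share of the period. */
		unsigned long long slice = period * w_heavy / load;

		printf("%3llu nice-0 tasks: slice = %llu ns (latency = %llu ns)\n",
		       nr0, slice, latency_ns);
	}
	return 0;
}

With 9 nice-0 tasks the heavy task's slice comes out around 6.79 ms, already past the 6 ms latency target; with 99 nice-0 tasks it balloons to roughly 35 ms. Clamping the result to sysctl_sched_latency, as the patch does, bounds this growth.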