Hi Joonsoo,

On 04/01/2013 10:39 AM, Joonsoo Kim wrote:
> Hello Preeti.
> So we should limit this possible weird situation.
>>>
>>> Signed-off-by: Joonsoo Kim <iamjoonsoo....@lge.com>
>>>
>>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>>> index e232421..6ceffbc 100644
>>> --- a/kernel/sched/fair.c
>>> +++ b/kernel/sched/fair.c
>>> @@ -645,6 +645,9 @@ static u64 sched_slice(struct cfs_rq *cfs_rq, struct sched_entity *se)
>>>  	}
>>>  	slice = calc_delta_mine(slice, se->load.weight, load);
>>>
>>> +	if (unlikely(slice > sysctl_sched_latency))
>>> +		slice = sysctl_sched_latency;
>>
>> Then in this case the highest priority thread would get
>> 20ms (sysctl_sched_latency), and the rest would get
>> sysctl_sched_min_granularity * 10 * (1024/97977), which would be 0.4ms.
>> Then all tasks would get scheduled at least once within 20ms + (0.4 * 9)ms
>> = 23.7ms, while your scheduling latency period was extended to 40ms, just
>> so that each of these tasks would not have its sched_slice shrunk due to
>> the large number of tasks.
>
> I am not sure I understand your question correctly.
> I will do my best to answer your comment. :)
>
> With this patch, I just limit the maximum slice at one time. Scheduling is
> controlled through the vruntime, so in this case the task with nice -20
> will be scheduled twice:
>
> 20 + (0.4 * 9) + 20 = 43.9 ms
>
> And after 43.9 ms, this process is repeated.
>
> So I can tell you that the scheduling period is preserved as before.
>
> If we give a long period to a task at one go, it can cause
> a latency problem. So IMHO, limiting this is meaningful.
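For anyone following the arithmetic, here is a minimal userspace sketch (not
the kernel code) of the slice calculation with the proposed cap. The constants
mirror the numbers in this thread, one nice -20 task (weight 88761) and nine
nice 0 tasks (weight 1024 each), and assume sysctl_sched_latency = 20ms with
sysctl_sched_min_granularity = 4ms:

#include <stdio.h>

#define SYSCTL_SCHED_LATENCY	20.0	/* ms */
#define SYSCTL_SCHED_MIN_GRAN	 4.0	/* ms */
#define SCHED_NR_LATENCY	5	/* latency / min_granularity */

/* Mirrors __sched_period(): stretch the period when tasks are many. */
static double sched_period(int nr_running)
{
	if (nr_running > SCHED_NR_LATENCY)
		return nr_running * SYSCTL_SCHED_MIN_GRAN;
	return SYSCTL_SCHED_LATENCY;
}

/* Mirrors sched_slice(), plus the cap added by the patch above. */
static double sched_slice(double se_weight, double cfs_load, int nr_running)
{
	double slice = sched_period(nr_running) * se_weight / cfs_load;

	if (slice > SYSCTL_SCHED_LATENCY)
		slice = SYSCTL_SCHED_LATENCY;
	return slice;
}

int main(void)
{
	double cfs_load = 88761.0 + 9 * 1024.0;	/* = 97977 */

	printf("nice -20 slice: %.1f ms\n", sched_slice(88761.0, cfs_load, 10));
	printf("nice   0 slice: %.1f ms\n", sched_slice(1024.0, cfs_load, 10));
	return 0;
}

This prints a 20.0ms slice for the nice -20 task (capped from ~36.2ms) and
~0.4ms for each nice 0 task, which is where the "20 + (0.4 * 9) + 20" rotation
above comes from: vruntime accounting brings the nice -20 task back for a
second capped slice within the same period.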
Thank you very much for the explanation. Just one question: what is the reason
behind choosing sysctl_sched_latency as the upper bound here?

Regards
Preeti U Murthy