On Thu, 2019-08-29 at 16:02 +0200, Vincent Guittot wrote:
> On Thu, 29 Aug 2019 at 01:19, Rik van Riel <r...@surriel.com> wrote:
>
> > What am I overlooking?
>
> My point is more about tasks that run several ticks in a row. Their
> sched_slice will be shorter in some cases with your changes, so they
> can be preempted earlier by other runnable tasks with a lower
> vruntime, and there will be more context switches.
I can think of exactly one case where the time slice will be shorter
with my new code than with the old code, and that is the case where:
- a CPU has nr_running > sched_nr_latency
- __sched_period returns a value larger than sysctl_sched_latency
- one of the tasks is much higher priority than the others
- that one task alone gets a timeslice larger than
  sysctl_sched_latency

With the new code, that high-priority task will get a time slice that
is a (large) fraction of sysctl_sched_latency, while the other (lower
priority) tasks get their time slices rounded up to
sysctl_sched_min_granularity.

When tasks get their timeslices rounded up, that increases the total
sched period in much the same way the old code did by returning a
longer period from __sched_period.

If a CPU is faced with a large number of equal-priority tasks, both
the old code and the new code end up giving each task a timeslice of
sysctl_sched_min_granularity.

What am I missing?

-- 
All Rights Reversed.
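A rough, standalone sketch of the two timeslice computations being
compared. The "old" path mirrors the stock __sched_period() /
sched_slice() logic; the "new" path is only my reading of the change
described above (slice as a weight fraction of sysctl_sched_latency,
rounded up to sysctl_sched_min_granularity), not the actual patch,
and it uses plain nice-level weights instead of struct load_weight:

/* gcc -o slice slice.c && ./slice */
#include <stdio.h>

/* Default tunables in nanoseconds, before any CPU scaling. */
static unsigned long long sysctl_sched_latency         = 6000000ULL;
static unsigned long long sysctl_sched_min_granularity =  750000ULL;
static unsigned int       sched_nr_latency             = 8;

/* Old behaviour: stretch the period once nr_running > sched_nr_latency. */
static unsigned long long old_period(unsigned int nr_running)
{
	if (nr_running > sched_nr_latency)
		return nr_running * sysctl_sched_min_granularity;
	return sysctl_sched_latency;
}

/* Old slice: the task's weight fraction of the (possibly stretched) period. */
static unsigned long long old_slice(unsigned int nr_running,
				    unsigned long weight, unsigned long total)
{
	return old_period(nr_running) * weight / total;
}

/*
 * New slice, as described above: the task's weight fraction of
 * sysctl_sched_latency, rounded up to sysctl_sched_min_granularity.
 */
static unsigned long long new_slice(unsigned long weight, unsigned long total)
{
	unsigned long long slice = sysctl_sched_latency * weight / total;

	if (slice < sysctl_sched_min_granularity)
		slice = sysctl_sched_min_granularity;
	return slice;
}

int main(void)
{
	/* One nice -20 task (weight 88761) plus 15 nice 0 tasks (1024 each). */
	unsigned int nr_running = 16;
	unsigned long heavy = 88761, normal = 1024;
	unsigned long total = heavy + 15 * normal;

	printf("old: heavy %llu ns, normal %llu ns\n",
	       old_slice(nr_running, heavy, total),
	       old_slice(nr_running, normal, total));
	printf("new: heavy %llu ns, normal %llu ns\n",
	       new_slice(heavy, total),
	       new_slice(normal, total));
	return 0;
}

With these example weights, the old computation hands the heavy task
roughly 10.2ms (more than sysctl_sched_latency, since the period is
stretched to 16 * 750us), while the sketched new computation gives it
about 5.1ms and rounds the nice 0 tasks up from ~59us to 750us.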