> Either you have a strange definition of fairness or you chose an
> extremely poor example, Ingo. In a fair scheduler I'd expect all
> tasks to get the exact same amount of time on the processor.
Yes, as a long-term average. However, that is impossible to do in the
short-term. Some task has
Is it possible to do something like this in check_preemption?
    delta = curr->fair_key - first->fair_key;
    if (delta > ???                    /* scale it as you wish */ ||
        ((curr->key > first->key) &&
         (curr->wait_runtime > ???     /* simple function of curr->weight */)))
            preempt;
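A rough, self-contained C sketch of that test, just to make the shape of the proposal concrete. The entity struct, the min_wait() helper and the granularity parameter are stand-ins of my own (the two "???" thresholds above), and curr->key is treated as the same field as curr->fair_key; this is not the actual CFS code:

    /* Illustrative sketch only, not kernel code. */
    struct entity {
        long long fair_key;       /* position of the task on the fair clock */
        long long wait_runtime;   /* CPU time the task is currently owed */
        unsigned long weight;     /* nice-level derived load weight */
    };

    /* Placeholder for the "simple function of curr->weight" above. */
    static long long min_wait(const struct entity *e)
    {
        return (long long)e->weight;   /* pick whatever scaling you prefer */
    }

    static int should_preempt(const struct entity *curr,
                              const struct entity *first,
                              long long granularity)
    {
        long long delta = curr->fair_key - first->fair_key;

        /* First test: curr's key exceeds the leftmost task's key by
         * more than the chosen granularity. */
        if (delta > granularity)
                return 1;

        /* Second test: curr's key is worse than the leftmost's and its
         * wait_runtime exceeds a weight-derived threshold. */
        if (curr->fair_key > first->fair_key &&
            curr->wait_runtime > min_wait(curr))
                return 1;

        return 0;
    }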
Forgive
* William Lee Irwin III <[EMAIL PROTECTED]> wrote:
>> [...] I'm suspicious of EEVDF's timekeeping now as well.
On Mon, May 14, 2007 at 02:04:05PM +0200, Ingo Molnar wrote:
> well, EEVDF is a paper-only academic scheduler, one out of thousands
> that never touched real hardware. For nearly every m
On Mon, May 14, 2007 at 12:31:20PM +0200, Ingo Molnar wrote:
>>> please clarify - exactly what is a mistake? Thanks,
* William Lee Irwin III <[EMAIL PROTECTED]> wrote:
>> The variability in ->fair_clock advancement rate was the mistake, at
>> least according to my way of thinking. [...]
On Mon,
Ingo Molnar wrote:
* William Lee Irwin III <[EMAIL PROTECTED]> wrote:
On Mon, May 14, 2007 at 12:31:20PM +0200, Ingo Molnar wrote:
please clarify - exactly what is a mistake? Thanks,
The variability in ->fair_clock advancement rate was the mistake, at
least according to my way of thinking. [...]
William Lee Irwin III wrote:
On Mon, May 14, 2007 at 04:52:59PM +0530, Srivatsa Vaddagiri wrote:
Doesn't EEVDF have the same issue? From the paper:
V(t) = 1/(w1 + w2 + ... + wn)
Who knows what I was smoking, then. I misremembered the scale factor
as being on the other side of com
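For reference, the quantity quoted above is defined in the EEVDF paper by its rate of advance rather than as a constant: the queue's virtual time runs slower the more total weight is runnable. A reconstruction in my own notation (not a verbatim quote from the paper), where w_1..w_n are the weights of the tasks runnable at time t:

    \[ \frac{dV(t)}{dt} = \frac{1}{w_1 + w_2 + \cdots + w_n} \]

so V(t) is the integral of that rate over time, which is the same "virtual clock slows down under load" behaviour being discussed here for ->fair_clock.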
* Daniel Hazelton <[EMAIL PROTECTED]> wrote:
> [...] In a fair scheduler I'd expect all tasks to get the exact same
> amount of time on the processor. So if there are 10 tasks running at
> nice 0 and the current task has run for 20msecs before a new task is
> swapped onto the CPU, the new task
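To put a number on the expectation quoted above, assuming 10 runnable nice-0 tasks and fairness measured over some window T, each task's fair share is

    \[ \frac{T}{n} = \frac{200\,\mathrm{ms}}{10} = 20\,\mathrm{ms} \]

i.e. over any 200 ms window each of the 10 tasks should accumulate roughly 20 ms of CPU time; the point made earlier in the thread is that this can only hold as a long-term average, not instant by instant.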
Ingo Molnar wrote:
> the current task is recalculated at scheduler tick time and put into the
> tree at its new position. At a million tasks the fair-clock will advance
> little (or not at all - which at these load levels is our smallest
> problem anyway) so during a scheduling tick in kernel/sched
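A simplified sketch of the tick-time handling described above; the field and function names are illustrative, not the actual kernel/sched code. The point it tries to show is that the fair clock advances by wall time scaled down by the total load, so at very large task counts the per-tick advance becomes tiny:

    /* Illustrative sketch only, not the real CFS implementation. */
    #define NICE_0_LOAD 1024ULL                /* weight of one nice-0 task */

    struct fair_rq {
        unsigned long long fair_clock;         /* queue-wide virtual clock, ns */
        unsigned long long total_weight;       /* sum of runnable task weights */
    };

    struct fair_task {
        long long fair_key;                    /* sort key in the rbtree */
        long long wait_runtime;                /* CPU time the task is owed */
    };

    /* Called once per scheduler tick for the currently running task. */
    static void tick_update(struct fair_rq *rq, struct fair_task *curr,
                            unsigned long long delta_exec /* ns since last tick */)
    {
        /*
         * One runnable nice-0 task: fair_clock advances at wall speed.
         * A million runnable nice-0 tasks: a 1,000,000 ns tick advances
         * it by roughly 1 ns, i.e. "little or not at all".
         */
        rq->fair_clock += delta_exec * NICE_0_LOAD / rq->total_weight;

        /*
         * Recompute the running task's key so it can be put back into
         * the tree at its new position (the rbtree reinsertion itself
         * is omitted here).
         */
        curr->fair_key = (long long)rq->fair_clock - curr->wait_runtime;
    }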
On Mon, May 14, 2007 at 10:31:13AM -0400, Daniel Hazelton wrote:
> Hrm... Okay, so you're saying that "fair_clock" runs slower the more processes
> there are running to keep the above run-up in "Time Spent on CPU" I noticed
> based solely on your initial example? If that is the case, then I can see
On Monday 14 May 2007 07:50:49 Ingo Molnar wrote:
> * William Lee Irwin III <[EMAIL PROTECTED]> wrote:
> > On Mon, May 14, 2007 at 12:31:20PM +0200, Ingo Molnar wrote:
> > > please clarify - exactly what is a mistake? Thanks,
> >
> > The variability in ->fair_clock advancement rate was the mistake, at
> > least according to my way of thinking. [...]
* Srivatsa Vaddagiri <[EMAIL PROTECTED]> wrote:
> On Mon, May 14, 2007 at 01:10:51PM +0200, Ingo Molnar wrote:
> > but let me give you some more CFS design background:
>
> Thanks for this excellent explanation. Things are much clearer now to
> me. I just want to clarify one thing below:
>
> > >
On Mon, May 14, 2007 at 01:10:51PM +0200, Ingo Molnar wrote:
> but let me give you some more CFS design background:
Thanks for this excellent explanation. Things are much clearer now to
me. I just want to clarify one thing below:
> > 2. Preemption granularity - sysctl_sched_granularity
[snip]
>
* William Lee Irwin III <[EMAIL PROTECTED]> wrote:
> [...] I'm suspicious of EEVDF's timekeeping now as well.
well, EEVDF is a paper-only academic scheduler, one out of thousands
that never touched real hardware. For nearly every mathematically
possible scheduling algorithm i suspect there's a
* William Lee Irwin III <[EMAIL PROTECTED]> wrote:
> On Mon, May 14, 2007 at 12:31:20PM +0200, Ingo Molnar wrote:
> > please clarify - exactly what is a mistake? Thanks,
>
> The variability in ->fair_clock advancement rate was the mistake, at
> least according to my way of thinking. [...]
you
On Mon, May 14, 2007 at 04:05:00AM -0700, William Lee Irwin III wrote:
>> The variability in ->fair_clock advancement rate was the mistake, at
>> least according to my way of thinking. The queue's virtual time clock
>> effectively stops under sufficiently high load, possibly literally in
>> the eve
On Mon, May 14, 2007 at 04:05:00AM -0700, William Lee Irwin III wrote:
> The variability in ->fair_clock advancement rate was the mistake, at
> least according to my way of thinking. The queue's virtual time clock
> effectively stops under sufficiently high load, possibly literally in
> the event o
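A back-of-the-envelope illustration of the "effectively stops" point, assuming the fair clock is kept in integer nanoseconds and advances inversely to the number of runnable nice-0 tasks: with a 1 ms tick and a million runnable tasks,

    \[ \Delta\,\mathrm{fair\_clock} \approx \frac{1\,\mathrm{ms}}{10^{6}} = 1\,\mathrm{ns} \]

per tick, and once the increment falls below the clock's resolution, integer truncation rounds it down to zero, at which point the virtual clock stops advancing at all.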
* Srivatsa Vaddagiri <[EMAIL PROTECTED]> wrote:
> I have been brooding over how fair clock is computed/used in
> CFS and thought I would ask the experts to avoid wrong guesses!
hey, thanks for the interest :-)
> As I understand, fair_clock is a monotonically increasing clock which
> adva
On Mon, May 14, 2007 at 02:03:58PM +0530, Srivatsa Vaddagiri wrote:
>>> I have been brooding over how fair clock is computed/used in
>>> CFS and thought I would ask the experts to avoid wrong guesses!
>>> As I understand, fair_clock is a monotonically increasing clock which
>>> advances at a pac
* William Lee Irwin III <[EMAIL PROTECTED]> wrote:
> On Mon, May 14, 2007 at 02:03:58PM +0530, Srivatsa Vaddagiri wrote:
> > I have been brooding over how fair clock is computed/used in
> > CFS and thought I would ask the experts to avoid wrong guesses!
> > As I understand, fair_clock is a mo
On Mon, May 14, 2007 at 02:03:58PM +0530, Srivatsa Vaddagiri wrote:
> I have been brooding over how fair clock is computed/used in
> CFS and thought I would ask the experts to avoid wrong guesses!
> As I understand, fair_clock is a monotonically increasing clock which
> advances at a pace inve