Ingo Molnar wrote:
* Davide Libenzi <[EMAIL PROTECTED]> wrote:
> The same user nicing two different multi-threaded processes would
> expect a predictable CPU distribution too. [...]
i disagree that the user 'would expect' this. Some users might. Others
would say: 'my 10-thread rendering engine is more important than a
1-thread job because it's using 10 threads for a reason'.
Linus Torvalds wrote:
On Wed, 18 Apr 2007, Matt Mackall wrote:
> Why is X special? Because it does work on behalf of other processes?
> Lots of things do this. Perhaps a scheduler should focus entirely on
> the implicit and directed wakeup matrix and optimizing that
> instead[1].
I 100% agree - the p
Matt Mackall wrote:
On Wed, Apr 18, 2007 at 08:37:11AM +0200, Nick Piggin wrote:
> [2] It's trivial to construct two or more perfectly reasonable and
> desirable definitions of fairness that are mutually incompatible.
Probably not if you use common sense, and in the context of a replacement
for t
Hi Björn,
On Sat, Apr 21, 2007 at 01:29:41PM +0200, Björn Steinbrink wrote:
> Hi,
>
> On 2007.04.21 13:07:48 +0200, Willy Tarreau wrote:
> > > another thing i noticed: when using a -y larger than 1, then the window
> > > title (at least on Metacity) overlaps and thus the ocbench tasks have
> >
Hi,
On 2007.04.21 13:07:48 +0200, Willy Tarreau wrote:
> > another thing i noticed: when using a -y larger than 1, then the window
> > title (at least on Metacity) overlaps and thus the ocbench tasks have
> > different X overhead and get scheduled a bit asymmetrically as well. Is
> > there any
Hi Ingo,
I'm replying to your 3 mails at once.
On Sat, Apr 21, 2007 at 12:45:22PM +0200, Ingo Molnar wrote:
>
> * Ingo Molnar <[EMAIL PROTECTED]> wrote:
>
> > > It could become a useful scheduler benchmark !
> >
> > i just tried ocbench-0.3, and it is indeed very nice!
So as you've noticed ju
* Ingo Molnar <[EMAIL PROTECTED]> wrote:
> > It could become a useful scheduler benchmark !
>
> i just tried ocbench-0.3, and it is indeed very nice!
another thing i noticed: when using a -y larger than 1, then the window
title (at least on Metacity) overlaps and thus the ocbench tasks have
different X overhead and get scheduled a bit asymmetrically as well.
* Ingo Molnar <[EMAIL PROTECTED]> wrote:
> > The modified code is here :
> >
> > http://linux.1wt.eu/sched/orbitclock-0.2bench.tgz
> >
> > What is interesting to note is that it's easy to make X work a lot
> > (99%) by using 0 as the sleeping time, and it's easy to make the
> > process work
* Willy Tarreau <[EMAIL PROTECTED]> wrote:
> I hacked it a bit to make it accept two parameters :
> -R : time spent burning CPU cycles at each round
> -S : time spent getting a rest
>
> It now advances what it thinks is a second at each iteration, so that
> it makes it easy to compare its
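Willy's description boils down to a fixed run/sleep duty cycle. A minimal
standalone sketch of that loop (illustrative only; not the actual
orbitclock/ocbench source, and the option handling is simplified):

#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>
#include <unistd.h>

static long long now_us(void)
{
        struct timeval tv;

        gettimeofday(&tv, NULL);
        return tv.tv_sec * 1000000LL + tv.tv_usec;
}

int main(int argc, char **argv)
{
        long run_ms = argc > 1 ? atol(argv[1]) : 250;    /* like -R */
        long sleep_ms = argc > 2 ? atol(argv[2]) : 750;  /* like -S */
        long ticks = 0;

        for (;;) {
                long long start = now_us();

                while (now_us() - start < run_ms * 1000LL)
                        ;                        /* burn CPU cycles */
                usleep(sleep_ms * 1000);         /* take a rest */
                printf("tick %ld\n", ++ticks);   /* one simulated second */
        }
}

Two instances started with different run/sleep ratios should tick at
visibly different rates only if the scheduler distributes CPU unevenly,
which is what makes this shape of loop usable as a fairness test.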
* Bill Davidsen <[EMAIL PROTECTED]> wrote:
> All of my testing has been on desktop machines, although in most cases
> they were really loaded desktops which had load avg 10..100 from time
> to time, and none were low memory machines. Up to CFS v3 I thought
> nicksched was my winner, now CFSv3
On Fri, Apr 20, 2007 at 04:47:27PM -0400, Bill Davidsen wrote:
> Ingo Molnar wrote:
>
> >( Let's be cautious though: the jury is still out whether people actually
> > like this more than the current approach. While CFS feedback looks
> > promising after a whopping 3 days of it being released [
Ingo Molnar wrote:
( Let's be cautious though: the jury is still out whether people actually
like this more than the current approach. While CFS feedback looks
promising after a whopping 3 days of it being released [ ;-) ], the
test coverage of all 'fairness centric' schedulers, even cons
Mike Galbraith wrote:
On Tue, 2007-04-17 at 05:40 +0200, Nick Piggin wrote:
> On Tue, Apr 17, 2007 at 04:29:01AM +0200, Mike Galbraith wrote:
> > Yup, and progress _is_ happening now, quite rapidly.
> Progress as in progress on Ingo's scheduler. I still don't know how we'd
> decide when to replace the
William Lee Irwin III wrote:
William Lee Irwin III wrote:
> I'd further recommend making priority levels accessible to kernel threads
> that are not otherwise accessible to processes, both above and below
> user-available priority levels. Basically, if you can get SCHED_RR and
> SCHED_FIFO to coexist as
William Lee Irwin III wrote:
>> I'd further recommend making priority levels accessible to kernel threads
>> that are not otherwise accessible to processes, both above and below
>> user-available priority levels. Basically, if you can get SCHED_RR and
>> SCHED_FIFO to coexist as "intimate scheduler
On Thu, 2007-04-19 at 09:55 -0700, Davide Libenzi wrote:
> On Thu, 19 Apr 2007, Mike Galbraith wrote:
>
> > On Thu, 2007-04-19 at 09:09 +0200, Ingo Molnar wrote:
> > > * Mike Galbraith <[EMAIL PROTECTED]> wrote:
> > >
> > > > With a heavily reniced X (perfectly fine), that should indeed solve my
On Fri, Apr 20, 2007 at 02:52:38AM +0300, Jan Knutar wrote:
> On Thursday 19 April 2007 18:18, Ingo Molnar wrote:
> > * Willy Tarreau <[EMAIL PROTECTED]> wrote:
> > > You can certainly script it with -geometry. But it is the wrong
> > > application for this matter, because you benchmark X more than
On Thursday 19 April 2007 18:18, Ingo Molnar wrote:
> * Willy Tarreau <[EMAIL PROTECTED]> wrote:
> > You can certainly script it with -geometry. But it is the wrong
> > application for this matter, because you benchmark X more than
> > glxgears itself. What would be better is something like a line
On Thu, Apr 19, 2007 at 05:18:03PM +0200, Ingo Molnar wrote:
>
> * Willy Tarreau <[EMAIL PROTECTED]> wrote:
>
> > You can certainly script it with -geometry. But it is the wrong
> > application for this matter, because you benchmark X more than
> > glxgears itself. What would be better is somet
In article <[EMAIL PROTECTED]> you wrote:
> Top (VCPU maybe?)
>   User
>     Process
>       Thread
The problem with that is that not all schedulers might work on the user
level. You can think of batch/job, parent, group, session or namespace
levels. That would be, IMHO, a generic Top, with
On Thursday 19 April 2007, Ingo Molnar wrote:
>* Willy Tarreau <[EMAIL PROTECTED]> wrote:
>> You can certainly script it with -geometry. But it is the wrong
>> application for this matter, because you benchmark X more than
>> glxgears itself. What would be better is something like a line
>> rotatin
On Thursday 19 April 2007, Ingo Molnar wrote:
>* Willy Tarreau <[EMAIL PROTECTED]> wrote:
>> Good idea. The machine I'm typing from now has 1000 scheddos running
>> at +19, and 12 gears at nice 0. [...]
>>
>> From time to time, one of the 12 aligned gears will quickly perform a
>> full quarter of a round while others slowly turn by a few degrees.
On Thu, 19 Apr 2007, Mike Galbraith wrote:
> On Thu, 2007-04-19 at 09:09 +0200, Ingo Molnar wrote:
> > * Mike Galbraith <[EMAIL PROTECTED]> wrote:
> >
> > > With a heavily reniced X (perfectly fine), that should indeed solve my
> > > daily usage pattern nicely (always need godmode for shells, but not
> > > for mozilla and ilk. 50/50 split automatic without renice of entire
> > > gui)
On Thu, 19 Apr 2007, Ingo Molnar wrote:
> i disagree that the user 'would expect' this. Some users might. Others
> would say: 'my 10-thread rendering engine is more important than a
> 1-thread job because it's using 10 threads for a reason'. And the CFS
> feedback so far strengthens this point:
* Willy Tarreau <[EMAIL PROTECTED]> wrote:
> You can certainly script it with -geometry. But it is the wrong
> application for this matter, because you benchmark X more than
> glxgears itself. What would be better is something like a line
> rotating 360 degrees and doing some short stuff betwe
Hi Ingo,
On Thu, Apr 19, 2007 at 11:01:44AM +0200, Ingo Molnar wrote:
>
> * Willy Tarreau <[EMAIL PROTECTED]> wrote:
>
> > Good idea. The machine I'm typing from now has 1000 scheddos running
> > at +19, and 12 gears at nice 0. [...]
>
> > From time to time, one of the 12 aligned gears will qu
William Lee Irwin III wrote:
> * Andrew Morton <[EMAIL PROTECTED]> wrote:
> > Yes, there are potential compatibility problems. Example: a machine
> > with 100 busy httpd processes and suddenly a big gzip starts up from
> > console or cron.
[...]
On Thu, Apr 19, 2007 at 08:38:10AM +0200, Ingo Molnar wrote:
* Ingo Molnar <[EMAIL PROTECTED]> wrote:
> > I think a better approach would be to keep track of the rightmost
> > entry, set the key to the rightmost's key +1 and then simply insert
> > it there.
>
> yeah. I had that implemented at a stage but was trying to be too
> clever for my own good ;-
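The cached-rightmost idea reads roughly like this as a kernel-style
sketch (hypothetical names, written against the <linux/rbtree.h> API;
this is not the actual CFS patch, which used the temporary LLONG_MAX
key shown in the next snippet):

struct demo_task {
        long long fair_key;
        struct rb_node run_node;
};

struct fair_rq {
        struct rb_root tasks_timeline;
        struct demo_task *rightmost;    /* cached largest-key entry */
};

static void enqueue_at_back(struct fair_rq *rq, struct demo_task *p)
{
        struct rb_node **link = &rq->tasks_timeline.rb_node;
        struct rb_node *parent = NULL;

        /* Key the newcomer just behind the current rightmost entry. */
        p->fair_key = rq->rightmost ? rq->rightmost->fair_key + 1 : 0;

        /* Its key is now the largest, so descend right all the way. */
        while (*link) {
                parent = *link;
                link = &parent->rb_right;
        }
        rb_link_node(&p->run_node, parent, link);
        rb_insert_color(&p->run_node, &rq->tasks_timeline);
        rq->rightmost = p;
}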
* Esben Nielsen <[EMAIL PROTECTED]> wrote:
> > +	/*
> > +	 * Temporarily insert at the last position of the tree:
> > +	 */
> > +	p->fair_key = LLONG_MAX;
> > +	__enqueue_task_fair(rq, p);
> >  	p->on_rq = 1;
> > +
> > +	/*
> > +	 * Update the key to the real value, so that when al
On Wed, 18 Apr 2007, Ingo Molnar wrote:
> * Christian Hesse <[EMAIL PROTECTED]> wrote:
> > Hi Ingo and all,
> >
> > On Friday 13 April 2007, Ingo Molnar wrote:
> > > as usual, any sort of feedback, bugreports, fixes and suggestions are
> > > more than welcome,
> >
> > I just gave CFS a try on my system. From a user's po
* Willy Tarreau <[EMAIL PROTECTED]> wrote:
> Good idea. The machine I'm typing from now has 1000 scheddos running
> at +19, and 12 gears at nice 0. [...]
> From time to time, one of the 12 aligned gears will quickly perform a
> full quarter of a round while others slowly turn by a few degrees. I
On Thu, Apr 19, 2007 at 08:38:10AM +0200, Ingo Molnar wrote:
>
> * Andrew Morton <[EMAIL PROTECTED]> wrote:
>
> > > And yes, by fairly, I mean fairly among all threads as a base
> > > resource class, because that's what Linux has always done
> >
> > Yes, there are potential compatibility proble
* Davide Libenzi <[EMAIL PROTECTED]> wrote:
> > That's one reason why i dont think it's necessarily a good idea to
> > group-schedule threads, we dont really want to do a per thread group
> > percpu_alloc().
>
> It's still not clear to me how much overhead this will bring to the
> table, but
* Andrew Morton <[EMAIL PROTECTED]> wrote:
>> Yes, there are potential compatibility problems. Example: a machine
>> with 100 busy httpd processes and suddenly a big gzip starts up from
>> console or cron.
[...]
On Thu, Apr 19, 2007 at 08:38:10AM +0200, Ingo Molnar wrote:
> h. How about the
On Thu, 2007-04-19 at 09:09 +0200, Ingo Molnar wrote:
> * Mike Galbraith <[EMAIL PROTECTED]> wrote:
>
> > With a heavily reniced X (perfectly fine), that should indeed solve my
> > daily usage pattern nicely (always need godmode for shells, but not
> > for mozilla and ilk. 50/50 split automatic
On Thu, 2007-04-19 at 08:52 +0200, Mike Galbraith wrote:
> On Wed, 2007-04-18 at 23:48 +0200, Ingo Molnar wrote:
>
> > so my current impression is that we want per UID accounting to solve the
> > X problem, the kernel threads problem and the many-users problem, but
> > i'd not want to do it for
* Mike Galbraith <[EMAIL PROTECTED]> wrote:
> With a heavily reniced X (perfectly fine), that should indeed solve my
> daily usage pattern nicely (always need godmode for shells, but not
> for mozilla and ilk. 50/50 split automatic without renice of entire
> gui)
how about the first-approxima
On Wed, 2007-04-18 at 23:48 +0200, Ingo Molnar wrote:
> so my current impression is that we want per UID accounting to solve the
> X problem, the kernel threads problem and the many-users problem, but
> i'd not want to do it for threads just yet because for them there's not
> really any apparen
* Andrew Morton <[EMAIL PROTECTED]> wrote:
> > And yes, by fairly, I mean fairly among all threads as a base
> > resource class, because that's what Linux has always done
>
> Yes, there are potential compatibility problems. Example: a machine
> with 100 busy httpd processes and suddenly a big
On Thu, 19 Apr 2007 05:18:07 +0200 Nick Piggin <[EMAIL PROTECTED]> wrote:
> And yes, by fairly, I mean fairly among all threads as a base resource
> class, because that's what Linux has always done
Yes, there are potential compatibility problems. Example: a machine with
100 busy httpd processes
On Wed, Apr 18, 2007 at 10:49:45PM +1000, Con Kolivas wrote:
> On Wednesday 18 April 2007 22:13, Nick Piggin wrote:
> >
> > The kernel compile (make -j8 on 4 thread system) is doing 1800 total
> > context switches per second (450/s per runqueue) for cfs, and 670
> > for mainline. Going up to 20ms g
On Wed, Apr 18, 2007 at 07:48:21AM -0700, Linus Torvalds wrote:
>
>
> On Wed, 18 Apr 2007, Matt Mackall wrote:
> >
> > Why is X special? Because it does work on behalf of other processes?
> > Lots of things do this. Perhaps a scheduler should focus entirely on
> > the implicit and directed wakeu
Ingo Molnar wrote:
* Peter Williams <[EMAIL PROTECTED]> wrote:
> And my scheduler for example cuts down the amount of policy code and
> code size significantly.
Yours is one of the smaller patches mainly because you perpetuate (or
you did in the last one I looked at) the (horrible to my eyes) dual
Chris Friesen wrote:
Mark Glines wrote:
> One minor question: is it even possible to be completely fair on SMP?
> For instance, if you have a 2-way SMP box running 3 applications, one of
> which has 2 threads, will the threaded app have an advantage here? (The
> current system seems to try to keep eac
On Wed, 18 Apr 2007, Davide Libenzi wrote:
>
> I know, we agree there. But that did not fit my "Pirates of the Caribbean"
> quote :)
Ahh, I'm clearly not cultured enough, I didn't catch that reference.
Linus "yes, I've seen the movie, but it
apparently left more of a
Linus Torvalds wrote:
On Wed, 18 Apr 2007, Matt Mackall wrote:
> On Wed, Apr 18, 2007 at 07:48:21AM -0700, Linus Torvalds wrote:
> > And "fairness by euid" is probably a hell of a lot easier to do than
> > trying to figure out the wakeup matrix.
> For the record, you actually don't need to track a whole
On Wed, 18 Apr 2007, Linus Torvalds wrote:
> On Wed, 18 Apr 2007, Davide Libenzi wrote:
> >
> > "Perhaps on the rare occasion pursuing the right course demands an act of
> > unfairness, unfairness itself can be the right course?"
>
> I don't think that's the right issue.
>
> It's just that "f
On Wed, 18 Apr 2007, Ingo Molnar wrote:
> That's one reason why i dont think it's necessarily a good idea to
> group-schedule threads, we dont really want to do a per thread group
> percpu_alloc().
It's still not clear to me how much overhead this will bring to the
table, but I think (like Li
On Wednesday 18 April 2007 22:33, Con Kolivas wrote:
> On Wednesday 18 April 2007 22:14, Nick Piggin wrote:
> > On Wed, Apr 18, 2007 at 07:33:56PM +1000, Con Kolivas wrote:
> > > On Wednesday 18 April 2007 18:55, Nick Piggin wrote:
> > > > Again, for comparison 2.6.21-rc7 mainline:
> > > >
> > > >
* Davide Libenzi <[EMAIL PROTECTED]> wrote:
> I think Ingo's idea of a new sched_group to contain the generic
> parameters needed for the "key" calculation, works better than adding
> more fields to existing structures (that would, of course, host
> pointers to it). Otherwise I can already the
* Linus Torvalds <[EMAIL PROTECTED]> wrote:
> > perhaps a more fitting term would be 'precise group-scheduling'.
> > Within the lowest level task group entity (be that thread group or
> > uid group, etc.) 'precise scheduling' is equivalent to 'fairness'.
>
> Yes. Absolutely. Except I think tha
On Wed, 18 Apr 2007, Davide Libenzi wrote:
>
> "Perhaps on the rare occasion pursuing the right course demands an act of
> unfairness, unfairness itself can be the right course?"
I don't think that's the right issue.
It's just that "fairness" != "equal".
Do you think it "fair" to pay everyb
On Wed, 18 Apr 2007, Linus Torvalds wrote:
> For example, maybe we can approximate it by spreading out the statistics:
> right now you have things like
>
> - last_ran, wait_runtime, sum_wait_runtime..
>
> be per-thread things. Maybe some of those can be spread out, so that you
> put a part of
On Wed, 18 Apr 2007, Linus Torvalds wrote:
> I'm not arguing against fairness. I'm arguing against YOUR notion of
> fairness, which is obviously bogus. It is *not* fair to try to give out
> CPU time evenly!
"Perhaps on the rare occasion pursuing the right course demands an act of
unfairness,
On Wed, 18 Apr 2007, William Lee Irwin III wrote:
> Thinking of the scheduler as a CPU bandwidth allocator, this means
> handing out shares of CPU bandwidth to all users on the system, which
> in turn hand out shares of bandwidth to all sessions, which in turn
> hand out shares of bandwidth to all
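The hierarchy wli sketches is just a repeated even split, so a thread's
share is the product of 1/fanout at each level down from the machine. A
toy standalone illustration (names invented; nothing like this exists in
the patches under discussion):

#include <stdio.h>

struct node {
        const char *name;
        int nchildren;
        struct node *children;
};

/* Each level divides its share evenly among its children. */
static void walk(struct node *n, double share)
{
        printf("%-10s %6.2f%%\n", n->name, share * 100.0);
        for (int i = 0; i < n->nchildren; i++)
                walk(&n->children[i], share / n->nchildren);
}

int main(void)
{
        struct node threads[2] = { { "thread-a", 0, NULL },
                                   { "thread-b", 0, NULL } };
        struct node session = { "session", 2, threads };
        struct node user = { "user", 1, &session };

        walk(&user, 1.0);       /* the whole machine is 100% */
        return 0;
}

With one user owning one session owning two threads, each thread ends up
with 50%; add a second user and these threads drop to 25% each while the
new user's lone thread gets 50%, which is exactly the per-user fairness
Linus argues for elsewhere in the thread.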
* Linus Torvalds <[EMAIL PROTECTED]> wrote:
> For example, maybe we can approximate it by spreading out the
> statistics: right now you have things like
>
> - last_ran, wait_runtime, sum_wait_runtime..
>
> be per-thread things. [...]
yes, yes, yes! :) My thinking is "struct sched_group" embe
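A hedged sketch of the data layout being floated here (hypothetical
names, with standalone typedefs standing in for the kernel's u64/s64;
this is not the eventual implementation, just the shape under
discussion):

typedef long long s64;
typedef unsigned long long u64;

struct sched_group_stats {
        u64 last_ran;                   /* shared by all members */
        s64 wait_runtime;
        s64 sum_wait_runtime;
        unsigned int nr_members;
};

struct task_stats {
        s64 wait_runtime;               /* thread-local remainder */
        struct sched_group_stats *group;
};

/* A member's effective key blends its own history with the group's. */
static s64 effective_wait_runtime(const struct task_stats *t)
{
        return t->wait_runtime +
               t->group->wait_runtime / t->group->nr_members;
}

The point of the split is that fairness between groups is enforced on
the shared fields, while fairness inside a group only needs the cheap
per-thread remainder.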
On Wed, 18 Apr 2007, Ingo Molnar wrote:
>
> perhaps a more fitting term would be 'precise group-scheduling'. Within
> the lowest level task group entity (be that thread group or uid group,
> etc.) 'precise scheduling' is equivalent to 'fairness'.
Yes. Absolutely. Except I think that at least
Mark Glines wrote:
One minor question: is it even possible to be completely fair on SMP?
For instance, if you have a 2-way SMP box running 3 applications, one of
which has 2 threads, will the threaded app have an advantage here? (The
current system seems to try to keep each thread on a specific
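A quick back-of-the-envelope for Mark's 2-way case, assuming all four
threads stay runnable:

  2 CPUs = 200% total capacity, 4 runnable threads
  per-thread fairness:  200% / 4 = 50% per thread
      -> the 2-thread app gets 100%, each 1-thread app gets 50%
  per-process fairness: 200% / 3 = 66.7% per app
      -> the 2-thread app runs 2 x 33.3%; a single thread can
         still consume its 66.7%, since that fits on one CPU

So yes: under strict per-thread fairness the threaded app gets twice the
CPU of each single-threaded competitor.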
On Wed, 18 Apr 2007, Ingo Molnar wrote:
>
> But note that most of the reported CFS interactivity wins, as surprising
> as it might be, were due to fairness between _the same user's tasks_.
And *ALL* of the CFS interactivity *losses* and complaints have been
because it did the wrong thing _bet
On 4/18/07, Matt Mackall <[EMAIL PROTECTED]> wrote:
> For the record, you actually don't need to track a whole NxN matrix
> (or do the implied O(n**3) matrix inversion!) to get to the same
> result. You can converge on the same node weightings (ie dynamic
> priorities) by applying a damped function at ea
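One possible reading of the damped-function idea, sketched as a guess
rather than as Matt's actual proposal: on every wakeup, pull the waker's
and wakee's dynamic weights a fixed fraction toward each other, so the
weights relax toward a fixpoint of the wakeup graph with O(1) work per
event and no NxN bookkeeping:

#define DAMP 8          /* move 1/8th of the difference per event */

struct weight {
        long w;         /* dynamic priority estimate, fixed-point */
};

/* Called whenever 'waker' wakes 'wakee'. */
static void account_wakeup(struct weight *waker, struct weight *wakee)
{
        long delta = (wakee->w - waker->w) / DAMP;

        /* Wakeup relationships drag the two estimates toward
         * each other; total weight is conserved. */
        waker->w += delta;
        wakee->w -= delta;
}

Under this scheme a server like X that is constantly woken by
high-weight clients drifts upward on its own, with no explicit
special-casing.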
On Wed, 18 Apr 2007, Matt Mackall wrote:
> On Wed, Apr 18, 2007 at 07:48:21AM -0700, Linus Torvalds wrote:
> > And "fairness by euid" is probably a hell of a lot easier to do than
> > trying to figure out the wakeup matrix.
>
> For the record, you actually don't need to track a whole NxN matrix
On Wed, 18 Apr 2007 10:22:59 -0700 (PDT), Linus Torvalds <[EMAIL PROTECTED]>
wrote:
> So if you have 2 users on a machine running CPU hogs, you should *first*
> try to be fair among users. If one user then runs 5 programs, and the
> other one runs just 1, then the *one* program should get 50%
On Wed, 18 Apr 2007 10:22:59 -0700 (PDT)
Linus Torvalds <[EMAIL PROTECTED]> wrote:
> So if you have 2 users on a machine running CPU hogs, you should
> *first* try to be fair among users. If one user then runs 5 programs,
> and the other one runs just 1, then the *one* program should get 50%
> of
* William Lee Irwin III <[EMAIL PROTECTED]> wrote:
> It does largely achieve the sort of fairness it set out for itself as
> its design goal. One should also note that the queueing mechanism is
> more than flexible enough to handle prioritization by a number of
> different methods, and the lar
On Wed, Apr 18, 2007 at 10:22:59AM -0700, Linus Torvalds wrote:
> So I claim that anything that cannot be fair by user ID is actually really
> REALLY unfair. I think it's absolutely humongously STUPID to call
> something the "Completely Fair Scheduler", and then just be fair on a
> thread level.
* Ingo Molnar <[EMAIL PROTECTED]> wrote:
> In that sense 'fairness' is not global (and in fact it is almost
> _never_ a global property, as X runs under root uid [*]), it is only
> the most lowlevel scheduling machinery that can then be built upon.
> [...]
perhaps a more fitting term would be
* Linus Torvalds <[EMAIL PROTECTED]> wrote:
> The fact is:
>
> - "fairness" is *not* about giving everybody the same amount of CPU
>    time (scaled by some niceness level or not). Anybody who thinks
>    that is "fair" is just being silly and hasn't thought it through.
yeah, very much so.
On Wed, 18 Apr 2007, Matt Mackall wrote:
>
> On Wed, Apr 18, 2007 at 07:48:21AM -0700, Linus Torvalds wrote:
> > And "fairness by euid" is probably a hell of a lot easier to do than
> > trying to figure out the wakeup matrix.
>
> For the record, you actually don't need to track a whole NxN matr
* Christian Hesse <[EMAIL PROTECTED]> wrote:
> Hi Ingo and all,
>
> On Friday 13 April 2007, Ingo Molnar wrote:
> > as usual, any sort of feedback, bugreports, fixes and suggestions are
> > more than welcome,
>
> I just gave CFS a try on my system. From a user's point of view it
> looks good s
Hi Ingo and all,
On Friday 13 April 2007, Ingo Molnar wrote:
> as usual, any sort of feedback, bugreports, fixes and suggestions are
> more than welcome,
I just gave CFS a try on my system. From a user's point of view it looks good
so far. Thanks for your work.
However I found a problem: When t
On Wed, Apr 18, 2007 at 07:48:21AM -0700, Linus Torvalds wrote:
> And "fairness by euid" is probably a hell of a lot easier to do than
> trying to figure out the wakeup matrix.
For the record, you actually don't need to track a whole NxN matrix
(or do the implied O(n**3) matrix inversion!) to get
On Wed, 18 Apr 2007, Matt Mackall wrote:
>
> Why is X special? Because it does work on behalf of other processes?
> Lots of things do this. Perhaps a scheduler should focus entirely on
> the implicit and directed wakeup matrix and optimizing that
> instead[1].
I 100% agree - the perfect schedul
On Wed, Apr 18, 2007 at 12:55:25AM -0500, Matt Mackall wrote:
> Why are processes special? Should user A be able to get more CPU time
> for his job than user B by splitting it into N parallel jobs? Should
> we be fair per process, per user, per thread group, per session, per
> controlling terminal?
Chris Friesen wrote:
Peter Williams wrote:
> Chris Friesen wrote:
> > Suppose I have a really high priority task running. Another very
> > high priority task wakes up and would normally preempt the first one.
> > However, there happens to be another cpu available. It seems like it
> > would be a win if we
On Wednesday 18 April 2007 22:13, Nick Piggin wrote:
> On Wed, Apr 18, 2007 at 11:53:34AM +0200, Ingo Molnar wrote:
> > * Nick Piggin <[EMAIL PROTECTED]> wrote:
> > > So looking at elapsed time, a granularity of 100ms is just behind the
> > > mainline score. However it is using slightly less user t
On Wednesday 18 April 2007 22:14, Nick Piggin wrote:
> On Wed, Apr 18, 2007 at 07:33:56PM +1000, Con Kolivas wrote:
> > On Wednesday 18 April 2007 18:55, Nick Piggin wrote:
> > > Again, for comparison 2.6.21-rc7 mainline:
> > >
> > > 508.87user 32.47system 2:17.82elapsed 392%CPU
> > > 509.05user 32
On Wed, Apr 18, 2007 at 07:33:56PM +1000, Con Kolivas wrote:
> On Wednesday 18 April 2007 18:55, Nick Piggin wrote:
> > Again, for comparison 2.6.21-rc7 mainline:
> >
> > 508.87user 32.47system 2:17.82elapsed 392%CPU
> > 509.05user 32.25system 2:17.84elapsed 392%CPU
> > 508.75user 32.26system 2:17.
On Wed, Apr 18, 2007 at 11:53:34AM +0200, Ingo Molnar wrote:
>
> * Nick Piggin <[EMAIL PROTECTED]> wrote:
>
> > So looking at elapsed time, a granularity of 100ms is just behind the
> > mainline score. However it is using slightly less user time and
> > slightly more idle time, which indicates
* Andy Whitcroft <[EMAIL PROTECTED]> wrote:
> > as usual, any sort of feedback, bugreports, fixes and suggestions
> > are more than welcome,
>
> Pushed this through the test.kernel.org and nothing new blew up.
> Notably the kernbench figures are within expectations even on the
> bigger numa s
* Nick Piggin <[EMAIL PROTECTED]> wrote:
> > > 535.43user 30.62system 2:23.72elapsed 393%CPU
> >
> > Thanks for testing this! Could you please try this also with:
> >
> >    echo 1 > /proc/sys/kernel/sched_granularity
>
> 507.68user 31.87system 2:18.05elapsed 390%CPU
> 507.99user 31.93
On Wednesday 18 April 2007 18:55, Nick Piggin wrote:
> On Tue, Apr 17, 2007 at 11:59:00AM +0200, Ingo Molnar wrote:
> > * Nick Piggin <[EMAIL PROTECTED]> wrote:
> > > 2.6.21-rc7-cfs-v2
> > > 534.80user 30.92system 2:23.64elapsed 393%CPU
> > > 534.75user 31.01system 2:23.70elapsed 393%CPU
> > > 534.
On Tue, Apr 17, 2007 at 11:59:00AM +0200, Ingo Molnar wrote:
>
> * Nick Piggin <[EMAIL PROTECTED]> wrote:
>
> > 2.6.21-rc7-cfs-v2
> > 534.80user 30.92system 2:23.64elapsed 393%CPU
> > 534.75user 31.01system 2:23.70elapsed 393%CPU
> > 534.66user 31.07system 2:23.76elapsed 393%CPU
> > 534.56user 30
Matt Mackall wrote:
On Tue, Apr 17, 2007 at 03:59:02PM -0700, William Lee Irwin III wrote:
> On Tue, Apr 17, 2007 at 03:32:56PM -0700, William Lee Irwin III wrote:
> > I'm working with the following suggestion:
> > On Tue, Apr 17, 2007 at 09:07:49AM -0400, James Bruce wrote:
> > > Nonlinear is a must IMO. I
On Wed, Apr 18, 2007 at 01:55:34AM -0500, Matt Mackall wrote:
> On Wed, Apr 18, 2007 at 08:37:11AM +0200, Nick Piggin wrote:
> > I don't know how that supports your argument for unfairness,
>
> I never had such an argument. I like fairness.
>
> My argument is that -you- don't have an argument for
On Wed, Apr 18, 2007 at 08:37:11AM +0200, Nick Piggin wrote:
> I don't know how that supports your argument for unfairness,
I never had such an argument. I like fairness.
My argument is that -you- don't have an argument for making fairness a
-requirement-.
> processes are special only because th
On Wed, Apr 18, 2007 at 12:55:25AM -0500, Matt Mackall wrote:
> On Wed, Apr 18, 2007 at 07:00:24AM +0200, Nick Piggin wrote:
> > > It's also not yet clear that a scheduler can't be taught to do the
> > > right thing with X without fiddling with nice levels.
> >
> > Being fair doesn't prevent that.
On Wed, Apr 18, 2007 at 07:00:24AM +0200, Nick Piggin wrote:
> > It's also not yet clear that a scheduler can't be taught to do the
> > right thing with X without fiddling with nice levels.
>
> Being fair doesn't prevent that. Implicit unfairness is wrong though,
> because it will bite people.
>
Peter Williams wrote:
Chris Friesen wrote:
> Suppose I have a really high priority task running. Another very high
> priority task wakes up and would normally preempt the first one.
> However, there happens to be another cpu available. It seems like it
> would be a win if we moved one of those tas
On Tue, Apr 17, 2007 at 11:38:31PM -0500, Matt Mackall wrote:
> On Wed, Apr 18, 2007 at 05:15:11AM +0200, Nick Piggin wrote:
> >
> > I don't know why this would be a useful feature (of course I'm talking
> > about processes at the same nice level). One of the big problems with
> > the current sche
On Wed, Apr 18, 2007 at 05:15:11AM +0200, Nick Piggin wrote:
> On Tue, Apr 17, 2007 at 04:39:54PM -0500, Matt Mackall wrote:
> > On Tue, Apr 17, 2007 at 09:01:55AM +0200, Nick Piggin wrote:
> > > On Mon, Apr 16, 2007 at 11:26:21PM -0700, William Lee Irwin III wrote:
> > > > On Mon, Apr 16, 2007 at
On Tue, Apr 17, 2007 at 11:16:54PM +1000, Peter Williams wrote:
> Nick Piggin wrote:
> >I don't like the timeslice based nice in mainline. It's too nasty
> >with latencies. nicksched is far better in that regard IMO.
> >
> >But I don't know how you can assert a particular way is the best way
> >to
On Tue, 17 Apr 2007, William Lee Irwin III wrote:
> 100**(1/39.0) ~= 1.12534 may do if so, but it seems a little weak, and
> even 1000**(1/39.0) ~= 1.19378 still seems weak.
>
> I suspect that in order to get low nice numbers strong enough without
> making high nice numbers too strong something s
On Wed, 2007-04-18 at 05:56 +0200, Nick Piggin wrote:
> On Wed, Apr 18, 2007 at 05:45:20AM +0200, Mike Galbraith wrote:
> > On Wed, 2007-04-18 at 05:15 +0200, Nick Piggin wrote:
> > >
> > >
> > > So on what basis would you allow unfairness? On the basis that it doesn't
> > > seem to harm anyone? I
On Tue, Apr 17, 2007 at 09:07:49AM -0400, James Bruce wrote:
>>> Nonlinear is a must IMO. I would suggest X = exp(ln(10)/10) ~= 1.2589
>>> That value has the property that a nice=10 task gets 1/10th the cpu of a
>>> nice=0 task, and a nice=20 task gets 1/100 of nice=0. I think that
>>> would be f
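The geometry is easy to check numerically. A standalone snippet (compile
with -lm) that reproduces James' 1.2589 base and the resulting shares;
swapping in the weaker bases wli mentions above, 100^(1/39) ~= 1.1253 or
1000^(1/39) ~= 1.1938, only changes the constant:

#include <math.h>
#include <stdio.h>

int main(void)
{
        double base = exp(log(10.0) / 10.0);    /* ~1.2589 */

        for (int nice = -20; nice <= 15; nice += 5)
                printf("nice %+3d -> %9.4f x the nice-0 share\n",
                       nice, pow(base, -nice));

        printf("nice 10 / nice 0 = %.4f (expect 0.1000)\n", pow(base, -10));
        printf("nice 20 / nice 0 = %.4f (expect 0.0100)\n", pow(base, -20));
        return 0;
}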
On Wed, Apr 18, 2007 at 05:45:20AM +0200, Mike Galbraith wrote:
> On Wed, 2007-04-18 at 05:15 +0200, Nick Piggin wrote:
> > On Tue, Apr 17, 2007 at 04:39:54PM -0500, Matt Mackall wrote:
> > >
> > > I'm a big fan of fairness, but I think it's a bit early to declare it
> > > a mandatory feature. Bou
On Wed, 2007-04-18 at 05:15 +0200, Nick Piggin wrote:
> On Tue, Apr 17, 2007 at 04:39:54PM -0500, Matt Mackall wrote:
> >
> > I'm a big fan of fairness, but I think it's a bit early to declare it
> > a mandatory feature. Bounded unfairness is probably something we can
> > agree on, ie "if we decid
On Tue, Apr 17, 2007 at 04:39:54PM -0500, Matt Mackall wrote:
> On Tue, Apr 17, 2007 at 09:01:55AM +0200, Nick Piggin wrote:
> > On Mon, Apr 16, 2007 at 11:26:21PM -0700, William Lee Irwin III wrote:
> > > On Mon, Apr 16, 2007 at 11:09:55PM -0700, William Lee Irwin III wrote:
> > > >> All things ar
Michael K. Edwards wrote:
On 4/17/07, Peter Williams <[EMAIL PROTECTED]> wrote:
> The other way in which the code deviates from the original is that (for
> a few years now) I no longer calculate CPU bandwidth usage directly.
> I've found that the overhead is less if I keep a running average of the
> si
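The snippet is cut off, but a running average of run/sleep interval
sizes is the classic cheap substitute for direct bandwidth accounting.
A hedged sketch of that technique (exactly which quantities Peter
averages is an assumption here):

#define EWMA_SHIFT 3    /* each new sample contributes 1/8 */

struct run_avg {
        unsigned long avg_run_ns;
        unsigned long avg_sleep_ns;
};

/* Fixed-point exponentially weighted moving average. */
static unsigned long ewma(unsigned long avg, unsigned long sample)
{
        return avg - (avg >> EWMA_SHIFT) + (sample >> EWMA_SHIFT);
}

static void account_interval(struct run_avg *ra,
                             unsigned long ran_ns, unsigned long slept_ns)
{
        ra->avg_run_ns = ewma(ra->avg_run_ns, ran_ns);
        ra->avg_sleep_ns = ewma(ra->avg_sleep_ns, slept_ns);
}

/* Estimated bandwidth = run / (run + sleep), in parts per 1024. */
static unsigned long bandwidth_estimate(const struct run_avg *ra)
{
        unsigned long total = ra->avg_run_ns + ra->avg_sleep_ns;

        return total ? (ra->avg_run_ns * 1024UL) / total : 0;
}

The update costs a couple of shifts and adds per event, which is the
overhead win over recomputing a true ratio each time.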
William Lee Irwin III wrote:
Peter Williams wrote:
> William Lee Irwin III wrote:
> > I was tempted to restart from scratch given Ingo's comments, but I
> > reconsidered and I'll be working with your code (and the German
> > students' as well). If everything has to change, so be it, but it'll
> > still be a deri
> Peter Williams wrote:
> >William Lee Irwin III wrote:
> >>I was tempted to restart from scratch given Ingo's comments, but I
> >>reconsidered and I'll be working with your code (and the German
> >>students' as well). If everything has to change, so be it, but it'll
> >>still be a derived work. It