Jarek Poplawski wrote:
On 16-10-2007 03:16, Peter Williams wrote:
...
I'd suggest that we modify sched_rr_get_interval() to return -EINVAL
(with *interval set to zero) if the target task is not SCHED_RR. That
way we can save a lot of unnecessary code. I'll work on a patch.
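A rough sketch of what that suggestion would look like inside the 2.6-era
sys_sched_rr_get_interval(), after the target task p has been looked up
(an illustration of the proposal, not the actual patch; the surrounding
names follow the real function):

	if (p->policy != SCHED_RR) {
		struct timespec t = { 0, 0 };

		retval = -EINVAL;
		if (copy_to_user(interval, &t, sizeof(t)))
			retval = -EFAULT;
		goto out_unlock;
	}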
...
I
Jarek Poplawski wrote:
On 13-10-2007 03:29, Peter Williams wrote:
Jarek Poplawski wrote:
On 12-10-2007 00:23, Peter Williams wrote:
...
The reason I was going that route was for modularity (which helps
when adding plugsched patches). I'll submit a revised patch for
consideration.
...
Jarek Poplawski wrote:
On 12-10-2007 00:23, Peter Williams wrote:
...
The reason I was going that route was for modularity (which helps when
adding plugsched patches). I'll submit a revised patch for consideration.
...
IMHO, it looks like modularity could suck here:
+static unsigne
Dmitry Adamushko wrote:
On 11/10/2007, Ingo Molnar <[EMAIL PROTECTED]> wrote:
* Peter Williams <[EMAIL PROTECTED]> wrote:
-#define MIN_TIMESLICE	max(5 * HZ / 1000, 1)
-#define DEF_TIMESLICE	(100 * HZ / 1000)
hm, this got removed by Dmitry quite s
the process moves all the code
associated with static_prio_timeslice() to sched_rt.c which is the only
place where it now has relevance.
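For reference, the function being moved looked approximately like this in
the 2.6.21 sched.c (recalled from the source, so details may be slightly
off):

	#define SCALE_PRIO(x, prio) \
		max(x * (MAX_PRIO - prio) / (MAX_USER_PRIO / 2), MIN_TIMESLICE)

	static unsigned int static_prio_timeslice(int static_prio)
	{
		if (static_prio < NICE_TO_PRIO(0))
			return SCALE_PRIO(DEF_TIMESLICE * 4, static_prio);
		else
			return SCALE_PRIO(DEF_TIMESLICE, static_prio);
	}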
Signed-off-by: Peter Williams <[EMAIL PROTECTED]>
Peter
--
Peter Williams [EMAIL PROTECTED]
"Learning, n.
non urgent) patch that I sent on the 15th of August has been applied.
Signed-off-by: Peter Williams <[EMAIL PROTECTED]>
Peter
--
Peter Williams [EMAIL PROTECTED]
"Learning, n. The kind of ignorance distinguishing the studious."
-- Ambrose Bierce
Ingo Molnar wrote:
* Peter Williams <[EMAIL PROTECTED]> wrote:
At the moment, balance_tasks() provides low level functionality for
both
move_tasks() and move_one_task() (indirectly) via the load_balance()
function (in the sched_class interface) which also provides dual
functionality.
kernel.
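A sketch of the split being described (signatures assumed, not quoted
from the patch): sched_class grows separate hooks for bulk moves and for
moving a single task, instead of one dual-purpose load_balance():

	struct sched_class {
		/* ... */
		unsigned long (*load_balance)(struct rq *this_rq, int this_cpu,
					      struct rq *busiest,
					      unsigned long max_load_move,
					      struct sched_domain *sd,
					      enum cpu_idle_type idle,
					      int *all_pinned);
		int (*move_one_task)(struct rq *this_rq, int this_cpu,
				     struct rq *busiest,
				     struct sched_domain *sd,
				     enum cpu_idle_type idle);
	};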
Signed-off-by: Peter Williams <[EMAIL PROTECTED]>
Peter
--
Peter Williams [EMAIL PROTECTED]
"Learning, n. The kind of ignorance distinguishing the studious."
-- Ambrose Bierce
diff -r 90691a597f06 include/linux/sched.h
--- a/include/l
CONFIG_FAIR_GROUP_SCHED is set. This should
preserve the effect of helping spread groups' higher priority tasks
around the available CPUs while improving system performance when
CONFIG_FAIR_GROUP_SCHED isn't set.
Signed-off-by: Peter Williams <[EMAIL PROTECTED]>
Peter
-
file.
Is it correct?
Peter
--
Peter Williams [EMAIL PROTECTED]
"Learning, n. The kind of ignorance distinguishing the studious."
-- Ambrose Bierce
s accepted).
NB Since move_tasks() gets called with two run queue locks held even
small reductions in overhead are worthwhile.
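(The two-lock context is the usual double_rq_lock() pattern, shown here
simplified from the 2.6-era sched.c with the rq1 == rq2 case omitted:
the queues are locked in address order to avoid deadlock, so everything
inside move_tasks() runs with both queues pinned.)

	static void double_rq_lock(struct rq *rq1, struct rq *rq2)
	{
		if (rq1 < rq2) {
			spin_lock(&rq1->lock);
			spin_lock(&rq2->lock);
		} else {
			spin_lock(&rq2->lock);
			spin_lock(&rq1->lock);
		}
	}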
Signed-off-by: Peter Williams <[EMAIL PROTECTED]>
--
Peter Williams [EMAIL PROTECTED]
"Learning, n. The kind of ignoran
ff-by: Peter Williams <[EMAIL PROTECTED]>
--
Peter Williams [EMAIL PROTECTED]
"Learning, n. The kind of ignorance distinguishing the studious."
-- Ambrose Bierce
diff -r 622a128d084b kernel/sched.c
--- a/kernel/sched.c Mon Jul 30 21:54:37 2007 -0700
I've just been reviewing these patches and have spotted a couple of
errors that look like they were caused by fuzz during the patch process.
A patch that corrects the errors is attached.
Cheers
Peter
--
Peter Williams [EMAIL PROTECTED]
"Learning, n
Ingo Molnar wrote:
> * Peter Williams <[EMAIL PROTECTED]> wrote:
>
>> Probably the last one now that CFS is in the main line :-(.
>
> hm, why is CFS in mainline a problem?
It means a major rewrite of the plugsched interface and I'm not sure
that it's worth it (
Gene Heskett wrote:
> On Friday 13 July 2007, Peter Williams wrote:
>> Ingo Molnar wrote:
>>> * Gregory Haskins <[EMAIL PROTECTED]> wrote:
>>>> On Thu, 2007-07-12 at 14:07 +0200, Ingo Molnar wrote:
>>>>> * Gregory Haskins <[EMAIL PROTECTED]>
e found it extremely valuable to be able to bisect this
>> beast while working on the 21-22 port.
>
> we are working on something in this area :) Stay tuned ...
I've just been reviewing these patches and have spotted an error in the
file mm/slob.c at lines 500-501 whereby a non exis
ler will be ingosched (which is the normal scheduler).
The scheduler in force on a running system can be determined by the
contents of:
/proc/scheduler
Control parameters for the scheduler can be read/set via files in:
/sys/cpusched/<scheduler>/
Peter
--
Peter Williams
Siddha, Suresh B wrote:
On Tue, May 29, 2007 at 07:18:18PM -0700, Peter Williams wrote:
Siddha, Suresh B wrote:
I can try 32-bit kernel to check.
Don't bother. I just checked 2.6.22-rc3 and the problem is not present
which means something between rc2 and rc3 has fixed the problem. I ha
William Lee Irwin III wrote:
On Wed, May 30, 2007 at 10:09:28AM +1000, Peter Williams wrote:
So what you're saying is that you think dynamic priority (or its
equivalent) should be used for load balancing instead of static priority?
It doesn't do much in other schemes, but when f
Siddha, Suresh B wrote:
On Tue, May 29, 2007 at 04:54:29PM -0700, Peter Williams wrote:
I tried with various refresh rates of top too.. Do you see the issue
at runlevel 3 too?
I haven't tried that.
Do your spinners ever relinquish the CPU voluntarily?
Nope. Simple and plain while(1);
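(Written out in full, such a spinner is just:

	int main(void)
	{
		while (1)
			;	/* pure busy loop: never sleeps, never yields */
	}
)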
William Lee Irwin III wrote:
William Lee Irwin III wrote:
Lag should be considered in lieu of load because lag
On Sun, May 27, 2007 at 11:29:51AM +1000, Peter Williams wrote:
What's the definition of lag here?
Lag is the deviation of a task's allocated CPU time from the CPU tim
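In symbols (a standard formalization of that definition, not quoted from
the thread): for task i at time t,

	lag_i(t) = S_i(t) - s_i(t)

where S_i(t) is the CPU time an idealized, perfectly fair scheduler would
have allocated to the task by time t and s_i(t) is the CPU time it
actually received. Positive lag means the task is owed service; negative
lag means it has run ahead of its fair share.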
Siddha, Suresh B wrote:
On Thu, May 24, 2007 at 04:23:19PM -0700, Peter Williams wrote:
Siddha, Suresh B wrote:
On Thu, May 24, 2007 at 12:43:58AM -0700, Peter Williams wrote:
Further testing indicates that CONFIG_SCHED_MC is not implicated and
it's CONFIG_SCHED_SMT that's causing t
Peter Williams wrote:
Srivatsa Vaddagiri wrote:
On Sat, May 26, 2007 at 10:17:42AM +1000, Peter Williams wrote:
I don't think that ignoring cpu affinity is an option. Setting the
cpu affinity of tasks is a deliberate policy action on the part of
the system administrator and has
Srivatsa Vaddagiri wrote:
On Sat, May 26, 2007 at 10:17:42AM +1000, Peter Williams wrote:
I don't think that ignoring cpu affinity is an option. Setting the cpu
affinity of tasks is a deliberate policy action on the part of the
system administrator and has to be honoured.
mmm ..but
William Lee Irwin III wrote:
Srivatsa Vaddagiri wrote:
Ingo/Peter, any thoughts here? CFS and smpnice probably is "broken"
with respect to such example as above albeit for nice-based tasks.
On Sat, May 26, 2007 at 10:17:42AM +1000, Peter Williams wrote:
See above. I think that
ue for each
run queue and using that to modify find_busiest_group() and
find_busiest_queue() to be a bit smarter. But I'm not sure that it
would be worth the added complexity.
Peter
--
Peter Williams [EMAIL PROTECTED]
"Learning, n. The kind of i
Siddha, Suresh B wrote:
On Thu, May 24, 2007 at 12:43:58AM -0700, Peter Williams wrote:
Peter Williams wrote:
The relevant code, find_busiest_group() and find_busiest_queue(), has a
lot of code that is ifdefed by CONFIG_SCHED_MC and CONFIG_SCHED_SMT and,
as these macros were defined in the
Peter Williams wrote:
Peter Williams wrote:
Peter Williams wrote:
Dmitry Adamushko wrote:
On 18/05/07, Peter Williams <[EMAIL PROTECTED]> wrote:
[...]
One thing that might work is to jitter the load balancing interval a
bit. The reason I say this is that one of the characteristics
Dmitry Adamushko wrote:
On 22/05/07, Peter Williams <[EMAIL PROTECTED]> wrote:
> [...]
> Hum.. I guess, a 0/4 scenario wouldn't fit well in this explanation..
No, and I haven't seen one.
Well, I just took one of your calculated probabilities as something
you have reall
Peter Williams wrote:
Peter Williams wrote:
Dmitry Adamushko wrote:
On 18/05/07, Peter Williams <[EMAIL PROTECTED]> wrote:
[...]
One thing that might work is to jitter the load balancing interval a
bit. The reason I say this is that one of the characteristics of top
and gkrellm is tha
Peter Williams wrote:
Dmitry Adamushko wrote:
On 18/05/07, Peter Williams <[EMAIL PROTECTED]> wrote:
[...]
One thing that might work is to jitter the load balancing interval a
bit. The reason I say this is that one of the characteristics of top
and gkrellm is that they run at a more o
Dmitry Adamushko wrote:
On 18/05/07, Peter Williams <[EMAIL PROTECTED]> wrote:
[...]
One thing that might work is to jitter the load balancing interval a
bit. The reason I say this is that one of the characteristics of top
and gkrellm is that they run at a more or less constant interva
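A sketch of the jitter idea (illustration only, not a patch from the
thread; random32() is the 2.6-era in-kernel PRNG, any cheap random
source would do):

	unsigned long interval = sd->balance_interval;

	/* add up to ~12.5% random jitter so the balancing tick cannot
	 * stay phase-locked with tasks that wake at a constant period */
	interval += random32() % (interval / 8 + 1);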
Peter Williams wrote:
Ingo Molnar wrote:
* Peter Williams <[EMAIL PROTECTED]> wrote:
I've now done this test on a number of kernels: 2.6.21 and 2.6.22-rc1
with and without CFS; and the problem is always present. It's not
"nice" related as the all four tasks are run
Ingo Molnar wrote:
* Peter Williams <[EMAIL PROTECTED]> wrote:
I've now done this test on a number of kernels: 2.6.21 and 2.6.22-rc1
with and without CFS; and the problem is always present. It's not
"nice" related as the all four tasks are run at nice == 0.
could
Ingo Molnar wrote:
* Peter Williams <[EMAIL PROTECTED]> wrote:
Load balancing appears to be badly broken in this version. When I
started 4 hard spinners on my 2 CPU machine one ended up on one CPU
and the other 3 on the other CPU and they stayed there.
could you try to debug this
Ingo Molnar wrote:
* Peter Williams <[EMAIL PROTECTED]> wrote:
As usual, any sort of feedback, bugreport, fix and suggestion is more
than welcome,
Load balancing appears to be badly broken in this version. When I
started 4 hard spinners on my 2 CPU machine one ended up on one CPU
a
ded up on one CPU and
the other 3 on the other CPU and they stayed there.
Peter
--
Peter Williams [EMAIL PROTECTED]
"Learning, n. The kind of ignorance distinguishing the studious."
-- Ambrose Bierce
Esben Nielsen wrote:
On Tue, 8 May 2007, Peter Williams wrote:
Esben Nielsen wrote:
On Sun, 6 May 2007, Linus Torvalds wrote:
> > > On Sun, 6 May 2007, Ingo Molnar wrote:
> > > > * Linus Torvalds <[EMAIL PROTECTED]> wrote:
> > > > > So t
6 minutes is ridiculously long for a
scheduler and a simple test limiting time values to that value would not
break anything.
Except if you're measuring sleep times. I think that you'll find lots
of tasks sleep for more than 72 minutes.
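(For reference, the arithmetic behind such limits: a 32-bit counter wraps
at 2^32 ≈ 4.29e9, i.e. after about 4.3 seconds at nanosecond resolution
or about 71.6 minutes at microsecond resolution, which is presumably
where the ~72 minute figure comes from.)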
Peter
--
Peter Williams
ol parameters for the scheduler can be read/set via files in:
/sys/cpusched/<scheduler>/
Peter
--
Peter Williams [EMAIL PROTECTED]
"Learning, n. The kind of ignorance distinguishing the studious."
-- Ambrose Bierce
Neil Horman wrote:
On Sat, Apr 28, 2007 at 12:28:28AM +1000, Peter Williams wrote:
Neil Horman wrote:
On Fri, Apr 27, 2007 at 04:05:11PM +1000, Peter Williams wrote:
Damn, This is what happens when I try to do things too quickly. I missed one
spot in my last patch where I replaced skb with
Neil Horman wrote:
On Fri, Apr 27, 2007 at 04:05:11PM +1000, Peter Williams wrote:
Linus Torvalds wrote:
On Fri, 27 Apr 2007, Peter Williams wrote:
The 2.6.21 kernel is hanging during the post boot phase where various
daemons
are being started (not always the same daemon unfortunately
Linus Torvalds wrote:
On Fri, 27 Apr 2007, Peter Williams wrote:
The 2.6.21 kernel is hanging during the post boot phase where various daemons
are being started (not always the same daemon unfortunately).
This problem was not present in 2.6.21-rc7 and there is no oops or other
unusual output
Linus Torvalds wrote:
On Fri, 27 Apr 2007, Peter Williams wrote:
The 2.6.21 kernel is hanging during the post boot phase where various daemons
are being started (not always the same daemon unfortunately).
This problem was not present in 2.6.21-rc7 and there is no oops or other
unusual output
multiple years.
Of course, I could (very likely!) be full of it! ;-)
And won't be using any new scheduler on these computers anyhow as
that would involve bringing the system down to install the new kernel. :-)
Peter
--
Peter Williams [EMAIL PROTECTED
g X more
CPU bandwidth and there are simpler ways to give X more CPU if it needs
it. However, I think there's something seriously wrong if it needs the
-19 nice that I've heard mentioned. You might as well just run it as a
real time process.
Peter
--
Peter Williams
's normal
CPU use is low enough for it to be given high priority.
Just because the O(1) tried this model and failed doesn't mean that the
model is bad. O(1) was a flawed implementation of a good model.
Peter
PS Doing a kernel build in an xterm isn't an example of high enough
output to c
Peter Williams wrote:
Ingo Molnar wrote:
* Peter Williams <[EMAIL PROTECTED]> wrote:
I retract this suggestion as it's a very bad idea. It introduces the
possibility of starvation via the poor sods at the bottom of the
queue having their "on CPU" forever postponed and w
Ingo Molnar wrote:
* Peter Williams <[EMAIL PROTECTED]> wrote:
I retract this suggestion as it's a very bad idea. It introduces the
possibility of starvation via the poor sods at the bottom of the queue
having their "on CPU" forever postponed and we all know th
Peter Williams wrote:
Peter Williams wrote:
Ingo Molnar wrote:
your suggestion concentrates on the following scenario: if a task
happens to schedule in an 'unlucky' way and happens to hit a busy
period while there are many idle periods. Unless i misunderstood your
suggestion, t
Peter Williams wrote:
William Lee Irwin III wrote:
William Lee Irwin III wrote:
On Sat, Apr 21, 2007 at 10:23:07AM +1000, Peter Williams wrote:
If some form of precise timer was used (instead) to trigger
pre-emption then, where there is more than one task with the same
expected "on CPU&
e of service as opposed to a time at which anything should happen
or a number useful for predicting such. When service should begin more
properly depends on the other tasks in the system and a number of other
decisions that are part of the scheduling policy.
On Sat, Apr 21, 2007 at 10:23:07AM +1000,
William Lee Irwin III wrote:
On Fri, Apr 20, 2007 at 10:10:45AM +1000, Peter Williams wrote:
I have a suggestion I'd like to make that addresses both nice and
fairness at the same time. As I understand the basic principle behind
this scheduler it to work out a time by which a task should
Peter Williams wrote:
Ingo Molnar wrote:
- bugfix: use constant offset factor for nice levels instead of
sched_granularity_ns. Thus nice levels work even if someone sets
sched_granularity_ns to 0. NOTE: nice support is still naive, i'll
address the many nice level re
Peter Williams wrote:
Ingo Molnar wrote:
* Peter Williams <[EMAIL PROTECTED]> wrote:
- bugfix: use constant offset factor for nice levels instead of
sched_granularity_ns. Thus nice levels work even if someone sets
sched_granularity_ns to 0. NOTE: nice support is still naive
Ingo Molnar wrote:
* Peter Williams <[EMAIL PROTECTED]> wrote:
- bugfix: use constant offset factor for nice levels instead of
sched_granularity_ns. Thus nice levels work even if someone sets
sched_granularity_ns to 0. NOTE: nice support is still naive, i'll
address the many
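A sketch of what "constant offset factor" means here (the names and the
constant are assumptions, not CFS's actual code): each nice level's key
offset is derived from a fixed constant rather than from the
sched_granularity_ns tunable, so setting that tunable to 0 can no longer
collapse all nice levels together:

	#define NICE_OFFSET_NS	2000000LL	/* fixed per-level step; value assumed */

	static long long nice_offset(int nice)
	{
		return (long long)nice * NICE_OFFSET_NS;
	}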
Peter Williams wrote:
Willy Tarreau wrote:
On Fri, Apr 20, 2007 at 10:10:45AM +1000, Peter Williams wrote:
Ingo Molnar wrote:
- bugfix: use constant offset factor for nice levels instead of
sched_granularity_ns. Thus nice levels work even if someone sets
sched_granularity_ns to 0. NOTE
tial background processing intended not to ever disturb
userspace can be given priorities appropriate to it (perhaps even con's
SCHED_IDLEPRIO would make sense), and other, urgent processing can be
given priority over userspace altogether.
On Thu, Apr 19, 2007 at 09:50:19PM +1000, Peter Wil
Willy Tarreau wrote:
On Fri, Apr 20, 2007 at 10:10:45AM +1000, Peter Williams wrote:
Ingo Molnar wrote:
- bugfix: use constant offset factor for nice levels instead of
sched_granularity_ns. Thus nice levels work even if someone sets
sched_granularity_ns to 0. NOTE: nice support is still
e task's
average cpu use per scheduling cycle is counterintuitive but I believe
that (if you think about it) you'll see that it actually makes sense.
Peter
PS Some reordering of calculation order within the expressions might be
in order to keep them within the range of 32 bit arith
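An example of the kind of reordering meant (names and values
hypothetical): computing (a * b) / c can overflow 32 bits even when the
result fits, so split the multiplication around the divide:

	unsigned int scaled;

	/* instead of: scaled = (avg_per_cycle * weight) / cycle_len; */
	scaled = (avg_per_cycle / cycle_len) * weight
	       + ((avg_per_cycle % cycle_len) * weight) / cycle_len;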
to old habits of
make -j4 on uniprocessor and the like, and I expect that those on CFS and
Nicksched would also have similar experiences.
Peter
--
Peter Williams [EMAIL PROTECTED]
"Learning, n. The kind of ignorance distinguishing the studious."
Peter Williams wrote:
Con Kolivas wrote:
Ok, there are 3 known schedulers currently being "promoted" as solid
replacements for the mainline scheduler which address most of the
issues with mainline (and about 10 other ones not currently being
promoted). The main way they do this
nf
This is sounding very much like System V Release 4 (and descendants)
except that they call it SCHED_SYS and also give SCHED_NORMAL tasks that
are in system mode dynamic priorities in the SCHED_SYS range (to avoid
priority inversion, I believe).
Peter
--
Peter Williams
Ingo Molnar wrote:
* Peter Williams <[EMAIL PROTECTED]> wrote:
And my scheduler for example cuts down the amount of policy code and
code size significantly.
Yours is one of the smaller patches mainly because you perpetuate (or
you did in the last one I looked at) the (horrible to m
eral days but they're probably best
served by explicit allocation of processes to CPUs using the process
affinity mechanism.
Peter
--
Peter Williams [EMAIL PROTECTED]
"Learning, n. The kind of ignorance distinguishing the studious."
-- Ambrose Bierce
lers that can
achieve higher level scheduling policies. Versions of PLFS work on
Windows from user space by twiddling process priorities. Part of my
more recent work at Aurema had been involved in patching Linux's
scheduler so that nice worked more predictably so that we could
Peter Williams wrote:
William Lee Irwin III wrote:
Ingo Molnar wrote:
this is the second release of the CFS (Completely Fair Scheduler)
patchset, against v2.6.21-rc7:
http://redhat.com/~mingo/cfs-scheduler/sched-cfs-v2.patch
i'd like to thank everyone for the tremendous amount of fee
Chris Friesen wrote:
Peter Williams wrote:
Chris Friesen wrote:
Suppose I have a really high priority task running. Another very
high priority task wakes up and would normally preempt the first one.
However, there happens to be another cpu available. It seems like it
would be a win if we
Michael K. Edwards wrote:
On 4/17/07, Peter Williams <[EMAIL PROTECTED]> wrote:
The other way in which the code deviates from the original as that (for
a few years now) I no longer calculated CPU bandwidth usage directly.
I've found that the overhead is less if I keep a running ave
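One common shape for such a running average (a sketch; the shift
constant and names are assumptions, not Peter's actual code) is an
exponentially decayed mean kept in integer arithmetic:

	#define AVG_SHIFT 3	/* each new sample gets 1/8 of the weight */

	static inline void update_avg(unsigned long *avg, unsigned long sample)
	{
		/* avg <- (7/8) * avg + (1/8) * sample */
		*avg += ((long)(sample - *avg)) >> AVG_SHIFT;
	}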
William Lee Irwin III wrote:
Peter Williams wrote:
William Lee Irwin III wrote:
I was tempted to restart from scratch given Ingo's comments, but I
reconsidered and I'll be working with your code (and the German
students' as well). If everything has to change, so be it, but i
Peter Williams wrote:
William Lee Irwin III wrote:
I was tempted to restart from scratch given Ingo's comments, but I
reconsidered and I'll be working with your code (and the German
students' as well). If everything has to change, so be it, but it'll
still be a deriv
schedulers and it works well.
I found that 10 milliseconds was a good value for the initial chunk of
CPU for a newly forked process.
Peter
--
Peter Williams [EMAIL PROTECTED]
"Learning, n. The kind of ignorance distinguishing the studious."
-- Ambrose Bierce
Chris Friesen wrote:
Peter Williams wrote:
Chris Friesen wrote:
Scuse me if I jump in here, but doesn't the load balancer need some
way to figure out a) when to run, and b) which tasks to pull and
where to push them?
Yes but both of these are independent of the scheduler discipli
task is nice 19 it can expect to wait longer
to get onto the CPU than if it was nice 0.
Which I think is quite a reasonable requirement.
I'm pretty sure the stock scheduler falls short of both of these
guarantees though.
Peter
--
Peter Williams [EMAIL PROT
Ingo Molnar wrote:
* William Lee Irwin III <[EMAIL PROTECTED]> wrote:
On Tue, Apr 17, 2007 at 04:46:57PM +1000, Peter Williams wrote:
Have you considered using rq->raw_weighted_load instead of
rq->nr_running in calculating fair_clock? This would take the nice
value (or RT prio
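The suggestion, as a two-line sketch (the v1 CFS update is paraphrased
from memory, not quoted):

	/* was, approximately: */
	rq->fair_clock += delta_exec / rq->nr_running;
	/* suggested: weight by nice/RT load instead of bare task count */
	rq->fair_clock += delta_exec / rq->raw_weighted_load;

(hence the divide-by-zero caveat about the migration thread's zero
load_weight raised later in the thread).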
William Lee Irwin III wrote:
William Lee Irwin III wrote:
Comments on which directions you'd like this to go in these respects
would be appreciated, as I regard you as the current "project owner."
On Tue, Apr 17, 2007 at 06:00:06PM +1000, Peter Williams wrote:
I'd do sc
feature is that (in this pure form) it's starvation free.
However, if you fiddle with it and do things like giving bonus priority
boosts to interactive tasks it becomes susceptible to starvation. This
can be fixed by using an anti starvation mechanism such as SPA's
promotion scheme
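A sketch of such a promotion scheme in the spirit of SPA (structure
assumed; bitmap and per-task prio bookkeeping omitted): on each
promotion tick every waiting level moves up one priority, so interactive
bonuses can delay a task but never starve it:

	static void promote_waiters(struct prio_array *array)
	{
		int prio;

		/* walk best-to-worst so each level moves up exactly once */
		for (prio = MAX_RT_PRIO + 1; prio < MAX_PRIO; prio++)
			list_splice_init(&array->queue[prio],
					 &array->queue[prio - 1]);
	}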
Nick Piggin wrote:
On Tue, Apr 17, 2007 at 05:48:55PM +1000, Peter Williams wrote:
Nick Piggin wrote:
Other hints that it was a bad idea was the need to transfer time slices
between children and parents during fork() and exit().
I don't see how that has anything to do with dual arrays.
he v1 patch got - i could hardly keep up with just reading the
mails! Some of the stuff people addressed i couldnt implement yet, i
mostly concentrated on bugs, regressions and debuggability.
On Tue, Apr 17, 2007 at 04:46:57PM +1000, Peter Williams wrote:
Have you considered using rq->raw_weigh
William Lee Irwin III wrote:
On Tue, Apr 17, 2007 at 04:34:36PM +1000, Peter Williams wrote:
This doesn't make any sense to me.
For a start, exact simultaneous operation would be impossible to achieve
except with highly specialized architecture such as the long departed
transputer.
Nick Piggin wrote:
On Tue, Apr 17, 2007 at 04:23:37PM +1000, Peter Williams wrote:
Nick Piggin wrote:
And my scheduler for example cuts down the amount of policy code and
code size significantly.
Yours is one of the smaller patches mainly because you perpetuate (or
you did in the last one I
esigned to be used in the construction
of CPU scheduler tests. Of particular use is the aspin program which
can be used to launch tasks with specified sleep/wake characteristics.
Peter
--
Peter Williams [EMAIL PROTECTED]
"Learning, n. The kind of ignoran
alue (or RT priority) of the other tasks into account when determining
what's fair.
Peter
PS You'd have to change the migration thread's load_weight from 0 to 1
in order to prevent divide by zero without having to explicitly check
for it ever
Nick Piggin wrote:
Well I know people have had woes with the scheduler for ever (I guess that
isn't going to change :P). I think people generally lost a bit of interest
in trying to improve the situation because of the upstream problem.
Yes.
Peter
--
Peter Wil
Nick Piggin wrote:
On Tue, Apr 17, 2007 at 02:17:22PM +1000, Peter Williams wrote:
Nick Piggin wrote:
On Tue, Apr 17, 2007 at 04:29:01AM +0200, Mike Galbraith wrote:
On Tue, 2007-04-17 at 10:06 +1000, Peter Williams wrote:
Mike Galbraith wrote:
Demystify what? The casual observer need
Nick Piggin wrote:
On Tue, Apr 17, 2007 at 02:25:39PM +1000, Peter Williams wrote:
Nick Piggin wrote:
On Mon, Apr 16, 2007 at 04:10:59PM -0700, Michael K. Edwards wrote:
On 4/16/07, Peter Williams <[EMAIL PROTECTED]> wrote:
Note that I talk of run queues
not CPUs as I think a sh
Nick Piggin wrote:
On Mon, Apr 16, 2007 at 04:10:59PM -0700, Michael K. Edwards wrote:
On 4/16/07, Peter Williams <[EMAIL PROTECTED]> wrote:
Note that I talk of run queues
not CPUs as I think a shift to multiple CPUs per run queue may be a good
idea.
This observation of Peter's
Nick Piggin wrote:
On Tue, Apr 17, 2007 at 04:29:01AM +0200, Mike Galbraith wrote:
On Tue, 2007-04-17 at 10:06 +1000, Peter Williams wrote:
Mike Galbraith wrote:
Demystify what? The casual observer need only read either your attempt
at writing a scheduler, or my attempts at fixing the one
's going as
suggestions get folded in and bugs get fixed etc.
Thanks
Peter
--
Peter Williams [EMAIL PROTECTED]
"Learning, n. The kind of ignorance distinguishing the studious."
-- Ambrose Bierce
be approximately
equivalent to 0.5 seconds by changing some constants in the code.
As you can
imagine, mainline doesn't do very well in this case.
You should look back through the plugsched patches where many of these
ideas have been experimented with.
Peter
--
Peter Williams
Chris Friesen wrote:
Peter Williams wrote:
To my mind scheduling and load balancing are orthogonal and keeping
them that way simplifies things.
Scuse me if I jump in here, but doesn't the load balancer need some way
to figure out a) when to run, and b) which tasks to pull and where to
Al Boldi wrote:
Peter Williams wrote:
Al Boldi wrote:
Reducing the prio-level granularity may also be helpful;
Because some of the bit operations code makes it a bad idea to have
more than 160 priority levels, you're more or less limited to 60
priority levels for SCHED_OTHER tasks (a
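(The bit-operations constraint referred to: O(1)-style run queues keep
one bit per priority level and find the best level with an optimized
first-set-bit search, so the practical level count is bounded by what
that search is tuned for. Shapes recalled from the 2.6 source:

	struct prio_array {
		unsigned int nr_active;
		DECLARE_BITMAP(bitmap, MAX_PRIO + 1);	/* one bit per level */
		struct list_head queue[MAX_PRIO];
	};

	idx = sched_find_first_bit(array->bitmap);	/* best nonempty level */
)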
pt
at writing a scheduler, or my attempts at fixing the one we have, to see
that it was high time for someone with the necessary skills to step in.
Make that "someone with the necessary clout".
Now progress can happen, which was _not_ happening before.
This is true.
Al Boldi wrote:
Peter Williams wrote:
Al Boldi wrote:
Peter Williams wrote:
William Lee Irwin III wrote:
On Mon, Apr 16, 2007 at 11:06:56AM +1000, Peter Williams wrote:
PS I no longer read LKML (due to time constraints) and would
appreciate it if I could be CC'd on any e-mails sugge
. the stock load
balancing.
On Mon, Apr 16, 2007 at 03:09:31PM +1000, Peter Williams wrote:
Well a single run queue removes the need for load balancing but has
scalability issues on large systems. Personally, I think something in
between would be the best solution i.e. multiple run queues but
Al Boldi wrote:
Peter Williams wrote:
William Lee Irwin III wrote:
On Mon, Apr 16, 2007 at 11:06:56AM +1000, Peter Williams wrote:
PS I no longer read LKML (due to time constraints) and would appreciate
it if I could be CC'd on any e-mails suggesting scheduler changes.
PPS I'm jus
Ingo Molnar wrote:
* Peter Williams <[EMAIL PROTECTED]> wrote:
One more quick comment. The claim that there is no concept of time
slice in the new scheduler is only true in the sense of the rather
arcane implementation of time slices extant in the O(1) scheduler.
yeah. AFAIK most