> i'd suggest we go with what we have now
Ok - I'll try this against 2.6.23-rc8-mm2 and, if it goes well,
send it in a couple of hours.
> We cannot merge it via sched.git because it has some containers
> (or other?) dependencies, right?
This sched_load_balance flag patch collides with the cgroup
(ak
* Paul Jackson <[EMAIL PROTECTED]> wrote:
> My plan had been to send Andrew something on Wednesday
> next week (five days from now), either what we have now,
> or with the improved cpuset-to-sched API, if Nick and I
> can work through that (well, if Nick can figure out what
> I broke, while I'm o
Ingo wrote:
> please resend the base patch to Andrew (or better yet, a combo patch
> that Andrew can apply and which just does it all).
I'm reluctant to do so right now, because I will be off the air
for three days, starting in one more day, and don't like
firing off stuff just before I vanish.
Though
> As I stated in this current patch in the diffstat section after the
> '---' marker, this current patch applies to the base patch of Subject:
>
> [PATCH] cpuset and sched domains: sched_load_balance flag
>
> You probably didn't pick up that base patch because Nick and I are
> still haggling over it. Well ... we'
ense.
As I stated in this current patch in the diffstat section after the
'---' marker, this current patch applies to the base patch of Subject:
[PATCH] cpuset and sched domains: sched_load_balance flag
You probably didn't pick up that base patch because Nick and I are
still haggling over it.
> Without (5), every read or write system call on a per-cpuset special
> file 'sched_load_balance' failed, EINVAL.
>
> Signed-off-by: Paul Jackson <[EMAIL PROTECTED]>
>
> ---
>
> Andrew,
>
> These fixes go right after the patch they fix:
> [PATCH] cpuset and sched domains: sched_load_balance flag
I'm getting 100% rejects from this ...
Nick wrote:
> So if a new pdflush is spawned, it gets moved to some cpuset? That
> probably isn't something these realtime systems want to do (i.e. the
> non-realtime portion probably doesn't want to have any sort of scheduler
> or even worry about cpusets at all).
No - the new pdflush is put in t
Nick wrote:
> There won't be any CPU cycles used, if the tasks are paused (surely
> they're not spin waiting).
Consider the case when there are two, smaller, non-overlapping cpusets
with active jobs, and one larger cpuset, covering both those smaller
ones, with only paused tasks.
If we realize we
On Wednesday 03 October 2007 22:17, Paul Jackson wrote:
> Nick wrote:
> > OK, so I don't exactly understand you either. To make it simple, can
> > you give a concrete example of a cpuset hierarchy that wouldn't
> > work?
>
> It's more a matter of knowing how my third party batch scheduler
> coders think.
On Wednesday 03 October 2007 22:41, Paul Jackson wrote:
> > pdflush
> > is not pinned at all and can be dynamically created and destroyed. Ditto
> > for kjournald, as well as many others.
>
> Whatever is not pinned is moved out of the top cpuset, on the kind of
> systems I'm most familiar with. They are put in a smaller cpuset,
> with load balancing, that is sized ...
> pdflush
> is not pinned at all and can be dynamically created and destroyed. Ditto
> for kjournald, as well as many others.
Whatever is not pinned is moved out of the top cpuset, on the kind of
systems I'm most familiar with. They are put in a smaller cpuset, with
load balancing, that is sized ...
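
A minimal sketch (not from the thread) of what that move amounts to in
practice: the thread's pid is written into the 'tasks' file of the
smaller, load-balanced cpuset.  It assumes the cpuset filesystem is
mounted at /dev/cpuset; the helper name and the cpuset name passed to
it are made up for illustration.

/*
 * Sketch only: move one task (e.g. a freshly spawned pdflush) into a
 * smaller, load-balanced cpuset by writing its pid into that cpuset's
 * 'tasks' file.  Assumes a /dev/cpuset mount; names are illustrative.
 */
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>

static int move_to_cpuset(pid_t pid, const char *cpuset_name)
{
        char path[128];
        FILE *f;

        snprintf(path, sizeof(path), "/dev/cpuset/%s/tasks", cpuset_name);
        f = fopen(path, "w");
        if (!f)
                return -1;
        fprintf(f, "%d\n", (int)pid);   /* the kernel takes one pid per write */
        return fclose(f);               /* 0 on success */
}

int main(int argc, char **argv)
{
        /* usage: move_task <pid> <cpuset-name> */
        return argc == 3 ? move_to_cpuset((pid_t)atoi(argv[1]), argv[2]) : 1;
}
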
On Wednesday 03 October 2007 22:14, Paul Jackson wrote:
> > These are what I'm worried about, and things like kswapd, pdflush,
> > could definitely use a huge amount of CPU.
> >
> > If you are interested in hard partitioning the system, you most
> > definitely want these things to be balanced across the non-isolated
> > CPUs.
Nick wrote:
> OK, so I don't exactly understand you either. To make it simple, can
> you give a concrete example of a cpuset hierarchy that wouldn't
> work?
It's more a matter of knowing how my third party batch scheduler
coders think. They will be off in some corner of their code with a
cpuset i
> These are what I'm worried about, and things like kswapd, pdflush,
> could definitely use a huge amount of CPU.
>
> If you are interested in hard partitioning the system, you most
> definitely want these things to be balanced across the non-isolated
> CPUs.
But these guys are pinned anyway (or
On Wednesday 03 October 2007 21:38, Paul Jackson wrote:
> > OK, so to really do anything different (from a non-partitioned setup),
> > you would need to set sched_load_balance=0 for the root cpuset?
> > Suppose you do that to hard partition the machine, what happens to
> > newly created tasks like
to rebuild scheduler domains
Without (5), every read or write system call on a per-cpuset
special file 'sched_load_balance' failed, EINVAL.
Signed-off-by: Paul Jackson <[EMAIL PROTECTED]>
---
Andrew,
These fixes go right after the patch they fix:
[PATCH] cpuset and sched domains: sched_load_balance flag
> OK, so to really do anything different (from a non-partitioned setup),
> you would need to set sched_load_balance=0 for the root cpuset?
Yup - exactly. In fact one code fragment in my patch highlights this:
/* Special case for the 99% of systems with one, full, sched domain */
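
For context, here is a paraphrased sketch (not the patch text) of how
that special case reads in the rebuild path.  The helper names
(is_sched_load_balance(), top_cpuset) and the (ndoms, doms) calling
convention of partition_sched_domains() are assumed from the patch
under discussion, so this only makes sense inside kernel/cpuset.c:

static void rebuild_sched_domains_sketch(void)
{
        cpumask_t *doms;        /* one CPU mask per sched domain */
        int ndoms;

        /* Special case for the 99% of systems with one, full, sched domain */
        if (is_sched_load_balance(&top_cpuset)) {
                ndoms = 1;
                doms = kmalloc(sizeof(cpumask_t), GFP_KERNEL);
                if (!doms)
                        return;
                *doms = top_cpuset.cpus_allowed;
                partition_sched_domains(ndoms, doms);
                return;
        }

        /*
         * Otherwise walk the cpuset hierarchy and build one cpumask per
         * disjoint, load-balanced subtree of cpusets (omitted here).
         */
}
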
On Wednesday 03 October 2007 19:55, Paul Jackson wrote:
> > > Yeah -- cpusets are hierarchical. And some of the use cases for
> > > which cpusets are designed are hierarchical.
> >
> > But partitioning isn't.
>
> Yup. We've got a square peg and a round hole. An impedance mismatch.
> That's the root cause of this entire wibbling session, in my view.
> > Yeah -- cpusets are hierarchical. And some of the use cases for
> > which cpusets are designed are hierarchical.
>
> But partitioning isn't.
Yup. We've got a square peg and a round hole. An impedance mismatch.
That's the root cause of this entire wibbling session, in my view.
The essentia
On Wednesday 03 October 2007 17:25, Paul Jackson wrote:
> Nick wrote:
> > BTW. as far as the sched.c changes in your patch go, I much prefer
> > the partition_sched_domains API: http://lkml.org/lkml/2006/10/19/85
> >
> > The caller should manage everything itself, rather than
> > partition_sched_domains doing half of the memory allocation.
On Wednesday 03 October 2007 16:58, Paul Jackson wrote:
> > > Yup - it's asking for load balancing over that set. That is why it is
> > > called that. There's no idea here of better or worse load balancing,
> > > that's an internal kernel scheduler subtlety -- it's just a request
> > > that load balancing be done.
Nick wrote:
> BTW. as far as the sched.c changes in your patch go, I much prefer
> the partition_sched_domains API: http://lkml.org/lkml/2006/10/19/85
>
> The caller should manage everything itself, rather than
> partition_sched_domains doing half of the memory allocation.
Please take a closer look ...
> > Yup - it's asking for load balancing over that set. That is why it is
> > called that. There's no idea here of better or worse load balancing,
> > that's an internal kernel scheduler subtlety -- it's just a request that
> > load balancing be done.
>
> OK, if it prohibits balancing when sched
On Tuesday 02 October 2007 04:15, Paul Jackson wrote:
> Nick wrote:
> > which you could equally achieve by adding
> > a second set of sched domains (and the global domains could keep
> > globally balancing).
>
> Hmmm ... this could be the key to this discussion.
>
> Nick - can two sched domains overlap? And if they do, what does that
> mean ...
On Monday 01 October 2007 13:42, Paul Jackson wrote:
> Nick wrote:
> > Moreover, sched_load_balance doesn't really sound like a good name
> > for asking for a partition.
>
> Yup - it's not a good name for asking for a partition.
>
> That's because it isn't asking for a partition.
>
> It's asking for load balancing over the CPUs in the cpuset so marked.
Thanks for the review, Randy. Good comments.
> > Acked-by: Paul Jackson <[EMAIL PROTECTED]>
>
> Are there some attributions missing, else S-O-B ?
Yup - I should have written this line as:
Signed-off-by: Paul Jackson <[EMAIL PROTECTED]>
> > +static int cpusets_overlap(struct cpuset *a,
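
The hunk above is cut off by the archive.  A plausible reading of that
helper, assuming it simply reports whether the CPU masks of two
cpusets intersect (cpus_intersects() being the cpumask primitive of
that kernel generation):

/* Plausible completion of the truncated hunk, for readability only. */
static int cpusets_overlap(struct cpuset *a, struct cpuset *b)
{
        return cpus_intersects(a->cpus_allowed, b->cpus_allowed);
}
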
On Sun, 30 Sep 2007 03:44:03 -0700 Paul Jackson wrote:
> From: Paul Jackson <[EMAIL PROTECTED]>
>
...
>
> Acked-by: Paul Jackson <[EMAIL PROTECTED]>
Are there some attributions missing, else S-O-B ?
> ---
>
> Andrew - this patch goes right after your *-mm patch:
> task-containers-enable-con
Nick wrote:
> which you could equally achieve by adding
> a second set of sched domains (and the global domains could keep
> globally balancing).
Hmmm ... this could be the key to this discussion.
Nick - can two sched domains overlap? And if they do, what does that
mean on any user or application ...
Nick wrote:
> Moreover, sched_load_balance doesn't really sound like a good name
> for asking for a partition.
Yup - it's not a good name for asking for a partition.
That's because it isn't asking for a partition.
It's asking for load balancing over the CPUs in the cpuset so marked.
> It's mor
On Monday 01 October 2007 04:07, Paul Jackson wrote:
> Nick wrote:
> > The user should just be able to specify exactly the partitioning of
> > tasks required, and cpusets should ask the scheduler to do the best
> > job of load balancing possible.
>
> If the cpusets which have 'sched_load_balance' enabled are disjoint
> (their 'cpus' cpus_allowed masks don't overlap) ...
Nick wrote:
> The user should just be able to specify exactly the partitioning of
> tasks required, and cpusets should ask the scheduler to do the best
> job of load balancing possible.
If the cpusets which have 'sched_load_balance' enabled are disjoint
(their 'cpus' cpus_allowed masks don't overlap) ...
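
As a concrete illustration of the disjoint case (an illustration, not
taken from the thread): clear the flag on the top cpuset and keep it
set in two non-overlapping children, and the scheduler is left with
two independent balancing domains.  A minimal userspace sketch,
assuming an 8-CPU, single memory node box, a cpuset filesystem mounted
at /dev/cpuset, and made-up cpuset names:

/*
 * Illustration only: carve the machine into two independently
 * balanced halves via the sched_load_balance flag.
 */
#include <stdio.h>
#include <sys/stat.h>
#include <sys/types.h>

static void put(const char *path, const char *val)
{
        FILE *f = fopen(path, "w");

        if (!f || fputs(val, f) == EOF)
                perror(path);
        if (f)
                fclose(f);
}

int main(void)
{
        /* top cpuset: stop balancing across the whole machine */
        put("/dev/cpuset/sched_load_balance", "0");

        /* left half: CPUs 0-3, still balanced among themselves */
        mkdir("/dev/cpuset/left", 0755);
        put("/dev/cpuset/left/cpus", "0-3");
        put("/dev/cpuset/left/mems", "0");
        put("/dev/cpuset/left/sched_load_balance", "1");

        /* right half: CPUs 4-7, still balanced among themselves */
        mkdir("/dev/cpuset/right", 0755);
        put("/dev/cpuset/right/cpus", "4-7");
        put("/dev/cpuset/right/mems", "0");
        put("/dev/cpuset/right/sched_load_balance", "1");

        return 0;
}
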
* Paul Jackson <[EMAIL PROTECTED]> wrote:
> Add a new per-cpuset flag called 'sched_load_balance'.
>
> When enabled in a cpuset (the default value) it tells the kernel
> scheduler that the scheduler should provide the normal load balancing
> on the CPUs in that cpuset, sometimes moving tasks from one CPU to
> another ...
On Sunday 30 September 2007 20:44, Paul Jackson wrote:
> From: Paul Jackson <[EMAIL PROTECTED]>
>
> Add a new per-cpuset flag called 'sched_load_balance'.
>
> When enabled in a cpuset (the default value) it tells the kernel
> scheduler that the scheduler should provide the normal load
> balancing on the CPUs in that cpuset, sometimes moving tasks from one
> CPU to another ...
From: Paul Jackson <[EMAIL PROTECTED]>
Add a new per-cpuset flag called 'sched_load_balance'.
When enabled in a cpuset (the default value) it tells the kernel
scheduler that the scheduler should provide the normal load
balancing on the CPUs in that cpuset, sometimes moving tasks
from one CPU to another ...
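
Since the flag defaults to enabled, the top cpuset's file reads back
"1" on an untouched system.  A trivial check, again assuming a
/dev/cpuset mount:

/* Trivial check of the default value; assumes a /dev/cpuset mount. */
#include <stdio.h>

int main(void)
{
        char buf[8] = "";
        FILE *f = fopen("/dev/cpuset/sched_load_balance", "r");

        if (f) {
                if (fgets(buf, sizeof(buf), f))
                        printf("top cpuset sched_load_balance = %s", buf);
                fclose(f);
        }
        return 0;
}
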