David wrote:
> Not necessarily, because migration only occurs to an online cpu in the
> mask; it won't attempt to migrate to some cpu that has been downed.
> ...
... one of David or I is insane ... I can't tell which one yet,
perhaps both of us ;).
I'm going to reply to David without all the ...
On Tue, 16 Oct 2007, Paul Jackson wrote:
> David wrote:
> > Why can't you just add a helper function to sched.c:
> >
> > void set_hotcpus_allowed(struct task_struct *task,
> >                          cpumask_t cpumask)
> > {
> > 	mutex_lock(&sched_hotcpu_mutex);
> > 	...
> >
Paul Jackson wrote:
Any chance you could provide a patch that works against cgroups?
Fix cpusets update_cpumask
Cause writes to cpuset "cpus" file to update cpus_allowed for member
tasks:
- collect batches of tasks under tasklist_lock and then call
  set_cpus_allowed() on them outside the lock ...
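For concreteness, a sketch of that two-phase pattern, using the container
patches' task iterator (assuming cgroup_iter_start() and friends exist in
this tree and handle the task-list locking themselves); NR_BATCH and the
local array are illustrative choices, and the function name is borrowed
from the patch quoted further below:

/*
 * Sketch only: collect task references while the iterator holds its
 * lock, then drop it and call set_cpus_allowed() -- which can sleep --
 * on the batch.  Tasks already running with the new mask are skipped,
 * so repeated passes converge.
 */
#define NR_BATCH 32

static void update_cgroup_cpus_allowed(struct cgroup *cg, cpumask_t *cpus)
{
	struct task_struct *tasks[NR_BATCH], *p, **q;
	struct cgroup_iter it;
	int n;

	do {
		n = 0;
		cgroup_iter_start(cg, &it);
		while ((p = cgroup_iter_next(cg, &it)) != NULL) {
			if (cpus_equal(*cpus, p->cpus_allowed))
				continue;
			get_task_struct(p);
			tasks[n++] = p;
			if (n == NR_BATCH)
				break;
		}
		cgroup_iter_end(cg, &it);

		for (q = tasks; q < tasks + n; q++) {
			set_cpus_allowed(*q, *cpus);
			put_task_struct(*q);
		}
	} while (n == NR_BATCH);
}

Note that this only converges while set_cpus_allowed() actually succeeds;
Paul's non-termination worry below is about exactly the case where it
cannot.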
David wrote:
> Why can't you just add a helper function to sched.c:
>
> void set_hotcpus_allowed(struct task_struct *task,
>                          cpumask_t cpumask)
> {
> 	mutex_lock(&sched_hotcpu_mutex);
> 	set_cpus_allowed(task, cpumask);
> 	mutex_unlock(&sched_hotcpu_mutex);
> }
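The value of such a helper is that the set of online cpus cannot change
between a caller's validation of the cpumask and the migration itself,
since (in this kernel) the scheduler's cpu-hotplug callbacks serialize on
the same sched_hotcpu_mutex that sched_setaffinity() already takes.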
On Mon, 15 Oct 2007, Paul Jackson wrote:
> My solution may be worse than that. Because set_cpus_allowed() will
> fail if asked to set a non-overlapping cpumask, my solution could never
> terminate. If asked to set a cpuset's cpus to something that went
> offline right then, I'd guess this c...
> > +		if (cpus_equal(*cpus, t->cpus_allowed))
> > +			continue;
> > ...
> > +		for (q = tasks; q < p; q++) {
> > +			set_cpus_allowed(*q, *cpus);
> > +			put_task_struct(*q);
> > +		}
> > +	}
> > +}
>
> Y...
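To make that worry concrete: set_cpus_allowed() returns -EINVAL when the
requested mask contains no online cpu, so a loop that retries until every
task carries the new mask can spin forever. An illustrative guard, reusing
the batch-loop shape from the hunk above (the failed flag and the early
return are mine, not from any posted patch):

	int failed = 0;

	for (q = tasks; q < p; q++) {
		/* a mask that raced with cpu hot-unplug fails here ... */
		if (set_cpus_allowed(*q, *cpus) < 0)
			failed = 1;
		put_task_struct(*q);
	}
	if (failed)
		return -EINVAL;	/* ... so give up instead of retrying */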
> Will do - I just wanted to get this quickly out to show the idea
> that I was working on.
Ok - good.
In the final analysis, I'll take whatever works ;).
I'll lobby for keeping the code "simple" (a subjective metric) and poke
what holes I can in things, and propose what alternatives I can muster.
On 10/15/07, Paul Jackson <[EMAIL PROTECTED]> wrote:
> > currently against an older kernel
>
> ah .. which older kernel?
2.6.18, but I can do a version against 2.6.23-mm1.
> +	if (!retval) {
> +		cpus_allowed = cpuset_cpus_allowed(p);
> +		if (!cpus_subset(new_mask, ...
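The fragment appears to add a recheck after a successful
set_cpus_allowed() in sched_setaffinity(); a plausible reconstruction of
its shape (hypothetical -- the label and the comment are mine, not from
the posted patch):

 again:
	retval = set_cpus_allowed(p, new_mask);
	if (!retval) {
		cpus_allowed = cpuset_cpus_allowed(p);
		if (!cpus_subset(new_mask, cpus_allowed)) {
			/*
			 * We raced with a concurrent cpuset update that
			 * shrank the allowed set; retry with its mask.
			 */
			new_mask = cpus_allowed;
			goto again;
		}
	}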
> currently against an older kernel
ah .. which older kernel?
I tried it against the broken out 2.6.23-rc8-mm2 patch set,
inserting it before the task-containersv11-* patches, but
that blew up on me - three rejected hunks.
Any chance of getting this against a current cgroup (aka
container) kernel?
> Yet by not doing any locking here to prevent a cpu from being
> hot-unplugged, you can race and allow the hot-unplug event to happen
> before calling set_cpus_allowed(). That makes this entire function a
> no-op with set_cpus_allowed() returning -EINVAL for every call, which
> isn't caught, ...
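A sketch of the guard being asked for here, along the lines of David's
set_hotcpus_allowed() suggestion; holding sched_hotcpu_mutex across the
check and the migration (assuming, as that suggestion implies, that the
scheduler's hotplug path takes the same mutex) turns the silent -EINVAL
into either an explicit error or a cpu that stays online. Illustrative
only, not from a posted patch:

	int ret;

	mutex_lock(&sched_hotcpu_mutex);	/* holds off cpu hot-unplug */
	if (!cpus_intersects(new_mask, cpu_online_map))
		ret = -EINVAL;			/* surfaced to the caller */
	else
		ret = set_cpus_allowed(p, new_mask);
	mutex_unlock(&sched_hotcpu_mutex);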
Paul M wrote:
> Here's an alternative for consideration, below.
I don't see the alternative -- I just see my patch, with the added
blurbage:
#12 - /usr/local/google/home/menage/kernel9/linux/kernel/cpuset.c
# action=edit type=text
Should I be increasing my caffeine intake?
--
Paul Jackson wrote:
Paul M, David R, others -- how does this look?
Looks plausible, although as David comments I don't think it handles a
concurrent CPU hotplug/unplug. Also I don't like the idea of doing a
cgroup_lock() across sched_setaffinity() - cgroup_lock() can be held for
relatively long periods.
On Mon, 15 Oct 2007, Paul Jackson wrote:
> --- 2.6.23-mm1.orig/kernel/cpuset.c	2007-10-14 22:24:56.268309633 -0700
> +++ 2.6.23-mm1/kernel/cpuset.c	2007-10-14 22:34:52.645364388 -0700
> @@ -677,6 +677,64 @@ done:
> }
>
> /*
> + * update_cgroup_cpus_allowed(cont, cpus)
> + *
> + * Keep ...
Paul M, David R, others -- how does this look?
From: Paul Jackson <[EMAIL PROTECTED]>
Update the per-task cpus_allowed of each task in a cgroup
whenever it has a cpuset whose 'cpus' mask changes.
The change to basing cpusets on the cgroup (aka container)
infrastructure broke an essential cpuset ...