On 11/29/06, Paul Jackson <[EMAIL PROTECTED]> wrote:
2) I wedged the kernel on the container_lock, doing a removal of a cpuset
using notify_on_release.
I couldn't reproduce this, with a /sbin/cpuset_release_agent that does:
#!/bin/bash
logger cpuset_release_agent $1
rmdir /dev/cpuset/$1
On 6/20/07, Balbir Singh <[EMAIL PROTECTED]> wrote:
Display the current usage and limit in a more user friendly manner. Number
of pages can be confusing if the page size is different. Some systems
can choose a page size of 64KB.
I'm not sure that's such a great idea. "Human-friendly"
represen
On 6/21/07, Pavel Emelianov <[EMAIL PROTECTED]> wrote:
Nothing wrong, but currently they are shown in "natural" points, i.e. in
those that the controller accounts them in. For RSS controller the natural
point is "page", but auto-converting them from pages to bytes is wrong, as
not all the contro
On 6/22/07, Balbir Singh <[EMAIL PROTECTED]> wrote:
The problem with input in bytes is that the user will have to ensure
that the input is
a multiple of page size, which implies that she would need to use the
calculator every time.
Having input in bytes seems pretty natural to me. Why not ju
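The rounding being debated above is only a few lines of code; a minimal userspace sketch, assuming a fixed 4 KB page size for illustration (a real controller would use the kernel's PAGE_SIZE, and the function name here is hypothetical):

```c
#include <stdint.h>

/* Illustrative only: accept a limit in bytes and round it up to whole
 * pages, so the user never needs a calculator to produce a page-size
 * multiple. EXAMPLE_PAGE_SIZE is assumed to be 4096 here. */
#define EXAMPLE_PAGE_SIZE 4096ULL

static uint64_t limit_bytes_to_pages(uint64_t bytes)
{
	/* round up: any partial page still costs a whole page */
	return (bytes + EXAMPLE_PAGE_SIZE - 1) / EXAMPLE_PAGE_SIZE;
}
```

With this in place a write of 100000 bytes is charged as 25 pages, and the same sketch works unchanged on a 64KB-page system by changing the constant.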
On 6/22/07, Vaidyanathan Srinivasan <[EMAIL PROTECTED]> wrote:
Merging both limits will eliminate the issue, however we would need
individual limits for pagecache and RSS for better control. There are
use cases for pagecache_limit alone without RSS_limit like the case of
database application us
On 6/25/07, Paul Menage <[EMAIL PROTECTED]> wrote:
On 6/22/07, Vaidyanathan Srinivasan <[EMAIL PROTECTED]> wrote:
>
> Merging both limits will eliminate the issue, however we would need
> individual limits for pagecache and RSS for better control. There are
> use cases f
On 7/10/07, Srivatsa Vaddagiri <[EMAIL PROTECTED]> wrote:
>
> Container stuff. Hold, I guess. I was expecting updates from Paul.
Paul,
Are you working on a new version? I thought it was mostly ready
for mainline.
There are definitely some big changes that I want to make internally
On 7/10/07, Andrew Morton <[EMAIL PROTECTED]> wrote:
> Andrew, how about we merge enough of the container framework to
> support CFS? Bits we could leave out for now include container_clone()
> support and the nsproxy subsystem, fork/exit callback hooks, and
> possibly leave cpusets alone for now
On 7/10/07, Andrew Morton <[EMAIL PROTECTED]> wrote:
I'm inclined to take the cautious route here - I don't think people will be
dying for the CFS thingy (which I didn't even know about?) in .23, and it's
rather a lot of infrastructure to add for a CPU scheduler configurator
Selecting the rele
On 5/28/07, Peter Williams <[EMAIL PROTECTED]> wrote:
In any case, there's no point having cpu affinity if it's going to be
ignored. Maybe you could have two levels of affinity: 1. if set by a
root it must be obeyed; and 2. if set by an ordinary user it can be
overridden if the best interests o
On 5/30/07, Andrew Morton <[EMAIL PROTECTED]> wrote:
Holy cow, do we need all those?
I'll experiment to see which ones we can get rid of.
> +typedef enum {
> + CONT_REMOVED,
> +} container_flagbits_t;
typedefs are verboten. Fortunately this one is never referred to - only
the values a
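For reference, the typedef-free form the review asks for looks roughly like the sketch below. This is a compilable userspace fragment; the bit-test helper is a hypothetical stand-in for the kernel's test_bit(), which real container code would use on a flags word.

```c
/* Plain enum, no typedef, per kernel coding style: the enumerators
 * name bit positions in an unsigned long flags word. */
enum container_flagbits {
	CONT_REMOVED,
};

/* Hypothetical stand-in for the kernel's test_bit() */
static int example_test_bit(int bit, unsigned long flags)
{
	return (flags >> bit) & 1;
}
```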
On 7/17/07, Balbir Singh <[EMAIL PROTECTED]> wrote:
Thinking out loud again, can we add can_destroy() callbacks?
What would the exact semantics of such a callback be?
Since for proper interaction with release agents we need the subsystem
to notify the framework when a subsystem object become
On 5/8/07, Balbir Singh <[EMAIL PROTECTED]> wrote:
I now have a use case for maintaining a per-container task list.
I am trying to build a per-container stats similar to taskstats.
I intend to support container accounting of
1. Tasks running
2. Tasks stopped
3. Tasks un-interruptible
4. Tasks b
On 6/4/07, Serge E. Hallyn <[EMAIL PROTECTED]> wrote:
2. I can't delete containers because of the files they contain, and
am not allowed to delete those files by hand.
You should be able to delete a container with rmdir as long as it's
not in use - its control files will get cleaned up automa
On 6/4/07, Paul Jackson <[EMAIL PROTECTED]> wrote:
Yup - early in the life of cpusets, a created cpuset inherited the cpus
and mems of its parent. But that broke the exclusive property big
time. You will recall that a cpu_exclusive or mem_exclusive cpuset
cannot overlap the cpus or memory, res
On 6/4/07, Serge E. Hallyn <[EMAIL PROTECTED]> wrote:
[EMAIL PROTECTED] root]# rm -rf /containers/1
Just use "rmdir /containers/1" here.
Ah, I see the second time I typed 'ls /containers/1/tasks' instead of
cat. When I then used cat, the file was empty, and I got an oops just
like Pavel rep
On 6/6/07, William Lee Irwin III <[EMAIL PROTECTED]> wrote:
(1) build for i386 with my .config
(2) attempt to boot in qemu's i386 system simulator
I'm not seeing the sort of nondeterminism Andy Whitcroft is. It breaks
every time when I try this.
Looks to be lockdep related - it's reproducibl
On 6/7/07, Cedric Le Goater <[EMAIL PROTECTED]> wrote:
when there's no tasks in a container, opening
//tasks
spits the following warning because we are trying to
kmalloc(0).
I guess I'm not opposed to this change - but isn't there still
discussion going on about whether kmalloc(0) should act
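Independent of how kmalloc(0) is eventually resolved, the usual fix on the caller's side is to special-case the empty container before allocating. A userspace sketch of that guard, with malloc standing in for kmalloc and the function name purely illustrative:

```c
#include <stdlib.h>

/* Illustrative only: size a pid buffer from the container's task count,
 * returning NULL up front for an empty container rather than passing a
 * zero size to the allocator (which warns in the kernel and is
 * implementation-defined in userspace). */
static int *example_alloc_pid_array(size_t ntasks)
{
	if (ntasks == 0)
		return NULL;	/* nothing to report; skip the allocation */
	return malloc(ntasks * sizeof(int));
}
```

The open() path would then produce an empty read instead of a warning.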
On 6/7/07, Balbir Singh <[EMAIL PROTECTED]> wrote:
> this needs tasklist_lock?
>
rcu_read_lock() should be fine. From Eric's patch at
2.6.17-mm2 - proc-remove-tasklist_lock-from-proc_pid_readdir.patch
The patch mentions that "We don't need the tasklist_lock to safely
iterate through processes
On 6/8/07, Serge E. Hallyn <[EMAIL PROTECTED]> wrote:
The problem is container_clone() doesn't call ->create explicitly, it
does vfs_mkdir. So we have no real way of passing in clone_task.
Good point.
Looking at vfs_mkdir(), it's pretty simple, and really the only bits
that apply to contain
On 6/8/07, Serge E. Hallyn <[EMAIL PROTECTED]> wrote:
Anyway the patch I sent is simple enough, and if users end up demanding
the ability to better deal with exclusive cpusets, the patch will be
simple enough to extend by changing cpuset_auto_setup(), so let's
stick with that patch since it's yo
On 6/8/07, Serge E. Hallyn <[EMAIL PROTECTED]> wrote:
I do fear that that could become a maintenance nightmare. For instance
right now there's the call to fsnotify_mkdir(). Other such hooks might
be placed at vfs_mkdir, which we'd then likely want to have placed in
our container_mkdir() and co
On 6/8/07, Andrew Morton <[EMAIL PROTECTED]> wrote:
On Fri, 08 Jun 2007 23:43:46 +0530
Balbir Singh <[EMAIL PROTECTED]> wrote:
> This patch implements per container statistics infrastructure and re-uses
> code from the taskstats interface.
boggle.
Symbol: CONTAINERS [=y]
Selected by: CONTAIN
On 6/9/07, Andrew Morton <[EMAIL PROTECTED]> wrote:
- CONTAINER_DEBUG should depend on CONTAINERS
CONTAINER_DEBUG is actually a container subsystem whose sole purpose
is to provide debugging information about any hierarchy that it's
mounted as a part of. So in some senses it's in the same boat
On 6/9/07, Andrew Morton <[EMAIL PROTECTED]> wrote:
Would it not be simplest to have CONTAINERS as the top-level
user-configurable item and to then have everything else depend on it?
Yes, OK - it can go that way around too. I guess my thought was that
people would be more interested in enabli
On 5/30/07, William Lee Irwin III <[EMAIL PROTECTED]> wrote:
On Wed, May 30, 2007 at 12:14:55AM -0700, Andrew Morton wrote:
> So how do we do this?
> Is there any sneaky way in which we can modify the kernel so that this new
> code gets exercised more? Obviously, tossing init into some default
>
On 6/26/07, Dhaval Giani <[EMAIL PROTECTED]> wrote:
There are a few questions I had with respect to the current code,
Why is the increment of s_active dependent on the return value of
simple_set_mnt?
I think it's because, as you observed, grab_super() is static and
hence not reachable from co
Thanks. I've added that to my tree.
Paul
On 5/18/07, Balbir Singh <[EMAIL PROTECTED]> wrote:
Fix containers mounting issue. With the current v9 patches, if a container
hierarchy is mounted and then umounted, a second mount of the hierarchy
fails.
Steps to reproduce the problem
1. mount -t con
Hi Balbir,
On 5/14/07, Balbir Singh <[EMAIL PROTECTED]> wrote:
This patch implements per container statistics infrastructure and re-uses
code from the taskstats interface. A new set of container operations are
registered with commands and attributes. It should be very easy to
extend per contain
cpuset.c:update_nodemask() uses a write_lock_irq() on tasklist_lock to
block concurrent forks; a read_lock() suffices and is less intrusive.
Signed-off-by: Paul Menage<[EMAIL PROTECTED]>
---
kernel/cpuset.c |6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
Index: scratch-
On 5/24/07, Balbir Singh <[EMAIL PROTECTED]> wrote:
Kirill Korotaev wrote:
>> Where do we stand on all of this now anyway? I was thinking of getting
Paul's
>> changes into -mm soon, see what sort of calamities that brings about.
> I think we can merge Paul's patches with *interfaces* and then s
On 5/24/07, Balbir Singh <[EMAIL PROTECTED]> wrote:
I thought about this approach, but did not implement the code this way
because a system could have thousands of containers and expecting a
statistics application to open a file descriptor each time for each
container will turn out to be an expe
On 3/6/07, Pavel Emelianov <[EMAIL PROTECTED]> wrote:
The idea is:
Task may be "the entity that allocates the resources" and "the
entity that is a resource allocated".
When task is the first entity it may move across containers
(that is implemented in your patches). When task is a resource
it s
On 3/9/07, Srivatsa Vaddagiri <[EMAIL PROTECTED]> wrote:
1. What is the fundamental unit over which resource-management is
applied? Individual tasks or individual containers?
/me thinks latter.
Yes
In which case, it makes sense to stick
resource control information in the co
On 3/11/07, Paul Jackson <[EMAIL PROTECTED]> wrote:
My current understanding of Paul Menage's container patch is that it is
a useful improvement for some of the metered classes - those that could
make good use of a file system like hierarchy for their interface.
It probably doesn't benefit all m
On 3/12/07, Herbert Poetzl <[EMAIL PROTECTED]> wrote:
why? you simply enter that specific space and
use the existing mechanisms (netlink, proc, whatever)
to retrieve the information with _existing_ tools,
That's assuming that you're using network namespace virtualization,
with each group of ta
On 3/12/07, Srivatsa Vaddagiri <[EMAIL PROTECTED]> wrote:
- (subjective!) If there is a existing grouping mechanism already (say
tsk->nsproxy[->pid_ns]) over which res control needs to be applied,
then the new grouping mechanism can be considered redundant (it can
On 3/15/07, Srivatsa Vaddagiri <[EMAIL PROTECTED]> wrote:
On Thu, Mar 15, 2007 at 04:24:37AM -0700, Paul Menage wrote:
> If there really was a grouping that was always guaranteed to match the
> way you wanted to group tasks for e.g. resource control, then yes, it
> would be great
On 3/13/07, Dave Hansen <[EMAIL PROTECTED]> wrote:
How do we determine what is shared, and goes into the shared zones?
Once we've allocated a page, it's too late because we already picked.
Do we just assume all page cache is shared? Base it on filesystem,
mount, ...? Mount seems the most logica
On 4/4/07, Srivatsa Vaddagiri <[EMAIL PROTECTED]> wrote:
On Wed, Apr 04, 2007 at 12:00:07AM -0700, Paul Menage wrote:
> OK, looking at that, I see a few problems related to the use of
> nsproxy and lack of a container object:
Before we (and everyone else!) gets lost in this thread
On 4/4/07, Eric W. Biederman <[EMAIL PROTECTED]> wrote:
In addition there appear to be some weird assumptions (an array with
one member per task_struct) in the group. The pid limit allows
us millions of task_structs if the user wants it. A several megabyte
array sounds like a completely unsui
On 4/4/07, Paul Menage <[EMAIL PROTECTED]> wrote:
The current code creates such arrays when it needs an atomic snapshot
of the set of tasks in the container (e.g. for reporting them to
userspace or updating the mempolicies of all the tasks in the case of
cpusets). It may be possible to do
On 4/4/07, Srivatsa Vaddagiri <[EMAIL PROTECTED]> wrote:
> - how do you handle additional reference counts on subsystems? E.g.
> beancounters wants to be able to associate each file with the
> container that owns it. You need to be able to lock out subsystems
> from taking new reference counts on
On 3/26/07, Srivatsa Vaddagiri <[EMAIL PROTECTED]> wrote:
On Sun, Mar 25, 2007 at 12:50:25PM -0700, Paul Jackson wrote:
> Is there perhaps another race here?
Yes, we have!
Modified patch below. Compile/boot tested on a x86_64 box.
Currently cpuset_exit() changes the exiting task's ->cpuset po
On 4/4/07, Srivatsa Vaddagiri <[EMAIL PROTECTED]> wrote:
On Wed, Apr 04, 2007 at 07:57:40PM -0700, Paul Menage wrote:
> >Firstly, this is not a unique problem introduced by using ->nsproxy.
> >Secondly we have discussed this to some extent before
> >(http://
On 4/5/07, Srivatsa Vaddagiri <[EMAIL PROTECTED]> wrote:
On Wed, Apr 04, 2007 at 10:55:01PM -0700, Paul Menage wrote:
> >@@ -1257,8 +1260,8 @@ static int attach_task(struct cpuset *cs
> >
> >put_task_struct(tsk);
> >synchronize_rcu();
> >-
On 4/5/07, Srivatsa Vaddagiri <[EMAIL PROTECTED]> wrote:
Hmm yes .. I am surprised we aren't doing a synchronize_rcu in
cpuset_rmdir() before dropping the dentry. Did you want to send a patch
for that?
Currently cpuset_exit() isn't in a rcu section so it wouldn't help.
But this ceases to be a pr
On 4/5/07, Srivatsa Vaddagiri <[EMAIL PROTECTED]> wrote:
You mean dentry->d_fsdata pointing to nsproxy should take a ref count on
nsproxy? afaics it is not needed as long as you first drop the dentry
before freeing associated nsproxy.
You get the nsproxy object from dup_namespaces(), which wil
On 4/5/07, Srivatsa Vaddagiri <[EMAIL PROTECTED]> wrote:
> If the container directory were to have no refcount on the nsproxy, so
> the initial refcount was 0,
No it should be 1.
mkdir H1/foo
rcfs_create()
ns = dup_namespaces(parent);
On 4/5/07, Srivatsa Vaddagiri <[EMAIL PROTECTED]> wrote:
The approach I am on currently doesn't deal with dynamically loaded
modules .. Partly because it allows subsystem ids to be compile-time
decided
Yes, that part is definitely a good idea, since it removes one of the
potential performance co
On 4/6/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
This patch removes all cpuset-specific knowledge from the container
system, replacing it with a generic API that can be used by multiple
subsystems. Cpusets is adapted to be a container subsystem.
+
+ /* Set of subsystem states, one fo
On 4/6/07, Srivatsa Vaddagiri <[EMAIL PROTECTED]> wrote:
On Fri, Apr 06, 2007 at 04:32:24PM -0700, [EMAIL PROTECTED] wrote:
> +static int attach_task(struct container *cont, struct task_struct *tsk)
> {
[snip]
> + task_lock(tsk);
You need to check here if task state is PF_EXITING and fail
On 4/10/07, Srivatsa Vaddagiri <[EMAIL PROTECTED]> wrote:
Is the first argument into all the callbacks, struct container_subsys *ss,
necessary?
I added it to support library-like abstractions - where one subsystem
can have its container callbacks and file accesses all handled by a
library whic
On 4/10/07, Srivatsa Vaddagiri <[EMAIL PROTECTED]> wrote:
[ Sorry abt piece meal reviews, I am sending comments as and when I spot
something ]
That's no problem.
On Fri, Apr 06, 2007 at 04:32:24PM -0700, [EMAIL PROTECTED] wrote:
> -void container_exit(struct task_struct *tsk)
> +void contain
On 4/23/07, Vaidyanathan Srinivasan <[EMAIL PROTECTED]> wrote:
>
> config CONTAINERS
> - bool "Container support"
> - help
> - This option will let you create and manage process containers,
> - which can be used to aggregate multiple processes, e.g. for
> - the purposes
On 4/23/07, Vaidyanathan Srinivasan <[EMAIL PROTECTED]> wrote:
Hi Paul,
In [patch 3/7] Containers (V8): Add generic multi-subsystem API to
containers, you have forcefully enabled interrupt in
container_init_subsys() with spin_unlock_irq() which breaks on PPC64.
> +static void container_init_su
Hi Vatsa,
Sorry for the delayed reply - the last week has been very busy ...
On 3/1/07, Srivatsa Vaddagiri <[EMAIL PROTECTED]> wrote:
Paul,
Based on some of the feedback to container patches, I have
respun them to avoid the "container" structure abstraction and instead use
nsproxy struc
Hi Pavel,
On 3/6/07, Pavel Emelianov <[EMAIL PROTECTED]> wrote:
diff -upr linux-2.6.20.orig/include/linux/sched.h linux-2.6.20-0/include/linux/sched.h
--- linux-2.6.20.orig/include/linux/sched.h 2007-03-06 13:33:28.0 +0300
+++ linux-2.6.20-0/include/linux/sched.h 2007-03-06
On 3/6/07, Pavel Emelianov <[EMAIL PROTECTED]> wrote:
2. Extended containers may register themselves too late.
Kernel threads/helpers start forking, opening files
and touching pages much earlier. This patchset
works around this in a not-so-cute manner and I'm waiting
for Paul's comments
On 3/7/07, Srivatsa Vaddagiri <[EMAIL PROTECTED]> wrote:
> - when you do sys_unshare() or a clone that creates new namespaces,
> then the task (or its child) will get a new nsproxy that has the rcfs
> subsystem state associated with the old nsproxy, and one or more
> namespace pointers cloned to
On 3/7/07, Serge E. Hallyn <[EMAIL PROTECTED]> wrote:
Quoting Srivatsa Vaddagiri ([EMAIL PROTECTED]):
> On Tue, Mar 06, 2007 at 06:32:07PM -0800, Paul Menage wrote:
> > I'm not really sure that I see the value of having this be part of
> > nsproxy rather than the prev
On 3/7/07, Serge E. Hallyn <[EMAIL PROTECTED]> wrote:
All that being said, if it were going to save space without overly
complicating things I'm actually not opposed to using nsproxy, but it
If space-saving is the main issue, then the latest version of my
containers patches uses just a single
On 3/7/07, Eric W. Biederman <[EMAIL PROTECTED]> wrote:
> Effectively, container_group is to container as nsproxy is to namespace.
The statement above nicely summarizes the confusion in terminology.
In the namespace world when we say container we mean roughly at the level
of nsproxy and contain
On 3/7/07, Srivatsa Vaddagiri <[EMAIL PROTECTED]> wrote:
On Mon, Feb 12, 2007 at 12:15:23AM -0800, [EMAIL PROTECTED] wrote:
> /*
> @@ -913,12 +537,14 @@ static int update_nodemask(struct cpuset
> int migrate;
> int fudge;
> int retval;
> + struct container *cont;
This seem
On 3/7/07, Srivatsa Vaddagiri <[EMAIL PROTECTED]> wrote:
On Mon, Feb 12, 2007 at 12:15:23AM -0800, [EMAIL PROTECTED] wrote:
> - mutex_lock(&callback_mutex);
> - list_add(&cs->sibling, &cs->parent->children);
> + cont->cpuset = cs;
> + cs->container = cont;
> number_of_cpuset
On 3/7/07, Srivatsa Vaddagiri <[EMAIL PROTECTED]> wrote:
It makes sense in the first cpuset patch
(cpusets_using_containers.patch), but should be removed in the second
cpuset patch (multiuser_container.patch). In the 2nd patch, we use this
comparison:
if (task_cs(p) != cs)
On 3/7/07, Srivatsa Vaddagiri <[EMAIL PROTECTED]> wrote:
If that is the case, I think we can push container_lock entirely inside
cpuset.c and not have others exposed to this double-lock complexity.
This is possible because cpuset.c (build on top of containers) still has
cpuset->parent and walking
On 3/7/07, Sam Vilain <[EMAIL PROTECTED]> wrote:
Paul Menage wrote:
>> In the namespace world when we say container we mean roughly at the level
>> of nsproxy and container_group.
>>
> So you're saying that a task can only be in a single system-wide container.
>
On 3/7/07, Sam Vilain <[EMAIL PROTECTED]> wrote:
But "namespace" has well-established historical semantics too - a way
of changing the mappings of local * to global objects. This
accurately describes things liek resource controllers, cpusets, resource
monitoring, etc.
Sorry, I think this statem
On 3/7/07, Eric W. Biederman <[EMAIL PROTECTED]> wrote:
Pretty much. For most of the other cases I think we are safe referring
to them as resource controls or resource limits. I know that roughly covers
what cpusets and beancounters and ckrm currently do.
Plus resource monitoring (which may
On 3/7/07, Sam Vilain <[EMAIL PROTECTED]> wrote:
Sorry, I didn't realise I was talking with somebody qualified enough to
speak on behalf of the Generally Established Principles of Computer Science.
I made sure to check
http://en.wikipedia.org/wiki/Namespace
http://en.wikipedia.org/wiki/Namesp
On 3/7/07, Eric W. Biederman <[EMAIL PROTECTED]> wrote:
Please next time this kind of patch is posted add a description of
what is happening and why. I have yet to see people explain why
this is a good idea. Why the current semantics were chosen.
OK. I thought that the descriptions in my las
er that I'm not the one pushing to move them into ns_proxy.
These patches are all Srivatsa's work. Despite that fact that they say
"Signed-off-by: Paul Menage", I'd never seen them before they were
posted to LKML, and I'm not sure that they're the right approac
On 3/8/07, Srivatsa Vaddagiri <[EMAIL PROTECTED]> wrote:
On Wed, Mar 07, 2007 at 12:50:03PM -0800, Paul Menage wrote:
> The callback mutex (which is what container_lock() actually locks) is
> also used to synchronize fork/exit against subsystem additions, in the
> event that som
On 4/3/07, Serge E. Hallyn <[EMAIL PROTECTED]> wrote:
But frankly I don't know where we stand right now wrt the containers
patches. Do most people want to go with Vatsa's latest version moving
containers into nsproxy? Has any other development been going on?
Paul, have you made any updates?
On 4/3/07, Srivatsa Vaddagiri <[EMAIL PROTECTED]> wrote:
On Tue, Apr 03, 2007 at 08:45:37AM -0700, Paul Menage wrote:
> Whilst I've got no objection in general to using nsproxy rather than
> the container_group object that I introduced in my latest patches,
So are you saying
On 4/3/07, Srivatsa Vaddagiri <[EMAIL PROTECTED]> wrote:
On Tue, Apr 03, 2007 at 09:52:35AM -0700, Paul Menage wrote:
> I'm not saying "let's use nsproxy" - I'm not yet convinced that the
> lifetime/mutation/correlation rate of a pointer in an nsprox
On 4/3/07, Srivatsa Vaddagiri <[EMAIL PROTECTED]> wrote:
On Tue, Apr 03, 2007 at 10:10:35AM -0700, Paul Menage wrote:
> Agreed. So I'm not saying it's fundamentally a bad idea - just that
> merging container_group and nsproxy is a fairly simple space
> optimization
On 4/3/07, Srivatsa Vaddagiri <[EMAIL PROTECTED]> wrote:
Hmm no .. I currently have nsproxy having just M additional pointers, where
M is the maximum number of resource controllers and a single dentry
pointer.
So how do you implement something like the /proc//container info
file in my patches?
On 4/3/07, Srivatsa Vaddagiri <[EMAIL PROTECTED]> wrote:
> (Or more generally, tell which container a task is
> in for a given hierarchy?)
Why is the hierarchy bit important here? Usually controllers need to
know "tell me what cpuset this task belongs to", which is answered
by tsk->nsproxy->ctlr
On 4/3/07, Srivatsa Vaddagiri <[EMAIL PROTECTED]> wrote:
User space queries like "what is the cpuset to which this task belongs",
where the answer needs to be something of the form "/dev/cpuset/C1"?
The patches address that requirement atm by having a dentry pointer in
struct cpuset itself.
Hav
On 4/3/07, Srivatsa Vaddagiri <[EMAIL PROTECTED]> wrote:
On Tue, Apr 03, 2007 at 09:04:59PM -0700, Paul Menage wrote:
> Have you posted the cpuset implementation over your system yet?
Yep, here:
http://lists.linux-foundation.org/pipermail/containers/2007-March/001497.html
For some re
On 4/30/07, Srivatsa Vaddagiri <[EMAIL PROTECTED]> wrote:
On Sun, Apr 29, 2007 at 02:37:21AM -0700, Paul Jackson wrote:
> It builds and boots and mounts the cpuset file system ok.
> But trying to write the 'mems' file hangs the system hard.
Basically we are attempting a read_lock(&tasklist_lock)
On 5/1/07, Balbir Singh <[EMAIL PROTECTED]> wrote:
[EMAIL PROTECTED] wrote:
> This patch adds the main containers framework - the container
> filesystem, and the basic structures for tracking membership and
> associating subsystem state objects to tasks.
[snip]
> +*** notify_on_release is disab
On 5/1/07, Balbir Singh <[EMAIL PROTECTED]> wrote:
> + if (container_is_removed(cont)) {
> + retval = -ENODEV;
> + goto out2;
> + }
Can't we make this check prior to kmalloc() and copy_from_user()?
We could but I'm not sure what it would buy us - we'd be optimiz
On 5/1/07, Srivatsa Vaddagiri <[EMAIL PROTECTED]> wrote:
For the CPU controller I was working on, (a fast access to) such a list would
have been valuable. Basically each task has a weight associated with it
(p->load_weight) which is made to depend upon its class limit. Whenever
the class limit c
On 5/1/07, Paul Jackson <[EMAIL PROTECTED]> wrote:
Why do you need this? It adds a little more code, and changes
semantics a little bit, so I'd think it should have at least a
little bit of justification.
We have cases where we'd like to be able to clear the memory nodes
away from a (temporaril
On 9/10/07, Srivatsa Vaddagiri <[EMAIL PROTECTED]> wrote:
>
> Unless folks have strong objection to it, I prefer "cptctlr", the way it is.
>
By definition any container (about to be renamed control group)
subsystem is some kind of "controller" so that bit seems a bit
redundant.
Any reason not to
.)
Signed-off-by: Paul Menage <[EMAIL PROTECTED]>
---
fs/dcache.c |1 +
1 file changed, 1 insertion(+)
Index: container-2.6.23-rc3-mm1/fs/dcache.c
===
--- container-2.6.23-rc3-mm1.orig/fs/dcache.c
+++ container-2.6.23-rc3-
On 9/10/07, Dmitry Adamushko <[EMAIL PROTECTED]> wrote:
> On 10/09/2007, Srivatsa Vaddagiri <[EMAIL PROTECTED]> wrote:
> > On Mon, Sep 10, 2007 at 10:22:59AM -0700, Andrew Morton wrote:
> > > objection ;) "cpuctlr" isn't memorable. Kernel code is write-rarely,
> > > read-often. "cpu_controller",
Hi Balbir/Pavel,
As I mentioned to you directly at the kernel summit, I think it might
be cleaner to integrate resource counters more closely with control
groups. So rather than controllers such as the memory controller
having to create their own boilerplate cf_type structures and
read/write funct
On 9/11/07, Cedric Le Goater <[EMAIL PROTECTED]> wrote:
> >
> > And "group" is more or less implied by the fact that it's in the
> > containers/control groups filesystem.
>
> "control groups" is the name of your framework. right ?
That's the main contender for the new name, to replace "task
contai
Add write_uint() helper method for cgroup subsystems
This helper is analogous to the read_uint() helper method for
reporting u64 values to userspace. It's designed to reduce the amount
of boilerplate required for creating new cgroup subsystems.
Signed-off-by: Paul Menage <[EMAIL P
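The parse step that such a helper factors out of each subsystem looks roughly like this userspace sketch, with strtoull standing in for the kernel's simple_strtoull; the function name and error handling are illustrative, not the merged API:

```c
#include <errno.h>
#include <stdint.h>
#include <stdlib.h>

/* Illustrative only: the boilerplate a write_uint()-style helper removes
 * from each subsystem - parse a decimal u64 from the user's buffer and
 * reject anything except an optional trailing newline. */
static int example_parse_uint(const char *buf, uint64_t *val)
{
	char *end;

	errno = 0;
	*val = strtoull(buf, &end, 10);
	if (errno || end == buf)
		return -EINVAL;
	if (*end != '\0' && !(*end == '\n' && end[1] == '\0'))
		return -EINVAL;
	return 0;
}
```

Each subsystem would then supply only a callback taking the already-parsed u64.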
Comment fixed, to match the actual arguments.
Signed-off-by: Balaji Rao <[EMAIL PROTECTED]>
Signed-off-by: Paul Menage <[EMAIL PROTECTED]>
---
kernel/cgroup.c |2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
Index: container-2.6.23-rc8-mm1/ker
On 9/15/07, Paul Jackson <[EMAIL PROTECTED]> wrote:
> From: Paul Jackson <[EMAIL PROTECTED]>
>
> Paul Menage - in pre-container cpusets, a few config files enabled
> cpusets by default. Could you blend the following patch into your
> container patch set, so that cpuset
On 9/15/07, Andrew Morton <[EMAIL PROTECTED]> wrote:
> >
> > + BUG_ON(!atomic_read(&dentry->d_count));
> > repeat:
> > if (atomic_read(&dentry->d_count) == 1)
> > might_sleep();
>
> eek, much too aggressive.
How about the equivalent BUG_ON() in dget()? I figure that they o
On 9/15/07, Andrew Morton <[EMAIL PROTECTED]> wrote:
>
> Yeah. Bug, surely. But I guess it's always been there.
>
> What are the implications of this for cpusets-via-containers?
>
I don't think it should be any different from the previous version - I
tried to avoid touching those bits of cpusets
This is already fixed in -mm - see
task-containersv11-basic-task-container-framework-containers-fix-refcount-bug.patch
task-containersv11-add-container_clone-interface-containers-fix-refcount-bug.patch
Paul
On 9/15/07, Paul Jackson <[EMAIL PROTECTED]> wrote:
> Paul Menage,
>
>
() called lookup_one_len(), this resulted in a
reference count being missed from the directory dentry.
This patch removes container_get_dentry() and replaces it with direct
calls to lookup_one_len(); the initialization of containerfs dentry
ops is done now in container_create_file() at dentry
This example subsystem exports debugging information as an aid to diagnosing
refcount leaks, etc, in the cgroup framework.
Signed-off-by: Paul Menage <[EMAIL PROTECTED]>
---
include/linux/cgroup_subsys.h |4 +
init/Kconfig | 10 ++
kernel/Ma