> Regarding semantics, can you be more specific?

Unfortunately not - sorry.  I've been off in other areas, and have not
found the time to read through this current patch or think about it
carefully enough to be really useful.  Your reply seemed reasonable
enough.

> It should have the same perf overhead as the original container patches
> (basically a double dereference - task->containers/nsproxy->cpuset -
> required to get to the cpuset from a task).

There is just one spot where this might matter to cpusets.  Except for
one hook, cpusets uses the mems_allowed and cpus_allowed masks in the
task struct to avoid having to look at the cpuset on hot code paths.

There is one RCU-guarded reference per memory allocation to
current->cpuset->mems_generation, in the call to
cpuset_update_task_memory_state(), for tasks that are in some cpuset
-other- than the default top cpuset, on systems that have explicitly
created additional cpusets (beyond the top cpuset) after boot.

If that RCU-guarded reference turned into taking a global lock, or
pulling in a cache line that was frequently dirty on some other node,
that would be unfortunate.

But that's the key hook so far as cpuset performance impact is
concerned.  Perhaps you could summarize what becomes of this hook in
this brave new world of rcfs ...

-- 
                  I won't rest till it's the best ...
                  Programmer, Linux Scalability
                  Paul Jackson <[EMAIL PROTECTED]> 1.925.600.0401
I've been off in other areas, and not found the time to read through this current PATCH or think about it carefully enough to be really useful. Your reply seemed reasonable enough. > It should have the same perf overhead as the original container patches > (basically a double dereference - task->containers/nsproxy->cpuset - > required to get to the cpuset from a task). There is just one spot that this might matter to cpusets. Except for one hook, cpusets uses the mems_allowed and cpus_allowed masks in the task struct to avoid having to look at the cpuset on hot code paths. There is one RCU guarded reference per memory allocation to current->cpuset->mems_generation in the call to cpuset_update_task_memory_state(), for tasks that are in some cpuset -other- than the default top cpuset, on systems that have explicitly created additional (other than the top cpuset) cpusets after boot. If that RCU guarded reference turned into taking a global lock, or pulling in a cache line that was frequently off dirty in some other node, that would be unfortunate. But that's the key hook so far as cpuset performance impact is concerned. Perhaps you could summarize what becomes of this hook, in this brave new world of rcfs ... -- I won't rest till it's the best ... Programmer, Linux Scalability Paul Jackson <[EMAIL PROTECTED]> 1.925.600.0401 - To unsubscribe from this list: send the line "unsubscribe linux-kernel" in the body of a message to [EMAIL PROTECTED] More majordomo info at http://vger.kernel.org/majordomo-info.html Please read the FAQ at http://www.tux.org/lkml/