On 02/04/2013 12:36 PM, Michal Hocko wrote:
> On Mon 04-02-13 12:04:06, Glauber Costa wrote:
>> On 02/04/2013 11:57 AM, Michal Hocko wrote:
>>> On Sun 03-02-13 20:29:01, Hugh Dickins wrote:
>>>> Whilst I run the risk of a flogging for disloyalty to the Lord of Sealand,
>>>> I do have CONFIG_MEMCG=y
On 02/04/2013 11:57 AM, Michal Hocko wrote:
> On Sun 03-02-13 20:29:01, Hugh Dickins wrote:
>> Whilst I run the risk of a flogging for disloyalty to the Lord of Sealand,
>> I do have CONFIG_MEMCG=y but CONFIG_MEMCG_KMEM not set, and grow tired of the
>> "mm/memcontrol.c:4972:12: warning: `memcg_propaga
On 02/04/2013 08:29 AM, Hugh Dickins wrote:
> Whilst I run the risk of a flogging for disloyalty to the Lord of Sealand,
> I do have CONFIG_MEMCG=y but CONFIG_MEMCG_KMEM not set, and grow tired of the
> "mm/memcontrol.c:4972:12: warning: `memcg_propagate_kmem' defined but not
> used [-Wunused-function]
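For context, the usual fix for this class of warning is to compile the helper
out together with its only callers. A minimal sketch, assuming
memcg_propagate_kmem() is referenced only from CONFIG_MEMCG_KMEM paths in
mm/memcontrol.c (exact placement of the #ifdef is what the fix is about):

#ifdef CONFIG_MEMCG_KMEM
/* Defined only when its callers exist, silencing -Wunused-function
 * for CONFIG_MEMCG=y, CONFIG_MEMCG_KMEM=n builds. */
static int memcg_propagate_kmem(struct mem_cgroup *memcg)
{
        /* body as in mm/memcontrol.c */
        return 0;
}
#endif /* CONFIG_MEMCG_KMEM */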
On 01/28/2013 07:27 PM, Seth Jennings wrote:
> Yes, I prototyped a shrinker interface for zswap, but, as we both
> figured, it shrinks the zswap compressed pool too aggressively to the
> point of being useless.
Can't you advertise a smaller number of objects than you actually have?
Since the shrin
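The shrinker contract of the time allows a cache to understate its size: when
sc->nr_to_scan is zero the callback only reports a count, and reclaim sizes
its next pass from that. A minimal sketch of the suggestion against the
3.8-era .shrink interface, where zswap_evict() and zswap_pool_pages() are
assumed helpers rather than zswap's actual API:

static int zswap_shrink(struct shrinker *s, struct shrink_control *sc)
{
        if (sc->nr_to_scan)
                zswap_evict(sc->nr_to_scan);    /* assumed eviction helper */

        /* Advertise only a fraction of the pool so reclaim nibbles at
         * zswap instead of draining it to the point of uselessness. */
        return zswap_pool_pages() / 8;          /* assumed size helper */
}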
On 01/28/2013 08:19 PM, Eric W. Biederman wrote:
> Lord Glauber Costa of Sealand writes:
>
>> On 01/28/2013 12:14 PM, Eric W. Biederman wrote:
>>> Lord Glauber Costa of Sealand writes:
>>>
>>>> I just saw in a later patch of yours that your concern he
From: Glauber Costa
While stress-running very-small container scenarios with the Kernel
Memory Controller, I've run into a lockdep-detected lock imbalance in
cfq-iosched.c.
I'll apologize beforehand for not posting a backtrace: I didn't anticipate
it would be so hard to reproduce, so I didn't save
Hello Mr. Someone.
On 01/28/2013 06:28 PM, Aristeu Rozanski wrote:
> On Fri, Jan 25, 2013 at 06:21:00PM -0800, Eric W. Biederman wrote:
>> When I initially wrote the code for /proc/<pid>/uid_map, I was lazy
>> and avoided duplicate mappings by the simple expedient of ensuring the
>> first number in a
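For reference, each line of /proc/<pid>/uid_map describes one extent as
"ID-inside-ns ID-outside-ns length", so a two-extent map (values purely
illustrative) looks like:

        0 100000 1000
        1000 2000 1

and the restriction being discussed governs how the first numbers of
successive extents may relate and overlap.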
On 01/28/2013 12:14 PM, Eric W. Biederman wrote:
> Lord Glauber Costa of Sealand writes:
>
>> I just saw in a later patch of yours that your concern here seems not
>> limited to ram backed by tmpfs, but extends to things like the internal
>> structures for userns, to avoid pa
On 01/28/2013 11:37 AM, Lord Glauber Costa of Sealand wrote:
> On 01/26/2013 06:22 AM, Eric W. Biederman wrote:
>>
>> In the help text describing user namespaces, recommend use of memory
>> control groups. In many cases memory control groups are the only
>> mechanism
On 01/26/2013 06:22 AM, Eric W. Biederman wrote:
>
> In the help text describing user namespaces, recommend use of memory
> control groups. In many cases memory control groups are the only
> mechanism there is to limit how much memory a user who can create
> user namespaces can use.
>
> Signed-of
From: Glauber Costa
All the information needed for cpuusage (and
cpuusage_percpu) is present in schedstats. It is already recorded
in a sane hierarchical way.
If we have CONFIG_SCHEDSTATS, we don't really need to do any extra
work. All former functions become empty inlines.
Sign
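A minimal sketch of the "empty inlines" pattern described, using
cpuacct_charge() as the example hook (which functions the patch actually
converts is in the patch itself):

#ifdef CONFIG_SCHEDSTATS
/* schedstats already records this hierarchically, so the dedicated
 * accounting hook can compile away to nothing. */
static inline void cpuacct_charge(struct task_struct *tsk, u64 cputime) { }
#else
extern void cpuacct_charge(struct task_struct *tsk, u64 cputime);
#endif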
From: Tejun Heo
Now that cpu serves the same files as cpuacct and using cpuacct
separately from cpu is deprecated, we can deprecate cpuacct. To avoid
disturbing userland which has been co-mounting cpu and cpuacct,
implement some hackery in cgroup core so that cpuacct co-mounting
still works even
From: Glauber Costa
Hi all,
This is an attempt to provide userspace with enough information to reconstruct
a per-container version of files like "/proc/stat". In particular, we are
interested in knowing the per-cgroup slices of user time, system time, wait
time, number of processes, and a variety
From: Tejun Heo
cpuacct being on a separate hierarchy is one of the main cgroup-related
complaints from the scheduler side, and the consensus seems to be:
* Allowing cpuacct to be a separate controller was a mistake. In
general, multiple controllers on the same type of resource should be
avoided,
From: Glauber Costa
We already track multiple tick statistics per-cgroup, using
the task_group_account_field facility. This patch accounts
guest_time in that manner as well.
Signed-off-by: Glauber Costa
CC: Peter Zijlstra
CC: Paul Turner
---
kernel/sched/cputime.c | 10 --
1 file cha
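A sketch of the conversion described, against the 3.8-era account_guest_time()
in kernel/sched/cputime.c; the exact hunks are in the patch, this only
illustrates the task_group_account_field() pattern:

static void account_guest_time(struct task_struct *p, cputime_t cputime,
                               cputime_t cputime_scaled)
{
        /* Add guest time to the process, as before. */
        p->utime += cputime;
        p->utimescaled += cputime_scaled;
        account_group_user_time(p, cputime);
        p->gtime += cputime;

        /* Route the tick through the per-cgroup helper instead of
         * writing only the global kernel_cpustat fields. */
        if (TASK_NICE(p) > 0) {
                task_group_account_field(p, CPUTIME_NICE, (__force u64)cputime);
                task_group_account_field(p, CPUTIME_GUEST_NICE, (__force u64)cputime);
        } else {
                task_group_account_field(p, CPUTIME_USER, (__force u64)cputime);
                task_group_account_field(p, CPUTIME_GUEST, (__force u64)cputime);
        }
}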
From: Glauber Costa
The CPU cgroup is, so far, undocumented. Although information about its
functioning exists in the Documentation directory, it is usually scattered
and/or presented in the context of something else. This file
consolidates all cgroup-related information about it.
Signed-off-by: Glauber C
From: Glauber Costa
exec_clock already provides per-group cpu usage metrics, and can be
reused by cpuacct in case cpu and cpuacct are co-mounted.
However, it is only provided by tasks in the fair class. Doing the same for
rt is easy, and can be done in an already existing hierarchy loop. This
is an i
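The "already existing hierarchy loop" is the one update_curr_rt() walks; a
sketch of the addition, assuming a new exec_clock field on rt_rq mirroring
cfs_rq->exec_clock:

        for_each_sched_rt_entity(rt_se) {
                struct rt_rq *rt_rq = rt_rq_of_se(rt_se);

                /* Accumulate runtime at every level, as cfs already does. */
                schedstat_add(rt_rq, exec_clock, delta_exec);
        }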
From: Glauber Costa
The file cpu.stat_percpu will show various scheduler-related
information that is usually available to the top level through other
files.
For instance, most of the meaningful data in /proc/stat is presented
here. Given this file, a container can easily construct a local copy
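A userspace sketch of the intended consumption; the mount path, cgroup name,
and line format shown are illustrative, not the final ABI:

#include <stdio.h>

int main(void)
{
        char line[256];
        FILE *f = fopen("/sys/fs/cgroup/cpu/lxc0/cpu.stat_percpu", "r");

        if (!f)
                return 1;
        /* Each line would carry one per-cpu field, e.g. "user_cpu0 4242",
         * which a wrapper can reformat into a container's /proc/stat. */
        while (fgets(line, sizeof(line), f))
                fputs(line, stdout);
        fclose(f);
        return 0;
}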
From: Peter Zijlstra
In order to avoid having to do put/set on a whole cgroup hierarchy
when we context switch, push the put into pick_next_task() so that
both operations are in the same function. Further changes then allow
us to possibly optimize away redundant work.
[ glom...@parallels.com: in
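A sketch of the reshuffle described (close to what eventually landed upstream,
though the details here are illustrative): prev is handed to each class so the
put and the pick happen together:

static struct task_struct *pick_next_task(struct rq *rq, struct task_struct *prev)
{
        const struct sched_class *class;
        struct task_struct *p;

        for_each_class(class) {
                /* The class puts prev itself and can skip the redundant
                 * put/set pair when prev is picked again. */
                p = class->pick_next_task(rq, prev);
                if (p)
                        return p;
        }
        BUG();  /* the idle class always has a runnable task */
}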
From: Glauber Costa
This patch changes the calculation of nr_context_switches. The variable
"nr_switches" is now used to account for the number of transition to the
idle task, or stop task. It is removed from the schedule() path.
The total calculation can be made using the fact that the transiti
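A sketch of the resulting total, assuming rq->nr_switches now counts only
idle/stop transitions and a per-group summing helper like the
tg_nr_switches() sketched under the per-cfs_rq counter patch further down:

unsigned long long nr_context_switches(void)
{
        int cpu;
        unsigned long long sum = 0;

        for_each_possible_cpu(cpu)
                sum += cpu_rq(cpu)->nr_switches;        /* idle/stop only now */

        /* Everything else lives in the per-group counters. */
        return sum + tg_nr_switches(&root_task_group);
}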
From: Tejun Heo
When cgroup files are created, cgroup core automatically prepends the
name of the subsystem as a prefix. This patch adds CFTYPE_NO_PREFIX
which disables the automatic prefix.
This will be used to deprecate cpuacct by having cpu create and
serve the cpuacct files.
Signed-off
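A sketch of the intended use, with an assumed read handler on the cpu side:

static struct cftype cpu_extra_files[] = {
        {
                /* Without CFTYPE_NO_PREFIX this would be created as
                 * "cpu.cpuacct.usage"; with it, the name is kept verbatim. */
                .name = "cpuacct.usage",
                .flags = CFTYPE_NO_PREFIX,
                .read_u64 = cpu_cpuacct_usage_read,     /* assumed handler */
        },
        { }     /* terminator */
};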
From: Glauber Costa
Context switches are, to this moment, a property of the runqueue. When
running containers, we would like to be able to present a separate
figure for each container (or cgroup, in this context).
The chosen way to accomplish this is to increment a per-cfs_rq or
per-rt_rq counter, depending
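A sketch of both halves of the scheme, the increment at switch time and the
per-cgroup read-side sum; the nr_switches fields on cfs_rq/rt_rq and the
helper names are assumptions:

/* Switch side: each class bumps its own queue's counter. */
static inline void account_cfs_switch(struct cfs_rq *cfs_rq)
{
        cfs_rq->nr_switches++;                          /* assumed new field */
}

/* Read side: a cgroup's figure is the sum of its queues' counters. */
static u64 tg_nr_switches(struct task_group *tg)
{
        int cpu;
        u64 sum = 0;

        for_each_possible_cpu(cpu)
                sum += tg->cfs_rq[cpu]->nr_switches +
                       tg->rt_rq[cpu]->nr_switches;     /* assumed fields */
        return sum;
}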
From: Glauber Costa
Commit 8f618968 changed stop_task to do the same bookkeeping as the
other classes. However, the call to cpuacct_charge() doesn't affect
the scheduler decisions at all, and doesn't need to be moved over.
Moreover, being a kthread, the migration thread won't belong to any
cgroup