On Tue 13-11-12 08:14:42, Tejun Heo wrote:
> On Tue, Nov 13, 2012 at 04:30:36PM +0100, Michal Hocko wrote:
> > @@ -1063,8 +1063,8 @@ struct mem_cgroup *mem_cgroup_iter(struct mem_cgroup
> > *root,
> >st
On Wed 14-11-12 09:20:03, KAMEZAWA Hiroyuki wrote:
> (2012/11/14 0:30), Michal Hocko wrote:
[...]
> > @@ -1096,30 +1096,64 @@ struct mem_cgroup *mem_cgroup_iter(struct
> > mem_cgroup *root,
> > mz = mem_cgroup_zoneinfo(root, nid, zid);
> >
although I haven't seen a machine without node 0 yet.
According to 13808910 this is indeed possible.
> Cc: Greg Kroah-Hartman
> Signed-off-by: David Rientjes
Reviewed-by: Michal Hocko
> ---
> drivers/tty/sysrq.c |3 ++-
> 1 files changed, 2 insertions(+), 1 deletions(-)
On Wed 14-11-12 03:03:02, David Rientjes wrote:
> On Wed, 14 Nov 2012, Michal Hocko wrote:
>
> > > With hotpluggable and memoryless nodes, it's possible that node 0 will
> > > not be online, so use the first online node's zonelist rather than
> > > ha
om() so that they can be removed.
>
> Cc: KAMEZAWA Hiroyuki
> Cc: KOSAKI Motohiro
> Cc: Michal Hocko
> Signed-off-by: David Rientjes
The only _potential_ problem I can see with this is that if we ever have
HW which requires that a node's zonelist doesn't contain other nodes
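For context, a minimal sketch of what the fix amounts to in the sysrq OOM
trigger (reconstructed from memory of that era's drivers/tty/sysrq.c, so the
exact out_of_memory() arguments may differ):

	/* sketch only: use the first online node's zonelist instead of
	 * assuming node 0 is online; node_zonelist() and first_online_node
	 * are existing kernel helpers */
	static void moom_callback(struct work_struct *ignored)
	{
		out_of_memory(node_zonelist(first_online_node, GFP_KERNEL),
			      GFP_KERNEL, 0, NULL, true);
	}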
On Wed 14-11-12 01:15:25, David Rientjes wrote:
> out_of_memory() will already cause current to schedule if it has not been
> killed, so doing it again in pagefault_out_of_memory() is redundant.
> Remove it.
>
> Cc: KAMEZAWA Hiroyuki
> Cc: KOSAKI Motohiro
> Cc: Michal H
; unsigned int fault)
> @@ -879,7 +865,14 @@ mm_fault_error(struct pt_regs *regs, unsigned long
> error_code,
> return 1;
> }
>
> - out_of_memory(regs, error_code, address);
> + up_read(&current->mm->mmap_sem);
On Tue 13-11-12 16:10:41, Johannes Weiner wrote:
> On Tue, Oct 30, 2012 at 11:35:59AM +0100, Michal Hocko wrote:
> > On Mon 29-10-12 15:00:22, Andrew Morton wrote:
> > > On Mon, 29 Oct 2012 17:58:45 +0400
> > > Glauber Costa wrote:
> > >
> > > &
GFP_KERNEL);
>
>
> why GFP_KERNEL ? not GFP_HIGHUSER_MOVABLE ?
I was wondering about the same thing but gfp_zonelist cares only about
__GFP_THISNODE so GFP_HIGHUSER_MOVABLE doesn't make any difference.
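For reference, this is roughly how gfp_zonelist() read in include/linux/gfp.h
at that time (quoted from memory, details may differ); it only looks at
__GFP_THISNODE, which is why GFP_KERNEL vs. GFP_HIGHUSER_MOVABLE makes no
difference here:

	/* zonelist index for the given flags: 1 selects the node-local
	 * (no fallback) zonelist, 0 the regular fallback zonelist */
	static inline int gfp_zonelist(gfp_t flags)
	{
		if (NUMA_BUILD && unlikely(flags & __GFP_THISNODE))
			return 1;

		return 0;
	}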
--
Michal Hocko
SUSE Labs
On Wed 14-11-12 10:52:45, Tejun Heo wrote:
> Hello, Michal.
>
> On Wed, Nov 14, 2012 at 09:51:29AM +0100, Michal Hocko wrote:
> > > reclaim(root);
> > > for_each_descendent_pre()
> > > reclaim(descendant);
> >
> > We cannot do for_ea
On Thu 15-11-12 13:12:38, KAMEZAWA Hiroyuki wrote:
> (2012/11/14 19:10), Michal Hocko wrote:
> >On Wed 14-11-12 09:20:03, KAMEZAWA Hiroyuki wrote:
> >>(2012/11/14 0:30), Michal Hocko wrote:
> >[...]
> >>>@@ -1096,30 +1096,64 @@ struct mem_cgroup *mem_cgrou
On Thu 15-11-12 06:47:32, Tejun Heo wrote:
> Hello, Michal.
>
> On Thu, Nov 15, 2012 at 10:51:03AM +0100, Michal Hocko wrote:
> > > I'm a bit confused. Why would that make any difference? Shouldn't it
> > > be just able to test the condition and continu
kind of questions at the libcgroup
(libcg-de...@lists.sourceforge.net) mailing list as it is more focused on
cgroups usage.
--
Michal Hocko
SUSE Labs
line versions of
> "static inline" functions
and if it decides then __always_inline will not help, right?
--
Michal Hocko
SUSE Labs
kernel.org;
> > a...@firstfloor.org;
> > a...@canonical.com; de...@linuxdriverproject.org; linux...@kvack.org;
> > Hiroyuki Kamezawa; Michal Hocko; Johannes Weiner; Ying Han
> > Subject: Re: [PATCH 1/2] mm: Export vm_committed_as
> &g
up_replace_page_cache callers
>* so the lru seemed empty but the page could have been added
>* right after the check. RES_USAGE should be safe as we always
>* charge before adding to the LRU.
> */
> usage = re
driver could use instead.
Agreed, we should rather make sure that nobody can manipulate the value
from modules.
> (And why percpu_counter_read_positive() returns a signed type is a
> mystery.)
Strange indeed. The last commit changed it from long to s64 to support
values bigger than 2^31 but even the original
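For reference, the helper in question, roughly as it lives in
include/linux/percpu_counter.h (quoted from memory, details may differ):

	/* clamp a possibly slightly negative percpu sum to zero; note the
	 * return type stays s64 even though the result is never negative */
	static inline s64 percpu_counter_read_positive(struct percpu_counter *fbc)
	{
		s64 ret = fbc->count;

		barrier();	/* prevent reloads of fbc->count */
		if (ret >= 0)
			return ret;
		return 0;
	}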
NG_PARENT);
> + spin_unlock_irq(&pos_f->lock);
> + }
> + rcu_read_unlock();
> }
>
> static int freezer_write(struct cgroup *cgroup, struct cftype *cft,
--
Michal Hocko
SUSE Labs
strict
requirement so we can safely use your new iterators without pre_destroy.
Anyway, I like this change because the shared state is now really easy
to implement.
> Signed-off-by: Tejun Heo
> Cc: Glauber Costa
Acked-by: Michal Hocko
> ---
> include/linux/cgroup.h | 1 +
>
> ->children list instead of head. This isn't strictly necessary but is
> done so that the iteration order is more conventional.
>
> Signed-off-by: Tejun Heo
Reviewed-by: Michal Hocko
> ---
> include/linux/cgroup.h | 1 +
> kernel/cgroup.c| 8 +++-
&
> > cgroup_for_each_descendant_pre() can be used to propagate config
> > updates to descendants in reliable way. See comments for details.
>
> Michal, Li, how does this look to you? Would this be okay for memcg
> too?
Yes, definitely. We are currently iterating by css->id which
> + if (&next->sibling != &pos->parent->children)
> + return next;
> +
> + pos = pos->parent;
> + } while (pos != cgroup);
> +
> + return NULL;
> +}
> +EXPORT_SYMBOL_GPL(cgroup_next_descendant_pre);
[...]
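A sketch of how the walker above is meant to be driven (illustrative only;
propagate_config() is a hypothetical per-group callback, the loop must run
under rcu_read_lock() and NULL is passed as pos on the first step):

	struct cgroup *pos;

	rcu_read_lock();
	for (pos = cgroup_next_descendant_pre(NULL, root); pos;
	     pos = cgroup_next_descendant_pre(pos, root))
		propagate_config(pos);	/* hypothetical callback */
	rcu_read_unlock();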
--
Michal Hocko
SUSE Labs
with the to-be-added generic descendant
> iterators, ->post_create() can be used to implement reliable state
> inheritance. It will be explained with the descendant iterators.
Thanks
--
Michal Hocko
SUSE Labs
On Wed 07-11-12 09:01:18, Tejun Heo wrote:
> Hello, Michal.
>
> On Wed, Nov 07, 2012 at 05:54:57PM +0100, Michal Hocko wrote:
> > > +struct cgroup *cgroup_next_descendant_pre(struct cgroup *pos,
> > > + struct cgroup *cgroup)
> >
d-off-by: Sha Zhengju
> Cc: Michal Hocko
> Cc: KAMEZAWA Hiroyuki
> Cc: David Rientjes
> Cc: Andrew Morton
> ---
> mm/memcontrol.c | 71
> ---
> mm/oom_kill.c |6 +++-
> 2 files changed, 66 insertions(+), 11
we scan through more tasks currently and most of them
are not relevant, but then you would need to exclude task_in_mem_cgroup
from oom_unkillable_task and that would be more code churn than it is
worth.
> Signed-off-by: Sha Zhengju
> Cc: Michal Hocko
> Cc: KAMEZAWA Hiroyuki
> Cc: Dav
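For reference, the check discussed above sits in oom_unkillable_task(); this
is the mm/oom_kill.c version of that era as remembered, so details may differ:

	static bool oom_unkillable_task(struct task_struct *p,
			const struct mem_cgroup *memcg, const nodemask_t *nodemask)
	{
		if (is_global_init(p))
			return true;
		if (p->flags & PF_KTHREAD)
			return true;

		/* for a memcg OOM, skip tasks outside the group */
		if (memcg && !task_in_mem_cgroup(p, memcg))
			return true;

		/* p may not have freeable memory in nodemask */
		if (!has_intersects_mems_allowed(p, nodemask))
			return true;

		return false;
	}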
On Wed 07-11-12 14:10:25, Andrew Morton wrote:
> On Tue, 16 Oct 2012 00:04:08 +0200
> Michal Hocko wrote:
>
> > As Kosaki correctly pointed out, the global reclaim doesn't have this
> > issue because we _do_ swap on swappiness==0 so the swap space has
> > to be co
On Wed 07-11-12 14:53:40, Andrew Morton wrote:
> On Wed, 7 Nov 2012 23:46:40 +0100
> Michal Hocko wrote:
>
> > > Realistically, is anyone likely to hurt from this?
> >
> > The primary motivation for the fix was a real report by a customer.
>
> Descri
> Signed-off-by: Tejun Heo
I will convert mem_cgroup_iter to use this rather than css_get_next
after this gets into the next tree so that it can fly via Andrew.
Reviewed-by: Michal Hocko
Just a minor nit. You are talking about config propagation while I
would consider state propagation mor
> default:
> BUG();
> @@ -275,8 +273,7 @@ static void freezer_change_state(struct cgroup *cgroup,
> spin_unlock_irq(&freezer->lock);
> }
>
> -static int freezer_write(struct cgroup *cgroup,
> - struct cftype *cft,
> +static
ill be filled with hierarchy handling later on.
>
> This patch doesn't introduce any behavior change.
>
> Signed-off-by: Tejun Heo
Makes sense
Reviewed-by: Michal Hocko
> ---
> kernel/cgroup_freezer.c | 48 ++--
> 1
f-by: Tejun Heo
I think it would be nicer to use freezer_state_flags enum rather than
unsigned int for the state. I would even expect gcc to complain about
that but it looks like -fstrict-enums is C++ specific (so long, enum
safety).
Anyway
Reviewed-by: Michal
On Tue 06-11-12 09:03:54, Michal Hocko wrote:
> On Mon 05-11-12 16:28:37, Andrew Morton wrote:
> > On Thu, 1 Nov 2012 16:07:35 +0400
> > Glauber Costa wrote:
> >
> > > +static __always_inline struct kmem_cache *
> > > +memcg_kmem_get_cache(struct kmem_cac
oup, we'd probably like
> >to know the global memory state to determine what the problem is.
> >
>
> I am really wondering if there is any case that can pass
> root_mem_cgroup down here.
No, it cannot because the root cgroup doesn't have any limit so we cannot
trigger memc
hawed to unfreeze the group I am interested
in. Could you also update Documentation/cgroups/freezer-subsystem.txt to
clarify the intended usage?
Minor nit. Same as mentioned in the previous patch: freezer_apply_state
should get an enum freezer_state_flags state parameter.
> Signed-off-by: Te
group.
>
> Adjusting system_freezing_cnt on destruction is moved from
> freezer_destroy() to the new freezer_pre_destroy() for consistency.
>
> This patch doesn't introduce any noticeable behavior change.
>
> Signed-off-by: Tejun Heo
freezer_apply_state(B, false)
B->state & FREEZING == false
freezer_apply_state(B, true)
B->state & FREEZING == true
freezer_apply_state(C, true)
freezer_apply_state(C, false)
So A, C are thawed while B is frozen.
On Thu 08-11-12 12:05:13, Michal Hocko wrote:
> On Tue 06-11-12 09:03:54, Michal Hocko wrote:
> > On Mon 05-11-12 16:28:37, Andrew Morton wrote:
> > > On Thu, 1 Nov 2012 16:07:35 +0400
> > > Glauber Costa wrote:
> > >
> > >
On Thu 08-11-12 06:39:52, Tejun Heo wrote:
> Hello, Michal.
>
> On Thu, Nov 08, 2012 at 11:39:28AM +0100, Michal Hocko wrote:
> > On Sat 03-11-12 01:38:32, Tejun Heo wrote:
> > > freezer->state was an enum value - one of THAWED, FREEZING and FROZEN.
> > > As
On Thu 08-11-12 06:18:48, Tejun Heo wrote:
> Hello, Michal.
>
> On Thu, Nov 08, 2012 at 03:08:52PM +0100, Michal Hocko wrote:
> > This seems to be racy because parent->state access is not linearized.
> > Say we have parallel freeze and thawing on a tree like
r per-thread oom kill flags in the next patch.
>
> Signed-off-by: David Rientjes
Reviewed-by: Michal Hocko
> ---
> drivers/staging/android/lowmemorykiller.c | 16
> fs/proc/base.c| 10 +-
> include/linux/oom.
allows the correct oom_score_adj to always be shown when
> reading /proc/pid/oom_score.
>
> Signed-off-by: David Rientjes
I didn't like the previous approach of playing with oom_score_adj and what
you propose looks much nicer.
Maybe s/oom_task_origin/task_oom_origin/ would be a better fit with
On Thu 08-11-12 07:29:23, Tejun Heo wrote:
> Hey, Michal.
>
> On Thu, Nov 08, 2012 at 04:20:39PM +0100, Michal Hocko wrote:
> > > So, in the above example in CPU2, (B->state & FREEZING) test and
> > > freezer_apply_state(C, false) can't be interleaved wi
F
and we trigger an OOM on A's limit. Now we know that something blew
up but we do not know what it was. Wouldn't it be better to swap the for
and for_each_mem_cgroup_tree loops? Then we would see the whole
hierarchy and can potentially point at the group which doesn't b
- descendants were
> inheriting from the wrong ancestor. Fixed.
>
> v3: Documentation/cgroups/freezer-subsystem.txt updated.
>
> Signed-off-by: Tejun Heo
> Reviewed-by: Tejun Heo
You probably meant Reviewed-by: Michal Hocko ;)
--
Michal Hocko
SUSE Labs
On Thu 08-11-12 10:04:17, Tejun Heo wrote:
> On Thu, Nov 08, 2012 at 07:02:46PM +0100, Michal Hocko wrote:
> > On Thu 08-11-12 09:57:50, Tejun Heo wrote:
> > > Signed-off-by: Tejun Heo
> > > Reviewed-by: Tejun Heo
> >
> > You probably meant Reviewed-b
On Fri 09-11-12 18:23:07, Sha Zhengju wrote:
> On 11/09/2012 12:25 AM, Michal Hocko wrote:
> >On Thu 08-11-12 23:52:47, Sha Zhengju wrote:
[...]
> >>+ for (i = 0; i< MEM_CGROUP_STAT_NSTATS; i++) {
> >>+ long long val = 0;
> >>+
On Fri 09-11-12 20:09:29, Sha Zhengju wrote:
> On 11/09/2012 06:50 PM, Michal Hocko wrote:
> >On Fri 09-11-12 18:23:07, Sha Zhengju wrote:
[...]
> >>Another one I'm hesitating is numa stats, it seems the output is
> >>beginning to get more and more
> >N
true);
+ ret = __mem_cgroup_try_charge(NULL, mask, 1, ptr, oom);
css_put(&memcg->css);
return ret;
charge_cur_mm:
if (unlikely(!mm))
mm = &init_mm;
- return __mem_cgroup_try_charge(mm, mask, 1, ptr, true);
+ return __mem_cgroup_try_c
i_mutex and the backup process wants to access the
same inode with an operation which requires the lock.
--
Michal Hocko
SUSE Labs
EL which means that
we do memcg OOM and never return ENOMEM. do_swap_page calls
mem_cgroup_try_charge_swapin with GFP_KERNEL as well.
I might have missed something but I will not get to look closer before
2nd January.
--
Michal Hocko
SUSE Labs
terrupt+0x69/0x99
> [ 406.164780] [] apic_timer_interrupt+0x72/0x80
> [ 406.164781][] ? _cpu_down+0x19a/0x2e0
> [ 406.164787] [] ? _cpu_down+0x19a/0x2e0
> [ 406.164790] [] cpu_down+0x3e/0x60
> [ 406.164792] [] store_online+0x75/0xe0
> [ 406.164795] [] dev_attr_store+0x20/0x30
> [
ps=63528.9 , runt= 8279msec
> lat (usec): min=1 , max=742361 , avg=30.918, stdev=1601.02
> After:
> write: io=2048.0MB, bw=254044KB/s, iops=63510.3 , runt= 8274.4msec
> lat (usec): min=1 , max=856333 , avg=31.043, stdev=1769.32
>
> Note that the impact is little
Maybe I have missed some other locking which would prevent this from
happening but the locking relations are really complicated in this area
so if mem_cgroup_{begin,end}_update_page_stat might be called
recursively then we need a fat comment which justifies that.
[...]
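For reference, a sketch of how the locking pair is used at a typical call
site, modeled on the page_add_file_rmap() caller of that era (quoted from
memory, details may differ):

	void page_add_file_rmap(struct page *page)
	{
		bool locked;
		unsigned long flags;

		/* the pair under discussion; it must not nest recursively */
		mem_cgroup_begin_update_page_stat(page, &locked, &flags);
		if (atomic_inc_and_test(&page->_mapcount)) {
			__inc_zone_page_state(page, NR_FILE_MAPPED);
			mem_cgroup_inc_page_stat(page, MEMCG_NR_FILE_MAPPED);
		}
		mem_cgroup_end_update_page_stat(page, &locked, &flags);
	}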
--
Michal Hocko
SUSE Labs
roup_update_page_stat()
> mem_cgroup_end_update_page_stat()
>
> There're two writeback interface to modify: test_clear/set_page_writeback.
>
> Signed-off-by: Sha Zhengju
Looks good to me
Acked-by: Michal Hocko
Thanks!
> ---
> include/linux/memcontrol.h |1 +
>
inue;
> for_each_mem_cgroup_tree(mi, memcg)
> val += mem_cgroup_read_stat(mi, i) * PAGE_SIZE;
> +
> + /* Adding local stats of root memcg */
> + if (memcg == root_mem_cgroup)
> + val += nstat[i];
> seq_printf(m, "total_%s %lld\n", mem_cgroup_stat_names[i], val);
> }
>
> --
> 1.7.9.5
>
--
Michal Hocko
SUSE Labs
> kmem_cgroup_destroy(memcg);
>
> memcg_dangling_add(memcg);
> + static_key_slow_dec(&memcg_in_use_key);
> mem_cgroup_put(memcg);
memcg could be still alive at this moment (e.g. due to swap or kmem
charges). This is not a big issue with the current state of t
On Wed 26-12-12 01:28:21, Sha Zhengju wrote:
> From: Sha Zhengju
>
> Signed-off-by: Sha Zhengju
Acked-by: Michal Hocko
> ---
> Documentation/cgroups/memory.txt |2 ++
> 1 file changed, 2 insertions(+)
>
> diff --git a/Documentation/cgroups/memory.txt
&
On Wed 02-01-13 10:36:05, Tejun Heo wrote:
> Hey, Michal.
>
> On Wed, Jan 02, 2013 at 09:53:55AM +0100, Michal Hocko wrote:
> > Hi Li,
> >
> > On Wed 26-12-12 18:51:02, Li Zefan wrote:
> > > I reverted 38d7bee9d24adf4c95676a3dc902827c72930ebb ("cpuset:
will be replaced by the cgroup generic iteration which requires
storing mem_cgroup pointer into iterator and that requires reference
counting and so concurrent access will be a problem.
Signed-off-by: Michal Hocko
Acked-by: KAMEZAWA Hiroyuki
---
mm/memcontrol.c | 12 +++-
1 file changed, 11 i
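An illustrative sketch of the shape of the change (not the exact upstream
hunk; field names follow the mm/memcontrol.c of that time as remembered):

	/* per-zone, per-priority reclaim iterator state; the new spinlock
	 * keeps concurrent reclaimers from racing on the cached position
	 * once it holds a mem_cgroup pointer instead of a css id */
	struct mem_cgroup_reclaim_iter {
		int position;			/* css id of the last visited group */
		unsigned int generation;	/* bumped on every full round trip */
		spinlock_t iter_lock;		/* added by this patch */
	};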
a simple invariant that memcg is always alive when non-NULL and all
nodes have been visited otherwise.
We could get rid of the surrounding while loop but keep it in for now to
make review easier. It will go away in the following patch.
Signed-off-by: Michal Hocko
Acked-by: KAMEZAWA Hiroyuki
the previous one in the same sub-hierarchy.
This guarantees that no group gets more reclaiming than necessary and
the next iteration will continue without noticing that the removed group
has disappeared.
Spotted-by: Ying Han
Signed-off-by: Michal Hocko
---
mm/memcontrol.c |
et,put} for iter->last_visited rather than
mem_cgroup_{get,put} because it is stronger wrt. cgroup life cycle
- cgroup_next_descendant_pre expects NULL pos for the first iteration
otherwise it might loop endlessly for an intermediate node without any
children.
Signed-off-by: Michal Hocko
Acked-by
Now that we have generic and well ordered cgroup tree walkers there is
no need to keep css_get_next in place.
Signed-off-by: Michal Hocko
Acked-by: KAMEZAWA Hiroyuki
---
include/linux/cgroup.h |7 ---
kernel/cgroup.c| 49
2
ill hammer it some more but the
series should be in quite a good shape already.
Michal Hocko (7):
memcg: synchronize per-zone iterator access by a spinlock
memcg: keep prev's css alive for the whole mem_cgroup_iter
memcg: rework mem_cgroup_iter to use cgroup iterators
m
right after it gets the
last css_id.
This is correct because neither prev's memcg nor cgroup are accessed
after that point. This will change in the next patch so we need to hold the
group alive a bit longer, so let's move the css_put to the end of the
function.
Signed-off-by: Michal Hocko
(to __mem_cgroup_iter_next) so the distinction
is more clear.
This patch doesn't introduce any functional changes.
Signed-off-by: Michal Hocko
Acked-by: KAMEZAWA Hiroyuki
---
mm/memcontrol.c | 79 ---
1 file changed, 46 insertions(+
Hi,
I have posted this quite some time ago
(https://lkml.org/lkml/2012/12/14/102) but it probably slipped through
---
>From 28b4e10bc3c18b82bee695b76f4bf25c03baa5f8 Mon Sep 17 00:00:00 2001
From: Michal Hocko
Date: Fri, 14 Dec 2012 11:12:43 +0100
Subject: [PATCH] memcg,vmscan: do not break
On Thu 03-01-13 12:24:04, Andrew Morton wrote:
> On Thu, 3 Jan 2013 19:09:01 +0100
> Michal Hocko wrote:
>
> > Hi,
> > I have posted this quite some time ago
> > (https://lkml.org/lkml/2012/12/14/102) but it probably slipped through
> > ---
> > >From 28
ure to describe which locks are exercised during that test
and how much.
Thanks
[...]
--
Michal Hocko
SUSE Labs
y next year when
all the things settle a bit.
--
Michal Hocko
SUSE Labs
+0x3d/0xb0
[65674.045831] [] get_signal_to_deliver+0x247/0x480
[65674.045840] [] do_signal+0x71/0x1b0
[65674.045845] [] do_notify_resume+0x98/0xb0
[65674.045853] [] int_signal+0x12/0x17
[65674.046737] DWARF2 unwinder stuck at int_signal+0x12/0x17
Signed-off-by: Michal Hocko
Cc: sta
_update_active_cpus+0xe/0x10
>[] cpuset_cpu_inactive+0x47/0x50
>[] notifier_call_chain+0x66/0x150
>[] __raw_notifier_call_chain+0xe/0x10
>[] __cpu_notify+0x20/0x40
> [] _cpu_down+0x7e/0x2f0
> [] cpu_down+0x36/0x50
>[] store_online+0x5d/0xe0
>[]
On Tue 18-12-12 14:02:19, Andrew Morton wrote:
> On Tue, 18 Dec 2012 17:11:28 +0100
> Michal Hocko wrote:
>
> > Since e303297 (mm: extended batches for generic mmu_gather) we are batching
> > pages to be freed until either tlb_next_batch cannot allocate a new batch
&
On Tue 18-12-12 16:00:30, Andrew Morton wrote:
> On Wed, 19 Dec 2012 00:50:42 +0100
> Michal Hocko wrote:
>
> > On Tue 18-12-12 14:02:19, Andrew Morton wrote:
> > > On Tue, 18 Dec 2012 17:11:28 +0100
> > > Michal Hocko wrote:
> > >
> > >
On Wed 19-12-12 13:13:16, Andrew Morton wrote:
> On Wed, 19 Dec 2012 16:04:37 +0100
> Michal Hocko wrote:
>
> > Since e303297 (mm: extended batches for generic mmu_gather) we are batching
> > pages to be freed until either tlb_next_batch cannot allocate a new batch
&
ro.
>
> Cc: Johannes Weiner
> Cc: Rik van Riel
> Cc: Mel Gorman
> Cc: Michal Hocko
> Cc: Hugh Dickins
> Cc: Satoru Moriya
> Cc: Simon Jeons
> Signed-off-by: Andrew Morton
Reviewed-by: Michal Hocko
> ---
>
> mm/page_alloc.c |7 ++-
>
On Thu 20-12-12 12:27:46, Andrew Morton wrote:
> On Thu, 20 Dec 2012 13:47:10 +0100
> Michal Hocko wrote:
>
> > > > + */
> > > > +#if defined(CONFIG_PREEMPT_COUNT)
> > > > +#define MAX_GATHER_BATCH_COUNT (UINT_MAX)
> > > > +#else
>
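For context, the gist of the fix being discussed, sketched on top of
tlb_next_batch() in mm/memory.c (reconstructed from memory; the
MAX_GATHER_BATCH_COUNT definition differed between revisions, as the quoted
hunk shows):

	static int tlb_next_batch(struct mmu_gather *tlb)
	{
		struct mmu_gather_batch *batch;

		batch = tlb->active;
		if (batch->next) {
			tlb->active = batch->next;
			return 1;
		}

		/* the fix: stop growing the batch list at some point so the
		 * caller flushes, frees the pages and can reschedule */
		if (tlb->batch_count == MAX_GATHER_BATCH_COUNT)
			return 0;

		batch = (void *)__get_free_pages(GFP_NOWAIT | __GFP_NOWARN, 0);
		if (!batch)
			return 0;

		tlb->batch_count++;
		batch->next = NULL;
		batch->nr = 0;
		batch->max = MAX_GATHER_BATCH;

		tlb->active->next = batch;
		tlb->active = batch;

		return 1;
	}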
On Thu 20-12-12 23:36:23, Michal Hocko wrote:
> From 839736c13064008a41fabf5de69c2f23d685a1fd Mon Sep 17 00:00:00 2001
> From: Michal Hocko
> Date: Thu, 20 Dec 2012 23:25:24 +0100
> Subject: [PATCH] mm: limit mmu_gather batching to fix soft lockups on
> !CONFIG_PREEMPT
>
&
= 0; idx < count; idx++)
+ pages[i] = page + idx;
return pages;
}
while (count) {
- int j, order = __fls(count);
+ unsigned int j;
+ unnsigned int order = __fls(count);
s/unnsigned/unsigned/
--
On Thu 15-11-12 07:31:24, Tejun Heo wrote:
> Hello, Michal.
>
> On Thu, Nov 15, 2012 at 04:12:55PM +0100, Michal Hocko wrote:
> > > Because I'd like to consider the next functions as implementation
> > > detail, and having iterations structured as loops tends to
given pause by the fact that that text has been in
> Documentation/filesystems/proc.txt since at least 2.6.0.
> Anyway, I believe that the patch below fixes things.
Yes, this goes back to 2003 when the /proc/meminfo doc was added.
>
> Signed-off-by: Michael Kerrisk
Reviewed-by: Mi
, but it will remove the ugly
> >>>>> wakeups. One would argue that if the caches have free objects but are
> >>>>> not being shrunk, it is because we don't need that memory yet.
> >>>>>
> >>>>> Signed-off-by: Glauber Costa
On Wed 14-11-12 11:10:52, Michal Hocko wrote:
> On Wed 14-11-12 09:20:03, KAMEZAWA Hiroyuki wrote:
> > (2012/11/14 0:30), Michal Hocko wrote:
> [...]
> > > @@ -1096,30 +1096,64 @@ struct mem_cgroup *mem_cgroup_iter(struct
> > > mem_cgroup *root,
>
On Tue 13-11-12 16:30:36, Michal Hocko wrote:
[...]
> + /*
> + * Root is not visited by cgroup iterators so it needs a special
> + * treatment.
> + */
> + if (!last_visited) {
> +
even when memcg
> is disabled.
>
> To fix this, inline the check for mem_cgroup_disabled() so we avoid the
> unnecessary function call if memcg is disabled.
>
> Signed-off-by: David Rientjes
Acked-by: Michal Hocko
Thanks!
> ---
> include/linux/memcontro
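The general pattern of the optimization, sketched with hypothetical function
names (the real patch touches different helpers in include/linux/memcontrol.h):

	/* header-side wrapper: the cheap mem_cgroup_disabled() test is done
	 * inline so the out-of-line call is skipped entirely when memcg is
	 * disabled; mem_cgroup_do_foo/__mem_cgroup_do_foo are made-up names */
	static inline void mem_cgroup_do_foo(struct page *page)
	{
		if (mem_cgroup_disabled())
			return;
		__mem_cgroup_do_foo(page);
	}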
ked that already but it didn't get answered. What is the reason
for an explicit knob to enable this? It just adds additional code and
it doesn't make much sense to me to be honest.
[...]
--
Michal Hocko
SUSE Labs
er like a slab backed memory leak or something went
crazy and allocated a huge amount of slab. You have 3.6G (out of 4G
available) of slab_unreclaimable. I would check /proc/slabinfo to see which
cache consumes that huge amount of memory.
--
Michal Hocko
SUSE Labs
On Thu 18-04-13 18:15:41, Han Pingtian wrote:
> On Wed, Apr 17, 2013 at 07:19:09AM -0700, Michal Hocko wrote:
> > On Wed 17-04-13 17:47:50, Han Pingtian wrote:
> > > [ 5233.949714] Node 1 DMA free:3968kB min:7808kB low:9728kB high:11712kB
> > > active_anon:0kB inact
On Fri 19-04-13 00:55:31, Han Pingtian wrote:
> On Thu, Apr 18, 2013 at 07:17:36AM -0700, Michal Hocko wrote:
> > On Thu 18-04-13 18:15:41, Han Pingtian wrote:
> > > On Wed, Apr 17, 2013 at 07:19:09AM -0700, Michal Hocko wrote:
> > > > On Wed 17-04-1
[Do not drop people from the CC please]
On Fri 19-04-13 10:33:45, Han Pingtian wrote:
> On Thu, Apr 18, 2013 at 10:55:14AM -0700, Michal Hocko wrote:
[...]
> > What is the kernel that you are using and what config?
> >
> We are testing an alpha version of an enterprise Linux
On Mon 22-04-13 11:18:49, Han Pingtian wrote:
> On Fri, Apr 19, 2013 at 09:43:10AM -0700, Michal Hocko wrote:
> > [Do not drop people from the CC please]
> >
> > On Fri 19-04-13 10:33:45, Han Pingtian wrote:
> > > On Thu, Apr 18, 2013 at 10:55:
On Tue 23-04-13 12:22:34, Han Pingtian wrote:
> On Mon, Apr 22, 2013 at 01:40:52PM +0200, Michal Hocko wrote:
> > > CONFIG_PPC_BOOK3S_64=y
> > > # CONFIG_PPC_BOOK3E_64 is not set
> > > -CONFIG_GENERIC_CPU=y
> > > +# CONFIG_GENERIC_CPU is not
On Tue 11-06-13 13:33:40, David Rientjes wrote:
> On Mon, 10 Jun 2013, Michal Hocko wrote:
[...]
> > Your only objection against userspace handling so far was that admin
> > has no control over sub-hierarchies. But that is hardly a problem. A
> > periodic tree scan would solve
On Wed 12-06-13 13:12:09, David Rientjes wrote:
> On Wed, 12 Jun 2013, Michal Hocko wrote:
>
> > > > > > > > Reported-by: Reported-by: azurIt
> > >
> > > Ok, so the key here is that azurIt was able to reliably reproduce this
> > > is
5355.883056] RIP [] mem_cgroup_page_lruvec+0x79/0x90
> [35355.883056] RSP
> [35355.883056] CR2: 01a8
> [35355.883056] ---[ end trace 2c9b8eec517f960d ]---
>
>
> --
> Thanks,
> //richard
--
Michal Hocko
SUSE Labs
dress at an offset from 88003e04a800. But there is 0x138 there
instead.
Is this easily reproducible? Could you configure kdump?
--
Michal Hocko
SUSE Labs
Ohh and could you post the config please? Sorry, I should have asked
earlier.
On Thu 13-06-13 15:29:08, Michal Hocko wrote:
>
> On Thu 13-06-13 14:06:20, Richard Weinberger wrote:
> [...]
> > All code
> >
> > 0: 89 50 08    mov    %edx,0x8
On Wed 12-06-13 13:49:47, David Rientjes wrote:
> On Wed, 12 Jun 2013, Michal Hocko wrote:
>
> > The patch is a big improvement with a minimum code overhead. Blocking
> > any task which sits on top of an unpredictable amount of locks is just
> > broken. So regardless how
On Thu 13-06-13 15:34:59, Richard Weinberger wrote:
> Am 13.06.2013 15:32, schrieb Michal Hocko:
> >Ohh and could you post the config please? Sorry should have asked
> >earlier.
>
> See attachment.
Nothing unusual there. Could you enable CONFIG_DEBUG_VM? Maybe it will
help