Hello, Andrew.
2012/11/20 Minchan Kim :
> Hi Joonsoo,
> Sorry for the delay.
>
> On Thu, Nov 15, 2012 at 02:09:04AM +0900, JoonSoo Kim wrote:
>> Hi, Minchan.
>>
>> 2012/11/14 Minchan Kim :
>> > On Tue, Nov 13, 2012 at 11:12:28PM +0900, JoonSoo Kim wrote:
>
and stat inevitably
use vmlist and vmlist_lock. But it is preferable that they are used
as little as possible outside of vmalloc.c.
Changelog
v1->v2:
[2/3]: patch description is improved.
Rebased on v3.7-rc7
Joonsoo Kim (3):
ARM: vmregion: remove vmregion code entirely
ARM: static
Now, there is no user for vmregion.
So remove it.
Signed-off-by: Joonsoo Kim
diff --git a/arch/arm/mm/Makefile b/arch/arm/mm/Makefile
index 8a9c4cb..4e333fa 100644
--- a/arch/arm/mm/Makefile
+++ b/arch/arm/mm/Makefile
@@ -6,7 +6,7 @@ obj-y := dma-mapping.o extable.o
: Joonsoo Kim
diff --git a/arch/arm/include/asm/mach/static_vm.h
b/arch/arm/include/asm/mach/static_vm.h
new file mode 100644
index 000..1bb6604
--- /dev/null
+++ b/arch/arm/include/asm/mach/static_vm.h
@@ -0,0 +1,45 @@
+/*
+ * arch/arm/include/asm/mach/static_vm.h
+ *
+ * Copyright (C) 2012 LG
.
With it, we don't need to iterate all mapped areas. Instead, we just
iterate the static mapped areas. It helps to reduce the overhead of finding
a matched area. And the architecture dependency on the vmalloc layer is
removed, which will help the maintainability of the vmalloc layer.
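As a rough userspace sketch (the `static_vm` layout and helpers here are simplified and hypothetical, loosely following the patch's naming), the idea is that a short, dedicated list of static mappings keeps the physical-address lookup cheap:

```c
#include <stddef.h>

/* Simplified stand-in for the patch's static_vm: a short, dedicated
 * list of static mappings, so a lookup walks only these few entries
 * instead of every vmalloc area. */
struct static_vm {
    unsigned long paddr;        /* start of the physical range */
    unsigned long size;         /* length of the range in bytes */
    unsigned long vaddr;        /* virtual address it is mapped at */
    struct static_vm *next;
};

struct static_vm *static_vmlist;

/* Add a mapping to the front of the list. */
void add_static_vm(struct static_vm *svm)
{
    svm->next = static_vmlist;
    static_vmlist = svm;
}

/* Find the static mapping covering a physical address, or NULL. */
struct static_vm *find_static_vm_paddr(unsigned long paddr)
{
    struct static_vm *svm;

    for (svm = static_vmlist; svm != NULL; svm = svm->next) {
        if (paddr >= svm->paddr && paddr - svm->paddr < svm->size)
            return svm;
    }
    return NULL;
}
```

Because only static mappings live on this list, the walk is bounded by the handful of fixed I/O mappings rather than by every vmalloc allocation in the system.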
Signed-off-by: Joonsoo Kim
: change cache_is_vipt() to cache_is_vipt_nonaliasing() in order to
be self-documented
Acked-by: Nicolas Pitre
Signed-off-by: Joonsoo Kim
---
Hello Nicolas.
I maintain your 'Acked-by' while updating this patch to v2.
Please let me know if there is problem.
Thanks.
diff --git a/arch/
On Thu, Mar 07, 2013 at 07:35:51PM +0900, JoonSoo Kim wrote:
> 2013/3/7 Nicolas Pitre :
> > On Thu, 7 Mar 2013, Joonsoo Kim wrote:
> >
> >> Hello, Nicolas.
> >>
> >> On Tue, Mar 05, 2013 at 05:36:12PM +0800, Nicolas Pitre wrote:
> >> > On Mo
Hello, Pekka.
Could you pick up 1/3 and 3/3?
These are already acked by Christoph.
2/3 has the same effect as Glauber's "slub: correctly bootstrap boot caches",
so it should be skipped.
Thanks.
On Mon, Jan 21, 2013 at 05:01:25PM +0900, Joonsoo Kim wrote:
> There is a subtle bug when calcu
On Mon, Feb 25, 2013 at 01:56:59PM +0900, Joonsoo Kim wrote:
> On Thu, Feb 14, 2013 at 02:48:33PM +0900, Joonsoo Kim wrote:
> > Commit 88b8dac0 makes load_balance() consider other cpus in its group.
> > But, there are some missing parts for this feature to work properly.
> >
Remove the unused argument and make the function static,
because there is no user outside of nobootmem.c.
Signed-off-by: Joonsoo Kim
diff --git a/include/linux/bootmem.h b/include/linux/bootmem.h
index cdc3bab..5f0b0e1 100644
--- a/include/linux/bootmem.h
+++ b/include/linux/bootmem.h
@@ -44,7 +44,6
max_low_pfn reflects the number of _pages_ in the system,
not the maximum PFN. You can easily find that fact in init_bootmem().
So fix it.
Additionally, if 'start_pfn == end_pfn', we don't need to go further,
so change the range check.
Signed-off-by: Joonsoo Kim
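On that reading, the distinction matters whenever memory does not start at PFN 0. A toy illustration in plain C (not kernel code; the helper name is made up): if low memory starts at min_low_pfn and max_low_pfn is a page count, the two quantities differ by the start offset.

```c
/* Toy illustration: if low memory spans the PFN range
 * [min_low_pfn, min_low_pfn + max_low_pfn) and max_low_pfn is a page
 * COUNT, then the highest valid PFN is NOT max_low_pfn itself. */
unsigned long highest_low_pfn(unsigned long min_low_pfn,
                              unsigned long max_low_pfn)
{
    return min_low_pfn + max_low_pfn - 1;
}
```

With min_low_pfn = 0 the count and the end PFN coincide, which is why the confusion is easy to make on machines whose RAM begins at physical address zero.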
diff --git a/mm/n
Currently, we do memset() before reserving the area.
This may not cause any problem, but it is somewhat weird.
So change the execution order.
Signed-off-by: Joonsoo Kim
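A userspace sketch of the reordered path (the pool allocator below is made up for illustration, not the nobootmem code): reserving first means the clear only ever runs on memory the allocator has successfully claimed.

```c
#include <string.h>
#include <stddef.h>

/* Toy model of an early allocator: reserve the range first, and only
 * memset() it after the reservation succeeded, instead of clearing
 * memory we might not end up owning. */
#define POOL_SIZE 4096

static unsigned char pool[POOL_SIZE];
static size_t pool_used;

void *alloc_zeroed(size_t size)
{
    void *p;

    if (size > POOL_SIZE - pool_used)
        return NULL;            /* reservation failed: nothing touched */

    p = &pool[pool_used];
    pool_used += size;          /* reserve the area... */
    memset(p, 0, size);         /* ...then clear it */
    return p;
}
```

On the failure path nothing has been written, which is the tidier ordering the patch argues for.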
diff --git a/mm/nobootmem.c b/mm/nobootmem.c
index 589c673..f11ec1c 100644
--- a/mm/nobootmem.c
+++ b/mm/nobootmem.c
@@ -46,8
On Tue, Mar 19, 2013 at 02:16:00PM +0900, Joonsoo Kim wrote:
> Remove unused argument and make function static,
> because there is no user outside of nobootmem.c
>
> Signed-off-by: Joonsoo Kim
>
> diff --git a/include/linux/bootmem.h b/include/linux/bootmem.h
> index cd
On Mon, Mar 18, 2013 at 10:53:04PM -0700, Yinghai Lu wrote:
> On Mon, Mar 18, 2013 at 10:16 PM, Joonsoo Kim wrote:
> > Currently, we do memset() before reserving the area.
> > This may not cause any problem, but it is somewhat weird.
> > So change execution order.
> >
On Mon, Mar 18, 2013 at 10:51:43PM -0700, Yinghai Lu wrote:
> On Mon, Mar 18, 2013 at 10:16 PM, Joonsoo Kim wrote:
> > Remove unused argument and make function static,
> > because there is no user outside of nobootmem.c
> >
> > Signed-off-by: Joonsoo Kim
> >
On Mon, Mar 18, 2013 at 10:47:41PM -0700, Yinghai Lu wrote:
> On Mon, Mar 18, 2013 at 10:15 PM, Joonsoo Kim wrote:
> > max_low_pfn reflect the number of _pages_ in the system,
> > not the maximum PFN. You can easily find that fact in init_bootmem().
> > So fix it.
>
>
On Tue, Mar 19, 2013 at 03:25:22PM +0900, Joonsoo Kim wrote:
> On Mon, Mar 18, 2013 at 10:47:41PM -0700, Yinghai Lu wrote:
> > On Mon, Mar 18, 2013 at 10:15 PM, Joonsoo Kim
> > wrote:
> > > max_low_pfn reflect the number of _pages_ in the system,
> > > not the
On Tue, Mar 19, 2013 at 12:35:45AM -0700, Yinghai Lu wrote:
> Can you check why sparc do not need to change interface during converting
> to use memblock to replace bootmem?
Sure.
According to my understanding of the sparc32 code (arch/sparc/mm/init_32.c),
they already use max_low_pfn as the maximum PF
Hello, Peter.
On Tue, Mar 19, 2013 at 03:02:21PM +0100, Peter Zijlstra wrote:
> On Thu, 2013-02-14 at 14:48 +0900, Joonsoo Kim wrote:
> > After commit 88b8dac0, dst-cpu can be changed in load_balance(),
> > then we can't know cpu_idle_type of dst-cpu when load_balance()
>
On Tue, Mar 19, 2013 at 03:20:57PM +0100, Peter Zijlstra wrote:
> On Thu, 2013-02-14 at 14:48 +0900, Joonsoo Kim wrote:
> > Commit 88b8dac0 makes load_balance() consider other cpus in its group,
> > regardless of idle type. When we do NEWLY_IDLE balancing, we should not
> > c
On Tue, Mar 19, 2013 at 03:30:15PM +0100, Peter Zijlstra wrote:
> On Thu, 2013-02-14 at 14:48 +0900, Joonsoo Kim wrote:
> > Some validation for task moving is performed in move_tasks() and
> > move_one_task(). We can move these code to can_migrate_task()
> > which is already e
On Tue, Mar 19, 2013 at 04:01:01PM +0100, Peter Zijlstra wrote:
> On Thu, 2013-02-14 at 14:48 +0900, Joonsoo Kim wrote:
> > This name doesn't represent specific meaning.
> > So rename it to imply it's purpose.
> >
> > Signed-off-by: Joonsoo Kim
> >
>
On Tue, Mar 19, 2013 at 04:05:46PM +0100, Peter Zijlstra wrote:
> On Thu, 2013-02-14 at 14:48 +0900, Joonsoo Kim wrote:
> > Commit 88b8dac0 makes load_balance() consider other cpus in its group.
> > But, in that, there is no code for preventing to re-select dst-cpu.
> > So
On Tue, Mar 19, 2013 at 04:21:23PM +0100, Peter Zijlstra wrote:
> On Thu, 2013-02-14 at 14:48 +0900, Joonsoo Kim wrote:
> > Commit 88b8dac0 makes load_balance() consider other cpus in its group.
> > So, now, When we redo in load_balance(), we should reset some fields of
> >
2013/3/20 Peter Zijlstra :
> On Wed, 2013-03-20 at 16:33 +0900, Joonsoo Kim wrote:
>
>> > Right, so I'm not so taken with this one. The whole load stuff really
>> > is a balance heuristic that's part of move_tasks(), move_one_task()
>> > really doesn&
2013/3/20 Peter Zijlstra :
> On Wed, 2013-03-20 at 16:43 +0900, Joonsoo Kim wrote:
>> On Tue, Mar 19, 2013 at 04:05:46PM +0100, Peter Zijlstra wrote:
>> > On Thu, 2013-02-14 at 14:48 +0900, Joonsoo Kim wrote:
>> > > Commit 88b8dac0 makes load_balance() consider other
2013/3/19 Tejun Heo :
> On Wed, Mar 13, 2013 at 07:57:18PM -0700, Tejun Heo wrote:
>> and available in the following git branch.
>>
>> git://git.kernel.org/pub/scm/linux/kernel/git/tj/wq.git review-finer-locking
>
> Applied to wq/for-3.10.
Hello, Tejun.
I know I am late, but, please give me a ch
2013/3/20 Tejun Heo :
> Unbound workqueues are going to be NUMA-affine. Add wq_numa_tbl_len
> and wq_numa_possible_cpumask[] in preparation. The former is the
> highest NUMA node ID + 1 and the latter is masks of possibles CPUs for
> each NUMA node.
It is better to move this code to topology.c o
Now, there is no user for vmregion.
So remove it.
Acked-by: Nicolas Pitre
Signed-off-by: Joonsoo Kim
diff --git a/arch/arm/mm/Makefile b/arch/arm/mm/Makefile
index 8a9c4cb..4e333fa 100644
--- a/arch/arm/mm/Makefile
+++ b/arch/arm/mm/Makefile
@@ -6,7 +6,7 @@ obj-y
: Joonsoo Kim
diff --git a/arch/arm/mm/ioremap.c b/arch/arm/mm/ioremap.c
index 88fd86c..904c15e 100644
--- a/arch/arm/mm/ioremap.c
+++ b/arch/arm/mm/ioremap.c
@@ -39,6 +39,70 @@
#include
#include "mm.h"
+
+LIST_HEAD(static_vmlist);
+
+static struct static_vm *find_static_vm_paddr(p
Modify static_vm's flags bits
[3/3]: Rework according to [2/3] change
Rebased on v3.8-rc5
v2->v3:
coverletter: refer a link related to this work
[2/3]: drop @flags of find_static_vm_vaddr
Rebased on v3.8-rc4
v1->v2:
[2/3]: patch description is improved.
Rebased on v3.7-rc7
Joonsoo Kim (3):
.
With it, we don't need to iterate all mapped areas. Instead, we just
iterate the static mapped areas. It helps to reduce the overhead of finding
a matched area. And the architecture dependency on the vmalloc layer is
removed, which will help the maintainability of the vmalloc layer.
Signed-off-by: Joonsoo Kim
Hello, Nicolas.
On Mon, Feb 04, 2013 at 11:44:16PM -0500, Nicolas Pitre wrote:
> On Tue, 5 Feb 2013, Joonsoo Kim wrote:
>
> > A static mapped area is ARM-specific, so it is better not to use
> > generic vmalloc data structure, that is, vmlist and vmlist_lock
> > for man
Hello, Rob.
On Tue, Feb 05, 2013 at 01:12:51PM -0600, Rob Herring wrote:
> On 02/05/2013 12:13 PM, Nicolas Pitre wrote:
> > On Tue, 5 Feb 2013, Rob Herring wrote:
> >
> >> On 02/04/2013 10:44 PM, Nicolas Pitre wrote:
> >>> On Tue, 5 Feb 2013, Joonsoo Kim wrot
Hello, Santosh.
On Tue, Feb 05, 2013 at 02:22:39PM +0530, Santosh Shilimkar wrote:
> On Tuesday 05 February 2013 06:01 AM, Joonsoo Kim wrote:
> >Now, there is no user for vmregion.
> >So remove it.
> >
> >Acked-by: Nicolas Pitre
> >Signed-off-by: Joonsoo Kim
Hello, Santosh.
On Tue, Feb 05, 2013 at 02:32:06PM +0530, Santosh Shilimkar wrote:
> On Tuesday 05 February 2013 06:01 AM, Joonsoo Kim wrote:
> >In current implementation, we used ARM-specific flag, that is,
> >VM_ARM_STATIC_MAPPING, for distinguishing ARM specific static mapp
On Wed, Feb 06, 2013 at 11:07:07AM +0900, Joonsoo Kim wrote:
> Hello, Rob.
>
> On Tue, Feb 05, 2013 at 01:12:51PM -0600, Rob Herring wrote:
> > On 02/05/2013 12:13 PM, Nicolas Pitre wrote:
> > > On Tue, 5 Feb 2013, Rob Herring wrote:
> > >
> > >>
this problem.
Signed-off-by: Joonsoo Kim
diff --git a/arch/arm/kernel/sched_clock.c b/arch/arm/kernel/sched_clock.c
index fc6692e..bd6f56b 100644
--- a/arch/arm/kernel/sched_clock.c
+++ b/arch/arm/kernel/sched_clock.c
@@ -93,11 +93,11 @@ static void notrace update_sched_clock(void
of find_static_vm_vaddr
Rebased on v3.8-rc4
v1->v2:
[2/3]: patch description is improved.
Rebased on v3.7-rc7
Joonsoo Kim (3):
ARM: vmregion: remove vmregion code entirely
ARM: ioremap: introduce an infrastructure for static mapped area
ARM: mm: use static_vm for managing static
: Nicolas Pitre
Tested-by: Santosh Shilimkar
Signed-off-by: Joonsoo Kim
diff --git a/arch/arm/mm/ioremap.c b/arch/arm/mm/ioremap.c
index 88fd86c..904c15e 100644
--- a/arch/arm/mm/ioremap.c
+++ b/arch/arm/mm/ioremap.c
@@ -39,6 +39,70 @@
#include
#include "mm.h"
+
+LIST_HEAD(sta
Now, there is no user for vmregion.
So remove it.
Acked-by: Nicolas Pitre
Tested-by: Santosh Shilimkar
Signed-off-by: Joonsoo Kim
diff --git a/arch/arm/mm/Makefile b/arch/arm/mm/Makefile
index 8a9c4cb..4e333fa 100644
--- a/arch/arm/mm/Makefile
+++ b/arch/arm/mm/Makefile
@@ -6,7 +6,7 @@ obj-y
Acked-by: Rob Herring
Tested-by: Santosh Shilimkar
Signed-off-by: Joonsoo Kim
diff --git a/arch/arm/mm/ioremap.c b/arch/arm/mm/ioremap.c
index 904c15e..04d9006 100644
--- a/arch/arm/mm/ioremap.c
+++ b/arch/arm/mm/ioremap.c
@@ -261,13 +261,14 @@ void __iomem * __arm_ioremap_pfn_caller(unsigned
Hello, Linus.
2013/2/6 Linus Walleij :
> On Wed, Feb 6, 2013 at 6:21 AM, Joonsoo Kim wrote:
>
>> If we want load epoch_cyc and epoch_ns atomically,
>> we should update epoch_cyc_copy first of all.
>> This notify reader that updating is in progress.
>
> If you think
Hello, Russell.
On Wed, Feb 06, 2013 at 04:33:55PM +, Russell King - ARM Linux wrote:
> On Wed, Feb 06, 2013 at 10:33:53AM +0100, Linus Walleij wrote:
> > On Wed, Feb 6, 2013 at 6:21 AM, Joonsoo Kim wrote:
> >
> > > If we want load epoch_cyc and epoch_ns atomically
2013/2/9 Nicolas Pitre :
> On Fri, 8 Feb 2013, Russell King - ARM Linux wrote:
>
>> On Fri, Feb 08, 2013 at 03:51:25PM +0900, Joonsoo Kim wrote:
>> > I try to put it into patch tracker, but I fail to put it.
>> > I use following command.
>> >
>> > g
Hello, Preeti.
On Tue, Apr 02, 2013 at 10:25:23AM +0530, Preeti U Murthy wrote:
> Hi Joonsoo,
>
> On 04/02/2013 07:55 AM, Joonsoo Kim wrote:
> > Hello, Preeti.
> >
> > On Mon, Apr 01, 2013 at 12:36:52PM +0530, Preeti U Murthy wrote:
> >> Hi Joonsoo,
> >
Hello, Mike.
On Tue, Apr 02, 2013 at 04:35:26AM +0200, Mike Galbraith wrote:
> On Tue, 2013-04-02 at 11:25 +0900, Joonsoo Kim wrote:
> > Hello, Preeti.
> >
> > On Mon, Apr 01, 2013 at 12:36:52PM +0530, Preeti U Murthy wrote:
> > > Hi Joonsoo,
> > >
>
Hello, Peter.
On Tue, Apr 02, 2013 at 10:10:06AM +0200, Peter Zijlstra wrote:
> On Thu, 2013-03-28 at 16:58 +0900, Joonsoo Kim wrote:
> > Now checking that this cpu is appropriate to balance is embedded into
> > update_sg_lb_stats() and this checking has no direct relations
Hello, Preeti.
On Tue, Apr 02, 2013 at 11:02:43PM +0530, Preeti U Murthy wrote:
> Hi Joonsoo,
>
>
> >>> I think that it is real problem that sysctl_sched_min_granularity is not
> >>> guaranteed for each task.
> >>> Instead of this patch, how about considering low bound?
> >>>
> >>> if (slice < s
Hello, Peter.
On Tue, Apr 02, 2013 at 12:29:42PM +0200, Peter Zijlstra wrote:
> On Tue, 2013-04-02 at 12:00 +0200, Peter Zijlstra wrote:
> > On Tue, 2013-04-02 at 18:50 +0900, Joonsoo Kim wrote:
> > >
> > > It seems that there is some misunderstanding about this pat
Hello, Christoph.
On Tue, Apr 02, 2013 at 07:25:20PM +, Christoph Lameter wrote:
> On Tue, 2 Apr 2013, Joonsoo Kim wrote:
>
> > We need one more fix for correctness.
> > When available is assigned by put_cpu_partial, it doesn't count cpu slab's
> > objects.
Hello, Oskar.
On Thu, Apr 04, 2013 at 02:51:26PM +0200, Oskar Andero wrote:
> From: Toby Collett
>
> The symbol lookup can take a long time and kprobes is
> initialised very early in boot, so delay symbol lookup
> until the blacklist is first used.
>
> Cc: Masami Hiramatsu
> Cc: David S. Mille
On Thu, Apr 04, 2013 at 01:53:25PM +, Christoph Lameter wrote:
> On Thu, 4 Apr 2013, Joonsoo Kim wrote:
>
> > Pekka alreay applied it.
> > Do we need update?
>
> Well I thought the passing of the count via lru.next would be something
> worthwhile to pick up.
>
Hello, Preeti.
On Thu, Apr 04, 2013 at 12:18:32PM +0530, Preeti U Murthy wrote:
> Hi Joonsoo,
>
> On 04/04/2013 06:12 AM, Joonsoo Kim wrote:
> > Hello, Preeti.
>
> >
> > So, how about extending a sched_period with rq->nr_running, instead of
> > cfs_rq-&g
; > >> [+cc Greg for driver core]
> > >>
> > >> On Fri, Jan 25, 2013 at 10:13:03AM +0900, Joonsoo Kim wrote:
> > >> > Hello, Bjorn.
> > >> >
> > >> > On Thu, Jan 24, 2013 at 10:45:13AM -0700, Bjorn Helgaas wrote:
> >
On Thu, Jan 24, 2013 at 10:32:32PM -0500, CAI Qian wrote:
>
>
> - Original Message -
> > From: "Greg Kroah-Hartman"
> > To: "Joonsoo Kim"
> > Cc: "Paul Hargrove" , "Pekka Enberg"
> > , linux-kernel@vger.k
On Mon, Jan 28, 2013 at 01:04:24PM -0500, Nicolas Pitre wrote:
> On Mon, 28 Jan 2013, Will Deacon wrote:
>
> > Hello,
> >
> > On Thu, Jan 24, 2013 at 01:28:51AM +, Joonsoo Kim wrote:
> > > In current implementation, we used ARM-specific flag, that is,
Hello, Nicolas.
On Tue, Jan 29, 2013 at 07:05:32PM -0500, Nicolas Pitre wrote:
> On Thu, 24 Jan 2013, Joonsoo Kim wrote:
>
> > From: Joonsoo Kim
> >
> > In current implementation, we used ARM-specific flag, that is,
> > VM_ARM_STATIC_MAPPING, for distinguishing A
Now, there is no user for vmregion.
So remove it.
Acked-by: Nicolas Pitre
Signed-off-by: Joonsoo Kim
diff --git a/arch/arm/mm/Makefile b/arch/arm/mm/Makefile
index 8a9c4cb..4e333fa 100644
--- a/arch/arm/mm/Makefile
+++ b/arch/arm/mm/Makefile
@@ -6,7 +6,7 @@ obj-y
: Joonsoo Kim
diff --git a/arch/arm/mm/ioremap.c b/arch/arm/mm/ioremap.c
index 88fd86c..ceb34ae 100644
--- a/arch/arm/mm/ioremap.c
+++ b/arch/arm/mm/ioremap.c
@@ -39,6 +39,78 @@
#include
#include "mm.h"
+
+LIST_HEAD(static_vmlist);
+static DEFINE_RWLOCK(static_vmlist_lock);
+
+sta
rw_lock
Modify static_vm's flags bits
[3/3]: Rework according to [2/3] change
Rebased on v3.8-rc5
v2->v3:
coverletter: refer a link related to this work
[2/3]: drop @flags of find_static_vm_vaddr
Rebased on v3.8-rc4
v1->v2:
[2/3]: patch description is improved.
Rebased o
.
With it, we don't need to iterate all mapped areas. Instead, we just
iterate the static mapped areas. It helps to reduce the overhead of finding
a matched area. And the architecture dependency on the vmalloc layer is
removed, which will help the maintainability of the vmalloc layer.
Signed-off-by: Joonsoo Kim
2013/2/1 Nicolas Pitre :
> On Thu, 31 Jan 2013, Joonsoo Kim wrote:
>
>> A static mapped area is ARM-specific, so it is better not to use
>> generic vmalloc data structure, that is, vmlist and vmlist_lock
>> for managing static mapped area. And it causes some needless overh
Hello, Nicolas.
2013/2/1 Nicolas Pitre :
> On Thu, 31 Jan 2013, Joonsoo Kim wrote:
>
>> In current implementation, we used ARM-specific flag, that is,
>> VM_ARM_STATIC_MAPPING, for distinguishing ARM specific static mapped area.
>> The purpose of static mapped area is to r
attributes.
>
> v2: Joonsoo pointed out that it'd better to align struct worker_pool
> rather than the array so that every pool is aligned.
>
> Signed-off-by: Tejun Heo
> Cc: Joonsoo Kim
> ---
> Rebased on top of the current wq/for-3.9 and Joo
Hello, Bjorn.
On Thu, Jan 24, 2013 at 10:45:13AM -0700, Bjorn Helgaas wrote:
> On Fri, Dec 28, 2012 at 6:50 AM, Joonsoo Kim wrote:
> > During early boot phase, PCI bus subsystem is not yet initialized.
> > If panic is occured in early boot phase and panic_timeout is set,
> &
Hello, Seth.
Here come some minor comments.
On Wed, Feb 20, 2013 at 04:04:44PM -0600, Seth Jennings wrote:
> zswap is a thin compression backend for frontswap. It receives
> pages from frontswap and attempts to store them in a compressed
> memory pool, resulting in an effective partial memory reclaim
On Thu, Feb 14, 2013 at 02:48:33PM +0900, Joonsoo Kim wrote:
> Commit 88b8dac0 makes load_balance() consider other cpus in its group.
> But, there are some missing parts for this feature to work properly.
> This patchset correct these things and make load_balance() robust.
>
> Oth
Hello, Christoph.
On Sun, Feb 24, 2013 at 12:35:22AM +, Christoph Lameter wrote:
> On Sat, 23 Feb 2013, JoonSoo Kim wrote:
>
> > With flushing, deactivate_slab() occur and it has some overhead to
> > deactivate objects.
> > If my patch properly fix this situation,
Hello, Seth.
I'm not sure that this is the right time to review, because I have already
seen many efforts by various people to promote the zxxx series. I don't want
to be a blocker for these. :)
But I read the code now, so here are some comments below.
On Wed, Feb 13, 2013 at 12:38:44PM -0600, Seth
On Wed, Feb 20, 2013 at 08:37:33AM +0900, Minchan Kim wrote:
> On Tue, Feb 19, 2013 at 11:54:21AM -0600, Seth Jennings wrote:
> > On 02/19/2013 03:18 AM, Joonsoo Kim wrote:
> > > Hello, Seth.
> > > I'm not sure that this is right time to review, because I alread
On Wed, Feb 20, 2013 at 04:04:41PM -0600, Seth Jennings wrote:
> =
> DO NOT MERGE, FOR REVIEW ONLY
> This patch introduces zsmalloc as new code, however, it already
> exists in drivers/staging. In order to build successfully, you
> must select EITHER to driver/staging version OR this versi
2013/2/23 Christoph Lameter :
> On Fri, 22 Feb 2013, Glauber Costa wrote:
>
>> On 02/22/2013 09:01 PM, Christoph Lameter wrote:
>> > Argh. This one was the final version:
>> >
>> > https://patchwork.kernel.org/patch/2009521/
>> >
>>
>> It seems it would work. It is all the same to me.
>> Which one
From: Joonsoo Kim
Although our intention is to unexport the internal structure entirely,
there is one exception for kexec. kexec dumps the address of vmlist,
and makedumpfile uses this information.
We are about to remove vmlist, so another way to retrieve information
about the vmalloc layer is needed for
n v3.9-rc2.
Changes from v1
5/8: skip areas for lazy_free
6/8: skip areas for lazy_free
7/8: export vmap_area_list for kexec, instead of vmlist
Joonsoo Kim (8):
mm, vmalloc: change iterating a vmlist to find_vm_area()
mm, vmalloc: move get_vmalloc_info() to vmalloc.c
mm, vmalloc: protect
From: Joonsoo Kim
This patch is a preparing step for removing vmlist entirely.
For the above purpose, we change code iterating the vmlist to iterate the
vmap_area_list. It is a somewhat trivial change, but one thing
should be noticed.
vmlist lacks information about some areas in vmalloc
From: Joonsoo Kim
Now, when we hold the vmap_area_lock, va->vm can't be discarded. So we can
safely access va->vm when iterating the vmap_area_list while holding the
vmap_area_lock. With this property, change the vmlist iteration in
vread/vwrite() to iterate the vmap_area_list.
There
From: Joonsoo Kim
Inserting an entry into and removing one from the vmlist takes linear time,
so it is inefficient. The following patches will try to remove vmlist entirely.
This patch is a preparing step for it.
For removing vmlist, code iterating the vmlist should be changed to iterate
the vmap_area_list
From: Joonsoo Kim
Now, there is no need to maintain vmlist after initializing vmalloc.
So remove the related code and data structure.
Signed-off-by: Joonsoo Kim
Signed-off-by: Joonsoo Kim
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 7e63984..151da8a 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
From: Joonsoo Kim
This patch is a preparing step for removing vmlist entirely.
For the above purpose, we change code iterating the vmlist to iterate the
vmap_area_list. It is a somewhat trivial change, but one thing
should be noticed.
Using vmap_area_list in vmallocinfo() introduces ordering
From: Joonsoo Kim
The purpose of iterating the vmlist is to find the vm area with a specific
virtual address. find_vm_area() is provided for this purpose
and is more efficient, because it uses an rbtree.
So change it.
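The kernel's find_vm_area() walks an rbtree keyed by start address; as a simplified stand-in (the rbtree replaced by a sorted array, names made up), the same O(log n) containment lookup can be sketched as:

```c
#include <stddef.h>

/* Simplified model: areas sorted by start address, searched by binary
 * search instead of the kernel's rbtree. Returns the area containing
 * addr, or NULL if no area covers it. */
struct varea {
    unsigned long start;
    unsigned long size;
};

struct varea *find_area(struct varea *areas, size_t n, unsigned long addr)
{
    size_t lo = 0, hi = n;

    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;

        if (addr < areas[mid].start)
            hi = mid;                       /* look in the lower half */
        else if (addr >= areas[mid].start + areas[mid].size)
            lo = mid + 1;                   /* look in the upper half */
        else
            return &areas[mid];             /* addr falls inside this area */
    }
    return NULL;
}
```

Compared with scanning a linked vmlist, each probe halves the search space, which is the efficiency argument the commit message makes.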
Cc: Chris Metcalf
Cc: Guan Xuetao
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: "H.
From: Joonsoo Kim
Now get_vmalloc_info() is in fs/proc/mmu.c. There is no reason
that this code must be here; its implementation needs vmlist_lock,
and it iterates the vmlist, which may be an internal data structure for vmalloc.
It is preferable that vmlist_lock and vmlist are only used in vmal
Hello, Hugh.
On Thu, Mar 07, 2013 at 06:01:26PM -0800, Hugh Dickins wrote:
> On Fri, 8 Mar 2013, Joonsoo Kim wrote:
> > On Thu, Mar 07, 2013 at 10:54:15AM -0800, Hugh Dickins wrote:
> > > On Thu, 7 Mar 2013, Joonsoo Kim wrote:
> > >
> > > > W
2012/10/23 Glauber Costa :
> On 10/23/2012 12:07 PM, Glauber Costa wrote:
>> On 10/23/2012 04:48 AM, JoonSoo Kim wrote:
>>> Hello, Glauber.
>>>
>>> 2012/10/23 Glauber Costa :
>>>> On 10/22/2012 06:45 PM, Christoph Lameter wrote
Hi, Eric.
2012/10/23 Eric Dumazet :
> On Tue, 2012-10-23 at 11:29 +0900, JoonSoo Kim wrote:
>> 2012/10/22 Christoph Lameter :
>> > On Sun, 21 Oct 2012, Joonsoo Kim wrote:
>> >
>> >> kmalloc() and kmalloc_node() of the SLUB isn't inlined when @flags =
Hi, Glauber.
2012/10/19 Glauber Costa :
> For the kmem slab controller, we need to record some extra
> information in the kmem_cache structure.
>
> Signed-off-by: Glauber Costa
> Signed-off-by: Suleiman Souhlal
> CC: Christoph Lameter
> CC: Pekka Enberg
> CC: Michal Hocko
> CC: Kamezawa Hiroy
2012/10/19 Glauber Costa :
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 6a1e096..59f6d54 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -339,6 +339,12 @@ struct mem_cgroup {
> #if defined(CONFIG_MEMCG_KMEM) && defined(CONFIG_INET)
> struct tcp_memcontrol tcp_mem;
2012/10/24 Glauber Costa :
> On 10/24/2012 06:29 PM, Christoph Lameter wrote:
>> On Wed, 24 Oct 2012, Glauber Costa wrote:
>>
>>> Because of that, we either have to move all the entry points to the
>>> mm/slab.h and rely heavily on the pre-processor, or include all .c files
>>> in here.
>>
>> Hmm..
2012/10/19 Glauber Costa :
> @@ -2930,9 +2937,188 @@ int memcg_register_cache(struct mem_cgroup *memcg,
> struct kmem_cache *s)
>
> void memcg_release_cache(struct kmem_cache *s)
> {
> + struct kmem_cache *root;
> + int id = memcg_css_id(s->memcg_params->memcg);
> +
> + if (s->
Hello, Andrew.
2012/10/31 Andrew Morton :
> On Mon, 29 Oct 2012 04:12:53 +0900
> Joonsoo Kim wrote:
>
>> The pool_lock protects the page_address_pool from concurrent access.
>> But, access to the page_address_pool is already protected by kmap_lock.
>> So rem
We can find a free page_address_map instance without the page_address_pool.
So remove it.
Cc: Mel Gorman
Cc: Peter Zijlstra
Signed-off-by: Joonsoo Kim
Reviewed-by: Minchan Kim
diff --git a/mm/highmem.c b/mm/highmem.c
index 017bad1..d98b0a9 100644
--- a/mm/highmem.c
+++ b/mm/highmem.c
In flush_all_zero_pkmaps(), we have the index of the pkmap associated with the page.
Using this index, we can simply get the virtual address of the page.
So change it.
Cc: Mel Gorman
Cc: Peter Zijlstra
Signed-off-by: Joonsoo Kim
Reviewed-by: Minchan Kim
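The index/address relationship used here can be sketched with the usual macro pair; the PKMAP_BASE and PAGE_SHIFT values below are illustrative, not any particular platform's.

```c
/* Illustrative constants; real kernels define these per architecture. */
#define PAGE_SHIFT  12
#define PKMAP_BASE  0xfe000000UL

/* Index of the pkmap entry backing a virtual address... */
#define PKMAP_NR(virt)   (((virt) - PKMAP_BASE) >> PAGE_SHIFT)
/* ...and the inverse: the virtual address for a pkmap index. */
#define PKMAP_ADDR(nr)   (PKMAP_BASE + ((unsigned long)(nr) << PAGE_SHIFT))
```

Since the two macros are exact inverses, code that already holds the index can compute the virtual address directly instead of re-deriving it from the page.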
diff --git a/mm/highmem.c b/mm/highmem.c
index
ional change.
[2-3] are for clean-up and optimization.
These eliminate a useless lock operation and list management.
[4-5] is for optimization related to flush_all_zero_pkmaps().
Joonsoo Kim (5):
mm, highmem: use PKMAP_NR() to calculate an index of pkmap
mm, highmem: remove useless pool_lock
To calculate the index of a pkmap, using PKMAP_NR() is more understandable
and maintainable, so change it.
Cc: Mel Gorman
Cc: Peter Zijlstra
Signed-off-by: Joonsoo Kim
Reviewed-by: Minchan Kim
diff --git a/mm/highmem.c b/mm/highmem.c
index d517cd1..b3b3d68 100644
--- a/mm/highmem.c
+++ b/mm
Cc: Minchan Kim
Signed-off-by: Joonsoo Kim
diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index ef788b5..97ad208 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -32,6 +32,7 @@ static inline void invalidate_kernel_vmap_range(void *vaddr,
int size)
#ifdef
The pool_lock protects the page_address_pool from concurrent access.
But, access to the page_address_pool is already protected by kmap_lock.
So remove it.
Cc: Mel Gorman
Cc: Peter Zijlstra
Signed-off-by: Joonsoo Kim
Reviewed-by: Minchan Kim
diff --git a/mm/highmem.c b/mm/highmem.c
index
Hello, Andrew.
2012/10/29 JoonSoo Kim :
> Hi, Minchan.
>
> 2012/10/29 Minchan Kim :
>> Hi Joonsoo,
>>
>> On Mon, Oct 29, 2012 at 04:12:51AM +0900, Joonsoo Kim wrote:
>>> This patchset clean-up and optimize highmem related code.
>>>
>>> [1]
Hello, Minchan.
2012/11/1 Minchan Kim :
> On Thu, Nov 01, 2012 at 01:56:36AM +0900, Joonsoo Kim wrote:
>> In current code, after flush_all_zero_pkmaps() is invoked,
>> then re-iterate all pkmaps. It can be optimized if flush_all_zero_pkmaps()
>> return index of first flu
Hello, Glauber.
2012/11/2 Glauber Costa :
> On 11/02/2012 04:04 AM, Andrew Morton wrote:
>> On Thu, 1 Nov 2012 16:07:16 +0400
>> Glauber Costa wrote:
>>
>>> Hi,
>>>
>>> This work introduces the kernel memory controller for memcg. Unlike previous
>>> submissions, this includes the whole controlle
Signed-off-by: Joonsoo Kim
Cc: Michal Nazarewicz
Cc: Marek Szyprowski
Cc: Minchan Kim
Cc: Christoph Lameter
Acked-by: Christoph Lameter
Acked-by: Michal Nazarewicz
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 4403009..02d4519 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@