Based on top of the submitted patchset for ARM:
'introduce static_vm for ARM-specific static mapped area'
https://lkml.org/lkml/2012/11/27/356
But it runs properly on x86 without the ARM patchset.
Joonsoo Kim (8):
mm, vmalloc: change iterating a vmlist to find_vm_area()
mm, vmalloc: move get_
ility. So move the code to vmalloc.c
Signed-off-by: Joonsoo Kim
diff --git a/fs/proc/Makefile b/fs/proc/Makefile
index 99349ef..88092c1 100644
--- a/fs/proc/Makefile
+++ b/fs/proc/Makefile
@@ -5,7 +5,7 @@
obj-y += proc.o
proc-y := nommu.o task_nommu.o
-proc-$(CONFI
example, vm_map_ram() allocates an area in the vmalloc address space,
but it doesn't link it into vmlist. Providing full information about
the vmalloc address space is the better idea, so we don't use va->vm and use
vmap_area directly.
This makes get_vmalloc_info() more precise.
Signed-off-by: Joonsoo Kim
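A minimal sketch of the iteration this describes, assuming the
vmap_area_list/vmap_area_lock names used elsewhere in this series and a
caller-provided vmalloc_info pointer 'vmi':

	struct vmap_area *va;

	spin_lock(&vmap_area_lock);
	list_for_each_entry(va, &vmap_area_list, list) {
		/* count every allocated area, including vm_map_ram()
		 * areas that never appear on vmlist */
		vmi->used += va->va_end - va->va_start;
	}
	spin_unlock(&vmap_area_lock);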
t for vmlist is sufficient.
So use vmlist_early for the full chain of vm_struct, and assign a dummy_vm
to vmlist to support kexec.
Cc: Eric Biederman
Signed-off-by: Joonsoo Kim
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index f134950..8a1b959 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -272,6 +2
Now, there is no need to maintain vmlist_early after initializing vmalloc.
So remove the related code and data structure.
Signed-off-by: Joonsoo Kim
diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 698b1e5..10d19c9 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux
her CPUs. So we need smp_[rw]mb to ensure that proper
values are assigned when we see that VM_UNLIST is removed.
Therefore, this patch not only changes the iteration list, but also adds
appropriate smp_[rw]mb calls in the right places.
Signed-off-by: Joonsoo Kim
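A minimal sketch of the barrier pairing described above (flag and field
names taken from this series; surrounding code simplified):

	/* writer: publish the vm_struct, then clear VM_UNLIST */
	va->vm = vm;
	smp_wmb();		/* order the setup before the flag clear */
	vm->flags &= ~VM_UNLIST;

	/* reader: only touch va->vm once VM_UNLIST is seen cleared */
	if (!(va->vm->flags & VM_UNLIST)) {
		smp_rmb();	/* pairs with the writer's smp_wmb() */
		/* ... safe to use va->vm fields here ... */
	}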
diff --git a/mm/vmalloc.c b/mm/vmalloc
e related to locking: vmlist_lock is a mutex,
but vmap_area_lock is a spinlock. This may introduce spinning overhead
while vread/vwrite() is executing. But these are debug-oriented
functions, so this overhead is not a real problem for the common case.
Signed-off-by: Joonsoo Kim
diff --git a/mm/vma
ned-off-by: Joonsoo Kim
diff --git a/arch/tile/mm/pgtable.c b/arch/tile/mm/pgtable.c
index de0de0c..862782d 100644
--- a/arch/tile/mm/pgtable.c
+++ b/arch/tile/mm/pgtable.c
@@ -592,12 +592,7 @@ void iounmap(volatile void __iomem *addr_in)
in parallel. Reuse of the virtual address is pr
that, we should make sure that
when we iterate a vmap_area_list, accessing va->vm doesn't cause a race
condition. This patch ensures that when iterating a vmap_area_list,
there is no race condition when accessing the vm_struct.
Signed-off-by: Joonsoo Kim
diff --git a/mm/vmalloc.c b/mm/v
2012/11/28 Joonsoo Kim :
> In the current implementation, we used an ARM-specific flag, that is,
> VM_ARM_STATIC_MAPPING, for distinguishing ARM-specific static mapped areas.
> The purpose of a static mapped area is to re-use the static mapped area when
> the entire physical address range of the ioremap
Hello, Andrew.
2012/12/7 Andrew Morton :
> On Fri, 7 Dec 2012 01:09:27 +0900
> Joonsoo Kim wrote:
>
>> This patchset removes vm_struct list management after initializing vmalloc.
>> Adding and removing an entry to vmlist has linear time complexity, so
>> it is ineff
2012/12/7 Andrew Morton :
> On Fri, 7 Dec 2012 01:09:27 +0900
> Joonsoo Kim wrote:
>
>> I'm not sure that "7/8: makes vmlist only for kexec" is fine.
>> Because it is related to a userspace program.
>> As far as I know, makedumpfile uses kexec's outpu
Hello, Bob.
2012/12/7 Bob Liu :
> Hi Joonsoo,
>
> On Fri, Dec 7, 2012 at 12:09 AM, Joonsoo Kim wrote:
>> This patchset removes vm_struct list management after initializing vmalloc.
>> Adding and removing an entry to vmlist has linear time complexity, so
>> it is ineff
Hello, Pekka.
2012/12/7 Pekka Enberg :
> On Thu, Dec 6, 2012 at 6:09 PM, Joonsoo Kim wrote:
>> The purpose of iterating a vmlist is to find the vm area with a specific
>> virtual address. find_vm_area() is provided for this purpose
>> and is more efficient, because it uses a rb
Hello, Vivek.
2012/12/7 Vivek Goyal :
> On Fri, Dec 07, 2012 at 10:16:55PM +0900, JoonSoo Kim wrote:
>> 2012/12/7 Andrew Morton :
>> > On Fri, 7 Dec 2012 01:09:27 +0900
>> > Joonsoo Kim wrote:
>> >
>> >> I'm not sure that "7/8: makes v
"make cscope O=. SRCARCH=arm SUBARCH=xxx"
Signed-off-by: Joonsoo Kim
---
v2: change the bash-specific '[[ ]]' test to a 'case ... in' statement.
v3: quote the patterns.
diff --git a/scripts/tags.sh b/scripts/tags.sh
index 79fdafb..8fb18d1 100755
--- a/scripts/tags.sh
+++
cuted after building the kernel.
Signed-off-by: Joonsoo Kim
---
v2: change the bash-specific '[[ ]]' test to a 'case ... in' statement.
    use the COMPILED_SOURCE env var, instead of abusing SUBARCH
v3: change [ "$COMPILED_SOURCE"="compiled" ] to [ -n "$COMPILED_SOURCE" ]
There is no implementation of bootmem_arch_preferred_node(), and
calling this function will cause a compile error.
So, remove it.
Signed-off-by: Joonsoo Kim
diff --git a/mm/bootmem.c b/mm/bootmem.c
index 434be4a..6f62c03e 100644
--- a/mm/bootmem.c
+++ b/mm/bootmem.c
@@ -589,19 +589,6 @@ static
The name of the function is no longer suitable.
Removing the function and inlining its code at each call site
makes the code more understandable.
Additionally, we shouldn't allocate from bootmem
when slab_is_available(), so directly return kmalloc*'s return value.
Signed-off-by: Joonsoo Kim
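An illustrative shape of the resulting call sites (the helper name
early_alloc() is hypothetical; the real patch open-codes this at each
caller):

	static void * __init early_alloc(unsigned long size)
	{
		/* once the slab allocator is up, bootmem must not be used */
		if (slab_is_available())
			return kzalloc(size, GFP_NOWAIT);
		return alloc_bootmem(size);
	}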
Now, there is no code for CONFIG_HAVE_ARCH_BOOTMEM.
So remove it.
Cc: Haavard Skinnemoen
Cc: Hans-Christian Egtvedt
Signed-off-by: Joonsoo Kim
diff --git a/arch/avr32/Kconfig b/arch/avr32/Kconfig
index 06e73bf..c2bbc9a 100644
--- a/arch/avr32/Kconfig
+++ b/arch/avr32/Kconfig
@@ -193,9 +193,6
jamin Herrenschmidt
Signed-off-by: Joonsoo Kim
diff --git a/arch/powerpc/platforms/cell/celleb_pci.c
b/arch/powerpc/platforms/cell/celleb_pci.c
index abc8af4..1735681 100644
--- a/arch/powerpc/platforms/cell/celleb_pci.c
+++ b/arch/powerpc/platforms/cell/celleb_pci.c
@@ -401,11 +401,11 @@
commit ea96025a ('Don't use alloc_bootmem() in init_IRQ() path')
changed alloc_bootmem() to kzalloc(),
but missed changing free_bootmem() to kfree().
So correct it.
Signed-off-by: Joonsoo Kim
diff --git a/arch/powerpc/platforms/82xx/pq2ads-pci-pic.c
b/arch/powerpc/platforms/
s to check whether this vma is for hugetlb, so correct it
according to this purpose.
Cc: Alex Shi
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: "H. Peter Anvin"
Signed-off-by: Joonsoo Kim
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 0777f04..60f926c 100644
--- a/arch/x86/mm/tlb
2012/11/3 Minchan Kim :
> Hi Joonsoo,
>
> On Sat, Nov 03, 2012 at 04:07:25AM +0900, JoonSoo Kim wrote:
>> Hello, Minchan.
>>
>> 2012/11/1 Minchan Kim :
>> > On Thu, Nov 01, 2012 at 01:56:36AM +0900, Joonsoo Kim wrote:
>> >> In current code, after f
Hi, Andrew.
2012/11/13 Andrew Morton :
> On Tue, 13 Nov 2012 01:31:55 +0900
> Joonsoo Kim wrote:
>
>> It is somehow strange that alloc_bootmem returns a virtual address
>> and free_bootmem requires a physical address.
>> Anyway, free_bootmem()'s first parameter should
2012/11/13 Minchan Kim :
> On Tue, Nov 13, 2012 at 09:30:57AM +0900, JoonSoo Kim wrote:
>> 2012/11/3 Minchan Kim :
>> > Hi Joonsoo,
>> >
>> > On Sat, Nov 03, 2012 at 04:07:25AM +0900, JoonSoo Kim wrote:
>> >> Hello, Minchan.
>> >>
>>
on v3.7-rc5.
Thanks.
Joonsoo Kim (3):
ARM: vmregion: remove vmregion code entirely
ARM: static_vm: introduce an infrastructure for static mapped area
ARM: mm: use static_vm for managing static mapped areas
arch/arm/include/asm/mach/static_vm.h | 51
arch/arm/mm/Makefile
Now, there is no user of vmregion.
So remove it.
Signed-off-by: Joonsoo Kim
diff --git a/arch/arm/mm/Makefile b/arch/arm/mm/Makefile
index 8a9c4cb..4e333fa 100644
--- a/arch/arm/mm/Makefile
+++ b/arch/arm/mm/Makefile
@@ -6,7 +6,7 @@ obj-y := dma-mapping.o extable.o
vmlist_lock. But it is preferable that they are used outside
of vmalloc.c as little as possible.
Now, I introduce an ARM-specific infrastructure for static mapped areas. In
the following patch, we will use it to resolve the above-mentioned problem.
Signed-off-by: Joonsoo Kim
diff --git a/arch/arm/include/asm
With it, we don't need to iterate over all mapped areas. Instead, we just
iterate over the static mapped areas. This helps to reduce the overhead of
finding a matched area. And the architecture dependency on the vmalloc
layer is removed, which will help the maintainability of the vmalloc layer.
Signed-off-by: Joonsoo Kim
Hi, Minchan.
2012/11/14 Minchan Kim :
> On Tue, Nov 13, 2012 at 11:12:28PM +0900, JoonSoo Kim wrote:
>> 2012/11/13 Minchan Kim :
>> > On Tue, Nov 13, 2012 at 09:30:57AM +0900, JoonSoo Kim wrote:
>> >> 2012/11/3 Minchan Kim :
>> >> > Hi Joonsoo,
>&
start address, it is a possible error situation, because we already
prepared the prev vma, rb_link and rb_parent, and these are related to the
original address.
So add WARN_ON_ONCE to see whether this situation really happens.
Signed-off-by: Joonsoo Kim
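An illustrative shape of the added check (variable names and the return
value are assumptions, not the patch's exact code):

	/* prev, rb_link and rb_parent were prepared for the original
	 * address, so a changed address here indicates a bug */
	if (WARN_ON_ONCE(addr != orig_addr))
		return -EINVAL;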
diff --git a/mm/mmap.c b/mm/mmap.c
index 2d94235..36
Hello, Russell.
Thanks for review.
2012/11/15 Russell King - ARM Linux :
> On Thu, Nov 15, 2012 at 01:55:51AM +0900, Joonsoo Kim wrote:
>> In the current implementation, we used an ARM-specific flag, that is,
>> VM_ARM_STATIC_MAPPING, for distinguishing ARM-specific static mapped areas.
&g
Hello, Sasha.
2012/12/28 Sasha Levin :
> On 12/27/2012 06:04 PM, David Rientjes wrote:
>> On Thu, 27 Dec 2012, Sasha Levin wrote:
>>
>>> That's exactly what happens with the patch. Note that in the current
>>> upstream
>>> version there are several slab checks scattered all over.
>>>
>>> In this
into panic_smp_self_stop(), which prevents the system from restarting.
To avoid a second panic, skip reboot_fixups in the early boot phase.
This makes panic_timeout work in the early boot phase.
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: "H. Peter Anvin"
Signed-off-by: Joonsoo Kim
diff --git a/
. When destroying it, it hits
refcount = 0, then kmem_cache_close() is executed and an error message is
printed.
This patch assigns an initial refcount of 1 to the kmalloc_caches, fixing
this erroneous situation.
Cc: # v3.7
Cc: Christoph Lameter
Reported-by: Paul Hargrove
Signed-off-by: Joonsoo Kim
diff --gi
2012/12/26 Joonsoo Kim :
> commit cce89f4f6911286500cf7be0363f46c9b0a12ce0 ('Move kmem_cache
> refcounting to common code') moved some refcount manipulation code to
> common code. Unfortunately, it also removed the refcount assignment for
> kmalloc_caches. So, kmalloc_caches
On Tue, Mar 26, 2013 at 03:01:34PM +0900, Joonsoo Kim wrote:
> Commit 88b8dac0 makes load_balance() consider other cpus in its group.
> But, there are some missing parts for this feature to work properly.
> This patchset correct these things and make load_balance() robust.
>
> Oth
On Mon, Mar 25, 2013 at 01:11:08PM +0900, Joonsoo Kim wrote:
> Currently, ARM uses the traditional 'bootmem' allocator. It uses a bitmap
> for managing memory space, so it initializes a bitmap as the first step.
> This is a needless overhead if we use 'nobootmem'. 'nobootmem'
Currently, we do the memset() before reserving the area.
This may not cause any problem, but it is somewhat weird.
So change the execution order.
Signed-off-by: Joonsoo Kim
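A sketch of the order after the change (simplified; the helper name
alloc_and_clear() is hypothetical, but memblock_find_in_range(),
memblock_reserve() and phys_to_virt() are the real interfaces):

	static void * __init alloc_and_clear(phys_addr_t size, phys_addr_t align)
	{
		phys_addr_t addr;
		void *ptr;

		addr = memblock_find_in_range(0, MEMBLOCK_ALLOC_ACCESSIBLE,
					      size, align);
		if (!addr)
			return NULL;
		memblock_reserve(addr, size);	/* reserve first ... */
		ptr = phys_to_virt(addr);
		memset(ptr, 0, size);		/* ... then clear */
		return ptr;
	}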
diff --git a/mm/nobootmem.c b/mm/nobootmem.c
index a31be7a..bdd3fa2 100644
--- a/mm/nobootmem.c
+++ b/mm/nobootmem.c
@@ -45,9
Remove the unused argument and make the function static,
because there is no user outside of nobootmem.c.
Signed-off-by: Joonsoo Kim
diff --git a/include/linux/bootmem.h b/include/linux/bootmem.h
index cdc3bab..5f0b0e1 100644
--- a/include/linux/bootmem.h
+++ b/include/linux/bootmem.h
@@ -44,7 +44,6
We can get the virtual address without the virtual field.
So remove it.
Signed-off-by: Joonsoo Kim
diff --git a/mm/highmem.c b/mm/highmem.c
index b32b70c..8f4c250 100644
--- a/mm/highmem.c
+++ b/mm/highmem.c
@@ -320,7 +320,6 @@ EXPORT_SYMBOL(kunmap_high);
*/
struct page_address_map {
struct
* Before *
   text    data     bss     dec     hex filename
  34729    1309     640   36678    8f46 mm/page_alloc.o
* After *
   text    data     bss     dec     hex filename
  34315    1285     640   36240    8d90 mm/page_alloc.o
Signed-off-by: Joonsoo Kim
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index a82238
pages may be helpful for cache usage.
Signed-off-by: Joonsoo Kim
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index b212554..2632131 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -633,53 +633,71 @@ static inline int free_pages_check(struct page *page)
static void free_pcppages_bulk
There is just one code flow whichever of the two for-loops finds a proper
area, so we don't need to keep this logic inside the for-loops. Clean up
the code to make what it does easy to understand. It is preparation for the
following patch.
Signed-off-by: Joonsoo Kim
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 8fcced7..a822389 1
cur_ld_moved is reset if env.flags hits LBF_NEED_BREAK.
So there is a possibility that we miss doing resched_cpu().
Correct it by moving resched_cpu()
before the LBF_NEED_BREAK check.
Acked-by: Peter Zijlstra
Tested-by: Jason Low
Signed-off-by: Joonsoo Kim
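A sketch of the reordering (simplified from load_balance(); surrounding
logic omitted):

	if (cur_ld_moved && env.dst_cpu != smp_processor_id())
		resched_cpu(env.dst_cpu);	/* do this first ... */

	if (env.flags & LBF_NEED_BREAK) {
		env.flags &= ~LBF_NEED_BREAK;	/* ... because the retry */
		goto more_balance;		/* path resets cur_ld_moved */
	}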
diff --git a/kernel/sched
After commit 88b8dac0, the dst-cpu can be changed in load_balance(),
so we can't know the cpu_idle_type of the dst-cpu when load_balance()
returns positive. So, add explicit cpu_idle_type checking.
Cc: Srivatsa Vaddagiri
Acked-by: Peter Zijlstra
Tested-by: Jason Low
Signed-off-by: Joonsoo Kim
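A sketch of the explicit check (simplified from rebalance_domains()):

	if (load_balance(cpu, rq, sd, idle, &balance)) {
		/*
		 * load_balance() may have moved tasks to a different
		 * dst cpu, so 'idle' can be stale: recompute it.
		 */
		idle = idle_cpu(cpu) ? CPU_IDLE : CPU_NOT_IDLE;
	}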
rrectly.
Acked-by: Peter Zijlstra
Tested-by: Jason Low
Signed-off-by: Joonsoo Kim
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index dfa92b7..b8ef321 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3896,10 +3896,14 @@ int can_migrate_task(struct task_struct *p, stru
This name doesn't convey any specific meaning.
So rename it to imply its purpose.
Acked-by: Peter Zijlstra
Tested-by: Jason Low
Signed-off-by: Joonsoo Kim
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index ee8c1bd..cb49b2a 100644
--- a/kernel/sched/core.c
+++ b/kernel/sc
not use one more cpumask; use env's cpus to prevent re-selecting
Joonsoo Kim (6):
sched: change position of resched_cpu() in load_balance()
sched: explicitly cpu_idle_type checking in rebalance_domains()
sched: don't consider other cpus in our group in case of NEWLY_IDLE
sched:
ijlstra
Tested-by: Jason Low
Signed-off-by: Joonsoo Kim
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 5b1e966..acaf567 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3905,7 +3905,7 @@ int can_migrate_task(struct task_struct *p, struct lb_env
*env)
Cc: Srivatsa Vaddagiri
Acked-by: Peter Zijlstra
Tested-by: Jason Low
Signed-off-by: Joonsoo Kim
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 726e129..dfa92b7 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5026,8 +5026,21 @@ static int load_balance(i
On Mon, Apr 22, 2013 at 01:07:07PM -0700, Davidlohr Bueso wrote:
> On Mon, 2013-04-22 at 14:01 +0200, Peter Zijlstra wrote:
> > On Mon, 2013-04-22 at 17:01 +0900, Joonsoo Kim wrote:
> >
> > > Hello, Ingo and Peter.
> > >
> > > Just ping for this patchset
On Mon, Apr 22, 2013 at 02:01:18PM +0200, Peter Zijlstra wrote:
> On Mon, 2013-04-22 at 17:01 +0900, Joonsoo Kim wrote:
>
> > Hello, Ingo and Peter.
> >
> > Just ping for this patchset.
>
> The patches look fine -- except a few cosmetic changes. I'm fine with
2013/6/4 Christoph Lameter :
> On Tue, 4 Jun 2013, JoonSoo Kim wrote:
>
>> And I re-read Steven initial problem report in RT kernel and find that
>> unfreeze_partial() do lock and unlock several times. This means that
>> each page in cpu partial list doesn't come fro
Currently, there is no way to quit after a specified time.
We are used to using 'sleep N' as the command argument when we need this,
but explicitly supporting the feature may make sense.
Cc: Namhyung Kim
Signed-off-by: Joonsoo Kim
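For reference, the 'sleep N' workaround mentioned above looks like this:

	$ perf record -a -- sleep 10	# system-wide profile, stops after 10s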
diff --git a/tools/perf/builtin-record.c b/tools/pe
Currently, lib lk doesn't use the CROSS_COMPILE environment variable,
so cross builds always fail. This is a quick fix for the problem.
Cc: Namhyung Kim
Signed-off-by: Joonsoo Kim
diff --git a/tools/lib/lk/Makefile b/tools/lib/lk/Makefile
index 926cbf3..91e5174 100644
--- a/tools/l
missed to
move termios.h header.
Cc: David Ahern
Cc: Namhyung Kim
Signed-off-by: Joonsoo Kim
diff --git a/tools/perf/util/util.h b/tools/perf/util/util.h
index a45710b..f2c6483 100644
--- a/tools/perf/util/util.h
+++ b/tools/perf/util/util.h
@@ -72,6 +72,7 @@
#include "types.h"
#incl
In the free path, we don't check the number of cpu_partial, so a slab can
be linked into the cpu partial list even if cpu_partial is 0. To prevent
this, we should check cpu_partial in put_cpu_partial().
Signed-off-by: Joonsoo Kim
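A minimal sketch of the guard (the helper unfreeze_and_free() is
hypothetical; the existing put_cpu_partial() body is elided):

	static void put_cpu_partial(struct kmem_cache *s, struct page *page,
				    int drain)
	{
		if (!s->cpu_partial) {
			/* bypass the per-cpu partial list entirely */
			unfreeze_and_free(s, page);
			return;
		}
		/* ... existing code linking the slab into the list ... */
	}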
diff --git a/mm/slub.c b/mm/slub.c
index 57707f0..7033b4f 1
On Wed, Jun 19, 2013 at 04:00:32PM +0800, Wanpeng Li wrote:
> On Wed, Jun 19, 2013 at 03:33:55PM +0900, Joonsoo Kim wrote:
> >In the free path, we don't check the number of cpu_partial, so a slab can
> >be linked into the cpu partial list even if cpu_partial is 0. To prevent this,
> >
On Thu, Jun 20, 2013 at 08:26:03AM +0800, Wanpeng Li wrote:
> On Wed, Jun 19, 2013 at 05:52:50PM +0900, Joonsoo Kim wrote:
> >On Wed, Jun 19, 2013 at 04:00:32PM +0800, Wanpeng Li wrote:
> >> On Wed, Jun 19, 2013 at 03:33:55PM +0900, Joonsoo Kim wrote:
> >> >In free
On Wed, Jun 19, 2013 at 09:16:20PM +0900, Namhyung Kim wrote:
> Hi Ingo,
>
> On Wed, Jun 19, 2013 at 8:33 PM, Ingo Molnar wrote:
> >
> > * Joonsoo Kim wrote:
> >
> >> Building perf for android is failed, because it can't find definition of
> >&g
Hello, Christoph.
2013/5/29 Christoph Lameter :
> I just ran some quick tests and the following seems to work.
>
> Critical portions that need additional review (Joonsoo?) are the
> modifications to get_partial_node() and __slab_free().
IMO, your code works fine.
But this modification adds
en
> remove the if test here.
>
> Signed-off-by: Zhang Yanfei
For all three patches,
Acked-by: Joonsoo Kim
> ---
> mm/vmalloc.c | 11 +--
> 1 files changed, 1 insertions(+), 10 deletions(-)
>
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index d365724..6580c76
On Mon, Jul 01, 2013 at 10:14:45AM -0400, Santosh Shilimkar wrote:
> Joonsoo,
>
> On Monday 25 March 2013 12:11 AM, Joonsoo Kim wrote:
> > nobootmem uses max_low_pfn for computing the boundary in free_all_bootmem().
> > So we need a proper value for max_low_pfn.
> >
> &
Signed-off-by: Joonsoo Kim
diff --git a/mm/readahead.c b/mm/readahead.c
index daed28d..3932f28 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -166,6 +166,8 @@ __do_page_cache_readahead(struct address_space *mapping,
struct file *filp,
goto out;
end_index = ((isize
Signed-off-by: Joonsoo Kim
diff --git a/include/linux/radix-tree.h b/include/linux/radix-tree.h
index ffc444c..045b325 100644
--- a/include/linux/radix-tree.h
+++ b/include/linux/radix-tree.h
@@ -230,6 +230,10 @@ unsigned long radix_tree_next_hole(struct radix_tree_root
*root
inion. I don't have any trouble with the
current allocator; however, I think that we will need this feature soon,
because device I/O is getting faster rapidly and the allocator should
catch up with this speed.
Thanks.
Joonsoo Kim (5):
mm, page_alloc: support multiple pages allocation
mm, page_alloc:
allocate multiple pages
in the first attempt (fast path). I think that multiple page allocation
is not valid for the slow path, so the current implementation considers
just the fast path.
Signed-off-by: Joonsoo Kim
diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index 0f615eb..8bfa87b 100644
--- a/include/linux
Signed-off-by: Joonsoo Kim
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index e3dea75..eb1472c 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -217,28 +217,33 @@ static inline void page_unfreeze_refs(struct page *page,
int count)
}
#ifdef CONFIG_NUMA
Signed-off-by: Joonsoo Kim
diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index 8bfa87b..f8cde28 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -327,6 +327,16 @@ static inline struct page *alloc_pages_exact_node(int nid,
gfp_t gfp_mask,
return __alloc_pages
On Thu, Jul 04, 2013 at 12:01:43AM +0800, Zhang Yanfei wrote:
> On 07/03/2013 11:51 PM, Zhang Yanfei wrote:
> > On 07/03/2013 11:28 PM, Michal Hocko wrote:
> >> On Wed 03-07-13 17:34:15, Joonsoo Kim wrote:
> >> [...]
> >>> For one page allocation at once, th
On Wed, Jul 03, 2013 at 03:57:45PM +, Christoph Lameter wrote:
> On Wed, 3 Jul 2013, Joonsoo Kim wrote:
>
> > @@ -298,13 +298,15 @@ static inline void arch_alloc_page(struct page *page,
> > int order) { }
> >
> > struct page *
> > __alloc_pages_nodemas
On Fri, Jun 28, 2013 at 05:27:24PM -0700, Sukadev Bhattiprolu wrote:
> Joonsoo Kim [iamjoonsoo@lge.com] wrote:
> | Currently, there is no method to quit at specified time later.
> | We are used to using 'sleep N' as command argument if we need it,
> | but explicitly s
We cannot use the ktime_get() API even if we include ktime.h, because there
is no declaration of this API in ktime.h. So add it.
Signed-off-by: Joonsoo Kim
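The kind of declaration being added (sketch):

	extern ktime_t ktime_get(void);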
diff --git a/include/linux/ktime.h b/include/linux/ktime.h
index bbca128..29954cd 100644
--- a/include/linux/ktime.h
+++ b/include/linux
On Mon, Jul 01, 2013 at 10:20:46AM +0200, Thomas Gleixner wrote:
> On Mon, 1 Jul 2013, Joonsoo Kim wrote:
>
> > We cannot use ktime_get() API even if we include ktime.h, because there is
> > no declaration of this API in ktime.h. So add it.
>
> It's declared in hr
I did some tests on my Android device and it worked. :)
Feel free to give me your opinion about this patchset.
This patchset is based on v3.9-rc4.
Thanks.
Joonsoo Kim (6):
ARM, TCM: initialize TCM in paging_init(), instead of setup_arch()
ARM, crashkernel: use ___alloc_bootmem_node_nopanic
arm_bootmem_init() initializes a bitmap for bootmem, and
this is not needed with CONFIG_NO_BOOTMEM.
So skip it when CONFIG_NO_BOOTMEM is set.
Signed-off-by: Joonsoo Kim
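The shape of the skip described above (a sketch; the exact call site in
bootmem_init() may differ):

#ifndef CONFIG_NO_BOOTMEM
	/* the bootmem bitmap is only needed by the old allocator */
	arm_bootmem_init(min, max_low);
#endif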
diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
index ad722f1..049414a 100644
--- a/arch/arm/mm/init.c
+++ b/arch/arm/mm/init.c
Signed-off-by: Joonsoo Kim
diff --git a/arch/arm/kernel/setup.c b/arch/arm/kernel/setup.c
index b3990a3..99ffe87 100644
--- a/arch/arm/kernel/setup.c
+++ b/arch/arm/kernel/setup.c
@@ -674,15 +674,20 @@ static void __init reserve_crashkernel(void)
{
unsigned long long crash_size, crash_base
otmem.
Signed-off-by: Joonsoo Kim
diff --git a/arch/arm/kernel/setup.c b/arch/arm/kernel/setup.c
index 3f6cbb2..b3990a3 100644
--- a/arch/arm/kernel/setup.c
+++ b/arch/arm/kernel/setup.c
@@ -56,7 +56,6 @@
#include
#include "atags.h"
-#include "tcm.h"
#if defined(CON
There are some platforms which have highmem, so this equation
doesn't represent the total_mem size properly.
In addition, max_low_pfn's meaning is different on other architectures and
it is scheduled to be changed, so remove the code related to max_low_pfn.
Signed-off-by: Joonsoo Kim
diff --
lowmem pfn,
so this patch may not harm anything.
Signed-off-by: Joonsoo Kim
diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
index 049414a..873f4ca 100644
--- a/arch/arm/mm/init.c
+++ b/arch/arm/mm/init.c
@@ -423,12 +423,10 @@ void __init bootmem_init(void)
* This doesn't seem
ootmem, it actually gives us a PAGE_SIZE area.
nobootmem manages memory in byte units, so there is no waste.
Signed-off-by: Joonsoo Kim
diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index 13b7394..8b73417 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -58,6 +58,7 @@ config ARM
/6]: don't include code to evaluate the load value in can_migrate_task()
[5/6]: rename load_balance_tmpmask to load_balance_mask
[6/6]: don't use one more cpumask; use env's cpus to prevent re-selecting
Joonsoo Kim (6):
sched: change position of resched_cpu() in load_balance()
sched
cur_ld_moved is reset if env.flags hits LBF_NEED_BREAK.
So there is a possibility that we miss doing resched_cpu().
Correct it by moving resched_cpu()
before the LBF_NEED_BREAK check.
Acked-by: Peter Zijlstra
Signed-off-by: Joonsoo Kim
diff --git a/kernel/sched/fair.c b/kernel/sched
Cc: Srivatsa Vaddagiri
Acked-by: Peter Zijlstra
Signed-off-by: Joonsoo Kim
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 9d693d0..3f8c4f2 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5007,8 +5007,17 @@ static int load_balance(int this_cpu, struct rq *this_rq,
After commit 88b8dac0, the dst-cpu can be changed in load_balance(),
so we can't know the cpu_idle_type of the dst-cpu when load_balance()
returns positive. So, add explicit cpu_idle_type checking.
Cc: Srivatsa Vaddagiri
Signed-off-by: Joonsoo Kim
diff --git a/kernel/sched/fair.c b/kernel/sched/f
re-select the dst_cpu via
env's cpus, so now env's cpus are candidates not only for src_cpus,
but also for dst_cpus.
Cc: Srivatsa Vaddagiri
Signed-off-by: Joonsoo Kim
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index e3f09f4..6f238d2 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sc
This name doesn't convey any specific meaning.
So rename it to imply its purpose.
Signed-off-by: Joonsoo Kim
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 7f12624..07b4178 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -6865,7 +6865,7 @@ struct
ments correctly.
Signed-off-by: Joonsoo Kim
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 3f8c4f2..d3c6011 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3874,10 +3874,14 @@ int can_migrate_task(struct task_struct *p, struct
lb_env *env)
int ts
Hello, Peter.
2013/8/15 Peter Zijlstra :
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -5120,11 +5120,8 @@ static int load_balance(int this_cpu, st
>
> schedstat_inc(sd, lb_count[idle]);
>
> - if (!should_we_balance(&env)) {
> - *should_balance = 0;
> +
>> +static int should_we_balance(struct lb_env *env)
>> +{
>> + struct sched_group *sg = env->sd->groups;
>> + struct cpumask *sg_cpus, *sg_mask;
>> + int cpu, balance_cpu = -1;
>> +
>> + /*
>> + * In the newly idle case, we will allow all the cpu's
>> + * to do the newly
2013/8/15 Peter Zijlstra :
> On Tue, Aug 06, 2013 at 05:36:43PM +0900, Joonsoo Kim wrote:
>> There is no reason to maintain separate variables for this_group
>> and busiest_group in sd_lb_stat, except saving some space.
>> But this structure is always allocated on the stack, so
>> - if (sds->max_load < sds->avg_load) {
>> + if (busiest->avg_load < this->avg_load) {
>
> Tsk, inconsistency there.
Okay, this is my mistake.
>
>> - max_pull = min(sds->max_load - sds->avg_load, load_above_capacity);
>> + max_pull = min(busiest->avg_load - sds->sd_avg_load,
>>
2013/8/15 Andrew Morton :
> On Fri, 9 Aug 2013 18:26:18 +0900 Joonsoo Kim wrote:
>
>> Without a hugetlb_instantiation_mutex, if parallel faults occur, we can
>> fail to allocate a hugepage, because many threads dequeue a hugepage
>> to handle a fault at the same address.
.
Thanks.
Joonsoo Kim (9):
mm, hugetlb: move up the code which check availability of free huge
page
mm, hugetlb: trivial commenting fix
mm, hugetlb: clean-up alloc_huge_page()
mm, hugetlb: fix and clean-up node iteration code to alloc or free
mm, hugetlb: remove redundant list_empty
) failed: %s\n", strerror(errno));
return -1;
}
fprintf(stdout, "AFTER STEAL PRIVATE WRITE: %c\n", p[0]);
munmap(p, size);
We can see that "AFTER STEAL PRIVATE WRITE: c", not "AFTER STEAL
PRIVATE WRITE: s". If we turn off this optimization
We can unify some of the code for successful allocation.
This makes the code more readable.
There is no functional difference.
Signed-off-by: Joonsoo Kim
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index d21a33a..0067cf4 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1144,29 +1144,25 @@ static struct page
e_mask_to_[alloc|free]" and
fix and clean up the node iteration code to alloc or free.
This makes the code more understandable.
Signed-off-by: Joonsoo Kim
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 0067cf4..a838e6b 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -752,33 +752,
The name of the mutex written in the comment is wrong.
Fix it.
Signed-off-by: Joonsoo Kim
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index d87f70b..d21a33a 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -135,9 +135,9 @@ static inline struct hugepage_subpool *subpool_vma(struct
vm_area_struct *vma