Re: oops in slab/leaks_show

2014-03-11 Thread Joonsoo Kim
On Tue, Mar 11, 2014 at 11:58:11AM +0900, Joonsoo Kim wrote: > On Mon, Mar 10, 2014 at 09:24:55PM -0400, Dave Jones wrote: > > On Tue, Mar 11, 2014 at 10:01:35AM +0900, Joonsoo Kim wrote: > > > On Tue, Mar 11, 2014 at 09:35:00AM +0900, Joonsoo Kim wrote: > > > >

Re: [PATCH 2/2] mm/compaction: cleanup isolate_freepages()

2014-04-25 Thread Joonsoo Kim
> Okay. I think of a way to fix it. > > By assigning pfn(start of scanning window) to > > end_pfn(end of scanning window) for the next loop, we can solve the problem > > you mentioned. How about below? > > > > - pfn -= pageblock_nr_pages, end_pfn -= pageblock_nr_

Re: [RFC PATCH 2/3] CMA: aggressively allocate the pages on cma reserved memory when not used

2014-05-18 Thread Joonsoo Kim
On Thu, May 15, 2014 at 11:43:53AM +0900, Minchan Kim wrote: > On Thu, May 15, 2014 at 10:53:01AM +0900, Joonsoo Kim wrote: > > On Tue, May 13, 2014 at 12:00:57PM +0900, Minchan Kim wrote: > > > Hey Joonsoo, > > > > > > On Thu, May 08, 2014 at 09:32:23AM +090

Re: [RFC PATCH 0/3] Aggressively allocate the pages on cma reserved memory

2014-05-18 Thread Joonsoo Kim
On Thu, May 15, 2014 at 10:47:18AM +0100, Mel Gorman wrote: > On Thu, May 15, 2014 at 11:10:55AM +0900, Joonsoo Kim wrote: > > > That doesn't always prefer CMA region. It would be nice to > > > understand why grouping in pageblock_nr_pages is beneficial. Also in >

Re: [RFC PATCH 2/3] CMA: aggressively allocate the pages on cma reserved memory when not used

2014-05-18 Thread Joonsoo Kim
On Sun, May 18, 2014 at 11:06:08PM +0530, Aneesh Kumar K.V wrote: > Joonsoo Kim writes: > > > On Wed, May 14, 2014 at 02:12:19PM +0530, Aneesh Kumar K.V wrote: > >> Joonsoo Kim writes: > >> > >> > >> > >> Another issue i am facing wi

Re: [RFC PATCH 2/3] CMA: aggressively allocate the pages on cma reserved memory when not used

2014-05-18 Thread Joonsoo Kim
On Mon, May 19, 2014 at 11:53:05AM +0900, Minchan Kim wrote: > On Mon, May 19, 2014 at 11:11:21AM +0900, Joonsoo Kim wrote: > > On Thu, May 15, 2014 at 11:43:53AM +0900, Minchan Kim wrote: > > > On Thu, May 15, 2014 at 10:53:01AM +0900, Joonsoo Kim wrote: > > > > On T

Re: [RFC][PATCH] CMA: drivers/base/Kconfig: restrict CMA size to non-zero value

2014-05-18 Thread Joonsoo Kim
On Mon, May 19, 2014 at 10:47:12AM +0900, Gioh Kim wrote: > Thank you for your advice. I didn't notice it. > > I'm adding followings according to your advice: > > - range restrict for CMA_SIZE_MBYTES and *CMA_SIZE_PERCENTAGE* > I think this can prevent the wrong kernel option. > > - change size_
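The range restriction discussed in this reply can be expressed with Kconfig's `range` attribute. A minimal sketch — the symbol names CMA_SIZE_MBYTES and CMA_SIZE_PERCENTAGE follow the thread, but the bounds, defaults, and help text here are assumptions for illustration, not the actual patch:

```kconfig
config CMA_SIZE_MBYTES
	int "Size in Mega Bytes"
	depends on !CMA_SIZE_SEL_PERCENTAGE
	range 1 1024
	default 16
	help
	  A lower bound of 1 rejects a zero-sized CMA area at
	  configuration time instead of failing at runtime.

config CMA_SIZE_PERCENTAGE
	int "Percentage of total memory"
	depends on !CMA_SIZE_SEL_MBYTES
	range 1 100
	default 10
```

With `range`, kconfig refuses out-of-bounds values during `make menuconfig`/`oldconfig`, which is the "prevent the wrong kernel option" behaviour the reply describes.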

Re: [RFC PATCH 2/3] CMA: aggressively allocate the pages on cma reserved memory when not used

2014-05-19 Thread Joonsoo Kim
On Tue, May 20, 2014 at 08:18:59AM +0900, Minchan Kim wrote: > On Mon, May 19, 2014 at 01:50:01PM +0900, Joonsoo Kim wrote: > > On Mon, May 19, 2014 at 11:53:05AM +0900, Minchan Kim wrote: > > > On Mon, May 19, 2014 at 11:11:21AM +0900, Joonsoo Kim wrote: > > > > On T

Re: [RFC PATCH] arm: dma-mapping: fallback allocation for cma failure

2014-05-19 Thread Joonsoo Kim
On Tue, May 20, 2014 at 02:57:47PM +0900, Gioh Kim wrote: > > Thanks for your advice, Michal Nazarewicz. > > Having discussed with Joonsoo, I'm adding a fallback allocation after > __alloc_from_contiguous(). > The fallback allocation works if the CMA kernel option is turned on but CMA size > is zero.

Re: [RFC PATCH] arm: dma-mapping: fallback allocation for cma failure

2014-05-20 Thread Joonsoo Kim
On Tue, May 20, 2014 at 04:05:52PM +0900, Gioh Kim wrote: > That case, device-specific coherent memory allocation, is handled at > dma_alloc_coherent in arm_dma_alloc. > __dma_alloc handles only general coherent memory allocation. > > I'm sorry missing mention about it. > Hello, AFAIK, *cohere

[PATCH v2 2/5] mm/compaction: do not call suitable_migration_target() on every page

2014-02-13 Thread Joonsoo Kim
a for highorder is pageblock order. So calling it once within pageblock range has no problem. Signed-off-by: Joonsoo Kim diff --git a/mm/compaction.c b/mm/compaction.c index bbe1260..0d821a2 100644 --- a/mm/compaction.c +++ b/mm/compaction.c @@ -245,6 +245,7 @@ static unsigned

[PATCH v2 5/5] mm/compaction: clean-up code on success of balloon isolation

2014-02-13 Thread Joonsoo Kim
It is just for clean-up to reduce code size and improve readability. There is no functional change. Acked-by: Vlastimil Babka Signed-off-by: Joonsoo Kim diff --git a/mm/compaction.c b/mm/compaction.c index 56536d3..a1a9270 100644 --- a/mm/compaction.c +++ b/mm/compaction.c @@ -553,11 +553,7

[PATCH v2 4/5] mm/compaction: check pageblock suitability once per pageblock

2014-02-13 Thread Joonsoo Kim
] Acked-by: Vlastimil Babka Signed-off-by: Joonsoo Kim diff --git a/mm/compaction.c b/mm/compaction.c index b1ba297..56536d3 100644 --- a/mm/compaction.c +++ b/mm/compaction.c @@ -520,26 +520,31 @@ isolate_migratepages_range(struct zone *zone, struct compact_control *cc, /* If

[PATCH v2 1/5] mm/compaction: disallow high-order page for migration target

2014-02-13 Thread Joonsoo Kim
. Additionally, clean-up logic in suitable_migration_target() to simplify the code. There are no functional changes from this clean-up. Acked-by: Vlastimil Babka Signed-off-by: Joonsoo Kim diff --git a/mm/compaction.c b/mm/compaction.c index 3a91a2e..bbe1260 100644 --- a/mm/compaction.c +++ b/mm

[PATCH v2 3/5] mm/compaction: change the timing to check to drop the spinlock

2014-02-13 Thread Joonsoo Kim
isolating, retry to acquire the lock. I think that it is better to use the SWAP_CLUSTER_MAX-th pfn for checking the criteria about dropping the lock. This does no harm to the 0x0 pfn, because, at this time, the locked variable would be false. Acked-by: Vlastimil Babka Signed-off-by: Joonsoo Kim diff --git a/mm

[PATCH v2 0/5] compaction related commits

2014-02-13 Thread Joonsoo Kim
12554 110868637 Compaction cost 2469 1998 Joonsoo Kim (5): mm/compaction: disallow high-order page for migration target mm/compaction: do not call suitable_migration_target() on every page mm/compaction: change the timing to check to drop th

[PATCH 1/9] slab: add unlikely macro to help compiler

2014-02-13 Thread Joonsoo Kim
slab_should_failslab() is called on every allocation, so optimizing it is reasonable. We normally don't allocate from kmem_cache; it is just used when a new kmem_cache is created, so it's a very rare case. Therefore, add the unlikely macro to help compiler optimization. Signed-off-by: Joonsoo

[PATCH 4/9] slab: defer slab_destroy in free_block()

2014-02-13 Thread Joonsoo Kim
performance effect of this, but it'd be better not to hold the lock longer than necessary. Signed-off-by: Joonsoo Kim diff --git a/mm/slab.c b/mm/slab.c index 53d1a36..551d503 100644 --- a/mm/slab.c +++ b/mm/slab.c @@ -242,7 +242,8 @@ static struct kmem_cache_node __initdata init_kmem_cache

[PATCH 2/9] slab: makes clear_obj_pfmemalloc() just return store masked value

2014-02-13 Thread Joonsoo Kim
ff-by: Joonsoo Kim diff --git a/mm/slab.c b/mm/slab.c index 5906f8f..6d17cad 100644 --- a/mm/slab.c +++ b/mm/slab.c @@ -215,9 +215,9 @@ static inline void set_obj_pfmemalloc(void **objp) return; } -static inline void clear_obj_pfmemalloc(void **objp) +static inline void *clear_obj_pfmem

[PATCH 9/9] slab: remove a useless lockdep annotation

2014-02-13 Thread Joonsoo Kim
Now, there is no code that holds two locks simultaneously, since we don't call slab_destroy() while holding any lock. So, the lockdep annotation is useless now. Remove it. Signed-off-by: Joonsoo Kim diff --git a/mm/slab.c b/mm/slab.c index 9c9d4d4..f723a72 100644 --- a/mm/slab.c +++ b/mm/s

[PATCH 8/9] slab: destroy a slab without holding any alien cache lock

2014-02-13 Thread Joonsoo Kim
I haven't heard that this alien cache lock is contended, but reducing the chance of contention is generally better. And with this change, we can simplify the complex lockdep annotation in slab code. In the following patch, it will be implemented. Signed-off-by: Joonsoo Kim diff --git a/mm/s

[PATCH 7/9] slab: use the lock on alien_cache, instead of the lock on array_cache

2014-02-13 Thread Joonsoo Kim
Now, we have separate alien_cache structure, so it'd be better to hold the lock on alien_cache while manipulating alien_cache. After that, we don't need the lock on array_cache, so remove it. Signed-off-by: Joonsoo Kim diff --git a/mm/slab.c b/mm/slab.c index c048ac5..ec1df4c 10064

[PATCH 6/9] slab: introduce alien_cache

2014-02-13 Thread Joonsoo Kim
d, so removing it would be better. This patch prepares for it by introducing alien_cache and using it. In the following patch, we remove the spinlock in array_cache. Signed-off-by: Joonsoo Kim diff --git a/mm/slab.c b/mm/slab.c index 90bfd79..c048ac5 100644 --- a/mm/slab.c +++ b/mm/slab.c @@ -203,6 +2

[PATCH 0/9] clean-up and remove lockdep annotation in SLAB

2014-02-13 Thread Joonsoo Kim
tation. As short stat noted, this makes SLAB code much simpler. This patchset is based on slab/next branch on Pekka's git tree. git://git.kernel.org/pub/scm/linux/kernel/git/penberg/linux.git Thanks. Joonsoo Kim (9): slab: add unlikely macro to help compiler slab: makes clear_obj_pfmemallo

[PATCH 5/9] slab: factor out initialization of array cache

2014-02-13 Thread Joonsoo Kim
Factor out initialization of the array cache to use it in a following patch. Signed-off-by: Joonsoo Kim diff --git a/mm/slab.c b/mm/slab.c index 551d503..90bfd79 100644 --- a/mm/slab.c +++ b/mm/slab.c @@ -741,13 +741,8 @@ static void start_cpu_timer(int cpu) } } -static struct array_cache

[PATCH 3/9] slab: move up code to get kmem_cache_node in free_block()

2014-02-13 Thread Joonsoo Kim
node isn't changed, so we don't need to retrieve this structure every time we move the object. Maybe the compiler does this optimization, but making it explicit is better. Signed-off-by: Joonsoo Kim diff --git a/mm/slab.c b/mm/slab.c index 6d17cad..53d1a36 100644 --- a/mm/slab.c +++ b

Re: [PATCH 2/9] slab: makes clear_obj_pfmemalloc() just return store masked value

2014-02-16 Thread Joonsoo Kim
On Fri, Feb 14, 2014 at 06:26:15PM -0600, Christoph Lameter wrote: > On Fri, 14 Feb 2014, David Rientjes wrote: > > > Yeah, you don't need it, but don't you think it makes the code more > > readable? Otherwise this is going to be just doing > > > > return (unsigned long)objp & ~SLAB_OBJ_PFMEM

Re: [PATCH 3/9] slab: move up code to get kmem_cache_node in free_block()

2014-02-16 Thread Joonsoo Kim
On Fri, Feb 14, 2014 at 03:19:02PM -0800, David Rientjes wrote: > On Fri, 14 Feb 2014, Joonsoo Kim wrote: > > > node isn't changed, so we don't need to retreive this structure > > everytime we move the object. Maybe compiler do this optimization, > >

Re: [PATCH 9/9] slab: remove a useless lockdep annotation

2014-02-16 Thread Joonsoo Kim
On Fri, Feb 14, 2014 at 12:49:57PM -0600, Christoph Lameter wrote: > On Fri, 14 Feb 2014, Joonsoo Kim wrote: > > > @@ -921,7 +784,7 @@ static int transfer_objects(struct array_cache *to, > > static inline struct alien_cache **alloc_al

Re: [patch] mm, thp: do not perform sync compaction on pagefault

2014-04-30 Thread Joonsoo Kim
2014-05-01 9:45 GMT+09:00 David Rientjes : > Synchronous memory compaction can be very expensive: it can iterate an > enormous > amount of memory without aborting and it can wait on page locks and writeback > to > complete if a pageblock cannot be defragmented. > Unfortunately, it's too expensive

Re: [RFC PATCH 2/3] CMA: aggressively allocate the pages on cma reserved memory when not used

2014-05-14 Thread Joonsoo Kim
On Tue, May 13, 2014 at 12:00:57PM +0900, Minchan Kim wrote: > Hey Joonsoo, > > On Thu, May 08, 2014 at 09:32:23AM +0900, Joonsoo Kim wrote: > > CMA is introduced to provide physically contiguous pages at runtime. > > For this purpose, it reserves memory at boot time

Re: [RFC PATCH 2/3] CMA: aggressively allocate the pages on cma reserved memory when not used

2014-05-14 Thread Joonsoo Kim
On Wed, May 14, 2014 at 02:12:19PM +0530, Aneesh Kumar K.V wrote: > Joonsoo Kim writes: > > > CMA is introduced to provide physically contiguous pages at runtime. > > For this purpose, it reserves memory at boot time. Although it reserve > > memory, this reserved memory

Re: [RFC PATCH 0/3] Aggressively allocate the pages on cma reserved memory

2014-05-14 Thread Joonsoo Kim
On Wed, May 14, 2014 at 03:14:30PM +0530, Aneesh Kumar K.V wrote: > Joonsoo Kim writes: > > > On Fri, May 09, 2014 at 02:39:20PM +0200, Marek Szyprowski wrote: > >> Hello, > >> > >> On 2014-05-08 02:32, Joonsoo Kim wrote: > >> >This series trie

Re: [PATCH] mm, compaction: properly signal and act upon lock and need_sched() contention

2014-05-14 Thread Joonsoo Kim
On Tue, May 13, 2014 at 10:54:58AM +0200, Vlastimil Babka wrote: > On 05/13/2014 02:44 AM, Joonsoo Kim wrote: > >On Mon, May 12, 2014 at 04:15:11PM +0200, Vlastimil Babka wrote: > >>Compaction uses compact_checklock_irqsave() function to periodically check > >>

Re: [PATCH] mm, compaction: properly signal and act upon lock and need_sched() contention

2014-05-12 Thread Joonsoo Kim
e of aborting on contention, and might result in pageblocks not > being scanned completely, since the scanning cursor is advanced. This patch > makes isolate_freepages_block() check the cc->contended flag and abort. > > Reported-by: Joonsoo Kim > Signed-off-by: Vlas

Re: [RFC PATCH 2/3] CMA: aggressively allocate the pages on cma reserved memory when not used

2014-05-12 Thread Joonsoo Kim
On Mon, May 12, 2014 at 10:04:29AM -0700, Laura Abbott wrote: > Hi, > > On 5/7/2014 5:32 PM, Joonsoo Kim wrote: > > CMA is introduced to provide physically contiguous pages at runtime. > > For this purpose, it reserves memory at boot time. Although it reserve > > memor

Re: [PATCH v2 2/2] mm/compaction: avoid rescanning pageblocks in isolate_freepages

2014-05-12 Thread Joonsoo Kim
On Mon, May 12, 2014 at 11:09:25AM +0200, Vlastimil Babka wrote: > On 05/08/2014 07:28 AM, Joonsoo Kim wrote: > >On Wed, May 07, 2014 at 02:09:10PM +0200, Vlastimil Babka wrote: > >>The compaction free scanner in isolate_freepages() currently remembers PFN > >>of >

Re: [PATCH 2/2] mm/page_alloc: DEBUG_VM checks for free_list placement of CMA and RESERVE pages

2014-05-12 Thread Joonsoo Kim
On Mon, May 12, 2014 at 10:28:25AM +0200, Vlastimil Babka wrote: > On 05/08/2014 07:54 AM, Joonsoo Kim wrote: > >On Wed, May 07, 2014 at 04:59:07PM +0200, Vlastimil Babka wrote: > >>On 05/07/2014 03:33 AM, Minchan Kim wrote: > >>>On Mon, May 05, 2014 at 05:50:46P

Re: [PATCH 2/2] mm/page_alloc: DEBUG_VM checks for free_list placement of CMA and RESERVE pages

2014-05-12 Thread Joonsoo Kim
On Thu, May 08, 2014 at 03:34:33PM -0700, Andrew Morton wrote: > On Thu, 8 May 2014 15:19:37 +0900 Minchan Kim wrote: > > > > I also think that VM_DEBUG overhead isn't problem because of same > > > reason from Vlastimil. > > > > Guys, please read this. > > > > https://lkml.org/lkml/2013/7/17/59

Re: [RFC PATCH 0/3] Aggressively allocate the pages on cma reserved memory

2014-05-12 Thread Joonsoo Kim
On Fri, May 09, 2014 at 02:39:20PM +0200, Marek Szyprowski wrote: > Hello, > > On 2014-05-08 02:32, Joonsoo Kim wrote: > >This series tries to improve CMA. > > > >CMA is introduced to provide physically contiguous pages at runtime > >without reserving memory

Re: [3.15-rc1 slab] Oops when reading /proc/slab_allocators

2014-04-10 Thread Joonsoo Kim
On Wed, Apr 09, 2014 at 08:36:10PM +0900, Tetsuo Handa wrote: > Hello. > > I found that > > $ cat /proc/slab_allocators > > causes an oops. > > -- dmesg start -- > [ 22.719620] BUG: unable to handle kernel paging request at 8800389b7ff8 > [ 22.719742] IP: [] handle_sla

[PATCH] slab: fix oops when reading /proc/slab_allocators

2014-04-15 Thread Joonsoo Kim
ig problem. Reported-by: Dave Jones Reported-by: Tetsuo Handa Signed-off-by: Joonsoo Kim --- This patch is based on v3.15-rc1. diff --git a/mm/slab.c b/mm/slab.c index 388cb1a..101eae4 100644 --- a/mm/slab.c +++ b/mm/slab.c @@ -386,6 +386,41 @@ static void **dbg_userword(struct kmem_ca

Re: [PATCH] slab: fix oops when reading /proc/slab_allocators

2014-04-15 Thread Joonsoo Kim
On Wed, Apr 16, 2014 at 08:45:11AM +0900, Joonsoo Kim wrote: > commit 'b1cb098: change the management method of free objects of the slab' > introduces a bug in the slab leak detector ('/proc/slab_allocators'). This > detector works as the following description. > >

Re: oops in slab/leaks_show

2014-04-15 Thread Joonsoo Kim
On Fri, Apr 11, 2014 at 10:36:27AM +0300, Pekka Enberg wrote: > On 03/11/2014 10:30 AM, Joonsoo Kim wrote: > >-8<- > > From ff6fe77fb764ca5bf8705bf53d07d38e4111e84c Mon Sep 17 00:00:00 2001 > >From: Joonsoo Kim > >Date: Tue, 11 Mar

Re: [3.15-rc1 slab] Oops when reading /proc/slab_allocators

2014-04-15 Thread Joonsoo Kim
On Thu, Apr 10, 2014 at 08:54:37PM +0900, Tetsuo Handa wrote: > Joonsoo Kim wrote: > > There was another report about this problem and I have already fixed > > it, although it wasn't reviewed and merged. See following link. > > > > https://lkml.org/lkml/2014/3/

Re: v3.15-rc1 slab allocator broken on m68knommu (coldfire)

2014-04-15 Thread Joonsoo Kim
On Mon, Apr 14, 2014 at 05:45:43PM -0700, Steven King wrote: > git bisect suggests it starts somewhere around commit > f315e3fa1cf5b3317fc948708645fff889ce1e63 slab: restrict the number of objects > in a slab > > but its kinda hard to tell as there is some compile breakage in there as well. Hel

Re: [PATCH] mm/vmalloc: Introduce DEBUG_VMALLOCINFO to reduce spinlock contention

2014-04-15 Thread Joonsoo Kim
On Thu, Apr 10, 2014 at 12:40:58PM -0400, Richard Yao wrote: > Performance analysis of software compilation by Gentoo portage on an > Intel E5-2620 with 64GB of RAM revealed that a sizeable amount of time, > anywhere from 5% to 15%, was spent in get_vmalloc_info(), with at least > 40% of that time

Re: [PATCH 1/2] mm/page_alloc: prevent MIGRATE_RESERVE pages from being misplaced

2014-04-15 Thread Joonsoo Kim
UG_ON checks for the invariant that for > MIGRATE_RESERVE and MIGRATE_CMA pageblocks, freepage_migratetype must equal to > pageblock_migratetype so that these pages always go to the correct free_list. > > Reported-by: Yong-Taek Lee > Reported-by: Bartlomiej Zolnierkiewicz > Suggested-by: Joonsoo

Re: [PATCH 2/2] mm/page_alloc: DEBUG_VM checks for free_list placement of CMA and RESERVE pages

2014-04-15 Thread Joonsoo Kim
-Taek Lee > Cc: Bartlomiej Zolnierkiewicz > Cc: Joonsoo Kim > Cc: Mel Gorman > Cc: Minchan Kim > Cc: KOSAKI Motohiro > Cc: Marek Szyprowski > Cc: Hugh Dickins > Cc: Rik van Riel > Cc: Michal Nazarewicz > Signed-off-by: Vlastimil Babka Acked-by: Joonsoo Kim -

Re: [PATCH 1/2] mm/compaction: make isolate_freepages start at pageblock boundary

2014-04-15 Thread Joonsoo Kim
This patch fixes the problem by aligning the initial pfn in > isolate_freepages() > to pageblock boundary. This also allows to replace the end-of-pageblock > alignment within the for loop with a simple pageblock_nr_pages increment. > > Signed-off-by: Vlastimil Babka > Reported-by: Heesub S

Re: [PATCH 2/2] mm/compaction: cleanup isolate_freepages()

2014-04-15 Thread Joonsoo Kim
riable, but in fact it is not. > > This patch renames the 'high_pfn' variable to a hopefully less confusing name, > and slightly changes its handling without a functional change. A comment made > obsolete by recent changes is also updated. > > Signed-off-by: Vlastimil Babk

Re: [PATCH v3 -next 4/9] DMA, CMA: support arbitrary bitmap granularity

2014-06-19 Thread Joonsoo Kim
On Wed, Jun 18, 2014 at 01:48:15PM -0700, Andrew Morton wrote: > On Mon, 16 Jun 2014 14:40:46 +0900 Joonsoo Kim wrote: > > > PPC KVM's CMA area management requires arbitrary bitmap granularity, > > since they want to reserve very large memory and manage this region >

Re: linux-next: build failure after merge of the akpm-current tree

2014-06-19 Thread Joonsoo Kim
r path") > e58e263e5254 ("PPC, KVM, CMA: use general CMA reserved area management > framework") > Hello, If below patch fixes above problem, is it possible to retain above patches in linux-next? Thanks. -8< >From e5c519c4b74914067e43cb55e2

Re: [PATCH v5 02/14] mm, compaction: defer each zone individually instead of preferred zone

2014-07-28 Thread Joonsoo Kim
d compaction for the Normal zone, > and DMA32 zones on both nodes were thus not considered for compaction. > On different machine, success rates were improved with __GFP_NO_KSWAPD > allocations. > > Signed-off-by: Vlastimil Babka > Acked-by: Minchan Kim > Revi

Re: [PATCH v5 07/14] mm, compaction: khugepaged should not give up due to need_resched()

2014-07-28 Thread Joonsoo Kim
On Mon, Jul 28, 2014 at 03:11:34PM +0200, Vlastimil Babka wrote: > Async compaction aborts when it detects zone lock contention or need_resched() > is true. David Rientjes has reported that in practice, most direct async > compactions for THP allocation abort due to need_resched(). This means that

Re: [PATCH v5 14/14] mm, compaction: try to capture the just-created high-order freepage

2014-07-29 Thread Joonsoo Kim
r understated by the vmstats. Could you separate this patch from this patchset? I think that this patch hasn't gotten much review from other developers, unlike the other patches. > > Signed-off-by: Vlastimil Babka > Cc: Minchan Kim > Acked-by: Mel Gorman > Cc: Joonsoo Kim > Cc: Mich

Re: [PATCH v5 07/14] mm, compaction: khugepaged should not give up due to need_resched()

2014-07-29 Thread Joonsoo Kim
On Tue, Jul 29, 2014 at 12:31:13AM -0700, David Rientjes wrote: > On Tue, 29 Jul 2014, Joonsoo Kim wrote: > > > I have a silly question here. > > Why need_resched() is criteria to stop async compaction? > > need_resched() is flagged up when time slice runs out or other re

Re: [PATCH 00/10] fix freepage count problems due to memory isolation

2014-07-13 Thread Joonsoo Kim
On Mon, Jul 07, 2014 at 04:33:09PM +0200, Vlastimil Babka wrote: > On 07/07/2014 06:49 AM, Joonsoo Kim wrote: > >Ccing Lisa, because there was bug report it may be related this > >topic last Saturday. > > > >http://www.spinics.net/lists/linux-mm/msg75741.html > >

Re: [PATCH 03/10] mm/page_alloc: handle page on pcp correctly if it's pageblock is isolated

2014-07-13 Thread Joonsoo Kim
On Mon, Jul 07, 2014 at 05:19:48PM +0200, Vlastimil Babka wrote: > On 07/04/2014 09:57 AM, Joonsoo Kim wrote: > >If pageblock of page on pcp are isolated now, we should free it to isolate > >buddy list to prevent future allocation on it. But current code doesn't > >do th

Re: [PATCH 05/10] mm/page_alloc: optimize and unify pageblock migratetype check in free path

2014-07-13 Thread Joonsoo Kim
On Mon, Jul 07, 2014 at 05:50:09PM +0200, Vlastimil Babka wrote: > On 07/04/2014 09:57 AM, Joonsoo Kim wrote: > >Currently, when we free the page from pcp list to buddy, we check > >pageblock of the page in order to isolate the page on isolated > >pageblock. Although this co

Re: [PATCH 08/10] mm/page_alloc: use get_onbuddy_migratetype() to get buddy list type

2014-07-13 Thread Joonsoo Kim
On Mon, Jul 07, 2014 at 05:57:49PM +0200, Vlastimil Babka wrote: > On 07/04/2014 09:57 AM, Joonsoo Kim wrote: > >When isolating free page, what we want to know is which list > >the page is linked. If it is linked in isolate migratetype buddy list, > >we can skip waterma

Re: [PATCH] [RFC] CMA: clear buffer-head lru before page migration

2014-07-13 Thread Joonsoo Kim
On Tue, Jul 08, 2014 at 06:46:31PM +0200, Michal Nazarewicz wrote: > On Mon, Jul 07 2014, Andrew Morton wrote: > > What I proposed is that CMA call invalidate_bh_lrus() right at the > > outset. Something along the lines of > > > > --- a/mm/page_alloc.c~a > > +++ a/mm/page_alloc.c > > @@ -6329,6 +

Re: [RFC/PATCH RESEND -next 08/21] mm: page_alloc: add kasan hooks on alloc and free pathes

2014-07-14 Thread Joonsoo Kim
On Wed, Jul 09, 2014 at 03:30:02PM +0400, Andrey Ryabinin wrote: > Add kernel address sanitizer hooks to mark allocated page's addresses > as accessible in corresponding shadow region. > Mark freed pages as unaccessible. > > Signed-off-by: Andrey Ryabinin > --- > include/linux/kasan.h | 6 +

Re: [RFC/PATCH RESEND -next 10/21] mm: slab: share virt_to_cache() between slab and slub

2014-07-14 Thread Joonsoo Kim
On Wed, Jul 09, 2014 at 03:30:04PM +0400, Andrey Ryabinin wrote: > This patch shares virt_to_cache() between slab and slub and > it used in cache_from_obj() now. > Later virt_to_cache() will be kernel address sanitizer also. I think that this patch won't be needed. See comment in 15/21. Thanks.

Re: [RFC/PATCH RESEND -next 14/21] mm: slub: kasan: disable kasan when touching unaccessible memory

2014-07-14 Thread Joonsoo Kim
On Wed, Jul 09, 2014 at 03:30:08PM +0400, Andrey Ryabinin wrote: > Some code in slub could validly touch memory marked by kasan as unaccessible. > Even though slub.c doesn't instrumented, functions called in it are > instrumented, > so to avoid false positive reports such places are protected by >

Re: [RFC/PATCH RESEND -next 15/21] mm: slub: add kernel address sanitizer hooks to slub allocator

2014-07-14 Thread Joonsoo Kim
On Wed, Jul 09, 2014 at 03:30:09PM +0400, Andrey Ryabinin wrote: > With this patch kasan will be able to catch bugs in memory allocated > by slub. > Allocated slab page, this whole page marked as unaccessible > in corresponding shadow memory. > On allocation of slub object requested allocation size

Re: [RFC/PATCH RESEND -next 20/21] fs: dcache: manually unpoison dname after allocation to shut up kasan's reports

2014-07-14 Thread Joonsoo Kim
On Wed, Jul 09, 2014 at 03:30:14PM +0400, Andrey Ryabinin wrote: > We need to manually unpoison rounded up allocation size for dname > to avoid kasan's reports in __d_lookup_rcu. > __d_lookup_rcu may validly read a little beyond allocated size. If it reads a little beyond the allocated size, IMHO, it

Re: [RFC/PATCH RESEND -next 14/21] mm: slub: kasan: disable kasan when touching unaccessible memory

2014-07-15 Thread Joonsoo Kim
On Tue, Jul 15, 2014 at 11:37:56AM +0400, Andrey Ryabinin wrote: > On 07/15/14 10:04, Joonsoo Kim wrote: > > On Wed, Jul 09, 2014 at 03:30:08PM +0400, Andrey Ryabinin wrote: > >> Some code in slub could validly touch memory marked by kasan as > >> unaccessible. >

Re: [PATCH 00/10] fix freepage count problems due to memory isolation

2014-07-15 Thread Joonsoo Kim
On Mon, Jul 14, 2014 at 11:49:25AM +0200, Vlastimil Babka wrote: > On 07/14/2014 08:22 AM, Joonsoo Kim wrote: > >On Mon, Jul 07, 2014 at 04:33:09PM +0200, Vlastimil Babka wrote: > >>On 07/07/2014 06:49 AM, Joonsoo Kim wrote: > >>>Ccing Lisa, because there was bug

[PATCH v3 2/9] slab: move up code to get kmem_cache_node in free_block()

2014-07-01 Thread Joonsoo Kim
node isn't changed, so we don't need to retrieve this structure every time we move the object. Maybe the compiler does this optimization, but making it explicit is better. Acked-by: Christoph Lameter Signed-off-by: Joonsoo Kim --- mm/slab.c |3 +-- 1 file changed, 1 insertion(+), 2

[PATCH v3 4/9] slab: factor out initialization of array cache

2014-07-01 Thread Joonsoo Kim
Factor out initialization of the array cache to use it in a following patch. Acked-by: Christoph Lameter Signed-off-by: Joonsoo Kim --- mm/slab.c | 33 +++-- 1 file changed, 19 insertions(+), 14 deletions(-) diff --git a/mm/slab.c b/mm/slab.c index 59b9a4c..00b6bbc

[PATCH v3 9/9] slab: remove BAD_ALIEN_MAGIC

2014-07-01 Thread Joonsoo Kim
BAD_ALIEN_MAGIC value isn't used anymore. So remove it. Acked-by: Christoph Lameter Signed-off-by: Joonsoo Kim --- mm/slab.c |4 +--- 1 file changed, 1 insertion(+), 3 deletions(-) diff --git a/mm/slab.c b/mm/slab.c index 7820a45..60c9e11 100644 --- a/mm/slab.c +++ b/mm/slab.c @@ -

[PATCH v3 1/9] slab: add unlikely macro to help compiler

2014-07-01 Thread Joonsoo Kim
jes Acked-by: Christoph Lameter Signed-off-by: Joonsoo Kim --- mm/slab.c |2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/mm/slab.c b/mm/slab.c index 179272f..f8a0ed1 100644 --- a/mm/slab.c +++ b/mm/slab.c @@ -3067,7 +3067,7 @@ static void *cache_alloc_debugcheck_af

[PATCH v3 3/9] slab: defer slab_destroy in free_block()

2014-07-01 Thread Joonsoo Kim
performance effect of this, but it'd be better not to hold the lock longer than necessary. Commented by Christoph: This is also good because kmem_cache_free is no longer called while holding the node lock. So we avoid one case of recursion. Acked-by: Christoph Lameter Signed-off-by: Joonso

[PATCH v3 0/9] clean-up and remove lockdep annotation in SLAB

2014-07-01 Thread Joonsoo Kim
rent linux-next. Thanks. Joonsoo Kim (9): slab: add unlikely macro to help compiler slab: move up code to get kmem_cache_node in free_block() slab: defer slab_destroy in free_block() slab: factor out initialization of array cache slab: introduce alien_cache slab: use the lock on a

[PATCH v3 6/9] slab: use the lock on alien_cache, instead of the lock on array_cache

2014-07-01 Thread Joonsoo Kim
Now, we have separate alien_cache structure, so it'd be better to hold the lock on alien_cache while manipulating alien_cache. After that, we don't need the lock on array_cache, so remove it. Acked-by: Christoph Lameter Signed-off-by: Joonsoo Kim --- mm/sla

[PATCH v3 5/9] slab: introduce alien_cache

2014-07-01 Thread Joonsoo Kim
d, so removing it would be better. This patch prepares for it by introducing alien_cache and using it. In the following patch, we remove the spinlock in array_cache. Acked-by: Christoph Lameter Signed-off-by: Joonsoo Kim --- mm/slab.c | 108 ++--- mm/s

[PATCH v3 8/9] slab: remove a useless lockdep annotation

2014-07-01 Thread Joonsoo Kim
er Signed-off-by: Joonsoo Kim --- mm/slab.c | 153 - 1 file changed, 153 deletions(-) diff --git a/mm/slab.c b/mm/slab.c index 854dfa0..7820a45 100644 --- a/mm/slab.c +++ b/mm/slab.c @@ -472,139 +472,6 @@ static struct kmem_cache kmem_

[PATCH v3 7/9] slab: destroy a slab without holding any alien cache lock

2014-07-01 Thread Joonsoo Kim
I haven't heard that this alien cache lock is contended, but reducing the chance of contention is generally better. And with this change, we can simplify the complex lockdep annotation in slab code. In the following patch, it will be implemented. Acked-by: Christoph Lameter Signed-off-by: Jo

Re: [PATCH v3 2/9] slab: move up code to get kmem_cache_node in free_block()

2014-07-01 Thread Joonsoo Kim
On Tue, Jul 01, 2014 at 03:21:21PM -0700, David Rientjes wrote: > On Tue, 1 Jul 2014, Joonsoo Kim wrote: > > > node isn't changed, so we don't need to retreive this structure > > everytime we move the object. Maybe compiler do this optimization, > >

Re: [PATCH v3 3/9] slab: defer slab_destroy in free_block()

2014-07-01 Thread Joonsoo Kim
On Tue, Jul 01, 2014 at 03:25:04PM -0700, David Rientjes wrote: > On Tue, 1 Jul 2014, Joonsoo Kim wrote: > > > In free_block(), if freeing object makes new free slab and number of > > free_objects exceeds free_limit, we start to destroy this new free slab > > with holding

Re: [PATCH v3 4/9] slab: factor out initialization of array cache

2014-07-01 Thread Joonsoo Kim
On Tue, Jul 01, 2014 at 03:26:26PM -0700, David Rientjes wrote: > On Tue, 1 Jul 2014, Joonsoo Kim wrote: > > > Factor out initialization of array cache to use it in following patch. > > > > Acked-by: Christoph Lameter > > Signed-off-by: Joonsoo Kim > > Not

Re: [PATCH v3 5/9] slab: introduce alien_cache

2014-07-01 Thread Joonsoo Kim
On Tue, Jul 01, 2014 at 03:15:47PM -0700, Andrew Morton wrote: > On Tue, 1 Jul 2014 17:27:34 +0900 Joonsoo Kim wrote: > > > -static struct array_cache **alloc_alien_cache(int node, int limit, gfp_t > > gfp) > > +static struct alien_cache *__alloc_alien_cac

Re: [PATCH v3 04/13] mm, compaction: move pageblock checks up from isolate_migratepages_range()

2014-06-26 Thread Joonsoo Kim
On Wed, Jun 25, 2014 at 10:59:19AM +0200, Vlastimil Babka wrote: > On 06/25/2014 02:53 AM, Joonsoo Kim wrote: > >On Tue, Jun 24, 2014 at 05:42:50PM +0200, Vlastimil Babka wrote: > >>On 06/24/2014 10:33 AM, Joonsoo Kim wrote: > >>>On Fri, Jun 20, 2014 at 05:49:34P

Re: [RFC] mm: cma: move init_cma_reserved_pageblock to cma.c

2014-06-26 Thread Joonsoo Kim
nitialisation of the area. It's not clear, to me at least, what good > is continuing the work on a PFN that is known to be invalid. > > Signed-off-by: Michal Nazarewicz Acked-by: Joonsoo Kim One question below. > --- >

Re: [PATCH -mm v3 8/8] slab: do not keep free objects/slabs on dead memcg caches

2014-06-26 Thread Joonsoo Kim
On Wed, Jun 25, 2014 at 05:45:45PM +0400, Vladimir Davydov wrote: > On Tue, Jun 24, 2014 at 04:38:41PM +0900, Joonsoo Kim wrote: > > On Fri, Jun 13, 2014 at 12:38:22AM +0400, Vladimir Davydov wrote: > > And, you said that this way of implementation would be slow because > >

Re: [PATCH 00/10] fix freepage count problems due to memory isolation

2014-07-16 Thread Joonsoo Kim
On Tue, Jul 15, 2014 at 10:36:35AM +0200, Vlastimil Babka wrote: > >>A non-trivial fix that comes to mind (and I might have overlooked > >>something) is something like: > >> > >>- distinguish MIGRATETYPE_ISOLATING and MIGRATETYPE_ISOLATED > >>- CPU1 first sets MIGRATETYPE_ISOLATING before the drain

Re: [PATCH 00/10] fix freepage count problems due to memory isolation

2014-07-16 Thread Joonsoo Kim
On Wed, Jul 16, 2014 at 01:14:26PM +0200, Vlastimil Babka wrote: > On 07/16/2014 10:43 AM, Joonsoo Kim wrote: > >> I think your plan of multiple parallel CMA allocations (and thus > >> multiple parallel isolations) is also possible. The isolate pcplists > >> can

[PATCH 00/10] fix freepage count problems due to memory isolation

2014-07-04 Thread Joonsoo Kim
mation. This patchset is based on linux-next-20140703. Thanks. [1]: Aggressively allocate the pages on cma reserved memory https://lkml.org/lkml/2014/5/30/291 Joonsoo Kim (10): mm/page_alloc: remove unlikely macro on free_one_page() mm/page_alloc: correct to clear guard attribut

[PATCH 09/10] mm/page_alloc: fix possible wrongly calculated freepage counter

2014-07-04 Thread Joonsoo Kim
, another future user of this function could also miss fixing up the freepage count. Now we have the proper infrastructure, get_onbuddy_migratetype(), which can be used to get the current migratetype of the buddy list, so fix this situation. Signed-off-by: Joonsoo Kim --- mm/page_alloc.c | 37

[PATCH 07/10] mm/page_alloc: store migratetype of the buddy list into freepage correctly

2014-07-04 Thread Joonsoo Kim
page is merged or split. Hence, this patch adds set_onbuddy_migratetype() to set_page_order(). And this patch makes set/get_onbuddy_migratetype() enabled only if memory isolation is enabled, because it isn't needed otherwise. Signed-off-by: Joonsoo Kim --- include/linux/m

[PATCH 04/10] mm/page_alloc: carefully free the page on isolate pageblock

2014-07-04 Thread Joonsoo Kim
ock 1 - release the zone lock - grab the zone lock - call __free_one_page() with MIGRATE_ISOLATE - the freed page goes into the isolate buddy list and we can't use it anymore. To prevent this possibility, re-check the migratetype while holding the lock. Signed-off-by: Joonsoo Ki

[PATCH 06/10] mm/page_alloc: separate freepage migratetype interface

2014-07-04 Thread Joonsoo Kim
ff-by: Joonsoo Kim --- include/linux/mm.h | 24 mm/page_alloc.c | 18 +- mm/page_isolation.c |4 ++-- 3 files changed, 31 insertions(+), 15 deletions(-) diff --git a/include/linux/mm.h b/include/linux/mm.h index e03dd29..278ecfd 100644 --- a/in

[PATCH 05/10] mm/page_alloc: optimize and unify pageblock migratetype check in free path

2014-07-04 Thread Joonsoo Kim
, because it can be done in the common part, __free_one_page(). This unification provides an extra guarantee that pages on an isolate pageblock don't go into a non-isolate buddy list. This is similar to the situation described in the previous patch, so refer to it if you need more explanation. Signed-off-by: Joonso

[PATCH 08/10] mm/page_alloc: use get_onbuddy_migratetype() to get buddy list type

2014-07-04 Thread Joonsoo Kim
, get_onbuddy_migratetype() is a better fit and cheaper than get_pageblock_migratetype(), so use it. Signed-off-by: Joonsoo Kim --- mm/page_alloc.c |2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/mm/page_alloc.c b/mm/page_alloc.c index e1c4c3e..d9fb8bb 100644 --- a/mm/page_alloc.c +++ b/mm

[PATCH 10/10] mm/page_alloc: Stop merging pages on non-isolate and isolate buddy list

2014-07-04 Thread Joonsoo Kim
think that this is not a problem, because isolation means that we will use the pages on the isolate pageblock specially, so they will be split soon in any case. Signed-off-by: Joonsoo Kim --- mm/page_alloc.c | 18 ++ 1 file changed, 18 insertions(+) diff --git a/mm/page_alloc.c b/mm/page_all

[PATCH 01/10] mm/page_alloc: remove unlikely macro on free_one_page()

2014-07-04 Thread Joonsoo Kim
Isolation is a really rare case, so !is_migrate_isolate() is the likely case. Remove the unlikely macro. Signed-off-by: Joonsoo Kim --- mm/page_alloc.c |2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/mm/page_alloc.c b/mm/page_alloc.c index 8dac0f0..0d4cf7a 100644 --- a/mm

[PATCH 02/10] mm/page_alloc: correct to clear guard attribute in DEBUG_PAGEALLOC

2014-07-04 Thread Joonsoo Kim
his patch is fixing freepage accounting. If we clear a guard page and link it onto the isolate buddy list, we should not increase the freepage count. Signed-off-by: Joonsoo Kim --- mm/page_alloc.c | 29 - 1 file changed, 16 insertions(+), 13 deletions(-) diff --g

[PATCH 03/10] mm/page_alloc: handle page on pcp correctly if its pageblock is isolated

2014-07-04 Thread Joonsoo Kim
lling __free_one_page(). And if we find the page on an isolated pageblock, change the migratetype to MIGRATE_ISOLATE to prevent future allocation of this page and a freepage counting problem. Signed-off-by: Joonsoo Kim --- mm/page_alloc.c | 14 -- 1 file changed, 8 insertions(+), 6 deletions(-)

Re: [PATCH 00/10] fix freepage count problems due to memory isolation

2014-07-06 Thread Joonsoo Kim
Ccing Lisa, because there was bug report it may be related this topic last Saturday. http://www.spinics.net/lists/linux-mm/msg75741.html On Fri, Jul 04, 2014 at 05:33:27PM +0200, Vlastimil Babka wrote: > On 07/04/2014 09:57 AM, Joonsoo Kim wrote: > > Hello, > > Hi Joonsoo, >
