On Tue, Mar 11, 2014 at 11:58:11AM +0900, Joonsoo Kim wrote:
> On Mon, Mar 10, 2014 at 09:24:55PM -0400, Dave Jones wrote:
> > On Tue, Mar 11, 2014 at 10:01:35AM +0900, Joonsoo Kim wrote:
> > > On Tue, Mar 11, 2014 at 09:35:00AM +0900, Joonsoo Kim wrote:
> > > >
> > Okay. I think there is a way to fix it.
> > By assigning pfn (the start of the scanning window) to
> > end_pfn (the end of the scanning window) for the next loop, we can solve the problem
> > you mentioned. How about the below?
> >
> > - pfn -= pageblock_nr_pages, end_pfn -= pageblock_nr_
On Thu, May 15, 2014 at 11:43:53AM +0900, Minchan Kim wrote:
> On Thu, May 15, 2014 at 10:53:01AM +0900, Joonsoo Kim wrote:
> > On Tue, May 13, 2014 at 12:00:57PM +0900, Minchan Kim wrote:
> > > Hey Joonsoo,
> > >
> > > On Thu, May 08, 2014 at 09:32:23AM +090
On Thu, May 15, 2014 at 10:47:18AM +0100, Mel Gorman wrote:
> On Thu, May 15, 2014 at 11:10:55AM +0900, Joonsoo Kim wrote:
> > > That doesn't always prefer CMA region. It would be nice to
> > > understand why grouping in pageblock_nr_pages is beneficial. Also in
>
On Sun, May 18, 2014 at 11:06:08PM +0530, Aneesh Kumar K.V wrote:
> Joonsoo Kim writes:
>
> > On Wed, May 14, 2014 at 02:12:19PM +0530, Aneesh Kumar K.V wrote:
> >> Joonsoo Kim writes:
> >>
> >>
> >>
> >> Another issue i am facing wi
On Mon, May 19, 2014 at 11:53:05AM +0900, Minchan Kim wrote:
> On Mon, May 19, 2014 at 11:11:21AM +0900, Joonsoo Kim wrote:
> > On Thu, May 15, 2014 at 11:43:53AM +0900, Minchan Kim wrote:
> > > On Thu, May 15, 2014 at 10:53:01AM +0900, Joonsoo Kim wrote:
> > > > On T
On Mon, May 19, 2014 at 10:47:12AM +0900, Gioh Kim wrote:
> Thank you for your advice. I didn't notice it.
>
> I'm adding the following according to your advice:
>
> - range restriction for CMA_SIZE_MBYTES and *CMA_SIZE_PERCENTAGE*
> I think this can prevent wrong kernel options.
>
> - change size_
On Tue, May 20, 2014 at 08:18:59AM +0900, Minchan Kim wrote:
> On Mon, May 19, 2014 at 01:50:01PM +0900, Joonsoo Kim wrote:
> > On Mon, May 19, 2014 at 11:53:05AM +0900, Minchan Kim wrote:
> > > On Mon, May 19, 2014 at 11:11:21AM +0900, Joonsoo Kim wrote:
> > > > On T
On Tue, May 20, 2014 at 02:57:47PM +0900, Gioh Kim wrote:
>
> Thanks for your advice, Michal Nazarewicz.
>
> Having discussed with Joonsoo, I'm adding a fallback allocation after
> __alloc_from_contiguous().
> The fallback allocation works if the CMA kernel option is turned on but the CMA
> size is zero.
On Tue, May 20, 2014 at 04:05:52PM +0900, Gioh Kim wrote:
> That case, device-specific coherent memory allocation, is handled by
> dma_alloc_coherent() in arm_dma_alloc().
> __dma_alloc() handles only general coherent memory allocation.
>
> I'm sorry for not mentioning it.
>
Hello,
AFAIK, *cohere
a for highorder is the pageblock order, so calling it once
within a pageblock range causes no problem.
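For illustration only, a minimal user-space sketch of the "check once per pageblock" idea from the changelog above; PAGEBLOCK_NR_PAGES, is_suitable_block() and the pfn range are made-up stand-ins, not the kernel's definitions.

/*
 * Illustrative sketch: scan a pfn range, but run the (expensive)
 * suitability check only once per pageblock.
 */
#include <stdbool.h>
#include <stdio.h>

#define PAGEBLOCK_NR_PAGES 512UL	/* assumed pageblock size in pages */

static bool is_suitable_block(unsigned long block_start_pfn)
{
	/* pretend this is an expensive per-pageblock check */
	return (block_start_pfn / PAGEBLOCK_NR_PAGES) % 2 == 0;
}

int main(void)
{
	unsigned long pfn, start = 0, end = 4 * PAGEBLOCK_NR_PAGES;
	unsigned long scanned = 0;
	bool suitable = false;

	for (pfn = start; pfn < end; pfn++) {
		/* re-evaluate only when we cross into a new pageblock */
		if ((pfn & (PAGEBLOCK_NR_PAGES - 1)) == 0)
			suitable = is_suitable_block(pfn);
		if (suitable)
			scanned++;
	}
	printf("scanned %lu pages in suitable pageblocks\n", scanned);
	return 0;
}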
Signed-off-by: Joonsoo Kim
diff --git a/mm/compaction.c b/mm/compaction.c
index bbe1260..0d821a2 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -245,6 +245,7 @@ static unsigned
This is just a clean-up to reduce code size and improve readability.
There is no functional change.
Acked-by: Vlastimil Babka
Signed-off-by: Joonsoo Kim
diff --git a/mm/compaction.c b/mm/compaction.c
index 56536d3..a1a9270 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -553,11 +553,7
]
Acked-by: Vlastimil Babka
Signed-off-by: Joonsoo Kim
diff --git a/mm/compaction.c b/mm/compaction.c
index b1ba297..56536d3 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -520,26 +520,31 @@ isolate_migratepages_range(struct zone *zone, struct compact_control *cc,
/* If
.
Additionally, clean up the logic in suitable_migration_target() to simplify
the code. There are no functional changes from this clean-up.
Acked-by: Vlastimil Babka
Signed-off-by: Joonsoo Kim
diff --git a/mm/compaction.c b/mm/compaction.c
index 3a91a2e..bbe1260 100644
--- a/mm/compaction.c
+++ b/mm
isolating, retry to acquire the lock.
I think that it is better to use every SWAP_CLUSTER_MAX-th pfn as the criterion
for dropping the lock. This does no harm at pfn 0x0, because,
at that time, the locked variable would be false.
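As an aside, a minimal user-space sketch of the periodic lock-drop pattern described above; the pthread mutex, the SWAP_CLUSTER_MAX value and should_release() are stand-ins for the zone lock and the kernel's contention test, not the actual compaction code.

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

#define SWAP_CLUSTER_MAX 32UL	/* assumed batch size */

static pthread_mutex_t zone_lock = PTHREAD_MUTEX_INITIALIZER;

static bool should_release(void)
{
	/* pretend contention/resched check */
	return false;
}

int main(void)
{
	unsigned long pfn, end = 4096;
	bool locked = false;

	for (pfn = 0; pfn < end; pfn++) {
		/*
		 * Only consider dropping the lock on every SWAP_CLUSTER_MAX-th
		 * pfn; pfn 0 is harmless because 'locked' is still false there.
		 */
		if (!(pfn % SWAP_CLUSTER_MAX) && locked && should_release()) {
			pthread_mutex_unlock(&zone_lock);
			locked = false;
		}
		if (!locked) {
			pthread_mutex_lock(&zone_lock);
			locked = true;
		}
		/* ... isolate/scan the page at 'pfn' here ... */
	}
	if (locked)
		pthread_mutex_unlock(&zone_lock);
	printf("done scanning %lu pfns\n", end);
	return 0;
}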
Acked-by: Vlastimil Babka
Signed-off-by: Joonsoo Kim
diff --git a/mm
12554 110868637
Compaction cost 2469 1998
Joonsoo Kim (5):
mm/compaction: disallow high-order page for migration target
mm/compaction: do not call suitable_migration_target() on every page
mm/compaction: change the timing to check to drop th
slab_should_failslab() is called on every allocation, so optimizing it
is reasonable. We normally don't allocate from the kmem_cache cache; it is
only used when a new kmem_cache is created, which is a very rare case.
Therefore, add the unlikely() macro to help compiler optimization.
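For readers unfamiliar with the hint, a self-contained sketch of the unlikely() pattern referred to above; the failslab-style check and its trigger condition are invented for illustration, only the branch-hint usage mirrors the patch.

#include <stdbool.h>
#include <stdio.h>

#define unlikely(x) __builtin_expect(!!(x), 0)

static bool should_failslab(unsigned long allocs)
{
	/* pretend this is the rare "allocating from kmem_cache itself" case */
	return allocs == 0;
}

static int do_alloc(unsigned long allocs)
{
	if (unlikely(should_failslab(allocs)))
		return -1;	/* rare path; the hint marks this branch as cold */
	return 0;		/* common fast path */
}

int main(void)
{
	unsigned long i, failed = 0;

	for (i = 0; i < 1000; i++)
		if (do_alloc(i))
			failed++;
	printf("failed allocations: %lu\n", failed);
	return 0;
}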
Signed-off-by: Joonsoo
performance effect of this, but it would be better to hold the lock
as little as possible.
Signed-off-by: Joonsoo Kim
diff --git a/mm/slab.c b/mm/slab.c
index 53d1a36..551d503 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -242,7 +242,8 @@ static struct kmem_cache_node __initdata init_kmem_cache
ff-by: Joonsoo Kim
diff --git a/mm/slab.c b/mm/slab.c
index 5906f8f..6d17cad 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -215,9 +215,9 @@ static inline void set_obj_pfmemalloc(void **objp)
return;
}
-static inline void clear_obj_pfmemalloc(void **objp)
+static inline void *clear_obj_pfmem
Now there is no code that holds two locks simultaneously, since
we don't call slab_destroy() while holding any lock. So the lockdep
annotation is useless now. Remove it.
Signed-off-by: Joonsoo Kim
diff --git a/mm/slab.c b/mm/slab.c
index 9c9d4d4..f723a72 100644
--- a/mm/slab.c
+++ b/mm/s
I haven't heard that this alien cache lock is contended, but reducing the
chance of contention is generally better. And with this change,
we can simplify the complex lockdep annotation in the slab code.
It will be implemented in the following patch.
Signed-off-by: Joonsoo Kim
diff --git a/mm/s
Now we have a separate alien_cache structure, so it'd be better to hold
the lock on the alien_cache while manipulating it. After that,
we don't need the lock on array_cache, so remove it.
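A rough user-space sketch of the locking move described above, with the lock embedded in the alien-cache-like structure; the field names and sizes are illustrative, not the kernel's slab structures.

#include <pthread.h>
#include <stdio.h>

struct array_cache {		/* no lock in here anymore */
	unsigned int avail;
	unsigned int limit;
	void *entry[16];
};

struct alien_cache {
	pthread_mutex_t lock;	/* lock protecting the embedded cache */
	struct array_cache ac;
};

static void alien_cache_add(struct alien_cache *alc, void *obj)
{
	pthread_mutex_lock(&alc->lock);
	if (alc->ac.avail < alc->ac.limit)
		alc->ac.entry[alc->ac.avail++] = obj;
	pthread_mutex_unlock(&alc->lock);
}

int main(void)
{
	struct alien_cache alc = { .ac = { .avail = 0, .limit = 16 } };
	int obj = 42;

	pthread_mutex_init(&alc.lock, NULL);
	alien_cache_add(&alc, &obj);
	printf("cached %u object(s)\n", alc.ac.avail);
	pthread_mutex_destroy(&alc.lock);
	return 0;
}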
Signed-off-by: Joonsoo Kim
diff --git a/mm/slab.c b/mm/slab.c
index c048ac5..ec1df4c 10064
d, so removing it would be better. This patch prepares for that by
introducing alien_cache and using it. In the following patch,
we remove the spinlock from array_cache.
Signed-off-by: Joonsoo Kim
diff --git a/mm/slab.c b/mm/slab.c
index 90bfd79..c048ac5 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -203,6 +2
tation. As the short stat above notes, this makes the SLAB code much simpler.
This patchset is based on the slab/next branch of Pekka's git tree.
git://git.kernel.org/pub/scm/linux/kernel/git/penberg/linux.git
Thanks.
Joonsoo Kim (9):
slab: add unlikely macro to help compiler
slab: makes clear_obj_pfmemallo
Factor out the initialization of the array cache so it can be used in a following patch.
Signed-off-by: Joonsoo Kim
diff --git a/mm/slab.c b/mm/slab.c
index 551d503..90bfd79 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -741,13 +741,8 @@ static void start_cpu_timer(int cpu)
}
}
-static struct array_cache
node isn't changed, so we don't need to retrieve this structure
every time we move the object. Maybe the compiler does this optimization,
but making it explicit is better.
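A tiny sketch of the hoisting this changelog describes, moving the invariant per-node lookup out of the per-object loop; get_node() and the counters are stand-ins for the slab internals.

#include <stdio.h>

struct node_info {
	int id;
	unsigned long free_objects;
};

static struct node_info nodes[2] = { { 0, 0 }, { 1, 0 } };

static struct node_info *get_node(int node)
{
	return &nodes[node];
}

static void free_block(int node, int nr_objects)
{
	/* hoisted out of the loop: node is invariant for the whole batch */
	struct node_info *n = get_node(node);
	int i;

	for (i = 0; i < nr_objects; i++)
		n->free_objects++;	/* no per-iteration get_node(node) */
}

int main(void)
{
	free_block(1, 8);
	printf("node 1 free objects: %lu\n", nodes[1].free_objects);
	return 0;
}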
Signed-off-by: Joonsoo Kim
diff --git a/mm/slab.c b/mm/slab.c
index 6d17cad..53d1a36 100644
--- a/mm/slab.c
+++ b
On Fri, Feb 14, 2014 at 06:26:15PM -0600, Christoph Lameter wrote:
> On Fri, 14 Feb 2014, David Rientjes wrote:
>
> > Yeah, you don't need it, but don't you think it makes the code more
> > readable? Otherwise this is going to be just doing
> >
> > return (unsigned long)objp & ~SLAB_OBJ_PFMEM
On Fri, Feb 14, 2014 at 03:19:02PM -0800, David Rientjes wrote:
> On Fri, 14 Feb 2014, Joonsoo Kim wrote:
>
> > node isn't changed, so we don't need to retrieve this structure
> > every time we move the object. Maybe the compiler does this optimization,
> >
On Fri, Feb 14, 2014 at 12:49:57PM -0600, Christoph Lameter wrote:
> On Fri, 14 Feb 2014, Joonsoo Kim wrote:
>
> > @@ -921,7 +784,7 @@ static int transfer_objects(struct array_cache *to,
> > static inline struct alien_cache **alloc_al
2014-05-01 9:45 GMT+09:00 David Rientjes :
> Synchronous memory compaction can be very expensive: it can iterate an enormous
> amount of memory without aborting and it can wait on page locks and writeback
> to complete if a pageblock cannot be defragmented.
> Unfortunately, it's too expensive
On Tue, May 13, 2014 at 12:00:57PM +0900, Minchan Kim wrote:
> Hey Joonsoo,
>
> On Thu, May 08, 2014 at 09:32:23AM +0900, Joonsoo Kim wrote:
> > CMA is introduced to provide physically contiguous pages at runtime.
> > For this purpose, it reserves memory at boot time
On Wed, May 14, 2014 at 02:12:19PM +0530, Aneesh Kumar K.V wrote:
> Joonsoo Kim writes:
>
> > CMA is introduced to provide physically contiguous pages at runtime.
> > For this purpose, it reserves memory at boot time. Although it reserves
> > memory, this reserved memory
On Wed, May 14, 2014 at 03:14:30PM +0530, Aneesh Kumar K.V wrote:
> Joonsoo Kim writes:
>
> > On Fri, May 09, 2014 at 02:39:20PM +0200, Marek Szyprowski wrote:
> >> Hello,
> >>
> >> On 2014-05-08 02:32, Joonsoo Kim wrote:
> >> >This series trie
On Tue, May 13, 2014 at 10:54:58AM +0200, Vlastimil Babka wrote:
> On 05/13/2014 02:44 AM, Joonsoo Kim wrote:
> >On Mon, May 12, 2014 at 04:15:11PM +0200, Vlastimil Babka wrote:
> >>Compaction uses compact_checklock_irqsave() function to periodically check
> >>
e of aborting on contention, and might result in pageblocks not
> being scanned completely, since the scanning cursor is advanced. This patch
> makes isolate_freepages_block() check the cc->contended flag and abort.
>
> Reported-by: Joonsoo Kim
> Signed-off-by: Vlas
On Mon, May 12, 2014 at 10:04:29AM -0700, Laura Abbott wrote:
> Hi,
>
> On 5/7/2014 5:32 PM, Joonsoo Kim wrote:
> > CMA is introduced to provide physically contiguous pages at runtime.
> > For this purpose, it reserves memory at boot time. Although it reserves
> > memor
On Mon, May 12, 2014 at 11:09:25AM +0200, Vlastimil Babka wrote:
> On 05/08/2014 07:28 AM, Joonsoo Kim wrote:
> >On Wed, May 07, 2014 at 02:09:10PM +0200, Vlastimil Babka wrote:
> >>The compaction free scanner in isolate_freepages() currently remembers PFN
> >>of
>
On Mon, May 12, 2014 at 10:28:25AM +0200, Vlastimil Babka wrote:
> On 05/08/2014 07:54 AM, Joonsoo Kim wrote:
> >On Wed, May 07, 2014 at 04:59:07PM +0200, Vlastimil Babka wrote:
> >>On 05/07/2014 03:33 AM, Minchan Kim wrote:
> >>>On Mon, May 05, 2014 at 05:50:46P
On Thu, May 08, 2014 at 03:34:33PM -0700, Andrew Morton wrote:
> On Thu, 8 May 2014 15:19:37 +0900 Minchan Kim wrote:
>
> > > I also think that the VM_DEBUG overhead isn't a problem, for the same
> > > reason Vlastimil gave.
> >
> > Guys, please read this.
> >
> > https://lkml.org/lkml/2013/7/17/59
On Fri, May 09, 2014 at 02:39:20PM +0200, Marek Szyprowski wrote:
> Hello,
>
> On 2014-05-08 02:32, Joonsoo Kim wrote:
> >This series tries to improve CMA.
> >
> >CMA is introduced to provide physically contiguous pages at runtime
> >without reserving memory
On Wed, Apr 09, 2014 at 08:36:10PM +0900, Tetsuo Handa wrote:
> Hello.
>
> I found that
>
> $ cat /proc/slab_allocators
>
> causes an oops.
>
> -- dmesg start --
> [ 22.719620] BUG: unable to handle kernel paging request at 8800389b7ff8
> [ 22.719742] IP: [] handle_sla
ig problem.
Reported-by: Dave Jones
Reported-by: Tetsuo Handa
Signed-off-by: Joonsoo Kim
---
This patch is based on v3.15-rc1.
diff --git a/mm/slab.c b/mm/slab.c
index 388cb1a..101eae4 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -386,6 +386,41 @@ static void **dbg_userword(struct kmem_ca
On Wed, Apr 16, 2014 at 08:45:11AM +0900, Joonsoo Kim wrote:
> commit 'b1cb098: change the management method of free objects of the slab'
> introduces a bug in the slab leak detector ('/proc/slab_allocators'). This
> detector works as in the following description.
>
>
On Fri, Apr 11, 2014 at 10:36:27AM +0300, Pekka Enberg wrote:
> On 03/11/2014 10:30 AM, Joonsoo Kim wrote:
> >-8<-
> > From ff6fe77fb764ca5bf8705bf53d07d38e4111e84c Mon Sep 17 00:00:00 2001
> >From: Joonsoo Kim
> >Date: Tue, 11 Mar
On Thu, Apr 10, 2014 at 08:54:37PM +0900, Tetsuo Handa wrote:
> Joonsoo Kim wrote:
> > There was another report about this problem and I have already fixed
> > it, although it wasn't reviewed and merged. See following link.
> >
> > https://lkml.org/lkml/2014/3/
On Mon, Apr 14, 2014 at 05:45:43PM -0700, Steven King wrote:
> git bisect suggests it starts somewhere around commit
> f315e3fa1cf5b3317fc948708645fff889ce1e63 slab: restrict the number of objects
> in a slab
>
> but its kinda hard to tell as there is some compile breakage in there as well.
Hel
On Thu, Apr 10, 2014 at 12:40:58PM -0400, Richard Yao wrote:
> Performance analysis of software compilation by Gentoo portage on an
> Intel E5-2620 with 64GB of RAM revealed that a sizeable amount of time,
> anywhere from 5% to 15%, was spent in get_vmalloc_info(), with at least
> 40% of that time
UG_ON checks for the invariant that for
> MIGRATE_RESERVE and MIGRATE_CMA pageblocks, freepage_migratetype must equal to
> pageblock_migratetype so that these pages always go to the correct free_list.
>
> Reported-by: Yong-Taek Lee
> Reported-by: Bartlomiej Zolnierkiewicz
> Suggested-by: Joonsoo
-Taek Lee
> Cc: Bartlomiej Zolnierkiewicz
> Cc: Joonsoo Kim
> Cc: Mel Gorman
> Cc: Minchan Kim
> Cc: KOSAKI Motohiro
> Cc: Marek Szyprowski
> Cc: Hugh Dickins
> Cc: Rik van Riel
> Cc: Michal Nazarewicz
> Signed-off-by: Vlastimil Babka
Acked-by: Joonsoo Kim
-
This patch fixes the problem by aligning the initial pfn in isolate_freepages()
> to a pageblock boundary. This also allows replacing the end-of-pageblock
> alignment within the for loop with a simple pageblock_nr_pages increment.
>
> Signed-off-by: Vlastimil Babka
> Reported-by: Heesub S
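To illustrate the alignment the quoted changelog describes, a small sketch that rounds the starting pfn down to a pageblock boundary once and then advances a whole pageblock per iteration; the constants, range, and scan direction are simplified stand-ins.

#include <stdio.h>

#define PAGEBLOCK_NR_PAGES 512UL	/* assumed pageblock size in pages */

int main(void)
{
	unsigned long zone_end = 8 * PAGEBLOCK_NR_PAGES;
	unsigned long pfn = 3 * PAGEBLOCK_NR_PAGES + 100;	/* arbitrary start */

	/* align once up front ... */
	pfn &= ~(PAGEBLOCK_NR_PAGES - 1);

	/* ... so the loop can simply advance one pageblock per iteration */
	for (; pfn < zone_end; pfn += PAGEBLOCK_NR_PAGES)
		printf("scanning pageblock starting at pfn %lu\n", pfn);

	return 0;
}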
riable, but in fact it is not.
>
> This patch renames the 'high_pfn' variable to a hopefully less confusing name,
> and slightly changes its handling without a functional change. A comment made
> obsolete by recent changes is also updated.
>
> Signed-off-by: Vlastimil Babk
On Wed, Jun 18, 2014 at 01:48:15PM -0700, Andrew Morton wrote:
> On Mon, 16 Jun 2014 14:40:46 +0900 Joonsoo Kim wrote:
>
> > PPC KVM's CMA area management requires arbitrary bitmap granularity,
> > since they want to reserve very large memory and manage this region
>
r path")
> e58e263e5254 ("PPC, KVM, CMA: use general CMA reserved area management
> framework")
>
Hello,
If the patch below fixes the above problem, is it possible to retain the above patches
in linux-next?
Thanks.
-8<
>From e5c519c4b74914067e43cb55e2
d compaction for the Normal zone,
> and DMA32 zones on both nodes were thus not considered for compaction.
> On different machine, success rates were improved with __GFP_NO_KSWAPD
> allocations.
>
> Signed-off-by: Vlastimil Babka
> Acked-by: Minchan Kim
> Revi
On Mon, Jul 28, 2014 at 03:11:34PM +0200, Vlastimil Babka wrote:
> Async compaction aborts when it detects zone lock contention or need_resched()
> is true. David Rientjes has reported that in practice, most direct async
> compactions for THP allocation abort due to need_resched(). This means that
r understated by the vmstats.
Could you separate this patch from this patchset?
I think that this patch doesn't get reviewed much by other developers,
unlike the other patches.
>
> Signed-off-by: Vlastimil Babka
> Cc: Minchan Kim
> Acked-by: Mel Gorman
> Cc: Joonsoo Kim
> Cc: Mich
On Tue, Jul 29, 2014 at 12:31:13AM -0700, David Rientjes wrote:
> On Tue, 29 Jul 2014, Joonsoo Kim wrote:
>
> > I have a silly question here.
> > Why is need_resched() the criterion to stop async compaction?
> > need_resched() is flagged when the time slice runs out or other re
On Mon, Jul 07, 2014 at 04:33:09PM +0200, Vlastimil Babka wrote:
> On 07/07/2014 06:49 AM, Joonsoo Kim wrote:
> >Ccing Lisa, because there was a bug report last Saturday that may be related
> >to this topic.
> >
> >http://www.spinics.net/lists/linux-mm/msg75741.html
> >
On Mon, Jul 07, 2014 at 05:19:48PM +0200, Vlastimil Babka wrote:
> On 07/04/2014 09:57 AM, Joonsoo Kim wrote:
> >If the pageblock of a page on the pcp list is isolated now, we should free it to the isolate
> >buddy list to prevent future allocation of it. But the current code doesn't
> >do th
On Mon, Jul 07, 2014 at 05:50:09PM +0200, Vlastimil Babka wrote:
> On 07/04/2014 09:57 AM, Joonsoo Kim wrote:
> >Currently, when we free a page from the pcp list to the buddy allocator, we check
> >the pageblock of the page in order to isolate the page on an isolated
> >pageblock. Although this co
On Mon, Jul 07, 2014 at 05:57:49PM +0200, Vlastimil Babka wrote:
> On 07/04/2014 09:57 AM, Joonsoo Kim wrote:
> >When isolating a free page, what we want to know is which list
> >the page is linked on. If it is linked on the isolate migratetype buddy list,
> >we can skip waterma
On Tue, Jul 08, 2014 at 06:46:31PM +0200, Michal Nazarewicz wrote:
> On Mon, Jul 07 2014, Andrew Morton wrote:
> > What I proposed is that CMA call invalidate_bh_lrus() right at the
> > outset. Something along the lines of
> >
> > --- a/mm/page_alloc.c~a
> > +++ a/mm/page_alloc.c
> > @@ -6329,6 +
On Wed, Jul 09, 2014 at 03:30:02PM +0400, Andrey Ryabinin wrote:
> Add kernel address sanitizer hooks to mark allocated pages' addresses
> as accessible in the corresponding shadow region.
> Mark freed pages as inaccessible.
>
> Signed-off-by: Andrey Ryabinin
> ---
> include/linux/kasan.h | 6 +
On Wed, Jul 09, 2014 at 03:30:04PM +0400, Andrey Ryabinin wrote:
> This patch shares virt_to_cache() between slab and slub, and
> it is used in cache_from_obj() now.
> Later virt_to_cache() will be used by the kernel address sanitizer also.
I think that this patch won't be needed.
See comment in 15/21.
Thanks.
On Wed, Jul 09, 2014 at 03:30:08PM +0400, Andrey Ryabinin wrote:
> Some code in slub can validly touch memory marked by kasan as inaccessible.
> Even though slub.c isn't instrumented, functions called from it are instrumented,
> so to avoid false positive reports such places are protected by
>
On Wed, Jul 09, 2014 at 03:30:09PM +0400, Andrey Ryabinin wrote:
> With this patch kasan will be able to catch bugs in memory allocated
> by slub.
> When a slab page is allocated, the whole page is marked as inaccessible
> in the corresponding shadow memory.
> On allocation of a slub object, the requested allocation size
On Wed, Jul 09, 2014 at 03:30:14PM +0400, Andrey Ryabinin wrote:
> We need to manually unpoison the rounded-up allocation size for dname
> to avoid kasan's reports in __d_lookup_rcu.
> __d_lookup_rcu may validly read a little beyond the allocated size.
If it reads a little beyond the allocated size, IMHO, it
On Tue, Jul 15, 2014 at 11:37:56AM +0400, Andrey Ryabinin wrote:
> On 07/15/14 10:04, Joonsoo Kim wrote:
> > On Wed, Jul 09, 2014 at 03:30:08PM +0400, Andrey Ryabinin wrote:
> >> Some code in slub could validly touch memory marked by kasan as
> >> unaccessible.
>
On Mon, Jul 14, 2014 at 11:49:25AM +0200, Vlastimil Babka wrote:
> On 07/14/2014 08:22 AM, Joonsoo Kim wrote:
> >On Mon, Jul 07, 2014 at 04:33:09PM +0200, Vlastimil Babka wrote:
> >>On 07/07/2014 06:49 AM, Joonsoo Kim wrote:
> >>>Ccing Lisa, because there was bug
node isn't changed, so we don't need to retrieve this structure
every time we move the object. Maybe the compiler does this optimization,
but making it explicit is better.
Acked-by: Christoph Lameter
Signed-off-by: Joonsoo Kim
---
mm/slab.c |3 +--
1 file changed, 1 insertion(+), 2
Factor out the initialization of the array cache so it can be used in a following patch.
Acked-by: Christoph Lameter
Signed-off-by: Joonsoo Kim
---
mm/slab.c | 33 +++--
1 file changed, 19 insertions(+), 14 deletions(-)
diff --git a/mm/slab.c b/mm/slab.c
index 59b9a4c..00b6bbc
The BAD_ALIEN_MAGIC value isn't used anymore, so remove it.
Acked-by: Christoph Lameter
Signed-off-by: Joonsoo Kim
---
mm/slab.c |4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/mm/slab.c b/mm/slab.c
index 7820a45..60c9e11 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -
jes
Acked-by: Christoph Lameter
Signed-off-by: Joonsoo Kim
---
mm/slab.c |2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/slab.c b/mm/slab.c
index 179272f..f8a0ed1 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3067,7 +3067,7 @@ static void *cache_alloc_debugcheck_af
performance effect of this, but it would be better to hold the lock
as little as possible.
Commented by Christoph:
This is also good because kmem_cache_free is no longer called while
holding the node lock. So we avoid one case of recursion.
Acked-by: Christoph Lameter
Signed-off-by: Joonso
rent linux-next.
Thanks.
Joonsoo Kim (9):
slab: add unlikely macro to help compiler
slab: move up code to get kmem_cache_node in free_block()
slab: defer slab_destroy in free_block()
slab: factor out initialization of array cache
slab: introduce alien_cache
slab: use the lock on a
Now we have a separate alien_cache structure, so it'd be better to hold
the lock on the alien_cache while manipulating it. After that,
we don't need the lock on array_cache, so remove it.
Acked-by: Christoph Lameter
Signed-off-by: Joonsoo Kim
---
mm/sla
d, so removing it would be better. This patch prepares for that by
introducing alien_cache and using it. In the following patch,
we remove the spinlock from array_cache.
Acked-by: Christoph Lameter
Signed-off-by: Joonsoo Kim
---
mm/slab.c | 108 ++---
mm/s
er
Signed-off-by: Joonsoo Kim
---
mm/slab.c | 153 -
1 file changed, 153 deletions(-)
diff --git a/mm/slab.c b/mm/slab.c
index 854dfa0..7820a45 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -472,139 +472,6 @@ static struct kmem_cache kmem_
I haven't heard that this alien cache lock is contended, but reducing the
chance of contention is generally better. And with this change,
we can simplify the complex lockdep annotation in the slab code.
It will be implemented in the following patch.
Acked-by: Christoph Lameter
Signed-off-by: Jo
On Tue, Jul 01, 2014 at 03:21:21PM -0700, David Rientjes wrote:
> On Tue, 1 Jul 2014, Joonsoo Kim wrote:
>
> > node isn't changed, so we don't need to retrieve this structure
> > every time we move the object. Maybe the compiler does this optimization,
> >
On Tue, Jul 01, 2014 at 03:25:04PM -0700, David Rientjes wrote:
> On Tue, 1 Jul 2014, Joonsoo Kim wrote:
>
> > In free_block(), if freeing an object makes a new free slab and the number of
> > free_objects exceeds free_limit, we start to destroy this new free slab
> > while holding
On Tue, Jul 01, 2014 at 03:26:26PM -0700, David Rientjes wrote:
> On Tue, 1 Jul 2014, Joonsoo Kim wrote:
>
> > Factor out initialization of array cache to use it in following patch.
> >
> > Acked-by: Christoph Lameter
> > Signed-off-by: Joonsoo Kim
>
> Not
On Tue, Jul 01, 2014 at 03:15:47PM -0700, Andrew Morton wrote:
> On Tue, 1 Jul 2014 17:27:34 +0900 Joonsoo Kim wrote:
>
> > -static struct array_cache **alloc_alien_cache(int node, int limit, gfp_t gfp)
> > +static struct alien_cache *__alloc_alien_cac
On Wed, Jun 25, 2014 at 10:59:19AM +0200, Vlastimil Babka wrote:
> On 06/25/2014 02:53 AM, Joonsoo Kim wrote:
> >On Tue, Jun 24, 2014 at 05:42:50PM +0200, Vlastimil Babka wrote:
> >>On 06/24/2014 10:33 AM, Joonsoo Kim wrote:
> >>>On Fri, Jun 20, 2014 at 05:49:34P
nitialisation of the area. It's not clear, to me at least, what good
> is continuing the work on a PFN that is known to be invalid.
>
> Signed-off-by: Michal Nazarewicz
Acked-by: Joonsoo Kim
One question below.
> ---
>
On Wed, Jun 25, 2014 at 05:45:45PM +0400, Vladimir Davydov wrote:
> On Tue, Jun 24, 2014 at 04:38:41PM +0900, Joonsoo Kim wrote:
> > On Fri, Jun 13, 2014 at 12:38:22AM +0400, Vladimir Davydov wrote:
> > And, you said that this way of implementation would be slow because
> >
On Tue, Jul 15, 2014 at 10:36:35AM +0200, Vlastimil Babka wrote:
> >>A non-trivial fix that comes to mind (and I might have overlooked
> >>something) is something like:
> >>
> >>- distinguish MIGRATETYPE_ISOLATING and MIGRATETYPE_ISOLATED
> >>- CPU1 first sets MIGRATETYPE_ISOLATING before the drain
On Wed, Jul 16, 2014 at 01:14:26PM +0200, Vlastimil Babka wrote:
> On 07/16/2014 10:43 AM, Joonsoo Kim wrote:
> >> I think your plan of multiple parallel CMA allocations (and thus
> >> multiple parallel isolations) is also possible. The isolate pcplists
> >> can
mation.
This patchset is based on linux-next-20140703.
Thanks.
[1]: Aggressively allocate the pages on cma reserved memory
https://lkml.org/lkml/2014/5/30/291
Joonsoo Kim (10):
mm/page_alloc: remove unlikely macro on free_one_page()
mm/page_alloc: correct to clear guard attribut
, another future user of this function also
missed fixing up the number of freepages again.
Now we have the proper infrastructure, get_onbuddy_migratetype(), which can
be used to get the current migratetype of the buddy list. So fix this situation.
Signed-off-by: Joonsoo Kim
---
mm/page_alloc.c | 37
page is merged or split. Hence, this patch adds
set_onbuddy_migratetype() to set_page_order().
And this patch makes set/get_onbuddy_migratetype() enabled only if
memory isolation is enabled, because it isn't needed otherwise.
Signed-off-by: Joonsoo Kim
---
include/linux/m
ock 1
- release the zone lock
- grab the zone lock
- call __free_one_page() with MIGRATE_ISOLATE
- the free page goes into the isolate buddy list
and we can't use it anymore
To prevent this possibility, re-check the migratetype while holding the lock.
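A user-space sketch of the re-check-under-lock fix described above; the migratetype codes and the mutex stand in for the pageblock state and the zone lock.

#include <pthread.h>
#include <stdio.h>

#define MIGRATE_MOVABLE  0
#define MIGRATE_ISOLATE  1

static pthread_mutex_t zone_lock = PTHREAD_MUTEX_INITIALIZER;
static int pageblock_migratetype = MIGRATE_MOVABLE;

static void free_one_page(int migratetype)
{
	pthread_mutex_lock(&zone_lock);
	/*
	 * Re-check while holding the lock: another CPU may have isolated
	 * the pageblock after we sampled the migratetype.
	 */
	if (pageblock_migratetype == MIGRATE_ISOLATE)
		migratetype = MIGRATE_ISOLATE;
	printf("freeing to list %d\n", migratetype);
	pthread_mutex_unlock(&zone_lock);
}

int main(void)
{
	int mt = pageblock_migratetype;		/* sampled without the lock */

	pageblock_migratetype = MIGRATE_ISOLATE; /* simulate the race */
	free_one_page(mt);			/* still lands on the isolate list */
	return 0;
}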
Signed-off-by: Joonsoo Ki
ff-by: Joonsoo Kim
---
include/linux/mm.h | 24
mm/page_alloc.c | 18 +-
mm/page_isolation.c |4 ++--
3 files changed, 31 insertions(+), 15 deletions(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index e03dd29..278ecfd 100644
--- a/in
, because it can be done in the
common part, __free_one_page(). This unification provides an extra guarantee
that pages on an isolate pageblock don't go into a non-isolate buddy list.
This is a situation similar to the one described in the previous patch, so refer to it
if you need more explanation.
Signed-off-by: Joonso
, get_onbuddy_migratetype() is a better fit and cheaper than
get_pageblock_migratetype(), so use it.
Signed-off-by: Joonsoo Kim
---
mm/page_alloc.c |2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index e1c4c3e..d9fb8bb 100644
--- a/mm/page_alloc.c
+++ b/mm
think that this is not a problem, because isolation means that we will use
pages on the isolate pageblock specially, so it will be split soon in any case.
Signed-off-by: Joonsoo Kim
---
mm/page_alloc.c | 18 ++
1 file changed, 18 insertions(+)
diff --git a/mm/page_alloc.c b/mm/page_all
Isolation is a really rare case, so !is_migrate_isolate() is the likely case.
Remove the unlikely macro.
Signed-off-by: Joonsoo Kim
---
mm/page_alloc.c |2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 8dac0f0..0d4cf7a 100644
--- a/mm
his patch, is fixing the freepage accounting.
If we clear a guard page and link it onto the isolate buddy list, we should
not increase the freepage count.
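A small sketch of the accounting rule above: pages linked onto the isolate buddy list are not added to the free-page counter; the counters and migratetype codes are illustrative stand-ins.

#include <stdio.h>

#define MIGRATE_MOVABLE  0
#define MIGRATE_ISOLATE  1

static unsigned long nr_free;

static int is_migrate_isolate(int migratetype)
{
	return migratetype == MIGRATE_ISOLATE;
}

static void link_to_buddy(int migratetype, unsigned int order)
{
	/* ... link the page onto free_list[migratetype] here ... */
	if (!is_migrate_isolate(migratetype))
		nr_free += 1UL << order;	/* count only usable freepages */
}

int main(void)
{
	link_to_buddy(MIGRATE_MOVABLE, 0);	/* counted */
	link_to_buddy(MIGRATE_ISOLATE, 0);	/* not counted */
	printf("nr_free = %lu\n", nr_free);
	return 0;
}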
Signed-off-by: Joonsoo Kim
---
mm/page_alloc.c | 29 -
1 file changed, 16 insertions(+), 13 deletions(-)
diff --g
lling __free_one_page(). And, if we find the page on an isolated
pageblock, change the migratetype to MIGRATE_ISOLATE to prevent future
allocation of this page and a freepage counting problem.
Signed-off-by: Joonsoo Kim
---
mm/page_alloc.c | 14 --
1 file changed, 8 insertions(+), 6 deletions(-)
Ccing Lisa, because there was a bug report last Saturday that may be related
to this topic.
http://www.spinics.net/lists/linux-mm/msg75741.html
On Fri, Jul 04, 2014 at 05:33:27PM +0200, Vlastimil Babka wrote:
> On 07/04/2014 09:57 AM, Joonsoo Kim wrote:
> > Hello,
>
> Hi Joonsoo,
>