On Thu, Aug 21, 2014 at 09:21:30AM -0500, Christoph Lameter wrote:
> On Thu, 21 Aug 2014, Joonsoo Kim wrote:
>
> > So, this patch tries to use the percpu allocator in SLAB. This
> > simplifies the initialization step in SLAB so that we could maintain
> > the SLAB code more easily.
>
On Thu, Aug 21, 2014 at 04:27:52PM +0800, Zhang Yanfei wrote:
> Hello Joonsoo,
Hello. :)
>
> Seems like this is a cleanup patchset. I want to mention another
> tiny cleanup here.
I think these are not only a cleanup but also a build improvement.
> You removed the "struct slab" before but it seems
On Mon, Aug 25, 2014 at 08:13:58AM -0500, Christoph Lameter wrote:
> On Mon, 25 Aug 2014, Joonsoo Kim wrote:
>
> > On Thu, Aug 21, 2014 at 09:21:30AM -0500, Christoph Lameter wrote:
> > > On Thu, 21 Aug 2014, Joonsoo Kim wrote:
> > >
> > > > So, this
On Mon, Aug 25, 2014 at 10:27:58AM -0500, Christoph Lameter wrote:
> On Thu, 21 Aug 2014, Joonsoo Kim wrote:
>
> > +static int __init setup_slab_nomerge(char *str)
> > +{
> > + slab_nomerge = 1;
> > + return 1;
> > +}
> > +__setup("slub_nomerge"
On Mon, Aug 25, 2014 at 10:29:19AM -0500, Christoph Lameter wrote:
> On Thu, 21 Aug 2014, Joonsoo Kim wrote:
>
> > diff --git a/mm/slab.c b/mm/slab.c
> > index 09b060e..a1cc1c9 100644
> > --- a/mm/slab.c
> > +++ b/mm/slab.c
> > @@ -2052,6 +2052,26 @@ static int
On Mon, Aug 25, 2014 at 09:05:55AM +0900, Minchan Kim wrote:
> @@ -513,6 +540,14 @@ static int zram_bvec_write(struct zram *zram, struct
> bio_vec *bvec, u32 index,
> ret = -ENOMEM;
> goto out;
> }
> +
> + if (zram->limit_pages &&
> + zs_get_total_
can
avoid re-fetching in the common case with this optimization.
Signed-off-by: Joonsoo Kim
---
mm/page_alloc.c | 13 -
1 file changed, 8 insertions(+), 5 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 51e0d13..6c952b6 100644
--- a/mm/page_alloc.c
+++ b/mm/pa
think it doesn't matter
because 1) almost all allocation requests are for equal or below
pageblock order, 2) the caller of pageblock isolation will use this
freepage so the freepage will be split in any case, and 3) merging would
happen soon after some alloc/free on this and the buddy pageblock.
Signed-off-by: Jo
e
freepage accounting problem on freepages with more than pageblock order.
Signed-off-by: Joonsoo Kim
---
mm/page_alloc.c | 22 +++---
1 file changed, 7 insertions(+), 15 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 6c952b6..809bfd3 100644
--- a/mm/page_alloc.
h this, we can avoid re-checking the migratetype in the common case and
do it only if there is an isolated pageblock. This solves the above
mentioned problems.
Signed-off-by: Joonsoo Kim
---
include/linux/mmzone.h |4
include/linux/page-isolation.h |8
mm/page_alloc.c
ate the pages on cma reserved memory
https://lkml.org/lkml/2014/5/30/291
Joonsoo Kim (4):
mm/page_alloc: fix incorrect isolation behavior by rechecking
migratetype
mm/page_alloc: add freepage on isolate pageblock to correct buddy
list
mm/page_alloc: move migratetype recheck logic to __
Hello, Minchan and David.
On Tue, Aug 26, 2014 at 08:22:29AM -0400, David Horner wrote:
> On Tue, Aug 26, 2014 at 3:55 AM, Minchan Kim wrote:
> > Hey Joonsoo,
> >
> > On Tue, Aug 26, 2014 at 04:37:30PM +0900, Joonsoo Kim wrote:
> >> On Mon, Aug 25, 2014 at 09:05
On Wed, Aug 27, 2014 at 11:51:32AM +0900, Minchan Kim wrote:
> Hey Joonsoo,
>
> On Wed, Aug 27, 2014 at 10:26:11AM +0900, Joonsoo Kim wrote:
> > Hello, Minchan and David.
> >
> > On Tue, Aug 26, 2014 at 08:22:29AM -0400, David Horner wrote:
> > > On Tue,
On Wed, Aug 27, 2014 at 04:28:19PM +0900, Minchan Kim wrote:
> On Wed, Aug 27, 2014 at 02:04:38PM +0900, Joonsoo Kim wrote:
> > On Wed, Aug 27, 2014 at 11:51:32AM +0900, Minchan Kim wrote:
> > > Hey Joonsoo,
> > >
> > > On Wed, Aug 27, 2014 at 10:26:11AM +09
on VM_BUG_ON().
>
> This replaces get_order() with order_base_2() (round-up version of ilog2).
>
> Suggested-by: Paul Mackerras
> Cc: Alexander Graf
> Cc: Aneesh Kumar K.V
> Cc: Joonsoo Kim
> Cc: Benjamin Herrenschmidt
> Signed-off-by: Alexey Kardashevskiy
Sorry
ed to inline it. Therefore, move it to slab_common.c and
move the kmem_cache definition to an internal header.
After this change, we can change the kmem_cache definition easily
without a full kernel build. For instance, we can turn on/off
CONFIG_SLUB_STATS without a full kernel build.
Signed-off-by: Joonsoo Ki
's okay to change this situation.
From this change, we can turn on/off CONFIG_DEBUG_SLAB without a full
kernel build and remove some complicated '#if' definitions. It looks
more beneficial to me.
Signed-off-by: Joonsoo Kim
---
include/linux/slab.h | 22 --
mm/s
mm/slab.o | grep -e "T kfree" -e "T kmem_cache_free"
1110 01b5 T kfree
0750 0181 T kmem_cache_free
You can see slightly reduced size of text: 0x228->0x1b5, 0x216->0x181.
Signed-off-by: Joonsoo Kim
---
mm/slab.c | 38
.
Signed-off-by: Joonsoo Kim
---
mm/slab.c |2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/slab.c b/mm/slab.c
index d80b654..d364e3f 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3406,7 +3406,7 @@ static inline void __cache_free(struct kmem_cache
*cachep, void *objp
ned-off-by: Joonsoo Kim
---
include/linux/slab_def.h | 20 +---
mm/slab.c| 237 +++---
mm/slab.h|1 -
3 files changed, 81 insertions(+), 177 deletions(-)
diff --git a/include/linux/slab_def.h b/include/linux/slab_def.h
in
Slab merge is a good feature to reduce fragmentation. Now, it is only
applied to SLUB, but it would be good to apply it to SLAB, too. This
patch is a preparation step for applying slab merge to SLAB by
commonizing the slab merge logic.
Signed-off-by: Joonsoo Kim
---
mm/slab.h| 15 +
mm
>0x2e5
kfree: 0x256->0x228
kmem_cache_free: 0x24c->0x216
The code size of each function is reduced slightly.
Signed-off-by: Joonsoo Kim
---
mm/slab.c |4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/slab.c b/mm/slab.c
index d364e3f..c9f137f 100644
--- a/mm/slab.c
.
* After boot, sleep 20; cat /proc/meminfo | grep Slab
Slab: 25136 kB
Slab: 24364 kB
We can save about 3% of the memory used by slab.
Signed-off-by: Joonsoo Kim
---
mm/slab.c | 20
mm/slab.h |2 +-
2 files changed, 21 insertions(+), 1 deletion(-)
diff --git a/mm/slab.c b/mm/slab.c
heck.c:95:8: error: dereferencing pointer to incomplete type
> ../mm/kmemcheck.c:95:21: error: dereferencing pointer to incomplete type
>
> ../mm/slab.h: In function 'cache_from_obj':
> ../mm/slab.h:283:2: error: implicit declaration of function
> 'memcg_kmem_enabled
On Thu, Sep 04, 2014 at 09:14:19PM -0400, Theodore Ts'o wrote:
> On Fri, Sep 05, 2014 at 09:37:05AM +0900, Gioh Kim wrote:
> > >But what were the problems which were observed in standard kernels and
> > >what effect did this patchset have upon them? Some quantitative
> > >measurements will really
header files to fix kmemcheck.c build errors.
[iamjoonsoo@lge.com] move up memcontrol.h header
to fix build failure if CONFIG_MEMCG_KMEM=y too.
Signed-off-by: Randy Dunlap
Signed-off-by: Joonsoo Kim
---
mm/kmemcheck.c |1 +
mm/slab.h |2 ++
2 files changed, 3 insertions(+)
diff --git a
On Thu, Sep 04, 2014 at 11:17:35PM -0400, Theodore Ts'o wrote:
> Joonsoo,
>
> Thanks for the update. I've applied Gioh's patches to the ext4 tree,
> but I'd appreciate a further clarification. My understanding with the
> problem you were trying to address is that with the current CMA
> implement
On Sun, Aug 31, 2014 at 04:33:12PM -0700, Randy Dunlap wrote:
> On 08/31/14 07:48, Randy Dunlap wrote:
> > On 08/31/14 04:36, Andrey Ryabinin wrote:
> >> 2014-08-30 5:48 GMT+04:00 Randy Dunlap :
> >>> From: Randy Dunlap
> >>>
> >>> Add header file to fix kmemcheck.c build errors:
> >>>
> >>> ../mm
On Fri, Aug 29, 2014 at 01:46:41PM -0400, Naoya Horiguchi wrote:
> On Tue, Aug 26, 2014 at 05:08:15PM +0900, Joonsoo Kim wrote:
> > There are two paths to reach core free function of buddy allocator,
> > __free_one_page(), one is free_one_page()->__free_one_page() an
On Fri, Aug 29, 2014 at 12:52:44PM -0400, Naoya Horiguchi wrote:
> Hi Joonsoo,
>
> On Tue, Aug 26, 2014 at 05:08:18PM +0900, Joonsoo Kim wrote:
> > Current pageblock isolation logic could isolate each pageblock
> > individually. This causes freepage accounting pro
On Wed, Aug 27, 2014 at 06:37:33PM -0500, Christoph Lameter wrote:
> One minor nit. Otherwise
>
> Acked-by: Christoph Lameter
>
> On Thu, 21 Aug 2014, Joonsoo Kim wrote:
>
> > @@ -2041,56 +1982,63 @@ static size_t calculate_slab_order(struct
> > kmem_cache *ca
On Sun, Aug 31, 2014 at 05:17:14PM -0700, Randy Dunlap wrote:
> On 08/31/14 17:13, Joonsoo Kim wrote:
> > On Sun, Aug 31, 2014 at 04:33:12PM -0700, Randy Dunlap wrote:
> >> On 08/31/14 07:48, Randy Dunlap wrote:
> >>> On 08/31/14 04:36, Andrey Ryabinin wrote:
> >
his patch, is that it fixes freepage accounting.
If we clear a guard page and link it onto the isolate buddy list, we
should not increase the freepage count.
Acked-by: Vlastimil Babka
Signed-off-by: Joonsoo Kim
---
mm/page_alloc.c | 29 -
1 file changed, 16 insertions(+), 13 d
l here. Current
logic handles each CPU's pcp update one by one. To reduce sending IPIs,
we need to re-organize the code to handle all CPUs' pcp updates in one
go. This patch implements these requirements.
Signed-off-by: Joonsoo Kim
---
mm/page_alloc.c | 139 -
llocate the pages on cma reserved memory
https://lkml.org/lkml/2014/5/30/291
Joonsoo Kim (8):
mm/page_alloc: correct to clear guard attribute in DEBUG_PAGEALLOC
mm/isolation: remove unstable check for isolated page
mm/page_alloc: fix pcp high, batch management
mm/isolation: close t
llowing patch
will fix it, too.
Signed-off-by: Joonsoo Kim
---
include/linux/page-isolation.h |2 +
mm/internal.h |3 ++
mm/page_alloc.c| 28 ++-
mm/page_isolation.c| 107
4 files changed, 1
pageblock isolation' and 'mm/isolation: change pageblock isolation logic
to fix freepage counting bugs') solves the race related to pageblock
isolation. So, this misplacement cannot happen and this workaround
isn't needed anymore.
Signed-off-by: Joonsoo Kim
---
mm/page_isolat
ility, disabling and draining the pcp
list is needed during isolation. It guarantees that there is no page on
the pcp list on any cpu during isolation, so the misplacement problem
can't happen. Note that this doesn't fix the freepage counting problem.
To fix it, we need more logic. Following patches will do i
So remove
it.
Signed-off-by: Joonsoo Kim
---
mm/page_isolation.c |6 +-
1 file changed, 1 insertion(+), 5 deletions(-)
diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index d1473b2..3100f98 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -198,11 +198,7 @@ __test_
that work.
Signed-off-by: Joonsoo Kim
---
mm/page_isolation.c | 45 +
1 file changed, 29 insertions(+), 16 deletions(-)
diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index 898361f..b91f9ec 100644
--- a/mm/page_isolation.c
+++ b/mm
ike as CMA and aligning the range is the caller's duty.
Although we could go with solution 1, this patch is still useful since
some synchronization calls are reduced by calling them in batch.
Signed-off-by: Joonsoo Kim
---
mm/page_isolation.c | 105
On Wed, Aug 06, 2014 at 04:18:26PM +0900, Joonsoo Kim wrote:
> Joonsoo Kim (8):
> mm/page_alloc: correct to clear guard attribute in DEBUG_PAGEALLOC
> mm/isolation: remove unstable check for isolated page
> mm/page_alloc: fix pcp high, batch management
> mm/isolation: clo
On Fri, Aug 01, 2014 at 09:12:06AM +0900, Gioh Kim wrote:
>
>
> 2014-08-01 오전 7:57, Andrew Morton 쓴 글:
> >On Thu, 31 Jul 2014 11:22:35 +0900 Gioh Kim wrote:
> >
> >>The previous PATCH inserts invalidate_bh_lrus() only into CMA code.
> >>HOTPLUG also needs to drop the bh of the lru.
> >>So v2 inserts in
On Wed, Aug 06, 2014 at 05:12:20PM +0200, Vlastimil Babka wrote:
> On 08/06/2014 09:18 AM, Joonsoo Kim wrote:
> >Overall design of changed pageblock isolation logic is as following.
>
> I'll reply here since the overall design part is described in this
> patch (would be wo
On Thu, Aug 07, 2014 at 08:49:00AM +0800, Zhang Yanfei wrote:
> Hi Joonsoo,
>
> The first 3 patches in this patchset are in a bit of mess.
Sorry about that.
I will do better in the next spin. ):
Thanks.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a messa
On Thu, Aug 07, 2014 at 10:11:14AM +0800, Zhang Yanfei wrote:
> Hi Joonsoo,
>
> On 08/06/2014 03:18 PM, Joonsoo Kim wrote:
> > per cpu pages structure, aka pcp, has high and batch values to control
> > how many pages we perform caching. These values could be updated
> >
2014-08-07 17:53 GMT+09:00 Vlastimil Babka :
> On 08/07/2014 10:19 AM, Joonsoo Kim wrote:
>>>
>>> Is it needed to disable the pcp list? Shouldn't drain be enough?
>>> After the drain you already are sure that future freeing will see
>>> MIGRATE_ISOLATE
2014-08-07 20:52 GMT+09:00 Geert Uytterhoeven :
> Hi Joonsoo,
>
> On Tue, Jul 1, 2014 at 10:27 AM, Joonsoo Kim wrote:
>> BAD_ALIEN_MAGIC value isn't used anymore. So remove it.
>>
>> Acked-by: Christoph Lameter
>> Signed-off-by: Joonsoo Kim
>> ---
>
2014-08-07 21:53 GMT+09:00 Geert Uytterhoeven :
> Hi,
>
> On Thu, Aug 7, 2014 at 2:36 PM, Joonsoo Kim wrote:
>>> With latest mainline, I'm getting a crash during bootup on m68k/ARAnyM:
>>>
>>> enable_cpucache failed for radix_tree_node, error 12.
>>
2014-08-07 22:04 GMT+09:00 Vlastimil Babka :
> On 08/07/2014 02:26 PM, Joonsoo Kim wrote:
>>
>> 2014-08-07 17:53 GMT+09:00 Vlastimil Babka :
>>>
>>> Ah, right. I thought that everything going to pcp lists would be through
>>>
>>> freeing which wou
On Thu, Aug 07, 2014 at 03:49:17PM +0200, Vlastimil Babka wrote:
> On 08/06/2014 09:18 AM, Joonsoo Kim wrote:
> >The check '!PageBuddy(page) && page_count(page) == 0 &&
> >migratetype == MIGRATE_ISOLATE' would mean the page on free processing.
>
> W
On Thu, Aug 07, 2014 at 04:34:41PM +0200, Vlastimil Babka wrote:
> On 08/06/2014 09:18 AM, Joonsoo Kim wrote:
> >We got migratetype of the freeing page without holding the zone lock so
> >it could be racy. There are two cases of this race.
> >
> >1. pages are added
On Thu, Aug 07, 2014 at 05:15:17PM +0200, Vlastimil Babka wrote:
> On 08/06/2014 09:18 AM, Joonsoo Kim wrote:
> >Current pageblock isolation logic has a problem that results in incorrect
> >freepage counting. move_freepages_block() doesn't return number of
> >moved pages
IG_NUMA, but reverting the issued commit is better
to me at this time.
Reported-by: Geert Uytterhoeven
Signed-off-by: Joonsoo Kim
---
mm/slab.c |4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/mm/slab.c b/mm/slab.c
index c727a16..0376429 100644
--- a/mm/slab.c
+++ b/mm/s
On Thu, Aug 07, 2014 at 10:03:09PM +0900, Joonsoo Kim wrote:
> 2014-08-07 21:53 GMT+09:00 Geert Uytterhoeven :
> > Hi,
> >
> > On Thu, Aug 7, 2014 at 2:36 PM, Joonsoo Kim wrote:
> >>> With latest mainline, I'm getting a crash during bootup on m68k/ARAnyM:
On Tue, Aug 12, 2014 at 11:45:32AM +0200, Vlastimil Babka wrote:
> On 08/12/2014 07:17 AM, Minchan Kim wrote:
> >On Wed, Aug 06, 2014 at 04:18:33PM +0900, Joonsoo Kim wrote:
> >>
> >>One solution to this problem is checking pageblock migratetype with
> >>holdin
On Mon, Aug 11, 2014 at 02:53:35PM +0530, Aneesh Kumar K.V wrote:
> Joonsoo Kim writes:
>
> > The check '!PageBuddy(page) && page_count(page) == 0 &&
> > migratetype == MIGRATE_ISOLATE' would mean the page on free processing.
> > Although it
On Tue, Aug 12, 2014 at 01:45:23AM +, Minchan Kim wrote:
> On Wed, Aug 06, 2014 at 04:18:30PM +0900, Joonsoo Kim wrote:
> > In __free_one_page(), we check the buddy page if it is guard page.
> > And, if so, we should clear guard attribute on the buddy page. But,
> >
On Tue, Aug 12, 2014 at 01:24:09AM +, Minchan Kim wrote:
> Hey Joonsoo,
>
> On Wed, Aug 06, 2014 at 04:18:28PM +0900, Joonsoo Kim wrote:
> > per cpu pages structure, aka pcp, has high and batch values to control
> > how many pages we perform caching. This va
On Tue, Aug 12, 2014 at 05:17:45AM +, Minchan Kim wrote:
> On Wed, Aug 06, 2014 at 04:18:33PM +0900, Joonsoo Kim wrote:
> > 2. #1 requires IPI for synchronization and we can't hold the zone lock
> > during processing IPI. In this time, some pages could be moved from buddy
On Fri, Aug 01, 2014 at 10:51:07AM +0200, Vlastimil Babka wrote:
> On 07/30/2014 06:22 PM, Vlastimil Babka wrote:
> >On 07/29/2014 11:12 AM, Vlastimil Babka wrote:
> >>On 07/29/2014 08:38 AM, Joonsoo Kim wrote:
> >>>
> >>>I still don't u
On Sun, Aug 03, 2014 at 10:57:02PM -0400, Sasha Levin wrote:
> Hi all,
>
> While fuzzing with trinity inside a KVM tools guest running the latest -next
> kernel, I've stumbled on the following spew:
>
>
> [ 1226.701012] WARNING: CPU: 6 PID: 8624 at kernel/smp.c:673
> on_each_cpu_cond+0x27f/0x2f
On Wed, Oct 22, 2014 at 10:55:17AM -0500, Christoph Lameter wrote:
> We had to insert a preempt enable/disable in the fastpath a while ago. This
> was mainly due to a lot of state that is kept to be allocating from the per
> cpu freelist. In particular the page field is not covered by
> this_cpu_cm
mation.
Thanks.
[1]: https://lkml.org/lkml/2014/7/4/79
[2]: lkml.org/lkml/2014/8/6/52
[3]: Aggressively allocate the pages on cma reserved memory
https://lkml.org/lkml/2014/5/30/291
Joonsoo Kim (4):
mm/page_alloc: fix incorrect isolation behavior by rechecking
migratetype
mm/page_allo
e
freepage accounting problem on freepages with more than pageblock order.
Cc:
Signed-off-by: Joonsoo Kim
---
mm/page_alloc.c | 24
1 file changed, 8 insertions(+), 16 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 5d2f807..433f92c 100644
--- a/mm/pa
ff-by: Joonsoo Kim
---
mm/page_alloc.c | 15 ---
1 file changed, 12 insertions(+), 3 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 433f92c..3ec58db 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -571,6 +571,7 @@ static inline void __free_one_page(struct page
hout this, the abovementioned case 1 could happen.
Cc:
Signed-off-by: Joonsoo Kim
---
include/linux/mmzone.h |9 +
include/linux/page-isolation.h |8
mm/page_alloc.c| 11 +--
mm/page_isolation.c|2 ++
4 files changed, 28 ins
can
avoid re-fetching in the common case with this optimization.
Cc:
Signed-off-by: Joonsoo Kim
---
mm/page_alloc.c | 13 -
1 file changed, 8 insertions(+), 5 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 4a5d8e5..5d2f807 100644
--- a/mm/page_alloc.c
+++ b/mm/pa
the mailing list and do a
> reply-to-all to that, adding suitable additional cc's
>
> *** Remember to use Documentation/SubmitChecklist when testing your code ***
>
> The -mm tree is included into linux-next and is updated
> there every 3-4 working days
>
> -
On Thu, Nov 06, 2014 at 04:09:08PM +0800, Weijie Yang wrote:
> If race between isolatation and allocation happens, we could need to move
> some freepages to MIGRATE_ISOLATE in __test_page_isolated_in_pageblock().
> The current code ignores the zone_freepage accounting after the move,
> which cause
this patch, CMA with more than pageblock
order always fails, but, with this patch, it will succeed.
Signed-off-by: Joonsoo Kim
---
mm/compaction.c |6 --
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/mm/compaction.c b/mm/compaction.c
index ec74cf0..212682a 10064
at problems are solved.
On my simple memory offlining test, these problems also occur in that
environment.
This patchset is based on v3.18-rc2.
Please see individual patches for more information.
Thanks.
Joonsoo Kim (4):
mm/page_alloc: fix incorrect isolation behavior by rechecking
m
hout this, the abovementioned case 1 could happen.
Cc:
Acked-by: Minchan Kim
Acked-by: Michal Nazarewicz
Acked-by: Vlastimil Babka
Signed-off-by: Joonsoo Kim
---
include/linux/mmzone.h |9 +
include/linux/page-isolation.h |8
mm/page_alloc.c| 11
can
avoid re-fetching in the common case with this optimization.
This patch also corrects the migratetype in the tracepoint output.
Cc:
Acked-by: Minchan Kim
Acked-by: Michal Nazarewicz
Acked-by: Vlastimil Babka
Signed-off-by: Joonsoo Kim
---
mm/page_alloc.c | 13 -
1 file changed, 8
e
freepage counting problem on freepages with more than pageblock order.
Changes from v4:
Only the freepage counting logic is moved. Others remain as is.
Cc:
Signed-off-by: Joonsoo Kim
---
mm/page_alloc.c | 14 +++---
1 file changed, 3 insertions(+), 11 deletions(-)
diff --git a/mm/page_al
om v4:
Consider merging on un-isolation process.
Cc:
Signed-off-by: Joonsoo Kim
---
mm/internal.h | 25 +
mm/page_alloc.c | 40 +---
mm/page_isolation.c | 31 +++
3 files changed, 69 insertions(+
On Wed, Oct 29, 2014 at 02:51:11PM +0100, Vlastimil Babka wrote:
> On 10/28/2014 08:16 AM, Joonsoo Kim wrote:> On Mon, Oct 27, 2014 at
> 10:11:31AM +0100, Vlastimil Babka wrote:
> >> On 10/27/2014 07:46 AM, Joonsoo Kim wrote:
> >>> On Tue, Oct 07, 2014 at 05:33:35P
ove failure occurs. However,
on x86, kmalloc-256 is luckily aligned to 256 bytes, so the problem
didn't happen there.
To fix this problem, this patch introduces alignment mismatch check
in find_mergeable(). This will fix the problem.
Reported-by: Markos Chandras
Tested-by: Markos Chandras
S
used kmem_caches, such as
kmalloc kmem_caches.
Signed-off-by: Joonsoo Kim
---
mm/slab_common.c |2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 2657084..f6510d9 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -250,7 +2
On Tue, Oct 28, 2014 at 12:08:46PM -0700, Florian Fainelli wrote:
> Hello,
>
> While debugging why some dma_alloc_coherent() allocations where
> returning NULL on our brcmstb platform, specifically with
> drivers/net/ethernet/broadcom/bcmcsysport.c, I came across the
> fatal_signal_pending() check
On Mon, Oct 27, 2014 at 10:39:01AM +0100, Vlastimil Babka wrote:
> On 10/27/2014 08:35 AM, Joonsoo Kim wrote:> On Tue, Oct 07, 2014 at
> 05:33:38PM +0200, Vlastimil Babka wrote:
> > Hmm... I'm not sure that this patch is good thing.
> >
> > In asynchronous compac
On Mon, Oct 27, 2014 at 10:11:31AM +0100, Vlastimil Babka wrote:
> On 10/27/2014 07:46 AM, Joonsoo Kim wrote:
> > On Tue, Oct 07, 2014 at 05:33:35PM +0200, Vlastimil Babka wrote:
> >
> > Hello,
> >
> > compaction_suitable() has one more zone_watermark_ok().
On Mon, Oct 27, 2014 at 11:33:20AM +0100, Vlastimil Babka wrote:
> On 10/23/2014 10:10 AM, Joonsoo Kim wrote:
> > Changes from v3:
> > Add one more check in free_one_page() that checks whether migratetype is
> > MIGRATE_ISOLATE or not. Without this, abovementioned case 1 cou
On Mon, Oct 27, 2014 at 11:40:23AM +0100, Vlastimil Babka wrote:
> On 10/23/2014 10:10 AM, Joonsoo Kim wrote:
> > All the caller of __free_one_page() has similar migratetype recheck logic,
> > so we can move it to __free_one_page(). This reduce line of code and help
> > future
id you start to bisect from v3.18-rc1?
I'd like to be sure that this is another bug which is fixed by the
following commit.
commit 85c9f4b04a08f6bc770b77530c22d04103468b8f
Author: Joonsoo Kim
Date: Mon Oct 13 15:51:01 2014 -0700
mm/slab: fix unaligned access on sparc64
This fix is m
2014-10-28 22:24 GMT+09:00 Markos Chandras :
> On 10/28/2014 01:19 PM, Markos Chandras wrote:
>> On 10/28/2014 01:01 PM, Joonsoo Kim wrote:
>>> 2014-10-28 19:45 GMT+09:00 Markos Chandras :
>>>> Hi,
>>>>
>>>> It seems I am unable to boot my
2014-10-28 22:48 GMT+09:00 Joonsoo Kim :
> 2014-10-28 22:24 GMT+09:00 Markos Chandras :
>> On 10/28/2014 01:19 PM, Markos Chandras wrote:
>>> On 10/28/2014 01:01 PM, Joonsoo Kim wrote:
>>>> 2014-10-28 19:45 GMT+09:00 Markos Chandras :
>>>>> Hi,
>
2014-10-28 23:32 GMT+09:00 Markos Chandras :
> On 10/28/2014 02:21 PM, Joonsoo Kim wrote:
>> 2014-10-28 22:48 GMT+09:00 Joonsoo Kim :
>>> 2014-10-28 22:24 GMT+09:00 Markos Chandras :
>>>> On 10/28/2014 01:19 PM, Markos Chandras wrote:
>>>>> On 10/28/201
2014-10-29 0:45 GMT+09:00 Markos Chandras :
> On 10/28/2014 03:00 PM, Joonsoo Kim wrote:
>> 2014-10-28 23:32 GMT+09:00 Markos Chandras :
>>> On 10/28/2014 02:21 PM, Joonsoo Kim wrote:
>>>> 2014-10-28 22:48 GMT+09:00 Joonsoo Kim :
>>>>> 2014-10-28 22:2
2014-10-12 11:15 GMT+09:00 David Miller :
>
> I'm getting tons of the following on sparc64:
>
> [603965.383447] Kernel unaligned access at TPC[546b58] free_block+0x98/0x1a0
> [603965.396987] Kernel unaligned access at TPC[546b60] free_block+0xa0/0x1a0
> [603965.410523] Kernel unaligned access at TP
2014-10-13 2:30 GMT+09:00 David Miller :
> From: Joonsoo Kim
> Date: Mon, 13 Oct 2014 02:22:15 +0900
>
>> Could you test below patch?
>> If it fixes your problem, I will send it with proper description.
>
> It works, I just tested using ARCH_KMALLOC_MINALIGN which would
callbacks suppressed
snip...
This patch provides a proper alignment parameter when allocating the cpu
cache to fix this unaligned memory access problem on sparc64.
Reported-by: David Miller
Tested-by: David Miller
Signed-off-by: Joonsoo Kim
---
mm/slab.c |2 +-
1 file changed, 1 insertion(+), 1 del
t; In all of the cases, the address is 4-byte aligned but not 8-byte
> > aligned. And they are vmalloc addresses.
> >
> > Which made me suspect the percpu commit:
> >
> >
> > commit bf0dea23a9c094ae869a88bb694fbe966671bf6d
> > Author: J
On Mon, Oct 13, 2014 at 08:04:16PM -0400, David Miller wrote:
> From: Joonsoo Kim
> Date: Tue, 14 Oct 2014 08:52:19 +0900
>
> > I'd like to know that your another problem is related to commit
> > bf0dea23a9c0 ("mm/slab: use percpu allocator for cpu cache")
tion after just one call of isolate_migratepages_block().
Signed-off-by: Joonsoo Kim
---
mm/compaction.c |3 +++
1 file changed, 3 insertions(+)
diff --git a/mm/compaction.c b/mm/compaction.c
index edba18a..ec74cf0 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -78
On Thu, Oct 02, 2014 at 10:47:51AM -0400, Dan Streetman wrote:
> >> I think that using ref would makes intuitive code. Although there is
> >> some memory overhead, it is really small. So I prefer to this way.
> >>
> >> But, if you think that removing ref is better, I will do it.
> >> Please let me
g logic in zs_create_pool (Dan)
Changes from v4:
- Remove reference count. Instead, use class->index to identify
merged size_class (Minchan, Dan)
Signed-off-by: Joonsoo Kim
---
mm/zsmalloc.c | 80 +++--
1 file changed, 66 insertions(+),