il to boot. The percpu allocator regards GFP_KERNEL as
a sign that the system is fully initialized, so it aggressively tries to make
spare room. With GFP_NOWAIT it doesn't do that, so the boot succeeds.
Signed-off-by: Joonsoo Kim
---
mm/slab.c |5 -
1 file changed, 4 insertions(+), 1 deletion(-)
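A minimal sketch of the idea, assuming a helper such as slab_is_available()
can be used at the call site (this is an illustration, not the actual
mm/slab.c hunk):

	static inline gfp_t boot_gfp_flags(void)
	{
		/*
		 * Before the allocator is fully up, GFP_KERNEL would make the
		 * percpu allocator aggressively try to build spare room (and
		 * possibly sleep), so fall back to GFP_NOWAIT during early boot.
		 */
		return slab_is_available() ? GFP_KERNEL : GFP_NOWAIT;
	}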
di
, that is, if we have a pfmemalloc object and we are not a legitimate
user of this memory, exchanging it for a non-pfmemalloc object, is
unchanged.
Signed-off-by: Joonsoo Kim
---
mm/slab.c | 91 +++--
1 file changed, 52 insertions(+), 39 deletions
me know.
Thanks.
Joonsoo Kim (6):
mm/slab: fix gfp flags of percpu allocation at boot phase
mm/slab: remove kmemleak_erase() call
mm/slab: clean-up __ac_get_obj() to prepare future changes
mm/slab: rearrange irq management
mm/slab: cleanup cache_alloc()
mm/slab: allocation fastpath
Hello,
On Tue, Dec 30, 2014 at 06:17:25PM +0800, Hui Zhu wrote:
> The original of this patch [1] is used to fix the issue in Joonsoo's CMA patch
> "CMA: always treat free cma pages as non-free on watermark checking" [2].
>
> Joonsoo reminded me that this issue affect current kernel too. So made
On Wed, Dec 03, 2014 at 04:52:05PM +0900, Joonsoo Kim wrote:
> It'd be useful to know where the both scanner is start. And, it also be
> useful to know current range where compaction work. It will help to find
> odd behaviour or problem on compaction.
>
> Signed-off-by:
On Mon, Jan 05, 2015 at 09:28:14AM -0600, Christoph Lameter wrote:
> On Mon, 5 Jan 2015, Joonsoo Kim wrote:
>
> > index 449fc6b..54656f0 100644
> > --- a/mm/slab.c
> > +++ b/mm/slab.c
> > @@ -168,6 +168,41 @@ typedef unsigned short freelist_idx_t;
> >
> &g
Hello,
On Mon, Jan 05, 2015 at 06:21:39PM +0100, Andreas Mohr wrote:
> Hi,
>
> Joonsoo Kim wrote:
> > + * Calculate the next globally unique transaction for disambiguiation
>
> "disambiguation"
Okay.
>
> > + ac->tid = next_tid(ac->
ree -> 65 cycles
> > 1 times kmalloc(64)/kfree -> 66 cycles
> > 1 times kmalloc(128)/kfree -> 66 cycles
> > 1 times kmalloc(256)/kfree -> 71 cycles
> > 1 times kmalloc(512)/kfree -> 72 cycles
> > 1 times kmalloc(1024)/kfree -> 71
On Mon, Jan 05, 2015 at 07:03:12PM -0800, Davidlohr Bueso wrote:
> On Mon, 2015-01-05 at 10:36 +0900, Joonsoo Kim wrote:
> > - preempt_disable();
> > - c = this_cpu_ptr(s->cpu_slab);
> > + do {
> > + tid = this_cpu_read(s->cpu_slab->tid)
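For context, the pattern under discussion looks roughly like the sketch
below (names follow mm/slub.c, but this is illustrative rather than the
literal hunk): read the per-cpu transaction id and the per-cpu slab pointer,
and retry until both are observed on the same CPU.

	do {
		tid = this_cpu_read(s->cpu_slab->tid);	/* per-cpu transaction id */
		c = raw_cpu_ptr(s->cpu_slab);		/* per-cpu slab cache */
		/*
		 * If we migrated between the two reads, tid and c may belong
		 * to different CPUs; retry so the later cmpxchg is safe.
		 */
	} while (IS_ENABLED(CONFIG_PREEMPT) && unlikely(tid != c->tid));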
On Mon, Jan 05, 2015 at 08:01:45PM -0800, Gregory Fong wrote:
> +linux-mm and linux-kernel (not sure how those got removed from cc,
> sorry about that)
>
> On Mon, Jan 5, 2015 at 7:58 PM, Gregory Fong wrote:
> > Hi Joonsoo,
> >
> > On Wed, May 28, 2014 at 12:
On Mon, Jan 05, 2015 at 09:25:02PM -0500, Steven Rostedt wrote:
> On Tue, 6 Jan 2015 10:32:47 +0900
> Joonsoo Kim wrote:
>
>
> > > > +++ b/mm/slub.c
> > > > @@ -2398,13 +2398,15 @@ redo:
> > > > * reading from one cpu area. That does no
On Thu, Dec 04, 2014 at 06:12:56PM +0100, Vlastimil Babka wrote:
> When __rmqueue_fallback() is called to allocate a page of order X, it will
> find a page of order Y >= X of a fallback migratetype, which is different from
> the desired migratetype. With the help of try_to_steal_freepages(), it may
On Thu, Dec 04, 2014 at 06:12:57PM +0100, Vlastimil Babka wrote:
> When allocation falls back to stealing free pages of another migratetype,
> it can decide to steal extra pages, or even the whole pageblock in order to
> reduce fragmentation, which could happen if further allocation fallbacks
> pic
s rate on phase 1 and compaction success rate.
Allocation success rate on phase 1 (%)
57.00 : 63.67
Compaction success rate (Compaction success * 100 / Compaction stalls, %)
28.94 : 35.13
Signed-off-by: Joonsoo Kim
---
include/linux/mmzone.h |3 +++
mm/compaction.
ess rate, but it shows a higher
compaction success rate and reduced elapsed time.
Compaction success rate (Compaction success * 100 / Compaction stalls, %)
18.47 : 28.94
Elapsed time (sec)
1429 : 1411
Cc:
Signed-off-by: Joonsoo Kim
---
mm/compaction.c |2 +-
1 file changed, 1 insertion(+), 1
teria.
Signed-off-by: Joonsoo Kim
---
include/trace/events/kmem.h |7 +++--
mm/page_alloc.c | 72 +--
2 files changed, 46 insertions(+), 33 deletions(-)
diff --git a/include/trace/events/kmem.h b/include/trace/events/kmem.h
index aece134..4a
From: Joonsoo Kim
Currently, freepage isolation in one pageblock doesn't consider how many
freepages we isolate. When I traced the flow of compaction, compaction
sometimes isolates more than 256 freepages to migrate just 32 pages.
In this patch, freepage isolation is stopped at the point th
Base     Patch-1   Patch-3   Patch-4
55.00    57.00     62.67     64.00
And, compaction success rate (%) on the same test is:
Base     Patch-1   Patch-3   Patch-4
18.47    28.94     35.13     41.50
This patchset is based on my tracepoint update on compaction.
https://lkml.org/lkml/2014/12/3/71
Joonsoo Kim (4):
On Thu, Dec 04, 2014 at 06:12:58PM +0100, Vlastimil Babka wrote:
> When allocation falls back to another migratetype, it will steal a page with
> highest available order, and (depending on this order and desired
> migratetype),
> it might also steal the rest of free pages from the same pageblock.
On Fri, Dec 26, 2014 at 05:39:04PM +0300, Stefan I. Strogin wrote:
> From: Dmitry Safonov
>
> Here are two functions that provide interface to compute/get used size
> and size of biggest free chunk in cma region.
> Added that information in cmainfo.
>
> Signed-off-by: Dmitry Safonov
> ---
> in
On Fri, Dec 26, 2014 at 05:39:03PM +0300, Stefan I. Strogin wrote:
> /proc/cmainfo contains a list of currently allocated CMA buffers for every
> CMA area when CONFIG_CMA_DEBUG is enabled.
Hello,
I think that providing this information looks useful, but we need a better
implementation. As Laura s
On Thu, Dec 25, 2014 at 05:43:26PM +0800, Hui Zhu wrote:
> In Joonsoo's CMA patch "CMA: always treat free cma pages as non-free on
> watermark checking" [1], it changes __zone_watermark_ok to subtract the CMA
> page count from free_pages if the system uses CMA:
> if (IS_ENABLED(CONFIG_CMA) && z->ma
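For reference, a hedged sketch of that kind of watermark adjustment (the
guard condition in the quoted patch is cut off above, so a generic one is
used here):

	/*
	 * Sketch only: free CMA pages cannot satisfy an allocation that is
	 * not allowed to use CMA, so subtract them before comparing against
	 * the watermark.
	 */
	if (IS_ENABLED(CONFIG_CMA) && !(alloc_flags & ALLOC_CMA))
		free_pages -= zone_page_state(z, NR_FREE_CMA_PAGES);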
On Thu, Dec 25, 2014 at 05:43:28PM +0800, Hui Zhu wrote:
> In [1], Joonsoo said that cma_alloc_counter is useless because pageblock
> is isolated.
> But if alloc_contig_range meet a busy range, it will undo_isolate_page_range
> before goto try next range. At this time, __rmqueue_cma can begin alloc
On Thu, Jan 22, 2015 at 10:12:43AM -0600, Christoph Lameter wrote:
> On Thu, 22 Jan 2015, Joonsoo Kim wrote:
>
> > > Just out of curiosity, "new zone"? Something like movable zone?
> >
> > Yes, I named it as ZONE_CMA. Maybe I can send prototype of
&
On Wed, Jan 21, 2015 at 04:52:36PM +0300, Stefan Strogin wrote:
> Sorry for such a long delay. Now I'll try to answer all the questions
> and make a second version.
>
> The original reason of why we need a new debugging tool for CMA is
> written by Minchan (http://www.spinics.net/lists/linux-mm/ms
On Thu, Jan 22, 2015 at 06:35:53PM +0300, Stefan Strogin wrote:
> Hello Joonsoo,
>
> On 30/12/14 07:38, Joonsoo Kim wrote:
> > On Fri, Dec 26, 2014 at 05:39:03PM +0300, Stefan I. Strogin wrote:
> >> /proc/cmainfo contains a list of currently allocated CMA buffers for
On Thu, Jan 22, 2015 at 09:48:25PM -0500, Sasha Levin wrote:
> On 01/22/2015 03:26 AM, Joonsoo Kim wrote:
> > On Tue, Jan 20, 2015 at 12:38:32PM -0500, Sasha Levin wrote:
> >> Provides a userspace interface to trigger a CMA allocation.
> >>
> >> Usag
ter than before.
Note that this change slightly worsens performance in !CONFIG_PREEMPT,
roughly 0.3%. Implementing each case separately would help performance,
but, since it's so marginal, I didn't do that. This would help
maintenance since we have the same code for all cases.
Change from v
then,
virt_to_head_page() uses this optimized function to improve performance.
I saw a 1.8% win in a fast-path loop over kmem_cache_alloc/free
(14.063 ns -> 13.810 ns) when the target object is on a tail page.
Acked-by: Christoph Lameter
Signed-off-by: Joonsoo Kim
---
include/linux/mm.h | 10 +-
1 file ch
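A sketch of what "this optimized function" amounts to, assuming the struct
page layout of that era where a tail page points at its head via
page->first_page (illustrative, not necessarily the exact hunk):

	static inline struct page *compound_head_fast(struct page *page)
	{
		/*
		 * No race-guarding re-read of the tail flag is needed here,
		 * because the caller knows the page cannot be split under it.
		 */
		if (unlikely(PageTail(page)))
			return page->first_page;
		return page;
	}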
, classzone_idx from tracepoint output
Signed-off-by: Joonsoo Kim
---
include/linux/compaction.h|3 ++
include/trace/events/compaction.h | 74 +
mm/compaction.c | 38 +--
3 files changed, 111 insertions(+), 4
ka
Signed-off-by: Joonsoo Kim
---
include/linux/compaction.h|2 ++
include/trace/events/compaction.h | 49 ++---
mm/compaction.c | 14 +--
3 files changed, 49 insertions(+), 16 deletions(-)
diff --git a/include/linux/comp
would improve readability. For example, it makes it easy to
notice whether the current scanner tries to compact a previously
attempted pageblock or not.
Acked-by: Vlastimil Babka
Signed-off-by: Joonsoo Kim
---
include/trace/events/compaction.h |2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff
ction deferring logic. This patch adds a new tracepoint
to understand the work of the deferring logic. This will also help to check
compaction success and failure.
Changes from v2: Remove reason part from tracepoint output
Signed-off-by: Joonsoo Kim
---
include/linux/compaction.h|
imil Babka
Signed-off-by: Joonsoo Kim
---
include/trace/events/compaction.h | 30 +++---
mm/compaction.c |9 ++---
2 files changed, 29 insertions(+), 10 deletions(-)
diff --git a/include/trace/events/compaction.h
b/include/trace/events/compacti
On Tue, Jan 06, 2015 at 09:02:17AM -0800, Davidlohr Bueso wrote:
> On Tue, 2015-01-06 at 17:09 +0900, Joonsoo Kim wrote:
> > On Mon, Jan 05, 2015 at 07:03:12PM -0800, Davidlohr Bueso wrote:
> > > On Mon, 2015-01-05 at 10:36 +0900, Joonsoo Kim wrote:
> > >
On Tue, Jan 06, 2015 at 11:34:39AM +0100, Andreas Mohr wrote:
> On Tue, Jan 06, 2015 at 10:31:22AM +0900, Joonsoo Kim wrote:
> > Hello,
> >
> > On Mon, Jan 05, 2015 at 06:21:39PM +0100, Andreas Mohr wrote:
> > > Hi,
> > >
> > > Joonsoo Kim w
On Tue, Jan 06, 2015 at 10:05:39AM +0100, Vlastimil Babka wrote:
> On 12/03/2014 08:52 AM, Joonsoo Kim wrote:
> > It'd be useful to know where the both scanner is start. And, it also be
> > useful to know current range where compaction work. It will help to find
> > o
On Tue, Jan 06, 2015 at 12:04:28PM +0100, Vlastimil Babka wrote:
> On 12/03/2014 08:52 AM, Joonsoo Kim wrote:
> > It is not well analyzed that when compaction start and when compaction
> > finish. With this tracepoint for compaction start/finish condition, I can
> &g
On Tue, Jan 06, 2015 at 12:27:43PM +0100, Vlastimil Babka wrote:
> On 12/03/2014 08:52 AM, Joonsoo Kim wrote:
> > compaction deferring logic is heavy hammer that block the way to
> > the compaction. It doesn't consider overall system state, so it
> > could prevent
On Thu, Jan 08, 2015 at 09:46:27AM +0100, Vlastimil Babka wrote:
> On 01/08/2015 09:18 AM, Joonsoo Kim wrote:
> > On Tue, Jan 06, 2015 at 10:05:39AM +0100, Vlastimil Babka wrote:
> >> On 12/03/2014 08:52 AM, Joonsoo Kim wrote:
> >> > It'd be useful to know where
On Fri, Jan 09, 2015 at 10:57:10AM +, Mel Gorman wrote:
> On Thu, Jan 08, 2015 at 09:46:27AM +0100, Vlastimil Babka wrote:
> > On 01/08/2015 09:18 AM, Joonsoo Kim wrote:
> > > On Tue, Jan 06, 2015 at 10:05:39AM +0100, Vlastimil Babka wrote:
> > >> On 12/03/20
It is not well analyzed when/why compaction starts and finishes. With
these new tracepoints, we can learn much more about the reasons compaction
starts and finishes. I found the following bug with these tracepoints.
http://www.spinics.net/lists/linux-mm/msg81582.html
Signed-off-by: Joonsoo Kim
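To make the shape of such a tracepoint concrete, here is a minimal,
hypothetical TRACE_EVENT sketch; the event name and fields are illustrative
only, and the actual patch defines its own events in
include/trace/events/compaction.h:

	TRACE_EVENT(mm_compaction_finish_reason,	/* illustrative name */

		TP_PROTO(int order, int ret),

		TP_ARGS(order, ret),

		TP_STRUCT__entry(
			__field(int, order)
			__field(int, ret)
		),

		TP_fast_assign(
			__entry->order = order;
			__entry->ret = ret;
		),

		TP_printk("order=%d ret=%d", __entry->order, __entry->ret)
	);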
It'd be useful to know the current range where compaction works, for detailed
analysis. With it, we can know the pageblock where we actually scan and
isolate, how many pages we try in that pageblock, and roughly guess why
it doesn't turn into a freepage of pageblock order.
Signed-off-by: J
ction deferring logic. This patch adds a new tracepoint
to understand the work of the deferring logic. This will also help to check
compaction success and failure.
Signed-off-by: Joonsoo Kim
---
include/linux/compaction.h| 65 +++--
include/trace/events/compaction.h |
lp to find odd behavior or problems in compaction's
internal logic.
And, mode is added to both begin/end tracepoint output, since
compaction behavior differs considerably depending on the mode.
And, lastly, the status format is changed to a string rather than
a status number for readability.
Signed-off-by: J
would improve readability. For example, it makes it easy to
notice whether the current scanner tries to compact a previously
attempted pageblock or not.
Signed-off-by: Joonsoo Kim
---
include/trace/events/compaction.h |2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/trace/events
On Mon, Jan 12, 2015 at 04:53:53PM +0100, Vlastimil Babka wrote:
> On 01/12/2015 09:21 AM, Joonsoo Kim wrote:
> > It is not well analyzed that when/why compaction start/finish or not. With
> > these new tracepoints, we can know much more about start/finish reason of
> > c
On Mon, Jan 12, 2015 at 05:35:47PM +0100, Vlastimil Babka wrote:
> On 01/12/2015 09:21 AM, Joonsoo Kim wrote:
> > compaction deferring logic is heavy hammer that block the way to
> > the compaction. It doesn't consider overall system state, so it
> > could prevent
then,
virt_to_head_page() uses this optimized function to improve performance.
I saw a 1.8% win in a fast-path loop over kmem_cache_alloc/free
(14.063 ns -> 13.810 ns) when the target object is on a tail page.
Change from v2: Add some code comments
Acked-by: Christoph Lameter
Signed-off-by: Joonsoo Kim
---
inclu
v1: add comment about barrier() usage
Change from v2:
- use raw_cpu_ptr() rather than this_cpu_ptr() to avoid warning from
preemption debug check since this is intended behaviour
- fix typo alogorithm -> algorithm
Acked-by: Christoph Lameter
Acked-by: Jesper Dangaard Brouer
Tested-by: Jesp
would improve readability. For example, it makes it easy to
notice whether the current scanner tries to compact a previously
attempted pageblock or not.
Acked-by: Vlastimil Babka
Signed-off-by: Joonsoo Kim
---
include/trace/events/compaction.h |2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff
ction deferring logic. This patch adds a new tracepoint
to understand the work of the deferring logic. This will also help to check
compaction success and failure.
Changes from v2: Remove reason part from tracepoint output
Changes from v3: Build fix for !CONFIG_COMPACTION
Signed-off-by: Joonsoo Kim
---
include/
ld fix for !CONFIG_COMPACTION, !CONFIG_TRACEPOINTS
Acked-by: Vlastimil Babka
Signed-off-by: Joonsoo Kim
---
include/linux/compaction.h|1 +
include/trace/events/compaction.h | 49 ++---
mm/compaction.c | 15 ++--
3 files c
, classzone_idx from tracepoint output
Signed-off-by: Joonsoo Kim
---
include/linux/compaction.h|3 ++
include/trace/events/compaction.h | 74 +
mm/compaction.c | 38 +--
3 files changed, 111 insertions(+), 4
imil Babka
Signed-off-by: Joonsoo Kim
---
include/trace/events/compaction.h | 30 +++---
mm/compaction.c |9 ++---
2 files changed, 29 insertions(+), 10 deletions(-)
diff --git a/include/trace/events/compaction.h
b/include/trace/events/compacti
On Thu, Jan 15, 2015 at 05:16:27PM -0800, Andrew Morton wrote:
> On Thu, 15 Jan 2015 16:41:10 +0900 Joonsoo Kim wrote:
>
> > We now have tracepoint for begin event of compaction and it prints
> > start position of both scanners, but, tracepoint for end event of
> >
On Thu, Jan 15, 2015 at 05:16:46PM -0800, Andrew Morton wrote:
> On Thu, 15 Jan 2015 16:40:33 +0900 Joonsoo Kim wrote:
>
> > compound_head() is implemented with assumption that there would be
> > race condition when checking tail flag. This assumption is only true
> &g
On Fri, Jan 16, 2015 at 10:40:23PM -0800, Guenter Roeck wrote:
> On Fri, Jan 16, 2015 at 03:50:38PM -0800, a...@linux-foundation.org wrote:
> > The mm-of-the-moment snapshot 2015-01-16-15-50 has been uploaded to
> >
> >http://www.ozlabs.org/~akpm/mmotm/
> >
> > mmotm-readme.txt says
> >
> >
On Thu, Jan 22, 2015 at 10:45:51AM +0900, Joonsoo Kim wrote:
> On Wed, Jan 21, 2015 at 09:57:59PM +0900, Akinobu Mita wrote:
> > 2015-01-21 9:07 GMT+09:00 Andrew Morton :
> > > On Tue, 20 Jan 2015 15:01:50 -0800 j...@joshtriplett.org wrote:
> > >
> > >> O
On Mon, Jan 26, 2015 at 03:55:29PM +0300, Vladimir Davydov wrote:
> To speed up further allocations SLUB may store empty slabs in per
> cpu/node partial lists instead of freeing them immediately. This
> prevents per memcg caches destruction, because kmem caches created for a
> memory cgroup are onl
On Mon, Jan 26, 2015 at 09:26:04AM -0500, Sasha Levin wrote:
> Provides a userspace interface to trigger a CMA allocation.
>
> Usage:
>
> echo [pages] > alloc
>
> This would provide testing/fuzzing access to the CMA allocation paths.
>
> Signed-off-by: Sasha Levin
> ---
> mm/cma_debug.c
On Mon, Jan 26, 2015 at 09:26:05AM -0500, Sasha Levin wrote:
> Provides a userspace interface to trigger a CMA release.
>
> Usage:
>
> echo [pages] > free
>
> This would provide testing/fuzzing access to the CMA release paths.
>
> Signed-off-by: Sasha Levin
> ---
> mm/cma_debug.c | 54
On Fri, Jan 23, 2015 at 03:37:28PM -0600, Christoph Lameter wrote:
> This patch adds the basic infrastructure for alloc / free operations
> on pointer arrays. It includes a fallback function that can perform
> the array operations using the single alloc and free that every
> slab allocator performs
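A hedged sketch of that fallback idea, built only on the single-object
entry points every allocator already has (the function name and the
all-or-nothing error handling are illustrative choices, not the patch
itself):

	static int kmem_cache_alloc_array_fallback(struct kmem_cache *s,
						   gfp_t gfp, size_t nr, void **p)
	{
		size_t i;

		for (i = 0; i < nr; i++) {
			p[i] = kmem_cache_alloc(s, gfp);
			if (!p[i]) {
				/* Undo what we did so the caller sees all or nothing. */
				while (i--)
					kmem_cache_free(s, p[i]);
				return -ENOMEM;
			}
		}
		return 0;
	}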
On Tue, Jan 27, 2015 at 08:35:17AM +0100, Vlastimil Babka wrote:
> On 12/10/2014 07:38 AM, Joonsoo Kim wrote:
> > After your patch is merged, I will resubmit these on top of it.
>
> Hi Joonsoo,
>
> my page stealing patches are now in -mm so are you planning to resubmit this
ret += snprint_stack_trace(kbuf + ret, count - ret,
> - &page_ext->trace, 0);
> + trace.nr_entries = page_ext->nr_entries;
> + trace.entries = &page_ext->trace_entries[0];
> +
> + ret += snprint_stack_trace(kbuf + r
2015-01-27 17:23 GMT+09:00 Vladimir Davydov :
> Hi Joonsoo,
>
> On Tue, Jan 27, 2015 at 05:00:09PM +0900, Joonsoo Kim wrote:
>> On Mon, Jan 26, 2015 at 03:55:29PM +0300, Vladimir Davydov wrote:
>> > @@ -3381,6 +3390,15 @@ void __kmem_cache_shrink(struct kmem_cache *s)
2015-01-28 1:57 GMT+09:00 Christoph Lameter :
> On Tue, 27 Jan 2015, Joonsoo Kim wrote:
>
>> IMHO, exposing these options is not a good idea. It's really
>> implementation specific. And, this flag won't show consistent performance
>> according to specific slab i
>> - changed designation from 'mm:' to 'powerpc/mm:', as I think this
>> now belongs in ppc-land
>>
>> v2:
>> - corrected SUPPORTS_DEBUG_PAGEALLOC selection to enable
>> non-STD_MMU_64 builds to use the generic __kernel_map_pages().
>
> I
2015-01-28 0:08 GMT+09:00 Sasha Levin :
> On 01/27/2015 03:06 AM, Joonsoo Kim wrote:
>> On Mon, Jan 26, 2015 at 09:26:04AM -0500, Sasha Levin wrote:
>>> Provides a userspace interface to trigger a CMA allocation.
>>>
>>> Usage:
>>>
>>> e
2015-01-28 5:13 GMT+09:00 Sasha Levin :
> On 01/27/2015 01:25 PM, Sasha Levin wrote:
>> On 01/27/2015 03:10 AM, Joonsoo Kim wrote:
>>>> >> +if (mem->n <= count) {
>>>>> >> > + cma_release(cma, mem->
2015-03-19 0:21 GMT+09:00 Mark Rutland :
> Hi,
>
>> > do {
>> > tid = this_cpu_read(s->cpu_slab->tid);
>> > c = raw_cpu_ptr(s->cpu_slab);
>> > - } while (IS_ENABLED(CONFIG_PREEMPT) && unlikely(tid != c->tid));
>> > + } while (IS_ENABLED(CONFIG_PRE
On Wed, Mar 18, 2015 at 03:33:02PM +0530, Aneesh Kumar K.V wrote:
>
> >
> > #ifdef CONFIG_CMA
> > +static void __init adjust_present_page_count(struct page *page, long count)
> > +{
> > + struct zone *zone = page_zone(page);
> > +
> > + zone->present_pages += count;
> > +}
> > +
>
> May be a
Hello,
On Wed, Apr 01, 2015 at 04:31:43PM +0300, Stefan Strogin wrote:
> Add trace events for cma_alloc() and cma_release().
>
> The cma_alloc tracepoint is used both for successful and failed allocations,
> in case of allocation failure pfn=-1UL is stored and printed.
>
> Signed-off-by: Stefan
Hello, Johannes.
Ccing Vlastimil, because this patch causes some regression on the
stress-highalloc test in mmtests and he is an expert on compaction
and would be interested in it. :)
On Fri, Nov 28, 2014 at 07:06:37PM +0300, Vladimir Davydov wrote:
> Hi Johannes,
>
> The patch generally looks good t
t;. This patch adds it.
Signed-off-by: Joonsoo Kim
---
kernel/trace/trace_events.c | 7 ++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
index db54dda..ce5b194 100644
--- a/kernel/trace/trace_events.c
+++ b/kernel/trace/t
On Thu, Apr 16, 2015 at 10:34:13AM -0400, Johannes Weiner wrote:
> Hi Joonsoo,
>
> On Thu, Apr 16, 2015 at 12:57:36PM +0900, Joonsoo Kim wrote:
> > Hello, Johannes.
> >
> > Ccing Vlastimil, because this patch causes some regression on
> > stress-highalloc test
On Fri, Apr 17, 2015 at 09:17:53AM +1000, Dave Chinner wrote:
> On Thu, Apr 16, 2015 at 10:34:13AM -0400, Johannes Weiner wrote:
> > On Thu, Apr 16, 2015 at 12:57:36PM +0900, Joonsoo Kim wrote:
> > > This causes following success rate regression of phase 1,2 on
>
On Thu, Apr 16, 2015 at 09:39:52AM -0400, Steven Rostedt wrote:
> On Thu, 16 Apr 2015 13:44:44 +0900
> Joonsoo Kim wrote:
>
> > There is a problem that trace events are not properly enabled with
> > boot cmdline. Problem is that if we pass "trace_event=kmem:mm_page_allo
is tainted by other migratetype
allocation.
* After
Number of blocks type (movable)
DMA32: 207
Number of mixed blocks (movable)
DMA32: 111.2
This result shows that non-mixed blocks increased by 38% in this case.
Signed-off-by: Joonsoo Kim
---
mm/page_alloc.c | 10 +++---
1 file changed, 7
   text    data    bss     dec     hex filename
  37413    1440    624   39477    9a35 mm/page_alloc.o
  37249    1440    624   39313    9991 mm/page_alloc.o
Signed-off-by: Joonsoo Kim
---
mm/page_alloc.c | 40 +---
1 file changed, 21 insertions(+), 19 dele
Below is the result of this idea.
* After
Number of blocks type (movable)
DMA32: 208.2
Number of mixed blocks (movable)
DMA32: 55.8
The result shows that non-mixed blocks increased by 59% in this case.
Signed-off-by: Joonsoo Kim
---
include/linux/compaction.h | 8 +++
include/linux/gfp.h|
On Mon, Apr 27, 2015 at 09:08:50AM +0100, Mel Gorman wrote:
> On Mon, Apr 27, 2015 at 04:23:39PM +0900, Joonsoo Kim wrote:
> > When we steal whole pageblock, we don't need to break highest order
> > freepage. Perhaps, there is small order freepage so we can use it.
> >
On Mon, Apr 27, 2015 at 09:29:23AM +0100, Mel Gorman wrote:
> On Mon, Apr 27, 2015 at 04:23:41PM +0900, Joonsoo Kim wrote:
> > We already have antifragmentation policy in page allocator. It works well
> > when system memory is sufficient, but, it doesn't works well when sys
migrate_pages() should return the number of pages not migrated, or an error code.
When unmap_and_move() returns -EAGAIN, the outer loop is re-executed without
initialising nr_failed. This makes nr_failed over-counted.
So this patch corrects it by initialising nr_failed in the outer loop.
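A reduced sketch of the loop shape in question (simplified from
mm/migrate.c of that era; only the placement of the nr_failed
initialisation matters here):

	for (pass = 0; pass < 10 && retry; pass++) {
		nr_failed = 0;	/* the fix: reset each pass, so a page that
				 * keeps failing is not counted once per pass */
		retry = 0;
		list_for_each_entry_safe(page, page2, from, lru) {
			rc = unmap_and_move(get_new_page, private, page,
					    pass > 2, offlining, mode);
			switch (rc) {
			case -EAGAIN:
				retry++;	/* try again on the next pass */
				break;
			case 0:
				break;		/* migrated */
			default:
				nr_failed++;	/* failed in this pass */
				break;
			}
		}
	}
	return nr_failed + retry;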
Signed-off-by: Joonsoo Kim
Additionally, correct the comment above do_migrate_pages().
Signed-off-by: Joonsoo Kim
Cc: Sasha Levin
Cc: Christoph Lameter
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 1d771e4..f7df271 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -948,7 +948,7 @@ static int migrate_to_node(struct mm_struct *mm
migrate_pages() would return a positive value in some failure cases,
so 'ret > 0 ? 0 : ret' may be wrong.
This fixes it and removes one dead statement.
Signed-off-by: Joonsoo Kim
Cc: Michal Nazarewicz
Cc: Marek Szyprowski
Cc: Minchan Kim
Cc: Christoph Lameter
diff --git a/mm/page
The move_pages() syscall may return success when
do_move_page_to_node_array() returns a positive value, which means migration failed.
This patch changes the return value of do_move_page_to_node_array()
so that it never returns a positive value. This fixes the problem.
Signed-off-by: Joonsoo Kim
Cc: Brice Goglin
2012/7/17 Christoph Lameter :
> On Tue, 17 Jul 2012, Joonsoo Kim wrote:
>
>> migrate_pages() should return number of pages not migrated or error code.
>> When unmap_and_move return -EAGAIN, outer loop is re-execution without
>> initialising nr_failed. This makes nr_fail
2012/7/17 Michal Nazarewicz :
> Acked-by: Michal Nazarewicz
Thanks.
> Actually, it makes me wonder if there is any code that uses this
> information. If not, it would be best in my opinion to make it return
> zero or negative error code, but that would have to be checked.
I think that, too.
I
2012/7/17 Michal Nazarewicz :
> Joonsoo Kim writes:
>> do_migrate_pages() can return the number of pages not migrated.
>> Because migrate_pages() syscall return this value directly,
>> migrate_pages() syscall may return the number of pages not migrated.
>> In fail case
2012/7/17 Michal Nazarewicz :
> Joonsoo Kim writes:
>
>> migrate_pages() would return positive value in some failure case,
>> so 'ret > 0 ? 0 : ret' may be wrong.
>> This fix it and remove one dead statement.
>>
>> Signed-off-by: Joonsoo Kim
identical case as migrate_pages()
Signed-off-by: Joonsoo Kim
Cc: Christoph Lameter
Acked-by: Christoph Lameter
Acked-by: Michal Nazarewicz
diff --git a/mm/migrate.c b/mm/migrate.c
index be26d5c..f495c58 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -982,6 +982,7 @@ int migrate_pages(struct
Additionally, correct the comment above do_migrate_pages().
Signed-off-by: Joonsoo Kim
Cc: Sasha Levin
Cc: Christoph Lameter
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 1d771e4..0732729 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -948,7 +948,7 @@ static int migrate_to_node(struct mm_struct *mm
The move_pages() syscall may return success when
do_move_page_to_node_array() returns a positive value, which means migration failed.
This patch changes the return value of do_move_page_to_node_array()
so that it never returns a positive value. This fixes the problem.
Signed-off-by: Joonsoo Kim
Cc: Brice Goglin
migrate_pages() would return a positive value in some failure cases,
so 'ret > 0 ? 0 : ret' may be wrong.
This fixes it and removes one dead statement.
Signed-off-by: Joonsoo Kim
Cc: Michal Nazarewicz
Cc: Marek Szyprowski
Cc: Minchan Kim
Cc: Christoph Lameter
Acked-by: Christoph L
2012/7/17 Christoph Lameter :
> On Tue, 17 Jul 2012, Joonsoo Kim wrote:
>
>> @@ -1382,6 +1382,8 @@ SYSCALL_DEFINE4(migrate_pages, pid_t, pid, unsigned
>> long, maxnode,
>>
>> err = do_migrate_pages(mm, old, new,
>> capable(CAP_SYS_NIC
2012/7/17 Michal Nazarewicz :
> On Tue, 17 Jul 2012 14:33:34 +0200, Joonsoo Kim wrote:
>>
>> migrate_pages() would return positive value in some failure case,
>> so 'ret > 0 ? 0 : ret' may be wrong.
>> This fix it and remove one dead statemen
Signed-off-by: Joonsoo Kim
Cc: Michal Nazarewicz
Cc: Marek Szyprowski
Cc: Minchan Kim
Cc: Christoph Lameter
Acked-by: Christoph Lameter
Acked-by: Michal Nazarewicz
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 4403009..02d4519 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@
hould call with MIGRATE_SYNC.
So change it.
Additionally, there is a mismatch between the type of an argument and the
function declaration of migrate_pages(). So fix this simple case, too.
Signed-off-by: Joonsoo Kim
Cc: Christoph Lameter
Cc: Mel Gorman
diff --git a/mm/memory-failure.c b/mm/memory-failu
ner.
> The special case in isolate_migratepages() introduced by 1d5bfe1ffb5b is
> removed.
>
> Suggested-by: Joonsoo Kim
> Signed-off-by: Vlastimil Babka
Acked-by: Joonsoo Kim
Thanks.