Hello,
At 2015/2/2 18:20, Vlastimil Babka wrote:
> On 02/02/2015 08:15 AM, Joonsoo Kim wrote:
>> Compaction has an anti-fragmentation algorithm. It is that freepages
>> should be more than pageblock order to finish the compaction if we don't
>> find any freepage in the requested migratetype buddy list. Th
Hello Joonsoo,
At 2015/2/2 15:15, Joonsoo Kim wrote:
> This is a preparation step to use the page allocator's anti-fragmentation
> logic in compaction. This patch just separates the fallback freepage
> checking part from the fallback freepage management part. Therefore, there
> is no functional change.
>
> Sig
At 2015/1/30 20:34, Joonsoo Kim wrote:
> From: Joonsoo
>
> Compaction has an anti-fragmentation algorithm. It is that freepages
> should be more than pageblock order to finish the compaction if we don't
> find any freepage in the requested migratetype buddy list. This is for
> mitigating fragmentation, b
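
For readers following the thread: the finish condition described above
amounts to a scan of the zone's free areas, roughly like this sketch
(simplified from __compact_finished() of that era; details may differ):

	/* Sketch of the compaction finish check described above. */
	static int compact_finished_check(struct zone *zone, int order,
					  int migratetype)
	{
		unsigned int o;

		for (o = order; o < MAX_ORDER; o++) {
			struct free_area *area = &zone->free_area[o];

			/* Done if a freepage of the requested migratetype exists. */
			if (!list_empty(&area->free_list[migratetype]))
				return COMPACT_PARTIAL;

			/*
			 * Done if a free page of at least pageblock order exists:
			 * such an allocation can claim a whole pageblock, which is
			 * the anti-fragmentation condition referred to above.
			 */
			if (o >= pageblock_order && area->nr_free)
				return COMPACT_PARTIAL;
		}
		return COMPACT_CONTINUE;
	}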
At 2015/1/30 20:34, Joonsoo Kim wrote:
> From: Joonsoo
>
> This is a preparation step to use the page allocator's anti-fragmentation
> logic in compaction. This patch just separates the steal decision part
> from the actual steal behaviour part, so there is no functional change.
>
> Signed-off-by: Joonsoo Kim
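
For reference, the separation described above can be pictured as a
predicate plus an action, roughly as below (the helper names here are
illustrative, not necessarily the ones in the patch):

	/* Decision part only: no list manipulation here. */
	static bool can_steal_freepages(unsigned int order, int start_mt)
	{
		return order >= pageblock_order / 2 ||
		       start_mt == MIGRATE_RECLAIMABLE ||
		       start_mt == MIGRATE_UNMOVABLE;
	}

	/* Behaviour part only: move the pageblock's freepages and retype it. */
	static void steal_freepages(struct zone *zone, struct page *page,
				    int start_mt)
	{
		move_freepages_block(zone, page, start_mt);
		set_pageblock_migratetype(page, start_mt);
	}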
At 2015/1/31 16:31, Vlastimil Babka wrote:
> On 01/31/2015 08:49 AM, Zhang Yanfei wrote:
>> Hello,
>>
>> At 2015/1/30 20:34, Joonsoo Kim wrote:
>>
>> Reviewed-by: Zhang Yanfei
>>
>> IMHO, the patch making the free scanner move slower makes both scann
mpaction
> success rate would decrease. To prevent this effect, I tested with adding
> pcp drain code on release_freepages(), but, it has no good effect.
>
> Anyway, this patch reduces the time wasted isolating unneeded freepages,
> so it seems reasonable.
Reviewed-by: Zhang Yanfei
IMHO,
8.47 : 28.94
>
> Cc:
> Acked-by: Vlastimil Babka
> Signed-off-by: Joonsoo Kim
Reviewed-by: Zhang Yanfei
> ---
> mm/compaction.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/compaction.c b/mm/compaction.c
> index b68736c..4954e19 100644
he memory of the program. The percentage did not increase
> over time.
>
> With this patch, after 5 minutes of waiting khugepaged had
> collapsed 50% of the program's memory back into THPs.
>
> Signed-off-by: Ebru Akagunduz
> Reviewed-by: Rik van Riel
> Acked-by: Vlas
Hello
On 2015/1/28 8:27, Andrea Arcangeli wrote:
> On Tue, Jan 27, 2015 at 07:39:13PM +0200, Ebru Akagunduz wrote:
>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>> index 817a875..17d6e59 100644
>> --- a/mm/huge_memory.c
>> +++ b/mm/huge_memory.c
>> @@ -2148,17 +2148,18 @@ static int __collapse_h
Hello
On 2015/1/25 17:25, Vlastimil Babka wrote:
> On 23.1.2015 20:18, Andrea Arcangeli wrote:
>>> > +	if (!pte_write(pteval)) {
>>> > +		if (++ro > khugepaged_max_ptes_none)
>>> > +			goto out_unmap;
>>> > +	}
>> It's true this is maxed out at 511, so there must be at
Hello Minchan,
How are you?
On 2015/1/19 14:55, Minchan Kim wrote:
> Hello,
>
> On Sun, Jan 18, 2015 at 04:32:59PM +0800, Hui Zhu wrote:
>> From: Hui Zhu
>>
>> The original of this patch [1] is part of Joonsoo's CMA patch series.
>> I made a patch [2] to fix the issue of this patch. Joonsoo remind
> For easier bisection of potential regressions, this patch always uses the
> first zone's pfn as the pivot. That means the free scanner immediately wraps
> to the last pageblock and the operation of scanners is thus unchanged. The
> actual pivot changing is done by the next patch.
>
>
> Signed-off-by: Vlastimil Babka
Reviewed-by: Zhang Yanfei
Should the new function be inline?
Thanks.
> Cc: Minchan Kim
> Cc: Mel Gorman
> Cc: Joonsoo Kim
> Cc: Michal Nazarewicz
> Cc: Naoya Horiguchi
> Cc: Christoph Lameter
> Cc: Rik van Riel
> Cc: Dav
Hello,
On 2015/1/19 18:05, Vlastimil Babka wrote:
> Handling the position where compaction free scanner should restart (stored in
> cc->free_pfn) got more complex with commit e14c720efdd7 ("mm, compaction:
> remember position within pageblock in free pages scanner"). Currently the
> position is update
te_migratepages() introduced by 1d5bfe1ffb5b is
> removed.
>
> Suggested-by: Joonsoo Kim
> Signed-off-by: Vlastimil Babka
Reviewed-by: Zhang Yanfei
> Cc: Minchan Kim
> Cc: Mel Gorman
> Cc: Joonsoo Kim
> Cc: Michal Nazarewicz
> Cc: Naoya Horiguchi
> Cc: Christoph La
+#include <linux/slab_def.h>
> +#endif
> +
> +#ifdef CONFIG_SLUB
> +#include <linux/slub_def.h>
> +#endif
> +
> /*
> * State of the slab allocator.
> *
> diff --git a/mm/slab_common.c b/mm/slab_common.c
> index d319502..2088904 100644
> --- a/mm/slab_common.c
> +++ b/mm/slab_common.c
>
_populated_zone(zone) {
> -	unsigned int cpu;
> +	int high, batch;
>
> -	for_each_possible_cpu(cpu)
> -		pageset_set_high_and_batch(zone,
> -				per_cpu_ptr(zone->pageset, cpu));
> +	pageset_get_value
rks
> needed for guard page. This may make code more understandable.
>
> One more thing, I did in this patch, is that fixing freepage accounting.
> If we clear guard page and link it onto isolate buddy list, we should
> not increase freepage count.
>
> Acked-by: Vlastimil Babk
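
As a side note for readers, the accounting rule described above boils
down to something like this sketch (illustrative, not the exact hunk):

	/*
	 * When a cleared guard page is linked onto the isolate buddy
	 * list, the freepage counters must not be bumped.
	 */
	if (!is_migrate_isolate(migratetype))
		__mod_zone_freepage_state(zone, 1 << order, migratetype);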
e/linux/page-isolation.h |   2 +
>  mm/internal.h           |   5 +
>  mm/page_alloc.c         | 223 +-
>  mm/page_isolation.c     | 292 +++-
>  4 files changed, 368 insertions(+), 154 deletions(-)
>
--
Thanks.
Zhang Yanfei
not sure if it really makes sense to check the migratetype here. This
>> check doesn't add any new information to the code and makes the false
>> impression that this function can be called for migratetypes other than
>> CMA or MOVABLE. Even if so, invalidating bh_lrus unconditionally would
>> make more sense, IMHO.
>
> I agree. I cannot understand why alloc_contig_range has a migratetype
> argument. Can alloc_contig_range be called for a migratetype other than
> CMA/MOVABLE?
>
> What do you think about removing the migratetype argument and checking
> the migratetype (if (migratetype == MIGRATE_CMA || migratetype ==
> MIGRATE_MOVABLE))?
>
Remove only the check, because gigantic page allocation for hugetlb
already calls alloc_contig_range(.., MIGRATE_MOVABLE).
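
(For reference, the gigantic page path mentioned above looks roughly
like this sketch, simplified from the hugetlb code of that era:)

	/* Allocate a physically contiguous gigantic page range. */
	static int alloc_gigantic_range(unsigned long start_pfn,
					unsigned long nr_pages)
	{
		return alloc_contig_range(start_pfn, start_pfn + nr_pages,
					  MIGRATE_MOVABLE);
	}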
Thanks.
--
Thanks.
Zhang Yanfei
emory-hotplug: sh: suitable memory should go to ZONE_MOVABLE
> memory-hotplug: powerpc: suitable memory should go to ZONE_MOVABLE
>
> arch/ia64/mm/init.c | 7 +++
> arch/powerpc/mm/mem.c | 6 ++
> arch/sh/mm/init.c | 13 -
> arch/x86/mm/init_32.c | 6 ++
c: Thomas Gleixner
> Cc: Ingo Molnar
> Cc: "H. Peter Anvin"
> Cc: x...@kernel.org
> Acked-by: Kirill A. Shutemov
> Signed-off-by: Minchan Kim
Acked-by: Zhang Yanfei
> ---
> arch/x86/include/asm/pgtable.h | 10 ++
> 1 file changed, 10 insertions(+)
&
00 max: 37266.00
> min: 22108.00    min: 34149.00
>
> In summary, MADV_FREE is about 2 times faster than MADV_DONTNEED.
>
> Cc: Michael Kerrisk
> Cc: Linux API
> Cc: Hugh Dickins
> Cc: Johannes Weiner
> Cc: KOSAKI Motohiro
> Cc: Mel Gorman
tes of
> the range.
This should be updated because the implementation has changed:
it also removes the page from the swap cache if it is there.
Thank you for your effort!
--
Thanks.
Zhang Yanfei
return ISOLATE_ABORT
>   return COMPACT_PARTIAL with *contended = cc.contended == COMPACT_CONTENDED_LOCK (1)
> COMPACTFAIL
>   if (contended_compaction && gfp_mask & __GFP_NO_KSWAPD)
>     no goto nopage, because contended_compaction was false by (1)
>
> __alloc_pages
On 06/23/2014 05:52 PM, Vlastimil Babka wrote:
> On 06/23/2014 07:39 AM, Zhang Yanfei wrote:
>> Hello
>>
>> On 06/21/2014 01:45 AM, Kirill A. Shutemov wrote:
>>> On Fri, Jun 20, 2014 at 05:49:31PM +0200, Vlastimil Babka wrote:
>>>> When allocating huge
> Signed-off-by: David Rientjes
> Signed-off-by: Vlastimil Babka
> Cc: Minchan Kim
> Cc: Mel Gorman
> Cc: Joonsoo Kim
> Cc: Michal Nazarewicz
> Cc: Naoya Horiguchi
> Cc: Christoph Lameter
> Cc: Rik van Riel
Reviewed-by: Zhang Yanfei
> ---
> mm/compa
alues must be handled gracefully.
> + *
> + * ACCESS_ONCE is used so that if the caller assigns the result into a local
> + * variable and e.g. tests it for valid range before using, the compiler
> + * cannot decide to remove the variable and inline the page_private(page) multip
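
In other words, the pattern the quoted comment documents is (a sketch):

	/* Read the racy value exactly once... */
	unsigned long freepage_order = ACCESS_ONCE(page_private(page));

	/*
	 * ...then sanity-check the local copy before use. Without
	 * ACCESS_ONCE the compiler could re-read page_private(page)
	 * here, making the range check useless under races.
	 */
	if (freepage_order > 0 && freepage_order < MAX_ORDER)
		low_pfn += (1UL << freepage_order) - 1;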
per migrate
> page, to 2.25 free pages per migrate page, without affecting success rates.
>
> Signed-off-by: Vlastimil Babka
> Acked-by: David Rientjes
> Cc: Minchan Kim
> Cc: Mel Gorman
> Cc: Joonsoo Kim
> Cc: Michal Nazarewicz
> Cc: Naoya Horiguchi
> Cc: Chri
> Cc: Rik van Riel
> Acked-by: David Rientjes
Reviewed-by: Zhang Yanfei
> ---
> mm/compaction.c | 53 +++--
> 1 file changed, 31 insertions(+), 22 deletions(-)
>
> diff --git a/mm/compaction.c b/mm/compaction.c
> inde
he lock contention
> avoidance for async compaction is achieved by the periodical unlock by
> compact_unlock_should_abort() and by using trylock in
> compact_trylock_irqsave()
> and aborting when trylock fails. Sync compaction does not use trylock.
>
> Signed-off-by: Vlastimil
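
The lock handling described above follows a trylock-or-abort pattern
for async compaction, roughly like this sketch (simplified; the real
compact_trylock_irqsave() differs in detail):

	static bool try_lock_or_abort(spinlock_t *lock, unsigned long *flags,
				      struct compact_control *cc)
	{
		if (cc->mode == MIGRATE_ASYNC) {
			if (!spin_trylock_irqsave(lock, *flags)) {
				/* Caller aborts async compaction. */
				cc->contended = COMPACT_CONTENDED_LOCK;
				return false;
			}
		} else {
			/* Sync compaction just waits for the lock. */
			spin_lock_irqsave(lock, *flags);
		}
		return true;
	}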
>> -	bool contended;		/* True if a lock was contended, or
>> -				 * need_resched() true during async
>> -				 * compaction
>> -				 */
>> +enum com
t async compaction.
>
> Signed-off-by: Vlastimil Babka
> Cc: Minchan Kim
> Cc: Mel Gorman
> Cc: Joonsoo Kim
> Cc: Michal Nazarewicz
> Cc: Naoya Horiguchi
> Cc: Christoph Lameter
> Cc: Rik van Riel
> Cc: David Rientjes
I think this is a good clean-up to
Normal zone,
> and DMA32 zones on both nodes were thus not considered for compaction.
>
> Signed-off-by: Vlastimil Babka
> Cc: Minchan Kim
> Cc: Mel Gorman
> Cc: Joonsoo Kim
> Cc: Michal Nazarewicz
> Cc: Naoya Horiguchi
> Cc: Christoph Lameter
> Cc: Rik van R
Please, move up_read() outside khugepaged_alloc_page().
>
I might be wrong, but if we up_read() in khugepaged_scan_pmd() and then
loop around the for loop to get the next vma and handle it, do we do this
without holding the mmap_sem in any mode?
And if the loop end, we have another up_
++-
> 7 files changed, 248 insertions(+), 1 deletions(-)
>
>
On 06/12/2014 11:21 AM, Joonsoo Kim wrote:
> We can remove one call site of clear_cma_bitmap() if we first
> call it before checking the error number.
>
> Signed-off-by: Joonsoo Kim
Reviewed-by: Zhang Yanfei
>
> diff --git a/mm/cma.c b/mm/cma.c
> index 1e1b017..01a0713 10
tional change in DMA APIs.
>
> v2: There is no big change from v1 in mm/cma.c. Mostly renaming.
>
> Acked-by: Michal Nazarewicz
> Signed-off-by: Joonsoo Kim
Acked-by: Zhang Yanfei
>
> diff --git a/drivers/base/Kconfig b/drivers/base/Kconfig
> index 00e13ce..4eac559 100644
arbitrary bitmap granularity for following generalization.
>
> Signed-off-by: Joonsoo Kim
Acked-by: Zhang Yanfei
>
> diff --git a/drivers/base/dma-contiguous.c b/drivers/base/dma-contiguous.c
> index bc4c171..9bc9340 100644
> --- a/drivers/base/dma-contiguous.c
> +++ b/drivers/
ort more
>> meaningful error message, like what the successful zone was, what the
>> new zone is, and the failed pfn number?
>
> What I want to do in early phase of this patchset is to make cma code
> on DMA APIs similar to ppc kvm's cma code. ppc kvm's cma code already
og format to print function name consistently.
>
> Lastly, I add one more debug log on cma_activate_area().
>
> Signed-off-by: Joonsoo Kim
Reviewed-by: Zhang Yanfei
>
> diff --git a/drivers/base/dma-contiguous.c b/drivers/base/dma-contiguous.c
> index 83969f8..bd0bb81 100
count)
>> -{
>> -mutex_lock(&cma->lock);
>> -bitmap_clear(cma->bitmap, pfn - cma->base_pfn, count);
>> -mutex_unlock(&cma->lock);
>> -}
>> -
>> /**
>> * dma_alloc_from_contiguous() - allocate pages from contiguous
the
detailed function description, just to make it clearer.
Reviewed-by: Zhang Yanfei
>
> Acked-by: Minchan Kim
>
--
Thanks.
Zhang Yanfei
per migrate
> page, to 2.25 free pages per migrate page, without affecting success rates.
>
> Signed-off-by: Vlastimil Babka
Reviewed-by: Zhang Yanfei
> Cc: Minchan Kim
> Cc: Mel Gorman
> Cc: Joonsoo Kim
> Cc: Michal Nazarewicz
> Cc: Naoya Horiguchi
>
,
> it's simpler to just rely on the check done in isolate_freepages() without
> lock, and not pretend that the recheck under lock guarantees anything. It is
> just a heuristic after all.
>
> Signed-off-by: Vlastimil Babka
Reviewed-by: Zhang Yanfei
> Cc: Minchan Kim
>
On 04/21/2014 12:02 PM, Jianyu Zhan wrote:
> Hi, Yanfei,
>
> On Mon, Apr 21, 2014 at 9:00 AM, Zhang Yanfei
> wrote:
>> What should be exported?
>>
>> lru_cache_add()
>> lru_cache_add_anon()
>> lru_cache_add_file()
>>
>> It seems you onl
e(page);
> +	__lru_cache_add(page);
> +}
> +EXPORT_SYMBOL(lru_cache_add_file);
>
> /**
> * lru_cache_add - add a page to a page list
> * @page: the page to be added to the LRU.
> + *
> + * Queue the page for addition to the LRU via pagevec. The decision on
> wh
gt; to reclaim, dirty bit is set so VM can swap out the page instead of
> discarding.
>
> Firstly, heavy users would be general allocators(ex, jemalloc,
> tcmalloc and hope glibc supports it) and jemalloc/tcmalloc already
> have supported the feature for other OS(ex, FreeBSD)
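
The userspace side of what's described above is just the madvise() call
(a minimal sketch; MADV_FREE was still a proposed flag at this point):

	#include <sys/mman.h>

	static void free_lazily(void *buf, size_t len)
	{
		/*
		 * Mark the contents disposable. Pages are only reclaimed
		 * under memory pressure; if the process dirties them first,
		 * they are swapped out instead of discarded, as the
		 * changelog above explains.
		 */
		madvise(buf, len, MADV_FREE);
	}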
Reviewe
end memory block id, which should always be the same as phys_index.
> So it is removed here.
>
> Signed-off-by: Li Zhong
Reviewed-by: Zhang Yanfei
Still the nitpick there.
> ---
> Documentation/memory-hotplug.txt | 125 +++---
> drivers/b
Clear explanation and implementation!
Reviewed-by: Zhang Yanfei
On 04/11/2014 01:58 AM, Luiz Capitulino wrote:
> [Full introduction right after the changelog]
>
> Changelog
> -
>
> v3
>
> - Dropped unnecessary WARN_ON() call [Kirill]
> - Always check if
present the last section number of a memory block (for end_section_nr),
but what he did in the patch does not seem to match the log.
So what is the motivation for adding this 'end_phys_index' file here?
Confused.
Confused.
--
Thanks.
Zhang Yanfei
plain
> that memory blocks are made of memory sections.
>
> Thoughts?
I think the change is basically ok. So
Reviewed-by: Zhang Yanfei
Only one nitpick below.
>
> -Nathan
> ---
> Documentation/memory-hotplug.txt | 113 ---
> 1 file
pages to writeback */
> 	if (referenced_page && !PageSwapBacked(page))
> 		return PAGEREF_RECLAIM_CLEAN;
> @@ -932,6 +948,8 @@ static unsigned long shrink_page_list(struct list_head *page_list,
> 			goto activate_locked;
> 		case PAGEREF_KE
ortunately, the zone_reclaim_mode() path is already slow and it is the path
> that takes the hit.
>
> Signed-off-by: Mel Gorman
Reviewed-by: Zhang Yanfei
> ---
> include/linux/mmzone.h | 1 -
> mm/page_alloc.c| 15 +--
> 2 files changed, 1 insertion(+
t are
> sophisticated enough to know they need zone_reclaim_mode will detect it.
>
> Signed-off-by: Mel Gorman
Reviewed-by: Zhang Yanfei
> ---
> Documentation/sysctl/vm.txt | 17 +
> mm/page_alloc.c | 2 --
> 2 files changed, 9 insertions(+), 10 de
On 04/03/2014 10:37 AM, Li Zhong wrote:
> On Thu, 2014-04-03 at 09:37 +0800, Zhang Yanfei wrote:
>> Add ccing
>>
>> On 04/02/2014 04:56 PM, Li Zhong wrote:
>>> I noticed the phys_index and end_phys_index under
>>> /sys/devices/system/memory/memoryXXX/ have
s/MADV_NODUMP/MADV_DONTDUMP/
Signed-off-by: Zhang Yanfei
---
include/uapi/asm-generic/mman-common.h |2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/include/uapi/asm-generic/mman-common.h
b/include/uapi/asm-generic/mman-common.h
index 4164529..ddc3b36 100644
--- a
n't know where we are with respect to these things and
I doubt if many of our users know either. How can Michael write a manpage for
this if we don't tell him what it all does?
--
Thanks
Zhang Yanfei
>
> I tweaked jemalloc t
py 4MB vm size permanently. 100 pages (just 400KB) could
>> + * take 400MB with bad luck.
>> + *
>
> If you use this function for fewer than VMAP_MAX_ALLOC pages, it could be
> faster than vmap so it's good, but if you mix long-life and short-life object
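
For readers, a minimal usage sketch of the API being discussed (using
the vm_map_ram() signature of that era, with the prot argument; pages[]
and nr_pages are assumed to be set up by the caller):

	/* Map nr_pages temporarily; small counts use the fast per-cpu
	 * allocator, which is why short-lived mappings are the good fit. */
	void *addr = vm_map_ram(pages, nr_pages, -1 /* any node */, PAGE_KERNEL);
	if (addr) {
		/* ... use the temporary mapping ... */
		vm_unmap_ram(addr, nr_pages);
	}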
for the second time, it is promoted to
>>>> + *the active list, shrinking the inactive list by one slot. This
>>>> + *also slides all inactive pages that were faulted into the cache
>>>> + *more recently than the activated page towards the tail of
NUMA_MISPLACED);
> 	if (nr_remaining) {
> +		if (!list_empty(&migratepages)) {
> +			list_del(&page->lru);
> +			dec_zone_page_state(page, NR_ISOLATED_ANON +
> +					page_is_file_cache(page));
st* parse SRAT earlier compared
to the current approach in this patchset, right?
Should we follow "Make it work first and optimize/beautify it later"?
If we have a scenario that requires parsing SRAT earlier, I think tejun
will have no objection to it.
--
Thanks.
Zhang Yanfei
Hello tejun,
On 10/14/2013 11:19 PM, Tejun Heo wrote:
> Hey,
>
> On Mon, Oct 14, 2013 at 11:06:14PM +0800, Zhang Yanfei wrote:
>> a little difference here, consider a 16-GB node. If we parse SRAT earlier,
>> and still use the top-down allocation, and kernel image is loaded a
Hello guys, this is part 2 of our memory hotplug work. This part
is based on part 1:
"x86, memblock: Allocate memory near kernel image before SRAT parsed"
which is based on 3.12-rc4.
You can refer to part 1 here: https://lkml.org/lkml/2013/10/10/644
Any comments are welcome! Thanks!
[Prob
g Liu
Signed-off-by: Tang Chen
Signed-off-by: Zhang Yanfei
Reviewed-by: Wanpeng Li
Acked-by: Toshi Kani
---
arch/x86/mm/numa.c | 11 ---
1 files changed, 8 insertions(+), 3 deletions(-)
diff --git a/arch/x86/mm/numa.c b/arch/x86/mm/numa.c
index 24aec58..e17db5d 100644
--- a/arch/x86/
ernel will
arrange hotpluggable memory in SRAT as ZONE_MOVABLE. And if users do this, all
the other movablecore=nn@ss and kernelcore=nn@ss options should be ignored.
For those who don't want this, just specify nothing. The kernel will act as
before.
Signed-off-by: Tang Chen
Signed-off-by: Zhang Y
ns in
the default top-down allocation function if movable_node boot option is
specified.
Signed-off-by: Tang Chen
Signed-off-by: Zhang Yanfei
---
include/linux/memblock.h | 18 ++
mm/memblock.c| 12
mm/memory_hotplug.c |1 +
3 files change
From: Tang Chen
At very early boot time, the kernel has to use some memory, such as for
loading the kernel image. We cannot prevent this anyway. So any
node the kernel resides in should be un-hotpluggable.
Signed-off-by: Zhang Yanfei
Reviewed-by: Zhang Yanfei
---
arch/x86/mm/numa.c | 44
From: Tang Chen
When parsing SRAT, we know which memory areas are hotpluggable.
So we invoke the function memblock_mark_hotplug(), introduced by the
previous patch, to mark hotpluggable memory in memblock.
Signed-off-by: Tang Chen
Reviewed-by: Zhang Yanfei
---
arch/x86/mm/numa.c |2 ++
arch/x86
From: Tang Chen
Signed-off-by: Tang Chen
Reviewed-by: Zhang Yanfei
---
arch/metag/mm/init.c |3 ++-
arch/metag/mm/numa.c |3 ++-
arch/microblaze/mm/init.c |3 ++-
arch/powerpc/mm/mem.c |2 +-
arch/powerpc/mm/numa.c|8 +---
arch/sh/kernel/setup.c
flag to indicate the
hotpluggable memory regions in memblock and a function memblock_mark_hotplug()
to mark hotpluggable memory if we find one.
Signed-off-by: Tang Chen
Reviewed-by: Zhang Yanfei
---
include/linux/memblock.h | 17 +++
mm/memblock.c| 52
use MEMBLK_DEFAULT | MEMBLK_HOTPLUG or just MEMBLK_HOTPLUG. So remove
MEMBLK_DEFAULT (which is 0), and just use 0 by default to avoid confusing
users.
Suggested-by: Wen Congyang
Suggested-by: Liu Jiang
Signed-off-by: Tang Chen
Reviewed-by: Zhang Yanfei
---
include/linux/memblock.
e not worried
about the approach.
Thanks.
On 10/11/2013 04:13 AM, Zhang Yanfei wrote:
> Hello, here is the v7 version. Any comments are welcome!
>
> The v7 version is based on linus's tree (3.12-rc4)
> HEAD is:
> commit d0e639c9e06d44e713170031fe05fb60ebe680af
> Author: Linus T
t want
to lose their NUMA performance, just don't specify anything. The kernel
will work as before.
Suggested-by: Kamezawa Hiroyuki
Suggested-by: Ingo Molnar
Acked-by: Tejun Heo
Acked-by: Toshi Kani
Signed-off-by: Tang Chen
Signed-off-by: Zhang Yanfei
---
Documentation/kernel-paramete
reserve_crashkernel() after SRAT is parsed.
Acked-by: Tejun Heo
Acked-by: Toshi Kani
Signed-off-by: Tang Chen
Signed-off-by: Zhang Yanfei
---
arch/x86/kernel/setup.c |9 +++--
1 files changed, 7 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index
]
Acked-by: Tejun Heo
Acked-by: Toshi Kani
Signed-off-by: Tang Chen
Signed-off-by: Zhang Yanfei
---
arch/x86/mm/init.c | 66 ++-
1 files changed, 64 insertions(+), 2 deletions(-)
diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index ea2be
into separate
functions, and choose which way to use in init_mem_mapping(),
which makes the code clearer.
Acked-by: Tejun Heo
Acked-by: Toshi Kani
Signed-off-by: Tang Chen
Signed-off-by: Zhang Yanfei
---
arch/x86/mm/init.c | 60 ++-
1 files
emory top-down. So this patch introduces
a new bottom-up allocation mode to allocate memory bottom-up. And later
when we use this allocation direction to allocate memory, we will limit
the start address to above the kernel.
Acked-by: Toshi Kani
Signed-off-by: Tang Chen
Signed-off-by: Zhang Y
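
A sketch of the bottom-up mode this changelog describes, simplified
from the memblock changes in this series (exact names may differ):

	/* Try allocating above the kernel image first when bottom-up
	 * mode is set; fall back to the usual top-down search. */
	static phys_addr_t find_range(phys_addr_t size, phys_addr_t align,
				      phys_addr_t start, phys_addr_t end,
				      int nid)
	{
		if (memblock_bottom_up()) {
			phys_addr_t kernel_end = __pa_symbol(_end);
			phys_addr_t bottom_up_start = max(start, kernel_end);
			phys_addr_t ret;

			ret = __memblock_find_range_bottom_up(bottom_up_start,
					end, size, align, nid);
			if (ret)
				return ret;
			/* Bottom-up failed: fall back to top-down. */
		}
		return __memblock_find_range_top_down(start, end, size,
						      align, nid);
	}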
Kani
Signed-off-by: Tang Chen
Signed-off-by: Zhang Yanfei
---
mm/memblock.c | 47 ++-
1 files changed, 34 insertions(+), 13 deletions(-)
diff --git a/mm/memblock.c b/mm/memblock.c
index 0ac412a..accff10 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
Hello, here is the v7 version. Any comments are welcome!
The v7 version is based on linus's tree (3.12-rc4)
HEAD is:
commit d0e639c9e06d44e713170031fe05fb60ebe680af
Author: Linus Torvalds
Date: Sun Oct 6 14:00:20 2013 -0700
Linux 3.12-rc4
[Problem]
The current Linux cannot migrate pages
Hello guys,
On 10/10/2013 07:26 AM, Zhang Yanfei wrote:
> Hello Peter,
>
> On 10/10/2013 07:10 AM, H. Peter Anvin wrote:
>> On 10/09/2013 02:45 PM, Zhang Yanfei wrote:
>>>>
>>>> I would also argue that in the VM scenario -- and arguable even in the
>>
Hello tejun
CC: Peter
On 10/07/2013 08:00 AM, H. Peter Anvin wrote:
> On 10/03/2013 07:00 PM, Zhang Yanfei wrote:
>> From: Tang Chen
>>
>> The Linux kernel cannot migrate pages used by the kernel. As a
>> result, kernel pages cannot be hot-removed. So we cannot allo
t want
to lose their NUMA performance, just don't specify anything. The kernel
will work as before.
Suggested-by: Kamezawa Hiroyuki
Acked-by: Tejun Heo
Signed-off-by: Tang Chen
Signed-off-by: Zhang Yanfei
---
Documentation/kernel-parameters.txt |3 +++
arch/x86/mm/numa.c
From: Zhang Yanfei
The macro is not used anywhere, so remove it.
Signed-off-by: Zhang Yanfei
---
mm/page_alloc.c |2 --
1 files changed, 0 insertions(+), 2 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 1fb13b6..9d8508d 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
From: Zhang Yanfei
Implement an empty get_pfn_range_for_nid() for !CONFIG_HAVE_MEMBLOCK_NODE_MAP,
so that we can remove the #ifdef in free_area_init_node().
Signed-off-by: Zhang Yanfei
---
mm/page_alloc.c |7 +--
1 files changed, 5 insertions(+), 2 deletions(-)
diff --git a/mm
From: Zhang Yanfei
We pass the number of pages which hold page structs of a memory
section to function free_map_bootmem. This is right when
!CONFIG_SPARSEMEM_VMEMMAP but wrong when CONFIG_SPARSEMEM_VMEMMAP.
When CONFIG_SPARSEMEM_VMEMMAP, we should pass the number of pages
of a memory section to
Hello andrew,
On 10/04/2013 04:42 AM, Andrew Morton wrote:
> On Thu, 03 Oct 2013 11:32:02 +0800 Zhang Yanfei
> wrote:
>
>> We pass the number of pages which hold page structs of a memory
>> section to function free_map_bootmem. This is right when
>> !CONFIG_SPARS
Hello wanpeng,
On 10/05/2013 01:54 PM, Wanpeng Li wrote:
> Hi Yanfei,
> On Thu, Oct 03, 2013 at 11:32:02AM +0800, Zhang Yanfei wrote:
>> From: Zhang Yanfei
>>
>> We pass the number of pages which hold page structs of a memory
>> section to function free_m
t want
to lose their NUMA performance, just don't specify anything. The kernel
will work as before.
Suggested-by: Kamezawa Hiroyuki
Suggested-by: Ingo Molnar
Acked-by: Tejun Heo
Signed-off-by: Tang Chen
Signed-off-by: Zhang Yanfei
---
Documentation/kernel-parameters.txt
reserve_crashkernel() after SRAT is parsed.
Acked-by: Tejun Heo
Signed-off-by: Tang Chen
Signed-off-by: Zhang Yanfei
---
arch/x86/kernel/setup.c |9 +++--
1 files changed, 7 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index f0de629..b5e350d 100644
higher memory.
Acked-by: Tejun Heo
Signed-off-by: Tang Chen
Signed-off-by: Zhang Yanfei
---
arch/x86/mm/init.c | 71 ++-
1 files changed, 69 insertions(+), 2 deletions(-)
diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index ea2be79..5cea9ed
into separate
functions, and choose which way to use in init_mem_mapping(),
which makes the code clearer.
Acked-by: Tejun Heo
Acked-by: Toshi Kani
Signed-off-by: Tang Chen
Signed-off-by: Zhang Yanfei
---
arch/x86/mm/init.c | 60 ++-
1 files
emory top-down. So this patch introduces
a new bottom-up allocation mode to allocate memory bottom-up. And later
when we use this allocation direction to allocate memory, we will limit
the start address to above the kernel.
Signed-off-by: Tang Chen
Signed-off-by: Zhang Yanfei
---
include/linux/membl
Kani
Signed-off-by: Tang Chen
Signed-off-by: Zhang Yanfei
---
mm/memblock.c | 47 ++-
1 files changed, 34 insertions(+), 13 deletions(-)
diff --git a/mm/memblock.c b/mm/memblock.c
index 0ac412a..accff10 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
Hello, here is the v6 version. Any comments are welcome!
The v6 version is based on linus's tree (3.12-rc3)
HEAD is:
commit 15c03dd4859ab16f9212238f29dd315654aa94f6
Author: Linus Torvalds
Date: Sun Sep 29 15:02:38 2013 -0700
Linux 3.12-rc3
[Problem]
The current Linux cannot migrate page
From: Zhang Yanfei
Fix typo in __page_to_pfn comment: s/encorded/encoded.
Signed-off-by: Zhang Yanfei
---
include/asm-generic/memory_model.h |2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/include/asm-generic/memory_model.h
b/include/asm-generic/memory_model.h
index
From: Zhang Yanfei
To be consistent with early_ioremap which had a change in
commit 4f4319a ("x86/ioremap: Correct function name output"),
let the compiler enter the function name too.
Signed-off-by: Zhang Yanfei
---
arch/x86/mm/ioremap.c |8
1 files changed, 4 insert
From: Zhang Yanfei
We pass the number of pages which hold page structs of a memory
section to function free_map_bootmem. This is right when
!CONFIG_SPARSEMEM_VMEMMAP but wrong when CONFIG_SPARSEMEM_VMEMMAP.
When CONFIG_SPARSEMEM_VMEMMAP, we should pass the number of pages
of a memory section to