On Thu, Oct 29, 2020 at 05:27:17PM +0100, David Hildenbrand wrote:
> Let's revert what we did in case something goes wrong and we return an
> error.
Dumb question, but shouldn't we do this for other arches as well?
--
Oscar Salvador
SUSE L3
On Thu, Oct 29, 2020 at 05:27:16PM +0100, David Hildenbrand wrote:
> Let's print a warning similar to the one in arch_add_linear_mapping() instead of
> WARN_ON_ONCE() and eventually crashing the kernel.
>
> Cc: Michael Ellerman
> Cc: Benjamin Herrenschmidt
> Cc: Paul Mackerras
> Cc: Rashmica Gupta
> C
On Thu, Oct 29, 2020 at 08:57:32AM -0700, Yang Shi wrote:
> IMHO, we don't have to modify those two places at all. They are used
> to rebalance the anon lru active/inactive ratio even if we did not try
> to evict anon pages at all, so "total_swap_pages" is used instead of
> checking swappiness and
On 2020-10-16 15:42, Michal Hocko wrote:
OK, I finally managed to convince my friday brain to think and grasped
what the code is intended to do. The loop is hairy and we want to
prevent spurious EIO when all the pages are on a proper node. So
the check has to be done inside the loop. Anyway
On 2020-10-16 14:31, Michal Hocko wrote:
I do not like the fix though. The code is really confusing. Why should
we check for flags in each iteration of the loop when it cannot change?
Also why should we take the ptl lock in the first place when the loop is
broken out of immediately?
About checki
On 2020-10-15 14:15, Shijie Luo wrote:
When flags don't have MPOL_MF_MOVE or MPOL_MF_MOVE_ALL bits, the code
breaks out of the loop, and passing the original pte - 1 to
pte_unmap_unlock() does not seem like a good idea.
Signed-off-by: Shijie Luo
Signed-off-by: linmiaohe
---
mm/mempolicy.c | 6 +-
1 file changed, 5 ins
On 2020-10-07 18:17, Dave Hansen wrote:
From: Dave Hansen
Reclaim-based migration is attempting to optimize data placement in
memory based on the system topology. If the system changes, so must
the migration ordering.
The implementation here is pretty simple and entirely unoptimized. On
any
On 2020-09-22 19:03, Andrew Morton wrote:
On Tue, 22 Sep 2020 15:56:36 +0200 Oscar Salvador
wrote:
This patchset is the latest version of soft offline rework patchset
targetted for v5.9.
Thanks.
Where do we now stand with the followon patches:
mmhwpoison-take-free-pages-off-the-buddy-free
On 2020-09-19 02:23, Andrew Morton wrote:
On Fri, 18 Sep 2020 09:58:22 +0200 osalva...@suse.de wrote:
I just found out yesterday that the patchset Naoya sent has diverged
from mine in some aspects that lead to some bugs [1].
This was due to a misunderstanding so no blame here.
So, patch#8 and p
On 2020-08-06 20:49, nao.horigu...@gmail.com wrote:
From: Oscar Salvador
This patch changes the way we set and handle in-use poisoned pages.
Until now, poisoned pages were released to the buddy allocator, trusting
that the checks that take place prior to handing out the page would act
as a safety net
On 2020-09-17 17:27, HORIGUCHI NAOYA wrote:
Sorry, I modified the patches based on a different assumption from
yours.
I first thought of taking the page off after confirming the error page
is freed back to buddy. This approach leaves the possibility of reusing
the error page (which is acceptable
On 2020-09-16 18:30, osalva...@suse.de wrote:
On 2020-09-16 16:46, Aristeu Rozanski wrote:
Hi Oscar,
On Wed, Sep 16, 2020 at 04:09:30PM +0200, Oscar Salvador wrote:
On Wed, Sep 16, 2020 at 09:53:58AM -0400, Aristeu Rozanski wrote:
Can you try the other patch I posted in response to Naoya?
Sa
On 2020-09-16 16:46, Aristeu Rozanski wrote:
Hi Oscar,
On Wed, Sep 16, 2020 at 04:09:30PM +0200, Oscar Salvador wrote:
On Wed, Sep 16, 2020 at 09:53:58AM -0400, Aristeu Rozanski wrote:
Can you try the other patch I posted in response to Naoya?
Same thing:
[ 369.195056] Soft offlining pfn 0x
On 2020-09-16 20:34, David Hildenbrand wrote:
When adding separate memory blocks via add_memory*() and onlining them
immediately, the metadata (especially the memmap) of the next block
will be
placed onto one of the just added+onlined blocks. This creates a chain
of unmovable allocations: If the
On 2020-09-16 19:58, Aristeu Rozanski wrote:
On Wed, Sep 16, 2020 at 06:34:52PM +0200, osalva...@suse.de wrote:
Fat fingers, sorry:
Ok, this is something different.
The race you saw previously is kinda normal as there is a race window
between spotting a freepage and taking it off the buddy free
On 2020-09-08 09:56, Oscar Salvador wrote:
The important bit of this patchset is patch#1, which is a fix to take
HWPoison pages off a buddy freelist, since it can lead us to having
HWPoison pages back in the game without anyone noticing it.
So fix it (we did that already for soft_offline_pag
On 2020-09-09 12:54, Vlastimil Babka wrote:
Thanks! I expect no performance change while no isolation is in
progress, as
there are no new tests added in alloc/free paths. During page isolation
there's
a single drain instead of once-per-pageblock, which is a benefit. But
the
pcplists are effect
On 2020-08-04 03:49, Qian Cai wrote:
Well, each iteration will mmap/munmap, so there should be no leaking.
https://gitlab.com/cailca/linux-mm/-/blob/master/random.c#L376
It also seems to me madvise(MADV_SOFT_OFFLINE) does start to fragment
memory
somehow, because after this "madvise: Cannot al
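For anyone who wants to poke at this without the full test program, a minimal user-space sketch of the mmap/madvise(MADV_SOFT_OFFLINE)/munmap loop being discussed is below. It only assumes glibc headers that expose MADV_SOFT_OFFLINE, needs CAP_SYS_ADMIN and a kernel with CONFIG_MEMORY_FAILURE, and it really does soft-offline pages, so run it on a throwaway VM only; the real test is the random.c linked above.

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
        long page = sysconf(_SC_PAGESIZE);

        for (int i = 0; i < 16; i++) {
                char *p = mmap(NULL, page, PROT_READ | PROT_WRITE,
                               MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
                if (p == MAP_FAILED)
                        return 1;
                memset(p, 0, page);                     /* fault the page in */
                if (madvise(p, page, MADV_SOFT_OFFLINE))
                        perror("madvise");              /* e.g. EPERM, EBUSY */
                munmap(p, page);
        }
        return 0;
}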
On 2020-07-20 10:27, osalva...@suse.de wrote:
On 2020-07-17 08:55, HORIGUCHI NAOYA wrote:
I ran Qian Cai's test program (https://github.com/cailca/linux-mm) on a
small (4GB memory) VM, and weirdly found that (1) the target hugepages
are not always dissolved and (2) dissolved hugepages are sti
On 2020-07-17 08:55, HORIGUCHI NAOYA wrote:
I ran Qian Cai's test program (https://github.com/cailca/linux-mm) on a
small (4GB memory) VM, and weirdly found that (1) the target hugepages
are not always dissolved and (2) dissolved hugepages are still counted
in "HugePages_Total:". See below:
On 2020-07-16 14:38, Oscar Salvador wrote:
From: David Woodhouse
Sorry for the noise.
This should not be here.
I dunno how this patch sneaked in.
Please ignore it.
On 2019-10-11 23:32, Qian Cai wrote:
# /opt/ltp/runtest/bin/move_pages12
move_pages12.c:263: INFO: Free RAM 258988928 kB
move_pages12.c:281: INFO: Increasing 2048kB hugepages pool on node 0 to
4
move_pages12.c:291: INFO: Increasing 2048kB hugepages pool on node 8 to
4
move_pages12.c:207: INFO:
On 2019-09-11 08:22, Naoya Horiguchi wrote:
I found another panic ...
Hi Naoya,
Thanks for giving it a try. Are these testcases public?
I will definitely take a look and try to solve these cases.
Thanks!
This testcase is testing the corner case where hugepage migration fails
by allocation fa
On 2019-07-24 22:11, Dan Williams wrote:
On Tue, Jun 25, 2019 at 12:53 AM Oscar Salvador
wrote:
This patch introduces MHP_MEMMAP_DEVICE and MHP_MEMMAP_MEMBLOCK flags,
and prepares the callers that add memory to take a "flags" parameter.
This "flags" parameter will be evaluated later on in Patc
On 2019-05-07 01:39, Dan Williams wrote:
Towards enabling memory hotplug to track partial population of a
section, introduce 'struct mem_section_usage'.
A pointer to a 'struct mem_section_usage' instance replaces the
existing
pointer to a 'pageblock_flags' bitmap. Effectively it adds one more
On 2019-05-07 01:39, Dan Williams wrote:
Prepare for hot{plug,remove} of sub-ranges of a section by tracking a
sub-section active bitmask, each bit representing a PMD_SIZE span of
the
architecture's memory hotplug section size.
The implications of a partially populated section is that pfn_vali
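For a feel of the bitmap size this implies, a back-of-the-envelope sketch assuming the usual x86_64 values (128 MiB memory sections, 2 MiB PMDs); other configurations will differ:

#include <stdio.h>

int main(void)
{
        unsigned long section_bytes = 128UL << 20;  /* SECTION_SIZE_BITS = 27 on x86_64 */
        unsigned long pmd_bytes     = 2UL << 20;    /* PMD_SIZE with 4 KiB pages        */

        /* one bit per PMD-sized span -> 64 bits, i.e. one unsigned long per section */
        printf("sub-sections per section: %lu\n", section_bytes / pmd_bytes);
        return 0;
}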
On 2018-12-17 16:29, Michal Hocko wrote:
On Mon 17-12-18 16:06:51, Oscar Salvador wrote:
[...]
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index a6e7bfd18cde..18d41e85f672 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -8038,11 +8038,12 @@ bool has_unmovable_pages(struct zone *zone,
s
On 2018-12-11 11:18, Michal Hocko wrote:
Currently, if we fail to isolate a single page, we put all already
isolated pages back to their LRU and we bail out from the function.
This is quite suboptimal, as this will force us to start over again
because scan_movable_pages will give us the same rang
On 2018-12-11 09:50, Oscar Salvador wrote:
- } else {
- pr_warn("failed to isolate pfn %lx\n", pfn);
- dump_page(page, "isolation failed");
- put_page(page);
- /* Because we don't have big zone-
This commit adds shake_page() for mlocked pages to make sure that the
target
page is flushed out from LRU cache. Without this shake_page(),
subsequent
delete_from_lru_cache() (from me_pagecache_clean()) fails to isolate
it and
the page will finally return to the LRU list. So this scenario lead
> Btw. the way how we drop all the work on the first page that we
> cannot
> isolate is just goofy. Why don't we simply migrate all that we
> already
> have on the list and go on? Something for a followup cleanup though.
Indeed, that is just wrong.
I will try to send a followup cleanup to fix th
On 2018-12-03 12:16, David Hildenbrand wrote:
Let's use the easier to read (and not mess up) variants:
- Use DEVICE_ATTR_RO
- Use DEVICE_ATTR_WO
- Use DEVICE_ATTR_RW
instead of the more generic DEVICE_ATTR() we're using right now.
We have to rename most callback functions. By fixing the intendat
On 2018-12-03 11:03, Michal Hocko wrote:
Debugged-by: Oscar Salvador
Cc: stable
Signed-off-by: Michal Hocko
Bit by bit memory-hotplug is getting trained :-)
Reviewed-by: Oscar Salvador
> Signed-off-by: Michal Hocko
[...]
> + do {
> + for (pfn = start_pfn; pfn;)
> + {
> + /* start memory hot removal */
Should we change that comment? I mean, this is not really the hot-
removal stage.
Maybe "start memory migration" suits better? o
On Tue, 2018-11-20 at 14:43 +0100, Michal Hocko wrote:
> From: Michal Hocko
>
> do_migrate_range has been limiting the number of pages to migrate to
> 256
> for some reason which is not documented.
When looking back at old memory-hotplug commits one feels pretty sad
about the brevity of the cha
On Fri, 2018-11-16 at 14:41 -0800, Dave Hansen wrote:
> On 11/16/18 2:12 AM, Oscar Salvador wrote:
> > Physical memory hotadd has to allocate a memmap (struct page array)
> > for
> > the newly added memory section. Currently, kmalloc is used for
> > those
> > allocations.
>
> Did you literally mea
On Mon, 2018-11-12 at 21:28 +, Pavel Tatashin wrote:
> >
> > This collides with the refactoring of hmm, to be done in terms of
> > devm_memremap_pages(). I'd rather not introduce another common
> > function *beneath* hmm and devm_memremap_pages() and rather make
> > devm_memremap_pages() the c
> This collides with the refactoring of hmm, to be done in terms of
> devm_memremap_pages(). I'd rather not introduce another common
> function *beneath* hmm and devm_memremap_pages() and rather make
> devm_memremap_pages() the common function.
Hi Dan,
That is true.
Previous version of this patch
On Fri, 2018-11-16 at 12:22 +0100, Michal Hocko wrote:
> On Fri 16-11-18 11:47:01, osalvador wrote:
> > On Fri, 2018-11-16 at 09:30 +0100, Michal Hocko wrote:
> > > From: Michal Hocko
> > > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > > index a919ba5cb3c8
On Fri, 2018-11-16 at 10:57 +0100, Michal Hocko wrote:
> On Thu 15-11-18 13:37:35, Andrew Morton wrote:
> [...]
> > Worse, the situations in which managed_zone() != populated_zone()
> > are
> > rare(?), so it will take a long time for problems to be discovered,
> > I
> > expect.
>
> We would basic
On Fri, 2018-11-16 at 09:30 +0100, Michal Hocko wrote:
> From: Michal Hocko
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index a919ba5cb3c8..ec2c7916dc2d 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -7845,6 +7845,7 @@ bool has_unmovable_pages(struct zone *zone,
> struct page *
On Fri, 2018-11-16 at 09:30 +0100, Michal Hocko wrote:
> From: Michal Hocko
>
> The memory offlining failure reporting is inconsistent and
> insufficient.
> Some error paths simply do not report the failure to the log at all.
> When we do report there are no details about the reason of the
> fail
On Fri, 2018-11-16 at 09:30 +0100, Michal Hocko wrote:
> From: Michal Hocko
>
> This function is never called from a context which would provide
> misaligned pfn range so drop the pointless check.
>
> Signed-off-by: Michal Hocko
I vaguely remember that someone reported a problem about misalign
On Wed, 2018-11-07 at 08:35 +0100, Michal Hocko wrote:
> On Wed 07-11-18 07:35:18, Balbir Singh wrote:
> > The check seems to be quite aggressive and in a loop that iterates
> > pages, but has nothing to do with the page, did you mean to make
> > the check
> >
> > zone_idx(page_zone(page)) == ZONE
On Tue, 2018-11-06 at 10:55 +0100, Michal Hocko wrote:
> From: Michal Hocko
>
> Reported-and-tested-by: Baoquan He
> Acked-by: Baoquan He
> Fixes: "mm, memory_hotplug: make has_unmovable_pages more robust")
> Signed-off-by: Michal Hocko
Looks good to me.
Reviewed-by: Oscar Salvador
Oscar
From: Oscar Salvador
unregister_memory_section() calls remove_memory_section()
with three arguments:
* node_id
* section
* phys_device
Neither node_id nor phys_device is used.
Let us drop them from the function.
Signed-off-by: Oscar Salvador
---
drivers/base/memory.c | 5 ++---
1 file chang
From: Oscar Salvador
This patchset does some cleanups and refactoring in the memory-hotplug code.
The first and the second patch are pretty straightforward, as they
only remove unused arguments/checks.
The third one refactors unregister_mem_sect_under_nodes.
This is needed to have a proper fall
From: Oscar Salvador
unregister_mem_sect_under_nodes() tries to allocate a nodemask_t
in order to check within the loop which nodes have already been unlinked,
so we do not repeat the operation on them.
NODEMASK_ALLOC calls kmalloc() if NODES_SHIFT > 8, otherwise
it just declares a nodemask_t v
From: Oscar Salvador
Before calling to unregister_mem_sect_under_nodes(),
remove_memory_section() already checks if we got a valid
memory_block.
No need to check that again in unregister_mem_sect_under_nodes().
Signed-off-by: Oscar Salvador
---
drivers/base/node.c | 4
1 file changed, 4
From: Oscar Salvador
unregister_memory_section() calls remove_memory_section()
with three arguments:
* node_id
* section
* phys_device
Neither node_id nor phys_device is used.
Let us drop them from the function.
Signed-off-by: Oscar Salvador
---
drivers/base/memory.c | 5 ++---
1 file chang
From: Oscar Salvador
This patchset is about cleaning up/refactoring a few functions
from the memory-hotplug code.
The first and the second patch are pretty straightforward, as they
only remove unused arguments/checks.
The third one changes the layout of unregister_mem_sect_under_nodes a bit.
From: Oscar Salvador
Before calling to unregister_mem_sect_under_nodes(),
remove_memory_section() already checks if we got a valid
memory_block.
No need to check that again in unregister_mem_sect_under_nodes().
Signed-off-by: Oscar Salvador
---
drivers/base/node.c | 4
1 file changed, 4
From: Oscar Salvador
With the assumption that the relationship between
memory_block <-> node is 1:1, we can refactor this function a bit.
This assumption is being taken from register_mem_sect_under_node()
code.
register_mem_sect_under_node() takes the mem_blk's nid, and compares it
to the pfn's
From: Oscar Salvador
This tries to fix [1], which was reported by David Hildenbrand, and also
does some cleanups/refactoring.
I am sending this as RFC to see if the direction I am going is right before
spending more time into it.
And also to gather feedback about hmm/zone_device stuff.
The code
From: Oscar Salvador
This patch is only a preparation for the following-up patches.
The idea is to remove the zone parameter and pass the nid instead.
The zone parameter was needed because down the chain we call
__remove_zone, which adjusts the spanned pages of a zone/node.
online_pages() increm
From: Oscar Salvador
This patch refactors shrink_zone_span and shrink_pgdat_span functions.
In case that find_smallest/biggest_section do not return any pfn,
it means that the zone/pgdat has no online sections left, so we can
set the respective values to 0:
zone case:
zone->zone_star
From: Oscar Salvador
Currently, we decrement zone/node spanned_pages when we
__remove__ the memory.
This is not really great.
Incrementing of spanned pages is done in online_pages() path,
decrementing spanned pages should be moved to offline_pages().
This, besides making the core more consisten
From: Oscar Salvador
Moving the #ifdefs out of the function makes it easier to follow.
Signed-off-by: Oscar Salvador
Acked-by: Michal Hocko
Reviewed-by: Pavel Tatashin
---
mm/page_alloc.c | 50 +-
1 file changed, 37 insertions(+), 13 deletions(
From: Pavel Tatashin
__paginginit is the same thing as __meminit, except that for platforms
without sparsemem it is defined as __init.
Remove __paginginit and use __meminit. Use __ref in one single function
that merges __meminit and __init sections: setup_usemap().
Signed-off-by: Pavel Tatashi
From: Oscar Salvador
Let us move the code between CONFIG_DEFERRED_STRUCT_PAGE_INIT
to an inline function.
Not having an ifdef in the function makes the code more readable.
Signed-off-by: Oscar Salvador
Acked-by: Michal Hocko
Reviewed-by: Pavel Tatashin
---
mm/page_alloc.c | 26 ++
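As a stand-alone illustration of the shape of these cleanups (both the "#ifdefs out of the function" patch above and this one), here is a compilable user-space sketch. The struct and helper names are made up for illustration; only the pattern of hiding the #ifdef behind a small inline helper mirrors the patches.

#include <stdio.h>

struct pgdat_stub { unsigned long first_deferred_pfn; };

#ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT
static inline void pgdat_set_deferred_range(struct pgdat_stub *pgdat)
{
        pgdat->first_deferred_pfn = ~0UL;       /* "nothing initialized yet" marker */
}
#else
static inline void pgdat_set_deferred_range(struct pgdat_stub *pgdat)
{
        (void)pgdat;                            /* nothing to do when the option is off */
}
#endif

static void free_area_init_core_stub(struct pgdat_stub *pgdat)
{
        pgdat_set_deferred_range(pgdat);        /* no #ifdef needed at the call site */
        /* ... rest of the initialization ... */
}

int main(void)
{
        struct pgdat_stub pgdat = { 0 };

        free_area_init_core_stub(&pgdat);
        printf("first_deferred_pfn: %lu\n", pgdat.first_deferred_pfn);
        return 0;
}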
From: Oscar Salvador
Currently, whenever a new node is created/re-used from the memhotplug path,
we call free_area_init_node()->free_area_init_core().
But there is some code that we do not really need to run when we are coming
from such path.
free_area_init_core() performs the following actions:
From: Pavel Tatashin
zone->node is configured only when CONFIG_NUMA=y, so it is a good idea to
have inline functions to access this field in order to avoid ifdef's in
c files.
Signed-off-by: Pavel Tatashin
Signed-off-by: Oscar Salvador
Reviewed-by: Oscar Salvador
Acked-by: Michal Hocko
---
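A compilable sketch of the accessor idea follows; the struct is a stub and the helper names are chosen for illustration, but the point is the same: the #ifdef lives once next to the field, and .c files call the helpers unconditionally.

#include <stdio.h>

struct zone {
#ifdef CONFIG_NUMA
        int node;
#endif
        unsigned long spanned_pages;
};

#ifdef CONFIG_NUMA
static inline int zone_to_nid(struct zone *zone) { return zone->node; }
static inline void zone_set_nid(struct zone *zone, int nid) { zone->node = nid; }
#else
static inline int zone_to_nid(struct zone *zone) { (void)zone; return 0; }
static inline void zone_set_nid(struct zone *zone, int nid) { (void)zone; (void)nid; }
#endif

int main(void)
{
        struct zone zone = { 0 };

        zone_set_nid(&zone, 1);
        /* prints 1 when built with -DCONFIG_NUMA, 0 otherwise */
        printf("zone nid: %d\n", zone_to_nid(&zone));
        return 0;
}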
From: Oscar Salvador
Changes:
v5 -> v6:
- Added patch from Pavel that removes __paginginit
- Convert all __meminit(old __paginginit) to __init
for functions we do not need after initialization.
- Move definition of free_area_init_core_hotplug
to includ
From: Oscar Salvador
__paginginit macro is being used to mark functions for:
a) Functions that we do not need to keep once the system is fully
initialized with regard to memory.
b) Functions that will be needed for the memory-hotplug code,
and because of that we need to keep them after init
From: Oscar Salvador
is_dev_zone() is using zone_id() to check if the zone is ZONE_DEVICE.
zone_id() looks pretty much the same as zone_idx(), and while the use of
zone_idx() is quite spread in the kernel, zone_id() is only being
used by is_dev_zone().
This patch removes zone_id() and makes is_d
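A simplified, self-contained sketch of the zone_idx()-style check (stub types, purely illustrative): the index of a zone is just its position in its pgdat's node_zones[] array, so no separate zone_id() helper is needed.

#include <stdio.h>

enum zone_type { ZONE_DMA, ZONE_NORMAL, ZONE_MOVABLE, ZONE_DEVICE, MAX_NR_ZONES };

struct pglist_data;

struct zone {
        struct pglist_data *zone_pgdat;
};

struct pglist_data {
        struct zone node_zones[MAX_NR_ZONES];
};

/* a zone's index == its offset inside its node's node_zones[] array */
static inline enum zone_type zone_idx(const struct zone *zone)
{
        return (enum zone_type)(zone - zone->zone_pgdat->node_zones);
}

static inline int is_dev_zone(const struct zone *zone)
{
        return zone_idx(zone) == ZONE_DEVICE;
}

int main(void)
{
        struct pglist_data pgdat;

        for (int i = 0; i < MAX_NR_ZONES; i++)
                pgdat.node_zones[i].zone_pgdat = &pgdat;

        printf("device zone: %d, normal zone: %d\n",
               is_dev_zone(&pgdat.node_zones[ZONE_DEVICE]),
               is_dev_zone(&pgdat.node_zones[ZONE_NORMAL]));
        return 0;
}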
From: Oscar Salvador
Changes:
v4 -> v5:
- Remove __ref from hotadd_new_pgdat and placed it to
free_area_init_core_hotplug. (Suggested by Pavel)
- Since free_area_init_core_hotplug is now allowed to be in a different
section (__ref), remove the __paginginit.)
From: Oscar Salvador
Currently, whenever a new node is created/re-used from the memhotplug path,
we call free_area_init_node()->free_area_init_core().
But there is some code that we do not really need to run when we are coming
from such path.
free_area_init_core() performs the following actions:
From: Pavel Tatashin
zone->node is configured only when CONFIG_NUMA=y, so it is a good idea to
have inline functions to access this field in order to avoid ifdef's in
c files.
Signed-off-by: Pavel Tatashin
Signed-off-by: Oscar Salvador
Reviewed-by: Oscar Salvador
Acked-by: Michal Hocko
---
From: Oscar Salvador
Let us move the code between CONFIG_DEFERRED_STRUCT_PAGE_INIT
to an inline function.
Not having an ifdef in the function makes the code more readable.
Signed-off-by: Oscar Salvador
Acked-by: Michal Hocko
Reviewed-by: Pavel Tatashin
---
mm/page_alloc.c | 25 ++
From: Oscar Salvador
Moving the #ifdefs out of the function makes it easier to follow.
Signed-off-by: Oscar Salvador
Acked-by: Michal Hocko
Reviewed-by: Pavel Tatashin
---
mm/page_alloc.c | 50 +-
1 file changed, 37 insertions(+), 13 deletions(
From: Pavel Tatashin
zone->node is configured only when CONFIG_NUMA=y, so it is a good idea to
have inline functions to access this field in order to avoid ifdef's in
c files.
Signed-off-by: Pavel Tatashin
Signed-off-by: Oscar Salvador
Reviewed-by: Oscar Salvador
Acked-by: Michal Hocko
---
From: Oscar Salvador
Let us move the code between CONFIG_DEFERRED_STRUCT_PAGE_INIT
to an inline function.
Not having an ifdef in the function makes the code more readable.
Signed-off-by: Oscar Salvador
Acked-by: Michal Hocko
Reviewed-by: Pavel Tatashin
---
mm/page_alloc.c | 25 ++
From: Oscar Salvador
Currently, whenever a new node is created/re-used from the memhotplug path,
we call free_area_init_node()->free_area_init_core().
But there is some code that we do not really need to run when we are coming
from such path.
free_area_init_core() performs the following actions:
From: Oscar Salvador
Changes:
v3 -> v4:
- Unify patch-5 and patch-4
- Make free_area_init_core __init (Suggested by Michal)
- Make zone_init_internals __paginginit (Suggested by Pavel)
- Add Reviewed-by/Acked-by:
v2 -> v3:
- Think better about split free_
From: Oscar Salvador
Moving the #ifdefs out of the function makes it easier to follow.
Signed-off-by: Oscar Salvador
Acked-by: Michal Hocko
Reviewed-by: Pavel Tatashin
---
mm/page_alloc.c | 50 +-
1 file changed, 37 insertions(+), 13 deletions(
From: Pavel Tatashin
zone->node is configured only when CONFIG_NUMA=y, so it is a good idea to
have inline functions to access this field in order to avoid ifdef's in
c files.
Signed-off-by: Pavel Tatashin
Signed-off-by: Oscar Salvador
Reviewed-by: Oscar Salvador
---
include/linux/mm.h |
From: Oscar Salvador
Currently, we call free_area_init_node() from the memhotplug path.
In there, we set some pgdat's fields, and call calculate_node_totalpages().
calculate_node_totalpages() calculates the # of pages the node has.
Since the node is either new, or we are re-using it, the zones b
From: Oscar Salvador
Moving the #ifdefs out of the function makes it easier to follow.
Signed-off-by: Oscar Salvador
Acked-by: Michal Hocko
Reviewed-by: Pavel Tatashin
---
mm/page_alloc.c | 50 +-
1 file changed, 37 insertions(+), 13 deletions(
From: Oscar Salvador
This patchset does three things:
1) Clean ups/refactor free_area_init_core/free_area_init_node
by moving the ifdefery out of the functions.
2) Move the pgdat/zone initialization in free_area_init_core to its
own function.
3) Introduce free_area_init_core_hotplug,
From: Oscar Salvador
Currently, whenever a new node is created/re-used from the memhotplug path,
we call free_area_init_node()->free_area_init_core().
But there is some code that we do not really need to run when we are coming
from such path.
free_area_init_core() performs the following actions:
From: Oscar Salvador
Let us move the code between CONFIG_DEFERRED_STRUCT_PAGE_INIT
to an inline function.
Not having an ifdef in the function makes the code more readable.
Signed-off-by: Oscar Salvador
Acked-by: Michal Hocko
---
mm/page_alloc.c | 25 -
1 file changed,
From: Oscar Salvador
Moving the #ifdefs out of the function makes it easier to follow.
Signed-off-by: Oscar Salvador
Acked-by: Michal Hocko
Reviewed-by: Pavel Tatashin
---
mm/page_alloc.c | 50 +-
1 file changed, 37 insertions(+), 13 deletions(
From: Pavel Tatashin
zone->node is configured only when CONFIG_NUMA=y, so it is a good idea to
have inline functions to access this field in order to avoid ifdef's in
c files.
Signed-off-by: Pavel Tatashin
Signed-off-by: Oscar Salvador
Reviewed-by: Oscar Salvador
---
include/linux/mm.h |
From: Oscar Salvador
Let us move the code between CONFIG_DEFERRED_STRUCT_PAGE_INIT
to an inline function.
Signed-off-by: Oscar Salvador
---
mm/page_alloc.c | 25 -
1 file changed, 16 insertions(+), 9 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index f7a
From: Oscar Salvador
We should only care about deferred initialization when booting.
Signed-off-by: Oscar Salvador
---
mm/page_alloc.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index d77bc2a7ec2c..5911b64a88ab 100644
--- a/mm/page_a
From: Oscar Salvador
In free_area_init_core we calculate the amount of managed pages
we are left with, by subtracting the memmap pages and the pages
reserved for dma.
With the values left, we also account the total of kernel pages and
the total of pages.
Since memmap pages are calculated from z
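Roughly, the accounting described above looks like this (made-up example numbers, not the kernel function itself):

#include <stdio.h>

int main(void)
{
        unsigned long spanned_pages = 262144;                     /* 1 GiB of 4 KiB pages     */
        unsigned long memmap_pages  = spanned_pages * 64 / 4096;  /* ~64 bytes of struct page */
        unsigned long dma_reserve   = 1024;                       /* pages set aside for DMA  */
        unsigned long managed       = spanned_pages - memmap_pages - dma_reserve;

        printf("spanned %lu, memmap %lu, dma_reserve %lu -> managed %lu\n",
               spanned_pages, memmap_pages, dma_reserve, managed);
        return 0;
}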
From: Oscar Salvador
This patchset intends to make free_area_init_core more readable by
moving the ifdefery to inline functions, and while we are at it,
it optimizes the function a little bit (better explained in patch 3).
Oscar Salvador (4):
mm/page_alloc: Move ifdefery out of free_area_init
From: Oscar Salvador
When free_area_init_core gets called from the memhotplug code,
we only need to perform some of the operations in there.
Since memhotplug code is the only place where free_area_init_core
gets called while the node is still offline, we can better separate
the context from where
From: Oscar Salvador
When free_area_init_node()->free_area_init_core() gets called
from the memhotplug path, there are some things that we do not need to run.
This patchset intends to make it clearer what things get executed
when those two functions get called depending on the context (non-/memhotpl
From: Oscar Salvador
Moving the #ifdefs out of the function makes it easier to follow.
Signed-off-by: Oscar Salvador
---
mm/page_alloc.c | 50 +-
1 file changed, 37 insertions(+), 13 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
in
From: Oscar Salvador
If free_area_init_node gets called from memhotplug code,
we do not need to call calculate_node_totalpages(),
as the node has no pages.
The same goes for the deferred initialization, as
memmap_init_zone skips that when the context is MEMMAP_HOTPLUG.
Signed-off-by: Oscar Salv
From: Oscar Salvador
While trying to clean up the memhotplug code, I found it quite difficult to follow
free_area_init_node / free_area_init_core wrt which functions get called
from the memhotplug path.
This is an effort to try to refactor / clean up those two functions a little bit,
to make them eas
From: Oscar Salvador
If free_area_init_node got called from memhotplug code, we do not need
to call calculate_node_totalpages(), as the node has no pages.
We do not need to set the range for the deferred initialization either,
as memmap_init_zone skips that when the context is MEMMAP_HOTPLUG.
S
From: Oscar Salvador
When free_area_init_core gets called from the memhotplug code,
we do not really need to go through all memmap calculations.
This structures the code a bit better.
Signed-off-by: Oscar Salvador
---
mm/page_alloc.c | 106 ++---
From: Oscar Salvador
Moving the #ifdefery out of the function makes it easier to follow.
Signed-off-by: Oscar Salvador
---
mm/page_alloc.c | 50 +-
1 file changed, 37 insertions(+), 13 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
From: Oscar Salvador
The current code does not make sure to page align bss before calling
vm_brk(), and this can lead to a VM_BUG_ON() in __mm_populate()
due to the requested length not being correctly aligned.
Let us make sure to align it properly.
Signed-off-by: Oscar Salvador
Tested-by: Tet
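The fix boils down to rounding the bss end up to a page boundary before handing it to vm_brk(); a user-space sketch of that rounding is below (the helper and the example value are illustrative, the kernel patch does the equivalent with its own ELF page-alignment helper):

#include <stdio.h>
#include <unistd.h>

/* round addr up to the next multiple of page_size (page_size is a power of two) */
static unsigned long page_align_up(unsigned long addr, unsigned long page_size)
{
        return (addr + page_size - 1) & ~(page_size - 1);
}

int main(void)
{
        unsigned long page_size = (unsigned long)sysconf(_SC_PAGESIZE);
        unsigned long bss_end   = 0x601234;     /* arbitrary, unaligned end of bss */

        printf("unaligned: %#lx, aligned: %#lx\n",
               bss_end, page_align_up(bss_end, page_size));
        return 0;
}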
From: Oscar Salvador
sparse_init_one_section() is being called from two sites:
sparse_init() and sparse_add_one_section().
The former calls it from a for_each_present_section_nr() loop,
and the latter marks the section as present before calling it.
This means that when sparse_init_one_section() g
From: Oscar Salvador
link_mem_sections() and walk_memory_range() share most of the code,
so we can convert link_mem_sections() into a dummy function that calls
walk_memory_range() with a callback to register_mem_sect_under_node().
This patch converts register_mem_sect_under_node() in order t
From: Oscar Salvador
Callers of register_mem_sect_under_node() are always passing a valid
memory_block (not NULL), so we can safely drop the check for NULL.
In the same way, register_mem_sect_under_node() is only called in case
the node is online, so we can safely remove that check as well.
Sig
From: Oscar Salvador
add_memory_resource() contains code to allocate a new node in case
it is necessary.
Since try_online_node() also has some code for this purpose,
let us make use of that and remove duplicate code.
This introduces __try_online_node(), which is called by
add_memory_resource()
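The shape of that refactor, as a hypothetical user-space sketch (names stubbed, not the real memory_hotplug.c code): both entry points funnel into one helper, and a flag tells the helper whether to also mark the node online, so the node-allocation logic lives in exactly one place.

#include <stdbool.h>
#include <stdio.h>

/* stand-in for "allocate and initialize a pgdat for this node" --
 * in the kernel this is the code that used to be duplicated in both callers */
static int alloc_and_init_node(int nid)
{
        printf("allocating pgdat for node %d\n", nid);
        return 0;
}

static int __try_online_node(int nid, bool set_node_online)
{
        int ret = alloc_and_init_node(nid);

        if (!ret && set_node_online)
                printf("marking node %d online\n", nid);
        return ret;
}

/* public entry point: always onlines the node */
static int try_online_node(int nid)
{
        return __try_online_node(nid, true);
}

/* add_memory_resource()-style caller: node gets onlined later */
static int add_memory_resource_stub(int nid)
{
        return __try_online_node(nid, false);
}

int main(void)
{
        try_online_node(1);
        add_memory_resource_stub(2);
        return 0;
}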