From: Joonsoo Kim
memalloc_nocma_{save/restore} APIs can be used to skip page allocation
on the CMA area, but there is a missing case: a page on the CMA area can
still be allocated even when the APIs are used. This patch handles that
case to fix the potential issue.
For now, these APIs are used to prevent l
From: Joonsoo Kim
memalloc_nocma_{save/restore} APIs can be used to skip page allocation
on the CMA area, but there is a missing case: a page on the CMA area can
still be allocated even when the APIs are used. This patch handles that
case to fix the potential issue.
The missing case is an allocation from the pc
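For context, a minimal sketch of how the scope API is used (the call site
and GFP flags here are illustrative, not from the patch):

    unsigned int flags;
    struct page *page;

    flags = memalloc_nocma_save();
    /* allocations in this scope should not be served from the CMA area */
    page = alloc_page(GFP_HIGHUSER_MOVABLE);
    memalloc_nocma_restore(flags);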
From: Joonsoo Kim
We have a well-defined scope API to exclude the CMA region.
Use it rather than manipulating gfp_mask manually. With this change,
we can now restore __GFP_MOVABLE in gfp_mask as with the usual migration
target allocation, so that ZONE_MOVABLE is also searched by the page allocator
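Schematically, the change described here looks like this (a sketch, not
the actual diff; the gfp values are illustrative):

    /* before: exclude CMA by stripping the movable flag */
    page = alloc_page(gfp_mask & ~__GFP_MOVABLE);

    /* after: keep __GFP_MOVABLE and exclude CMA via the scope API */
    flags = memalloc_nocma_save();
    page = alloc_page(gfp_mask | __GFP_MOVABLE);
    memalloc_nocma_restore(flags);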
From: Joonsoo Kim
There is a well-defined migration target allocation callback. Use it.
Acked-by: Vlastimil Babka
Acked-by: Michal Hocko
Signed-off-by: Joonsoo Kim
---
mm/gup.c | 54 ++
1 file changed, 6 insertions(+), 48 deletions(-)
diff
From: Joonsoo Kim
new_non_cma_page() in gup.c needs to allocate a new page that is not
on the CMA area. new_non_cma_page() implements this by using the
allocation scope APIs.
However, there is a workaround for hugetlb. The normal hugetlb page
allocation API for migration is alloc_huge_page_nodemask(
From: Joonsoo Kim
Now that workingset detection is implemented for the anonymous LRU, we no
longer need a large inactive list to detect frequently accessed pages
before they are reclaimed. This effectively reverts the temporary
measure put in by commit "mm/vmscan: make active/inactive rat
From: Joonsoo Kim
The current implementation of LRU management for anonymous pages has some
problems. The most important one is that it doesn't protect the workingset,
that is, pages on the active LRU list. Although this problem will be
fixed in the following patchset, some preparation is required and
thi
From: Joonsoo Kim
Workingset detection for anonymous pages will be implemented in the
following patch, and it requires storing shadow entries in the
swapcache. This patch implements the infrastructure to store the
shadow entry in the swapcache.
Acked-by: Johannes Weiner
Signed-off-by: Joons
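A rough sketch of the resulting flow (helper names as in the merged
infrastructure; the details are illustrative):

    /* eviction: leave a shadow entry behind in the swapcache */
    shadow = workingset_eviction(page, target_memcg);
    __delete_from_swap_cache(page, entry, shadow);

    /* swap-in: retrieve the shadow for refault detection */
    shadow = get_shadow_from_swap_cache(entry);
    if (shadow)
            workingset_refault(page, shadow);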
From: Joonsoo Kim
To prepare workingset detection for the anon LRU, this patch splits the
workingset event counters for refault, activate and restore into anon
and file variants, as well as the refaults counter in struct lruvec.
Acked-by: Johannes Weiner
Acked-by: Vlastimil Babka
Signed-off-by: Jo
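In the form that was merged, the split looks roughly like this (new node
stat items plus a two-entry refaults array):

    enum node_stat_item {
            /* ... */
            WORKINGSET_REFAULT_ANON,
            WORKINGSET_REFAULT_FILE,
            WORKINGSET_ACTIVATE_ANON,
            WORKINGSET_ACTIVATE_FILE,
            WORKINGSET_RESTORE_ANON,
            WORKINGSET_RESTORE_FILE,
            /* ... */
    };

    struct lruvec {
            /* ... */
            unsigned long refaults[ANON_AND_FILE];
            /* ... */
    };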
From: Joonsoo Kim
Hello,
This patchset implements workingset protection and detection on
the anonymous LRU list.
* Changes on v7
- fix a bug on clear_shadow_from_swap_cache()
- enhance the commit description
- fix workingset detection formula
* Changes on v6
- rework to reflect a new LRU balan
From: Joonsoo Kim
This patch implements workingset detection for the anonymous LRU.
All the infrastructure was implemented by the previous patches, so this
patch just activates workingset detection by installing/retrieving
the shadow entry and adding the refault calculation.
Acked-by: Johannes Weiner
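The refault calculation at the heart of this is, schematically
(simplified from the merged mm/workingset.c; the real code carries more
bookkeeping):

    refault_distance = (refault - eviction) & EVICTION_MASK;

    /* for an anon refault: compare against both file lists plus,
     * if swap is available, the active anon list */
    workingset_size = lruvec_page_state(lruvec, NR_ACTIVE_FILE) +
                      lruvec_page_state(lruvec, NR_INACTIVE_FILE);
    if (mem_cgroup_get_nr_swap_pages(memcg) > 0)
            workingset_size += lruvec_page_state(lruvec, NR_ACTIVE_ANON);

    if (refault_distance <= workingset_size)
            SetPageActive(page);    /* page was part of the workingset */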
From: Joonsoo Kim
In the current implementation, a newly created or swapped-in anonymous
page starts out on the active list. A growing active list triggers
rebalancing of the active/inactive lists, so old pages on the active
list are demoted to the inactive list. Hence, a page on the active list
isn't protected at all.
Followi
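The change that was eventually merged makes new anonymous pages start on
the inactive list; schematically, in the anon fault paths:

    -       lru_cache_add_active_or_unevictable(page, vma);
    +       lru_cache_add_inactive_or_unevictable(page, vma);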
From: Joonsoo Kim
Currently, the memalloc_nocma_{save/restore} API that excludes the CMA
area from page allocation is implemented using current_gfp_context().
However, there are two problems with this implementation.
First, it doesn't work for the allocation fastpath. In the fastpath, the
original gfp_mask is
From: Joonsoo Kim
Currently, excluding the CMA area from page allocation is implemented
using current_gfp_context(). However, there are two problems with this
implementation.
First, it doesn't work for the allocation fastpath. In the fastpath, the
original gfp_mask is used since current_gfp_context() is i
From: Joonsoo Kim
We have a well-defined scope API to exclude the CMA region.
Use it rather than manipulating gfp_mask manually. With this change,
we can now use __GFP_MOVABLE in gfp_mask, so ZONE_MOVABLE is also
searched by the page allocator. For hugetlb, gfp_mask is redefined since
it has a regular
From: Joonsoo Kim
There is a well-defined migration target allocation callback. Use it.
Acked-by: Vlastimil Babka
Signed-off-by: Joonsoo Kim
---
mm/gup.c | 54 ++
1 file changed, 6 insertions(+), 48 deletions(-)
diff --git a/mm/gup.c b/mm/g
From: Joonsoo Kim
There are several similar functions for migration target allocation.
Since there is no fundamental difference between them, it's better to
keep just one rather than all the variants. This patch implements a base
migration target allocation function. In the following patches, variants will
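For reference, the interface this converged on in the merged kernel
looks roughly like:

    struct migration_target_control {
            int nid;                /* preferred node id */
            nodemask_t *nmask;
            gfp_t gfp_mask;
    };

    /* migrate_pages() callback; private points at the control above */
    struct page *alloc_migration_target(struct page *page,
                                        unsigned long private);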
From: Joonsoo Kim
There is a well-defined standard migration target callback. Use it
directly.
Acked-by: Vlastimil Babka
Signed-off-by: Joonsoo Kim
---
mm/memory-failure.c | 18 ++
1 file changed, 6 insertions(+), 12 deletions(-)
diff --git a/mm/memory-failure.c b/mm/memory-f
From: Joonsoo Kim
There is a well-defined migration target allocation callback. Use it.
Acked-by: Michal Hocko
Acked-by: Vlastimil Babka
Signed-off-by: Joonsoo Kim
---
mm/internal.h | 1 -
mm/mempolicy.c | 31 ++-
mm/migrate.c | 8 ++--
3 files changed,
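A usage sketch in the style of the merged mm/mempolicy.c conversion
(flags as in the merged code; treat the details as illustrative):

    struct migration_target_control mtc = {
            .nid = dest,
            .gfp_mask = GFP_HIGHUSER_MOVABLE | __GFP_THISNODE,
    };

    err = migrate_pages(&pagelist, alloc_migration_target, NULL,
                        (unsigned long)&mtc, MIGRATE_SYNC, MR_SYSCALL);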
From: Joonsoo Kim
It's not a performance-sensitive function. Move it to .c. This is a
preparation step for a future change.
Acked-by: Mike Kravetz
Acked-by: Michal Hocko
Reviewed-by: Vlastimil Babka
Signed-off-by: Joonsoo Kim
---
include/linux/migrate.h | 33 +
From: Joonsoo Kim
To calculate the correct node to migrate the page to for hotplug, we
need to check the node id of the page. A wrapper around
alloc_migration_target() exists for this purpose.
However, Vlastimil points out that all migration source pages come from
a single node. In this case, we don't need to
From: Joonsoo Kim
This patchset cleans up the migration target allocation functions.
* Changes on v5
- remove the new_non_cma_page() related patches
(the implementation of memalloc_nocma_{save,restore} has a critical bug
that cannot exclude CMA memory in some cases, so they cannot be used
here. Need to fix t
From: Joonsoo Kim
For locality, it's better to migrate the page to the node it already
resides on rather than to the node of the current caller's CPU.
Acked-by: Roman Gushchin
Acked-by: Michal Hocko
Reviewed-by: Vlastimil Babka
Signed-off-by: Joonsoo Kim
---
mm/page_isolation.c | 4 +++-
1 file changed, 3 inse
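Schematically, the change is (a sketch against the pre-series
mm/page_isolation.c, not the actual diff):

    -       return new_page_nodemask(page, numa_node_id(), &node_states[N_MEMORY]);
    +       return new_page_nodemask(page, page_to_nid(page), &node_states[N_MEMORY]);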
From: Joonsoo Kim
new_page_nodemask() is a migration callback and it tries to use a common
set of gfp flags for the target page allocation whether it is a base page
or a THP. The latter only adds GFP_TRANSHUGE to the given mask. This
results in the allocation being slightly more aggressive than necessary b
From: Joonsoo Kim
There is no difference between the two migration callback functions,
alloc_huge_page_node() and alloc_huge_page_nodemask(), except for the
__GFP_THISNODE handling. It's redundant to have two almost identical
functions just to handle this flag. So, this patch removes one by introdu
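In the form that was merged, the __GFP_THISNODE case is expressed through
a gfp_mask argument; a sketch of a converted caller:

    gfp_t gfp_mask = htlb_alloc_mask(h) | __GFP_THISNODE;

    /* was: alloc_huge_page_node(h, nid) */
    page = alloc_huge_page_nodemask(h, nid, NULL, gfp_mask);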
From: Joonsoo Kim
There is a well-defined standard migration target callback. Use it
directly.
Acked-by: Michal Hocko
Acked-by: Vlastimil Babka
Signed-off-by: Joonsoo Kim
---
mm/page_alloc.c | 8 ++--
mm/page_isolation.c | 10 --
2 files changed, 6 insertions(+), 12 deletio
From: Joonsoo Kim
There is a well-defined standard migration target callback. Use it
directly.
Signed-off-by: Joonsoo Kim
---
mm/memory-failure.c | 18 ++
1 file changed, 6 insertions(+), 12 deletions(-)
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 609d42b6..3b
From: Joonsoo Kim
This patchset cleans up the migration target allocation functions.
* Changes on v4
- use full gfp_mask
- use memalloc_nocma_{save,restore} to exclude CMA memory
- separate __GFP_RECLAIM handling for THP allocation
- remove more wrapper functions
* Changes on v3
- As Vlastimil s
From: Joonsoo Kim
There is a well-defined migration target allocation callback. It's mostly
similar to new_non_cma_page() except for the handling of CMA pages.
This patch adds CMA handling to the standard migration target
allocation callback and uses it in gup.c.
Acked-by: Vlastimil Babka
Sig
From: Joonsoo Kim
In mm/migrate.c, THP allocation for migration is done with the provided
gfp_mask | GFP_TRANSHUGE. This gfp_mask contains __GFP_RECLAIM, which
conflicts with the intention of GFP_TRANSHUGE.
GFP_TRANSHUGE/GFP_TRANSHUGE_LIGHT were introduced to control the reclaim
beha
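The merged alloc_migration_target() resolves the conflict roughly as
follows:

    if (PageTransHuge(page)) {
            /*
             * Clear the caller's reclaim bits: GFP_TRANSHUGE alone
             * defines the reclaim behaviour for THP allocations.
             */
            gfp_mask &= ~__GFP_RECLAIM;
            gfp_mask |= GFP_TRANSHUGE;
            order = HPAGE_PMD_ORDER;
    }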
From: Joonsoo Kim
new_non_cma_page() in gup.c, which tries to allocate a migration target
page, needs to allocate a new page that is not on the CMA area.
new_non_cma_page() implements this by removing the __GFP_MOVABLE flag.
This works well for THP and normal pages, but not for hugetlb pages.
hug
From: Joonsoo Kim
There is a well-defined migration target allocation callback.
It's mostly similar to new_non_cma_page() except for the handling of CMA
pages. This patch adds CMA handling to the standard migration target
allocation callback and uses it in gup.c.
Signed-off-by: Joonsoo Kim
---
From: Joonsoo Kim
It's not a performance-sensitive function. Move it to .c.
This is a preparation step for a future change.
Acked-by: Mike Kravetz
Acked-by: Michal Hocko
Reviewed-by: Vlastimil Babka
Signed-off-by: Joonsoo Kim
---
include/linux/migrate.h | 33 +
m
From: Joonsoo Kim
There is no difference between the two migration callback functions,
alloc_huge_page_node() and alloc_huge_page_nodemask(), except for the
__GFP_THISNODE handling. This patch adds an argument, gfp_mask, to
alloc_huge_page_nodemask() and replaces the callsites of
alloc_huge_page_node() with t
From: Joonsoo Kim
There are several similar functions for migration target allocation.
Since there is no fundamental difference between them, it's better to
keep just one rather than all the variants. This patch implements a base
migration target allocation function. In the following patches, variants will be
From: Joonsoo Kim
There is a well-defined migration target allocation callback.
Use it.
Signed-off-by: Joonsoo Kim
---
mm/internal.h | 1 -
mm/mempolicy.c | 30 ++
mm/migrate.c | 8 ++--
3 files changed, 12 insertions(+), 27 deletions(-)
diff --git a/mm/in
From: Joonsoo Kim
new_non_cma_page() in gup.c, which tries to allocate a migration target
page, needs to allocate a new page that is not on the CMA area.
new_non_cma_page() implements this by removing the __GFP_MOVABLE flag.
This works well for THP and normal pages, but not for hugetlb pages.
huge
From: Joonsoo Kim
There is a well-defined standard migration target callback.
Use it directly.
Signed-off-by: Joonsoo Kim
---
mm/page_alloc.c | 9 +++--
mm/page_isolation.c | 11 ---
2 files changed, 7 insertions(+), 13 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_allo
From: Joonsoo Kim
For locality, it's better to migrate the page to the node it already
resides on rather than to the node of the current caller's CPU.
Acked-by: Roman Gushchin
Acked-by: Michal Hocko
Reviewed-by: Vlastimil Babka
Signed-off-by: Joonsoo Kim
---
mm/page_isolation.c | 4 +++-
1 file changed, 3 inse
From: Joonsoo Kim
This patchset cleans up the migration target allocation functions.
* Changes on v3
- do not introduce alloc_control for hugetlb functions
- do not change the signature of migrate_pages()
- rename alloc_control to migration_target_control
* Changes on v2
- add acked-by tags
- fi
From: Joonsoo Kim
This patch implements workingset detection for the anonymous LRU.
All the infrastructure was implemented by the previous patches, so this
patch just activates workingset detection by installing/retrieving
the shadow entry.
Signed-off-by: Joonsoo Kim
---
include/linux/swap.h |
From: Joonsoo Kim
Hello,
This patchset implements workingset protection and detection on
the anonymous LRU list.
* Changes on v6
- rework to reflect a new LRU balance model
- remove the memcg charge timing stuff from v5 since an alternative is
already merged in mainline
- remove readahead stuff on v5 (r
From: Joonsoo Kim
Now that workingset detection is implemented for the anonymous LRU,
we don't have to worry about workingset pages going undetected due to
the active/inactive ratio. Let's restore the ratio.
Acked-by: Johannes Weiner
Signed-off-by: Joonsoo Kim
---
mm/vmscan.c | 2 +-
1 file changed, 1 i
From: Joonsoo Kim
In the following patch, workingset detection will be applied to the
anonymous LRU. To prepare for that, this patch adds some code to
distinguish and handle the two LRUs.
v6: do not introduce a new nonresident_age for anon LRU since
we need to use *unified* nonresident_age to implement worki
From: Joonsoo Kim
The swapcache doesn't handle exceptional entries, since there has been
no case that uses them. In the following patch, workingset detection for
anonymous pages will be implemented, storing shadow entries as exceptional
entries in the swapcache. So, we need to handle the exceptional
From: Johannes Weiner
After ("mm: workingset: let cache workingset challenge anon fix"), we
compare refault distances to active_file + anon. But the age of the
non-resident information is driven only by the file LRU. As a result,
we may overestimate the recency of any incoming refaults and activate
t
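The fix drives the non-resident clock from anon activity as well; the
merged helper looks roughly like:

    void workingset_age_nonresident(struct lruvec *lruvec,
                                    unsigned long nr_pages)
    {
            /* advance the unified eviction clock up the memcg hierarchy */
            do {
                    atomic_long_add(nr_pages, &lruvec->nonresident_age);
            } while ((lruvec = parent_lruvec(lruvec)));
    }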
From: Joonsoo Kim
A non-file-LRU page can also be activated in mark_page_accessed(),
and we need to count this activation toward nonresident_age.
Note that it's better for this patch to be squashed into the patch
"mm: workingset: age nonresident information alongside anonymous pages".
Signed-off-by
From: Joonsoo Kim
This patchset fixes some problems in the patchset
"mm: balance LRU lists based on relative thrashing", which is now merged
in mainline.
The patch "mm: workingset: let cache workingset challenge anon fix" is
the result of a discussion with Johannes. See the following link.
http://lk
From: Joonsoo Kim
With a synchronous IO swap device, swap-in is handled directly in the
fault code. Since the IO cost is not noted there, LRU balancing could be
wrongly biased for such devices. Fix this by counting the cost in the
fault code.
Signed-off-by: Joonsoo Kim
---
mm/memory.c | 8 +
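A sketch of the fix as it landed in do_swap_page() (locking as in the
v5.8-era LRU code; treat the exact placement as illustrative):

    /* charge the swap-in IO cost to the LRU for balancing */
    spin_lock_irq(&page_pgdat(page)->lru_lock);
    lru_note_cost_page(page);
    spin_unlock_irq(&page_pgdat(page)->lru_lock);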
From: Joonsoo Kim
There is no need to define a separate function just to call the standard
migration target allocation function. Use the standard one directly.
Signed-off-by: Joonsoo Kim
---
include/linux/page-isolation.h | 2 --
mm/page_alloc.c | 9 +++--
mm/page_isolation.c
From: Joonsoo Kim
It's not good practice to modify user input. Instead of using it to
build the correct gfp_mask for the APIs, this patch introduces another
gfp_mask field, __gfp_mask, for internal use.
Signed-off-by: Joonsoo Kim
---
mm/hugetlb.c | 19 ++-
mm/internal.h | 2 ++
2 f
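A hypothetical sketch of the idea (this version of the series was not
merged; the struct layout and the derivation line are illustrative only):

    struct alloc_control {
            /* ... */
            gfp_t gfp_mask;         /* caller input, treated as read-only */
            gfp_t __gfp_mask;       /* working mask actually used internally */
    };

    /* e.g. hugetlb derives its internal mask instead of rewriting input */
    ac->__gfp_mask = ac->gfp_mask | htlb_alloc_mask(h);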
From: Joonsoo Kim
There is no reason to implement its own function for migration
target allocation. Use the standard one.
Signed-off-by: Joonsoo Kim
---
mm/gup.c | 61 ++---
1 file changed, 10 insertions(+), 51 deletions(-)
diff --git a/mm/
From: Joonsoo Kim
There is no reason to implement its own function for migration
target allocation. Use the standard one.
Signed-off-by: Joonsoo Kim
---
mm/internal.h | 3 ---
mm/mempolicy.c | 32 +++-
mm/migrate.c | 3 ++-
3 files changed, 5 insertions(+), 33 del
From: Joonsoo Kim
To prepare for unifying the duplicated functions in the following patches,
this patch changes the interface of the migration target alloc/free
functions. They now take a struct alloc_control as an argument.
There is no functional change.
Signed-off-by: Joonsoo Kim
---
include/linux/mi
From: Joonsoo Kim
It's not a performance-sensitive function. Move it to .c.
This is a preparation step for a future change.
Acked-by: Mike Kravetz
Signed-off-by: Joonsoo Kim
---
include/linux/migrate.h | 33 +
mm/migrate.c | 29 ++
From: Joonsoo Kim
This patchset cleans up the migration target allocation functions.
* Changes on v2
- add acked-by tags
- fix missing compound_head() call for the patch #3
- remove thisnode field on alloc_control and use __GFP_THISNODE directly
- fix missing __gfp_mask setup for the patch
"mm/hu
From: Joonsoo Kim
Currently, the page allocation functions for migration require several
arguments. Worse, in the following patch, more arguments will be needed
to unify the similar functions. To simplify them, this patch introduces
a unified data structure that controls the allocation behaviour.
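A hypothetical sketch of such a control structure (this series was
superseded before merging; the kernel eventually got the smaller
struct migration_target_control instead):

    struct alloc_control {
            int nid;                /* preferred node id */
            nodemask_t *nmask;
            gfp_t gfp_mask;
    };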
From: Joonsoo Kim
The gfp_mask handling in alloc_huge_page_(node|nodemask) is slightly
changed from assignment to OR. This is safe since callers of these
functions don't pass an extra gfp_mask except htlb_alloc_mask().
This is a preparation step for the following patches.
Signed-off-by: Joonsoo Kim
---
mm/hu
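Schematically (a sketch, not the actual diff):

    -       gfp_mask = htlb_alloc_mask(h);
    +       gfp_mask |= htlb_alloc_mask(h);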
From: Joonsoo Kim
For locality, it's better to migrate the page to the node it already
resides on rather than to the node of the current caller's CPU.
Acked-by: Roman Gushchin
Signed-off-by: Joonsoo Kim
---
mm/page_isolation.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/mm/page_isolati
From: Joonsoo Kim
There are users who do not want CMA memory to be used for migration.
Until now, this has been implemented on the caller side, but that is not
optimal since the caller has only limited information. This patch
implements it on the callee side to get a better result.
Acked-by: Mike Kravetz
Signed-off-by: J
From: Joonsoo Kim
There is no difference between the two migration callback functions,
alloc_huge_page_node() and alloc_huge_page_nodemask(), except for the
__GFP_THISNODE handling. This patch moves that handling into
alloc_huge_page_nodemask() and the function callers, then removes
alloc_huge_page_node().
Signed-
From: Joonsoo Kim
It's not good practice to modify user input. Instead of using it to
build the correct gfp_mask for the APIs, this patch introduces another
gfp_mask field, __gfp_mask, for internal use.
Signed-off-by: Joonsoo Kim
---
mm/hugetlb.c | 15 ---
mm/internal.h | 2 ++
2 files
From: Joonsoo Kim
There is no reason to implement its own function for migration
target allocation. Use the standard one.
Signed-off-by: Joonsoo Kim
---
mm/internal.h | 3 ---
mm/mempolicy.c | 33 -
mm/migrate.c | 4 +++-
3 files changed, 7 insertions(+), 33 d
From: Joonsoo Kim
It's not a performance-sensitive function. Move it to .c.
This is a preparation step for a future change.
Signed-off-by: Joonsoo Kim
---
include/linux/migrate.h | 33 +
mm/migrate.c | 29 +
2 files changed, 34
From: Joonsoo Kim
This patchset cleans up the migration target allocation functions.
The contributions of this patchset are:
1. unify the two hugetlb alloc functions; as a result, one remains.
2. make one external hugetlb alloc function internal.
3. unify the three functions for migration target a
From: Joonsoo Kim
For locality, it's better to migrate the page to the node it already
resides on rather than to the node of the current caller's CPU.
Signed-off-by: Joonsoo Kim
---
mm/page_isolation.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/mm/page_isolation.c b/mm/page_isolation.c
From: Joonsoo Kim
There are users who do not want CMA memory to be used for migration.
Until now, this has been implemented on the caller side, but that is not
optimal since the caller has only limited information. This patch
implements it on the callee side to get a better result.
Signed-off-by: Joonsoo Kim
---
include
From: Joonsoo Kim
There is no difference between the two migration callback functions,
alloc_huge_page_node() and alloc_huge_page_nodemask(), except for the
__GFP_THISNODE handling. This patch adds one more field to
alloc_control and handles this exception there.
Signed-off-by: Joonsoo Kim
---
include/li
From: Joonsoo Kim
Until now, PageHighMem() has been used for two different purposes. One is
to check whether there is a direct mapping for the page or not. The other
is to check the zone of the page, that is, whether it is a highmem zone
or not. Now, we have separate functions, PageHighMem() and Page
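For context, both questions are answered today by the same zone test; the
series proposes a split along these lines (PageHighMemZone() is the
proposed, unmerged name):

    /* current: one macro answers both questions via the zone */
    #define PageHighMem(page)       is_highmem_idx(page_zonenum(page))

    /* proposed: keep the zone test under a new name, freeing
     * PageHighMem() to mean "page has no direct mapping" */
    #define PageHighMemZone(page)   is_highmem_idx(page_zonenum(page))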
From: Joonsoo Kim
What we'd like to check here is whether the page has a direct mapping or
not. Use PageHighMem() since it matches this purpose exactly.
Acked-by: Roman Gushchin
Signed-off-by: Joonsoo Kim
---
mm/page_alloc.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git
From: Joonsoo Kim
Until now, PageHighMem() has been used for two different purposes. One is
to check whether there is a direct mapping for the page or not. The other
is to check the zone of the page, that is, whether it is a highmem zone
or not. The previous patches introduce the PageHighMemZone() macro and
From: Joonsoo Kim
The implementation of PageHighMem() will be changed in the following
patches. Before that, open-code the check to avoid side effects of the
coming change to PageHighMem().
Acked-by: Roman Gushchin
Signed-off-by: Joonsoo Kim
---
include/linux/migrate.h | 4 +++-
1 file changed, 3 in
From: Joonsoo Kim
Changes on v2
- add "acked-by" and "reviewed-by" tags
- replace PageHighMem() with open-coded checks instead of using the
new PageHighMemZone() macro. The related file is "include/linux/migrate.h"
Hello,
This patchset separates the two use cases of PageHighMem() by introducing
PageHighMemZone()