On Tue, Jul 05, 2022 at 08:00:42PM -0700, Andrew Morton wrote:
> On Wed, 6 Jul 2022 10:47:32 +0800 Muchun Song
> wrote:
>
> > > If this wakeup is not one of these, then are there reports from the
> > > softlockup detector?
> > >
> > > Do we hav
On Tue, Jul 05, 2022 at 04:47:10PM -0700, Andrew Morton wrote:
> On Wed, 6 Jul 2022 00:38:41 +0100 Matthew Wilcox wrote:
>
> > On Tue, Jul 05, 2022 at 02:18:19PM -0700, Andrew Morton wrote:
> > > On Tue, 5 Jul 2022 20:35:32 +0800 Muchun Song
> > > wrote:
>
result will be to miss a wakeup event (like the user of
__fuse_dax_break_layouts()). Since FSDAX pages can only be pinned via
GUP, fix GUP instead of folio_put() to lower the overhead.
Fixes: d8ddc099c6b3 ("mm/gup: Add gup_put_folio()")
Suggested-by: Matthew Wilcox
Signed-off-by: M
On Mon, Jul 04, 2022 at 11:38:16AM +0100, Matthew Wilcox wrote:
> On Mon, Jul 04, 2022 at 03:40:54PM +0800, Muchun Song wrote:
> > FSDAX page refcounts are 1-based, rather than 0-based: if refcount is
> > 1, then the page is freed. The FSDAX pages can be pinned through GUP,
> &
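To make the refcount convention concrete, here is a minimal sketch of the
final-put path under discussion (hedged: fsdax_page_put() is a hypothetical
name; the real change lives in the GUP/folio_put() paths, and the sketch
relies on the in-tree helpers page_ref_dec_return() and wake_up_var()):

    static inline void fsdax_page_put(struct page *page)
    {
            /*
             * FSDAX page refcounts are 1-based: dropping to 1 (not 0)
             * means the page is free. Waiters such as
             * __fuse_dax_break_layouts() sleep in
             * wait_var_event(&page->_refcount, ...), so the final put
             * must fire the matching wake-up or they sleep forever.
             */
            if (page_ref_dec_return(page) == 1)
                    wake_up_var(&page->_refcount);
    }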
result will be to miss a wakeup event (like the user of
__fuse_dax_break_layouts()).
Fixes: d8ddc099c6b3 ("mm/gup: Add gup_put_folio()")
Signed-off-by: Muchun Song
---
include/linux/mm.h | 22 +++---
1 file changed, 11 insertions(+), 11 deletions(-)
diff --git a/include/li
tifier semantic")
There is only one caller of follow_invalidate_pte(), so just fold it
into follow_pte() and remove it.
Signed-off-by: Muchun Song
Reviewed-by: Christoph Hellwig
---
include/linux/mm.h | 3 --
mm/memory.c | 81 ---
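After the fold, follow_pte() is the only remaining entry point. For
reference, a sketch of its in-tree declaration (from include/linux/mm.h):

    int follow_pte(struct mm_struct *mm, unsigned long address,
                   pte_t **ptepp, spinlock_t **ptlp);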
ue.
Fixes: 4b4bb46d00b3 ("dax: clear dirty entry tags on cache flush")
Signed-off-by: Muchun Song
Reviewed-by: Christoph Hellwig
---
fs/dax.c | 99
1 file changed, 12 insertions(+), 87 deletions(-)
diff --git a/fs/dax.c b/fs/dax.
Devmap pages cannot use page_vma_mapped_walk() to check if a huge
devmap page is mapped into a vma. Add support for walking huge devmap
pages so that DAX can use it in the next patch.
Signed-off-by: Muchun Song
---
mm/page_vma_mapped.c | 17 +
1 file changed, 9 insertions
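The essence of the change, as a hedged sketch (pvmw_pmd_is_huge() is a
hypothetical helper; the real check sits inside page_vma_mapped_walk()'s
PMD handling): the PMD path must accept devmap PMDs in addition to
transparent huge PMDs.

    static bool pvmw_pmd_is_huge(pmd_t pmde)
    {
            /*
             * pmd_devmap() compiles to 0 on configurations without
             * CONFIG_ARCH_HAS_PTE_DEVMAP, so this stays a no-op there.
             */
            return pmd_trans_huge(pmde) || pmd_devmap(pmde);
    }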
. This helper will be used by the DAX code in the
next patch to make pfns clean.
Signed-off-by: Muchun Song
---
include/linux/rmap.h | 3 +++
mm/internal.h | 26 +
mm/rmap.c | 65 +++-
3 files changed, 74
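A sketch of how the next patch is expected to use the helper (hedged:
dax_mkclean_sketch() is illustrative; the pfn_mkclean_range() signature
follows the series): walk every shared VMA mapping the file range and
write-protect/clean the PTEs or PMD covering the pfns.

    static void dax_mkclean_sketch(struct address_space *mapping,
                                   pgoff_t index, unsigned long pfn,
                                   unsigned long npfn)
    {
            struct vm_area_struct *vma;

            i_mmap_lock_read(mapping);
            vma_interval_tree_foreach(vma, &mapping->i_mmap, index,
                                      index + npfn - 1) {
                    /* Only shared mappings can dirty the pfns. */
                    if (!(vma->vm_flags & VM_SHARED))
                            continue;
                    pfn_mkclean_range(pfn, npfn, index, vma);
                    cond_resched();
            }
            i_mmap_unlock_read(mapping);
    }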
b24f ("dax: wrprotect pmd_t in dax_mapping_entry_mkclean")
Signed-off-by: Muchun Song
Reviewed-by: Dan Williams
Reviewed-by: Christoph Hellwig
---
fs/dax.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/fs/dax.c b/fs/dax.c
index 67a08a32fccb..a372304c9695 100644
---
indexed caches is less.
Fixes: f27176cfc363 ("mm: convert page_mkclean_one() to use
page_vma_mapped_walk()")
Signed-off-by: Muchun Song
Reviewed-by: Yang Shi
Reviewed-by: Dan Williams
Reviewed-by: Christoph Hellwig
---
mm/rmap.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletio
NSPARENT_HUGEPAGE on powerpc architecture.
- Split out a new patch 4 in preparation for fixing the dax bug.
Muchun Song (6):
mm: rmap: fix cache flush on THP pages
dax: fix cache flush on PMD-mapped pages
mm: rmap: introduce pfn_mkclean_range() to cleans PTEs
mm: pvmw: add support for wal
On Thu, Mar 31, 2022 at 11:55 PM Qian Cai wrote:
>
> On Fri, Mar 18, 2022 at 03:45:23PM +0800, Muchun Song wrote:
> > This series is based on next-20220225.
> >
> > Patches 1-2 fix a cache flush bug, because subsequent patches depend on
> > those changes, t
On Fri, Apr 1, 2022 at 11:44 AM Muchun Song wrote:
>
> On Thu, Mar 31, 2022 at 11:55 PM Qian Cai wrote:
> >
> > On Fri, Mar 18, 2022 at 03:45:23PM +0800, Muchun Song wrote:
> > > This series is based on next-20220225.
> > >
> > > Patches 1-2 fix a cache
On Thu, Mar 31, 2022 at 11:55 PM Qian Cai wrote:
>
> On Fri, Mar 18, 2022 at 03:45:23PM +0800, Muchun Song wrote:
> > This series is based on next-20220225.
> >
> > Patches 1-2 fix a cache flush bug, because subsequent patches depend on
> > those changes, t
On Wed, Mar 30, 2022 at 1:47 PM Christoph Hellwig wrote:
>
> On Tue, Mar 29, 2022 at 09:48:50PM +0800, Muchun Song wrote:
> > + * * Return the start of user virtual address at the specific offset within
>
> Double "*" here.
Thanks for pointing this out.
>
> Als
tifier semantic")
There is only one caller of follow_invalidate_pte(), so just fold it
into follow_pte() and remove it.
Signed-off-by: Muchun Song
Reviewed-by: Christoph Hellwig
---
include/linux/mm.h | 3 --
mm/memory.c | 81 ---
ue.
Fixes: 4b4bb46d00b3 ("dax: clear dirty entry tags on cache flush")
Signed-off-by: Muchun Song
Reviewed-by: Christoph Hellwig
---
fs/dax.c | 99
1 file changed, 12 insertions(+), 87 deletions(-)
diff --git a/fs/dax.c b/fs/dax.
Devmap pages cannot use page_vma_mapped_walk() to check if a huge
devmap page is mapped into a vma. Add support for walking huge devmap
pages so that DAX can use it in the next patch.
Signed-off-by: Muchun Song
---
mm/page_vma_mapped.c | 16
1 file changed, 8 insertions
. This helper will be used by the DAX code in the
next patch to make pfns clean.
Signed-off-by: Muchun Song
---
include/linux/rmap.h | 3 +++
mm/internal.h | 26 +
mm/rmap.c | 65 +++-
3 files changed, 74
b24f ("dax: wrprotect pmd_t in dax_mapping_entry_mkclean")
Signed-off-by: Muchun Song
Reviewed-by: Dan Williams
Reviewed-by: Christoph Hellwig
---
fs/dax.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/fs/dax.c b/fs/dax.c
index 67a08a32fccb..a372304c9695 100644
---
indexed caches is less.
Fixes: f27176cfc363 ("mm: convert page_mkclean_one() to use
page_vma_mapped_walk()")
Signed-off-by: Muchun Song
Reviewed-by: Yang Shi
Reviewed-by: Dan Williams
Reviewed-by: Christoph Hellwig
---
mm/rmap.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletio
.
- Split out a new patch 4 in preparation for fixing the dax bug.
Muchun Song (6):
mm: rmap: fix cache flush on THP pages
dax: fix cache flush on PMD-mapped pages
mm: rmap: introduce pfn_mkclean_range() to cleans PTEs
mm: pvmw: add support for walking devmap pages
dax: fix missing writeprotect
On Tue, Mar 22, 2022 at 4:37 PM Christoph Hellwig wrote:
>
> > +static void dax_entry_mkclean(struct address_space *mapping, unsigned long
> > pfn,
> > + unsigned long npfn, pgoff_t start)
> > {
> > struct vm_area_struct *vma;
> > + pgoff_t end = start + npfn
tifier semantic")
There is only one caller of follow_invalidate_pte(), so just fold it
into follow_pte() and remove it.
Signed-off-by: Muchun Song
---
include/linux/mm.h | 3 --
mm/memory.c | 81 --
2 files changed, 23 inserti
ue.
Fixes: 4b4bb46d00b3 ("dax: clear dirty entry tags on cache flush")
Signed-off-by: Muchun Song
---
fs/dax.c | 83 ++--
1 file changed, 7 insertions(+), 76 deletions(-)
diff --git a/fs/dax.c b/fs/dax.c
index a372304c9695..7fd4a167
Devmap pages cannot use page_vma_mapped_walk() to check if a huge
devmap page is mapped into a vma. Add support for walking huge devmap
pages so that DAX can use it in the next patch.
Signed-off-by: Muchun Song
---
mm/page_vma_mapped.c | 16
1 file changed, 8 insertions
. This helper will be used by the DAX code in the
next patch to make pfns clean.
Signed-off-by: Muchun Song
---
include/linux/rmap.h | 3 +++
mm/internal.h | 26 +
mm/rmap.c | 65 +++-
3 files changed, 74
b24f ("dax: wrprotect pmd_t in dax_mapping_entry_mkclean")
Signed-off-by: Muchun Song
Reviewed-by: Dan Williams
---
fs/dax.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/fs/dax.c b/fs/dax.c
index 67a08a32fccb..a372304c9695 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -84
indexed caches is less.
Fixes: f27176cfc363 ("mm: convert page_mkclean_one() to use
page_vma_mapped_walk()")
Signed-off-by: Muchun Song
Reviewed-by: Yang Shi
Reviewed-by: Dan Williams
---
mm/rmap.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/mm/rmap.c b/mm/rm
line in lots of places, as suggested by Christoph.
- Fix a compiler warning reported by kernel test robot since pmd_pfn()
is not defined when !CONFIG_TRANSPARENT_HUGEPAGE on powerpc architecture.
- Split out a new patch 4 in preparation for fixing the dax bug.
Muchun Song (6):
mm: rmap: fix cache flush
On Tue, Mar 15, 2022 at 4:50 AM Dan Williams wrote:
>
> On Fri, Mar 11, 2022 at 1:06 AM Muchun Song wrote:
> >
> > On Thu, Mar 10, 2022 at 8:59 AM Dan Williams
> > wrote:
> > >
> > > On Wed, Mar 2, 2022 at 12:30 AM Muchun Song
> > > wrote:
On Thu, Mar 10, 2022 at 8:59 AM Dan Williams wrote:
>
> On Wed, Mar 2, 2022 at 12:30 AM Muchun Song wrote:
> >
> > Currently dax_mapping_entry_mkclean() fails to clean and write protect
> > the pte entry within a DAX PMD entry during an *sync operation. This
> >
On Thu, Mar 10, 2022 at 8:06 AM Dan Williams wrote:
>
> On Wed, Mar 2, 2022 at 12:29 AM Muchun Song wrote:
> >
> > The flush_cache_page() only removes a PAGE_SIZE sized range from the cache.
> > However, it does not cover the full pages in a THP except a head p
The only user (DAX) of the range parameter of follow_invalidate_pte()
is gone, so it is safe to remove the range parameter and make it static
to simplify the code.
Signed-off-by: Muchun Song
---
include/linux/mm.h | 3 ---
mm/memory.c | 23 +++
2 files changed, 3 insertions
ue.
Fixes: 4b4bb46d00b3 ("dax: clear dirty entry tags on cache flush")
Signed-off-by: Muchun Song
---
fs/dax.c | 83 ++--
1 file changed, 7 insertions(+), 76 deletions(-)
diff --git a/fs/dax.c b/fs/dax.c
index a372304c9695..7fd4a167
Devmap pages cannot use page_vma_mapped_walk() to check if a huge
devmap page is mapped into a vma. Add support for walking huge devmap
pages so that DAX can use it in the next patch.
Signed-off-by: Muchun Song
---
mm/page_vma_mapped.c | 5 +++--
1 file changed, 3 insertions(+), 2
. This helper will be used by the DAX code in the
next patch to make pfns clean.
Signed-off-by: Muchun Song
---
include/linux/rmap.h | 3 +++
mm/internal.h | 26 +
mm/rmap.c | 65 +++-
3 files changed, 74
flush_cache_page() only removes a PAGE_SIZE sized range from the cache.
However, for a THP it covers only the head page, not the whole mapping.
Replace it with flush_cache_range() to fix this issue.
Fixes: f729c8c9b24f ("dax: wrprotect pmd_t in dax_mapping_entry_mkclean")
Signed-off-
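The shape of the fix, as a hedged sketch (address and pfn stand for the
values already computed by the mkclean path; note HPAGE_PMD_SIZE requires
CONFIG_TRANSPARENT_HUGEPAGE, the same dependency behind the pmd_pfn()
powerpc build warning mentioned in the changelog):

    /* Before: flushes a single PAGE_SIZE slice of the PMD mapping. */
    flush_cache_page(vma, address, pfn);

    /* After: flush the whole PMD-sized mapping. */
    flush_cache_range(vma, address, address + HPAGE_PMD_SIZE);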
indexed caches is less.
Fixes: f27176cfc363 ("mm: convert page_mkclean_one() to use
page_vma_mapped_walk()")
Signed-off-by: Muchun Song
Reviewed-by: Yang Shi
---
mm/rmap.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/mm/rmap.c b/mm/rmap.c
index fc46a3d7b704..72
powerpc architecture.
- Split out a new patch 4 in preparation for fixing the dax bug.
Muchun Song (6):
mm: rmap: fix cache flush on THP pages
dax: fix cache flush on PMD-mapped pages
mm: rmap: introduce pfn_mkclean_range() to cleans PTEs
mm: pvmw: add support for walking devmap pages
dax: fix
On Tue, Mar 1, 2022 at 5:26 AM Andrew Morton wrote:
>
> On Mon, 28 Feb 2022 14:35:34 +0800 Muchun Song
> wrote:
>
> > Devmap pages cannot use page_vma_mapped_walk() to check if a huge
> > devmap page is mapped into a vma. Add support for walking huge devmap
>
The only user (DAX) of the range parameter of follow_invalidate_pte()
is gone, so it is safe to remove the range parameter and make it static
to simplify the code.
Signed-off-by: Muchun Song
---
include/linux/mm.h | 3 ---
mm/memory.c | 23 +++
2 files changed, 3 insertions
ue.
Fixes: 4b4bb46d00b3 ("dax: clear dirty entry tags on cache flush")
Signed-off-by: Muchun Song
---
fs/dax.c | 83 ++--
1 file changed, 7 insertions(+), 76 deletions(-)
diff --git a/fs/dax.c b/fs/dax.c
index a372304c9695..7fd4a167
Devmap pages cannot use page_vma_mapped_walk() to check if a huge
devmap page is mapped into a vma. Add support for walking huge devmap
pages so that DAX can use it in the next patch.
Signed-off-by: Muchun Song
---
mm/page_vma_mapped.c | 4 ++--
1 file changed, 2 insertions(+), 2
. This helper will be used by the DAX code in the
next patch to make pfns clean.
Signed-off-by: Muchun Song
---
include/linux/rmap.h | 3 +++
mm/internal.h | 26 +
mm/rmap.c | 65 +++-
3 files changed, 74
flush_cache_page() only removes a PAGE_SIZE sized range from the cache.
However, for a THP it covers only the head page, not the whole mapping.
Replace it with flush_cache_range() to fix this issue.
Fixes: f729c8c9b24f ("dax: wrprotect pmd_t in dax_mapping_entry_mkclean")
Signed-off-
indexed caches is less.
Fixes: f27176cfc363 ("mm: convert page_mkclean_one() to use
page_vma_mapped_walk()")
Signed-off-by: Muchun Song
Reviewed-by: Yang Shi
---
mm/rmap.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/mm/rmap.c b/mm/rmap.c
index fc46a3d7b704..72
for preparation of fixing the dax bug.
Muchun Song (6):
mm: rmap: fix cache flush on THP pages
dax: fix cache flush on PMD-mapped pages
mm: rmap: introduce pfn_mkclean_range() to cleans PTEs
mm: pvmw: add support for walking devmap pages
dax: fix missing writeprotect the pte entry
mm
The only user (DAX) of the range parameter of follow_invalidate_pte()
is gone, so it is safe to remove the range parameter and make it static
to simplify the code.
Signed-off-by: Muchun Song
---
include/linux/mm.h | 3 ---
mm/memory.c | 23 +++
2 files changed, 3 insertions
ue.
Fixes: 4b4bb46d00b3 ("dax: clear dirty entry tags on cache flush")
Signed-off-by: Muchun Song
---
fs/dax.c | 83 ++--
1 file changed, 7 insertions(+), 76 deletions(-)
diff --git a/fs/dax.c b/fs/dax.c
index e031e4b6c13c..b64ac02d
. This helper will be used by the DAX code in the
next patch to make pfns clean.
Signed-off-by: Muchun Song
---
include/linux/rmap.h | 3 ++
mm/internal.h | 26 ++--
mm/rmap.c | 84 +---
3 files changed, 86 insertions
into a
vma. So add support for checking if a pfn is mapped into a vma. In the next
patch, DAX will use this new feature.
Signed-off-by: Muchun Song
---
include/linux/rmap.h | 14 --
include/linux/swapops.h | 13 +++---
mm/internal.h | 28 +---
mm
flush_cache_page() only removes a PAGE_SIZE sized range from the cache.
However, for a THP it covers only the head page, not the whole mapping.
Replace it with flush_cache_range() to fix this issue.
Fixes: f729c8c9b24f ("dax: wrprotect pmd_t in dax_mapping_entry_mkclean")
Signed-off-
indexed caches is less.
Fixes: f27176cfc363 ("mm: convert page_mkclean_one() to use
page_vma_mapped_walk()")
Signed-off-by: Muchun Song
Reviewed-by: Yang Shi
---
mm/rmap.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/mm/rmap.c b/mm/rmap.c
index b0fd9dc19eba..0b
:
- Avoid the overly long line in lots of places, as suggested by Christoph.
- Fix a compiler warning reported by kernel test robot since pmd_pfn()
is not defined when !CONFIG_TRANSPARENT_HUGEPAGE on powerpc architecture.
- Split out a new patch 4 in preparation for fixing the dax bug.
Muchun Song
On Mon, Jan 24, 2022 at 3:41 PM Christoph Hellwig wrote:
>
> On Fri, Jan 21, 2022 at 03:55:14PM +0800, Muchun Song wrote:
> > Reuse some infrastructure of page_mkclean_one() to let DAX can handle
> > similar case to fix this issue.
>
> Can you split out some of the inf
On Mon, Jan 24, 2022 at 3:36 PM Christoph Hellwig wrote:
>
> On Fri, Jan 21, 2022 at 03:55:13PM +0800, Muchun Song wrote:
> > + if (pvmw->pte && ((pvmw->flags & PVMW_PFN_WALK) ||
> > !PageHuge(pvmw->page)))
>
> Please avoid the overly
On Mon, Jan 24, 2022 at 3:34 PM Christoph Hellwig wrote:
>
> On Fri, Jan 21, 2022 at 03:55:11PM +0800, Muchun Song wrote:
> > 1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/mm/rmap.c b/mm/rmap.c
> > index b0fd9dc19eba..65670cb805d6 100644
>
The only user (DAX) of the range parameter of follow_invalidate_pte()
is gone, so it is safe to remove the range parameter and make it static
to simplify the code.
Signed-off-by: Muchun Song
---
include/linux/mm.h | 3 ---
mm/memory.c | 23 +++
2 files changed, 3 insertions
ar case to fix this issue.
Fixes: 4b4bb46d00b3 ("dax: clear dirty entry tags on cache flush")
Signed-off-by: Muchun Song
---
fs/dax.c | 78 +---
include/linux/rmap.h | 9 ++
mm/internal.h | 27 +
into a
vma. So add support for checking if a pfn is mapped into a vma. In the next
patch, DAX will use this new feature.
Signed-off-by: Muchun Song
---
include/linux/rmap.h | 13 +--
mm/internal.h | 25 +---
mm/page_vma_mapped.c | 65
flush_cache_page() only removes a PAGE_SIZE sized range from the cache.
However, for a THP it covers only the head page, not the whole mapping.
Replace it with flush_cache_range() to fix this issue.
Fixes: f729c8c9b24f ("dax: wrprotect pmd_t in dax_mapping_entry_mkclean")
Signed-off-
indexed caches is less.
Fixes: f27176cfc363 ("mm: convert page_mkclean_one() to use
page_vma_mapped_walk()")
Signed-off-by: Muchun Song
---
mm/rmap.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/rmap.c b/mm/rmap.c
index b0fd9dc19eba..65670cb805d6 100644
--- a
On Tue, Apr 20, 2021 at 7:20 AM Mike Kravetz wrote:
>
> On 4/15/21 1:40 AM, Muchun Song wrote:
> > When we free a HugeTLB page to the buddy allocator, we need to allocate
> > the vmemmap pages associated with it. However, we may not be able to
> > allocate the vmemmap
On Mon, Apr 19, 2021 at 8:01 AM Waiman Long wrote:
>
> There are two issues with the current refill_obj_stock() code. First of
> all, when nr_bytes reaches over PAGE_SIZE, it calls drain_obj_stock() to
> atomically flush out remaining bytes to obj_cgroup, clear cached_objcg
> and do a obj_cgroup_p
On Sat, Apr 17, 2021 at 9:44 PM wrote:
>
> On 4/17/21 3:07 PM, Muchun Song wrote:
> > On Sat, Apr 17, 2021 at 6:41 PM Peter Enderborg
> > wrote:
> >> This adds a total used dma-buf memory. Details
> >> can be found in debugfs, however it is not for everyone
>
On Sat, Apr 17, 2021 at 6:41 PM Peter Enderborg
wrote:
>
> This adds a total used dma-buf memory. Details
> can be found in debugfs, however it is not for everyone
> and not always available. dma-bufs are indirectly allocated by
> userspace. So with this value we can monitor and detect
> userspace ap
The css_set_lock is used to guard the list of inherited objcgs. So there
is no need to uncharge kernel memory under css_set_lock. Just move it
out of the lock.
Signed-off-by: Muchun Song
Reviewed-by: Shakeel Butt
Acked-by: Roman Gushchin
Acked-by: Johannes Weiner
---
mm/memcontrol.c | 3
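An abridged sketch of obj_cgroup_release() after the move (hedged:
condensed, the full function does more bookkeeping). The point is that
css_set_lock only guards the objcg list, so the uncharge can happen
before taking it:

    static void obj_cgroup_release(struct obj_cgroup *objcg)
    {
            unsigned int nr_pages =
                    atomic_read(&objcg->nr_charged_bytes) >> PAGE_SHIFT;

            if (nr_pages)
                    obj_cgroup_uncharge_pages(objcg, nr_pages); /* unlocked */

            spin_lock_irq(&css_set_lock);
            list_del(&objcg->list);  /* the list the lock actually guards */
            spin_unlock_irq(&css_set_lock);

            percpu_ref_exit(&objcg->refcnt);
            kfree_rcu(objcg, rcu);
    }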
f
reclaim stack") replace pagevecs by lists of pages_to_free. So we do
not need noinline_for_stack, just remove it (let the compiler decide
whether to inline).
Signed-off-by: Muchun Song
Acked-by: Johannes Weiner
Acked-by: Roman Gushchin
Reviewed-by: Shakeel Butt
Acked-by: Michal Hocko
parameter. Just remove it to simplify the code.
Signed-off-by: Muchun Song
Acked-by: Johannes Weiner
Reviewed-by: Shakeel Butt
Acked-by: Roman Gushchin
Acked-by: Michal Hocko
---
include/linux/memcontrol.h | 10 +-
mm/compaction.c | 2 +-
mm/memcontrol.c | 9
RITE_ONCE().
Signed-off-by: Muchun Song
Acked-by: Johannes Weiner
Reviewed-by: Shakeel Butt
Acked-by: Roman Gushchin
---
mm/memcontrol.c | 20 ++--
1 file changed, 6 insertions(+), 14 deletions(-)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index caf193088beb..c4eebe2a2914 100
"memcg = page_memcg(page) ? : root_mem_cgroup". And use lruvec_pgdat
to simplify the code. We can have a single definition for this function
that works for !CONFIG_MEMCG, CONFIG_MEMCG + mem_cgroup_disabled() and
CONFIG_MEMCG.
Signed-off-by: Muchun Song
Acked-by: Johannes Weiner
Reviewed-b
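A sketch of the resulting single definition (hedged: simplified;
mem_cgroup_lruvec() already maps a NULL memcg to the root, which is what
lets one body cover !CONFIG_MEMCG, mem_cgroup_disabled(), and the normal
CONFIG_MEMCG case):

    static inline struct lruvec *mem_cgroup_page_lruvec(struct page *page)
    {
            pg_data_t *pgdat = page_pgdat(page);

            return mem_cgroup_lruvec(page_memcg(page), pgdat);
    }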
lruvec_holds_page_lru_lock() doesn't check anything about locking and is
used to check whether the page belongs to the lruvec. So rename it to
page_matches_lruvec().
Signed-off-by: Muchun Song
Acked-by: Michal Hocko
Acked-by: Johannes Weiner
Reviewed-by: Shakeel Butt
---
include/
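The renamed helper, sketched (hedged: believed to match the upstream
definition): a pure membership test, with no locking implied by the name.

    static inline bool page_matches_lruvec(struct page *page,
                                           struct lruvec *lruvec)
    {
            return lruvec_pgdat(lruvec) == page_pgdat(page) &&
                   lruvec_memcg(lruvec) == page_memcg(page);
    }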
ll trigger? So it is better to fix it.
Signed-off-by: Muchun Song
Acked-by: Johannes Weiner
Reviewed-by: Shakeel Butt
Acked-by: Roman Gushchin
Acked-by: Michal Hocko
---
mm/memcontrol.c | 8 +---
1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/mm/memcontrol.c b/mm/memcontr
When mm is NULL, we do not need to hold the rcu lock or call css_tryget
for the root memcg, and we do not need to check !mm on every iteration
of the loop. So bail out early when !mm.
Signed-off-by: Muchun Song
Acked-by: Johannes Weiner
Reviewed-by: Shakeel Butt
Acked-by: Roman Gushchin
Acked-by
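A condensed sketch of the early bail-out (hedged: simplified from the
posted patch; no css_get is needed for the root memcg because reference
counting is disabled at the root level in the cgroup core, see
CSS_NO_REF):

    static __always_inline struct mem_cgroup *
    get_mem_cgroup_from_mm(struct mm_struct *mm)
    {
            struct mem_cgroup *memcg;

            /* Page cache insertions can happen without an mm context,
             * e.g. during disk probing on boot or loopback IO. */
            if (unlikely(!mm))
                    return root_mem_cgroup;

            rcu_read_lock();
            do {
                    memcg = mem_cgroup_from_task(rcu_dereference(mm->owner));
                    if (unlikely(!memcg))
                            memcg = root_mem_cgroup;
            } while (!css_tryget(&memcg->css));
            rcu_read_unlock();
            return memcg;
    }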
mment to patch 2.
Thanks to Roman, Johannes, Shakeel and Michal for their review.
Muchun Song (8):
mm: memcontrol: fix page charging in page replacement
mm: memcontrol: bail out early when !mm in get_mem_cgroup_from_mm
mm: memcontrol: remove the pgdata parameter of mem_cgroup_page_lruvec
mm:
On Sat, Apr 17, 2021 at 7:56 AM Mike Kravetz wrote:
>
> On 4/15/21 1:40 AM, Muchun Song wrote:
> > In the subsequent patch, we should allocate the vmemmap pages when
> > freeing a HugeTLB page. But update_and_free_page() can be called
> > under any context, so we cannot us
On Sat, Apr 17, 2021 at 12:08 AM Peter Enderborg
wrote:
>
> This adds a total used dma-buf memory. Details
> can be found in debugfs, however it is not for everyone
> and not always available. dma-bufs are indirectly allocated by
> userspace. So with this value we can monitor and detect
> userspace a
On Sat, Apr 17, 2021 at 5:10 AM Mike Kravetz wrote:
>
> On 4/15/21 1:40 AM, Muchun Song wrote:
> > Every HugeTLB has more than one struct page structure. We __know__ that
> > we only use the first 4 (__NR_USED_SUBPAGE) struct page structures
> > to store metadata asso
On Fri, Apr 16, 2021 at 11:20 PM Johannes Weiner wrote:
>
> On Fri, Apr 16, 2021 at 01:14:04PM +0800, Muchun Song wrote:
> > lruvec_holds_page_lru_lock() doesn't check anything about locking and is
> > used to check whether the page belongs to the lruvec. So rename it to
f
reclaim stack") replace pagevecs by lists of pages_to_free. So we do
not need noinline_for_stack, just remove it (let the compiler decide
whether to inline).
Signed-off-by: Muchun Song
Acked-by: Johannes Weiner
Acked-by: Roman Gushchin
Reviewed-by: Shakeel Butt
Acked-by: Michal Hocko
The css_set_lock is used to guard the list of inherited objcgs. So there
is no need to uncharge kernel memory under css_set_lock. Just move it
out of the lock.
Signed-off-by: Muchun Song
Reviewed-by: Shakeel Butt
Acked-by: Roman Gushchin
Acked-by: Johannes Weiner
---
mm/memcontrol.c | 3
RITE_ONCE().
Signed-off-by: Muchun Song
Acked-by: Johannes Weiner
Reviewed-by: Shakeel Butt
Acked-by: Roman Gushchin
---
mm/memcontrol.c | 20 ++--
1 file changed, 6 insertions(+), 14 deletions(-)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index caf193088beb..c4eebe2a2914 100
lruvec_holds_page_lru_lock() doesn't check anything about locking and is
used to check whether the page belongs to the lruvec. So rename it to
page_matches_lruvec().
Signed-off-by: Muchun Song
---
include/linux/memcontrol.h | 7 +++
mm/vmscan.c | 2 +-
2 files chang
"memcg = page_memcg(page) ? : root_mem_cgroup". And use lruvec_pgdat
to simplify the code. We can have a single definition for this function
that works for !CONFIG_MEMCG, CONFIG_MEMCG + mem_cgroup_disabled() and
CONFIG_MEMCG.
Signed-off-by: Muchun Song
Acked-by: Johannes Weiner
Reviewed-b
parameter. Just remove it to simplify the code.
Signed-off-by: Muchun Song
Acked-by: Johannes Weiner
Reviewed-by: Shakeel Butt
Acked-by: Roman Gushchin
Acked-by: Michal Hocko
---
include/linux/memcontrol.h | 10 +-
mm/compaction.c | 2 +-
mm/memcontrol.c | 9
When mm is NULL, we do not need to hold the rcu lock or call css_tryget
for the root memcg, and we do not need to check !mm on every iteration
of the loop. So bail out early when !mm.
Signed-off-by: Muchun Song
Acked-by: Johannes Weiner
Reviewed-by: Shakeel Butt
Acked-by: Roman Gushchin
Acked-by
Acked-by and Reviewed-by tags.
2. Add a new patch to rename lruvec_holds_page_lru_lock to
page_matches_lruvec.
3. Add a comment to patch 2.
Thanks to Roman, Johannes, Shakeel and Michal for their review.
Muchun Song (8):
mm: memcontrol: fix page charging in page replacement
mm: memcontrol:
ll trigger? So it is better to fix it.
Signed-off-by: Muchun Song
Acked-by: Johannes Weiner
Reviewed-by: Shakeel Butt
Acked-by: Roman Gushchin
Acked-by: Michal Hocko
---
mm/memcontrol.c | 8 +---
1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/mm/memcontrol.c b/mm/memcontr
On Thu, Apr 15, 2021 at 1:49 AM Johannes Weiner wrote:
>
> On Wed, Apr 14, 2021 at 06:00:42PM +0800, Muchun Song wrote:
> > On Wed, Apr 14, 2021 at 5:44 PM Michal Hocko wrote:
> > >
> > > On Tue 13-04-21 14:51:50, Muchun Song wrote:
> > > > We alrea
The memory_hotplug.memmap_on_memory parameter is not compatible with
hugetlb_free_vmemmap, so disable it when hugetlb_free_vmemmap is
enabled.
Signed-off-by: Muchun Song
---
Documentation/admin-guide/kernel-parameters.txt | 4
drivers/acpi/acpi_memhotplug.c | 1 +
mm
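For illustration, a boot command line that would hit this interaction
(hedged: the authoritative wording is in the patch's
kernel-parameters.txt update):

    memory_hotplug.memmap_on_memory=on hugetlb_free_vmemmap=on

With the patch applied, memmap_on_memory is disabled whenever
hugetlb_free_vmemmap is enabled, since a memmap allocated from the
hot-added range itself (altmap) cannot be returned to the buddy
allocator the way the HugeTLB vmemmap optimization requires.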
are only (RESERVE_VMEMMAP_SIZE / sizeof(struct page)) struct
page structs that can be used when CONFIG_HUGETLB_PAGE_FREE_VMEMMAP,
so add a BUILD_BUG_ON to catch invalid usage of the tail struct page.
Signed-off-by: Muchun Song
Acked-by: Mike Kravetz
Reviewed-by: Oscar Salvador
Reviewed-by
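A sketch of the guard being described (hedged: the enclosing function is
hypothetical; the BUILD_BUG_ON expression follows the text above):

    static void __init check_tail_page_usage(void)
    {
            /*
             * Only RESERVE_VMEMMAP_SIZE / sizeof(struct page) struct
             * pages per HugeTLB page remain usable once the vmemmap is
             * freed; code indexing a tail struct page beyond that
             * window must fail at build time.
             */
            BUILD_BUG_ON(__NR_USED_SUBPAGE >=
                         RESERVE_VMEMMAP_SIZE / sizeof(struct page));
    }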
mapped.
Signed-off-by: Muchun Song
Reviewed-by: Oscar Salvador
Reviewed-by: Barry Song
Reviewed-by: Miaohe Lin
Tested-by: Chen Huang
Tested-by: Bodeddula Balasubramaniam
---
Documentation/admin-guide/kernel-parameters.txt | 17 +
Documentation/admin-guide/mm/hugetlbpage.rst
sleeping allocation would be too fragile and it could fail
too easily under memory pressure. GFP_ATOMIC and other modes that access
memory reserves are not used because we want to prevent consuming
reserves under heavy hugetlb freeing.
Signed-off-by: Muchun Song
---
Documentation/admin-guide/mm
allocate the vmemmap pages.
__update_and_free_page() is where the call to allocate vmemmap
pages will be inserted.
Signed-off-by: Muchun Song
---
mm/hugetlb.c | 73
mm/hugetlb_vmemmap.c | 12 -
mm/hugetlb_vmemmap.h | 17
means the feature is disabled. We will enable it once all
the infrastructure is there.
Signed-off-by: Muchun Song
Reviewed-by: Oscar Salvador
Tested-by: Chen Huang
Tested-by: Bodeddula Balasubramaniam
Acked-by: Michal Hocko
---
include/linux/bootmem_info.h | 28 +-
include/linux/mm.h
indexes of tail struct page. In this case, it
will be easier to add a new tail page index later.
Signed-off-by: Muchun Song
Reviewed-by: Oscar Salvador
Reviewed-by: Miaohe Lin
Tested-by: Chen Huang
Tested-by: Bodeddula Balasubramaniam
Acked-by: Michal Hocko
---
include/linux/hugetlb.h | 21
free_reserved_page() to free vmemmap pages. The routine
register_page_bootmem_info() is used to register bootmem info.
Therefore, make sure register_page_bootmem_info is enabled if
HUGETLB_PAGE_FREE_VMEMMAP is defined.
Signed-off-by: Muchun Song
Reviewed-by: Oscar Salvador
Acked-by: Mike Kravetz
any functional change.
Signed-off-by: Muchun Song
Acked-by: Mike Kravetz
Reviewed-by: Oscar Salvador
Reviewed-by: David Hildenbrand
Reviewed-by: Miaohe Lin
Tested-by: Chen Huang
Tested-by: Bodeddula Balasubramaniam
---
arch/sparc/mm/init_64.c | 1 +
arch/x86/mm/init_64.c