On 17.06.25 11:49, David Hildenbrand wrote:
On 16.06.25 13:58, Alistair Popple wrote:
DAX no longer requires device PTEs as it always has a ZONE_DEVICE page
associated with the PTE that can be reference counted normally. Other users
of pte_devmap are drivers that set PFN_DEV when calling vmf_insert_mixed() ...
On 16.06.25 13:58, Alistair Popple wrote:
Nothing uses PFN_DEV anymore so no need to create devmap pXd's when
mapping a PFN. Instead, special mappings will be created, which ensures
vm_normal_page_pXd() will not return a page for mappings that don't have an
associated struct page. This could change behaviour slightly ...
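For reference, the effect of switching from devmap to special entries can be
illustrated with a minimal sketch of a vm_normal_page_pXd()-style check; this
is only an illustration assuming a pmd_special() helper is available, not the
exact upstream code:

#include <linux/mm.h>

/*
 * Minimal sketch: once PFN-only huge mappings are inserted as special,
 * a normal-page lookup simply refuses to hand out a struct page for them,
 * so callers never see a page that lacks a refcounted struct page.
 */
static struct page *normal_page_from_pmd_sketch(struct vm_area_struct *vma,
						unsigned long addr, pmd_t pmd)
{
	if (pmd_special(pmd))
		return NULL;			/* PFN-only mapping, no page */

	return pfn_to_page(pmd_pfn(pmd));	/* normal, refcounted page */
}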
On 16.06.25 13:58, Alistair Popple wrote:
DAX no longer requires device PTEs as it always has a ZONE_DEVICE page
associated with the PTE that can be reference counted normally. Other users
of pte_devmap are drivers that set PFN_DEV when calling vmf_insert_mixed()
which ensures vm_normal_page() returns NULL for them ...
On 16.06.25 13:58, Alistair Popple wrote:
The only users of pmd_devmap were device dax and fs dax. The check for
pmd_devmap() in check_pmd_state() is therefore redundant as callers
explicitly check for is_zone_device_page(), so this check can be dropped.
Looking again, is this true?
If we ret
... not removed, so do that now.
Signed-off-by: Alistair Popple
Acked-by: David Hildenbrand
--
Cheers,
David / dhildenb
Signed-off-by: Alistair Popple
Reviewed-by: Jason Gunthorpe
---
Acked-by: David Hildenbrand
--
Cheers,
David / dhildenb
supporting it.
Signed-off-by: Alistair Popple
Acked-by: Will Deacon # arm64
Suggested-by: Chunyan Zhang
Reviewed-by: Björn Töpel
Reviewed-by: Jason Gunthorpe
---
Acked-by: David Hildenbrand
--
Cheers,
David / dhildenb
On 16.06.25 13:58, Alistair Popple wrote:
It's no longer used so remove it.
Signed-off-by: Alistair Popple
Reviewed-by: Jason Gunthorpe
---
Acked-by: David Hildenbrand
--
Cheers,
David / dhildenb
... files changed, 109 insertions(+), 235 deletions(-)
delete mode 100644 include/linux/pfn_t.h
Lovely
Acked-by: David Hildenbrand
--
Cheers,
David / dhildenb
On 17.06.25 11:25, David Hildenbrand wrote:
On 16.06.25 13:58, Alistair Popple wrote:
Previously dax pages were skipped by the pagewalk code as pud_special() or
vm_normal_page{_pmd}() would be false for DAX pages. Now that dax pages are
refcounted normally that is no longer the case, so the pagewalk code will
start returning them.
On 16.06.25 13:58, Alistair Popple wrote:
Previously dax pages were skipped by the pagewalk code as pud_special() or
vm_normal_page{_pmd}() would be false for DAX pages. Now that dax pages are
refcounted normally that is no longer the case, so the pagewalk code will
start returning them.
Most callers ...
On 16.06.25 13:58, Alistair Popple wrote:
Currently dax is the only user of pmd and pud mapped ZONE_DEVICE
pages. Therefore page walkers that want to exclude DAX pages can check
pmd_devmap or pud_devmap. However soon dax will no longer set PFN_DEV,
meaning dax pages are mapped as normal pages.
...
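With pmd_devmap()/pud_devmap() going away, a walker that still needs to skip
DAX has to identify these mappings differently. A minimal sketch of one
possible check, assuming the walker has the VMA and (optionally) the folio at
hand; the actual patches may key off a different combination:

#include <linux/mm.h>
#include <linux/dax.h>

/* Sketch: skip DAX mappings without relying on pmd_devmap()/pud_devmap(). */
static bool skip_dax_mapping(struct vm_area_struct *vma, struct folio *folio)
{
	if (vma_is_dax(vma))
		return true;			/* fsdax/devdax VMA */

	/* Or, for this purpose, test the folio itself. */
	return folio && folio_is_zone_device(folio);
}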
On 05.06.25 18:30, Dan Williams wrote:
David Hildenbrand wrote:
On 05.06.25 14:09, Jason Gunthorpe wrote:
On Wed, Jun 04, 2025 at 07:35:24PM -0700, Dan Williams wrote:
If all dax pages are special, then vm_normal_page() should never find
them and gup should fail.
...oh, but vm_normal_page_p[mu]d() is not used in the gup path ...
On 05.06.25 14:09, Jason Gunthorpe wrote:
On Wed, Jun 04, 2025 at 07:35:24PM -0700, Dan Williams wrote:
If all dax pages are special, then vm_normal_page() should never find
them and gup should fail.
...oh, but vm_normal_page_p[mu]d() is not used in the gup path, and
'special' is not set in the ...
On 04.06.25 23:58, Michael Kelley wrote:
From: Michael Kelley Sent: Tuesday, June 3, 2025 10:25 AM
From: David Hildenbrand Sent: Tuesday, June 3, 2025 12:55 AM
On 03.06.25 03:49, Michael Kelley wrote:
From: David Hildenbrand Sent: Monday, June 2, 2025 2:48 AM
[snip]
@@ -182,20
On 05.06.25 09:46, Christoph Hellwig wrote:
On Wed, Jun 04, 2025 at 06:59:09PM -0700, Dan Williams wrote:
+/* return normal pages backed by the page allocator */
+static inline struct page *vm_normal_gfp_pmd(struct vm_area_struct *vma,
+					     unsigned long addr, pmd_t pmd)
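Filling in the elided body as a guess: a wrapper like the one quoted above
could filter out anything not backed by the page allocator roughly as follows
(the name is from the quoted hunk; the body is an assumption, not the posted
patch):

#include <linux/mm.h>
#include <linux/mmzone.h>

/* Return only normal pages backed by the page allocator (sketch). */
static inline struct page *vm_normal_gfp_pmd(struct vm_area_struct *vma,
					     unsigned long addr, pmd_t pmd)
{
	struct page *page = vm_normal_page_pmd(vma, addr, pmd);

	/* ZONE_DEVICE pages (e.g. DAX) did not come from the page allocator. */
	if (page && is_zone_device_page(page))
		return NULL;

	return page;
}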
On 03.06.25 03:49, Michael Kelley wrote:
From: David Hildenbrand Sent: Monday, June 2, 2025 2:48 AM
On 23.05.25 18:15, mhkelle...@gmail.com wrote:
From: Michael Kelley
Current defio code works only for framebuffer memory that is allocated
with vmalloc(). The code assumes that the underlying page refcount can be
used by the mm subsystem to manage each framebuffer page's lifecycle ...
On 02.06.25 11:33, David Hildenbrand wrote:
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1398,10 +1398,7 @@ static int insert_pfn_pmd(struct vm_area_struct *vma,
unsigned long addr,
}
entry = pmd_mkhuge(pfn_t_pmd(pfn, prot));
- if (pfn_t_devmap(pfn))
---
Acked-by: David Hildenbrand
--
Cheers,
David / dhildenb
On 29.05.25 08:32, Alistair Popple wrote:
Changes from v2 of the RFC[1]:
- My ZONE_DEVICE refcount series has been merged as commit 7851bf649d42
(Patch series
"fs/dax: Fix ZONE_DEVICE page reference counts", v9.) which is included in
v6.15 so have rebased on top of that.
- No major
On 23.05.25 18:15, mhkelle...@gmail.com wrote:
From: Michael Kelley
Current defio code works only for framebuffer memory that is allocated
with vmalloc(). The code assumes that the underlying page refcount can
be used by the mm subsystem to manage each framebuffer page's lifecycle,
including freeing ...
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1398,10 +1398,7 @@ static int insert_pfn_pmd(struct vm_area_struct *vma,
unsigned long addr,
}
entry = pmd_mkhuge(pfn_t_pmd(pfn, prot));
- if (pfn_t_devmap(pfn))
- entry = pmd_mkdevmap(entry);
- else
-	entry = pmd_mkspecial(entry);
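For context, removing that branch means the entry construction presumably ends
up always marking the PMD special, along these lines (a sketch of the
post-patch shape, not a verbatim quote of the patch):

	entry = pmd_mkhuge(pfn_t_pmd(pfn, prot));
	/* No devmap bit any more: every PFN-based huge mapping is special,
	 * so vm_normal_page_pmd() keeps ignoring it. */
	entry = pmd_mkspecial(entry);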
On 29.05.25 08:32, Alistair Popple wrote:
Previously dax pages were skipped by the pagewalk code as pud_special() or
vm_normal_page{_pmd}() would be false for DAX pages. Now that dax pages are
refcounted normally that is no longer the case, so add explicit checks to
skip them.
Is this really what ...
On 29.05.25 08:32, Alistair Popple wrote:
Currently dax is the only user of pmd and pud mapped ZONE_DEVICE
pages. Therefore page walkers that want to exclude DAX pages can check
pmd_devmap or pud_devmap. However soon dax will no longer set PFN_DEV,
meaning dax pages are mapped as normal pages.
...
well
Acked-by: David Hildenbrand
--
Cheers,
David / dhildenb
On 14.05.25 19:53, David Hildenbrand wrote:
On 13.05.25 19:48, Liam R. Howlett wrote:
* David Hildenbrand [250512 08:34]:
The "memramp() shrinking" scenario no longer applies, so let's remove
that now-unnecessary handling.
Reviewed-by: Lorenzo Stoakes
Acked-by: Ingo Molnar # x86 bits
On 13.05.25 19:48, Liam R. Howlett wrote:
* David Hildenbrand [250512 08:34]:
The "memramp() shrinking" scenario no longer applies, so let's remove
that now-unnecessary handling.
Reviewed-by: Lorenzo Stoakes
Acked-by: Ingo Molnar # x86 bits
Signed-off-by: David Hildenbrand
On 13.05.25 19:40, Liam R. Howlett wrote:
* David Hildenbrand [250512 08:34]:
Let's provide variants of track_pfn_remap() and untrack_pfn() that won't
mess with VMAs, and replace the usage in mm/memremap.c.
Add some documentation.
Reviewed-by: Lorenzo Stoakes
Acked-by: Ingo Molnar # x86 bits
On 13.05.25 17:53, Liam R. Howlett wrote:
* David Hildenbrand [250512 08:34]:
On top of mm-unstable.
VM_PAT annoyed me too much and wasted too much of my time, let's clean
PAT handling up and remove VM_PAT.
This should sort out various issues with VM_PAT we discovered recently,
and
On 13.05.25 12:16, Lorenzo Stoakes wrote:
On Tue, May 13, 2025 at 11:10:45AM +0200, David Hildenbrand wrote:
On 12.05.25 18:42, Lorenzo Stoakes wrote:
On Mon, May 12, 2025 at 02:34:17PM +0200, David Hildenbrand wrote:
Let's use our new interface. In remap_pfn_range(), we'll now decide
whether we have to track (full VMA covered) or only lookup the
cachemode (partial VMA covered).
On 12.05.25 18:49, Lorenzo Stoakes wrote:
On Mon, May 12, 2025 at 02:34:22PM +0200, David Hildenbrand wrote:
Let's just have it in a single function. The resulting function is
certainly small enough and readable.
Signed-off-by: David Hildenbrand
Nice, great bit of refactoring :) th
On 12.05.25 18:42, Lorenzo Stoakes wrote:
On Mon, May 12, 2025 at 02:34:17PM +0200, David Hildenbrand wrote:
Let's use our new interface. In remap_pfn_range(), we'll now decide
whether we have to track (full VMA covered) or only lookup the
cachemode (partial VMA covered).
Remember what ...
On 12.05.25 17:43, Lorenzo Stoakes wrote:
On Mon, May 12, 2025 at 02:34:15PM +0200, David Hildenbrand wrote:
... by factoring it out from track_pfn_remap() into
pfnmap_setup_cachemode() and provide pfnmap_setup_cachemode_pfn() as
a replacement for track_pfn_insert().
For PMDs/PUDs, we keep ...
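As a rough illustration of the call-site conversion described above (the
pfnmap_setup_cachemode_pfn() name follows the description; its exact signature
here is an assumption):

	/* Before: VMA-based PAT interface. */
	track_pfn_insert(vma, &pgprot, pfn);

	/* After: only look up and apply the cachemode for this PFN. */
	pfnmap_setup_cachemode_pfn(pfn_t_to_pfn(pfn), &pgprot);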
We can now get rid of the old interface along with get_pat_info() and
follow_phys().
Reviewed-by: Lorenzo Stoakes
Acked-by: Ingo Molnar # x86 bits
Signed-off-by: David Hildenbrand
---
arch/x86/mm/pat/memtype.c | 147 --
include/linux/pgtable.h | 66
Always set to 0, so let's remove it.
Reviewed-by: Lorenzo Stoakes
Acked-by: Ingo Molnar # x86 bits
Signed-off-by: David Hildenbrand
---
arch/x86/mm/pat/memtype.c | 12 +++-
1 file changed, 3 insertions(+), 9 deletions(-)
diff --git a/arch/x86/mm/pat/memtype.c b/arch/x86/mm/pat/memtype.c
It's unused, so let's remove it.
Reviewed-by: Lorenzo Stoakes
Acked-by: Ingo Molnar # x86 bits
Signed-off-by: David Hildenbrand
---
include/linux/mm.h | 4 +---
include/trace/events/mmflags.h | 4 +---
2 files changed, 2 insertions(+), 6 deletions(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
Let's just have it in a single function. The resulting function is
certainly small enough and readable.
Signed-off-by: David Hildenbrand
---
arch/x86/mm/pat/memtype_interval.c | 33 +-
1 file changed, 10 insertions(+), 23 deletions(-)
diff --git a/arch/x86/mm/pat/memtype_interval.c b/arch/x86/mm/pat/memtype_interval.c
track_pfn() does not exist, let's simply refer to it as "pfnmap
tracking".
Reviewed-by: Lorenzo Stoakes
Acked-by: Ingo Molnar # x86 bits
Signed-off-by: David Hildenbrand
---
mm/io-mapping.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/io-mapping.c b/mm/io-mapping.c
track_pfn() does not exist, let's simply refer to it as "pfnmap
tracking".
Reviewed-by: Lorenzo Stoakes
Acked-by: Ingo Molnar # x86 bits
Signed-off-by: David Hildenbrand
---
drivers/gpu/drm/i915/i915_mm.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/i915/i915_mm.c b/drivers/gpu/drm/i915/i915_mm.c
The "memramp() shrinking" scenario no longer applies, so let's remove
that now-unnecessary handling.
Reviewed-by: Lorenzo Stoakes
Acked-by: Ingo Molnar # x86 bits
Signed-off-by: David Hildenbrand
---
arch/x86/mm/pat/memtype_interval.c | 44 --
1 file changed, 6 insertions(+), 38 deletions(-)
ple for now.
Acked-by: Ingo Molnar # x86 bits
Signed-off-by: David Hildenbrand
---
include/linux/mm_inline.h | 2 +
include/linux/mm_types.h | 11 ++
mm/memory.c | 82 +++
mm/mmap.c | 5 ---
mm/mremap.c | 4
Let's provide variants of track_pfn_remap() and untrack_pfn() that won't
mess with VMAs, and replace the usage in mm/memremap.c.
Add some documentation.
Reviewed-by: Lorenzo Stoakes
Acked-by: Ingo Molnar # x86 bits
Signed-off-by: David Hildenbrand
---
arch/x86/mm/pat/memtype.c
range.
We'll reuse pfnmap_setup_cachemode() from core MM next.
Acked-by: Ingo Molnar # x86 bits
Signed-off-by: David Hildenbrand
---
arch/x86/mm/pat/memtype.c | 24 ++
include/linux/pgtable.h | 52 +--
mm/huge_memory.c | 5 ++-
... no need to care about this
micro-optimization.
Reviewed-by: Lorenzo Stoakes
Acked-by: Ingo Molnar # x86 bits
Signed-off-by: David Hildenbrand
---
arch/x86/mm/pat/memtype.c | 33 +++--
1 file changed, 15 insertions(+), 18 deletions(-)
diff --git a/arch/x86/mm/pat/memtype.c b/arch/x86/mm/pat/memtype.c
tch() into memtype_erase()"
* "mm/io-mapping: track_pfn() -> "pfnmap tracking""
-> Adjust to code changes in mm-unstable
David Hildenbrand (11):
x86/mm/pat: factor out setting cachemode into pgprot_set_cachemode()
mm: convert track_pfn_insert() to pfnmap_setup_cachemode_pfn()
Obviously my series will break this but should be _fairly_ trivial to
update.
You will however have to make sure to update tools/testing/vma/* to handle
the new functions in userland testing (they need to be stubbed out).
Hmm, seems to compile. I guess, because we won't have
"__HAVE_PFNMAP_TRA
-tracking will require hooking into vma splitting
code ... not something I am super happy about. :)
Also my god re: the 'kind of working' aspects of PAT, so frustrating.
Signed-off-by: David Hildenbrand
Generally looking good, afaict, but maybe let's get some input from Suren.
On 28.04.25 22:23, Lorenzo Stoakes wrote:
On Fri, Apr 25, 2025 at 10:17:13AM +0200, David Hildenbrand wrote:
The "memramp() shrinking" scenario no longer applies, so let's remove
that now-unnecessary handling.
I wonder if we could remove even more of the code here given the
On 29.04.25 15:44, Peter Xu wrote:
On Mon, Apr 28, 2025 at 10:37:49PM +0200, David Hildenbrand wrote:
On 28.04.25 18:21, Peter Xu wrote:
On Mon, Apr 28, 2025 at 04:58:46PM +0200, David Hildenbrand wrote:
What it does on PAT (only implementation so far ...) is looking up the
memory type to select the caching mode that can be used.
On 28.04.25 18:21, Peter Xu wrote:
On Mon, Apr 28, 2025 at 04:58:46PM +0200, David Hildenbrand wrote:
What it does on PAT (only implementation so far ...) is looking up the
memory type to select the caching mode that can be used.
"sanitize" was IMHO a good fit, because we must make
On 28.04.25 21:57, Suren Baghdasaryan wrote:
On Mon, Apr 28, 2025 at 12:37 PM Lorenzo Stoakes
wrote:
On Mon, Apr 28, 2025 at 07:23:18PM +0200, David Hildenbrand wrote:
On 28.04.25 18:24, Peter Xu wrote:
On Mon, Apr 28, 2025 at 06:16:21PM +0200, David Hildenbrand wrote:
Probably due to what config you have.
On 28.04.25 22:00, Suren Baghdasaryan wrote:
On Mon, Apr 28, 2025 at 12:47 PM Lorenzo Stoakes
wrote:
+cc Suren, who has worked HEAVILY on VMA field manipulation and such :)
Suren - David is proposing adding a new field. AFAICT this does not add a
new cache line so I think we're all good.
But
On 28.04.25 21:37, Lorenzo Stoakes wrote:
On Mon, Apr 28, 2025 at 07:23:18PM +0200, David Hildenbrand wrote:
On 28.04.25 18:24, Peter Xu wrote:
On Mon, Apr 28, 2025 at 06:16:21PM +0200, David Hildenbrand wrote:
Probably due to what config you have. E.g., when I'm looking mine it's much
bigger and already consuming 256B ...
On 28.04.25 18:24, Peter Xu wrote:
On Mon, Apr 28, 2025 at 06:16:21PM +0200, David Hildenbrand wrote:
Probably due to what config you have. E.g., when I'm looking mine it's
much bigger and already consuming 256B, but it's because I enabled more
things (userfaultfd, lockdep, etc.).
+int pfnmap_track(unsigned long pfn, unsigned long size, pgprot_t *prot)
+{
+ const resource_size_t paddr = (resource_size_t)pfn << PAGE_SHIFT;
+
+ return reserve_pfn_range(paddr, size, prot, 0);
Nitty, but a pattern established by Liam which we've followed consistently
in VMA code ...
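For context, a caller of this interface would presumably pair it with the
untrack side, roughly as below (pfnmap_untrack() and the error handling shown
are assumptions based on the patch titles in this series, not a verbatim
quote):

	pgprot_t pgprot = pgprot_noncached(PAGE_KERNEL);

	/* Reserve/track the memtype for the raw PFN range (no VMA involved). */
	if (pfnmap_track(pfn, size, &pgprot))
		return -EINVAL;

	/* ... set up the mapping using the possibly adjusted pgprot ... */

	/* On teardown, drop the reservation again. */
	pfnmap_untrack(pfn, size);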
return 0;
}
@@ -1073,10 +1072,8 @@ void track_pfn_insert(struct vm_area_struct *vma,
pgprot_t *prot, pfn_t pfn)
if (!pat_enabled())
return;
- /* Set prot based on lookup */
We're losing a comment here but who cares, it's obvious what's happening.
Yeah, it's now
On 28.04.25 18:08, Peter Xu wrote:
On Fri, Apr 25, 2025 at 10:36:55PM +0200, David Hildenbrand wrote:
On 25.04.25 22:23, Peter Xu wrote:
On Fri, Apr 25, 2025 at 10:17:09AM +0200, David Hildenbrand wrote:
Let's use our new interface. In remap_pfn_range(), we'll now decide
whether we have to track (full VMA covered) or only sanitize the pgprot
(partial VMA covered).
On 28.04.25 18:06, Lorenzo Stoakes wrote:
On Fri, Apr 25, 2025 at 10:17:15AM +0200, David Hildenbrand wrote:
track_pfn() does not exist, let's simply refer to it as "pfnmap
tracking".
Signed-off-by: David Hildenbrand
LGTM, so:
Reviewed-by: Lorenzo Stoakes
---
mm/io-mapping.c | 2 +-
What it does on PAT (only implementation so far ...) is looking up the
memory type to select the caching mode that can be use.
"sanitize" was IMHO a good fit, because we must make sure that we don't use
the wrong caching mode.
update/setup/... don't make that quite clear. Any other suggestions?
On 25.04.25 22:23, Peter Xu wrote:
On Fri, Apr 25, 2025 at 10:17:09AM +0200, David Hildenbrand wrote:
Let's use our new interface. In remap_pfn_range(), we'll now decide
whether we have to track (full VMA covered) or only sanitize the pgprot
(partial VMA covered).
Remember what
On 25.04.25 22:00, Peter Xu wrote:
On Fri, Apr 25, 2025 at 10:17:08AM +0200, David Hildenbrand wrote:
Let's use the new, cleaner interface.
Signed-off-by: David Hildenbrand
---
mm/memremap.c | 8
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/mm/memremap.c b/mm/memremap.c
- track_pfn_insert(vma, &pgprot, pfn);
+ if (pfnmap_sanitize_pgprot(pfn_t_to_pfn(pfn), PAGE_SIZE, &pgprot))
+ return VM_FAULT_FALLBACK;
Would "pgtable" leak if it fails? If it's PAGE_SIZE, IIUC it won't ever
trigger, though.
Missed that comment. I can document that ...
On 25.04.25 21:31, Peter Xu wrote:
On Fri, Apr 25, 2025 at 10:17:06AM +0200, David Hildenbrand wrote:
... by factoring it out from track_pfn_remap().
For PMDs/PUDs, actually check the full range, and trigger a fallback
if we run into this "different memory types / cachemodes" scenario.
There will be some clash with [1], but nothing that cannot be sorted out
easily by moving the functions added to kernel/fork.c to wherever the vma
bits will live.
Briefly tested with some basic /dev/mem test I crafted. I want to convert
them to selftests, but that might or might not require a bit
Always set to 0, so let's remove it.
Signed-off-by: David Hildenbrand
---
arch/x86/mm/pat/memtype.c | 12 +++-
1 file changed, 3 insertions(+), 9 deletions(-)
diff --git a/arch/x86/mm/pat/memtype.c b/arch/x86/mm/pat/memtype.c
index 668ebf0065157..57e3ced4c28cb 100644
--- a/arch/x86/mm/pat/memtype.c
It's unused, so let's remove it.
Signed-off-by: David Hildenbrand
---
include/linux/mm.h | 4 +---
include/trace/events/mmflags.h | 4 +---
2 files changed, 2 insertions(+), 6 deletions(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 9b701cfbef223..a20
Let's provide variants of track_pfn_remap() and untrack_pfn() that won't
mess with VMAs, to replace the existing interface step-by-step.
Add some documentation.
Signed-off-by: David Hildenbrand
---
arch/x86/mm/pat/memtype.c | 14 ++
include/linux/pgtable.h
track_pfn() does not exist, let's simply refer to it as "pfnmap
tracking".
Signed-off-by: David Hildenbrand
---
drivers/gpu/drm/i915/i915_mm.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/i915/i915_mm.c b/drivers/gpu/drm/i915/i915_mm.c
track_pfn() does not exist, let's simply refer to it as "pfnmap
tracking".
Signed-off-by: David Hildenbrand
---
mm/io-mapping.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/io-mapping.c b/mm/io-mapping.c
index 01b3627999304..7266441ad0834 100644
--- a/mm/io-mapping.c
The "memramp() shrinking" scenario no longer applies, so let's remove
that now-unnecessary handling.
Signed-off-by: David Hildenbrand
---
arch/x86/mm/pat/memtype_interval.c | 44 --
1 file changed, 6 insertions(+), 38 deletions(-)
diff --git a/arch/x86/mm/pat/memtype_interval.c b/arch/x86/mm/pat/memtype_interval.c
Let's factor it out to make the code easier to grasp.
Use it also in pgprot_writecombine()/pgprot_writethrough() where
clearing the old cachemode might not be required, but given that we are
already doing a function call, no need to care about this
micro-optimization.
Signed-off-by: David Hildenbrand
We can now get rid of the old interface along with get_pat_info() and
follow_phys().
Signed-off-by: David Hildenbrand
---
arch/x86/mm/pat/memtype.c | 147 --
include/linux/pgtable.h | 66 -
2 files changed, 213 deletions(-)
diff --git a/arch/x86/mm/pat/memtype.c b/arch/x86/mm/pat/memtype.c
Let's use the new, cleaner interface.
Signed-off-by: David Hildenbrand
---
mm/memremap.c | 8
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/mm/memremap.c b/mm/memremap.c
index 2aebc1b192da9..c417c843e9b1f 100644
--- a/mm/memremap.c
+++ b/mm/memremap.c
@@ -130,7 +
on in a hacky way, now we won't adjust the reservation but
leave it around until all involved VMAs are gone.
Signed-off-by: David Hildenbrand
---
include/linux/mm_inline.h | 2 +
include/linux/mm_types.h | 11 ++
kernel/fork.c | 54 --
e to
learn. Not checking each page looks wrong, though. Maybe we could
optimize the lookup internally.
Signed-off-by: David Hildenbrand
---
arch/x86/mm/pat/memtype.c | 24
include/linux/pgtable.h | 28
mm/huge_memory.c | 7 +
Cc: Jann Horn
Cc: Pedro Falcato
Cc: Peter Xu
David Hildenbrand (11):
x86/mm/pat: factor out setting cachemode into pgprot_set_cachemode()
mm: convert track_pfn_insert() to pfnmap_sanitize_pgprot()
x86/mm/pat: introduce pfnmap_track() and pfnmap_untrack()
mm/memremap: convert to pfnmap_track
On 01.04.25 12:13, Sumit Garg wrote:
+ MM folks to seek guidance here.
On Thu, Mar 27, 2025 at 09:07:34AM +0100, Jens Wiklander wrote:
Hi Sumit,
On Tue, Mar 25, 2025 at 8:42 AM Sumit Garg wrote:
On Wed, Mar 05, 2025 at 02:04:15PM +0100, Jens Wiklander wrote:
Add support in the OP-TEE backend ...
On 17.02.25 01:01, Alistair Popple wrote:
On Tue, Feb 11, 2025 at 09:33:54AM +0100, David Hildenbrand wrote:
On 11.02.25 06:00, Andrew Morton wrote:
On Mon, 10 Feb 2025 20:37:45 +0100 David Hildenbrand wrote:
The single "real" user in the tree of make_device_exclusive_rang
On 14.02.25 02:25, Alistair Popple wrote:
On Thu, Feb 13, 2025 at 12:15:58PM +0100, David Hildenbrand wrote:
On 13.02.25 12:03, Alistair Popple wrote:
On Mon, Feb 10, 2025 at 08:37:42PM +0100, David Hildenbrand wrote:
Against mm-hotfixes-stable for now.
Discussing the PageTail() call in make_device_exclusive_range() with Willy, ...
On 13.02.25 12:03, Alistair Popple wrote:
On Mon, Feb 10, 2025 at 08:37:42PM +0100, David Hildenbrand wrote:
Against mm-hotfixes-stable for now.
Discussing the PageTail() call in make_device_exclusive_range() with
Willy, I recently discovered [1] that device-exclusive handling does
not
On 11.02.25 06:00, Andrew Morton wrote:
On Mon, 10 Feb 2025 20:37:45 +0100 David Hildenbrand wrote:
The single "real" user in the tree of make_device_exclusive_range() always
requests making only a single address exclusive. The current implementation
is hard to fix for properly
... cases, it's
likely not something that deserves a "Fixes:".
Signed-off-by: David Hildenbrand
---
mm/rmap.c | 9 ++---
1 file changed, 6 insertions(+), 3 deletions(-)
diff --git a/mm/rmap.c b/mm/rmap.c
index e2a543f639ce3..0f760b93fc0a2 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -2435
... page_vma_mapped_walk() users can properly
handle device-exclusive entries.
Fixes: b756a3b5e7ea ("mm: device exclusive memory access")
Signed-off-by: David Hildenbrand
---
mm/damon/paddr.c | 10 +-
1 file changed, 9 insertions(+), 1 deletion(-)
diff --git a/mm/damon/paddr.c b/mm/damon/paddr.c
index
frequently performs
atomics to the same page. Similarly, KSM will never merge order-0 folios
that are device-exclusive.
Fixes: b756a3b5e7ea ("mm: device exclusive memory access")
Signed-off-by: David Hildenbrand
---
mm/memory.c | 17 +
mm/rmap.c | 7 ---
2 files
page_vma_mapped_walk() users can properly
handle device-exclusive entries.
Signed-off-by: David Hildenbrand
---
mm/damon/ops-common.c | 23 +--
1 file changed, 21 insertions(+), 2 deletions(-)
diff --git a/mm/damon/ops-common.c b/mm/damon/ops-common.c
index d25d99cb5f2bb..86a50e8fb
... page_vma_mapped_walk() users can properly
handle device-exclusive entries.
Fixes: b756a3b5e7ea ("mm: device exclusive memory access")
Signed-off-by: David Hildenbrand
---
mm/page_idle.c | 9 +++--
1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/mm/page_idle.c b/mm/page_idle.c
index 947c7c7a37289..408aaf
... page_vma_mapped_walk() again, so this is likely a fix (unless something
else could prevent that race, but it doesn't look like). In the
future it could be handled if ever required, for now just give up and
ignore them like folio_walk would.
Fixes: b756a3b5e7ea ("mm: device exclusive memory access")
... for a device-exclusive PTE to "vanish".
Fixes: b756a3b5e7ea ("mm: device exclusive memory access")
Signed-off-by: David Hildenbrand
---
mm/rmap.c | 124 ++
1 file changed, 51 insertions(+), 73 deletions(-)
diff --git a/mm/rmap.c b/mm/rmap.c
for small folios, because we'll always have
!folio_mapped() with a single device-exclusive entry. We'll adjust the
mapcount logic once all page_vma_mapped_walk() users can properly
handle device-exclusive entries.
Fixes: b756a3b5e7ea ("mm: device exclusive memory access")
Signed-off-by: David Hildenbrand
lock, so any device-exclusive users should be properly prepared
for a device-exclusive PTE to "vanish".
Fixes: b756a3b5e7ea ("mm: device exclusive memory access")
Signed-off-by: David Hildenbrand
---
mm/rmap.c | 52 +++-
1 file changed,
... "mm: device exclusive memory access":
device-exclusive entries adjust the mapcount, but not the refcount.
Fixes: b756a3b5e7ea ("mm: device exclusive memory access")
Reviewed-by: Alistair Popple
Signed-off-by: David Hildenbrand
---
mm/page_vma_mapped.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
Fixes: b756a3b5e7ea ("mm: device exclusive memory access")
Signed-off-by: David Hildenbrand
---
kernel/events/uprobes.c | 13 -
1 file changed, 12 insertions(+), 1 deletion(-)
diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index 2ca797cbe465f..cd6105b100325 100644
--- a/kernel/events/uprobes.c
... page_vma_mapped_walk() callers correctly
* Added "mm/rmap: avoid -EBUSY from make_device_exclusive()" to fix some
hmm selftest failures I saw while testing under memory pressure
* Plenty of comment/description updates and improvements
David Hildenbrand (17):
mm/gup: reject FOLL_SPLIT_PMD ...
Fixes: b756a3b5e7ea ("mm: device exclusive memory access")
Reviewed-by: Alistair Popple
Cc:
Signed-off-by: David Hildenbrand
---
mm/rmap.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/rmap.c b/mm/rmap.c
index c6c4d4ea29a7e..17fbfa61f7efb 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@
There is no need for the distinction anymore; let's merge the readable
and writable device-exclusive entries into a single device-exclusive
entry type.
Acked-by: Simona Vetter
Reviewed-by: Alistair Popple
Signed-off-by: David Hildenbrand
---
include/linux/swap.h| 7 +++
include/
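A minimal before/after sketch of what merging the two entry types means at a
creation site (the pre-merge helper names are the existing swapops.h ones; the
post-merge name is an assumption based on the description above):

	swp_entry_t entry;

	/* Before: two flavours, chosen by the access mode. */
	if (write)
		entry = make_writable_device_exclusive_entry(page_to_pfn(page));
	else
		entry = make_readable_device_exclusive_entry(page_to_pfn(page));

	/* After the merge: a single device-exclusive entry type. */
	entry = make_device_exclusive_entry(page_to_pfn(page));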
... PageAnonExclusive(page)==true and can_change_pte_writable()==true,
unless we are dealing with soft-dirty tracking or uffd-wp. But reusing
can_change_pte_writable() for now is cleaner.
Signed-off-by: David Hildenbrand
---
mm/memory.c | 11 +++
1 file changed, 7 insertions(+), 4 deletions(-)
diff --git a/mm/memory.c b/mm/memory.c
have
negative side-effects [1].
This gets rid of the "folio_mapcount()" usage and lets us fix ordinary
rmap walks (migration/swapout) next. Spell out that messing with the
mapcount is wrong and must be fixed.
[1] https://lkml.kernel.org/r/Z5tI-cOSyzdLjoe_@phenom.ffwll.local
things.
Acked-by: Simona Vetter
Reviewed-by: Alistair Popple
Signed-off-by: David Hildenbrand
---
Documentation/mm/hmm.rst| 2 +-
Documentation/translations/zh_CN/mm/hmm.rst | 2 +-
drivers/gpu/drm/nouveau/nouveau_svm.c | 5 +-
include/linux/mmu_notifier.h
... generic follow_page_mask code")
Reviewed-by: John Hubbard
Reviewed-by: Alistair Popple
Cc:
Signed-off-by: David Hildenbrand
---
mm/gup.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/mm/gup.c b/mm/gup.c
index 3883b307780ea..61e751baf862c 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@
On 31.01.25 18:05, Simona Vetter wrote:
On Fri, Jan 31, 2025 at 11:55:55AM +0100, David Hildenbrand wrote:
On 31.01.25 00:06, Alistair Popple wrote:
On Thu, Jan 30, 2025 at 02:03:42PM +0100, Simona Vetter wrote:
On Thu, Jan 30, 2025 at 10:58:51AM +0100, David Hildenbrand wrote:
On 30.01.25