On 17.06.25 17:43, David Hildenbrand wrote:
Let's make the kernel a bit less horrible by removing the
linearity requirement in CoW PFNMAP mappings with
!CONFIG_ARCH_HAS_PTE_SPECIAL. In particular, stop messing with
vma->vm_pgoff in weird ways.
Simply lookup in applicable (i.e., Co
is
not available.
To avoid those failures:
But we deliberately have in tools/testing/selftests/mm/config:
CONFIG_TRANSPARENT_HUGEPAGE=y
So isn't this rather a test setup issue? Meaning, the environment is
not well prepared.
--
Cheers,
David / dhildenb
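For reference, the runtime alternative to relying on tools/testing/selftests/mm/config
is for the test itself to probe whether THP is usable and skip otherwise. A minimal
sketch of such a probe (not the actual patch; only the sysfs path and the kselftest
helpers are assumed):

    #include <unistd.h>
    #include "../kselftest.h"

    static int thp_available(void)
    {
            /* The sysfs directory only exists with CONFIG_TRANSPARENT_HUGEPAGE=y. */
            return access("/sys/kernel/mm/transparent_hugepage/enabled", F_OK) == 0;
    }

    /* in the test setup: */
    if (!thp_available())
            ksft_exit_skip("THP not supported by this kernel\n");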
On 25.06.25 10:53, Oscar Salvador wrote:
On Tue, Jun 17, 2025 at 05:43:41PM +0200, David Hildenbrand wrote:
Let's reduce the code duplication and factor out the non-pte/pmd related
magic into vm_normal_page_pfn().
To keep it simpler, check the pfn against both zero folios. We could
opt
and configurations, while preserving the intended behavior on
kernels that support UFFD-WP.
Suggested-by: David Hildenbrand
Signed-off-by: Li Wang
Cc: Aruna Ramakrishna
Cc: Bagas Sanjaya
Cc: Catalin Marinas
Cc: Dave Hansen
Cc: Joey Gouly
Cc: Johannes Weiner
Cc: Keith Lucas
Cc: Ryan Roberts
On 25.06.25 11:20, Oscar Salvador wrote:
On Wed, Jun 25, 2025 at 10:57:39AM +0200, David Hildenbrand wrote:
I don't think that comment is required anymore -- we do exactly what
vm_normal_page() does + documents,
What the current users are is not particularly important anymore.
Or why d
On 24.06.25 03:16, Alistair Popple wrote:
On Tue, Jun 17, 2025 at 05:43:38PM +0200, David Hildenbrand wrote:
Let's convert to vmf_insert_folio_pmd().
In the unlikely case there is already something mapped, we'll now still
call trace_dax_pmd_load_hole() and return VM_FAULT_NOPAGE.
T
On 25.06.25 10:49, Lorenzo Stoakes wrote:
David, are you planning a v2 of this soon? If so I'll hold off review until
then, if not I can get stuck in when I have time?
There will probably be a v1 called "mm: vm_normal_page*()" where I drop
the problematic bit, and respin the
On 25.06.25 11:02, Oscar Salvador wrote:
On Wed, Jun 25, 2025 at 10:47:49AM +0200, David Hildenbrand wrote:
I'm still thinking about this patch here, and will likely send out the other
patches first as a v1, and come back to this one later.
Patch#12 depends on this one, but Patch#13 shou
On 25.06.25 10:20, Oscar Salvador wrote:
On Tue, Jun 17, 2025 at 05:43:37PM +0200, David Hildenbrand wrote:
Just like we do for vmf_insert_page_mkwrite() -> ... ->
insert_page_into_pte_locked(), support the huge zero folio.
It might just be me because I don't have the full context
provide additional debug info when ejecting
the current scheduler. Also, handling the event more gracefully allows
us to potentially recover the system instead of incurring additional
downtime.
Suggested-by: Tejun Heo
Reviewed-by: Paul E. McKenney
Signed-off-by: David Dai
---
include/linux/sched
and configurations, while preserving the intended behavior on
kernels that support UFFD-WP.
Suggested-by: David Hildenbrand
Signed-off-by: Li Wang
Cc: Aruna Ramakrishna
Cc: Bagas Sanjaya
Cc: Catalin Marinas
Cc: Dave Hansen
Cc: Joey Gouly
Cc: Johannes Weiner
Cc: Keith Lucas
Cc: Ryan Roberts
On 20.06.25 15:27, Oscar Salvador wrote:
On Tue, Jun 17, 2025 at 05:43:34PM +0200, David Hildenbrand wrote:
Doing a pte_pfn() etc. of something that is not a present page table
entry is wrong. Let's check in all relevant cases where we want to
upgrade write permissions when inserting pfns/
On 20.06.25 20:24, Pedro Falcato wrote:
On Tue, Jun 17, 2025 at 05:43:34PM +0200, David Hildenbrand wrote:
Doing a pte_pfn() etc. of something that is not a present page table
entry is wrong. Let's check in all relevant cases where we want to
upgrade write permissions when inserting pfns/
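The rule both replies are getting at can be sketched as follows (the identifiers
around the pte accessors are placeholders, not the actual mm/memory.c hunk):

    /*
     * Only a present entry carries a pfn that may be interpreted; a swap or
     * otherwise non-present entry must never be fed to pte_pfn().
     */
    if (!pte_none(entry)) {
            if (!pte_present(entry) || pte_pfn(entry) != pfn)
                    goto out_unlock;        /* leave the existing entry alone */
            /* same present pfn: upgrading write permissions is fine */
    }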
On 20.06.25 14:50, Oscar Salvador wrote:
On Tue, Jun 17, 2025 at 05:43:32PM +0200, David Hildenbrand wrote:
In 2009, we converted a VM_BUG_ON(!pfn_valid(pfn)) to the current
highest_memmap_pfn sanity check in commit 22b31eec63e5 ("badpage:
vm_normal_page use print_bad_pte"
On 23.06.25 14:35, Lorenzo Stoakes wrote:
+cc Liam, David, Vlastimil, Jann
(it might not be obvious from get_maintainers.pl but please cc
maintainers/reviewers of the thing you are adding a test for, thanks!)
Overall I'm not in favour of us taking this patch.
There are a number of issues
l_msg("Bad address %lx\n", addr);
LGTM, the logic corresponds to the way we would handle it pre d1d86ce28d0f
Acked-by: David Hildenbrand
--
Cheers,
David / dhildenb
nel versions
and configurations, while preserving existing behavior on systems that do
support UFFD-WP.
Signed-off-by: Li Wang
Cc: Aruna Ramakrishna
Cc: Bagas Sanjaya
Cc: Catalin Marinas
Cc: Dave Hansen
Cc: David Hildenbrand
Cc: Joey Gouly
Cc: Johannes Weiner
Cc: Keith Lucas
Cc: Ryan Roberts
ill typechecked developer errors can be detected faster.
>
> Signed-off-by: Thomas Weißschuh
> ---
Acked-by: David Gow
Cheers,
-- David
> lib/kunit/test.c | 8
> 1 file changed, 8 deletions(-)
>
> diff --git a/lib/kunit/test.c b/lib/kunit/test.c
> index
>
On Thu, 19 Jun 2025 at 05:37, Rae Moar wrote:
>
> On Sat, Jun 14, 2025 at 4:47 AM David Gow wrote:
> >
> > From: Ujwal Jain
> >
> > Currently, the in-kernel kunit test case timeout is 300 seconds. (There
> > is a separate timeout mechanism for the whole
On 17.06.25 18:18, David Hildenbrand wrote:
On 17.06.25 17:43, David Hildenbrand wrote:
RFC because it's based on mm-new where some things might still change
around the devmap removal stuff.
While removing support for CoW PFNMAPs is a noble goal, I am not even sure
if we can remove
On 17.06.25 17:43, David Hildenbrand wrote:
RFC because it's based on mm-new where some things might still change
around the devmap removal stuff.
While removing support for CoW PFNMAPs is a noble goal, I am not even sure
if we can remove said support for e.g., /dev/mem that easily.
In th
No longer required, let's drop it.
Signed-off-by: David Hildenbrand
---
fs/proc/task_mmu.c | 6 +++---
include/linux/mm.h | 6 ++
mm/huge_memory.c | 4 ++--
mm/memory.c | 8 +++-
mm/pagewalk.c | 2 +-
5 files changed, 11 insertions(+), 15 deletions(-)
diff --git a/fs
Cc: Dev Jain
Cc: Barry Song
Cc: Vlastimil Babka
Cc: Mike Rapoport
Cc: Suren Baghdasaryan
Cc: Michal Hocko
Cc: Jann Horn
Cc: Pedro Falcato
David Hildenbrand (14):
mm/memory: drop highest_memmap_pfn sanity check in vm_normal_page()
mm: drop highest_memmap_pfn
mm: compare pfns only if
mentation, and add a comment in the code where XEN ends
up performing the pte_mkspecial() through a hypercall. More details can
be found in commit 923b2919e2c3 ("xen/gntdev: mark userspace PTEs as
special on x86 PV guests").
Cc: David Vrabel
Signed-off-by: David Hildenbrand
---
drivers
ucing vm_normal_folio_pud() until really used.
Signed-off-by: David Hildenbrand
---
include/linux/mm.h | 1 +
mm/memory.c | 11 +++
mm/pagewalk.c | 20 ++--
3 files changed, 22 insertions(+), 10 deletions(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
n().
While at it, add a check that pmd_special() is really only set where we
would expect it.
No functional change intended.
Signed-off-by: David Hildenbrand
---
mm/memory.c | 104 +++-
1 file changed, 46 insertions(+), 58 deletions(-)
diff --gi
CH_HAS_PTE_SPECIAL, but this way is certainly cleaner and
more consistent -- and doesn't really cost us anything in the cases we
really care about.
Signed-off-by: David Hildenbrand
---
include/linux/mm.h | 16 ++
mm/huge_memory.c | 16 +-
m
s -- which should be rather cheap.
Signed-off-by: David Hildenbrand
---
include/linux/huge_mm.h | 12 +++-
mm/memory.c | 2 +-
2 files changed, 12 insertions(+), 2 deletions(-)
diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 35e34e6a98a27..b260f9a1fd3f2 10
vm_normal_page().
While at it, update the doc regarding the shared zero folios.
Signed-off-by: David Hildenbrand
---
mm/huge_memory.c | 5 -
mm/memory.c | 13 +
2 files changed, 13 insertions(+), 5 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 92400f3baa9ff
Just like we do for vmf_insert_page_mkwrite() -> ... ->
insert_page_into_pte_locked(), support the huge zero folio.
Signed-off-by: David Hildenbrand
---
mm/huge_memory.c | 8 +---
1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
Let's convert to vmf_insert_folio_pmd().
In the unlikely case there is already something mapped, we'll now still
call trace_dax_pmd_load_hole() and return VM_FAULT_NOPAGE.
That should probably be fine, no need to add special cases for that.
Signed-off-by: David Hildenbrand
---
fs/
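The shape of that conversion, schematically (fs/dax.c specifics such as the
tracepoint's argument list are simplified, not copied from the patch):

    struct folio *zero_folio = mm_get_huge_zero_folio(vmf->vma->vm_mm);
    vm_fault_t ret;

    ret = vmf_insert_folio_pmd(vmf, zero_folio, /* write */ false);
    /*
     * Reached even if something was already mapped; vmf_insert_folio_pmd()
     * then leaves the existing entry alone and we still trace and return.
     */
    trace_dax_pmd_load_hole(inode, vmf, zero_folio, entry);
    return ret;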
Let's clean it all further up.
Signed-off-by: David Hildenbrand
---
mm/huge_memory.c | 36 +---
1 file changed, 13 insertions(+), 23 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index a85e0cd455109..1ea23900b5adb 100644
--- a/mm/huge_mem
Let's clean it all further up.
Signed-off-by: David Hildenbrand
---
mm/huge_memory.c | 72
1 file changed, 24 insertions(+), 48 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index e52360df87d15..a85e0cd455109 100644
---
so this is more a
cleanup than a fix for something that would likely trigger in some
weird circumstances.
At some point, we should likely unify the two pte handling paths,
similar to how we did it for pmds/puds.
Signed-off-by: David Hildenbrand
---
mm/huge_memory.c | 4 ++--
mm/memory.c | 4 ++-
ge_pmd(), where we don't even report a
problem at all ...
What might be better in the future is having a runtime option like
page-table-check to enable such checks dynamically on-demand. Something
for the future.
Signed-off-by: David Hildenbrand
---
mm/memory.c | 15 +++---
Now unused, so let's drop it.
Signed-off-by: David Hildenbrand
---
mm/internal.h | 2 --
mm/memory.c | 2 --
mm/mm_init.c | 3 ---
mm/nommu.c | 1 -
4 files changed, 8 deletions(-)
diff --git a/mm/internal.h b/mm/internal.h
index f519eb7217c26..703871905fd6d 100644
--- a/mm/inter
grep searches are going to hit the comment blocks.
David
* Simon Horman (ho...@kernel.org) wrote:
> On Sat, Jun 14, 2025 at 12:07:31AM +0100, li...@treblig.org wrote:
> > From: "Dr. David Alan Gilbert"
> >
> > The functions:
> > vringh_abandon_kern()
> > vringh_abandon_user()
> > vringh_iov_pull
?
--
Cheers,
David / dhildenb
* Simon Horman (ho...@kernel.org) wrote:
> On Sat, Jun 14, 2025 at 12:07:31AM +0100, li...@treblig.org wrote:
> > From: "Dr. David Alan Gilbert"
> >
> > The functions:
> > vringh_abandon_kern()
> > vringh_abandon_user()
> > vringh_iov_pull
fault/base timeout to allow people with faster or slower machines to
adjust these to their use-cases.
Signed-off-by: Ujwal Jain
Co-developed-by: David Gow
Signed-off-by: David Gow
---
include/kunit/try-catch.h | 1 +
lib/kunit/kunit-test.c | 9 +--
On 13.06.25 16:00, Lorenzo Stoakes wrote:
On Fri, Jun 13, 2025 at 03:53:58PM +0200, David Hildenbrand wrote:
On 13.06.25 15:49, Oscar Salvador wrote:
On Fri, Jun 13, 2025 at 11:27:01AM +0200, David Hildenbrand wrote:
Marking PMDs that map "normal" refcounted folios as special is
a
On 13.06.25 15:49, Oscar Salvador wrote:
On Fri, Jun 13, 2025 at 11:27:01AM +0200, David Hildenbrand wrote:
Marking PMDs that map "normal" refcounted folios as special is
against our rules documented for vm_normal_page(): normal (refcounted)
folios shall never have the page tab
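The documented rule in question, roughly as vm_normal_page() spells it out when
CONFIG_ARCH_HAS_PTE_SPECIAL is available (heavily simplified):

    if (pte_special(pte))
            return NULL;    /* special mapping: no refcounted "normal" page behind it */
    return pfn_to_page(pte_pfn(pte));       /* normal page, refcounted via its folio */

Marking a PMD or PUD that maps a refcounted folio as special would make the
corresponding *_special() checks take the first branch and hide a perfectly
normal folio.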
illiams
Reviewed-by: Lorenzo Stoakes
Reviewed-by: Jason Gunthorpe
Tested-by: Dan Williams
Signed-off-by: David Hildenbrand
---
include/linux/mm.h | 19 -
mm/huge_memory.c | 52 ++
2 files changed, 47 insertions(+), 24 deletions(-)
le
"struct folio_or_pfn" structure.
Use folio_mk_pmd() to create a pmd for a folio cleanly.
Fixes: 6c88f72691f8 ("mm/huge_memory: add vmf_insert_folio_pmd()")
Reviewed-by: Jason Gunthorpe
Reviewed-by: Lorenzo Stoakes
Reviewed-by: Dan Williams
Tested-by: Dan Williams
Signed-off-
c: Suren Baghdasaryan
Cc: Michal Hocko
Cc: Zi Yan
Cc: Baolin Wang
Cc: Nico Pache
Cc: Ryan Roberts
Cc: Dev Jain
Cc: Dan Williams
Cc: Oscar Salvador
David Hildenbrand (3):
mm/huge_memory: don't ignore queried cachemode in vmf_insert_pfn_pud()
mm/huge_memory: don't mark refcount
wed-by: Dan Williams
Reviewed-by: Lorenzo Stoakes
Reviewed-by: Jason Gunthorpe
Tested-by: Dan Williams
Cc:
Signed-off-by: David Hildenbrand
---
mm/huge_memory.c | 7 +++
1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index d3e66136e41a3..49b98082c
On 12.06.25 18:10, Lorenzo Stoakes wrote:
On Wed, Jun 11, 2025 at 02:06:53PM +0200, David Hildenbrand wrote:
Marking PMDs that map "normal" refcounted folios as special is
against our rules documented for vm_normal_page().
Fortunately, there are not that many pmd_special() check t
are disabled by the
hw/process/vma")
Reviewed-by: Zi Yan
Signed-off-by: Baolin Wang
---
Acked-by: David Hildenbrand
--
Cheers,
David / dhildenb
On 12.06.25 19:08, Lorenzo Stoakes wrote:
On Thu, Jun 12, 2025 at 07:00:01PM +0200, David Hildenbrand wrote:
On 12.06.25 18:49, Lorenzo Stoakes wrote:
On Wed, Jun 11, 2025 at 02:06:54PM +0200, David Hildenbrand wrote:
Marking PUDs that map "normal" refcounted folios as special
On 12.06.25 18:49, Lorenzo Stoakes wrote:
On Wed, Jun 11, 2025 at 02:06:54PM +0200, David Hildenbrand wrote:
Marking PUDs that map "normal" refcounted folios as special is
against our rules documented for vm_normal_page().
Might be worth referring to specifically which rule. I'
On 12.06.25 18:19, Lorenzo Stoakes wrote:
FWIW I did a basic build/mm self tests run locally and all looking good!
Thanks! I have another series based on this series coming up ... but
struggling to get !CONFIG_ARCH_HAS_PTE_SPECIAL tested "easily" :)
--
Cheers,
David / dhildenb
On 12.06.25 17:59, Lorenzo Stoakes wrote:
On Thu, Jun 12, 2025 at 05:36:35PM +0200, David Hildenbrand wrote:
On 12.06.25 17:28, Lorenzo Stoakes wrote:
On Wed, Jun 11, 2025 at 02:06:52PM +0200, David Hildenbrand wrote:
We setup the cache mode but ... don't forward the updated pgpr
ut the same :)
--
Cheers,
David / dhildenb
On 12.06.25 17:28, Lorenzo Stoakes wrote:
On Wed, Jun 11, 2025 at 02:06:52PM +0200, David Hildenbrand wrote:
We setup the cache mode but ... don't forward the updated pgprot to
insert_pfn_pud().
Only a problem on x86-64 PAT when mapping PFNs using PUDs that
require a special cachemode.
F
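The bug class, schematically (the cachemode-adjusting helper below is a made-up
name and insert_pfn_pud()'s argument list is simplified; the real mm/huge_memory.c
call chain differs):

    pgprot_t pgprot = vma->vm_page_prot;

    /* the helper may rewrite the cachemode bits in pgprot ... */
    setup_pfn_cachemode(pfn, &pgprot);              /* hypothetical helper */
    /* ... but the stale vma->vm_page_prot ends up being inserted */
    insert_pfn_pud(vma, addr, pud, pfn, vma->vm_page_prot, write);

The fix is simply to pass the updated pgprot down instead of vma->vm_page_prot.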
walk_page_range_debug() invoke
walk_kernel_page_table_range() internally.
We additionally make walk_page_range_debug() internal to mm.
Signed-off-by: Lorenzo Stoakes
Acked-by: Mike Rapoport (Microsoft)
Acked-by: Qi Zheng
Reviewed-by: Oscar Salvador
Reviewed-by: Suren Baghdasaryan
---
Acked-by: David
On 12.06.25 13:37, Baolin Wang wrote:
On 2025/6/12 18:08, David Hildenbrand wrote:
On 12.06.25 05:54, Baolin Wang wrote:
When running the khugepaged selftest for shmem (./khugepaged all:shmem),
Hmm, this combination is not run automatically through run_tests.sh,
right? IIUC, it only runs
="thp" run_test ./khugepaged
CATEGORY="thp" run_test ./khugepaged -s 2
+CATEGORY="thp" run_test ./khugepaged all:shmem
+
+CATEGORY="thp" run_test ./khugepaged -s 4 all:shmem
Ahh, there we have it already, nice :)
Acked-by: David Hildenbrand
--
Cheers,
David / dhildenb
do we even care about setting MADV_NOHUGEPAGE at all? IIUC, we'll
almost immediately later call cleanup_area() where we munmap(), right?
--
Cheers,
David / dhildenb
On 12.06.25 09:18, David Hildenbrand wrote:
On 12.06.25 06:20, Dan Williams wrote:
Alistair Popple wrote:
On Wed, Jun 11, 2025 at 02:06:51PM +0200, David Hildenbrand wrote:
This is v2 of
"[PATCH v1 0/2] mm/huge_memory: don't mark refcounted pages special
in vmf_ins
On 12.06.25 01:08, Andrew Morton wrote:
On Wed, 11 Jun 2025 14:06:51 +0200 David Hildenbrand wrote:
While working on improving vm_normal_page() and friends, I stumbled
over this issue: refcounted "normal" pages must not be marked
using pmd_special() / pud_special().
Why is this?
On 12.06.25 06:20, Dan Williams wrote:
Alistair Popple wrote:
On Wed, Jun 11, 2025 at 02:06:51PM +0200, David Hildenbrand wrote:
This is v2 of
"[PATCH v1 0/2] mm/huge_memory: don't mark refcounted pages special
in vmf_insert_folio_*()"
Now with one additional f
On 12.06.25 04:17, Alistair Popple wrote:
On Wed, Jun 11, 2025 at 02:06:53PM +0200, David Hildenbrand wrote:
Marking PMDs that map "normal" refcounted folios as special is
against our rules documented for vm_normal_page().
Fortunately, there are not that many pmd_special() check t
On 12.06.25 03:56, Alistair Popple wrote:
On Wed, Jun 11, 2025 at 02:06:52PM +0200, David Hildenbrand wrote:
We setup the cache mode but ... don't forward the updated pgprot to
insert_pfn_pud().
Only a problem on x86-64 PAT when mapping PFNs using PUDs that
require a special cachemode.
F
On 12.06.25 06:34, Dan Williams wrote:
David Hildenbrand wrote:
We setup the cache mode but ... don't forward the updated pgprot to
insert_pfn_pud().
Only a problem on x86-64 PAT when mapping PFNs using PUDs that
require a special cachemode.
This is only a problem if the kernel mappe
ed-off-by: Mark Brown
---
Acked-by: David Hildenbrand
--
Cheers,
David / dhildenb
On 10.06.25 16:13, Mark Brown wrote:
Tweak the coding style for checking for non-zero return values.
While we're at it, also remove a now redundant ORing of the madvise()
return code.
Suggested-by: David Hildenbrand
Signed-off-by: Mark Brown
---
Acked-by: David Hildenbrand
--
C
On 10.06.25 16:13, Mark Brown wrote:
This prints the errno and a string decode of it.
Reported-by: David Hildenbrand
Probably not "Reported-by". Did you mean "Suggested-by" like for the others?
Signed-off-by: Mark Brown
---
Acked-by: David Hildenbrand
--
Cheers,
David / dhildenb
On 10.06.25 16:13, Mark Brown wrote:
Specify that errors reported from pipe() failures are the result of
failures.
Suggested-by: David Hildenbrand
Signed-off-by: Mark Brown
---
Acked-by: David Hildenbrand
--
Cheers,
David / dhildenb
29ef1 ("mm: remove vmf_insert_pfn_xxx_prot() for huge page-table
entries")
Cc:
Signed-off-by: David Hildenbrand
---
mm/huge_memory.c | 7 +++
1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index d3e66136e41a3..49b98082c5401
On 10.06.25 16:13, Mark Brown wrote:
It is not sufficiently clear what the individual tests in the cow test
program are checking so add messages for the failure cases.
Suggested-by: David Hildenbrand
Signed-off-by: Mark Brown
---
Thanks!
Acked-by: David Hildenbrand
--
Cheers,
David
s.
Fix it just like we fixed vmf_insert_folio_pmd().
Add folio_mk_pud() to mimic what we do with folio_mk_pmd().
Fixes: dbe54153296d ("mm/huge_memory: add vmf_insert_folio_pud()")
Signed-off-by: David Hildenbrand
---
include/linux/mm.h | 19 -
mm/
olio cleanly.
Fixes: 6c88f72691f8 ("mm/huge_memory: add vmf_insert_folio_pmd()")
Signed-off-by: David Hildenbrand
---
mm/huge_memory.c | 58
1 file changed, 39 insertions(+), 19 deletions(-)
diff --git a/mm/huge_memory.c b/mm/hug
; v2:
* "mm/huge_memory: don't ignore queried cachemode in vmf_insert_pfn_pud()"
-> Added after stumbling over that
* Modified the other tests to reuse the existing function by passing a
new struct
* Renamed the patches to talk about "folios" instead of pages and adjusted
flags never appear to have been
used so also remove them. The last user of PFN_SPECIAL was removed
by 653d7825c149 ("dcssblk: mark DAX broken, remove FS_DAX_LIMITED
support").
Signed-off-by: Alistair Popple
Acked-by: David Hildenbrand
Reviewed-by: Christoph Hellwig
Reviewed-by: Jason
Hello Dmitry,
is there anything else I should adjust to get these fixes merged?
Thank you
David
On 03/05/2025 16:02, David Heidelberg wrote:
Kind ping on the series.
When the series is considered solid, it will improve Linux usability on
lower-quality touchscreen replacements (including
On 06.06.25 10:26, Oscar Salvador wrote:
On Fri, Jun 06, 2025 at 10:23:11AM +0200, David Hildenbrand wrote:
See my reply to Dan.
Yet another boolean, yuck. Passing the folio and the pfn, yuck.
(I have a strong opinion here ;) )
ok, I see it was already considered. No more questions then
ned-off-by: Mark Brown
Reported-by: Lorenzo Stoakes
Closes:
https://lkml.kernel.org/r/a76fc252-0fe3-4d4b-a9a1-4a2895c2680d@lucifer.local
Cc: David Hildenbrand
Cc: Shuah Khan
Acked-by: David Hildenbrand
--
Cheers,
David / dhildenb
On 06.06.25 10:26, Oscar Salvador wrote:
On Fri, Jun 06, 2025 at 10:23:11AM +0200, David Hildenbrand wrote:
See my reply to Dan.
Yet another boolean, yuck. Passing the folio and the pfn, yuck.
(I have a strong opinion here ;) )
ok, I see it was already considered. No more questions then
On 06.06.25 10:20, Oscar Salvador wrote:
On Tue, Jun 03, 2025 at 11:16:33PM +0200, David Hildenbrand wrote:
Marking PMDs that map "normal" refcounted folios as special is
against our rules documented for vm_normal_page().
Fortunately, there are not that many pmd_special() check t
Hi Dan,
On 06.06.25 01:47, Dan Williams wrote:
David Hildenbrand wrote:
Based on Linus' master.
While working on improving vm_normal_page() and friends, I stumbled
over this issue: refcounted "normal" pages must not be marked
using pmd_special() / pud_special().
Fortunatel
k on locks...)
```
I assume we don't have to dump more than pte values etc? So
pte_special() and friends are not relevant to get it right.
GUP-fast depend on CONFIG_HAVE_GUP_FAST, not sure if that would be a
concern for now.
--
Cheers,
David / dhildenb
On 05.06.25 19:19, Mark Brown wrote:
On Thu, Jun 05, 2025 at 06:55:53PM +0200, David Hildenbrand wrote:
On 05.06.25 18:42, Mark Brown wrote:
I can't remember off hand, sorry.
I assume in ... my review to patch #4?
Oh, yeah - it's there. I did look there but the "not a f
On 05.06.25 18:42, Mark Brown wrote:
On Thu, Jun 05, 2025 at 05:26:05PM +0100, Lorenzo Stoakes wrote:
On Thu, Jun 05, 2025 at 05:15:51PM +0100, Mark Brown wrote:
That's the thing with memfd being special and skipping on setup failure
that David mentioned; I've got a patch as p
IVATE file mapping ... with memfd
hugetlb (1048576 kB)
That's the thing with memfd being special and skipping on setup failure
that David mentioned; I've got a patch as part of the formatting series
I was going to send after the merge window.
@Andrew, why did this series get merged already
_page_table_range() case.
--
Cheers,
David / dhildenb
different name scheme to
highlight that this is something completely different.
walk_kernel_page_table_range()
etc.
--
Cheers,
David / dhildenb
On 04.06.25 10:07, Mike Rapoport wrote:
On Wed, Jun 04, 2025 at 09:39:30AM +0200, David Hildenbrand wrote:
On 03.06.25 21:22, Lorenzo Stoakes wrote:
The walk_page_range_novma() function is rather confusing - it supports two
modes, one used often, the other used only for debugging.
The first
On 03.06.25 23:16, David Hildenbrand wrote:
Marking PUDs that map "normal" refcounted folios as special is
against our rules documented for vm_normal_page().
Fortunately, there are not that many pud_special() check that can be
mislead and are right now rather harmless: e.g., none so
On 03.06.25 23:16, David Hildenbrand wrote:
Based on Linus' master.
While working on improving vm_normal_page() and friends, I stumbled
over this issue: refcounted "normal" pages must not be marked
using pmd_special() / pud_special().
Fortunately, so far there doesn'
uge_memory: add vmf_insert_folio_pud()")
Signed-off-by: David Hildenbrand
---
include/linux/mm.h | 15 +++
mm/huge_memory.c | 33 +++--
2 files changed, 38 insertions(+), 10 deletions(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 0ef2ba0c667af..047c82
s.
Fix it by just inlining the relevant code, making the whole
pmd_none() handling cleaner. We can now use folio_mk_pmd().
While at it, make sure that a pmd that is not-none is actually present
before comparing PFNs.
Fixes: 6c88f72691f8 ("mm/huge_memory: add vmf_insert_folio_pmd()")
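The "present before comparing PFNs" part of that, as a sketch of the idea (not
the exact hunk):

    if (!pmd_none(*vmf->pmd)) {
            /* Only a present pmd carries a pfn worth comparing. */
            if (pmd_present(*vmf->pmd) &&
                pmd_pfn(*vmf->pmd) == folio_pfn(folio))
                    goto out_unlock;        /* already mapped, nothing to do */
            /* non-present or a different pfn: back off rather than reuse it */
    }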
c: Baolin Wang
Cc: Nico Pache
Cc: Ryan Roberts
Cc: Dev Jain
Cc: Dan Williams
David Hildenbrand (2):
mm/huge_memory: don't mark refcounted pages special in
vmf_insert_folio_pmd()
mm/huge_memory: don't mark refcounted pages special in
vmf_insert_folio_pud()
include/linu
On 03.06.25 19:55, Mark Brown wrote:
On Tue, Jun 03, 2025 at 06:48:19PM +0100, Mark Brown wrote:
On Tue, Jun 03, 2025 at 06:57:38PM +0200, David Hildenbrand wrote:
I agree that printing something in case KSFT_PASS does not make sense
indeed.
But if something goes wrong (KSFT_FAIL/KSFT_SKIP
On 03.06.25 20:27, Mark Brown wrote:
On Tue, Jun 03, 2025 at 02:37:41PM +0200, David Hildenbrand wrote:
On 27.05.25 18:04, Mark Brown wrote:
+static char test_name[1024];
+
+static inline void log_test_start(const char *name, ...)
+{
+ va_list args;
+ va_start(args, name
On 03.06.25 17:22, Mark Brown wrote:
On Tue, Jun 03, 2025 at 05:06:17PM +0200, David Hildenbrand wrote:
On 03.06.25 16:58, Mark Brown wrote:
Like I said I suspect the test name is just unclear here...
I would hope we find some mechanical replacement.
E.g.,
ksft_test_result_pass("
On 03.06.25 16:58, Mark Brown wrote:
On Tue, Jun 03, 2025 at 04:15:42PM +0200, David Hildenbrand wrote:
On 03.06.25 15:21, Mark Brown wrote:
} else {
- ksft_test_result_fail("Leak from parent into child\n");
Same here and in other cases below (I probably di
On 03.06.25 15:21, Mark Brown wrote:
On Tue, Jun 03, 2025 at 02:51:45PM +0200, David Hildenbrand wrote:
On 27.05.25 18:04, Mark Brown wrote:
ret = mprotect(mem, size, PROT_READ);
- ret |= mprotect(mem, size, PROT_READ|PROT_WRITE);
if (ret
" information especially
on the failure path?
+ if (!memcmp(smem, old, size))
+ log_test_result(KSFT_PASS);
+ else
+ log_test_result(KSFT_FAIL);
free(old);
[...]
@@ -1531,9 +1613,15 @@ static void run_with_huge_zeropage(non_anon_test_fn fn, const char *desc)
smem = (char *)(((uintptr_t)mmap_smem + pmdsize) & ~(pmdsize - 1));
ret = madvise(mem, pmdsize, MADV_HUGEPAGE);
+ if (ret != 0) {
Just "if (ret)".
+ ksft_perror("madvise()");
+ log_test_result(KSFT_FAIL);
+ goto munmap;
+ }
ret |= madvise(smem, pmdsize, MADV_HUGEPAGE);
- if (ret) {
- ksft_test_result_fail("MADV_HUGEPAGE failed\n");
+ if (ret != 0) {
"if (ret)" as it was.
+ ksft_perror("madvise()");
+ log_test_result(KSFT_FAIL);
goto munmap;
--
Cheers,
David / dhildenb
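With the "if (ret)" style suggested above applied, the quoted hunks would come
out roughly like this (log_test_result()/KSFT_FAIL come from the patch under
review; the exact shape is only an illustration):

    ret = madvise(mem, pmdsize, MADV_HUGEPAGE);
    if (ret) {
            ksft_perror("madvise()");
            log_test_result(KSFT_FAIL);
            goto munmap;
    }
    ret = madvise(smem, pmdsize, MADV_HUGEPAGE);
    if (ret) {
            ksft_perror("madvise()");
            log_test_result(KSFT_FAIL);
            goto munmap;
    }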
ate the array in log_test_start() and free it in
log_test_result(). Then, we could assert more easily that each
log_test_start() is followed by exactly one log_test_result(), etc.
--
Cheers,
David / dhildenb