On 17.07.25 21:55, Lorenzo Stoakes wrote:
> On Thu, Jul 17, 2025 at 08:51:51PM +0100, Lorenzo Stoakes wrote:
>>> @@ -721,37 +772,21 @@ struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr,
>>>  		print_bad_page_map(vma, addr, pmd_val(pmd), NULL);
>>>  		return NULL;
>>>  	}
>>> -
>>> -	if (unlikely(vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP))) {
>>> -		if (vma->vm_flags & VM_MIXEDMAP) {
>>> -			if (!pfn_valid(pfn))
>>> -				return NULL;
>>> -			goto out;
>>> -		} else {
>>> -			unsigned long off;
>>> -			off = (addr - vma->vm_start) >> PAGE_SHIFT;
>>> -			if (pfn == vma->vm_pgoff + off)
>>> -				return NULL;
>>> -			if (!is_cow_mapping(vma->vm_flags))
>>> -				return NULL;
>>> -		}
>>> -	}
>>> -
>>> -	if (is_huge_zero_pfn(pfn))
>>> -		return NULL;
>>> -	if (unlikely(pfn > highest_memmap_pfn)) {
>>> -		print_bad_page_map(vma, addr, pmd_val(pmd), NULL);
>>> -		return NULL;
>>> -	}
>>> -
>>> -	/*
>>> -	 * NOTE! We still have PageReserved() pages in the page tables.
>>> -	 * eg. VDSO mappings can cause them to exist.
>>> -	 */
>>> -out:
>>> -	return pfn_to_page(pfn);
>>> +	return vm_normal_page_pfn(vma, addr, pfn, pmd_val(pmd));
>>
>> Hmm this seems broken, because you're now making these special on arches with
>> pte_special() right? But then you're invoking the not-special function?
>>
>> Also for non-pte_special() arches you're kind of implying they _maybe_ could be
>> special.
>
> OK sorry the diff caught me out here, you explicitly handle the pmd_special()
> case here, duplicatively (yuck).
>
> Maybe you fix this up in a later patch :)

I had that, but the conditions depend on the level, meaning: unnecessary
checks at pte/pmd/pud level.

I had a variant where I would pass "bool special" into
vm_normal_page_pfn(), but I didn't like it.
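
Just to illustrate the idea (hypothetical sketch only, not the code I
actually had, and with the raw entry type simplified):

static struct page *vm_normal_page_pfn(struct vm_area_struct *vma,
		unsigned long addr, unsigned long pfn, unsigned long long entry,
		bool special)
{
	if (unlikely(special)) {
		/* special-mapping handling elided in this sketch */
		return NULL;
	}

	/* ... shared VM_PFNMAP/VM_MIXEDMAP handling would go here ... */

	if (unlikely(pfn > highest_memmap_pfn)) {
		print_bad_page_map(vma, addr, entry, NULL);
		return NULL;
	}
	return pfn_to_page(pfn);
}

The callers would then pass pte_special(pte) / pmd_special(pmd), so the
special handling lives in the shared helper but the branch looks exactly the
same at every level.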

To optimize these out, I would have to provide a "level" argument, and I have
not yet convinced myself that that is a good idea at this point.
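
The "level" variant would look something like this (again only a rough
sketch, the enum is made up, and it only pays off if the helper is inlined so
that level-specific checks get compiled out):

enum pgtable_level {
	PGTABLE_LEVEL_PTE,
	PGTABLE_LEVEL_PMD,
	PGTABLE_LEVEL_PUD,
};

static inline struct page *vm_normal_page_pfn(struct vm_area_struct *vma,
		unsigned long addr, unsigned long pfn, unsigned long long entry,
		enum pgtable_level level)
{
	/*
	 * "level" is a compile-time constant in the (inlined) callers, so
	 * checks that only apply to one level can be optimized out, e.g.
	 * the huge zero folio check is not needed at PTE level.
	 */
	if (level != PGTABLE_LEVEL_PTE && is_huge_zero_pfn(pfn))
		return NULL;

	if (unlikely(pfn > highest_memmap_pfn)) {
		print_bad_page_map(vma, addr, entry, NULL);
		return NULL;
	}

	/* ... remaining shared checks as in the current helper ... */
	return pfn_to_page(pfn);
}
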
--
Cheers,
David / dhildenb