On Thu, Jul 17, 2025 at 01:52:06PM +0200, David Hildenbrand wrote:
> Just like we do for vmf_insert_page_mkwrite() -> ... ->
> insert_page_into_pte_locked() with the shared zeropage, support the
> huge zero folio in vmf_insert_folio_pmd().
> 
> When (un)mapping the huge zero folio in page tables, we neither
> adjust the refcount nor the mapcount, just like for the shared zeropage.
> 
> For now, the huge zero folio is not marked as special yet, although
> vm_normal_page_pmd() really wants to treat it as special. We'll change
> that next.
> 
> Reviewed-by: Oscar Salvador <osalva...@suse.de>
> Signed-off-by: David Hildenbrand <da...@redhat.com>
LGTM, so:

Reviewed-by: Lorenzo Stoakes <lorenzo.stoa...@oracle.com>

> ---
>  mm/huge_memory.c | 8 +++++---
>  1 file changed, 5 insertions(+), 3 deletions(-)
> 
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 849feacaf8064..db08c37b87077 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -1429,9 +1429,11 @@ static vm_fault_t insert_pmd(struct vm_area_struct *vma, unsigned long addr,
>  	if (fop.is_folio) {
>  		entry = folio_mk_pmd(fop.folio, vma->vm_page_prot);
>  
> -		folio_get(fop.folio);
> -		folio_add_file_rmap_pmd(fop.folio, &fop.folio->page, vma);
> -		add_mm_counter(mm, mm_counter_file(fop.folio), HPAGE_PMD_NR);
> +		if (!is_huge_zero_folio(fop.folio)) {
> +			folio_get(fop.folio);
> +			folio_add_file_rmap_pmd(fop.folio, &fop.folio->page, vma);
> +			add_mm_counter(mm, mm_counter_file(fop.folio), HPAGE_PMD_NR);
> +		}
>  	} else {
>  		entry = pmd_mkhuge(pfn_pmd(fop.pfn, prot));
>  		entry = pmd_mkspecial(entry);
> -- 
> 2.50.1
> 
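Side note for anyone reading along: the caller-side pattern this enables
looks roughly like the sketch below. It is not taken from this series;
my_fault_handler_pmd() is hypothetical, and I'm assuming the current
mm_get_huge_zero_folio() / vmf_insert_folio_pmd() interfaces.

/*
 * Illustrative only. A PMD fault handler that maps the huge zero folio
 * read-only via vmf_insert_folio_pmd(); with this patch, insert_pmd()
 * skips the refcount/mapcount/MM-counter updates for that folio, just
 * like the PTE path does for the shared zeropage.
 */
static vm_fault_t my_fault_handler_pmd(struct vm_fault *vmf)
{
	struct folio *zero_folio;

	/* Get the per-mm huge zero folio (may fail under memory pressure). */
	zero_folio = mm_get_huge_zero_folio(vmf->vma->vm_mm);
	if (!zero_folio)
		return VM_FAULT_FALLBACK;

	/* Map it read-only at PMD granularity. */
	return vmf_insert_folio_pmd(vmf, zero_folio, /* write */ false);
}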