do_huge_pmd_wp_page() splits the PMD when a COW of the entire huge page fails (e.g., can't allocate a new folio or the folio is pinned). It then returns VM_FAULT_FALLBACK so the fault can be retried at PTE granularity.
If the split fails, the PMD is still huge. Returning VM_FAULT_FALLBACK
would re-enter the PTE fault path, which expects a PTE page table at the
PMD entry, not a huge PMD. Return VM_FAULT_OOM on split failure instead,
which signals the fault handler to invoke the OOM killer or return
-ENOMEM to userspace.

Signed-off-by: Usama Arif <[email protected]>
---
 mm/huge_memory.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index d9fb5875fa59e..e82b8435a0b7f 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2153,7 +2153,13 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf)
 	folio_unlock(folio);
 	spin_unlock(vmf->ptl);
 fallback:
-	__split_huge_pmd(vma, vmf->pmd, vmf->address, false);
+	/*
+	 * Split failure means the PMD is still huge; returning
+	 * VM_FAULT_FALLBACK would re-enter the PTE path with a
+	 * huge PMD, causing incorrect behavior.
+	 */
+	if (__split_huge_pmd(vma, vmf->pmd, vmf->address, false))
+		return VM_FAULT_OOM;
 	return VM_FAULT_FALLBACK;
 }
-- 
2.47.3
