On 2/23/2024 3:04 PM, Petr Tesarik wrote:
> From: Petr Tesarik <petr.tesar...@huawei-partners.com>
> 
> If a segmentation fault is caused by accessing an address in the vmalloc
> area, check that the target page is present.
> 
> Currently, if the kernel hits a guard page in the vmalloc area, UML blindly
> assumes that the fault is caused by a stale mapping and will be fixed by
> flush_tlb_kernel_vm(). Unsurprisingly, if the fault is caused by accessing
> a guard page, no mapping is created, and when the faulting instruction is
> restarted, it will cause exactly the same fault again, effectively creating
> an infinite loop.
Ping. Any comment on this fix?

Petr T

> 
> Signed-off-by: Petr Tesarik <petr.tesar...@huawei-partners.com>
> ---
>  arch/um/kernel/trap.c | 4 ++++
>  1 file changed, 4 insertions(+)
> 
> diff --git a/arch/um/kernel/trap.c b/arch/um/kernel/trap.c
> index 6d8ae86ae978..d5b85f1bfe33 100644
> --- a/arch/um/kernel/trap.c
> +++ b/arch/um/kernel/trap.c
> @@ -206,11 +206,15 @@ unsigned long segv(struct faultinfo fi, unsigned long ip, int is_user,
>  	int err;
>  	int is_write = FAULT_WRITE(fi);
>  	unsigned long address = FAULT_ADDRESS(fi);
> +	pte_t *pte;
> 
>  	if (!is_user && regs)
>  		current->thread.segv_regs = container_of(regs, struct pt_regs, regs);
> 
>  	if (!is_user && (address >= start_vm) && (address < end_vm)) {
> +		pte = virt_to_pte(&init_mm, address);
> +		if (!pte_present(*pte))
> +			page_fault_oops(regs, address, ip);
>  		flush_tlb_kernel_vm();
>  		goto out;
>  	}
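
For context, a minimal sketch of the scenario the commit message describes (not part of the patch; vmalloc_guard_page_repro() is a hypothetical helper added only for illustration): from kernel context, touching the guard page that vmalloc() places behind the mapped area faults on a vmalloc-range address whose PTE is not present. Before this fix, segv() would keep calling flush_tlb_kernel_vm() and retrying that access; with the fix, the missing PTE is detected and page_fault_oops() is reached instead.

#include <linux/vmalloc.h>

/*
 * Hypothetical reproducer sketch, not part of the patch: trigger a
 * kernel-mode fault on a vmalloc guard page.  The faulting address lies
 * between start_vm and end_vm but has no PTE present, which is exactly
 * the case segv() previously retried forever on UML.
 */
static void vmalloc_guard_page_repro(void)
{
	/* One mapped page; vmalloc() adds an unmapped guard page after it. */
	char *p = vmalloc(PAGE_SIZE);

	if (!p)
		return;

	/* p + PAGE_SIZE lands on the guard page: no PTE is present there. */
	(void)*(volatile char *)(p + PAGE_SIZE);

	vfree(p);
}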