On Tue, May 12, 2026 at 05:07:20PM -0400, Michael S. Tsirkin wrote:
> When splitting a large buddy page, propagate the PG_zeroed flag
> to each sub-page before freeing it. __free_pages_prepare clears
> all flags (including PG_zeroed), so the flag must be re-set on
> each fragment after the split. This ensures that the buddy merge
> logic can see PG_zeroed on pages that were part of a larger
> zeroed block.
> 
> Signed-off-by: Michael S. Tsirkin <[email protected]>
> Assisted-by: Claude:claude-opus-4-6
> Assisted-by: cursor-agent:GPT-5.4-xhigh

Reviewed-by: Gregory Price <[email protected]>

> ---
>  mm/page_alloc.c | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 468e8bde7d34..ce43f5a3dbaa 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1523,6 +1523,7 @@ static void split_large_buddy(struct zone *zone, struct page *page,
>  			      unsigned long pfn, int order, fpi_t fpi)
>  {
>  	unsigned long end = pfn + (1 << order);
> +	bool zeroed = PageZeroed(page);
>  
>  	VM_WARN_ON_ONCE(!IS_ALIGNED(pfn, 1 << order));
>  	/* Caller removed page from freelist, buddy info cleared! */
> @@ -1534,6 +1535,8 @@ static void split_large_buddy(struct zone *zone, struct page *page,
>  	do {
>  		int mt = get_pfnblock_migratetype(page, pfn);
>  
> +		if (zeroed)
> +			__SetPageZeroed(page);
>  		__free_one_page(page, pfn, zone, order, mt, fpi);
>  		pfn += 1 << order;
>  		if (pfn == end)
> -- 
> MST

