Hi Chen, thanks for looking at this.

On Sat, 2020-12-26 at 11:35 +0800, Chen Zhou wrote:
> If the memory reserved for crash dump kernel falled in ZONE_DMA32,
> the devices in crash dump kernel need to use ZONE_DMA will alloc fail.
>
> Fix this by reserving low memory in ZONE_DMA if CONFIG_ZONE_DMA is
> enabled, otherwise, reserving in ZONE_DMA32.
>
> Fixes: bff3b04460a8 ("arm64: mm: reserve CMA and crashkernel in ZONE_DMA32")
I'm not so sure this counts as a fix; if someone backports it, it'll
probably break things, as it depends on the series that dynamically
sizes the DMA zones.

> Signed-off-by: Chen Zhou <chenzho...@huawei.com>
> ---

Why not do the same with CMA? You'll probably have to move the
dma_contiguous_reserve() call into bootmem_init() so as to make sure
that arm64_dma_phys_limit is populated.

Regards,
Nicolas

>  arch/arm64/mm/init.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
> index 7b9809e39927..5074e945f1a6 100644
> --- a/arch/arm64/mm/init.c
> +++ b/arch/arm64/mm/init.c
> @@ -85,7 +85,8 @@ static void __init reserve_crashkernel(void)
>
>  	if (crash_base == 0) {
>  		/* Current arm64 boot protocol requires 2MB alignment */
> -		crash_base = memblock_find_in_range(0, arm64_dma32_phys_limit,
> +		crash_base = memblock_find_in_range(0,
> +				arm64_dma_phys_limit ? : arm64_dma32_phys_limit,
> 				crash_size, SZ_2M);
>  		if (crash_base == 0) {
>  			pr_warn("cannot allocate crashkernel (size:0x%llx)\n",