The CMA allocator will skip single-page allocations to conserve CMA resources. This requires its callers to fall back to a normal page allocation whenever CMA declines the request.

This patch therefore moves the alloc_pages() call into the fallback path: it now runs whenever dma_alloc_from_contiguous() returns no page, not only when there is no CMA area.
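For illustration, the resulting caller-side pattern looks roughly like this (a minimal sketch mirroring the hunk below; dev, count, order and gfp stand in for whatever values the caller already has):

	struct page *page = NULL;

	/* Try CMA first when a CMA area exists; it may decline, e.g. a single page */
	if (dev_get_cma_area(dev))
		page = dma_alloc_from_contiguous(dev, count, order, false);

	/* Fall back to the normal page allocator whenever CMA returned no page */
	if (!page)
		page = alloc_pages(gfp, order);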
Signed-off-by: Nicolin Chen <nicoleots...@gmail.com>
---
Changelog v1->v2:
 * PATCH-2: Initialized page pointer to NULL

 kernel/dma/remap.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/kernel/dma/remap.c b/kernel/dma/remap.c
index 2b750f13bc8f..c2076c6d6c17 100644
--- a/kernel/dma/remap.c
+++ b/kernel/dma/remap.c
@@ -109,14 +109,14 @@ int __init dma_atomic_pool_init(gfp_t gfp, pgprot_t prot)
 {
 	unsigned int pool_size_order = get_order(atomic_pool_size);
 	unsigned long nr_pages = atomic_pool_size >> PAGE_SHIFT;
-	struct page *page;
+	struct page *page = NULL;
 	void *addr;
 	int ret;
 
 	if (dev_get_cma_area(NULL))
 		page = dma_alloc_from_contiguous(NULL, nr_pages,
 						 pool_size_order, false);
-	else
+	if (!page)
 		page = alloc_pages(gfp, pool_size_order);
 	if (!page)
 		goto out;
-- 
2.17.1