In trying to add drm_hwcomposer support to HiKey, I've needed to utilize the ION CMA heap, and I've noticed that allocations fail on newer kernels.
It seems that back with 204f672255c2 ("staging: android: ion: Use CMA APIs
directly"), the ion_cma_heap code was modified to use the CMA APIs directly,
but kept passing buffer lengths rather than the number of pages. This results
in allocation errors, as we don't have enough pages in CMA to satisfy the
exaggerated requests.

This patch converts the ion_cma_heap CMA API usage to properly request pages.

It also fixes a minor issue on the allocation error path, where cma_release()
is called with buffer->size, which hasn't been set yet at that point.

Cc: Laura Abbott <labb...@redhat.com>
Cc: Sumit Semwal <sumit.sem...@linaro.org>
Cc: Benjamin Gaignard <benjamin.gaign...@linaro.org>
Cc: Archit Taneja <arch...@codeaurora.org>
Cc: Greg KH <gre...@linuxfoundation.org>
Cc: Daniel Vetter <dan...@ffwll.ch>
Cc: Dmitry Shmidt <dimitr...@google.com>
Cc: Todd Kjos <tk...@google.com>
Cc: Amit Pundir <amit.pun...@linaro.org>
Fixes: 204f672255c2 ("staging: android: ion: Use CMA APIs directly")
Acked-by: Laura Abbott <labb...@redhat.com>
Signed-off-by: John Stultz <john.stu...@linaro.org>
---
v2: Fix build errors when CONFIG_CMA_ALIGNMENT isn't defined

 drivers/staging/android/ion/ion_cma_heap.c | 21 +++++++++++++++++----
 1 file changed, 17 insertions(+), 4 deletions(-)

diff --git a/drivers/staging/android/ion/ion_cma_heap.c b/drivers/staging/android/ion/ion_cma_heap.c
index dd5545d..ff405c7 100644
--- a/drivers/staging/android/ion/ion_cma_heap.c
+++ b/drivers/staging/android/ion/ion_cma_heap.c
@@ -31,6 +31,12 @@ struct ion_cma_heap {
 
 #define to_cma_heap(x) container_of(x, struct ion_cma_heap, heap)
 
+#ifdef CONFIG_CMA_ALIGNMENT
+  #define CMA_ALIGNMENT CONFIG_CMA_ALIGNMENT
+#else
+  #define CMA_ALIGNMENT 8
+#endif
+
 /* ION CMA heap operations functions */
 static int ion_cma_allocate(struct ion_heap *heap, struct ion_buffer *buffer,
 			    unsigned long len,
@@ -39,9 +45,15 @@ static int ion_cma_allocate(struct ion_heap *heap, struct ion_buffer *buffer,
 	struct ion_cma_heap *cma_heap = to_cma_heap(heap);
 	struct sg_table *table;
 	struct page *pages;
+	unsigned long size = PAGE_ALIGN(len);
+	unsigned long nr_pages = size >> PAGE_SHIFT;
+	unsigned long align = get_order(size);
 	int ret;
 
-	pages = cma_alloc(cma_heap->cma, len, 0, GFP_KERNEL);
+	if (align > CMA_ALIGNMENT)
+		align = CMA_ALIGNMENT;
+
+	pages = cma_alloc(cma_heap->cma, nr_pages, align, GFP_KERNEL);
 	if (!pages)
 		return -ENOMEM;
 
@@ -53,7 +65,7 @@ static int ion_cma_allocate(struct ion_heap *heap, struct ion_buffer *buffer,
 	if (ret)
 		goto free_mem;
 
-	sg_set_page(table->sgl, pages, len, 0);
+	sg_set_page(table->sgl, pages, size, 0);
 
 	buffer->priv_virt = pages;
 	buffer->sg_table = table;
@@ -62,7 +74,7 @@ static int ion_cma_allocate(struct ion_heap *heap, struct ion_buffer *buffer,
 free_mem:
 	kfree(table);
 err:
-	cma_release(cma_heap->cma, pages, buffer->size);
+	cma_release(cma_heap->cma, pages, nr_pages);
 	return -ENOMEM;
 }
 
@@ -70,9 +82,10 @@ static void ion_cma_free(struct ion_buffer *buffer)
 {
 	struct ion_cma_heap *cma_heap = to_cma_heap(buffer->heap);
 	struct page *pages = buffer->priv_virt;
+	unsigned long nr_pages = PAGE_ALIGN(buffer->size) >> PAGE_SHIFT;
 
 	/* release memory */
-	cma_release(cma_heap->cma, pages, buffer->size);
+	cma_release(cma_heap->cma, pages, nr_pages);
 	/* release sg table */
 	sg_free_table(buffer->sg_table);
 	kfree(buffer->sg_table);
-- 
2.7.4
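
For anyone who wants to sanity-check the length-to-pages/alignment math the
patch now does before calling cma_alloc(), here is a minimal userspace sketch
of just that conversion. It is illustrative only: the PAGE_SHIFT value, the
CMA_ALIGNMENT fallback, and the local get_order() re-implementation are
assumptions so it builds outside the kernel, not part of the patch.

/* Illustrative sketch only, not part of the patch.  Assumes 4K pages and
 * re-implements PAGE_ALIGN()/get_order() so it builds in userspace. */
#include <stdio.h>

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1UL << PAGE_SHIFT)
#define PAGE_ALIGN(x)	(((x) + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1))
#define CMA_ALIGNMENT	8	/* fallback used when CONFIG_CMA_ALIGNMENT is unset */

/* smallest order such that (1 << order) pages cover size */
static unsigned long get_order(unsigned long size)
{
	unsigned long order = 0;

	size = (size - 1) >> PAGE_SHIFT;
	while (size) {
		order++;
		size >>= 1;
	}
	return order;
}

int main(void)
{
	unsigned long len = 3 * 1024 * 1024 + 123;	/* arbitrary buffer length in bytes */
	unsigned long size = PAGE_ALIGN(len);
	unsigned long nr_pages = size >> PAGE_SHIFT;	/* what cma_alloc()/cma_release() expect */
	unsigned long align = get_order(size);

	if (align > CMA_ALIGNMENT)
		align = CMA_ALIGNMENT;

	printf("len=%lu size=%lu nr_pages=%lu align=%lu\n",
	       len, size, nr_pages, align);
	return 0;
}

Running it for the example length prints nr_pages=769 rather than the
~3 million "pages" the old code would effectively have asked CMA for when it
passed the byte length straight through.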