The current computation of the cma bitmap aligned mask is incorrect: it
can cause an unexpected alignment when using cma_alloc() if the requested
align order is larger than cma->order_per_bit.

Take kvm for example (PAGE_SHIFT = 12): kvm_cma->order_per_bit is set to 6.
When kvm_alloc_rma() tries to allocate kvm_rma_pages, it passes 15 as the
expected align order. With the current computation, however, we get 0 as
the cma bitmap aligned mask instead of 511.
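
For illustration, here is a minimal userspace sketch (not part of the
patch; the variable names are made up) that plugs in the kvm numbers
above and prints both masks:

	#include <stdio.h>

	int main(void)
	{
		int align_order = 15;	/* align order passed to cma_alloc() */
		int order_per_bit = 6;	/* kvm_cma->order_per_bit */

		/* old (buggy): 15 >> 6 == 0, so the mask is (1 << 0) - 1 == 0 */
		unsigned long mask_old =
			(1UL << (align_order >> order_per_bit)) - 1;

		/* fixed: 15 - 6 == 9, so the mask is (1 << 9) - 1 == 511 */
		unsigned long mask_new =
			(1UL << (align_order - order_per_bit)) - 1;

		printf("old mask = %lu, new mask = %lu\n", mask_old, mask_new);
		return 0;
	}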

This patch fixes the computation of the cma bitmap aligned mask.

Signed-off-by: Weijie Yang <weijie.y...@samsung.com>
---
 mm/cma.c |    5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/mm/cma.c b/mm/cma.c
index c17751c..f6207ef 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -57,7 +57,10 @@ unsigned long cma_get_size(struct cma *cma)
 
 static unsigned long cma_bitmap_aligned_mask(struct cma *cma, int align_order)
 {
-       return (1UL << (align_order >> cma->order_per_bit)) - 1;
+       if (align_order <= cma->order_per_bit)
+               return 0;
+       else
+               return (1UL << (align_order - cma->order_per_bit)) - 1;
 }
 
 static unsigned long cma_bitmap_maxno(struct cma *cma)
-- 
1.7.10.4

