Each vmalloc area has one guard page at its end, so
vm->size = PAGE_ALIGN(offset + requested size) + the guard page size.
The guard page must therefore be excluded from vm->size when checking
whether a physical range falls inside an existing static mapping.
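
For illustration only, a minimal user-space C sketch of the adjusted
bounds check (the helper name, fixed-width types and the 4 KiB
PAGE_SIZE are assumptions for the example, not kernel code):

	#include <stdbool.h>
	#include <stdint.h>

	#define PAGE_SIZE 4096UL	/* assumed 4 KiB pages */

	/*
	 * Returns true if [paddr, paddr + size) lies entirely within the
	 * mapped part of a static vm area whose vm->size includes one
	 * trailing guard page.
	 */
	static bool range_in_static_vm(uint64_t vm_phys_addr, uint64_t vm_size,
				       uint64_t paddr, uint64_t size)
	{
		/* Last mapped byte: skip the guard page at the end. */
		uint64_t phys_addr_end = vm_phys_addr + vm_size - PAGE_SIZE - 1;
		uint64_t paddr_end = paddr + size - 1;

		return vm_phys_addr <= paddr && paddr_end <= phys_addr_end;
	}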

Signed-off-by: Richard Lee <superlibj8...@gmail.com>
Signed-off-by: Xiubo Li <li.xi...@freescale.com>
Cc: Nicolas Pitre <n...@linaro.org>
Cc: Santosh Shilimkar <santosh.shilim...@ti.com>
Cc: Joonsoo Kim <iamjoonsoo....@lge.com>
---
 arch/arm/mm/ioremap.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/arch/arm/mm/ioremap.c b/arch/arm/mm/ioremap.c
index be69333..758e8f7 100644
--- a/arch/arm/mm/ioremap.c
+++ b/arch/arm/mm/ioremap.c
@@ -49,14 +49,18 @@ static struct static_vm *find_static_vm_paddr(phys_addr_t paddr,
        struct vm_struct *vm;
 
        list_for_each_entry(svm, &static_vmlist, list) {
+               phys_addr_t paddr_end, phys_addr_end;
+
                vm = &svm->vm;
                if (!(vm->flags & VM_ARM_STATIC_MAPPING))
                        continue;
                if ((vm->flags & VM_ARM_MTYPE_MASK) != VM_ARM_MTYPE(mtype))
                        continue;
 
-               if (vm->phys_addr > paddr ||
-                       paddr + size - 1 > vm->phys_addr + vm->size - 1)
+               /* The PAGE_SIZE here is vmalloc area's guard page */
+               phys_addr_end = vm->phys_addr + vm->size - PAGE_SIZE - 1;
+               paddr_end = paddr + size - 1;
+               if (vm->phys_addr > paddr || paddr_end > phys_addr_end)
                        continue;
 
                return svm;
-- 
1.8.4
