Hi Stefano,
On 03/03/2017 01:15 AM, Stefano Stabellini wrote:
> clean_dcache_va_range and clean_and_invalidate_dcache_va_range don't
> calculate the range correctly when "end" is not cacheline aligned. As a
> result, the last cacheline is not flushed. Fix the issue by aligning the
> start address to the cacheline size.
>
> In addition, make the code simpler and faster in
> invalidate_dcache_va_range, by removing the modulo operation and using
> bitmasks instead.
>
> Signed-off-by: Stefano Stabellini <sstabell...@kernel.org>
> Reported-by: edgar.igles...@xilinx.com
> CC: edgar.igles...@xilinx.com
> ---
>  xen/include/asm-arm/page.h | 24 +++++++++++-------------
>  1 file changed, 11 insertions(+), 13 deletions(-)
>
> diff --git a/xen/include/asm-arm/page.h b/xen/include/asm-arm/page.h
> index 86de0b6..4b46e88 100644
> --- a/xen/include/asm-arm/page.h
> +++ b/xen/include/asm-arm/page.h
> @@ -291,24 +291,20 @@ extern size_t cacheline_bytes;
>  static inline int invalidate_dcache_va_range(const void *p, unsigned long size)
>  {
> -    size_t off;
>      const void *end = p + size;
> +    size_t cacheline_mask = cacheline_bytes - 1;
>
>      dsb(sy);           /* So the CPU issues all writes to the range */
>
> -    off = (unsigned long)p % cacheline_bytes;
> -    if ( off )
> +    if ( (uintptr_t)p & cacheline_mask )
>      {
> -        p -= off;
> +        p = (void *)((uintptr_t)p & ~cacheline_mask);
>          asm volatile (__clean_and_invalidate_dcache_one(0) : : "r" (p));
>          p += cacheline_bytes;
> -        size -= cacheline_bytes - off;
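For anyone following along, here is a minimal standalone sketch of the
failure mode and of the align-down trick (my own illustration, not the
Xen code: LINE, walk() and the example addresses are invented for the
demo; the real code uses the runtime cacheline_bytes and issues DC
cache-maintenance instructions rather than printf):

#include <stdio.h>
#include <stdint.h>

#define LINE 64                 /* assumed cacheline size, demo only */

/* Visit every "cacheline" overlapping [p, p + size). For a power-of-two
 * LINE, p % LINE == (p & (LINE - 1)), which is why the patch can replace
 * the modulo with a mask. */
static void walk(uintptr_t p, size_t size, int align_start)
{
    uintptr_t end = p + size;

    if ( align_start )
        p &= ~(uintptr_t)(LINE - 1);    /* align down, as the patch does */

    for ( ; p < end; p += LINE )
        printf("  touch line 0x%lx\n",
               (unsigned long)(p & ~(uintptr_t)(LINE - 1)));
}

int main(void)
{
    /* [0x50, 0x90) overlaps the lines at 0x40 and 0x80. */
    printf("unaligned start (old behaviour):\n");
    walk(0x50, 0x40, 0);        /* touches 0x40 only: line 0x80 is missed */

    printf("start aligned down (patched behaviour):\n");
    walk(0x50, 0x40, 1);        /* touches both 0x40 and 0x80 */

    return 0;
}

Running it makes the bug visible: the unaligned walk stops one line
short, exactly the cacheline holding the tail of the range.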
It would have been nice to explain in the commit message that you also
removed the size adjustments, because the variable is not used later on.
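(To spell out why that adjustment is dead: assuming the tail of the
function is untouched by this hunk, the final walk is driven by the
pointers rather than by size:

    for ( ; p < end; p += cacheline_bytes )
        asm volatile (__invalidate_dcache_one(0) : : "r" (p));

so nothing reads size once end has been computed.)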
With that:
Reviewed-by: Julien Grall <julien.gr...@arm.com>
Cheers,
--
Julien Grall