Hi Stefano,
On 02/03/17 19:12, Stefano Stabellini wrote:
On Thu, 2 Mar 2017, Julien Grall wrote:
On 02/03/17 08:53, Edgar E. Iglesias wrote:
On Thu, Mar 02, 2017 at 09:38:37AM +0100, Edgar E. Iglesias wrote:
On Wed, Mar 01, 2017 at 05:05:21PM -0800, Stefano Stabellini wrote:
Julien, from looking at the two diffs, this is simpler and nicer, but if
you look at xen/include/asm-arm/page.h, my patch made
clean_dcache_va_range consistent with invalidate_dcache_va_range. For
consistency, I would prefer to deal with the two functions the same way.
Although it is not a spec requirement, I also think that it is a good
idea to issue cache flushes from cacheline-aligned addresses, as
invalidate_dcache_va_range and Linux do, to make it more obvious what
is going on.
invalidate_dcache_va_range is split because the cache instruction
differs for the start and the end when they are unaligned: for those
partial cache lines you want to use clean & invalidate rather than a
plain invalidate.
If you look at the implementation of other cache helpers in Linux (see
dcache_by_line_op in arch/arm64/include/asm/assembler.h), they will only
align start & end.
Also, invalidate_dcache_va_range uses a modulo, which I would rather
avoid. The modulo in this case will not be optimized away by the
compiler because cacheline_bytes is not a compile-time constant.
So I still prefer to keep this function really simple.
BTW, you would also need to fix clean_and_invalidate_dcache_va_range.
--
Julien Grall
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel