On 08/12/2016 12:32 PM, Vijay Kilari wrote:
> On Sat, Aug 6, 2016 at 3:47 PM, Richard Henderson <r...@twiddle.net> wrote:
>> On 08/02/2016 03:50 PM, vijay.kil...@gmail.com wrote:
>>> +#define VEC_PREFETCH(base, index) \
>>> +    asm volatile ("prfm pldl1strm, [%x[a]]\n" : : [a]"r"(&base[(index)]))
>>
>> Is this not __builtin_prefetch(base + index)?
>> I.e. you can define this generically for all targets.
>
> __builtin_prefetch() is available only in gcc 5.3 for arm64.
So? You can't really defend the position that you care about aarch64 code
quality if you're using gcc 4.x. Essentially all of the performance work has
been done for later versions.
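
For illustration, a minimal sketch of that generic definition (not the
patch's actual code): __builtin_prefetch(addr, rw, locality) takes two
optional hint arguments, where rw=0 requests a read prefetch and
locality=0 requests streaming, non-temporal data, which corresponds to
"prfm pldl1strm" on aarch64.

    /* Sketch: generic prefetch macro; the compiler emits the
     * target's prefetch instruction (PRFM on aarch64), so no
     * per-target asm is needed. */
    #define VEC_PREFETCH(base, index) \
        __builtin_prefetch(&(base)[(index)], 0, 0)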
I'll note that you're also prefetching too much, off the end of the block,
and that you're probably not prefetching far enough ahead. You'd need to
break off the last iteration(s) of the loop.
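
A sketch of that loop structure, under assumed names (buf, len, scan,
ITER_BYTES, PREFETCH_AHEAD are illustrative, not the patch's): prefetch a
fixed distance ahead only while the target address stays inside the
buffer, and peel the final iterations so nothing is prefetched past the
end.

    typedef unsigned char VECTYPE __attribute__((vector_size(16)));

    #define ITER_BYTES     (8 * sizeof(VECTYPE))   /* bytes per iteration */
    #define PREFETCH_AHEAD (2 * ITER_BYTES)        /* hypothetical distance */

    /* Hypothetical loop shape, not the patch's code. */
    static void scan(const unsigned char *buf, size_t len)
    {
        size_t i = 0;

        /* Main loop: the prefetch target is guaranteed in-bounds. */
        for (; i + PREFETCH_AHEAD <= len; i += ITER_BYTES) {
            __builtin_prefetch(buf + i + PREFETCH_AHEAD, 0, 0);
            /* ... test 8 vectors at buf + i ... */
        }

        /* Peeled tail: the last iteration(s) run without prefetch,
         * so nothing is ever touched past the end of the block. */
        for (; i < len; i += ITER_BYTES) {
            /* ... test 8 vectors at buf + i ... */
        }
    }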
>> I'll note that you're also prefetching too close. The loop operates on
>> 8*vecsize units. In the case of aarch64, 128 byte units. Both i+32 and
>> [...]
>
> The 128 unit is specific to thunder. I will move this to a
> thunder-specific function.
No, you misunderstand.

While it's true that thunderx is unique among aarch64 implementations in
having a 128-byte cacheline, the "128" I mention above has nothing to do
with that.

The loop operates on BUFFER_FIND_NONZERO_OFFSET_UNROLL_FACTOR bytes per
iteration, which is defined above as 8 * sizeof(vector), and that happens
to be 128.
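
To make the arithmetic concrete (a sketch with assumed spellings; the
16-byte vector type mirrors the aarch64 NEON case):

    typedef unsigned char VECTYPE __attribute__((vector_size(16)));

    #define BUFFER_FIND_NONZERO_OFFSET_UNROLL_FACTOR 8

    /* 8 vectors * 16 bytes = 128 bytes per loop iteration.  That this
     * equals the thunderx cacheline size is coincidence; the value
     * comes from the unroll factor, not a cacheline assumption. */
    _Static_assert(BUFFER_FIND_NONZERO_OFFSET_UNROLL_FACTOR
                   * sizeof(VECTYPE) == 128,
                   "one iteration covers 128 bytes");

The point being that a prefetch landing inside the current 128-byte
iteration arrives too late to help; it needs to target at least the next
iteration, on any aarch64 core.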
r~