https://gcc.gnu.org/bugzilla/show_bug.cgi?id=114676

--- Comment #14 from Jakub Jelinek <jakub at gcc dot gnu.org> ---
(In reply to Andreas Krebbel from comment #13)
> We will go and fix PyTorch instead. Although it is not clearly documented,
> the way PyTorch uses the builtin right now is probably not what was
> intended. It is pretty clear that the element-type pointer needs to alias
> vectors of the same element type, but nothing suggests it should alias
> everything.
> 
> I'm just wondering how to improve the diagnostics in our backend to catch
> this. The example below is similar to what PyTorch does today. Casting mem
> to (float*) prevents our builtin code from complaining about the type
> mismatch and thereby opens the door to the much harder-to-debug TBAA
> problem.

We need a TBAA analyzer among sanitizers (but writing it is really hard).
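To illustrate what such an analyzer would have to catch (a hypothetical sketch,
not taken from PyTorch or the PR): once the store goes through a float-typed
lvalue, GCC is entitled to assume it does not alias int accesses, so loads and
stores through mem may be reordered or forwarded around it.

  #include <vecintrin.h>

  int bar (int *mem)
  {
    mem[0] = 42;
    /* Store through a float lvalue; under TBAA it is assumed not to alias
       the int store above, so the two stores may be reordered.  */
    vec_xst ((vector float){ 1.0f, 2.0f, 3.0f, 4.0f }, 0, (float *) mem);
    /* GCC may legitimately forward the value 42 here despite the vector
       store just above.  */
    return mem[0];
  }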

> #include <vecintrin.h>
> 
> void __attribute__((noinline)) foo (int *mem)
> {
>   vec_xst ((vector float){ 1.0f, 2.0f, 3.0f, 4.0f }, 0, (float*)mem);

So use
  *(vector float __attribute__((__may_alias__)) *) mem
    = (vector float){ 1.0f, 2.0f, 3.0f, 4.0f };
instead?  Sure, GCC extension, not an intrinsic in that case...
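For reference, a minimal sketch of how foo above could be rewritten with that
extension (the typedef name vf_may_alias is only illustrative, and this
assumes the s390 vecintrin.h vector keyword):

  #include <vecintrin.h>

  /* Vector type whose accesses are exempt from type-based aliasing rules.  */
  typedef vector float __attribute__ ((__may_alias__)) vf_may_alias;

  void __attribute__ ((noinline)) foo (int *mem)
  {
    /* The may_alias store is allowed to alias the int objects behind mem.  */
    *(vf_may_alias *) mem = (vector float){ 1.0f, 2.0f, 3.0f, 4.0f };
  }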
