Artem-B wrote:

Unless HIP explicitly defines a wavefront size property for the host (I do not 
think so), it would appear that it's a property of a GPU, and as such it should 
not be treated as a constant on the host, because the host needs to deal with 
multiple GPU variants, each with a different idea of the wavefront size.

I'm not surprised that there are users who may be (mis)using the macro now and 
who are relying on "happens to work"; this is something that probably should 
not have existed. My vote is for deprecating it.

A possibly simpler way to deal with it would be to not define it in clang for 
the host compilation, and instead add it to the pre-included headers, defining 
it behind an escape hatch which will need user intervention (and thus users 
will be aware that it needs to be fixed).

E.g.
```
// Host compilation only: the wavefront size is a per-GPU property, so it is
// deliberately left undefined unless the user explicitly opts in.
#if !defined(__HIP_DEVICE_COMPILE__)
#if defined(I_REALLY_WANT_TO_USE_CONSTANT_WAVEFRONT_SIZE_ON_THE_HOST)
#define __AMDGCN_WAVEFRONT_SIZE__ 64
#endif
#endif
```
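
With something like that in place, a user who knowingly targets only wave64 
hardware could opt in explicitly (e.g. with 
`-DI_REALLY_WANT_TO_USE_CONSTANT_WAVEFRONT_SIZE_ON_THE_HOST`), while everyone 
else would get an error at each host-side use of the macro, pointing at code 
that needs fixing.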

> Users have to put #if __HIP_DEVICE_COMPILE__ anywhere they use 
> __AMDGCN_WAVEFRONT_SIZE__. Previous experience tells us that this causes bad 
> user experience and clutters source code.

I think it's unavoidable. Users who just use the macro now either rely on 
"happens to work" or simply haven't noticed the problem they may already have.

Whatever depends on the wavefront size on the host end will have to be done 
conditionally, either at compile time via `__HIP_DEVICE_COMPILE__` or by 
checking the properties of the specific GPU the code is dealing with.
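
For the runtime route, here is a minimal sketch of what that could look like. 
`hipGetDeviceProperties` and `hipDeviceProp_t::warpSize` are existing HIP API; 
the `wavefrontSizeOf` helper is just an illustration:
```
#include <hip/hip_runtime.h>

// Query the wavefront size of a specific device at runtime instead of
// baking a host-side constant into the binary.
static int wavefrontSizeOf(int deviceId) {
  hipDeviceProp_t prop;
  if (hipGetDeviceProperties(&prop, deviceId) != hipSuccess)
    return -1; // caller decides how to handle the error
  return prop.warpSize; // 32 on wave32 GPUs, 64 on wave64 GPUs
}
```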


https://github.com/llvm/llvm-project/pull/109663