================
@@ -337,9 +337,12 @@ void AMDGPUTargetInfo::getTargetDefines(const LangOptions &Opts,
   if (hasFastFMA())
     Builder.defineMacro("FP_FAST_FMA");
-  Builder.defineMacro("__AMDGCN_WAVEFRONT_SIZE__", Twine(WavefrontSize));
-  // ToDo: deprecate this macro for naming consistency.
-  Builder.defineMacro("__AMDGCN_WAVEFRONT_SIZE", Twine(WavefrontSize));
+  Builder.defineMacro("__AMDGCN_WAVEFRONT_SIZE__", Twine(WavefrontSize),
+                      "compile-time-constant access to the wavefront size will "
+                      "be removed in a future release");
+  Builder.defineMacro("__AMDGCN_WAVEFRONT_SIZE", Twine(WavefrontSize),
+                      "compile-time-constant access to the wavefront size will "
+                      "be removed in a future release");
----------------
jhuber6 wrote:
The problem is that it's not that simple, since the user can easily change the wavefront size by compiling with `-mwavefrontsize64`. I ran into these kinds of issues myself while working on the [RPC interface](https://github.com/llvm/llvm-project/blob/main/libc/shared/rpc.h#L339) in libc. There I pretty much just take the wavefront size as an argument to the indexing / allocation functions. Alternatively, you can use a template and do runtime dispatch, assuming that optimizations will DCE the code for the other size (roughly as sketched below).

https://github.com/llvm/llvm-project/pull/112849
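A minimal sketch of that "template + runtime dispatch" pattern, for illustration only: the names `process_impl`, `process`, and the modulo indexing are hypothetical and are not taken from `libc/shared/rpc.h`; the only real interface assumed is Clang's `__builtin_amdgcn_wavefrontsize()` builtin, so this only compiles as AMDGPU device code.

```cpp
#include <cstdint>

// Core logic is templated on the wavefront size, so each instantiation
// sees it as a compile-time constant for indexing / allocation math.
template <uint32_t WavefrontSize>
void process_impl(uint32_t lane_id) {
  uint32_t slot = lane_id % WavefrontSize; // placeholder for real indexing
  (void)slot;
}

// Runtime dispatch: query the wavefront size once and branch. On any given
// target only one branch is reachable, so the optimizer is expected to DCE
// the other instantiation.
void process(uint32_t lane_id) {
  if (__builtin_amdgcn_wavefrontsize() == 64)
    process_impl<64>(lane_id);
  else
    process_impl<32>(lane_id);
}
```

The same shape also covers the first approach: callers that already know the size just pass it (or instantiate the template) directly, while generic entry points do the one-time dispatch.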