tra added a comment.

> The mangling should be aligned with its semantics. It's fine to use u6__bf16
> if a target doesn't want to support arithmetic operations.
We (speaking for CUDA/NVPTX) do want to support math on bfloat; we just haven't gotten to implementing it yet. NVPTX will likely start supporting arithmetic on bfloat at some point in the future. Does that mean we would need to change the mangling then? Or would I need to use a different type, with its own mangling, for bfloat-with-ops?

Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D136919/new/

https://reviews.llvm.org/D136919

_______________________________________________
cfe-commits mailing list
cfe-commits@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-commits
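For background on the mangling being discussed: `u6__bf16` follows the Itanium C++ ABI rule for vendor-extended types, `<type> ::= u <source-name>`, where the source name is a length-prefixed identifier. A minimal sketch of that rule (illustrative only; Clang's real mangler handles much more context):

```python
def mangle_vendor_type(name: str) -> str:
    """Mangle a vendor-extended type per the Itanium C++ ABI:
    'u' followed by a length-prefixed source name."""
    return f"u{len(name)}{name}"

# The __bf16 type mangles as 'u6__bf16' ('__bf16' is 6 characters long).
print(mangle_vendor_type("__bf16"))  # -> u6__bf16
```

The question in the comment is whether a target that starts supporting arithmetic on the type must then switch to a different (non-vendor-extended) mangling, since the vendor-extended spelling is tied to the "no arithmetic" semantics.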