https://bugs.llvm.org/show_bug.cgi?id=35249
Bug ID: 35249
Summary: [CUDA][NVPTX] Incorrect compilation of __activemask()
Product: new-bugs
Version: trunk
Hardware: PC
OS: All
Status: NEW
Severity: enhancement
Priority: P
Component: new bugs
Assignee: [email protected]
Reporter: [email protected]
CC: [email protected]
Hi,
Compiling and running the following CUDA program with Clang/LLVM versus NVCC
produces different results on a Volta GPU with CUDA 9.0. I believe there may
be a bug in LLVM.
#include <cstdio>

__global__ void kernel() {
  printf("Activemask %x\n", __activemask());
  if (threadIdx.x % 2 == 0)
    printf("Activemask %x\n", __activemask());
}
__activemask() returns a bitmask with one bit set for each lane in the warp that
executes the instruction at that point. If I launch the kernel with one thread
block of 32 threads, I expect the first printf to show all 32 lanes active and
the second to show only the even lanes as active.
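For completeness, a minimal host-side launch for that configuration might look
like the sketch below (the main function and the cudaDeviceSynchronize call are
illustrative assumptions; the synchronize is only there so the device-side
printf output is flushed before the program exits):

int main() {
  kernel<<<1, 32>>>();      // one thread block of 32 threads, i.e. a single full warp
  cudaDeviceSynchronize();  // wait for the kernel and flush device-side printf output
  return 0;
}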
I get the following output when compiled with NVCC:
Activemask ffffffff
<<32 times>>
Activemask 55555555
<<16 times>>
When compiled with Clang/LLVM I get the following output:
Activemask ffffffff
<<48 times>>
__activemask() gets resolved to the LLVM intrinsic llvm.nvvm.vote.ballot(i1
true). The incorrect IR first appears after the 'Early CSE' pass, which folds
the two identical calls into a single value. I can prevent the optimization by
marking the intrinsic as having side effects (adding IntrHasSideEffects and
removing IntrNoMem) in include/llvm/IR/IntrinsicsNVVM.td. Is this the
appropriate property? It probably needs to be done for all of the vote
intrinsics, at least for Volta.
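To illustrate the symptom: once Early CSE has merged the two ballot calls, the
kernel effectively behaves as if it had been written like the hypothetical
version below (an illustrative sketch of the resulting behavior, not actual
compiler output):

__global__ void kernel_after_cse() {   // hypothetical name, for illustration only
  unsigned mask = __activemask();      // evaluated once, while all 32 lanes are active
  printf("Activemask %x\n", mask);
  if (threadIdx.x % 2 == 0)
    printf("Activemask %x\n", mask);   // reuses the stale full mask, so it also prints ffffffff
}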
Thanks.