https://bugs.llvm.org/show_bug.cgi?id=39172

            Bug ID: 39172
           Summary: [NVPTX] Wrong results or hang on GPU when using
                    lastprivate() on SPMD construct with full runtime
           Product: OpenMP
           Version: unspecified
          Hardware: PC
                OS: Linux
            Status: NEW
          Severity: enhancement
          Priority: P
         Component: Clang Compiler Support
          Assignee: unassignedclangb...@nondot.org
          Reporter: hah...@hahnjo.de
                CC: llvm-bugs@lists.llvm.org

The following program uses an SPMD construct with a lastprivate() clause, and
the full runtime has to be initialized because of schedule(dynamic):
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[]) {
  int last;

  #pragma omp target teams distribute parallel for map(from: last) \
      lastprivate(last) schedule(dynamic)
  for (int i = 0; i < 10000; i++) {
    last = i;
  }

  printf("last = %d\n", last);

  return EXIT_SUCCESS;
}

Compiled with current Clang trunk, the application delivers wrong results
(last = 0 instead of the expected 9999) with -O0 and -O1, and hangs with -O2
and -O3. I think both symptoms have the same root cause: The generated code
calls __kmpc_data_sharing_push_stack() to get memory for storing "int last;"
in order to implement lastprivate(). This works as long as the runtime is
uninitialized because there is a special case in libomptarget-nvptx which
returns the *same memory location for all threads* (see
https://reviews.llvm.org/D51875).
However, the original contract of __kmpc_data_sharing_push_stack() is to
return *unique memory for each thread*; once the full runtime is initialized,
the copy-back at the end of the construct presumably reads a per-thread copy
that was never written by the thread that executed the last iteration, which
explains the wrong result. For the higher optimization levels I'd guess that
LLVM exploits undefined behaviour somewhere, which makes the application hang.
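
To make this concrete, here is a minimal host-side sketch of how I understand
the generated code conceptually handles lastprivate() in SPMD mode: all
threads of the team are handed one shared slot (standing in for the memory
returned by __kmpc_data_sharing_push_stack()), the thread that executes the
last iteration stores into it, and the master copies it back to the mapped
variable. The structure and names below are illustrative, not the actual
codegen:

#include <stdio.h>

int main(void) {
  int last = 0;        /* the mapped variable                              */
  int shared_slot = 0; /* stands in for the pushed data sharing stack slot */

  #pragma omp parallel shared(shared_slot, last)
  {
    #pragma omp for schedule(dynamic)
    for (int i = 0; i < 10000; i++) {
      if (i == 10000 - 1)  /* thread that executes the last iteration...   */
        shared_slot = i;   /* ...stores its value into the shared slot     */
    }
    /* implicit barrier at the end of the worksharing loop */

    /* Copy-back: correct only because shared_slot is the SAME location for
       all threads. If every thread had its own copy (the original contract
       of __kmpc_data_sharing_push_stack()), the master would copy back a
       value that was never written, i.e. last = 0. */
    #pragma omp master
    last = shared_slot;
  }

  printf("last = %d\n", last); /* prints 9999 with a truly shared slot */
  return 0;
}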

The only solution I can think of is to introduce new entry points in
libomptarget-nvptx with the required contract: return the same memory
location to all calling threads in the same thread block.
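
To make that concrete, the contract could look roughly like the following;
the names and signatures are hypothetical, just to illustrate the idea, not
existing libomptarget-nvptx entry points:

#include <stddef.h>

/* HYPOTHETICAL declarations, for illustration only. Unlike
   __kmpc_data_sharing_push_stack(), every thread of the calling thread block
   would be guaranteed to receive the SAME address for the same push, whether
   or not the full runtime is initialized. */
void *__kmpc_data_sharing_push_block_shared_stack(size_t DataSize);

/* Releases the buffer from the matching push; to be called by the threads of
   the same thread block. */
void __kmpc_data_sharing_pop_block_shared_stack(void *Ptr);

Opinions?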
