saiislam added a comment.

In D105191#2910961 <https://reviews.llvm.org/D105191#2910961>, @ye-luo wrote:

> Must I use llvm-ar/ranlib, or is the system ar/ranlib OK?
>
> 1. An existing use case breaks
>
> Use https://github.com/ye-luo/openmp-target/blob/master/tests/math/modf.cpp
> $ clang++ -fopenmp -fopenmp-targets=nvptx64 -Xopenmp-target=nvptx64 -march=sm_80 modf.cpp  # still OK
> Two steps:
> $ clang++ -fopenmp -fopenmp-targets=nvptx64 -Xopenmp-target=nvptx64 -march=sm_80 modf.cpp -c
> $ clang++ -fopenmp -fopenmp-targets=nvptx64 -Xopenmp-target=nvptx64 -march=sm_80 modf.o
> clang-14: warning: Unknown CUDA version. version.txt: 11.0.228. Assuming the latest supported version 10.1 [-Wunknown-cuda-version]
> nvlink fatal   : Could not open input file '/tmp/modf-0bf89b.cubin'
> clang-14: error: nvlink command failed with exit code 1 (use -v to see invocation)
>
> 2. Could you make my test case work?
>
> https://github.com/ye-luo/openmp-target/tree/master/tests/link_static_fat_bin
> Neither compile-amd.sh nor compile.sh works for me.
> Linking was successful, but there is no device code in the executable and the run fails.
>
> Does your test actually test a run?



1. System ar/ranlib is OK; llvm-ar/ranlib is not required. (See the sketch 
after this list.)
2. The modf.cpp test case is breaking due to an incompatibility between bundle 
entry ID formats that was introduced by my last patch, D93525 
<https://reviews.llvm.org/D93525>. I think D106809 
<https://reviews.llvm.org/D106809> should fix it, but I haven't tested it on 
this case yet.
3. link_static_fat_bin is working fine now. The missing piece was that nvlink 
cannot take an archive of cubin files as input, so a wrapper around nvlink is 
needed (D108291 <https://reviews.llvm.org/D108291>); a conceptual sketch 
follows after the archive example below. Please review.
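
To make (1) concrete, here is a minimal sketch of building and linking against 
a static fat archive with the system tools. The file names (libfunc.cpp, 
main.cpp) are only illustrative, not taken from the test case:

  $ clang++ -fopenmp -fopenmp-targets=nvptx64 -Xopenmp-target=nvptx64 -march=sm_80 -c libfunc.cpp -o libfunc.o
  $ ar rcs libfunc.a libfunc.o   # system ar; the 's' modifier already writes the symbol index
  $ ranlib libfunc.a             # optional here, only refreshes the symbol index
  $ clang++ -fopenmp -fopenmp-targets=nvptx64 -Xopenmp-target=nvptx64 -march=sm_80 main.cpp -L. -lfunc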

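As for (3), the wrapper's job can be pictured roughly as below. This is only an 
illustration of the concept, not the actual D108291 implementation, and the 
file names are hypothetical: since nvlink cannot read a .a directly, the cubin 
members have to be extracted first and then passed to nvlink individually.

  $ mkdir unpack && cd unpack
  $ ar x ../device-archive.a       # hypothetical archive of cubin files, which nvlink cannot consume as-is
  $ nvlink -arch sm_80 -o linked-device.cubin ../main.cubin *.cubin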

Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D105191/new/

https://reviews.llvm.org/D105191

