gtbercea added a comment.

In https://reviews.llvm.org/D50845#1202973, @ABataev wrote:

> >> If I understand it correctly, the root cause of this exercise is that we 
> >> want to compile for GPU using plain C. CUDA avoids this issue by 
> >> separating device and host code via target attributes, and clang has a few 
> >> special cases to ignore inline assembly errors in the host code if we're 
> >> compiling for device. For OpenMP there's no such separation, not in the 
> >> system headers, at least.
> > 
> > Yes, that's one of the nice properties of CUDA (for the compiler). There 
> > used to be the same restriction for OpenMP where all functions used in 
> > `target` regions needed to be put in `declare target`. However that was 
> > relaxed in favor of implicitly marking all **called** functions in that TU 
> > to be `declare target`.
> >  So ideally I think Clang should determine which functions are really 
> > `declare target` (either explicit or implicit) and only run semantic 
> > analysis on them. If a function is then found to be "broken", it's perfectly 
> > reasonable to report an error back to the user.
>
> That is not possible for OpenMP because we support implicit declare target 
> functions. Clang cannot tell during Sema analysis whether a function is going 
> to be used on the device or not.


Sounds like a recipe for simply disabling Sema analysis for all implicit 
declare target functions.


Repository:
  rC Clang

https://reviews.llvm.org/D50845


