hfinkel added a comment.

In D71241#1782614 <https://reviews.llvm.org/D71241#1782614>, @ABataev wrote:

> In D71241#1782551 <https://reviews.llvm.org/D71241#1782551>, @hfinkel wrote:
>
> > In D71241#1779168 <https://reviews.llvm.org/D71241#1779168>, 
> > @JonChesterfield wrote:
> >
> > > Lowering in sema or in codegen seems a standard phase ordering choice. 
> > > There will be pros and cons to both.
> > >
> > > I think prior art leans towards sema. Variants are loosely equivalent to 
> > > tag dispatching or constexpr if, both handled before lowering the AST to 
> > > IR.
> >
> >
> > This is exactly right. This is just like any other kind of static overload 
> > resolution. It should be resolved in Sema and the CallExpr's DeclRefExpr 
> > should refer to the correct callee. This will make sure that tools, 
> > including static analysis tools, will correctly understand the semantics of 
> > the call (e.g., IPA will work correctly). Also, we prefer, in Clang, to 
> > generate errors and warning messages in Sema, not in CodeGen, and it is 
> > certainly plausible that errors and warnings could be generated during the 
> > selector-based resolution process.
> >
> > That having been said, Alexey is also correct that we retain the 
> > unevaluated form of the constexpr expressions, and there is an important 
> > analogy here. I believe that one way of restating Alexey's concerns about 
> > the AST representation is that, if we resolve the variant selection as we 
> > build the AST and then print the AST, the printed call would name the 
> > selected variant rather than the base function. This is certainly a 
> > legitimate concern, and there are several places in Clang where we take 
> > care to preserve the spelling used for constructs that are otherwise 
> > semantically equivalent (e.g., different spellings of keywords). I can 
> > certainly imagine a tool wanting to see the base function called, and 
> > we'll want that for the AST printer regardless. We might add this 
> > information to CallExpr or make a new subclass of CallExpr (e.g., 
> > OpenMPVariantCallExpr) that has that information (the latter seems likely 
> > better, so that we don't increase the size of CallExpr for an uncommon 
> > case).
> >
> > > Writing the dispatch lowering on IR should make it easier to call from a 
> > > Fortran front end. I'm in favour of minimising work done on the clang AST 
> > > on general principles.
> >
> > We need to make the best decision for Clang in Clang, regardless of how 
> > this might impact a future Fortran implementation. While the 
> > OpenMPIRBuilder will be a point of reuse between different OpenMP-enabled 
> > frontends, it need not be the only one. Moreover, Fortran will also want to 
> > do this resolution earlier, for the same reason that it should be done 
> > earlier in Clang (and, for Fortran, we'll end up with inlining and other 
> > IPA at the FIR level, so the variants will need to be resolved before they 
> > reach the OpenMPIRBuilder). Thus, no, doing this in CodeGen is unlikely to 
> > work for the Flang implementation.
> >
> > Also, "minimizing work done in the Clang AST on general principles", seems 
> > like an oversimplification of our general Clang design philosophy. Overload 
> > resolution in Clang is certainly a significant part of the implementation, 
> > but we wouldn't consider doing it in CodeGen. The AST should faithfully 
> > represent the semantic elements in the source code. Overload resolution, 
> > template instantiation, constexpr evaluation, etc. all are highly 
> > non-trivial, and all happen during Sema (even in cases where we might, 
> > technically speaking, be able to delay that logic until CodeGen). What we 
> > don't do in Sema are  lowering tasks (e.g., translating references into 
> > pointers or other things related to picking an underlying implementation 
> > strategy for particular constructs) and optimizations - where we do them at 
> > all - e.g., constant folding, some devirtualization, and so on are done in 
> > CodeGen. For the most part, of course, we defer optimizations to LLVM's 
> > optimizer.
> >
> > > Given we have two implementations, each at a different point in the 
> > > pipeline, it might be constructive for each of you to write down why you 
> > > chose that point. I suspect the rationale is hidden by the implementation 
> > > details.
>
>
> Actually, early resolution will break the tools, not help them. It will 
> definitely break clangd, for example. The user will try to navigate to the 
> originally called function `base` and will instead be redirected to the 
> function `hst`.


Can you please be more specific? Please explain why the user would consider 
this incorrect behavior. If the point of the tool is to allow the user to 
navigate to the function actually being called, then navigating to `base` seems 
incorrect when that's not the function being called. This is just like any 
other kind of overload resolution: the user likely wants to navigate to the 
function that is actually called.
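
For concreteness, here is a minimal sketch of the kind of code under 
discussion (not taken from the patch itself); the names `base` and `hst` 
follow the example above, and the context selector is only illustrative:

  // Host-specific variant; must be declared before the directive that names it.
  int hst(int x) { return x + 1; }

  // Base function: for host compilation, calls to `base` dispatch to `hst`.
  #pragma omp declare variant(hst) match(device={kind(host)})
  int base(int x) { return x; }

  int caller(int x) {
    // This is where the tooling question arises: should "go to definition"
    // on `base` show the base function or the selected variant `hst`?
    return base(x);
  }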

Now the user might want an OpenMP-aware tool that understands differences 
between host and accelerator behavior, how that affects which functions are 
called, etc. The user might want this for host/device overloads in CUDA too, 
but this is really an orthogonal concern.
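
For reference, the CUDA analogy looks roughly like this (a sketch using 
Clang's target-attribute overloading, not part of this patch):

  // The same spelling resolves to different definitions depending on
  // whether the caller is compiled for the host or the device.
  __host__ int f() { return 0; }    // selected for host callers
  __device__ int f() { return 1; }  // selected for device callers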


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D71241/new/

https://reviews.llvm.org/D71241


