tra added a comment.

Would I be too far off the mark to summarize the situation this way?

- the current default for unattributed functions is unsound for GPU back-ends, 
but is fine for most other back-ends.
- it's easy to unintentionally end up using/mis-optimizing unattributed 
functions on back-ends that do care about convergence.
- making functions convergent by default everywhere is correct, but overly 
pessimistic for back-ends that do not need it. Such back-ends would need to add 
explicit attributes in order to maintain the current level of optimization. In 
a way it's the flip side of the current situation -- the default is not good 
for some back-ends and we have to add attributes everywhere to make things 
work. Flipping the default improves correctness-by-default, but places a 
logistical burden on front- and back-ends that may not otherwise care about 
convergence.

Perhaps we can deal with that by providing a way to specify a per-module 
default for the assumed convergence of functions, and then checking in the 
back-ends (only those that do care about convergence) that the default 
convergence is explicitly set (and, perhaps, set to something specific?) via 
function/module attributes or the CLI.
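To make the idea concrete, a module-level default plus per-function overrides could look something like the sketch below. This is purely illustrative: the flag name `"assumed-convergence"` and its string encoding are invented here, and no such module flag exists in LLVM today.

```llvm
; Hypothetical module flag recording the convergence assumption this IR was
; produced (and optimized) under. The flag name "assumed-convergence" is
; made up for illustration; behavior type 1 means a mismatch on linking is
; an error.
!llvm.module.flags = !{!0}
!0 = !{i32 1, !"assumed-convergence", !"nonconvergent"}

; Functions that do not match the module-wide default carry an explicit
; attribute, as they already can today:
define void @barrier_wrapper() convergent {
  call void @llvm.nvvm.barrier0()
  ret void
}

declare void @llvm.nvvm.barrier0()
```

A convergence-aware back-end such as NVPTX could then refuse modules where the flag is missing, or where it is set to a value the target considers unsound.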

That way, unintentionally feeding vanilla IR without attributes to the NVPTX 
back-end would produce an error complaining that the default convergence is 
not set, so we don't know whether the IR is still sound. If necessary, the 
user can set the appropriate convergence wholesale via the CLI or a module 
attribute. The burden on platforms that don't care about convergence would be 
limited to setting the default and adding attributes to the entities that do 
not match the default assumption (there may be none).

One pitfall of this approach is that we may run optimizations based on the 
wrong assumptions and end up miscompiling things before we know what those 
assumptions are (e.g. opt vs. llc). Perhaps we can document the default 
assumption to be nonconvergent and always set a module attribute recording 
the assumption used. That assumption would percolate through the optimization 
pipeline, and if it's the wrong one, the passes for back-ends with a 
different preference would be able to reject it or warn about it.

Would something like that be a reasonable compromise?



https://reviews.llvm.org/D69498



_______________________________________________
cfe-commits mailing list
cfe-commits@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-commits
