paulkirth added a comment.

In D132186#3751985 <https://reviews.llvm.org/D132186#3751985>, @tejohnson wrote:

> I have seen a few cases where noinline was used for performance, in addition 
> to other cases like avoiding too much stack growth.

Well, I stand corrected. I'm curious what those cases look like, but if the 
attribute is being used that way, then I agree that a diagnostic would be 
helpful.
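
For concreteness, here's a hypothetical sketch of the two kinds of use you 
mention (both functions and names are invented for illustration):

  // (a) Performance use: keep a cold error path out of its callers so the
  // hot path stays small in the instruction cache.
  __attribute__((noinline)) void ReportError(const char *Msg);

  // (b) Stack-growth use: quarantine a large local buffer in its own frame
  // rather than letting it be inlined into every caller's frame.
  __attribute__((noinline)) void FormatIntoScratch() {
    char Scratch[16 * 1024];
    // ... fill and flush Scratch ...
  }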

> I've also seen it used without any comment whatsoever. So I think it would be 
> good to make it easier to identify cases where we are blocked from inlining 
> at hot callsites because of the attribute.

I wonder if there is some analysis or heuristic we could use to distinguish 
those cases? Nothing really comes to mind, but it would be nice if we had one.

> It is a little different than misexpect though in that the expect hints are 
> pretty much only for performance, so it is more useful to be able to issue a 
> strong warning that can be turned into an error if they are wrong. And also 
> there was no way to report the misuse of expects earlier, unlike inlining 
> where we already had the remarks plumbing.
>
> I haven't looked through the patch in detail, but is it possible to use your 
> changes to emit a better missed opt remark from the inliner for these cases 
> (I assume we will already emit a -Rpass-missed=inline for the noinline 
> attribute case, just not highlighting that it is hot and would have been 
> inlined for performance reasons otherwise)? I suppose one main reason for 
> adding a warning is that the missed inline remarks can be really noisy and 
> not really useful to the user vs a compiler optimization engineer doing 
> inliner/compiler tuning, and therefore a warning would make it easier to turn 
> on more widely as user feedback that can/should be addressed in user code.

Yeah, I was thinking we could emit a new remark type for this to differentiate 
it, but it seems simpler and more user friendly to emit a clear diagnostic 
directly.
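
As a point of reference, a minimal sketch of what the remark route could look 
like in the inliner, assuming the existing ProfileSummaryInfo/BFI and ORE 
plumbing is in scope (the remark name and message are illustrative, not what 
this patch emits):

  // Sketch only: CB, Callee, PSI, BFI, and ORE are the usual inliner state,
  // and DEBUG_TYPE is the pass's debug name.
  if (Callee->hasFnAttribute(Attribute::NoInline) &&
      PSI->isHotCallSite(CB, BFI)) {
    ORE.emit([&]() {
      return OptimizationRemarkMissed(DEBUG_TYPE, "NoInlineHotCallsite", &CB)
             << ore::NV("Callee", Callee) << " not inlined into "
             << ore::NV("Caller", CB.getCaller())
             << ": noinline attribute blocks a hot callsite";
    });
  }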

I think we're starting to accumulate a few of these diagnostics that flag 
potential performance deficiencies based on profile information. We originally 
prototyped a libtooling-based tool for misexpect that ran over the whole build 
via the compile commands DB and reported everything it found. I wonder if 
reviving that would be useful when you want to hunt for performance issues 
like this one, misexpect, and similar cases. Making ORE diagnostic output 
queryable through a tool may also be a good option, but I'm not too familiar 
with what already exists in that area.
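
In case it helps picture the libtooling idea, a minimal skeleton of such a 
tool (this is just the stock libtooling boilerplate; a real tool would swap 
SyntaxOnlyAction for an action that runs the profile-based checks):

  #include "clang/Frontend/FrontendActions.h"
  #include "clang/Tooling/CommonOptionsParser.h"
  #include "clang/Tooling/Tooling.h"
  #include "llvm/Support/CommandLine.h"

  using namespace clang::tooling;

  static llvm::cl::OptionCategory Cat("perf-diag options");

  int main(int argc, const char **argv) {
    // CommonOptionsParser finds compile_commands.json and replays each TU's
    // flags, so the tool can walk the whole build.
    auto Parser = CommonOptionsParser::create(argc, argv, Cat);
    if (!Parser) {
      llvm::errs() << Parser.takeError();
      return 1;
    }
    ClangTool Tool(Parser->getCompilations(), Parser->getSourcePathList());
    // SyntaxOnlyAction is a placeholder; a real tool would plug in an action
    // that reruns the checks and aggregates the reports build-wide.
    return Tool.run(newFrontendActionFactory<clang::SyntaxOnlyAction>().get());
  }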


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D132186/new/

https://reviews.llvm.org/D132186
