vsk added a comment.

In https://reviews.llvm.org/D29369#672166, @dtzWill wrote:

> After some thought, can we discuss why this is a good idea?


The goal is to lower ubsan's compile-time and instrumentation overhead at -O0, 
since this reduces the friction of debugging a ubsan-instrumented project.
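To make the target concrete, here is a minimal example of the kind of check 
that can be dropped (my own illustration, not code from the patch). Both 
operands of the '+' are promoted to 'int' before the add, so the addition can 
never overflow and a signed-overflow check on it is dead weight:

  // Both 'short' operands are promoted to 'int' before the add. The sum
  // lies between SHRT_MIN + SHRT_MIN (-65536) and SHRT_MAX + SHRT_MAX
  // (65534), both of which fit comfortably in 'int', so a signed-overflow
  // check on this add can never fire.
  int add_shorts(short a, short b) {
    return a + b;
  }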

> This increases the cyclomatic complexity of code that already is difficult to 
> reason about, and seems like it's both brittle and out-of-place in 
> CGExprScalar.

Are there cleanups or ways to reorganize the code that would make this sort of 
change less complex / brittle? I'm open to taking that on.

> It really seems it would be better to let InstCombine or some other 
> analysis/transform deal with proving checks redundant instead of attempting 
> to do so on-the-fly during CodeGen.

-O1/-O2 do get rid of a lot of checks, but they also degrade the debugging 
experience, so it's not really a solution for this use case.

> Can you better motivate why this is worth these costs, or explain your use 
> case a bit more?

I have some numbers from LNT. I did a pre-patch and post-patch run at -O0 + 
-fsanitize=signed-integer-overflow,unsigned-integer-overflow. There were 4,672 
object files produced in each run. This patch brings the average object size 
down from 36,472.0 to 36,378.3 bytes (a 0.26% improvement), and the average 
number of overflow checks per object down from 66.8 to 66.2 (a 0.81% 
improvement).

I don't have reliable compile-time numbers, but never emitting the IR in the 
first place really seems like a straightforward improvement over emitting it, 
analyzing it, and then removing it.
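As a sketch of what the CodeGen-side decision amounts to (a simplified, 
standalone helper assuming the pre-promotion operand widths are available; 
the real logic lives in CGExprScalar.cpp and is shaped quite differently):

  #include <algorithm>

  // Minimal sketch: a signed add of a W1-bit and a W2-bit value always
  // fits in max(W1, W2) + 1 bits, so if the promoted result type is at
  // least that wide, the overflow check is provably redundant and we can
  // skip emitting it entirely.
  static bool canElideSignedAddCheck(unsigned LHSWidth, unsigned RHSWidth,
                                     unsigned ResultWidth) {
    return std::max(LHSWidth, RHSWidth) + 1 <= ResultWidth;
  }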

So, those are the benefits. IMO shaving close to 1% off the instrumentation 
overhead is worth the complexity cost here.


https://reviews.llvm.org/D29369
