https://gcc.gnu.org/bugzilla/show_bug.cgi?id=94274
--- Comment #3 from z.zhanghaijian at huawei dot com <z.zhanghaijian at huawei dot com> ---
(In reply to Marc Glisse from comment #1)
> Detecting common beginnings / endings in branches is something gcc does very
> seldom. Even at -Os, for if(cond)f(b);else f(c); we need to wait until
> rtl-optimizations to get a single call to f. (of course the reverse
> transformation of duplicating a statement that was after the branches into
> them, if it simplifies, is nice as well, and they can conflict)
> I don't know if handling one such very specific case (binary operations with
> a common argument) separately is a good idea when we don't even handle unary
> operations.

I tried this fold on SPECint2017 and found some performance gains on
500.perlbench_r. Comparing the generated assembly shows where the improvement
comes from. For example, S_invlist_max is inlined into many functions, such as
S__append_range_to_invlist, S_ssc_anything, Perl__invlist_invert, ...

invlist_inline.h:

#define FROM_INTERNAL_SIZE(x) ((x) / sizeof(UV))

S_invlist_max (inlined by S__append_range_to_invlist, S_ssc_anything,
Perl__invlist_invert, ...):

    return SvLEN(invlist) == 0  /* This happens under _new_invlist_C_array */
           ? FROM_INTERNAL_SIZE(SvCUR(invlist)) - 1
           : FROM_INTERNAL_SIZE(SvLEN(invlist)) - 1;

Tree dump from the phiopt pass:

  <bb 3> [local count: 536870911]:
  _46 = pretmp_112 >> 3;
  iftmp.1123_47 = _46 + 18446744073709551615;
  goto <bb 5>; [100.00%]

  <bb 4> [local count: 536870911]:
  _48 = _44 >> 3;
  iftmp.1123_49 = _48 + 18446744073709551615;

  <bb 5> [local count: 1073741823]:
  # iftmp.1123_50 = PHI <iftmp.1123_47(3), iftmp.1123_49(4)>

which can be replaced with:

  <bb 3> [local count: 536870912]:

  <bb 4> [local count: 1073741823]:
  # _48 = PHI <_44(2), pretmp_112(3)>
  _49 = _48 >> 3;
  iftmp.1123_50 = _49 + 18446744073709551615;

Assembly:

  lsr  x5, x6, #3
  lsr  x3, x3, #3
  sub  x20, x5, #0x1
  sub  x3, x3, #0x1
  csel x20, x3, x20, ne

replaced with:

  csel x3, x3, x4, ne
  lsr  x3, x3, #3
  sub  x20, x3, #0x1

This eliminates two instructions.
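For reference, a reduced standalone sketch of the pattern (my own hypothetical
testcase, not extracted from perlbench; the function name f and the parameter
names a, b, cond are made up for illustration) looks like this:

/* Both arms of the conditional apply the same binary operations (>> 3 and
   - 1) to different operands, so the common operations could be sunk below
   a PHI of the operands.  */

unsigned long
f (unsigned long a, unsigned long b, int cond)
{
  if (cond)
    return (a >> 3) - 1;   /* _46 = a >> 3; iftmp = _46 - 1 */
  else
    return (b >> 3) - 1;   /* _48 = b >> 3; iftmp = _48 - 1 */
}

/* After the proposed fold this would effectively become:

     unsigned long tmp = cond ? a : b;   // PHI of the operands
     return (tmp >> 3) - 1;              // shared >> and - done once

   which on AArch64 should need only a csel plus one lsr/sub pair instead of
   two of each.  */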