This patch is the second part of a fix for PR tree-optimization/71343, implementing Richard Biener's suggestion of using tree-ssa's value numbering instead of match.pd.  The change is that when assigning a value number to the expression X << C, we instead look up (or insert) the value number of the multiplication X * (1<<C).  This elegantly handles the fact that we (intentionally) don't canonicalize these as equivalent in GIMPLE, and the optimization/equivalence in PR 71343 now happens by (tree-ssa SCCVN) magic.
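To illustrate the effect (this example is mine, not part of the patch, and the function name is invented), value numbering now sees the shift and the corresponding multiplication as the same value, so FRE folds comparisons between them:

/* A minimal sketch: with this patch, (x << 3) and x * 8 receive the
   same value number, so FRE folds the comparison to 1 at -O2.  */
unsigned int
shift_equals_mult (unsigned int x)
{
  return (x << 3) == x * 8;
}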
This patch has been tested on x86_64-pc-linux-gnu with make bootstrap
and make -k check, both with and without --target_board=unix{-m32},
with no new failures.  Ok for mainline?

2022-09-13  Roger Sayle  <ro...@nextmovesoftware.com>

gcc/ChangeLog
	PR tree-optimization/71343
	* tree-ssa-sccvn.cc (visit_nary_op) <case LSHIFT_EXPR>: Make
	the value number of the expression X << C the same as the
	value number for the multiplication X * (1<<C).

gcc/testsuite/ChangeLog
	PR tree-optimization/71343
	* gcc.dg/pr71343-2.c: New test case.


Thanks in advance,
Roger
--

> -----Original Message-----
> From: Richard Biener <richard.guent...@gmail.com>
> Sent: 08 August 2022 12:42
> To: Roger Sayle <ro...@nextmovesoftware.com>
> Cc: GCC Patches <gcc-patches@gcc.gnu.org>
> Subject: Re: [PATCH] PR tree-optimization/71343: Optimize (X<<C)&(Y<<C) as
> (X&Y)<<C.
>
> On Mon, Aug 8, 2022 at 10:07 AM Roger Sayle
> <ro...@nextmovesoftware.com> wrote:
> >
> > This patch resolves PR tree-optimization/71343, a missed-optimization
> > enhancement request where GCC fails to see that (a<<2)+(b<<2) == a*4+b*4.
> > This requires two related (sets of) optimizations to be added to match.pd.
> >
> > The first is that (X<<C) op (Y<<C) can be simplified to (X op Y) << C,
> > for many binary operators, including AND, IOR, XOR, and (if overflow
> > isn't an issue) PLUS and MINUS.  Likewise, the right shifts (both
> > logical and arithmetic) and bit-wise logical operators can be
> > simplified in a similar fashion.  These all reduce the number of
> > GIMPLE binary operations from 3 to 2, by combining/eliminating a
> > shift operation.
> >
> > The second optimization reflects that the middle-end doesn't impose a
> > canonical form on multiplications by powers of two, vs. left shifts,
> > instead leaving these operations as specified by the programmer unless
> > there's a good reason to change them.  Hence, GIMPLE code may contain
> > the expressions "X * 8" and "X << 3" even though these represent the
> > same value/computation.  The tweak to match.pd is that comparison
> > operations whose operands are equivalent non-canonical expressions can
> > be taught their equivalence.  Hence "(X * 8) == (X << 3)" will always
> > evaluate to true, and "(X<<2) > 4*X" will always evaluate to false.
> >
> > This patch has been tested on x86_64-pc-linux-gnu with make bootstrap
> > and make -k check, both with and without --target_board=unix{-m32},
> > with no new failures.  Ok for mainline?
>
> +/* Shifts by constants distribute over several binary operations,
> +   hence (X << C) + (Y << C) can be simplified to (X + Y) << C.  */
> +(for op (plus minus)
> + (simplify
> +  (op (lshift:s @0 INTEGER_CST@1) (lshift:s @2 INTEGER_CST@1))
> +  (if (INTEGRAL_TYPE_P (type)
> +       && TYPE_OVERFLOW_WRAPS (type)
> +       && !TYPE_SATURATING (type)
> +       && tree_fits_shwi_p (@1)
> +       && tree_to_shwi (@1) > 0
> +       && tree_to_shwi (@1) < TYPE_PRECISION (type))
>
> I do wonder why we need to restrict this to shifts by constants?
> Any out-of-bound shift was already there, no?
>
> +/* Some tree expressions are intentionally non-canonical.
> +   We handle the comparison of the equivalent forms here.  */
> +(for cmp (eq le ge)
> + (simplify
> +  (cmp:c (lshift @0 INTEGER_CST@1) (mult @0 integer_pow2p@2))
> +  (if (INTEGRAL_TYPE_P (TREE_TYPE (@0))
> +       && tree_fits_shwi_p (@1)
> +       && tree_to_shwi (@1) > 0
> +       && tree_to_shwi (@1) < TYPE_PRECISION (TREE_TYPE (@0))
> +       && wi::to_wide (@1) == wi::exact_log2 (wi::to_wide (@2)))
> +   { constant_boolean_node (true, type); })))
> +
> +(for cmp (ne lt gt)
> + (simplify
> +  (cmp:c (lshift @0 INTEGER_CST@1) (mult @0 integer_pow2p@2))
> +  (if (INTEGRAL_TYPE_P (TREE_TYPE (@0))
> +       && tree_fits_shwi_p (@1)
> +       && tree_to_shwi (@1) > 0
> +       && tree_to_shwi (@1) < TYPE_PRECISION (TREE_TYPE (@0))
> +       && wi::to_wide (@1) == wi::exact_log2 (wi::to_wide (@2)))
> +   { constant_boolean_node (false, type); })))
>
> hmm. I wonder if it makes more sense to handle this in value-numbering.
> tree-ssa-sccvn.cc:visit_nary_op handles some cases that are not exactly
> canonicalization issues but the shift vs mult could be handled there by
> just performing the alternate lookup.  That would also enable CSE and by
> means of that of course the comparisons you do above.
>
> Thanks,
> Richard.
>
> >
> > 2022-08-08  Roger Sayle  <ro...@nextmovesoftware.com>
> >
> > gcc/ChangeLog
> >	PR tree-optimization/71343
> >	* match.pd (op (lshift @0 @1) (lshift @2 @1)): Optimize the
> >	expression (X<<C) + (Y<<C) to (X+Y)<<C for multiple operators.
> >	(op (rshift @0 @1) (rshift @2 @1)): Likewise, simplify (X>>C)^(Y>>C)
> >	to (X^Y)>>C for binary logical operators, AND, IOR and XOR.
> >	(cmp:c (lshift @0) (mult @1)): Optimize comparisons between
> >	shifts by integer constants and multiplications by powers of 2.
> >
> > gcc/testsuite/ChangeLog
> >	PR tree-optimization/71343
> >	* gcc.dg/pr71343-1.c: New test case.
> >	* gcc.dg/pr71343-2.c: Likewise.
> >
> >
> > Thanks in advance,
> > Roger
> > --
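As a quick refresher on the two simplifications discussed in the quoted thread, here are illustrative C examples (function names are invented for this message; the shift distribution came from part one of the fix, while the comparison folding is what this patch now achieves via value numbering):

/* (X << C) op (Y << C) -> (X op Y) << C: one shift is eliminated.  */
unsigned int
distribute_shift (unsigned int x, unsigned int y)
{
  return (x << 2) + (y << 2);	/* simplifies to (x + y) << 2 */
}

/* Comparisons of equivalent non-canonical forms fold to a constant.  */
int
shift_vs_mult_cmp (unsigned int x)
{
  return (x << 2) > x * 4;	/* folds to 0 (always false) */
}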
diff --git a/gcc/tree-ssa-sccvn.cc b/gcc/tree-ssa-sccvn.cc
index 74b8d8d..2644446 100644
--- a/gcc/tree-ssa-sccvn.cc
+++ b/gcc/tree-ssa-sccvn.cc
@@ -5312,6 +5312,30 @@ visit_nary_op (tree lhs, gassign *stmt)
 	    }
 	}
       break;
+    case LSHIFT_EXPR:
+      /* For X << C, use the value number of X * (1 << C).  */
+      if (INTEGRAL_TYPE_P (type))
+	{
+	  tree rhs2 = gimple_assign_rhs2 (stmt);
+	  if (TREE_CODE (rhs2) == INTEGER_CST
+	      && tree_fits_uhwi_p (rhs2)
+	      && tree_to_uhwi (rhs2) < TYPE_PRECISION (type))
+	    {
+	      wide_int w = wi::set_bit_in_zero (tree_to_uhwi (rhs2),
+						TYPE_PRECISION (type));
+	      gimple_match_op match_op (gimple_match_cond::UNCOND,
+					MULT_EXPR, type, rhs1,
+					wide_int_to_tree (type, w));
+	      result = vn_nary_build_or_lookup (&match_op);
+	      if (result)
+		{
+		  bool changed = set_ssa_val_to (lhs, result);
+		  vn_nary_op_insert_stmt (stmt, result);
+		  return changed;
+		}
+	    }
+	}
+      break;
     default:
       break;
     }
diff --git a/gcc/testsuite/gcc.dg/pr71343-2.c b/gcc/testsuite/gcc.dg/pr71343-2.c
new file mode 100644
index 0000000..11800a9
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/pr71343-2.c
@@ -0,0 +1,34 @@
+/* { dg-do compile } */
+/* { dg-options "-O2 -fdump-tree-optimized" } */
+
+unsigned int test1(unsigned int a , unsigned int b)
+{
+  return (a << 2) + (b << 2) == a * 4 + b * 4;
+}
+
+unsigned int test2(unsigned int a , unsigned int b)
+{
+  return (a << 2) + (b << 2) == (a + b) << 2;
+}
+
+unsigned int test3(unsigned int a , unsigned int b)
+{
+  return a * 4 + b * 4 == (a + b) * 4;
+}
+
+unsigned int test4(unsigned int a , unsigned int b)
+{
+  return (a + b) << 2 == (a + b) * 4;
+}
+
+unsigned int test5(unsigned int a , unsigned int b)
+{
+  return (a << 2) + (b << 2) == (a + b) * 4;
+}
+
+unsigned int test6(unsigned int a , unsigned int b)
+{
+  return (a + b) << 2 == a * 4 + b * 4;
+}
+
+/* { dg-final { scan-tree-dump-times "return 1" 6 "optimized" } } */
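As Richard notes above, routing this through value numbering should also enable CSE between the two forms, not just the comparison folding.  A hypothetical example (not in the testsuite) of what the alternate lookup should now catch:

/* The shift and the multiplication now share a value number, so FRE
   should eliminate one of them, after which the difference folds.  */
unsigned int
cse_shift_mult (unsigned int x)
{
  unsigned int a = x << 3;
  unsigned int b = x * 8;	/* same value number as a */
  return a - b;			/* expected to fold to 0 */
}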