On Sat, Aug 14, 2021 at 2:21 AM apinski--- via Gcc-patches
<gcc-patches@gcc.gnu.org> wrote:
>
> From: Andrew Pinski <apin...@marvell.com>
>
> I noticed this while Richard B. was fixing PR101756.
> Basically min of two bools is the same as doing an "and",
> and max of two bools is doing an "ior".

But that's only true for unsigned values.  For signed ones it would be
the other way around, and also restricted to 1-bit precision bools.
For signed bools the gimple_truth_valued_p check is likely wrong as
well unless they have 1-bit precision?  IIRC Ada has BOOLEAN_TYPE
nodes with precision other than 1, while Fortran bools have 1-bit
precision but different sizes.  The vector bools are IIRC the only
'signed' bools we have, and those have precisions != 1.

I think we can use the fact that any value other than -1/0/1 in an
N-bit precision bool invokes undefined behavior, though.  But whether
the canonical true value of an N-bit signed precision bool is -1 or 1
isn't as clear (maybe we can resort to TYPE_MIN/MAX_VALUE here, not
sure).

A way out would be to restrict all this to TYPE_UNSIGNED BOOLEAN_TYPE
or non-BOOLEAN_TYPE?

Richard.

> gcc/ChangeLog:
>
>         * match.pd: Add min/max patterns for bool types.
>
> gcc/testsuite/ChangeLog:
>
>         * gcc.dg/tree-ssa/bool-12.c: New test.
> ---
>  gcc/match.pd                            | 10 +++++++++
>  gcc/testsuite/gcc.dg/tree-ssa/bool-12.c | 27 +++++++++++++++++++++++++
>  2 files changed, 37 insertions(+)
>  create mode 100644 gcc/testsuite/gcc.dg/tree-ssa/bool-12.c
>
> diff --git a/gcc/match.pd b/gcc/match.pd
> index b1f2aaaac02..8fd60d08cfe 100644
> --- a/gcc/match.pd
> +++ b/gcc/match.pd
> @@ -3103,6 +3103,16 @@ DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)
>         && (GIMPLE || !TREE_SIDE_EFFECTS (@1)))
>     (cond (cmp @2 @3) @1 @0))))
>
> +/* max(bool0, bool1) -> bool0 | bool1 */
> +(simplify
> + (max gimple_truth_valued_p@0 gimple_truth_valued_p@1)
> + (bit_ior @0 @1))
> +
> +/* min(bool0, bool1) -> bool0 & bool1 */
> +(simplify
> + (min gimple_truth_valued_p@0 gimple_truth_valued_p@1)
> + (bit_and @0 @1))
> +
>  /* Simplifications of shift and rotates. */
>
>  (for rotate (lrotate rrotate)
> diff --git a/gcc/testsuite/gcc.dg/tree-ssa/bool-12.c b/gcc/testsuite/gcc.dg/tree-ssa/bool-12.c
> new file mode 100644
> index 00000000000..2d8ad9912d3
> --- /dev/null
> +++ b/gcc/testsuite/gcc.dg/tree-ssa/bool-12.c
> @@ -0,0 +1,27 @@
> +/* { dg-do compile } */
> +/* { dg-options "-O1 -fdump-tree-optimized -fdump-tree-original" } */
> +#define bool _Bool
> +int maxbool(bool ab, bool bb)
> +{
> +  int a = ab;
> +  int b = bb;
> +  int c;
> +  c = (a > b) ? a : b;
> +  return c;
> +}
> +int minbool(bool ab, bool bb)
> +{
> +  int a = ab;
> +  int b = bb;
> +  int c;
> +  c = (a < b) ? a : b;
> +  return c;
> +}
> +/* Original should have one of each MAX/MIN expressions.  */
> +/* { dg-final { scan-tree-dump-times "MAX_EXPR" 1 "original" } } */
> +/* { dg-final { scan-tree-dump-times "MIN_EXPR" 1 "original" } } */
> +
> +/* By the time we reach optimized, the MAX and MIN expressions
> +   should have been removed.  */
> +/* { dg-final { scan-tree-dump-times "MAX_EXPR" 0 "optimized" } } */
> +/* { dg-final { scan-tree-dump-times "MIN_EXPR" 0 "optimized" } } */
> --
> 2.27.0
>
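
As a quick illustration of the signedness point (a standalone sketch, not
part of the patch): with the unsigned truth values {0, 1}, MAX coincides
with bitwise IOR and MIN with AND, while with the signed 1-bit truth
values {0, -1} the correspondence is reversed, because -1 (true) sorts
below 0 (false).

/* Standalone illustration; MAX/MIN here are plain C macros, not GCC
   internals.  */
#include <assert.h>

#define MAX(a, b) ((a) > (b) ? (a) : (b))
#define MIN(a, b) ((a) < (b) ? (a) : (b))

int
main (void)
{
  /* Unsigned truth values: 0 = false, 1 = true.
     max behaves like ior, min like and.  */
  for (int a = 0; a <= 1; a++)
    for (int b = 0; b <= 1; b++)
      {
        assert (MAX (a, b) == (a | b));
        assert (MIN (a, b) == (a & b));
      }

  /* Signed 1-bit truth values: 0 = false, -1 = true.  Since -1 < 0,
     max yields false whenever either operand is false, i.e. it acts
     like and, and min acts like ior.  */
  for (int a = -1; a <= 0; a++)
    for (int b = -1; b <= 0; b++)
      {
        assert (MAX (a, b) == (a & b));
        assert (MIN (a, b) == (a | b));
      }

  return 0;
}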
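
If the way out is to restrict the patterns, one possible shape -- purely a
sketch of Richard's suggestion, assuming a guard on the expression type
'type' is sufficient and keeping gimple_truth_valued_p from the posted
patch as the operand predicate -- might be:

/* Sketch only: max(bool0, bool1) -> bool0 | bool1, limited to unsigned
   BOOLEAN_TYPEs and non-boolean truth values as suggested above.  */
(simplify
 (max gimple_truth_valued_p@0 gimple_truth_valued_p@1)
 (if (TREE_CODE (type) != BOOLEAN_TYPE || TYPE_UNSIGNED (type))
  (bit_ior @0 @1)))

/* Sketch only: min(bool0, bool1) -> bool0 & bool1, same restriction.  */
(simplify
 (min gimple_truth_valued_p@0 gimple_truth_valued_p@1)
 (if (TREE_CODE (type) != BOOLEAN_TYPE || TYPE_UNSIGNED (type))
  (bit_and @0 @1)))

Whether the non-BOOLEAN_TYPE half of that guard would also let signed
vector bools through is exactly the kind of case that would still need
checking.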