On Wed, 1 Feb 2017, Jakub Jelinek wrote:

> On Wed, Feb 01, 2017 at 09:21:57AM +0100, Richard Biener wrote:
> > > Not sure I understand what you mean explicitly check the precision,
> > > the macro checks the precision already, and intentionally only for
> > > non-BOOLEAN_TYPE.  If you mean checking precision explicitly in the spots
> > > where the macro is used, that would be worse for the hypothetical case
> > > when u_t_c would change in this regard, you'd have far more places to
> > > change.
> > > As for macro name, I came up e.g. with BOOLEAN_COMPATIBLE_P or
> > > BOOLEAN_COMPATIBLE_TYPE_P, perhaps those could make it clearer on what it
> > > is.
> > 
> > +/* Nonzero if TYPE represents a (scalar) boolean type or type
> > +   in the middle-end compatible with it.  */
> > +
> > +#define INTEGRAL_BOOLEAN_TYPE_P(TYPE) \
> > +  (TREE_CODE (TYPE) == BOOLEAN_TYPE            \
> > +   || ((TREE_CODE (TYPE) == INTEGER_TYPE       \
> > +       || TREE_CODE (TYPE) == ENUMERAL_TYPE)   \
> > +       && TYPE_PRECISION (TYPE) == 1           \
> > +       && TYPE_UNSIGNED (TYPE)))
> > 
> > (just to quote what you proposed).
> 
> So would it help to use
>   (TREE_CODE (TYPE) == BOOLEAN_TYPE
>    || (INTEGRAL_TYPE_P (TYPE)
>        && useless_type_conversion_p (boolean_type_node, TYPE)))
> It would be much slower than the above, but would be less dependent
> on useless_type_conversion_p details.

For the vectorizer it likely would break the larger logical type
handling?

The question is really what the vectorizer and other places are looking
for -- which usually is a 1-bit precision, possibly unsigned,
integral type.
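
To illustrate (just a sketch, the helper name is made up here and not
part of the patch), the check those places essentially want would look
like

  /* Return true if TYPE is a boolean type or an integral type the
     middle-end treats like one (1-bit precision, unsigned).  */
  static inline bool
  bool_like_type_p (const_tree type)
  {
    if (TREE_CODE (type) == BOOLEAN_TYPE)
      return true;
    return (INTEGRAL_TYPE_P (type)
            && TYPE_PRECISION (type) == 1
            && TYPE_UNSIGNED (type));
  }

which is what the proposed INTEGRAL_BOOLEAN_TYPE_P macro spells out,
without going through useless_type_conversion_p.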

I can't see where we'd use the variant with useless_type_conversion_p.

Richard.
