On 01/31/2017 11:26 AM, Jakub Jelinek wrote:
Hi!

useless_type_conversion_p says that a conversion between a precision 1
unsigned BOOLEAN_TYPE and a precision 1 unsigned integral type is useless,
but apparently the vectorizer heavily relies on the BOOLEAN_TYPE vs.
unsigned integral:1 distinction (it uses VECTOR_BOOLEAN_TYPE_P types
only for real BOOLEAN_TYPE).  As the conversion is useless, we can end up,
as in the following testcase, with e.g. binary operations that have a
boolean on one side and an unsigned int:1 on the other.
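
Purely as an illustration of that kind of pairing (this is a hypothetical
snippet, not the PR 79284 testcase), a _Bool and an unsigned int:1 bitfield
can meet in a single expression like this:

struct S { unsigned int u : 1; };

void
f (struct S *s, _Bool *b, int *r, int n)
{
  for (int i = 0; i < n; i++)
    /* b[i] has type _Bool (BOOLEAN_TYPE), while s[i].u is an unsigned
       int:1 bitfield - the two kinds of 1-bit type the paragraph above
       is about.  */
    r[i] = (b[i] & s[i].u) ? 1 : 2;
}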

The following patch replaces all explicit BOOLEAN_TYPE checks in the
vectorizer with a new macro that handles unsigned integral:1 the same
as BOOLEAN_TYPE.
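
Roughly, the idea of the macro is something along these lines (a sketch
only; the exact definition is in the patch itself):

/* Sketch: true for a real BOOLEAN_TYPE and for unsigned integer types
   of precision 1.  */
#define INTEGRAL_BOOLEAN_TYPE_P(TYPE) \
  (TREE_CODE (TYPE) == BOOLEAN_TYPE \
   || (TREE_CODE (TYPE) == INTEGER_TYPE \
       && TYPE_PRECISION (TYPE) == 1 \
       && TYPE_UNSIGNED (TYPE)))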

Bootstrapped/regtested on x86_64-linux and i686-linux, ok for trunk?

2017-01-31  Jakub Jelinek  <ja...@redhat.com>

        PR tree-optimization/79284
        * tree.h (INTEGRAL_BOOLEAN_TYPE_P): Define.
        * tree-vect-stmts.c (vect_get_vec_def_for_operand,
        vectorizable_mask_load_store, vectorizable_operation,
        vect_is_simple_cond, get_same_sized_vectype): Use it instead
        of comparing TREE_CODE of a type against BOOLEAN_TYPE.
        * tree-vect-patterns.c (check_bool_pattern, search_type_for_mask_1,
        vect_recog_bool_pattern, vect_recog_mask_conversion_pattern): Likewise.
        * tree-vect-slp.c (vect_get_constant_vectors): Likewise.
        * tree-vect-loop.c (vect_determine_vectorization_factor): Likewise.

Didn't we determine that the Ada front end generated boolean types with a
precision greater than 1?  And if so, does that suggest that
INTEGRAL_BOOLEAN_TYPE_P might need further adjustment (and that we likely
have latent bugs in the vectorizer if it were presented with such types)?

jeff