On Tue, Nov 18, 2014 at 11:36 AM, Marc Glisse <[email protected]> wrote:
> On Mon, 17 Nov 2014, Richard Biener wrote:
>
>> On Sun, Nov 16, 2014 at 6:53 PM, Marc Glisse <[email protected]> wrote:
>>>
>>> On Sun, 16 Nov 2014, Richard Biener wrote:
>>>
>>>> I think the element_mode is the way to go.
>>>
>>> The following passed bootstrap+testsuite.
>>>
>>> 2014-11-16 Marc Glisse <[email protected]>
>>>
>>> * tree.c (element_mode, integer_truep): New functions.
>>> * tree.h (element_mode, integer_truep): Declare them.
>>> * fold-const.c (negate_expr_p, fold_negate_expr, combine_comparisons,
>>> fold_cond_expr_with_comparison, fold_real_zero_addition_p,
>>> fold_comparison, fold_ternary_loc, tree_call_nonnegative_warnv_p,
>>> fold_strip_sign_ops): Use element_mode.
>>> (fold_binary_loc): Use element_mode and element_precision.
>>> * match.pd: Use integer_truep, element_mode, element_precision,
>>> VECTOR_TYPE_P and build_one_cst. Extend some transformations to
>>> vectors. Simplify A/-A.
>>
>>
>> - && ! (final_prec != GET_MODE_PRECISION (TYPE_MODE (type))
>> - && TYPE_MODE (type) == TYPE_MODE (inter_type))
>> + && ! (final_prec != element_precision (type)
>> + && element_mode (type) == element_mode (inter_type))
>>
>> isn't a 1:1 conversion - please use
>>
>> final_prec != GET_MODE_PRECISION (element_mode (type))
>>
>> your version is final_prec != final_prec.
>
>
> Good catch, I was doing those replacements too fast to be really safe :-(
>
>> The tree.c:element_mode function lacks a function comment.
>>
>> Ok with those fixed.
>
>
> I am attaching the version I committed.
>
> I'll try to replace some more TYPE_MODE during stage3...
Btw, a convenience would be to be able to write
HONOR_NANS (type)
thus effectively making HONOR_* inline functions with a machine_mode
overload and a type overload (with the type overload properly looking at
element types).
Richard.
> Thanks,
>
> --
> Marc Glisse
> Index: gcc/tree.c
> ===================================================================
> --- gcc/tree.c (revision 217701)
> +++ gcc/tree.c (revision 217702)
> @@ -2268,20 +2268,34 @@ integer_nonzerop (const_tree expr)
> {
> STRIP_NOPS (expr);
>
> return ((TREE_CODE (expr) == INTEGER_CST
> && !wi::eq_p (expr, 0))
> || (TREE_CODE (expr) == COMPLEX_CST
> && (integer_nonzerop (TREE_REALPART (expr))
> || integer_nonzerop (TREE_IMAGPART (expr)))));
> }
>
> +/* Return 1 if EXPR is the integer constant one. For vector,
> + return 1 if every piece is the integer constant minus one
> + (representing the value TRUE). */
> +
> +int
> +integer_truep (const_tree expr)
> +{
> + STRIP_NOPS (expr);
> +
> + if (TREE_CODE (expr) == VECTOR_CST)
> + return integer_all_onesp (expr);
> + return integer_onep (expr);
> +}
> +
> /* Return 1 if EXPR is the fixed-point constant zero. */
>
> int
> fixed_zerop (const_tree expr)
> {
> return (TREE_CODE (expr) == FIXED_CST
> && TREE_FIXED_CST (expr).data.is_zero ());
> }
>
> /* Return the power of two represented by a tree node known to be a
> @@ -12303,11 +12317,25 @@ get_base_address (tree t)
> t = TREE_OPERAND (TREE_OPERAND (t, 0), 0);
>
> /* ??? Either the alias oracle or all callers need to properly deal
> with WITH_SIZE_EXPRs before we can look through those. */
> if (TREE_CODE (t) == WITH_SIZE_EXPR)
> return NULL_TREE;
>
> return t;
> }
>
> +/* Return the machine mode of T. For vectors, returns the mode of the
> + inner type. The main use case is to feed the result to HONOR_NANS,
> + avoiding the BLKmode that a direct TYPE_MODE (T) might return. */
> +
> +machine_mode
> +element_mode (const_tree t)
> +{
> + if (!TYPE_P (t))
> + t = TREE_TYPE (t);
> + if (VECTOR_TYPE_P (t) || TREE_CODE (t) == COMPLEX_TYPE)
> + t = TREE_TYPE (t);
> + return TYPE_MODE (t);
> +}
> +
> #include "gt-tree.h"
> Index: gcc/tree.h
> ===================================================================
> --- gcc/tree.h (revision 217701)
> +++ gcc/tree.h (revision 217702)
> @@ -1557,20 +1557,22 @@ extern void protected_set_expr_location
> #define TYPE_NEXT_VARIANT(NODE) (TYPE_CHECK (NODE)->type_common.next_variant)
> #define TYPE_MAIN_VARIANT(NODE) (TYPE_CHECK (NODE)->type_common.main_variant)
> #define TYPE_CONTEXT(NODE) (TYPE_CHECK (NODE)->type_common.context)
>
> #define TYPE_MODE(NODE) \
> (VECTOR_TYPE_P (TYPE_CHECK (NODE)) \
> ? vector_type_mode (NODE) : (NODE)->type_common.mode)
> #define SET_TYPE_MODE(NODE, MODE) \
> (TYPE_CHECK (NODE)->type_common.mode = (MODE))
>
> +extern machine_mode element_mode (const_tree t);
> +
> /* The "canonical" type for this type node, which is used by frontends to
> compare the type for equality with another type. If two types are
> equal (based on the semantics of the language), then they will have
> equivalent TYPE_CANONICAL entries.
>
> As a special case, if TYPE_CANONICAL is NULL_TREE, and thus
> TYPE_STRUCTURAL_EQUALITY_P is true, then it cannot
> be used for comparison against other types. Instead, the type is
> said to require structural equality checks, described in
> TYPE_STRUCTURAL_EQUALITY_P.
> @@ -3992,20 +3994,25 @@ extern int integer_minus_onep (const_tre
> /* integer_pow2p (tree x) is nonzero is X is an integer constant with
> exactly one bit 1. */
>
> extern int integer_pow2p (const_tree);
>
> /* integer_nonzerop (tree x) is nonzero if X is an integer constant
> with a nonzero value. */
>
> extern int integer_nonzerop (const_tree);
>
> +/* integer_truep (tree x) is nonzero if X is an integer constant of value 1 or
> +   a vector where each element is an integer constant of value -1.  */
> +
> +extern int integer_truep (const_tree);
> +
> extern bool cst_and_fits_in_hwi (const_tree);
> extern tree num_ending_zeros (const_tree);
>
> /* fixed_zerop (tree x) is nonzero if X is a fixed-point constant of
> value 0. */
>
> extern int fixed_zerop (const_tree);
>
> /* staticp (tree x) is nonzero if X is a reference to data allocated
> at a fixed address in memory. Returns the outermost data. */
> Index: gcc/fold-const.c
> ===================================================================
> --- gcc/fold-const.c (revision 217701)
> +++ gcc/fold-const.c (revision 217702)
> @@ -435,46 +435,46 @@ negate_expr_p (tree t)
> }
>
> case COMPLEX_EXPR:
> return negate_expr_p (TREE_OPERAND (t, 0))
> && negate_expr_p (TREE_OPERAND (t, 1));
>
> case CONJ_EXPR:
> return negate_expr_p (TREE_OPERAND (t, 0));
>
> case PLUS_EXPR:
> - if (HONOR_SIGN_DEPENDENT_ROUNDING (TYPE_MODE (type))
> - || HONOR_SIGNED_ZEROS (TYPE_MODE (type)))
> + if (HONOR_SIGN_DEPENDENT_ROUNDING (element_mode (type))
> + || HONOR_SIGNED_ZEROS (element_mode (type)))
> return false;
> /* -(A + B) -> (-B) - A. */
> if (negate_expr_p (TREE_OPERAND (t, 1))
> && reorder_operands_p (TREE_OPERAND (t, 0),
> TREE_OPERAND (t, 1)))
> return true;
> /* -(A + B) -> (-A) - B. */
> return negate_expr_p (TREE_OPERAND (t, 0));
>
> case MINUS_EXPR:
> /* We can't turn -(A-B) into B-A when we honor signed zeros. */
> - return !HONOR_SIGN_DEPENDENT_ROUNDING (TYPE_MODE (type))
> - && !HONOR_SIGNED_ZEROS (TYPE_MODE (type))
> + return !HONOR_SIGN_DEPENDENT_ROUNDING (element_mode (type))
> + && !HONOR_SIGNED_ZEROS (element_mode (type))
> && reorder_operands_p (TREE_OPERAND (t, 0),
> TREE_OPERAND (t, 1));
>
> case MULT_EXPR:
> if (TYPE_UNSIGNED (TREE_TYPE (t)))
> break;
>
> /* Fall through. */
>
> case RDIV_EXPR:
> - if (! HONOR_SIGN_DEPENDENT_ROUNDING (TYPE_MODE (TREE_TYPE (t))))
> + if (! HONOR_SIGN_DEPENDENT_ROUNDING (element_mode (TREE_TYPE (t))))
> return negate_expr_p (TREE_OPERAND (t, 1))
> || negate_expr_p (TREE_OPERAND (t, 0));
> break;
>
> case TRUNC_DIV_EXPR:
> case ROUND_DIV_EXPR:
> case EXACT_DIV_EXPR:
> /* In general we can't negate A / B, because if A is INT_MIN and
> B is 1, we may turn this into INT_MIN / -1 which is undefined
> and actually traps on some architectures. But if overflow is
> @@ -610,22 +610,22 @@ fold_negate_expr (location_t loc, tree t
> return fold_build1_loc (loc, CONJ_EXPR, type,
> fold_negate_expr (loc, TREE_OPERAND (t, 0)));
> break;
>
> case NEGATE_EXPR:
> if (!TYPE_OVERFLOW_SANITIZED (type))
> return TREE_OPERAND (t, 0);
> break;
>
> case PLUS_EXPR:
> - if (!HONOR_SIGN_DEPENDENT_ROUNDING (TYPE_MODE (type))
> - && !HONOR_SIGNED_ZEROS (TYPE_MODE (type)))
> + if (!HONOR_SIGN_DEPENDENT_ROUNDING (element_mode (type))
> + && !HONOR_SIGNED_ZEROS (element_mode (type)))
> {
> /* -(A + B) -> (-B) - A. */
> if (negate_expr_p (TREE_OPERAND (t, 1))
> && reorder_operands_p (TREE_OPERAND (t, 0),
> TREE_OPERAND (t, 1)))
> {
> tem = negate_expr (TREE_OPERAND (t, 1));
> return fold_build2_loc (loc, MINUS_EXPR, type,
> tem, TREE_OPERAND (t, 0));
> }
> @@ -635,35 +635,35 @@ fold_negate_expr (location_t loc, tree t
> {
> tem = negate_expr (TREE_OPERAND (t, 0));
> return fold_build2_loc (loc, MINUS_EXPR, type,
> tem, TREE_OPERAND (t, 1));
> }
> }
> break;
>
> case MINUS_EXPR:
> /* - (A - B) -> B - A */
> - if (!HONOR_SIGN_DEPENDENT_ROUNDING (TYPE_MODE (type))
> - && !HONOR_SIGNED_ZEROS (TYPE_MODE (type))
> + if (!HONOR_SIGN_DEPENDENT_ROUNDING (element_mode (type))
> + && !HONOR_SIGNED_ZEROS (element_mode (type))
> && reorder_operands_p (TREE_OPERAND (t, 0), TREE_OPERAND (t, 1)))
> return fold_build2_loc (loc, MINUS_EXPR, type,
> TREE_OPERAND (t, 1), TREE_OPERAND (t, 0));
> break;
>
> case MULT_EXPR:
> if (TYPE_UNSIGNED (type))
> break;
>
> /* Fall through. */
>
> case RDIV_EXPR:
> - if (! HONOR_SIGN_DEPENDENT_ROUNDING (TYPE_MODE (type)))
> + if (! HONOR_SIGN_DEPENDENT_ROUNDING (element_mode (type)))
> {
> tem = TREE_OPERAND (t, 1);
> if (negate_expr_p (tem))
> return fold_build2_loc (loc, TREE_CODE (t), type,
> TREE_OPERAND (t, 0), negate_expr (tem));
> tem = TREE_OPERAND (t, 0);
> if (negate_expr_p (tem))
> return fold_build2_loc (loc, TREE_CODE (t), type,
> negate_expr (tem), TREE_OPERAND (t, 1));
> }
> @@ -2308,21 +2308,21 @@ compcode_to_comparison (enum comparison_
> and RCODE on the identical operands LL_ARG and LR_ARG. Take into account
> the possibility of trapping if the mode has NaNs, and return NULL_TREE
> if this makes the transformation invalid. */
>
> tree
> combine_comparisons (location_t loc,
> enum tree_code code, enum tree_code lcode,
> enum tree_code rcode, tree truth_type,
> tree ll_arg, tree lr_arg)
> {
> - bool honor_nans = HONOR_NANS (TYPE_MODE (TREE_TYPE (ll_arg)));
> + bool honor_nans = HONOR_NANS (element_mode (ll_arg));
> enum comparison_code lcompcode = comparison_to_compcode (lcode);
> enum comparison_code rcompcode = comparison_to_compcode (rcode);
> int compcode;
>
> switch (code)
> {
> case TRUTH_AND_EXPR: case TRUTH_ANDIF_EXPR:
> compcode = lcompcode & rcompcode;
> break;
>
> @@ -4574,21 +4574,21 @@ fold_cond_expr_with_comparison (location
>
> None of these transformations work for modes with signed
> zeros. If A is +/-0, the first two transformations will
> change the sign of the result (from +0 to -0, or vice
> versa). The last four will fix the sign of the result,
> even though the original expressions could be positive or
> negative, depending on the sign of A.
>
> Note that all these transformations are correct if A is
> NaN, since the two alternatives (A and -A) are also NaNs. */
> - if (!HONOR_SIGNED_ZEROS (TYPE_MODE (type))
> + if (!HONOR_SIGNED_ZEROS (element_mode (type))
> && (FLOAT_TYPE_P (TREE_TYPE (arg01))
> ? real_zerop (arg01)
> : integer_zerop (arg01))
> && ((TREE_CODE (arg2) == NEGATE_EXPR
> && operand_equal_p (TREE_OPERAND (arg2, 0), arg1, 0))
> /* In the case that A is of the form X-Y, '-A' (arg2) may
> have already been folded to Y-X, check for that. */
> || (TREE_CODE (arg1) == MINUS_EXPR
> && TREE_CODE (arg2) == MINUS_EXPR
> && operand_equal_p (TREE_OPERAND (arg1, 0),
> @@ -4632,21 +4632,21 @@ fold_cond_expr_with_comparison (location
> default:
> gcc_assert (TREE_CODE_CLASS (comp_code) == tcc_comparison);
> break;
> }
>
> /* A != 0 ? A : 0 is simply A, unless A is -0. Likewise
> A == 0 ? A : 0 is always 0 unless A is -0. Note that
> both transformations are correct when A is NaN: A != 0
> is then true, and A == 0 is false. */
>
> - if (!HONOR_SIGNED_ZEROS (TYPE_MODE (type))
> + if (!HONOR_SIGNED_ZEROS (element_mode (type))
> && integer_zerop (arg01) && integer_zerop (arg2))
> {
> if (comp_code == NE_EXPR)
> return pedantic_non_lvalue_loc (loc, fold_convert_loc (loc, type,
> arg1));
> else if (comp_code == EQ_EXPR)
> return build_zero_cst (type);
> }
>
> /* Try some transformations of A op B ? A : B.
>
> @@ -4667,21 +4667,21 @@ fold_cond_expr_with_comparison (location
> The first two transformations are correct if either A or B
> is a NaN. In the first transformation, the condition will
> be false, and B will indeed be chosen. In the case of the
> second transformation, the condition A != B will be true,
> and A will be chosen.
>
> The conversions to max() and min() are not correct if B is
> a number and A is not. The conditions in the original
> expressions will be false, so all four give B. The min()
> and max() versions would give a NaN instead. */
> - if (!HONOR_SIGNED_ZEROS (TYPE_MODE (type))
> + if (!HONOR_SIGNED_ZEROS (element_mode (type))
> && operand_equal_for_comparison_p (arg01, arg2, arg00)
> /* Avoid these transformations if the COND_EXPR may be used
> as an lvalue in the C++ front-end. PR c++/19199. */
> && (in_gimple_form
> || VECTOR_TYPE_P (type)
> || (strcmp (lang_hooks.name, "GNU C++") != 0
> && strcmp (lang_hooks.name, "GNU Objective-C++") != 0)
> || ! maybe_lvalue_p (arg1)
> || ! maybe_lvalue_p (arg2)))
> {
> @@ -4704,55 +4704,55 @@ fold_cond_expr_with_comparison (location
> case NE_EXPR:
> return pedantic_non_lvalue_loc (loc, fold_convert_loc (loc, type,
> arg1));
> case LE_EXPR:
> case LT_EXPR:
> case UNLE_EXPR:
> case UNLT_EXPR:
> /* In C++ a ?: expression can be an lvalue, so put the
> operand which will be used if they are equal first
> so that we can convert this back to the
> corresponding COND_EXPR. */
> - if (!HONOR_NANS (TYPE_MODE (TREE_TYPE (arg1))))
> + if (!HONOR_NANS (element_mode (arg1)))
> {
> comp_op0 = fold_convert_loc (loc, comp_type, comp_op0);
> comp_op1 = fold_convert_loc (loc, comp_type, comp_op1);
> tem = (comp_code == LE_EXPR || comp_code == UNLE_EXPR)
> ? fold_build2_loc (loc, MIN_EXPR, comp_type, comp_op0,
> comp_op1)
> : fold_build2_loc (loc, MIN_EXPR, comp_type,
> comp_op1, comp_op0);
> return pedantic_non_lvalue_loc (loc,
> fold_convert_loc (loc, type,
> tem));
> }
> break;
> case GE_EXPR:
> case GT_EXPR:
> case UNGE_EXPR:
> case UNGT_EXPR:
> - if (!HONOR_NANS (TYPE_MODE (TREE_TYPE (arg1))))
> + if (!HONOR_NANS (element_mode (arg1)))
> {
> comp_op0 = fold_convert_loc (loc, comp_type, comp_op0);
> comp_op1 = fold_convert_loc (loc, comp_type, comp_op1);
> tem = (comp_code == GE_EXPR || comp_code == UNGE_EXPR)
> ? fold_build2_loc (loc, MAX_EXPR, comp_type, comp_op0,
> comp_op1)
> : fold_build2_loc (loc, MAX_EXPR, comp_type,
> comp_op1, comp_op0);
> return pedantic_non_lvalue_loc (loc,
> fold_convert_loc (loc, type,
> tem));
> }
> break;
> case UNEQ_EXPR:
> - if (!HONOR_NANS (TYPE_MODE (TREE_TYPE (arg1))))
> + if (!HONOR_NANS (element_mode (arg1)))
> return pedantic_non_lvalue_loc (loc,
> fold_convert_loc (loc, type, arg2));
> break;
> case LTGT_EXPR:
> - if (!HONOR_NANS (TYPE_MODE (TREE_TYPE (arg1))))
> + if (!HONOR_NANS (element_mode (arg1)))
> return pedantic_non_lvalue_loc (loc,
> fold_convert_loc (loc, type, arg1));
> break;
> default:
> gcc_assert (TREE_CODE_CLASS (comp_code) == tcc_comparison);
> break;
> }
> }
>
> /* If this is A op C1 ? A : C2 with C1 and C2 constant integers,
> @@ -6083,40 +6083,40 @@ fold_binary_op_with_conditional_arg (loc
> X - 0 is not the same as X because 0 - 0 is -0. In other rounding
> modes, X + 0 is not the same as X because -0 + 0 is 0. */
>
> bool
> fold_real_zero_addition_p (const_tree type, const_tree addend, int negate)
> {
> if (!real_zerop (addend))
> return false;
>
> /* Don't allow the fold with -fsignaling-nans. */
> - if (HONOR_SNANS (TYPE_MODE (type)))
> + if (HONOR_SNANS (element_mode (type)))
> return false;
>
> /* Allow the fold if zeros aren't signed, or their sign isn't important.  */
> - if (!HONOR_SIGNED_ZEROS (TYPE_MODE (type)))
> + if (!HONOR_SIGNED_ZEROS (element_mode (type)))
> return true;
>
> /* In a vector or complex, we would need to check the sign of all zeros.  */
> if (TREE_CODE (addend) != REAL_CST)
> return false;
>
> /* Treat x + -0 as x - 0 and x - -0 as x + 0. */
> if (REAL_VALUE_MINUS_ZERO (TREE_REAL_CST (addend)))
> negate = !negate;
>
> /* The mode has signed zeros, and we have to honor their sign.
> In this situation, there is only one case we can return true for.
> X - 0 is the same as X unless rounding towards -infinity is
> supported. */
> - return negate && !HONOR_SIGN_DEPENDENT_ROUNDING (TYPE_MODE (type));
> + return negate && !HONOR_SIGN_DEPENDENT_ROUNDING (element_mode (type));
> }
>
> /* Subroutine of fold() that checks comparisons of built-in math
> functions against real constants.
>
> FCODE is the DECL_FUNCTION_CODE of the built-in, CODE is the comparison
> operator: EQ_EXPR, NE_EXPR, GT_EXPR, LT_EXPR, GE_EXPR or LE_EXPR. TYPE
> is the type of the result and ARG0 and ARG1 are the operands of the
> comparison. ARG1 must be a TREE_REAL_CST.
>
> @@ -9073,36 +9073,36 @@ fold_comparison (location_t loc, enum tr
> }
>
> /* Simplify comparison of something with itself. (For IEEE
> floating-point, we can only do some of these simplifications.) */
> if (operand_equal_p (arg0, arg1, 0))
> {
> switch (code)
> {
> case EQ_EXPR:
> if (! FLOAT_TYPE_P (TREE_TYPE (arg0))
> - || ! HONOR_NANS (TYPE_MODE (TREE_TYPE (arg0))))
> + || ! HONOR_NANS (element_mode (arg0)))
> return constant_boolean_node (1, type);
> break;
>
> case GE_EXPR:
> case LE_EXPR:
> if (! FLOAT_TYPE_P (TREE_TYPE (arg0))
> - || ! HONOR_NANS (TYPE_MODE (TREE_TYPE (arg0))))
> + || ! HONOR_NANS (element_mode (arg0)))
> return constant_boolean_node (1, type);
> return fold_build2_loc (loc, EQ_EXPR, type, arg0, arg1);
>
> case NE_EXPR:
> /* For NE, we can only do this simplification if integer
> or we don't honor IEEE floating point NaNs. */
> if (FLOAT_TYPE_P (TREE_TYPE (arg0))
> - && HONOR_NANS (TYPE_MODE (TREE_TYPE (arg0))))
> + && HONOR_NANS (element_mode (arg0)))
> break;
> /* ... fall through ... */
> case GT_EXPR:
> case LT_EXPR:
> return constant_boolean_node (0, type);
> default:
> gcc_unreachable ();
> }
> }
>
> @@ -9961,22 +9961,22 @@ fold_binary_loc (location_t loc,
> fold_convert_loc (loc, type,
> marg),
> fold_convert_loc (loc, type,
> parg1)));
> }
> }
> else
> {
> /* Fold __complex__ ( x, 0 ) + __complex__ ( 0, y )
> to __complex__ ( x, y ). This is not the same for SNaNs or
> if signed zeros are involved. */
> - if (!HONOR_SNANS (TYPE_MODE (TREE_TYPE (arg0)))
> - && !HONOR_SIGNED_ZEROS (TYPE_MODE (TREE_TYPE (arg0)))
> + if (!HONOR_SNANS (element_mode (arg0))
> + && !HONOR_SIGNED_ZEROS (element_mode (arg0))
> && COMPLEX_FLOAT_TYPE_P (TREE_TYPE (arg0)))
> {
> tree rtype = TREE_TYPE (TREE_TYPE (arg0));
> tree arg0r = fold_unary_loc (loc, REALPART_EXPR, rtype, arg0);
> tree arg0i = fold_unary_loc (loc, IMAGPART_EXPR, rtype, arg0);
> bool arg0rz = false, arg0iz = false;
> if ((arg0r && (arg0rz = real_zerop (arg0r)))
> || (arg0i && (arg0iz = real_zerop (arg0i))))
> {
> tree arg1r = fold_unary_loc (loc, REALPART_EXPR, rtype, arg1);
> @@ -10398,22 +10398,22 @@ fold_binary_loc (location_t loc,
> tem = fold_build2_loc (loc, BIT_XOR_EXPR, type,
> TREE_OPERAND (arg0, 0), mask1);
> return fold_build2_loc (loc, MINUS_EXPR, type, tem,
> mask1);
> }
> }
> }
>
> /* Fold __complex__ ( x, 0 ) - __complex__ ( 0, y ) to
> __complex__ ( x, -y ). This is not the same for SNaNs or if
> signed zeros are involved. */
> - if (!HONOR_SNANS (TYPE_MODE (TREE_TYPE (arg0)))
> - && !HONOR_SIGNED_ZEROS (TYPE_MODE (TREE_TYPE (arg0)))
> + if (!HONOR_SNANS (element_mode (arg0))
> + && !HONOR_SIGNED_ZEROS (element_mode (arg0))
> && COMPLEX_FLOAT_TYPE_P (TREE_TYPE (arg0)))
> {
> tree rtype = TREE_TYPE (TREE_TYPE (arg0));
> tree arg0r = fold_unary_loc (loc, REALPART_EXPR, rtype, arg0);
> tree arg0i = fold_unary_loc (loc, IMAGPART_EXPR, rtype, arg0);
> bool arg0rz = false, arg0iz = false;
> if ((arg0r && (arg0rz = real_zerop (arg0r)))
> || (arg0i && (arg0iz = real_zerop (arg0i))))
> {
> tree arg1r = fold_unary_loc (loc, REALPART_EXPR, rtype, arg1);
> @@ -10602,22 +10602,22 @@ fold_binary_loc (location_t loc,
> if (tem != NULL_TREE)
> {
> tem = fold_convert_loc (loc, type, tem);
> return fold_build2_loc (loc, MULT_EXPR, type, tem, tem);
> }
> }
>
> /* Fold z * +-I to __complex__ (-+__imag z, +-__real z).
> This is not the same for NaNs or if signed zeros are
> involved. */
> - if (!HONOR_NANS (TYPE_MODE (TREE_TYPE (arg0)))
> - && !HONOR_SIGNED_ZEROS (TYPE_MODE (TREE_TYPE (arg0)))
> + if (!HONOR_NANS (element_mode (arg0))
> + && !HONOR_SIGNED_ZEROS (element_mode (arg0))
> && COMPLEX_FLOAT_TYPE_P (TREE_TYPE (arg0))
> && TREE_CODE (arg1) == COMPLEX_CST
> && real_zerop (TREE_REALPART (arg1)))
> {
> tree rtype = TREE_TYPE (TREE_TYPE (arg0));
> if (real_onep (TREE_IMAGPART (arg1)))
> return
> fold_build2_loc (loc, COMPLEX_EXPR, type,
> negate_expr (fold_build1_loc (loc,
> IMAGPART_EXPR,
> rtype, arg0)),
> @@ -10650,21 +10650,21 @@ fold_binary_loc (location_t loc,
> /* Optimizations of root(...)*root(...). */
> if (fcode0 == fcode1 && BUILTIN_ROOT_P (fcode0))
> {
> tree rootfn, arg;
> tree arg00 = CALL_EXPR_ARG (arg0, 0);
> tree arg10 = CALL_EXPR_ARG (arg1, 0);
>
> /* Optimize sqrt(x)*sqrt(x) as x. */
> if (BUILTIN_SQRT_P (fcode0)
> && operand_equal_p (arg00, arg10, 0)
> - && ! HONOR_SNANS (TYPE_MODE (type)))
> + && ! HONOR_SNANS (element_mode (type)))
> return arg00;
>
> /* Optimize root(x)*root(y) as root(x*y). */
> rootfn = TREE_OPERAND (CALL_EXPR_FN (arg0), 0);
> arg = fold_build2_loc (loc, MULT_EXPR, type, arg00,
> arg10);
> return build_call_expr_loc (loc, rootfn, 1, arg);
> }
>
> /* Optimize expN(x)*expN(y) as expN(x+y). */
> if (fcode0 == fcode1 && BUILTIN_EXPONENT_P (fcode0))
> @@ -11298,21 +11298,21 @@ fold_binary_loc (location_t loc,
> }
> }
>
> t1 = distribute_bit_expr (loc, code, type, arg0, arg1);
> if (t1 != NULL_TREE)
> return t1;
> /* Simplify ((int)c & 0377) into (int)c, if c is unsigned char. */
> if (TREE_CODE (arg1) == INTEGER_CST && TREE_CODE (arg0) == NOP_EXPR
> && TYPE_UNSIGNED (TREE_TYPE (TREE_OPERAND (arg0, 0))))
> {
> - prec = TYPE_PRECISION (TREE_TYPE (TREE_OPERAND (arg0, 0)));
> + prec = element_precision (TREE_TYPE (TREE_OPERAND (arg0, 0)));
>
> wide_int mask = wide_int::from (arg1, prec, UNSIGNED);
> if (mask == -1)
> return
> fold_convert_loc (loc, type, TREE_OPERAND (arg0, 0));
> }
>
> /* Convert (and (not arg0) (not arg1)) to (not (or (arg0) (arg1))).
>
> This results in more efficient code for machines without a NOR
> @@ -11534,42 +11534,42 @@ fold_binary_loc (location_t loc,
>
> /* Optimize sin(x)/tan(x) as cos(x) if we don't care about
> NaNs or Infinities. */
> if (((fcode0 == BUILT_IN_SIN && fcode1 == BUILT_IN_TAN)
> || (fcode0 == BUILT_IN_SINF && fcode1 == BUILT_IN_TANF)
> || (fcode0 == BUILT_IN_SINL && fcode1 == BUILT_IN_TANL)))
> {
> tree arg00 = CALL_EXPR_ARG (arg0, 0);
> tree arg01 = CALL_EXPR_ARG (arg1, 0);
>
> - if (! HONOR_NANS (TYPE_MODE (TREE_TYPE (arg00)))
> - && ! HONOR_INFINITIES (TYPE_MODE (TREE_TYPE (arg00)))
> + if (! HONOR_NANS (element_mode (arg00))
> + && ! HONOR_INFINITIES (element_mode (arg00))
> && operand_equal_p (arg00, arg01, 0))
> {
> tree cosfn = mathfn_built_in (type, BUILT_IN_COS);
>
> if (cosfn != NULL_TREE)
> return build_call_expr_loc (loc, cosfn, 1, arg00);
> }
> }
>
> /* Optimize tan(x)/sin(x) as 1.0/cos(x) if we don't care about
> NaNs or Infinities. */
> if (((fcode0 == BUILT_IN_TAN && fcode1 == BUILT_IN_SIN)
> || (fcode0 == BUILT_IN_TANF && fcode1 == BUILT_IN_SINF)
> || (fcode0 == BUILT_IN_TANL && fcode1 == BUILT_IN_SINL)))
> {
> tree arg00 = CALL_EXPR_ARG (arg0, 0);
> tree arg01 = CALL_EXPR_ARG (arg1, 0);
>
> - if (! HONOR_NANS (TYPE_MODE (TREE_TYPE (arg00)))
> - && ! HONOR_INFINITIES (TYPE_MODE (TREE_TYPE (arg00)))
> + if (! HONOR_NANS (element_mode (arg00))
> + && ! HONOR_INFINITIES (element_mode (arg00))
> && operand_equal_p (arg00, arg01, 0))
> {
> tree cosfn = mathfn_built_in (type, BUILT_IN_COS);
>
> if (cosfn != NULL_TREE)
> {
> tree tmp = build_call_expr_loc (loc, cosfn, 1, arg00);
> return fold_build2_loc (loc, RDIV_EXPR, type,
> build_real (type, dconst1),
> tmp);
> @@ -12928,21 +12928,21 @@ fold_binary_loc (location_t loc,
> return fold_build2_loc (loc, TRUTH_ANDIF_EXPR, type,
> build2 (GE_EXPR, type,
> TREE_OPERAND (arg0, 0), tem),
> build2 (LE_EXPR, type,
> TREE_OPERAND (arg0, 0), arg1));
>
> /* Convert ABS_EXPR<x> >= 0 to true. */
> strict_overflow_p = false;
> if (code == GE_EXPR
> && (integer_zerop (arg1)
> - || (! HONOR_NANS (TYPE_MODE (TREE_TYPE (arg0)))
> + || (! HONOR_NANS (element_mode (arg0))
> && real_zerop (arg1)))
> && tree_expr_nonnegative_warnv_p (arg0, &strict_overflow_p))
> {
> if (strict_overflow_p)
> fold_overflow_warning (("assuming signed overflow does not occur
> "
> "when simplifying comparison of "
> "absolute value and zero"),
> WARN_STRICT_OVERFLOW_CONDITIONAL);
> return omit_one_operand_loc (loc, type,
> constant_boolean_node (true, type),
> @@ -12980,25 +12980,25 @@ fold_binary_loc (location_t loc,
> otherwise Y might be >= # of bits in X's type and thus e.g.
> (unsigned char) (1 << Y) for Y 15 might be 0.
> If the cast is widening, then 1 << Y should have unsigned type,
> otherwise if Y is number of bits in the signed shift type minus 1,
> we can't optimize this. E.g. (unsigned long long) (1 << Y) for Y
> 31 might be 0xffffffff80000000. */
> if ((code == LT_EXPR || code == GE_EXPR)
> && TYPE_UNSIGNED (TREE_TYPE (arg0))
> && CONVERT_EXPR_P (arg1)
> && TREE_CODE (TREE_OPERAND (arg1, 0)) == LSHIFT_EXPR
> - && (TYPE_PRECISION (TREE_TYPE (arg1))
> - >= TYPE_PRECISION (TREE_TYPE (TREE_OPERAND (arg1, 0))))
> + && (element_precision (TREE_TYPE (arg1))
> + >= element_precision (TREE_TYPE (TREE_OPERAND (arg1, 0))))
> && (TYPE_UNSIGNED (TREE_TYPE (TREE_OPERAND (arg1, 0)))
> - || (TYPE_PRECISION (TREE_TYPE (arg1))
> - == TYPE_PRECISION (TREE_TYPE (TREE_OPERAND (arg1, 0)))))
> + || (element_precision (TREE_TYPE (arg1))
> + == element_precision (TREE_TYPE (TREE_OPERAND (arg1, 0)))))
> && integer_onep (TREE_OPERAND (TREE_OPERAND (arg1, 0), 0)))
> {
> tem = build2 (RSHIFT_EXPR, TREE_TYPE (arg0), arg0,
> TREE_OPERAND (TREE_OPERAND (arg1, 0), 1));
> return build2_loc (loc, code == LT_EXPR ? EQ_EXPR : NE_EXPR, type,
> fold_convert_loc (loc, TREE_TYPE (arg0), tem),
> build_zero_cst (TREE_TYPE (arg0)));
> }
>
> return NULL_TREE;
> @@ -13315,32 +13315,32 @@ fold_ternary_loc (location_t loc, enum t
>
> /* If we have A op B ? A : C, we may be able to convert this to a
> simpler expression, depending on the operation and the values
> of B and C. Signed zeros prevent all of these transformations,
> for reasons given above each one.
>
> Also try swapping the arguments and inverting the conditional. */
> if (COMPARISON_CLASS_P (arg0)
> && operand_equal_for_comparison_p (TREE_OPERAND (arg0, 0),
> arg1, TREE_OPERAND (arg0, 1))
> - && !HONOR_SIGNED_ZEROS (TYPE_MODE (TREE_TYPE (arg1))))
> + && !HONOR_SIGNED_ZEROS (element_mode (arg1)))
> {
> tem = fold_cond_expr_with_comparison (loc, type, arg0, op1, op2);
> if (tem)
> return tem;
> }
>
> if (COMPARISON_CLASS_P (arg0)
> && operand_equal_for_comparison_p (TREE_OPERAND (arg0, 0),
> op2,
> TREE_OPERAND (arg0, 1))
> - && !HONOR_SIGNED_ZEROS (TYPE_MODE (TREE_TYPE (op2))))
> + && !HONOR_SIGNED_ZEROS (element_mode (op2)))
> {
> location_t loc0 = expr_location_or (arg0, loc);
> tem = fold_invert_truthvalue (loc0, arg0);
> if (tem && COMPARISON_CLASS_P (tem))
> {
> tem = fold_cond_expr_with_comparison (loc, type, tem, op2,
> op1);
> if (tem)
> return tem;
> }
> }
> @@ -14827,21 +14827,21 @@ tree_call_nonnegative_warnv_p (tree type
> CASE_INT_FN (BUILT_IN_POPCOUNT):
> CASE_INT_FN (BUILT_IN_CLZ):
> CASE_INT_FN (BUILT_IN_CLRSB):
> case BUILT_IN_BSWAP32:
> case BUILT_IN_BSWAP64:
> /* Always true. */
> return true;
>
> CASE_FLT_FN (BUILT_IN_SQRT):
> /* sqrt(-0.0) is -0.0. */
> - if (!HONOR_SIGNED_ZEROS (TYPE_MODE (type)))
> + if (!HONOR_SIGNED_ZEROS (element_mode (type)))
> return true;
> return tree_expr_nonnegative_warnv_p (arg0,
> strict_overflow_p);
>
> CASE_FLT_FN (BUILT_IN_ASINH):
> CASE_FLT_FN (BUILT_IN_ATAN):
> CASE_FLT_FN (BUILT_IN_ATANH):
> CASE_FLT_FN (BUILT_IN_CBRT):
> CASE_FLT_FN (BUILT_IN_CEIL):
> CASE_FLT_FN (BUILT_IN_ERF):
> @@ -16093,21 +16093,21 @@ fold_strip_sign_ops (tree exp)
>
> switch (TREE_CODE (exp))
> {
> case ABS_EXPR:
> case NEGATE_EXPR:
> arg0 = fold_strip_sign_ops (TREE_OPERAND (exp, 0));
> return arg0 ? arg0 : TREE_OPERAND (exp, 0);
>
> case MULT_EXPR:
> case RDIV_EXPR:
> - if (HONOR_SIGN_DEPENDENT_ROUNDING (TYPE_MODE (TREE_TYPE (exp))))
> + if (HONOR_SIGN_DEPENDENT_ROUNDING (element_mode (exp)))
> return NULL_TREE;
> arg0 = fold_strip_sign_ops (TREE_OPERAND (exp, 0));
> arg1 = fold_strip_sign_ops (TREE_OPERAND (exp, 1));
> if (arg0 != NULL_TREE || arg1 != NULL_TREE)
> return fold_build2_loc (loc, TREE_CODE (exp), TREE_TYPE (exp),
> arg0 ? arg0 : TREE_OPERAND (exp, 0),
> arg1 ? arg1 : TREE_OPERAND (exp, 1));
> break;
>
> case COMPOUND_EXPR:
> Index: gcc/ChangeLog
> ===================================================================
> --- gcc/ChangeLog (revision 217701)
> +++ gcc/ChangeLog (revision 217702)
> @@ -1,10 +1,23 @@
> +2014-11-18 Marc Glisse <[email protected]>
> +
> + * tree.c (element_mode, integer_truep): New functions.
> + * tree.h (element_mode, integer_truep): Declare them.
> + * fold-const.c (negate_expr_p, fold_negate_expr, combine_comparisons,
> + fold_cond_expr_with_comparison, fold_real_zero_addition_p,
> + fold_comparison, fold_ternary_loc, tree_call_nonnegative_warnv_p,
> + fold_strip_sign_ops): Use element_mode.
> + (fold_binary_loc): Use element_mode and element_precision.
> + * match.pd: Use integer_truep, element_mode, element_precision,
> + VECTOR_TYPE_P and build_one_cst. Extend some transformations to
> + vectors. Simplify A/-A.
> +
> 2014-11-18 Kyrylo Tkachov <[email protected]>
>
> * config/arm/arm.md (unaligned_loaddi): Use std::swap instead of
> manual swapping implementation.
> (movcond_addsi): Likewise.
> * config/arm/arm.c (arm_canonicalize_comparison): Likewise.
> (arm_select_dominance_cc_mode): Likewise.
> (arm_reload_out_hi): Likewise.
> (gen_operands_ldrd_strd): Likewise.
> (output_move_double): Likewise.
> Index: gcc/match.pd
> ===================================================================
> --- gcc/match.pd (revision 217701)
> +++ gcc/match.pd (revision 217702)
> @@ -19,21 +19,21 @@ FITNESS FOR A PARTICULAR PURPOSE. See t
> for more details.
>
> You should have received a copy of the GNU General Public License
> along with GCC; see the file COPYING3. If not see
> <http://www.gnu.org/licenses/>. */
>
>
> /* Generic tree predicates we inherit. */
> (define_predicates
> integer_onep integer_zerop integer_all_onesp integer_minus_onep
> - integer_each_onep
> + integer_each_onep integer_truep
> real_zerop real_onep real_minus_onep
> CONSTANT_CLASS_P
> tree_expr_nonnegative_p)
>
> /* Operator lists. */
> (define_operator_list tcc_comparison
> lt le eq ne ge gt unordered ordered unlt unle ungt unge uneq
> ltgt)
> (define_operator_list inverted_tcc_comparison
> ge gt ne eq lt le ordered unordered ge gt le lt ltgt
> uneq)
> (define_operator_list inverted_tcc_comparison_with_nans
> @@ -66,102 +66,104 @@ along with GCC; see the file COPYING3.
> (if (fold_real_zero_addition_p (type, @1, 1))
> (non_lvalue @0)))
>
> /* Simplify x - x.
> This is unsafe for certain floats even in non-IEEE formats.
> In IEEE, it is unsafe because it does wrong for NaNs.
> Also note that operand_equal_p is always false if an operand
> is volatile. */
> (simplify
> (minus @0 @0)
> - (if (!FLOAT_TYPE_P (type) || !HONOR_NANS (TYPE_MODE (type)))
> + (if (!FLOAT_TYPE_P (type) || !HONOR_NANS (element_mode (type)))
> { build_zero_cst (type); }))
>
> (simplify
> (mult @0 integer_zerop@1)
> @1)
>
> /* Maybe fold x * 0 to 0. The expressions aren't the same
> when x is NaN, since x * 0 is also NaN. Nor are they the
> same in modes with signed zeros, since multiplying a
> negative value by 0 gives -0, not +0. */
> (simplify
> (mult @0 real_zerop@1)
> - (if (!HONOR_NANS (TYPE_MODE (type))
> - && !HONOR_SIGNED_ZEROS (TYPE_MODE (type)))
> + (if (!HONOR_NANS (element_mode (type))
> + && !HONOR_SIGNED_ZEROS (element_mode (type)))
> @1))
>
> /* In IEEE floating point, x*1 is not equivalent to x for snans.
> Likewise for complex arithmetic with signed zeros. */
> (simplify
> (mult @0 real_onep)
> - (if (!HONOR_SNANS (TYPE_MODE (type))
> - && (!HONOR_SIGNED_ZEROS (TYPE_MODE (type))
> + (if (!HONOR_SNANS (element_mode (type))
> + && (!HONOR_SIGNED_ZEROS (element_mode (type))
> || !COMPLEX_FLOAT_TYPE_P (type)))
> (non_lvalue @0)))
>
> /* Transform x * -1.0 into -x. */
> (simplify
> (mult @0 real_minus_onep)
> - (if (!HONOR_SNANS (TYPE_MODE (type))
> - && (!HONOR_SIGNED_ZEROS (TYPE_MODE (type))
> + (if (!HONOR_SNANS (element_mode (type))
> + && (!HONOR_SIGNED_ZEROS (element_mode (type))
> || !COMPLEX_FLOAT_TYPE_P (type)))
> (negate @0)))
>
> /* Make sure to preserve divisions by zero. This is the reason why
> we don't simplify x / x to 1 or 0 / x to 0. */
> (for op (mult trunc_div ceil_div floor_div round_div exact_div)
> (simplify
> (op @0 integer_onep)
> (non_lvalue @0)))
>
> /* X / -1 is -X. */
> (for div (trunc_div ceil_div floor_div round_div exact_div)
> (simplify
> - (div @0 INTEGER_CST@1)
> - (if (!TYPE_UNSIGNED (type)
> - && wi::eq_p (@1, -1))
> + (div @0 integer_minus_onep@1)
> + (if (!TYPE_UNSIGNED (type))
> (negate @0))))
>
> /* For unsigned integral types, FLOOR_DIV_EXPR is the same as
> TRUNC_DIV_EXPR. Rewrite into the latter in this case. */
> (simplify
> (floor_div @0 @1)
> - (if (INTEGRAL_TYPE_P (type) && TYPE_UNSIGNED (type))
> + (if ((INTEGRAL_TYPE_P (type) || VECTOR_INTEGER_TYPE_P (type))
> + && TYPE_UNSIGNED (type))
> (trunc_div @0 @1)))
>
> /* Optimize A / A to 1.0 if we don't care about
> - NaNs or Infinities. Skip the transformation
> - for non-real operands. */
> + NaNs or Infinities. */
> (simplify
> (rdiv @0 @0)
> - (if (SCALAR_FLOAT_TYPE_P (type)
> - && ! HONOR_NANS (TYPE_MODE (type))
> - && ! HONOR_INFINITIES (TYPE_MODE (type)))
> - { build_real (type, dconst1); })
> - /* The complex version of the above A / A optimization. */
> - (if (COMPLEX_FLOAT_TYPE_P (type)
> - && ! HONOR_NANS (TYPE_MODE (TREE_TYPE (type)))
> - && ! HONOR_INFINITIES (TYPE_MODE (TREE_TYPE (type))))
> - { build_complex (type, build_real (TREE_TYPE (type), dconst1),
> - build_real (TREE_TYPE (type), dconst0)); }))
> + (if (FLOAT_TYPE_P (type)
> + && ! HONOR_NANS (element_mode (type))
> + && ! HONOR_INFINITIES (element_mode (type)))
> + { build_one_cst (type); }))
> +
> +/* Optimize -A / A to -1.0 if we don't care about
> + NaNs or Infinities. */
> +(simplify
> + (rdiv:c @0 (negate @0))
> + (if (FLOAT_TYPE_P (type)
> + && ! HONOR_NANS (element_mode (type))
> + && ! HONOR_INFINITIES (element_mode (type)))
> + { build_minus_one_cst (type); }))
>
> /* In IEEE floating point, x/1 is not equivalent to x for snans. */
> (simplify
> (rdiv @0 real_onep)
> - (if (!HONOR_SNANS (TYPE_MODE (type)))
> + (if (!HONOR_SNANS (element_mode (type)))
> (non_lvalue @0)))
>
> /* In IEEE floating point, x/-1 is not equivalent to -x for snans. */
> (simplify
> (rdiv @0 real_minus_onep)
> - (if (!HONOR_SNANS (TYPE_MODE (type)))
> + (if (!HONOR_SNANS (element_mode (type)))
> (negate @0)))
>
> /* If ARG1 is a constant, we can convert this to a multiply by the
> reciprocal. This does not have the same rounding properties,
> so only do this if -freciprocal-math. We can actually
> always safely do it if ARG1 is a power of two, but it's hard to
> tell if it is or not in a portable manner. */
> (for cst (REAL_CST COMPLEX_CST VECTOR_CST)
> (simplify
> (rdiv @0 cst@1)
> @@ -185,23 +187,22 @@ along with GCC; see the file COPYING3.
> (mod integer_zerop@0 @1)
> /* But not for 0 % 0 so that we can get the proper warnings and errors. */
> (if (!integer_zerop (@1))
> @0))
> /* X % 1 is always zero. */
> (simplify
> (mod @0 integer_onep)
> { build_zero_cst (type); })
> /* X % -1 is zero. */
> (simplify
> - (mod @0 INTEGER_CST@1)
> - (if (!TYPE_UNSIGNED (type)
> - && wi::eq_p (@1, -1))
> + (mod @0 integer_minus_onep@1)
> + (if (!TYPE_UNSIGNED (type))
> { build_zero_cst (type); })))
>
> /* X % -C is the same as X % C. */
> (simplify
> (trunc_mod @0 INTEGER_CST@1)
> (if (TYPE_SIGN (type) == SIGNED
> && !TREE_OVERFLOW (@1)
> && wi::neg_p (@1)
> && !TYPE_OVERFLOW_TRAPS (type)
> /* Avoid this transformation if C is INT_MIN, i.e. C == -C. */
> @@ -302,28 +303,25 @@ along with GCC; see the file COPYING3.
> (if (INTEGRAL_TYPE_P (type) && TYPE_PRECISION (type) == 1)))
> (for op (tcc_comparison truth_and truth_andif truth_or truth_orif
> truth_xor)
> (match truth_valued_p
> (op @0 @1)))
> (match truth_valued_p
> (truth_not @0))
>
> (match (logical_inverted_value @0)
> (bit_not truth_valued_p@0))
> (match (logical_inverted_value @0)
> - (eq @0 integer_zerop)
> - (if (INTEGRAL_TYPE_P (TREE_TYPE (@0)))))
> + (eq @0 integer_zerop))
> (match (logical_inverted_value @0)
> - (ne truth_valued_p@0 integer_onep)
> - (if (INTEGRAL_TYPE_P (TREE_TYPE (@0)))))
> + (ne truth_valued_p@0 integer_truep))
> (match (logical_inverted_value @0)
> - (bit_xor truth_valued_p@0 integer_onep)
> - (if (INTEGRAL_TYPE_P (TREE_TYPE (@0)))))
> + (bit_xor truth_valued_p@0 integer_truep))
>
> /* X & !X -> 0. */
> (simplify
> (bit_and:c @0 (logical_inverted_value @0))
> { build_zero_cst (type); })
> /* X | !X and X ^ !X -> 1, if X is truth-valued. */
> (for op (bit_ior bit_xor)
> (simplify
> (op:c truth_valued_p@0 (logical_inverted_value @0))
> { constant_boolean_node (true, type); }))
> @@ -486,21 +484,21 @@ along with GCC; see the file COPYING3.
> /* ~A + 1 -> -A */
> (simplify
> (plus (bit_not @0) integer_each_onep)
> (negate @0))
>
> /* (T)(P + A) - (T)P -> (T) A */
> (for add (plus pointer_plus)
> (simplify
> (minus (convert (add @0 @1))
> (convert @0))
> - (if (TYPE_PRECISION (type) <= TYPE_PRECISION (TREE_TYPE (@1))
> + (if (element_precision (type) <= element_precision (TREE_TYPE (@1))
> /* For integer types, if A has a smaller type
> than T the result depends on the possible
> overflow in P + A.
> E.g. T=size_t, A=(unsigned)429497295, P>0.
> However, if an overflow in P + A would cause
> undefined behavior, we can assume that there
> is no overflow. */
> || (INTEGRAL_TYPE_P (TREE_TYPE (@0))
> && TYPE_OVERFLOW_UNDEFINED (TREE_TYPE (@0)))
> /* For pointer types, if the conversion of A to the
> @@ -619,33 +617,33 @@ along with GCC; see the file COPYING3.
> (for icvt (convert float)
> (simplify
> (ocvt (icvt@1 @0))
> (with
> {
> tree inside_type = TREE_TYPE (@0);
> tree inter_type = TREE_TYPE (@1);
> int inside_int = INTEGRAL_TYPE_P (inside_type);
> int inside_ptr = POINTER_TYPE_P (inside_type);
> int inside_float = FLOAT_TYPE_P (inside_type);
> - int inside_vec = TREE_CODE (inside_type) == VECTOR_TYPE;
> + int inside_vec = VECTOR_TYPE_P (inside_type);
> unsigned int inside_prec = TYPE_PRECISION (inside_type);
> int inside_unsignedp = TYPE_UNSIGNED (inside_type);
> int inter_int = INTEGRAL_TYPE_P (inter_type);
> int inter_ptr = POINTER_TYPE_P (inter_type);
> int inter_float = FLOAT_TYPE_P (inter_type);
> - int inter_vec = TREE_CODE (inter_type) == VECTOR_TYPE;
> + int inter_vec = VECTOR_TYPE_P (inter_type);
> unsigned int inter_prec = TYPE_PRECISION (inter_type);
> int inter_unsignedp = TYPE_UNSIGNED (inter_type);
> int final_int = INTEGRAL_TYPE_P (type);
> int final_ptr = POINTER_TYPE_P (type);
> int final_float = FLOAT_TYPE_P (type);
> - int final_vec = TREE_CODE (type) == VECTOR_TYPE;
> + int final_vec = VECTOR_TYPE_P (type);
> unsigned int final_prec = TYPE_PRECISION (type);
> int final_unsignedp = TYPE_UNSIGNED (type);
> }
> /* In addition to the cases of two conversions in a row
> handled below, if we are converting something to its own
> type via an object of identical or wider precision, neither
> conversion is needed. */
> (if (((GIMPLE && useless_type_conversion_p (type, inside_type))
> || (GENERIC
> && TYPE_MAIN_VARIANT (type) == TYPE_MAIN_VARIANT
> (inside_type)))
> @@ -659,22 +657,22 @@ along with GCC; see the file COPYING3.
> former is wider than the latter and doesn't change the signedness
> (for integers). Avoid this if the final type is a pointer since
> then we sometimes need the middle conversion. Likewise if the
> final type has a precision not equal to the size of its mode. */
> (if (((inter_int && inside_int)
> || (inter_float && inside_float)
> || (inter_vec && inside_vec))
> && inter_prec >= inside_prec
> && (inter_float || inter_vec
> || inter_unsignedp == inside_unsignedp)
> - && ! (final_prec != GET_MODE_PRECISION (TYPE_MODE (type))
> - && TYPE_MODE (type) == TYPE_MODE (inter_type))
> + && ! (final_prec != GET_MODE_PRECISION (element_mode (type))
> + && element_mode (type) == element_mode (inter_type))
> && ! final_ptr
> && (! final_vec || inter_prec == inside_prec))
> (ocvt @0))
>
> /* If we have a sign-extension of a zero-extended value, we can
> replace that by a single zero-extension. Likewise if the
> final conversion does not change precision we can drop the
> intermediate conversion. */
> (if (inside_int && inter_int && final_int
> && ((inside_prec < inter_prec && inter_prec < final_prec
> @@ -838,26 +836,26 @@ along with GCC; see the file COPYING3.
> (simplify
> (cnd @0 (cnd @0 @1 @2) @3)
> (cnd @0 @1 @3))
> (simplify
> (cnd @0 @1 (cnd @0 @2 @3))
> (cnd @0 @1 @3))
>
> /* A ? B : B -> B. */
> (simplify
> (cnd @0 @1 @1)
> - @1))
> + @1)
>
> -/* !A ? B : C -> A ? C : B. */
> -(simplify
> - (cond (logical_inverted_value truth_valued_p@0) @1 @2)
> - (cond @0 @2 @1))
> + /* !A ? B : C -> A ? C : B. */
> + (simplify
> + (cnd (logical_inverted_value truth_valued_p@0) @1 @2)
> + (cnd @0 @2 @1)))
>
>
> /* Simplifications of comparisons. */
>
> /* We can simplify a logical negation of a comparison to the
> inverted comparison. As we cannot compute an expression
> operator using invert_tree_comparison we have to simulate
> that with expression code iteration. */
> (for cmp (tcc_comparison)
> icmp (inverted_tcc_comparison)
> @@ -869,24 +867,23 @@ along with GCC; see the file COPYING3.
> For now implement what forward_propagate_comparison did. */
> (simplify
> (bit_not (cmp @0 @1))
> (if (VECTOR_TYPE_P (type)
> || (INTEGRAL_TYPE_P (type) && TYPE_PRECISION (type) == 1))
> /* Comparison inversion may be impossible for trapping math,
> invert_tree_comparison will tell us. But we can't use
> a computed operator in the replacement tree thus we have
> to play the trick below. */
> (with { enum tree_code ic = invert_tree_comparison
> - (cmp, HONOR_NANS (TYPE_MODE (TREE_TYPE (@0)))); }
> + (cmp, HONOR_NANS (element_mode (@0))); }
> (if (ic == icmp)
> (icmp @0 @1))
> (if (ic == ncmp)
> (ncmp @0 @1)))))
> (simplify
> - (bit_xor (cmp @0 @1) integer_onep)
> - (if (INTEGRAL_TYPE_P (type))
> - (with { enum tree_code ic = invert_tree_comparison
> - (cmp, HONOR_NANS (TYPE_MODE (TREE_TYPE (@0)))); }
> - (if (ic == icmp)
> - (icmp @0 @1))
> - (if (ic == ncmp)
> - (ncmp @0 @1))))))
> + (bit_xor (cmp @0 @1) integer_truep)
> + (with { enum tree_code ic = invert_tree_comparison
> + (cmp, HONOR_NANS (element_mode (@0))); }
> + (if (ic == icmp)
> + (icmp @0 @1))
> + (if (ic == ncmp)
> + (ncmp @0 @1)))))
>