On Wed, Jul 13, 2011 at 2:15 PM, Kai Tietz <ktiet...@googlemail.com> wrote:
> 2011/7/13 Richard Guenther <richard.guent...@gmail.com>:
>> On Wed, Jul 13, 2011 at 1:08 PM, Kai Tietz <ktiet...@googlemail.com> wrote:
>>> 2011/7/13 Richard Guenther <richard.guent...@gmail.com>:
>>>> On Wed, Jul 13, 2011 at 11:04 AM, Kai Tietz <ktiet...@googlemail.com> 
>>>> wrote:
>>>>> Sorry, the TRUTH_NOT_EXPR isn't the point here at all.  The underlying
>>>>> issue is that fold-const re-introduces TRUTH_AND/OR and co.
>>>>
>>>> I'm very sure it doesn't re-construct TRUTH_ variants out of thin air
>>>> when you present it with BIT_ variants as input.
>>>
>>> Well, look into fold-const's fold_binary_loc function and see
>>>
>>>  /* ARG0 is the first operand of EXPR, and ARG1 is the second operand.
>>>
>>>     First check for cases where an arithmetic operation is applied to a
>>>     compound, conditional, or comparison operation.  Push the arithmetic
>>>     operation inside the compound or conditional to see if any folding
>>>     can then be done.  Convert comparison to conditional for this purpose.
>>>     This also optimizes non-constant cases that used to be done in
>>>     expand_expr.
>>>
>>>     Before we do that, see if this is a BIT_AND_EXPR or a BIT_IOR_EXPR,
>>>     one of the operands is a comparison and the other is a comparison, a
>>>     BIT_AND_EXPR with the constant 1, or a truth value.  In that case, the
>>>     code below would make the expression more complex.  Change it to a
>>>     TRUTH_{AND,OR}_EXPR.  Likewise, convert a similar NE_EXPR to
>>>     TRUTH_XOR_EXPR and an EQ_EXPR to the inversion of a TRUTH_XOR_EXPR.  */
>>>
>>>  if ((code == BIT_AND_EXPR || code == BIT_IOR_EXPR
>>>       || code == EQ_EXPR || code == NE_EXPR)
>>>      && ((truth_value_p (TREE_CODE (arg0))
>>>           && (truth_value_p (TREE_CODE (arg1))
>>>               || (TREE_CODE (arg1) == BIT_AND_EXPR
>>>                   && integer_onep (TREE_OPERAND (arg1, 1)))))
>>>          || (truth_value_p (TREE_CODE (arg1))
>>>              && (truth_value_p (TREE_CODE (arg0))
>>>                  || (TREE_CODE (arg0) == BIT_AND_EXPR
>>>                      && integer_onep (TREE_OPERAND (arg0, 1)))))))
>>>    {
>>>      tem = fold_build2_loc (loc, code == BIT_AND_EXPR ? TRUTH_AND_EXPR
>>>                             : code == BIT_IOR_EXPR ? TRUTH_OR_EXPR
>>>                             : TRUTH_XOR_EXPR,
>>>                             boolean_type_node,
>>>                             fold_convert_loc (loc, boolean_type_node, arg0),
>>>                             fold_convert_loc (loc, boolean_type_node, arg1));
>>>
>>>      if (code == EQ_EXPR)
>>>        tem = invert_truthvalue_loc (loc, tem);
>>>
>>>      return fold_convert_loc (loc, type, tem);
>>>    }
>>>
>>> Here TRUTH_AND/TRUTH_OR gets introduced unconditionally if the
>>> operands are truth values.  This is, btw, why you see those cases
>>> being handled.  But as soon as this part is turned off for
>>> BIT_IOR/BIT_AND, we need to do the folding for the 1-bit precision
>>> case explicitly.
>>
>> First of all this checks for a quite complex pattern - where do we pass
>> such a complex pattern from the gimple level to fold?  For the EQ/NE_EXPR
>> case forwprop might be able to feed it that, but then how does
>> it go wrong?  The above could also simply be guarded by !in_gimple_form.
>>
>> Richard.
>
> See the reassoc pass as an example

I don't see where.

> and these hacky maybe_fold_and_comparisons /
> maybe_fold_or_comparisons functions.

Yes, that case is known - but unless the folding result is sane we
drop it anyway.
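
For illustration, a minimal sketch of that pattern (the SSA name x and
the constant operands are made up; the result may be a simplified
comparison, some other tree, or NULL_TREE):

  /* Sketch only: try to fold  x < 5  &&  x < 3  into a single
     comparison.  On success this can return the simpler  x < 3;
     if nothing useful can be done it returns NULL_TREE.  */
  tree folded = maybe_fold_and_comparisons (LT_EXPR, x, five,
                                            LT_EXPR, x, three);
  if (!folded)
    return false;  /* Nothing folded - keep the original form.  */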

>  As indeed we still want to
> be able to do comparison foldings without getting back a TRUTH op.

Well, if we get back one we have canonicalize_cond_expr_cond to
make it sane again.
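
Roughly like this (a sketch, with t being a folding result as above):

  /* Sketch: canonicalize_cond_expr_cond rewrites T into a form that
     is valid as a GIMPLE condition and returns NULL_TREE if that is
     not possible - in which case the folding result is dropped.  */
  t = canonicalize_cond_expr_cond (t);
  if (!t)
    return false;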

> Additionally we have a lot of passes - like the vectorizer - which
> happily try to build new conditions at the tree level.  This is another

That's true and you promised to clean up remaining TRUTH_*
consumers / producers when doing the canonicalization to BIT_*.
At the moment they get back to BIT_* via force_gimple_operand*,
which is probably even fine.
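
For example (a sketch; c1, c2 and the insertion point gsi are
placeholders for whatever the pass has at hand):

  /* Sketch: a pass builds a new condition as a GENERIC tree...  */
  tree cond = fold_build2 (TRUTH_AND_EXPR, boolean_type_node, c1, c2);
  gimple_seq stmts = NULL;
  /* ...and re-gimplifying it turns the TRUTH_* operation into its
     BIT_* equivalent on the way.  */
  cond = force_gimple_operand (cond, &stmts, true, NULL_TREE);
  gsi_insert_seq_before (&gsi, stmts, GSI_SAME_STMT);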

> place where I saw issues and tree-cfg failures.  And last but not
> least, those truth ops might be reintroduced in gimple_fold as soon
> as we treat bitwise ops on a one-bit precision integral type as a
> truth value.

Well, I already said that treating one-bit precision as a truth value
is wrong.

Relying on fold-const.c here with its very strict type rules always
was a stretch.  If it turns out there are now cases that we want to
handle differently then we finally have to bite the bullet and do it.
There are always two options - duplicate it properly in fold_stmt
(or forwprop in case it needs to look up defs), or remove it from fold
and make sure we fold things using fold_stmt at gimplification time.
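
As a concrete illustration of the second route, the !in_gimple_form
guard suggested earlier would turn the fold-const.c block quoted above
into roughly this (a sketch; the body of the if stays unchanged):

  /* Sketch: keep the TRUTH_* rewrite for GENERIC only.  */
  if (!in_gimple_form
      && (code == BIT_AND_EXPR || code == BIT_IOR_EXPR
          || code == EQ_EXPR || code == NE_EXPR)
      && ((truth_value_p (TREE_CODE (arg0))
           && (truth_value_p (TREE_CODE (arg1))
               || (TREE_CODE (arg1) == BIT_AND_EXPR
                   && integer_onep (TREE_OPERAND (arg1, 1)))))
          || (truth_value_p (TREE_CODE (arg1))
              && (truth_value_p (TREE_CODE (arg0))
                  || (TREE_CODE (arg0) == BIT_AND_EXPR
                      && integer_onep (TREE_OPERAND (arg0, 1)))))))
    {
      /* ... as before ...  */
    }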

Richard.

> Kai
>
