------- Comment #7 from amylaar at gcc dot gnu dot org 2009-09-02 12:27 -------
(In reply to comment #0)
> 3.1. For my needs, I have patched loop-invariant.c by adding the following
> function to explicitly record all CLOBBER references from INSN that match USE
> references:
> static void record_clobber_uses_workaround (rtx insn, struct invariant *inv,
>                                             struct df_ref *use)
> {
>   rtx pat = PATTERN (insn);
>
>   if (GET_CODE (pat) == PARALLEL)
>     {
>       int len = XVECLEN (pat, 0);
>       int idx;
>
>       for (idx = 0; idx < len; idx++)
>         {
>           rtx subexp = XVECEXP (pat, 0, idx);
>
>           if (GET_CODE (subexp) == CLOBBER
>               && XEXP (subexp, 0) == *DF_REF_REAL_LOC (use))
>             record_use (inv->def, &XEXP (subexp, 0), DF_REF_INSN (use));
>         }
>     }
> }
> The record_uses() function got patched accordingly:
>       if (inv)
> +       {
>           record_use (inv->def, DF_REF_REAL_LOC (use), DF_REF_INSN (use));
> +         record_clobber_uses_workaround (insn, inv, use);
> +       }
This change is incorrect because a register ceases to be a loop invariant
if it is clobbered.
> 3.2. I would recommend re-analyzing the more common case, where a single
> instruction both uses and defines the same register, and deciding whether the
> loop optimizations in loop-invariant.c should move such a register. The bug
> does not seem to affect most architectures; however, specific INSN patterns,
> as described above, do trigger it. Perhaps a common function returning the
> list of all "match_dup"-ed references for a given INSN and operand could
> detect such cases in a uniform way.
That would not be general enough. You could also have an insn predicate which
requires specific correlations and/or properties of the operands.
Using validate_change, as I did in my patch, also covers these contingencies.
--
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=41188