https://gcc.gnu.org/bugzilla/show_bug.cgi?id=90231

--- Comment #8 from Jakub Jelinek <jakub at gcc dot gnu.org> ---
(In reply to bin cheng from comment #7)
> The original iv that needs to be represented in the debug bind stmt is:
>  IV struct:
>    SSA_NAME:     i_18
>    Type: int
>    Base: 0
>    Step: 1
>    Biv:  Y
>    Overflowness wrto loop niter: No-overflow
> 
> While the possible candidate is:
> Candidate 8:
>   Var befor: ivtmp.11
>   Var after: ivtmp.11
>   Incr POS: before exit test
>   IV struct:
>     Type:       unsigned long
>     Base:       (unsigned long) dst_10(D)
>     Step:       4
>     Object:     (void *) dst_10(D)
>     Biv:        N
>     Overflowness wrto loop niter:       Overflow
> 
> Strictly speaking, with the above information we can't compute i_18 from
> ivtmp.11 correctly in all cases, because ivtmp.11 could overflow.  Of
> course, the overflow analysis for this case could be improved, which would
> solve the problem.  Alternatively, we could do the computation anyway: it
> may give a wrong value in some cases, but since we are in a debug stmt, a
> value that is correct in most cases is better than being optimized away.
> Does that sound sensible?
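
For illustration only, here is a guess at the shape of loop that could produce
the two IVs quoted above (this is not taken from the PR's testcase; the names
just mirror the dump):

  void
  foo (int *dst, int n)
  {
    for (int i = 0; i < n; i++)  /* original IV i_18: base 0, step 1 */
      dst[i] = 0;                /* candidate ivtmp.11: base (unsigned long) dst_10(D), step 4 */
  }

Going purely by the dumped bases and steps, the original IV would be recovered
as something like
  i = (int) (((unsigned long) ivtmp - (unsigned long) dst) / 4);
with the division by 4 reflecting the candidate's step, and the question is
whether that recovery can go wrong when ivtmp is allowed to overflow.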

I don't understand what kind of overflow you are worried about.
If the original IV didn't overflow, why would (int) ((unsigned long) ivtmp.11 -
(unsigned long) dst_10) ever not be a valid replacement for whatever was in
i_18?  Do you have an example where it would result in wrong-debug?
And yes, wrong-debug is worse than <optimized away>, but we could do things
like use a COND_EXPR in the debug_bind expression: if some condition is true,
use some expression to compute the value, otherwise signal <optimized away>.
But I don't understand why it would be needed in this case.
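
As a minimal sketch of that idea (conceptual only, not the output of any
existing pass), the guarded debug bind could look roughly like

  # DEBUG i => COND ? (int) ((unsigned long) ivtmp.11 - (unsigned long) dst_10)
                    : <optimized away>

where COND stands for whatever condition guarantees the recovery expression is
valid; when it is false, the debugger would report the value as unavailable
instead of showing a wrong one.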
