https://gcc.gnu.org/bugzilla/show_bug.cgi?id=121894

--- Comment #5 from qinzhao at gcc dot gnu.org ---
(In reply to Jakub Jelinek from comment #0)
> Consider
> struct S { int a, b, c, d; };
> void bar (int, int, int, int);
> 
> void
> foo ()
> {
>   S s;
>   s.a = 1;
>   s.c = 2;
>   s.d = 3;
>   s.a++;
>   s.c++;
>   s.d++;
>   bar (s.a, s.b, s.c, s.d);
> }
>

When one more reference to s.b is added to the source code, e.g.:
> void
> foo ()
> {
>   S s;
>   s.a = 1;
>   s.c = 2;
>   s.d = 3;
>   s.a++;
>   s.b++;
>   s.c++;
>   s.d++;
>   bar (s.a, s.b, s.c, s.d);
> }

SRA decides to scalarize s.b and then deletes the call to .DEFERRED_INIT for
the whole object, so the final optimized code is:

  s$b_7 = .DEFERRED_INIT (4, 1, &"s"[0]);
  _2 = s$b_7 + 1;
  bar (2, _2, 3, 4);

So the question is: does SRA analyze this correctly when s.b++ is not in the
source code?
