https://gcc.gnu.org/bugzilla/show_bug.cgi?id=63677
--- Comment #8 from Richard Biener ---
*** Bug 63679 has been marked as a duplicate of this bug. ***
Richard Biener changed:
           What    |Removed         |Added
 ----------------------------------------------------------------------------
         Status    |NEW             |RESOLVED
         Resolution|---             |
--- Comment #6 from Richard Biener ---
Author: rguenth
Date: Thu Nov 20 08:40:52 2014
New Revision: 217827
URL: https://gcc.gnu.org/viewcvs?rev=217827&root=gcc&view=rev
Log:
2014-11-20 Richard Biener
PR tree-optimization/63677
* tre
--- Comment #5 from Richard Biener ---
With the patch from PR63864 we still don't optimize:
  vect_cst_.12_23 = { 0, 1, 2, 3 };
  vect_cst_.11_32 = { 4, 5, 6, 7 };
  vectp.14_2 = &a[0];
  MEM[(int *)&a] = { 0, 1, 2, 3 };
  vectp.14_21 = &a[0
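
For reference, a minimal reproducer in the spirit of this dump (a reconstruction, not the exact testcase from the report: the array size of 8 and the a[i] = i initialization are inferred from the constant vectors { 0, 1, 2, 3 } and { 4, 5, 6, 7 }, and the function name is made up):

int a[8];

int
foo (void)
{
  int i, r = 0;

  /* The vectorizers turn this loop into the constant stores
     MEM[(int *)&a] = { 0, 1, 2, 3 }; etc. seen in the dump above.  */
  for (i = 0; i < 8; i++)
    a[i] = i;

  /* Ideally the reads here would be folded against those constant stores
     and the whole sum would become the compile-time constant 28.  */
  for (i = 0; i < 8; i++)
    r += a[i];

  return r;
}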
--- Comment #4 from Richard Biener ---
(In reply to Jakub Jelinek from comment #3)
> The problem is that the loop is first vectorized, then several passes later
> slp vectorizes the initialization, so after some cleanups we have e.g. in
> cddce2:
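
The quoted cddce2 dump is not reproduced here; as an illustration only, with hypothetical names and GNU C vector extensions standing in for the vectorized IL, the situation being described boils down to constant vector stores followed by vector loads of the same memory, which value numbering should be able to look through:

typedef int v4si __attribute__ ((vector_size (16)));

int a[8];

int
bar (void)
{
  /* Roughly what SLP produces for the initialization.  */
  *(v4si *) &a[0] = (v4si) { 0, 1, 2, 3 };
  *(v4si *) &a[4] = (v4si) { 4, 5, 6, 7 };

  /* Roughly what the loop vectorizer produces for the reads.  */
  v4si x = *(v4si *) &a[0];
  v4si y = *(v4si *) &a[4];
  v4si s = x + y;                    /* { 4, 6, 8, 10 } */

  /* If the loads are folded against the constant stores above, this
     reduces to the compile-time constant 28.  */
  return s[0] + s[1] + s[2] + s[3];
}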
Jakub Jelinek changed:
           What    |Removed         |Added
 ----------------------------------------------------------------------------
         Status    |UNCONFIRMED     |NEW
   Last reconfirmed|                |
--- Comment #2 from Andrew Pinski ---
I have seen this also.
--- Comment #1 from Tejas Belagod ---
There is similar behaviour on aarch64, so it doesn't look like a backend
issue.