https://gcc.gnu.org/bugzilla/show_bug.cgi?id=90307
Richard Biener <rguenth at gcc dot gnu.org> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
           Keywords|                            |missed-optimization
             Status|UNCONFIRMED                 |NEW
   Last reconfirmed|                            |2019-05-02
          Component|c++                         |tree-optimization
     Ever confirmed|0                           |1

--- Comment #2 from Richard Biener <rguenth at gcc dot gnu.org> ---
I think SRA split this according to the external_type layout even though the
accesses use the internal_type layout.  It does so for the full
rematerialization of the aggregate; the scalarization itself is triggered by
the later field accesses.  Doesn't look very clever overall.

@@ -24,7 +38,13 @@
   s.u.internal.size = 5;
   MEM[(struct &)&x] ={v} {CLOBBER};
   x.u.internal = MEM[(const struct sstring &)&s].u.internal;
+  x$u$external$str_24 = MEM[(union contents *)&s];
+  x$u$external$size_64 = MEM[(union contents *)&s + 8B];
+  x$u$internal$size_65 = MEM[(union contents *)&s + 15B];
   MEM[(struct &)&D.2505] ={v} {CLOBBER};
+  MEM[(union contents *)&x] = x$u$external$str_24;
+  MEM[(union contents *)&x + 8B] = x$u$external$size_64;
+  MEM[(union contents *)&x + 15B] = x$u$internal$size_65;
   D.2505.u.internal = MEM[(const struct sstring &)&x].u.internal;
   use (&D.2505);
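
For reference, a minimal sketch of the kind of type the dump suggests.  This
is reconstructed only from the field offsets above (str at 0, external size at
+8B, internal size at +15B) and the external_type/internal_type names used in
the comment; the actual reduced testcase attached to this PR may differ.

  // Hypothetical reconstruction of the testcase's types, inferred from the
  // GIMPLE dump above; not the exact testcase from the PR.
  #include <cstddef>

  struct sstring {
      struct external_type {
          char        *str;      // offset 0
          std::size_t  size;     // offset 8
      };
      struct internal_type {
          char          str[15]; // offset 0
          unsigned char size;    // offset 15
      };
      union contents {
          external_type external;
          internal_type internal;
      } u;

      sstring() = default;
      // Copies go through the internal layout, matching
      // "x.u.internal = MEM[...&s].u.internal" in the dump.
      sstring(const sstring &other) { u.internal = other.u.internal; }
  };

  void use(sstring *);

  void test() {
      sstring s;
      s.u.internal.size = 5;   // write via the internal_type layout

      sstring x = s;           // copy constructor reads s via u.internal
      sstring d = x;           // second copy; SRA scalarizes x here
      use(&d);
  }

With a layout like this, the copies read and write the union through
u.internal, while the scalar replacements SRA creates for x
(x$u$external$str, x$u$external$size plus x$u$internal$size) follow the
external layout as well, so the diff above just adds loads and stores that
rematerialize x in memory instead of removing the intermediate copy.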