https://gcc.gnu.org/bugzilla/show_bug.cgi?id=121894
--- Comment #4 from Jakub Jelinek <jakub at gcc dot gnu.org> ---
(In reply to Richard Biener from comment #3)
> So
>
>   s = .DEFERRED_INIT (16, 1, &"s"[0]);
>   _1 = s.b;
>
> is OK to "CSE" to
>
>   _1 = .DEFERRED_INIT (4, 1, <the-val>);

I think so, and SRA is clearly doing that already, though just for the
members that are overwritten before the first use.

> ? But this is likely more costly, so only SRA should do this?
> Is it only -ftrivial-auto-var-init=pattern that is an issue?

.DEFERRED_INIT with a non-SSA_NAME lhs is expanded as a memset (memset to 0
for -ftrivial-auto-var-init=zero and to 0xfe for
-ftrivial-auto-var-init=pattern).  .DEFERRED_INIT with an SSA_NAME lhs is
expanded as SSA_NAME = build_zero_cst (type) for -ftrivial-auto-var-init=zero
(or if the SSA_NAME is bool), and otherwise as a constant VCEd from
0xfefefefe....
So, for SRA, if it allows SRAing the whole object (and so the
s = .DEFERRED_INIT will go away), I think it is clearly a win; whether it is
a win in other optimizations remains to be seen (but until the late uninit
pass we just don't want to fold it into the constants and propagate, because
that would kill some needed warnings).
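
For reference, a minimal testcase sketch (hypothetical, not necessarily the
one from this PR) that produces GIMPLE like the quoted one when compiled
with -O2 -ftrivial-auto-var-init=pattern:

/* Hypothetical sketch, not the PR's testcase: struct S is 16 bytes, so the
   uninitialized s gets a whole-object .DEFERRED_INIT (16, 1, &"s"[0]) that
   would expand to a 0xfe memset (=pattern) or a zero memset (=zero), while
   the read of s.b becomes _1 = s.b.  If SRA scalarizes s, the per-member
   .DEFERRED_INIT gets an SSA_NAME lhs and expands to a constant instead.  */
struct S { int a, b, c, d; };

int
foo (void)
{
  struct S s;        /* deliberately left uninitialized */
  return s.b;        /* reads 4 of the 16 pattern-initialized bytes */
}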