https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101523
--- Comment #24 from Segher Boessenkool <segher at gcc dot gnu.org> ---
(In reply to Andreas Krebbel from comment #21)
> Wouldn't it in this particular case be possible to recognize already in
> try_combine that separating the move out of the parallel cannot lead to
> additional optimization opportunities? To me it looks like we are just
> recreating the situation we had before merging the INSNs into a parallel.
> Is there a situation where this could lead to any improvement in the end?

It might be possible.  It's not trivial at all though, esp. if you
consider other patterns, other targets, everything.  Anything that
grossly reduces what we try will not fly.

This testcase is very degenerate; if we can recognise something about
that and make combine handle it better, that could be done.  Or I'll do
my proposed "do not try more than 40 billion things" patch.

As it is now, combine only ever reconsiders anything if it *did* make
changes.  So, if you see it reconsidering things a lot, you also see it
making a lot of changes.  And all those changes make for materially
better generated code (that is always tested by combine before the
changes are made).  Changing things so that combine makes fewer changes
directly means asking it to optimise less well.