https://gcc.gnu.org/bugzilla/show_bug.cgi?id=103997
--- Comment #7 from avieira at gcc dot gnu.org --- Hmm, thinking out loud here. As vector sizes (or ISAs) change, vectorization strategies could indeed change. The best example I can think of is something like rounding, where you might need to do operations in higher precision. Some targets could potentially support instructions that widen, round and narrow again in a single instruction at one size + ISA combination but not at another, which means some combinations would have a 'higher' element size mode in there where others don't. But that assumes the vectorizer would represent such a 'widen + round + narrow' instruction as a single pattern, hiding the 'higher precision' elements, which as far as I know doesn't exist right now. There may be other cases I can't think of, of course.

We could always be even more conservative and only skip if the highest possible element size for the current vector size + ISA would lead to a mode with NUNITS greater than or equal to the current vector mode's. Or ... just never skip a mode; I don't have a good feeling for how much that would cost compile-time wise, though.