https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101876

--- Comment #3 from Marius Hillenbrand <mhillen at linux dot ibm.com> ---
The issue is caused by inconsistent alignment of vector types between (a) the
types expected or returned by the builtin functions and (b) the typedef in the
example code. In the failing cases, there is a mismatch between 16-byte
(natural) alignment and 8-byte (ABI-compliant) alignment; in the successful
cases, both sides use either 16-byte or 8-byte alignment.

When gcc starts, s390_init_builtins defines the builtin functions, including
the vector builtins, together with their type information. While those builtin
types are being created, if gcc was started with -march < z13,
s390_vector_alignment defers to default_vector_alignment, which yields natural
alignment, i.e. 16-byte alignment for the 16-byte vector types. Once the
pragma switches to an arch level with VX support (TARGET_VX and TARGET_VX_ABI
are true), s390_vector_alignment selects 8-byte alignment in accordance with
the vector ABI, so vector types created after the pragma no longer match the
types used by the builtins.

The C++ example (https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101877) ICEs
because two structurally identical vector types end up with different
canonical types (as a result of the different alignment?!).
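
To make the mismatch concrete, here is a minimal standalone sketch of the
alignment choice as described above. It only models the behavior; it is not
the actual s390_vector_alignment implementation, and the names target_vx_abi
and vector_alignment_sketch are made up for illustration:

#include <stdbool.h>

/* Hypothetical stand-in for the backend state referenced above; in the real
   backend this corresponds to the TARGET_VX_ABI flag that the pragma flips
   when selecting an arch level with VX support.  */
bool target_vx_abi;   /* false at startup with -march < z13 */

/* Alignment (in bits) chosen for a vector type of the given size (in bits),
   per the description above: natural alignment without the vector ABI,
   capped at 64 bits (8 bytes) once the vector ABI is in effect.  */
unsigned int
vector_alignment_sketch (unsigned int type_size_in_bits)
{
  if (!target_vx_abi)
    return type_size_in_bits;          /* natural: 128 bits for 16-byte vectors */
  return type_size_in_bits < 64
         ? type_size_in_bits
         : 64;                         /* vector ABI: at most 8 bytes */
}

With this model, the builtin types created at startup (target_vx_abi still
false) get 128-bit alignment for the 16-byte vectors, while a typedef created
after the pragma (target_vx_abi true) gets only 64-bit alignment; that is
exactly the mismatch that distinguishes the failing cases from the successful
ones.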
--- Comment #3 from Marius Hillenbrand <mhillen at linux dot ibm.com> --- The issue is caused by inconsistent alignment of vector_types between the types (a) expected or returned by builtin functions and (b) the typedef in the example code. In the failing cases, there's a mismatch of 16-Byte and 8-Byte (compliant with the ABI) alignment. In the successful cases, either both use 16-Byte or 8-Byte alignment. When gcc starts, s390_init_builtins defines builtin functions with their corresponding type information. That includes vector builtins. While creating the builtin types for these builtins, when gcc starts with -march < z13, then s390_vector_alignment defers to default_vector_alignment, which results in natural alignment. That results in 16-Byte alignment of the 16-Byte vectors. Once the pragma switched to an arch level with VX support (TARGET_VX and TARGET_VX_ABI are true), then s390_vector_alignment selects 8-Byte alignment in accordance with the vector ABI. The C++ example (https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101877) ICEs because two structurally identical vector types have different canonical types (as a result of the different alignment?!).