https://gcc.gnu.org/bugzilla/show_bug.cgi?id=103850
--- Comment #5 from Richard Biener <rguenth at gcc dot gnu.org> ---
Note the issue can be reproduced without -ffast-math as well, where the
functions are nearly identical, so I fear you are running into some
micro-architectural hazard.  Maybe

.L3:
        vmovapd %ymm2, %ymm0
        vmovapd %ymm3, %ymm1
.L2:
        vbroadcastsd    (%rsi), %ymm2
        vbroadcastsd    8(%rsi), %ymm4
        addq    $32, %rdx
        addq    $16, %rsi
        vmovapd %ymm2, %ymm3
        vfmadd132pd     -32(%rsp), %ymm4, %ymm2
        vfmadd132pd     %ymm15, %ymm4, %ymm3
        vbroadcastsd    -24(%rdx), %ymm4
        vfmadd132pd     %ymm1, %ymm14, %ymm3
        vmovapd %ymm1, %ymm14
        vfmadd231pd     %ymm1, %ymm4, %ymm11
        vfmadd231pd     %ymm0, %ymm4, %ymm7
        vbroadcastsd    -8(%rdx), %ymm4
        vfmadd132pd     %ymm0, %ymm13, %ymm2
        vbroadcastsd    -32(%rdx), %ymm13
        vfmadd231pd     %ymm1, %ymm4, %ymm9
        vfmadd231pd     %ymm0, %ymm4, %ymm5
        vfmadd231pd     %ymm1, %ymm13, %ymm12
        vfmadd231pd     %ymm0, %ymm13, %ymm8
        vbroadcastsd    -16(%rdx), %ymm13
        vfmadd231pd     %ymm1, %ymm13, %ymm10
        vfmadd231pd     %ymm0, %ymm13, %ymm6
        vmovapd %ymm0, %ymm13
        cmpq    %rdx, %rax
        jne     .L3

is easier to handle, since there's only one data dependence of the
followup loads on the addq $32, %rdx, but in the (slow)

.L8:
        vmovapd %ymm2, %ymm0
        vmovapd %ymm3, %ymm1
.L7:
        vbroadcastsd    8(%rdx), %ymm2
        vbroadcastsd    (%rdx), %ymm3
        addq    $16, %rsi
        addq    $32, %rdx
        vbroadcastsd    -8(%rsi), %ymm4
        vfmadd231pd     %ymm1, %ymm2, %ymm11
        vfmadd231pd     %ymm0, %ymm2, %ymm7
        vbroadcastsd    -8(%rdx), %ymm2
        vfmadd231pd     %ymm1, %ymm3, %ymm12
        vfmadd231pd     %ymm0, %ymm3, %ymm8
        vbroadcastsd    -16(%rdx), %ymm3
        vfmadd231pd     %ymm1, %ymm2, %ymm9
        vfmadd231pd     %ymm0, %ymm2, %ymm5
        vbroadcastsd    -16(%rsi), %ymm2
        vfmadd231pd     %ymm1, %ymm3, %ymm10
        vfmadd231pd     %ymm0, %ymm3, %ymm6
        vmovapd %ymm2, %ymm3
        vfmadd132pd     -32(%rsp), %ymm4, %ymm2
        vfmadd132pd     %ymm15, %ymm4, %ymm3
        vfmadd132pd     %ymm1, %ymm14, %ymm3
        vmovapd %ymm1, %ymm14
        vfmadd132pd     %ymm0, %ymm13, %ymm2
        vmovapd %ymm0, %ymm13
        cmpq    %rsi, %rax
        jne     .L8

both increments impose dependences on the following loads.