================
@@ -159,6 +159,20 @@ AMDGPU Support

 X86 Support
 ^^^^^^^^^^^

+- The MMX vector intrinsic functions from ``*mmintrin.h`` which
+  operate on `__m64` vectors, such as ``_mm_add_pi8``, have been
+  reimplemented to use the SSE2 instruction-set and XMM registers
+  unconditionally. These intrinsics are therefore *no longer
+  supported* if MMX is enabled without SSE2 -- either from targeting
+  CPUs from the Pentium-MMX through the Pentium 3, or via explicitly
+  via passing arguments such as ``-mmmx -mno-sse2``.
----------------
RKSimon wrote:
"or explicitly via passing arguments" ? https://github.com/llvm/llvm-project/pull/96540 _______________________________________________ cfe-commits mailing list cfe-commits@lists.llvm.org https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-commits