https://bugs.llvm.org/show_bug.cgi?id=47262
Bug ID: 47262
Summary: vpermq+vpshufb intrinsics pessimized into 2x vpshufb+vpermq+blend
Product: libraries
Version: trunk
Hardware: PC
OS: Linux
Status: NEW
Severity: enhancement
Priority: P
Component: Backend: X86
Assignee: unassignedb...@nondot.org
Reporter: tellowkrin...@gmail.com
CC: craig.top...@gmail.com, llvm-bugs@lists.llvm.org,
llvm-...@redking.me.uk, spatel+l...@rotateright.com
Godbolt link: https://gcc.godbolt.org/z/PdheW9
The following code, when compiled with `clang -march=haswell`:
#include <immintrin.h>
__m256i test(__m256i in) {
    __m128i mask = _mm_setr_epi8(0, 4, 8, 12, 1, 5, 9, 13,
                                 2, 6, 10, 14, 3, 7, 11, 15);
    __m256i tmp = _mm256_permute4x64_epi64(in, _MM_SHUFFLE(3, 1, 2, 0));
    return _mm256_shuffle_epi8(tmp, _mm256_broadcastsi128_si256(mask));
}
unexpectedly produces the following assembly:
vpshufb ymm1, ymm0, ymmword ptr [rip + .shuffleMask1]
vpermq ymm0, ymm0, 78 # ymm0 = ymm0[2,3,0,1]
vpshufb ymm0, ymm0, ymmword ptr [rip + .shuffleMask2]
vmovdqa ymm2, ymmword ptr [rip + .blendMask]
vpblendvb ymm0, ymm0, ymm1, ymm2
compared to the expected:
vpermq ymm0, ymm0, 216 # ymm0 = ymm0[0,2,1,3]
vpshufb ymm0, ymm0, ymmword ptr [rip + .shuffleMask]
The cause appears to be instruction selection merging the two shuffles into a
single shuffle mask, then failing to find a reasonable lowering for it and
falling back to the generic "shuffle the two lanes separately and blend them
together" strategy.
In general, I think any cross-lane byte shuffle where the number of bytes
crossing lanes is divisible by 4 would be better lowered as a vpshufb to
gather, a vperm[d|q] to cross lanes, and a final vpshufb to scatter, as
sketched below.
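A minimal sketch of that three-instruction pattern, with the mask values left
as hypothetical parameters since they depend on the particular shuffle being
lowered:

    /* Proposed lowering: gather the bytes headed for each lane into
       contiguous dwords with vpshufb, move the dwords across lanes with
       vpermd, then scatter them to their final offsets with vpshufb. */
    static __m256i proposed_lowering(__m256i in, __m256i gatherMask,
                                     __m256i laneMask, __m256i scatterMask) {
        __m256i gathered = _mm256_shuffle_epi8(in, gatherMask);             /* in-lane gather */
        __m256i crossed  = _mm256_permutevar8x32_epi32(gathered, laneMask); /* vpermd: dwords across lanes */
        return _mm256_shuffle_epi8(crossed, scatterMask);                   /* in-lane scatter */
    }

(vpermd takes its indices from a register, so unlike vpermq no immediate is
needed and the lane-crossing step works for any dword permutation.)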