https://bugs.llvm.org/show_bug.cgi?id=45378
Bug ID: 45378
Summary: [X86] recent codegen regression for vector load and reduce
Product: libraries
Version: trunk
Hardware: PC
OS: Linux
Status: NEW
Severity: enhancement
Priority: P
Component: Backend: X86
Assignee: unassignedb...@nondot.org
Reporter: fedor.v.serg...@gmail.com
CC: craig.top...@gmail.com, llvm-bugs@lists.llvm.org,
llvm-...@redking.me.uk, spatel+l...@rotateright.com
The following IR testcase started producing suboptimal codegen on recent X86
hardware (Haswell and later):
] cat load-and-reduce.ll
declare i64 @llvm.experimental.vector.reduce.or.v2i64(<2 x i64>)
declare void @TrapFunc(i64)
define void @parseHeaders(i64* %ptr) {
  %vptr = bitcast i64* %ptr to <2 x i64>*
  %vload = load <2 x i64>, <2 x i64>* %vptr, align 8
  %vreduce = call i64 @llvm.experimental.vector.reduce.or.v2i64(<2 x i64> %vload)
  %vcheck = icmp eq i64 %vreduce, 0
  br i1 %vcheck, label %ret, label %trap

trap:
  %v2 = extractelement <2 x i64> %vload, i32 1
  call void @TrapFunc(i64 %v2)
  ret void

ret:
  ret void
}
] bin/llc -filetype=asm load-and-reduce.ll -mtriple=x86_64-unknown-linux-gnu -mcpu=skylake -o - | grep -A8 movdq
vmovdqu (%rdi), %xmm0
vpbroadcastq 8(%rdi), %xmm1
vpor %xmm1, %xmm0, %xmm1
vmovq %xmm1, %rax
testq %rax, %rax
je .LBB0_2
# %bb.1: # %trap
vpextrq $1, %xmm0, %rdi
callq TrapFunc
]
There are two loads here, which does not seem like an efficient solution.
Note that for ivybridge and older targets it generates load+shuffle instead of
load+load, which looks like better codegen:
] bin/llc -filetype=asm load-and-reduce.ll -mtriple=x86_64-unknown-linux-gnu -mcpu=ivybridge -o - | grep -A8 movdq
vmovdqu (%rdi), %xmm0
vpshufd $78, %xmm0, %xmm1 # xmm1 = xmm0[2,3,0,1]
vpor %xmm1, %xmm0, %xmm1
vmovq %xmm1, %rax
testq %rax, %rax
je .LBB0_2
# %bb.1: # %trap
vpextrq $1, %xmm0, %rdi
callq TrapFunc
]
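For reference, the vpor/vmovq/testq tail that both outputs share comes from the
reduce intrinsic itself, which expands to roughly the following IR (a sketch of
the high-lane OR expansion, not output from this report):

  %high    = shufflevector <2 x i64> %vload, <2 x i64> undef, <2 x i32> <i32 1, i32 undef>
  %or      = or <2 x i64> %vload, %high            ; lane 0 now holds vload[0] | vload[1]
  %vreduce = extractelement <2 x i64> %or, i32 0   ; extracted with vmovq

The ivybridge output implements %high with a register shuffle (vpshufd) of the
vector it already loaded; the skylake output instead reloads the scalar from
memory (vpbroadcastq).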
This appears to be a rather recent change in behavior, caused by
commit e48b536be66b60b05fa77b85258e6cf2ec417220
Author: Sanjay Patel <spa...@rotateright.com>
Date: Sun Feb 16 10:32:56 2020 -0500
[x86] form broadcast of scalar memop even with >1 use
----
Before that commit, all X86 targets generated the same load+shuffle sequence.
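To illustrate the effect of that commit (a conceptual sketch in IR; the combine
itself runs on the SelectionDAG, not on IR): the lane-1 shuffle feeding the
reduction is a shuffle of a load, and the commit now forms a fresh broadcast
load from memory for such shuffles even when the loaded vector has another use
(here, the extractelement in %trap), which is where the second load comes from:

  ; before: one load, two uses
  %vload = load <2 x i64>, <2 x i64>* %vptr, align 8
  %high  = shufflevector <2 x i64> %vload, <2 x i64> undef, <2 x i32> <i32 1, i32 undef>
  %v2    = extractelement <2 x i64> %vload, i32 1

  ; after (conceptually): the shuffle is replaced by a reload of the scalar,
  ; but %vload stays live for the extractelement
  %vload = load <2 x i64>, <2 x i64>* %vptr, align 8     ; vmovdqu (%rdi), %xmm0
  %p1    = getelementptr inbounds i64, i64* %ptr, i64 1
  %s1    = load i64, i64* %p1, align 8                   ; vpbroadcastq 8(%rdi), %xmm1
  %ins   = insertelement <2 x i64> undef, i64 %s1, i32 0
  %high  = shufflevector <2 x i64> %ins, <2 x i64> undef, <2 x i32> zeroinitializer
  %v2    = extractelement <2 x i64> %vload, i32 1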