https://bugs.llvm.org/show_bug.cgi?id=40359
Bug ID: 40359
Summary: [X86][SSE] Memory fold scalar unary ops with zero register passthrough
Product: libraries
Version: trunk
Hardware: PC
OS: Windows NT
Status: NEW
Severity: enhancement
Priority: P
Component: Backend: X86
Assignee: unassignedb...@nondot.org
Reporter: llvm-...@redking.me.uk
CC: andrea.dibia...@gmail.com, craig.top...@gmail.com,
llvm-bugs@lists.llvm.org, llvm-...@redking.me.uk,
spatel+l...@rotateright.com
https://godbolt.org/z/uPs8BV
We currently do this:
  vmovss  (%rdi), %xmm0           # xmm0 = mem[0],zero,zero,zero
  vmovss  (%rsi), %xmm1           # xmm1 = mem[0],zero,zero,zero
  vsqrtss %xmm0, %xmm0, %xmm0
  vsqrtss %xmm1, %xmm1, %xmm1
  vaddss  %xmm1, %xmm0, %xmm0
  retq
but by folding the loads into the sqrt instructions and passing a single zeroed
register through, we can reduce register pressure whenever the zero has
multiple uses; even when the zero register isn't reused, there's no regression:
  vxorps  %xmm1, %xmm1, %xmm1
  vsqrtss (%rdi), %xmm1, %xmm0
  vsqrtss (%rsi), %xmm1, %xmm1
  vaddss  %xmm1, %xmm0, %xmm0
  retq
This is really about AVX-encoded (VEX) instructions, but I can't see any reason
not to do the same for the older SSE instructions as well.
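For the non-VEX encoding there is no separate passthrough operand: the memory
form of sqrtss writes the low lane and leaves the destination's upper lanes
unchanged, so the zero would have to be copied into the destination first. A
minimal sketch of what the equivalent SSE sequence could look like
(illustrative only, not actual compiler output):

  xorps   %xmm1, %xmm1            # one shared zero register (zeroing idiom)
  movaps  %xmm1, %xmm0            # seed the destination with zeros
  sqrtss  (%rdi), %xmm0           # low lane = sqrt(mem); upper lanes stay zero
  sqrtss  (%rsi), %xmm1           # reuses the same zeroed register
  addss   %xmm1, %xmm0
  retq

The extra movaps means the SSE form doesn't save an instruction over the
movss+sqrtss pairs, but the loads still fold and both scalar ops read from a
register whose upper lanes are known to be zero.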