Changes in directory llvm/lib/Target/X86:
README.txt updated: 1.32 -> 1.33
---
Log message:

Another high-prio selection performance bug

---
Diffs of the changes:  (+46 -0)

 README.txt |   46 ++++++++++++++++++++++++++++++++++++++++++++++
 1 files changed, 46 insertions(+)


Index: llvm/lib/Target/X86/README.txt
diff -u llvm/lib/Target/X86/README.txt:1.32 llvm/lib/Target/X86/README.txt:1.33
--- llvm/lib/Target/X86/README.txt:1.32	Mon Jan 30 18:45:37 2006
+++ llvm/lib/Target/X86/README.txt	Mon Jan 30 20:10:06 2006
@@ -314,3 +314,49 @@
 
 Think about doing i64 math in SSE regs.
 
+//===---------------------------------------------------------------------===//
+
+The DAG Isel doesn't fold the loads into the adds in this testcase.  The
+pattern selector does.  This is because the chain value of the load gets
+selected first, and the loads aren't checking to see if they are only used by
+an add.
+
+.ll:
+
+int %test(int* %x, int* %y, int* %z) {
+	%X = load int* %x
+	%Y = load int* %y
+	%Z = load int* %z
+	%a = add int %X, %Y
+	%b = add int %a, %Z
+	ret int %b
+}
+
+dag isel:
+
+_test:
+	movl 4(%esp), %eax
+	movl (%eax), %eax
+	movl 8(%esp), %ecx
+	movl (%ecx), %ecx
+	addl %ecx, %eax
+	movl 12(%esp), %ecx
+	movl (%ecx), %ecx
+	addl %ecx, %eax
+	ret
+
+pattern isel:
+
+_test:
+	movl 12(%esp), %ecx
+	movl 4(%esp), %edx
+	movl 8(%esp), %eax
+	movl (%eax), %eax
+	addl (%edx), %eax
+	addl (%ecx), %eax
+	ret
+
+This is bad for register pressure, though the dag isel is producing a
+better schedule. :)
+
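
As an aside, a rough C equivalent of the .ll testcase above (the assumption
that 'int' maps to a 32-bit load here is mine, for illustration only):

/* Hypothetical C version of the testcase: three loads feeding a chain of
   two adds, corresponding to %a = %X + %Y and %b = %a + %Z in the IR. */
int test(int *x, int *y, int *z) {
  return *x + *y + *z;
}

Running this through the C front-end should produce essentially the same IR,
which makes it easy to compare the dag isel and pattern isel output shown in
the diff.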