On Sat, Feb 21, 2015 at 10:31:50AM +0100, Ingo Molnar wrote:
> So it would be nice to test this on at least one reasonably old (but
> not uncomfortably old - say 5 years old) system, to get a feel for
> what kind of performance impact it has there.

Yeah, this is exactly what Andy and I were talking about yesterday on
IRC. So let's measure our favourite workload - the kernel build! :-) My
assumption is that libc uses SSE for memcpy and thus the FPU will be
used. (I'll trace FPU-specific PMCs later to confirm).
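
(To make that concrete - a trivial userspace sketch, nothing from the kernel
tree: a biggish memcpy() should be enough to make glibc pick an SSE variant,
i.e. the task ends up with live FPU state without ever doing explicit FP math.
Easy to verify by disassembling or single-stepping memcpy on the box in
question.)

/* toy example only: a large copy which glibc will most likely route
 * through an SSE/XMM-based memcpy, dirtying the task's FPU state */
#include <stdlib.h>
#include <string.h>

int main(void)
{
	size_t sz = 1 << 20;		/* 1MB, well past the small-copy paths */
	char *src = malloc(sz), *dst = malloc(sz);

	if (!src || !dst)
		return 1;

	memset(src, 0xaa, sz);
	memcpy(dst, src, sz);		/* likely uses XMM regs -> FPU state is live */

	return dst[4096] == (char)0xaa ? 0 : 1;
}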

The machine is an AMD F10h, which should be 5-10 years old depending on what
you're looking at (uarch, revision, ...).

Numbers look good to me: we get a very small improvement and the rest stays
the same. Which would mean that killing lazy FPU brings no slowdown (if
anything, a tiny win), but a huge improvement in code quality and in how we
handle the FPU state, by getting rid of the laziness...
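
(For reference, a toy model of the difference - emphatically not the real
arch/x86 code, just to show where the extra memory traffic with eager
switching comes from: lazy switching defers the state save/restore until a
task actually touches the FPU, via CR0.TS and the #NM trap, while eager
switching copies the whole state area out/in on every context switch.)

/*
 * Toy model of lazy vs. eager FPU switching - NOT the real kernel code.
 * xsave_area[] stands in for the real save area and memcpy() for
 * FXSAVE/FXRSTOR; the point is only the difference in memory traffic
 * per context switch.
 */
#include <stdbool.h>
#include <string.h>

#define FPU_STATE_SIZE 512			/* legacy FXSAVE area size */

struct task {
	unsigned char xsave_area[FPU_STATE_SIZE];
	bool fpu_used;				/* did this task touch the FPU? */
};

static unsigned char hw_fpu_regs[FPU_STATE_SIZE];	/* "the CPU's FPU registers" */

/* eager: unconditionally save the old state and load the new one */
static void eager_switch(struct task *prev, struct task *next)
{
	memcpy(prev->xsave_area, hw_fpu_regs, FPU_STATE_SIZE);
	memcpy(hw_fpu_regs, next->xsave_area, FPU_STATE_SIZE);
}

/* lazy: save only if the outgoing task used the FPU; the incoming
 * task's restore is deferred until it traps on first FPU use (#NM) */
static void lazy_switch(struct task *prev, struct task *next)
{
	if (prev->fpu_used)
		memcpy(prev->xsave_area, hw_fpu_regs, FPU_STATE_SIZE);
	(void)next;				/* restored lazily, not here */
}

int main(void)
{
	struct task a = { .fpu_used = true }, b = { .fpu_used = false };

	eager_switch(&a, &b);			/* always two 512-byte copies */
	lazy_switch(&b, &a);			/* no copy: b never used the FPU */
	return 0;
}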

IPC is the same, branch misses are *down* a bit, cache misses go up a bit
(probably because we're shuffling the FPU state out to memory more often),
page faults go down and runtime drops by about half a second:

plain 3.19:
==========

perf stat -a -e task-clock,cycles,instructions,branch-misses,cache-misses,faults,context-switches,migrations \
          --repeat 10 --sync --pre ~/bin/pre-build-kernel.sh make -s -j12

 Performance counter stats for 'system wide' (10 runs):

    1408897.576594      task-clock (msec)         #    6.003 CPUs utilized            ( +-  0.15% ) [100.00%]
 3,137,565,760,188      cycles                    #    2.227 GHz                      ( +-  0.02% ) [100.00%]
 2,849,228,161,721      instructions              #    0.91  insns per cycle          ( +-  0.00% ) [100.00%]
    32,391,188,891      branch-misses             #   22.990 M/sec                    ( +-  0.02% ) [100.00%]
    27,879,813,595      cache-misses              #   19.788 M/sec                    ( +-  0.01% )
        27,195,402      faults                    #    0.019 M/sec                    ( +-  0.01% ) [100.00%]
         1,293,241      context-switches          #    0.918 K/sec                    ( +-  0.09% ) [100.00%]
            69,548      migrations                #    0.049 K/sec                    ( +-  0.22% )

     234.681331200 seconds time elapsed                                               ( +-  0.15% )


eagerfpu=ENABLE
===============

 Performance counter stats for 'system wide' (10 runs):

    1405208.771580      task-clock (msec)         #    6.003 CPUs utilized            ( +-  0.19% ) [100.00%]
 3,137,381,829,748      cycles                    #    2.233 GHz                      ( +-  0.03% ) [100.00%]
 2,849,059,336,718      instructions              #    0.91  insns per cycle          ( +-  0.00% ) [100.00%]
    32,380,999,636      branch-misses             #   23.044 M/sec                    ( +-  0.02% ) [100.00%]
    27,884,281,327      cache-misses              #   19.844 M/sec                    ( +-  0.01% )
        27,193,985      faults                    #    0.019 M/sec                    ( +-  0.01% ) [100.00%]
         1,293,300      context-switches          #    0.920 K/sec                    ( +-  0.08% ) [100.00%]
            69,791      migrations                #    0.050 K/sec                    ( +-  0.18% )

     234.066525648 seconds time elapsed                                               ( +-  0.19% )


-- 
Regards/Gruss,
    Boris.

ECO tip #101: Trim your mails when you reply.