"Aneesh Kumar K.V" <aneesh.ku...@linux.vnet.ibm.com> writes:
> Michael Ellerman <m...@ellerman.id.au> writes:
> ....
> ....
>
> With patch:
> sys: 0m11.3258
>
> i.e., a -0.7% impact
>
> If that impact is too high, we could possibly put that tracepoint
> within #ifdef CONFIG_DEBUG_VM?

Since the ebizzy run results were not stable, I did a micro-benchmark to
measure this, and the observed difference is within the run-to-run
variance of the test. I made sure we don't have context switches during
the runs; if I try to generate a large number of page faults, we do end
up with context switches. For example:

Without patch:
--------------
[root@qemu-pr-host trace-fault]# bash run

 Performance counter stats for './a.out 3000 300':

               643      page-faults               #    0.089 M/sec
          7.236562      task-clock (msec)         #    0.928 CPUs utilized
         2,179,213      stalled-cycles-frontend   #    0.00% frontend cycles idle
        17,174,367      stalled-cycles-backend    #    0.00% backend cycles idle
                 0      context-switches          #    0.000 K/sec

       0.007794658 seconds time elapsed

[root@qemu-pr-host trace-fault]#

With patch:
-----------
[root@qemu-pr-host trace-fault]# bash run

 Performance counter stats for './a.out 3000 300':

               643      page-faults               #    0.089 M/sec
          7.233746      task-clock (msec)         #    0.921 CPUs utilized
                 0      context-switches          #    0.000 K/sec

       0.007854876 seconds time elapsed

 Performance counter stats for './a.out 3000 300':

               643      page-faults               #    0.087 M/sec
               649      powerpc:hash_fault        #    0.087 M/sec
          7.430376      task-clock (msec)         #    0.938 CPUs utilized
         2,347,174      stalled-cycles-frontend   #    0.00% frontend cycles idle
        17,524,282      stalled-cycles-backend    #    0.00% backend cycles idle
                 0      context-switches          #    0.000 K/sec

       0.007920284 seconds time elapsed

[root@qemu-pr-host trace-fault]#
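For reference, a micro-benchmark along these lines is essentially the
following sketch. The actual a.out source is not part of this mail, so the
argument semantics below (<npages> <passes>) are an assumption; the point
is just to take a known number of minor faults in a run short enough to
finish without being scheduled out:

    /* Hypothetical reconstruction of the page-fault micro-benchmark.
     * Usage: ./a.out <npages> <passes> (argument meanings assumed).
     * Maps an anonymous region and touches each page; only the first
     * pass actually faults, so the fault count is known in advance. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
            long npages = argc > 1 ? atol(argv[1]) : 1000;
            long passes = argc > 2 ? atol(argv[2]) : 1;
            long psz = sysconf(_SC_PAGESIZE);
            long i, j;
            char *p;

            p = mmap(NULL, npages * psz, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (p == MAP_FAILED) {
                    perror("mmap");
                    return 1;
            }

            /* First touch of each page takes a fault; later passes
             * just re-touch already-populated pages. */
            for (i = 0; i < passes; i++)
                    for (j = 0; j < npages; j++)
                            p[j * psz] = 1;

            munmap(p, npages * psz);
            return 0;
    }

The "run" script would then be something like
"perf stat -e page-faults,task-clock,context-switches ./a.out 3000 300",
with -e powerpc:hash_fault added for the patched kernel.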
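On the CONFIG_DEBUG_VM suggestion above: if the overhead did turn out to
matter, the guard would be a one-liner around the tracepoint call in the
hash fault path. A sketch, assuming the trace_hash_fault() call this
series adds in hash_page_mm():

    #ifdef CONFIG_DEBUG_VM
            /* Only emit the tracepoint on CONFIG_DEBUG_VM kernels. */
            trace_hash_fault(ea, access, trap);
    #endif

That said, a disabled tracepoint already compiles down to a patched-out
branch, which is likely why the measured difference stays within the
run-to-run variance here.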