FYI, we noticed the changes below on

git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git sched/core
commit 638476007d13534b2ed4134bf0279ef44071140b ("sched/fair: Fix the dealing with decay_count in __synchronize_entity_decay()")
It appears that your patch increases CPU usage a little, so the unixbench
score also increases.

testbox/testcase/testparams: nhm-white/unixbench/spawn

7f1a169b88f513e3  638476007d13534b2ed4134bf0
----------------  --------------------------
         %stddev     %change         %stddev
             \          |                \
    268908 ±  0%      +4.6%     281320 ±  0%  unixbench.time.involuntary_context_switches
 1.147e+08 ±  0%      +1.5%  1.164e+08 ±  0%  unixbench.time.minor_page_faults
      3607 ±  0%      +1.1%       3648 ±  0%  unixbench.score
       324 ±  0%      +1.2%        328 ±  0%  unixbench.time.percent_of_cpu_this_job_got
       633 ±  0%      +1.2%        641 ±  0%  unixbench.time.system_time
     54100 ±  3%     +51.0%      81705 ± 21%  sched_debug.cpu#4.ttwu_local
     65678 ±  7%     +49.5%      98187 ± 16%  sched_debug.cpu#6.ttwu_local
      5624 ± 31%     +52.1%       8553 ± 11%  sched_debug.cpu#6.curr->pid
        58 ± 20%     +37.6%         80 ±  7%  sched_debug.cfs_rq[3]:/.runnable_load_avg
        61 ± 10%     +22.5%         74 ± 14%  sched_debug.cpu#2.cpu_load[3]
    218479 ±  5%     +15.4%     252190 ±  5%  sched_debug.cpu#6.ttwu_count
       102 ± 17%     -19.3%         82 ± 10%  sched_debug.cpu#4.cpu_load[2]
    423974 ±  4%     +15.6%     490072 ±  5%  sched_debug.cpu#5.avg_idle
        62 ±  4%     +16.9%         72 ±  9%  sched_debug.cpu#2.cpu_load[4]
    152298 ±  2%      +9.7%     167037 ±  3%  sched_debug.cfs_rq[5]:/.min_vruntime
    156795 ±  1%     +11.3%     174544 ±  1%  sched_debug.cfs_rq[7]:/.min_vruntime
    155535 ±  3%     +11.8%     173813 ±  2%  sched_debug.cfs_rq[6]:/.min_vruntime
      9231 ±  0%      +1.3%       9348 ±  0%  vmstat.system.in

nhm-white: Nehalem
Memory: 6G

[plot: unixbench.time.system_time]
[plot: unixbench.time.involuntary_context_switches]
[plot: kmsg.tsc:Fast_TSC_calibration_failed]

	[*] bisect-good sample
	[O] bisect-bad sample

To reproduce:

	apt-get install ruby ruby-oj
	git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
	cd lkp-tests
	bin/setup-local job.yaml  # the job file attached in this email
	bin/run-local   job.yaml

Disclaimer:
Results have been estimated based on internal Intel analysis and are
provided for informational purposes only. Any difference in system
hardware or software design or configuration may affect actual
performance.

Thanks,
Huang, Ying
---
testcase: unixbench
default-monitors:
  wait: pre-test
  uptime:
  iostat:
  vmstat:
  numa-numastat:
  numa-vmstat:
  numa-meminfo:
  proc-vmstat:
  proc-stat:
  meminfo:
  slabinfo:
  interrupts:
  lock_stat:
  latency_stats:
  softirqs:
  bdi_dev_mapping:
  diskstats:
  nfsstat:
  cpuidle:
  cpufreq-stats:
  turbostat:
  pmeter:
  sched_debug:
    interval: 10
default_watchdogs:
  watch-oom:
  watchdog:
cpufreq_governor:
commit: f876f0cff6d460f03cd937104314f366dfa92261
model: Nehalem
memory: 6G
hdd_partitions: "/dev/disk/by-id/ata-Hitachi_HDT721050SLA360_STH3D7ME0YEM5Y-part5 /dev/disk/by-id/ata-Hitachi_HDT721050SLA360_STH3D7ME0YEM5Y-part6 /dev/disk/by-id/ata-Hitachi_HDT721050SLA360_STH3D7ME0YEM5Y-part7 /dev/disk/by-id/ata-Hitachi_HDT721050SLA360_STH3D7ME0YEM5Y-part8"
swap_partitions:
rootfs_partition: "/dev/disk/by-id/ata-Hitachi_HDT721050SLA360_STH3D7ME0YEM5Y-part2"
netconsole_port: 6647
unixbench:
  test: spawn
testbox: nhm-white
tbox_group: nhm-white
kconfig: x86_64-rhel
enqueue_time: 2015-02-07 10:33:58.062824131 +08:00
head_commit: f876f0cff6d460f03cd937104314f366dfa92261
base_commit: e36f014edff70fc02b3d3d79cead1d58f289332e
branch: linux-devel/devel-hourly-2015020522
kernel: "/kernel/x86_64-rhel/f876f0cff6d460f03cd937104314f366dfa92261/vmlinuz-3.19.0-rc7-gf876f0c"
user: lkp
queue: cyclic
rootfs: debian-x86_64-2015-02-07.cgz
result_root: "/result/nhm-white/unixbench/spawn/debian-x86_64-2015-02-07.cgz/x86_64-rhel/f876f0cff6d460f03cd937104314f366dfa92261/0"
job_file: "/lkp/scheduled/nhm-white/cyclic_unixbench-spawn-x86_64-rhel-HEAD-f876f0cff6d460f03cd937104314f366dfa92261-0-20150207-2034-iuwy40.yaml"
dequeue_time: 2015-02-07 17:02:49.045648302 +08:00
nr_cpu: "$(nproc)"
job_state: finished
loadavg: 4.52 1.97 0.74 1/151 24204
start_time: '1423299793'
end_time: '1423299991'
version: "/lkp/lkp/.src-20150207-030212"
./Run spawn
_______________________________________________
LKP mailing list
l...@linux.intel.com