On 03/13, Thorsten Leemhuis wrote:
> @Chao Yu/@Jaegeuk Kim: I'm considering adding this to the regressions
> report for 4.11; or is there a reason why it shouldn't be considered a
> regression? Ciao, Thorsten

Hi,

I'm planning to submit f2fs updates for 4.11-rcX, including a patch that
I expect resolves this issue as well.

https://lkml.org/lkml/2017/3/7/813
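
For background, the offending commit tracks free node IDs (nids) in an
in-memory bitmap, so the allocator can scan bits instead of re-reading
NAT pages on the allocation path. The sketch below shows the general
idea only; the names, sizes, and layout are hypothetical, not the
actual f2fs code:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Illustrative sketch, not f2fs code: one bitmap per NAT block,
 * where a set bit means "this nid is free". */
#define NIDS_PER_BLOCK 455	/* hypothetical entries per NAT block */
#define BITMAP_WORDS ((NIDS_PER_BLOCK + 63) / 64)

struct free_nid_bitmap {
	uint64_t bits[BITMAP_WORDS];
};

/* Record whether the nid at this block offset is free. */
static void set_nid_free(struct free_nid_bitmap *map, unsigned off, bool is_free)
{
	if (is_free)
		map->bits[off / 64] |= 1ULL << (off % 64);
	else
		map->bits[off / 64] &= ~(1ULL << (off % 64));
}

/* Return the first free nid offset, or -1 if the block is full.
 * This is a pure in-memory scan; no NAT page is read here. */
static int find_free_nid(const struct free_nid_bitmap *map)
{
	unsigned w;

	for (w = 0; w < BITMAP_WORDS; w++) {
		if (map->bits[w]) {
			unsigned off = w * 64 + __builtin_ctzll(map->bits[w]);
			return off < NIDS_PER_BLOCK ? (int)off : -1;
		}
	}
	return -1;
}

int main(void)
{
	struct free_nid_bitmap map;

	memset(&map, 0, sizeof(map));
	set_nid_free(&map, 7, true);	/* pretend offset 7 is free */
	printf("first free nid offset: %d\n", find_free_nid(&map));
	return 0;
}

This only sketches the data structure; the cost that actually showed up
in this benchmark, and its fix, are what the patch above addresses.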

Thanks,

> 
> On 08.03.2017 02:21, kernel test robot wrote:
> > 
> > Greetings,
> > 
> > We noticed a -33.7% regression of aim7.jobs-per-min due to commit:
> > 
> > commit: 4ac912427c4214d8031d9ad6fbc3bc75e71512df ("f2fs: introduce free nid bitmap")
> > https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
> > 
> > in testcase: aim7
> > on test machine: 40 threads Intel(R) Xeon(R) CPU E5-2690 v2 @ 3.00GHz with 384G memory
> > with the following parameters:
> > 
> >     disk: 1BRD_48G
> >     fs: f2fs
> >     test: disk_wrt
> >     load: 3000
> >     cpufreq_governor: performance
> > 
> > test-description: AIM7 is a traditional UNIX system-level benchmark suite used to test and measure the performance of multiuser systems.
> > test-url: https://sourceforge.net/projects/aimbench/files/aim-suite7/
> > 
> > 
> > 
> > Details are as below:
> > -------------------------------------------------------------------------------------------------->
> > 
> > 
> > To reproduce:
> > 
> >         git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
> >         cd lkp-tests
> >         bin/lkp install job.yaml  # job file is attached in this email
> >         bin/lkp run     job.yaml
> > 
> > testcase/path_params/tbox_group/run: aim7/1BRD_48G-f2fs-disk_wrt-3000-performance/lkp-ivb-ep01
> > 
> > ced2c7ea8e99b467  4ac912427c4214d8031d9ad6fb
> > ----------------  --------------------------
> >          %stddev      change         %stddev
> >              \          |                \
> >     117419 ±  1%     -33.7%      77863 ±  0%  aim7.jobs-per-min
> >     153.78 ±  1%     +50.6%     231.63 ±  0%  aim7.time.elapsed_time
> >     153.78 ±  1%     +50.6%     231.63 ±  0%  aim7.time.elapsed_time.max
> >     805644 ±  3%     +11.3%     896604 ±  0%  aim7.time.involuntary_context_switches
> >       5408 ±  1%     +15.4%       6240 ±  0%  aim7.time.system_time
> >    5066069 ±  0%     +10.5%    5600256 ±  9%  meminfo.DirectMap2M
> >     135538 ±  8%     -41.9%      78738 ±  8%  meminfo.Dirty
> >     980.67 ± 16%     -67.8%     315.50 ± 12%  meminfo.Writeback
> >      71322 ± 10%     -44.0%      39953 ±  1%  numa-meminfo.node0.Dirty
> >      11158 ± 18%     -27.1%       8132 ±  0%  numa-meminfo.node0.Mapped
> >      56776 ±  6%     -32.5%      38309 ±  0%  numa-meminfo.node1.Dirty
> >       9684 ± 22%     +30.9%      12676 ±  0%  numa-meminfo.node1.Mapped
> >       6069 ± 57%     -78.1%       1328 ± 18%  softirqs.NET_RX
> >     619333 ±  4%      +8.0%     669152 ±  3%  softirqs.RCU
> >     128030 ±  2%     +33.3%     170724 ±  0%  softirqs.SCHED
> >    2331994 ±  1%     +15.3%    2688290 ±  0%  softirqs.TIMER
> >       7701 ±  1%     -35.7%       4948 ±  3%  vmstat.io.bo
> >      64.67 ±  2%     -39.7%      39.00 ±  2%  vmstat.procs.b
> >     333.33 ±  7%     -48.5%     171.50 ±  2%  vmstat.procs.r
> >      17530 ±  1%     -23.4%      13425 ±  1%  vmstat.system.cs
> >      47642 ±  1%      -5.3%      45100 ±  1%  vmstat.system.in
> >      33522 ±  4%     -43.1%      19068 ±  0%  proc-vmstat.nr_dirty
> >     236.00 ± 14%     -66.1%      80.00 ±  3%  proc-vmstat.nr_writeback
> >      33907 ±  4%     -43.3%      19222 ±  0%  proc-vmstat.nr_zone_write_pending
> >      28194 ± 10%     +10.4%      31131 ±  6%  proc-vmstat.pgactivate
> >     746402 ±  2%     +24.6%     929960 ±  3%  proc-vmstat.pgfault
> >     153.78 ±  1%     +50.6%     231.63 ±  0%  time.elapsed_time
> >     153.78 ±  1%     +50.6%     231.63 ±  0%  time.elapsed_time.max
> >     805644 ±  3%     +11.3%     896604 ±  0%  time.involuntary_context_switches
> >       3524 ±  0%     -23.4%       2701 ±  0%  time.percent_of_cpu_this_job_got
> >       5408 ±  1%     +15.4%       6240 ±  0%  time.system_time
> >      12.19 ±  1%     +36.7%      16.66 ±  0%  time.user_time
> >   48260939 ±  3%     +12.1%   54110616 ±  2%  cpuidle.C1-IVT.time
> >   33149237 ±  5%     +52.6%   50597349 ±  1%  cpuidle.C1E-IVT.time
> >      89642 ±  4%     +52.8%     136976 ±  0%  cpuidle.C1E-IVT.usage
> >   13534795 ±  6%    +276.3%   50934566 ± 55%  cpuidle.C3-IVT.time
> >      42893 ±  6%    +138.8%     102439 ± 30%  cpuidle.C3-IVT.usage
> >  6.431e+08 ±  2%    +390.1%  3.152e+09 ± 10%  cpuidle.C6-IVT.time
> >     802009 ±  2%    +375.3%    3811880 ± 10%  cpuidle.C6-IVT.usage
> >    1535987 ±  4%    +156.3%    3936830 ±  4%  cpuidle.POLL.time
> >      88.14 ±  0%     -24.9%      66.17 ±  3%  turbostat.%Busy
> >       2659 ±  0%     -44.7%       1471 ±  3%  turbostat.Avg_MHz
> >       3016 ±  0%     -26.3%       2224 ±  0%  turbostat.Bzy_MHz
> >       5.20 ±  5%    +127.0%      11.80 ±  2%  turbostat.CPU%c1
> > 
> > 
> > 
> > [Three ASCII time-series plots omitted: perf-stat.page-faults,
> > perf-stat.minor-faults, and aim7.jobs-per-min over the commit range,
> > each comparing bisect-good [*] samples against bisect-bad [O] samples.]
> > 
> > 
> > Disclaimer:
> > Results have been estimated based on internal Intel analysis and are provided
> > for informational purposes only. Any difference in system hardware or software
> > design or configuration may affect actual performance.
> > 
> > 
> > Thanks,
> > Xiaolong
> > 
