On Mon, Aug 05, 2019 at 04:09:15PM -0400, Phil Auld wrote:
> Hi,
> 
> On Fri, Aug 02, 2019 at 11:37:15AM -0400, Julien Desfossez wrote:
> > We tested both Aaron's and Tim's patches and here are our results.
> > 
> > Test setup:
> > - 2 single-threaded sysbench instances, one running the cpu benchmark,
> >   the other the mem benchmark
> > - both started at the same time
> > - both are pinned on the same core (2 hardware threads)
> > - 10 30-second runs
> > - test script: https://paste.debian.net/plainh/834cf45c
> > - only showing the CPU events/sec (higher is better)
> > - tested 4 tag configurations:
> >   - no tag
> >   - sysbench mem untagged, sysbench cpu tagged
> >   - sysbench mem tagged, sysbench cpu untagged
> >   - both tagged with a different tag
> > - "Alone" is the sysbench CPU running alone on the core, no tag
> > - "nosmt" is both sysbench pinned on the same hardware thread, no tag
> > - "Tim's full patchset + sched" is an experiment with Tim's patchset
> >   combined with Aaron's "hack patch" to get rid of the remaining deep
> >   idle cases
> > - In all test cases, both tasks can run simultaneously (which was not
> >   the case without those patches), and the standard deviation is a
> >   pretty good indicator of the fairness/consistency.
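
(For anyone wanting to reproduce this: below is a minimal C sketch of the
pinning half of the setup above. The actual script is at the
paste.debian.net link; this sketch assumes CPUs 0 and 4 are the two SMT
siblings of one core (check
/sys/devices/system/cpu/cpu0/topology/thread_siblings_list on your box)
and leaves out the cgroup tagging step. All CPU numbers and sysbench
flags are illustrative.)

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

static void pin_to_core(void)
{
	cpu_set_t set;

	CPU_ZERO(&set);
	CPU_SET(0, &set);	/* first hardware thread of the core */
	CPU_SET(4, &set);	/* its SMT sibling (assumed; adjust) */
	if (sched_setaffinity(0, sizeof(set), &set)) {
		perror("sched_setaffinity");
		exit(1);
	}
}

static void run(const char *test)
{
	if (fork() == 0) {
		pin_to_core();
		execlp("sysbench", "sysbench", "--time=30", test, "run",
		       (char *)NULL);
		perror("execlp");
		_exit(1);
	}
}

int main(void)
{
	run("cpu");	/* both benchmarks start at the same time, */
	run("memory");	/* pinned to the same physical core */
	while (wait(NULL) > 0)
		;
	return 0;
}

Tagging would then be done on top of this by moving each pid into the
appropriate cpu cgroup per the patchset's interface, before the runs start.
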
> > 
> > No tag
> > ------
> > Test                            Average     Stdev
> > Alone                           1306.90     0.94
> > nosmt                           649.95      1.44
> > Aaron's full patchset:          828.15      32.45
> > Aaron's first 2 patches:        832.12      36.53
> > Aaron's 3rd patch alone:        864.21      3.68
> > Tim's full patchset:            852.50      4.11
> > Tim's full patchset + sched:    852.59      8.25
> > 
> > Sysbench mem untagged, sysbench cpu tagged
> > ------------------------------------------
> > Test                            Average     Stdev
> > Alone                           1306.90     0.94
> > nosmt                           649.95      1.44
> > Aaron's full patchset:          586.06      1.77
> > Aaron's first 2 patches:        630.08      47.30
> > Aaron's 3rd patch alone:        1086.65     246.54
> > Tim's full patchset:            852.50      4.11
> > Tim's full patchset + sched:    390.49      15.76
> > 
> > Sysbench mem tagged, sysbench cpu untagged
> > ------------------------------------------
> > Test                            Average     Stdev
> > Alone                           1306.90     0.94
> > nosmt                           649.95      1.44
> > Aaron's full patchset:          583.77      3.52
> > Aaron's first 2 patches:        513.63      63.09
> > Aaron's 3rd patch alone:        1171.23     3.35
> > Tim's full patchset:            564.04      58.05
> > Tim's full patchset + sched:    1026.16     49.43
> > 
> > Both sysbench tagged
> > --------------------
> > Test                            Average     Stdev
> > Alone                           1306.90     0.94
> > nosmt                           649.95      1.44
> > Aaron's full patchset:          582.15      3.75
> > Aaron's first 2 patches:        561.07      91.61
> > Aaron's 3rd patch alone:        638.49      231.06
> > Tim's full patchset:            679.43      70.07
> > Tim's full patchset + sched:    664.34      210.14
> > 
> 
> Sorry if I'm missing something obvious here, but with only 2 processes
> of interest, shouldn't one tagged and one untagged be about the same
> as both tagged?

It should.
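
As a toy model of why (userspace illustration only, not the kernel code,
and the cookie encoding here is made up): an untagged task carries the
default cookie 0, and two tasks may run on siblings of the same core only
if their cookies are equal, so tagged vs. untagged mismatches just like
two different tags do:

#include <stdbool.h>
#include <stdio.h>

typedef unsigned long cookie_t;	/* 0 == untagged (illustrative) */

/* tasks may share a core only if their cookies are equal */
static bool can_share_core(cookie_t a, cookie_t b)
{
	return a == b;
}

int main(void)
{
	printf("untagged vs untagged: %d\n", can_share_core(0, 0)); /* 1 */
	printf("tagged   vs untagged: %d\n", can_share_core(1, 0)); /* 0 */
	printf("tag 1    vs tag 2:    %d\n", can_share_core(1, 2)); /* 0 */
	return 0;
}
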

> In both cases the 2 sysbenches should not be running on the core at 
> the same time. 

Agree.

> There will be times when other unrelated threads could share the core
> with the untagged one. Is that enough to account for this difference?

What difference do you mean?

Thanks,
Aaron

> > So in terms of fairness, Aaron's full patchset is the most consistent,
> > but only Tim's patchset performs better than nosmt in some conditions.
> > 
> > Of course, this is one of the worst-case scenarios; as soon as we have
> > multithreaded applications on overcommitted systems, core scheduling
> > performs better than nosmt.
> > 
> > Thanks,
> > 
> > Julien
> 
> -- 
