On 29/03/2019 19:16, Dario Faggioli wrote:
> Even though I've only skimmed through it... cool series! :-D
>
> On Fri, 2019-03-29 at 16:08 +0100, Juergen Gross wrote:
>>
>> I have done some very basic performance testing: on a 4 cpu system
>> (2 cores with 2 threads each) I did a "make -j 4" for building the
>> Xen hypervisor. This test has been run in dom0, once with no other
>> guest active and once with another guest with 4 vcpus running the
>> same test. The results are (always elapsed time, system time, user
>> time):
>>
>> sched_granularity=thread, no other guest: 116.10 177.65 207.84
>> sched_granularity=core,   no other guest: 114.04 175.47 207.45
>> sched_granularity=thread, other guest:    202.30 334.21 384.63
>> sched_granularity=core,   other guest:    207.24 293.04 371.37
>>
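(For reference, each result row above is the timing triple of one such
build run in dom0, roughly along the lines of the sketch below; the
source path is purely illustrative and the exact invocation may have
differed:

    cd ~/xen          # illustrative path to the Xen source tree
    make clean
    time make -j 4    # bash's time builtin reports elapsed (real),
                      # user and system time

The "other guest" rows repeat the same build while a second guest with
4 vcpus runs the identical workload.)
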
> So, just to be sure I'm reading this properly,
> "sched_granularity=thread" means no co-scheduling of any sort is in
> effect, right? Basically the patch series is applied, but "not used",
> correct?

Yes.

> If yes, these are interesting, and promising, numbers. :-)
>
>> All tests have been performed with credit2; the other schedulers are
>> untested up to now.
>>
> Just as a heads-up for people (as Juergen knows this already :-D), I'm
> planning to run some performance evaluation of these patches.
>
> I've got an 8 CPU system (4 cores, 2 threads each, no NUMA) and a 16
> CPU system (2 sockets/NUMA nodes, 4 cores each, 2 threads each) on
> which I should be able to get some bench suite running relatively
> easily and (hopefully) quickly.
>
> I'm planning to evaluate:
> - vanilla (i.e., without this series), SMT enabled in BIOS
> - vanilla (i.e., without this series), SMT disabled in BIOS
> - patched (i.e., with this series), granularity=thread
> - patched (i.e., with this series), granularity=core
>
> I'll start with no overcommitment, and then move to 2x
> overcommitment (as you did above).

Thanks, I appreciate that!


Juergen
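
[ For the "patched" runs, picking thread vs. core granularity is done via
the sched_granularity= Xen command line option added by this series; a
sketch of how that would typically be set on a GRUB-based dom0, assuming
the usual Debian-style configuration file and variable:

    # /etc/default/grub
    GRUB_CMDLINE_XEN_DEFAULT="... sched_granularity=core"

followed by regenerating the GRUB configuration (e.g. update-grub) and
rebooting before the next round of measurements. ]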