On 2/18/19 9:49 AM, Linus Torvalds wrote:
> On Mon, Feb 18, 2019 at 9:40 AM Peter Zijlstra <pet...@infradead.org> wrote:
>> However; whichever way around you turn this cookie; it is expensive and nasty.
>
> Do you (or anybody else) have numbers for real loads?
>
> Because performance is all that matters. If performance is bad, then
> it's pointless, since just turning off SMT is the answer.
>
>            Linus
I tested 2 Oracle DB instances running OLTP on a 2-socket, 44-core system.
This is on bare metal, no virtualization. In all cases I put each DB
instance in a separate cpu cgroup. Below are the average throughput numbers
of the 2 instances; %stdev is the percentage standard deviation between the
2 instances.
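For reference, the placement looked roughly like this (a sketch, not the exact commands used: the cgroup names and PID variables are made up, and cpu.tag is the cgroup tagging knob from the CONFIG_SCHED_CORE patch series, so adjust to whatever interface the tested build exposes):

```shell
# One cpu cgroup per DB instance (cgroup v1 cpu controller assumed)
mkdir /sys/fs/cgroup/cpu/db1 /sys/fs/cgroup/cpu/db2

# Tag each group so core scheduling never co-schedules the two
# instances on sibling HTs of the same core (cpu.tag is from the
# CONFIG_SCHED_CORE patch series; hypothetical path on this build)
echo 1 > /sys/fs/cgroup/cpu/db1/cpu.tag
echo 1 > /sys/fs/cgroup/cpu/db2/cpu.tag

# Move each instance's processes into its group ($DB1_PID/$DB2_PID
# stand in for the real instance PIDs)
echo $DB1_PID > /sys/fs/cgroup/cpu/db1/tasks
echo $DB2_PID > /sys/fs/cgroup/cpu/db2/tasks
```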
Baseline    = build w/o CONFIG_SCHED_CORE
core_sched  = build w/ CONFIG_SCHED_CORE
HT_disable  = baseline build with the sibling HTs offlined
Users  Baseline  %stdev  core_sched        %stdev  HT_disable        %stdev
16     997768    3.28    808193 (-19%)     34      1053888 (+5.6%)   2.9
24     1157314   9.4     974555 (-15.8%)   40.5    1197904 (+3.5%)   4.6
32     1693644   6.4     1237195 (-27%)    42.8    1308180 (-22.8%)  5.3
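The avg and %stdev columns can be reproduced from the per-instance numbers with a small helper. This is a sketch that assumes "%stdev between the 2 instances" means the sample standard deviation of the two throughputs as a percentage of their mean (the per-instance figures in the comment are hypothetical, not from the measurement):

```python
import statistics

def avg_and_pct_stdev(a: float, b: float) -> tuple[float, float]:
    """Average throughput of two instances, plus the sample standard
    deviation expressed as a percentage of that average."""
    mean = (a + b) / 2
    pct = statistics.stdev([a, b]) / mean * 100
    return mean, pct

# Hypothetical per-instance throughputs, for illustration only:
# avg, pct = avg_and_pct_stdev(100000, 110000)
```

A %stdev of 34 on two data points means the two instances were far apart, which is what the discussion below describes.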
The regressions are substantial. I also noticed that with core scheduling
one of the DB instances had much lower throughput than the other, which
brought down the average and is reflected in the very high %stdev.
Disabling HT hurts at 32 users but is still better than core scheduling in
both average and %stdev. There were some issues with the DB setup, so I
couldn't go beyond 32 users.