On 2020/8/13 7:08, Joel Fernandes wrote:
> On Wed, Aug 12, 2020 at 10:01:24AM +0800, Li, Aubrey wrote:
>> Hi Joel,
>>
>> On 2020/8/10 0:44, Joel Fernandes wrote:
>>> Hi Aubrey,
>>>
>>> Apologies for replying late as I was still looking into the details.
>>>
>>> On Wed, Aug 05, 2020 at 11:57:20AM +0800, Li, Aubrey wrote:
>>> [...]
>>>> +/*
>>>> + * Core scheduling policy:
>>>> + * - CORE_SCHED_DISABLED: core scheduling is disabled.
>>>> + * - CORE_SCHED_COOKIE_MATCH: tasks with the same cookie can run
>>>> + *   on the same core concurrently.
>>>> + * - CORE_SCHED_COOKIE_TRUST: a trusted task can run with kernel
>>>> + *   threads on the same core concurrently.
>>>> + * - CORE_SCHED_COOKIE_LONELY: tasks with a cookie can run only
>>>> + *   with the idle thread on the same core.
>>>> + */
>>>> +enum coresched_policy {
>>>> +	CORE_SCHED_DISABLED,
>>>> +	CORE_SCHED_COOKIE_MATCH,
>>>> +	CORE_SCHED_COOKIE_TRUST,
>>>> +	CORE_SCHED_COOKIE_LONELY,
>>>> +};
>>>>
>>>> We can set the uperf cgroup's policy to CORE_SCHED_COOKIE_TRUST and fix
>>>> this kind of performance regression. Not sure if this sounds attractive?
>>>
>>> Instead of this, I think it can be something simpler IMHO:
>>>
>>> 1. Consider all cookie-0 tasks as trusted. (Even right now, if you apply
>>>    the core-scheduling patchset, such tasks will share a core and sniff on
>>>    each other. So let us not pretend that such tasks are not trusted.)
>>>
>>> 2. All kernel threads and the idle task would have cookie 0 (so that will
>>>    cover the ksoftirqd issue you originally reported).
>>>
>>> 3. Add a config option (CONFIG_SCHED_CORE_DEFAULT_TASKS_UNTRUSTED) and
>>>    enable it by default. Setting this option would tag all tasks that are
>>>    forked from a cookie-0 task with their own cookie. Later on, such tasks
>>>    can be added to a group. This covers PeterZ's ask about having 'default
>>>    untrusted'. (Users like ChromeOS that don't want userspace system
>>>    processes to be tagged can disable this option so such tasks will be
>>>    cookie-0.)
>>>
>>> 4. Allow prctl/cgroup interfaces to create groups of tasks and override
>>>    the above behaviors.
>>
>> How does uperf in a cgroup work with ksoftirqd? Are you suggesting I set
>> uperf's cookie to be cookie-0 via prctl?
>
> Yes, but let me try to understand better. There are 2 problems here I think:
>
> 1. ksoftirqd getting idled when HT is turned on, because uperf is sharing a
>    core with it: This should not be any worse than SMT OFF, because even
>    SMT OFF would also reduce ksoftirqd's CPU time, just as core sched is
>    doing. Sure, core scheduling adds some overhead with IPIs, but such a
>    huge drop in performance is strange. Peter, any thoughts on that?
>
> 2. Interface: To solve the performance problem, you are saying you want
>    uperf to share a core with ksoftirqd so that it is not forced into idle.
>    Why not just keep uperf out of the cgroup?
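For illustration, here is a rough C sketch of the scheme in points 1-4 above.
The helper names, the fork hook, and the field placement are assumed for this
sketch only; they are not taken from the patchset:

#include <linux/sched.h>

/*
 * Two tasks may run concurrently on one core iff their cookies match.
 * Kernel threads, the idle task, and untagged tasks all carry cookie 0
 * (points 1-2), so an untagged uperf can pair with ksoftirqd instead
 * of forcing the sibling into idle.
 */
static inline bool sketch_cookies_match(struct task_struct *a,
					struct task_struct *b)
{
	return a->core_cookie == b->core_cookie;
}

/* Fork-time tagging under the proposed config option (point 3). */
static void sketch_core_sched_fork(struct task_struct *p)
{
#ifdef CONFIG_SCHED_CORE_DEFAULT_TASKS_UNTRUSTED
	if (!current->core_cookie) {
		/*
		 * A child of a cookie-0 task gets its own unique cookie
		 * ("default untrusted"); a later prctl/cgroup operation
		 * (point 4) can regroup it.
		 */
		p->core_cookie = (unsigned long)p;
		return;
	}
#endif
	/* Otherwise inherit the parent's group tag. */
	p->core_cookie = current->core_cookie;
}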
I guess this is unacceptable for those who run their apps in containers
and VMs.

Thanks,
-Aubrey

> Then it will have cookie 0 and be able to share a core with kernel threads.
> As for the user-user isolation that you need: if you tag any "untrusted"
> threads by adding them to a cgroup, they will automatically be isolated
> from uperf while uperf can still share CPU with kernel threads.
>
> Please let me know your thoughts and thanks,
>
> - Joel
>
>>
>> Thanks,
>> -Aubrey
>>>
>>> 5. Document everything clearly so the semantics are clear both to the
>>>    developers of core scheduling and to system administrators.
>>>
>>> Note that, with the concept of a "system trusted cookie", we can also do
>>> optimizations like:
>>> 1. Disable STIBP when switching into trusted tasks.
>>> 2. Disable the L1D flushing / VERW stuff for the L1TF/MDS issues when
>>>    switching into trusted tasks.
>>>
>>> At least #1 seems to be biting enabling HT on ChromeOS right now, and one
>>> other engineer requested I do something like #2 already.
>>>
>>> Once we get full syscall isolation working, threads belonging to a process
>>> can also share a core, so those can just share a core with the task-group
>>> leader.
>>>
>>>>> Is the uperf throughput worse with SMT+core-scheduling versus no-SMT?
>>>>
>>>> This is a good question. From the data we measured with uperf,
>>>> SMT+core-scheduling is 28.2% worse than no-SMT. :(
>>>
>>> This is worrying for sure. :-( We ought to debug/profile it more to see
>>> what is causing the overhead. Me/Vineeth added it as a topic for LPC as
>>> well.
>>>
>>> Any other thoughts from others on this?
>>>
>>> thanks,
>>>
>>> - Joel
>>>
>>>
>>>>> thanks,
>>>>>
>>>>> - Joel
>>>>> PS: I am planning to write a patch behind a CONFIG option that tags all
>>>>> processes (default untrusted) so everything gets a cookie, which some
>>>>> folks said was what they wanted (a whitelist instead of a blacklist).
>>>>
>>
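On the two optimizations quoted above (disable STIBP / skip the L1D flush
when switching into trusted tasks), a rough sketch of how a "cookie 0 ==
system trusted" check could gate the mitigations at switch time. Both
setters are hypothetical stand-ins, not real kernel APIs; the actual
speculation-control code in arch/x86 is considerably more involved:

/* Hypothetical stand-ins for the real arch/x86 machinery. */
static void sketch_set_stibp(bool on) { /* would update SPEC_CTRL state */ }
static void sketch_set_l1d_flush(bool on) { /* would arm the VERW/L1D flush */ }

static void sketch_switch_mitigations(struct task_struct *next)
{
	bool trusted = !next->core_cookie;	/* cookie 0 == trusted */

	/*
	 * #1: STIBP isolates the branch predictor from the SMT sibling;
	 * if core scheduling guarantees a trusted sibling, it can be
	 * dropped when switching into a trusted task.
	 */
	sketch_set_stibp(!trusted);

	/*
	 * #2: likewise, the L1D flush / VERW mitigation for L1TF/MDS
	 * can be skipped for trusted tasks.
	 */
	sketch_set_l1d_flush(!trusted);
}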