On Fri, Jan 12, 2018 at 02:22:58PM -0500, Luiz Capitulino wrote:
> On Thu, 4 Jan 2018 05:25:36 +0100
> Frederic Weisbecker <frede...@kernel.org> wrote:
> 
> > When a CPU runs in full dynticks mode, a 1Hz tick remains in order to
> > keep the scheduler stats alive. However this residual tick is a burden
> > for bare metal tasks that can't stand any interruption at all, or want
> > to minimize them.
> > 
> > Adding the boot parameter "isolcpus=nohz_offload" will now outsource
> > these scheduler ticks to the global workqueue so that a housekeeping CPU
> > handles that tick remotely.
> > 
> > Note it's still up to the user to affine the global workqueues to the
> > housekeeping CPUs through /sys/devices/virtual/workqueue/cpumask or
> > domains isolation.
> > 
> > Signed-off-by: Frederic Weisbecker <frede...@kernel.org>
> > Cc: Chris Metcalf <cmetc...@mellanox.com>
> > Cc: Christoph Lameter <c...@linux.com>
> > Cc: Luiz Capitulino <lcapitul...@redhat.com>
> > Cc: Mike Galbraith <efa...@gmx.de>
> > Cc: Paul E. McKenney <paul...@linux.vnet.ibm.com>
> > Cc: Peter Zijlstra <pet...@infradead.org>
> > Cc: Rik van Riel <r...@redhat.com>
> > Cc: Thomas Gleixner <t...@linutronix.de>
> > Cc: Wanpeng Li <kernel...@gmail.com>
> > Cc: Ingo Molnar <mi...@kernel.org>
> > ---
> >  kernel/sched/core.c      | 88 ++++++++++++++++++++++++++++++++++++++++++++++--
> >  kernel/sched/isolation.c |  4 +++
> >  kernel/sched/sched.h     |  2 ++
> >  3 files changed, 91 insertions(+), 3 deletions(-)
> > 
> > diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> > index d72d0e9..b964890 100644
> > --- a/kernel/sched/core.c
> > +++ b/kernel/sched/core.c
> > @@ -3052,9 +3052,14 @@ void scheduler_tick(void)
> >   */
> >  u64 scheduler_tick_max_deferment(void)
> >  {
> > -        struct rq *rq = this_rq();
> > -        unsigned long next, now = READ_ONCE(jiffies);
> > +        struct rq *rq;
> > +        unsigned long next, now;
> > 
> > +        if (!housekeeping_cpu(smp_processor_id(), HK_FLAG_TICK_SCHED))
> > +                return ktime_to_ns(KTIME_MAX);
> > +
> > +        rq = this_rq();
> > +        now = READ_ONCE(jiffies);
> >          next = rq->last_sched_tick + HZ;
> > 
> >          if (time_before_eq(next, now))
> > @@ -3062,7 +3067,82 @@ u64 scheduler_tick_max_deferment(void)
> > 
> >          return jiffies_to_nsecs(next - now);
> >  }
> > -#endif
> > +
> > +struct tick_work {
> > +        int cpu;
> > +        struct delayed_work work;
> > +};
> > +
> > +static struct tick_work __percpu *tick_work_cpu;
> > +
> > +static void sched_tick_remote(struct work_struct *work)
> > +{
> > +        struct delayed_work *dwork = to_delayed_work(work);
> > +        struct tick_work *twork = container_of(dwork, struct tick_work, work);
> > +        int cpu = twork->cpu;
> > +        struct rq *rq = cpu_rq(cpu);
> > +        struct rq_flags rf;
> > +
> > +        /*
> > +         * Handle the tick only if it appears the remote CPU is running
> > +         * in full dynticks mode. The check is racy by nature, but
> > +         * missing a tick or having one too much is no big deal.
> > +         */
> > +        if (!idle_cpu(cpu) && tick_nohz_tick_stopped_cpu(cpu)) {
> > +                rq_lock_irq(rq, &rf);
> > +                update_rq_clock(rq);
> > +                rq->curr->sched_class->task_tick(rq, rq->curr, 0);
> > +                rq_unlock_irq(rq, &rf);
> > +        }
> 
> OK, so this executes task_tick() remotely. What about account_process_tick()?
> Don't we need it as well?
Nope, tasks in nohz_full mode have their own accounting that doesn't rely
on the tick, so account_process_tick() isn't needed here.

> 
> In particular, when I run a hog application on a nohz_full core configured
> with tick offload, I can see in top that the CPU usage goes from 100%
> to idle for a few seconds every couple of seconds. Could this be related?
> 
> Also, in my testing I'm sometimes seeing the tick. Sometimes at 10 or
> 20 seconds interval. Is this expected? I'll dig deeper next week.

That's expected, see the changelog: the offload is not affine by default.
You need to either also isolate the domains:

	isolcpus=nohz_offload,domain

or tweak the workqueue cpumask through:

	/sys/devices/virtual/workqueue/cpumask

Thanks.
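
For example (hypothetical topology, assuming CPUs 0-1 are the housekeeping
CPUs and the rest are nohz_full), confining unbound workqueues to the
housekeeping CPUs would look like:

	# cpumask takes a hex mask; 0x3 selects CPUs 0-1 (housekeeping),
	# so the offloaded tick work can only be queued there.
	echo 3 > /sys/devices/virtual/workqueue/cpumask

With that in place the remote tick work should no longer run on the
isolated CPUs.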