On Mon, 2012-10-29 at 16:27 -0400, Steven Rostedt wrote:
> plain text document attachment
> (0009-nohz-cpuset-Restart-tick-when-nohz-flag-is-cleared-o.patch)
> From: Frederic Weisbecker <fweis...@gmail.com>
> 
> Issue an IPI to restart the tick on a CPU that belongs
> to a cpuset when its nohz flag gets cleared.
> 
> Signed-off-by: Frederic Weisbecker <fweis...@gmail.com>
> Cc: Alessio Igor Bogani <abog...@kernel.org>
> Cc: Andrew Morton <a...@linux-foundation.org>
> Cc: Avi Kivity <a...@redhat.com>
> Cc: Chris Metcalf <cmetc...@tilera.com>
> Cc: Christoph Lameter <c...@linux.com>
> Cc: Daniel Lezcano <daniel.lezc...@linaro.org>
> Cc: Geoff Levand <ge...@infradead.org>
> Cc: Gilad Ben Yossef <gi...@benyossef.com>
> Cc: Hakan Akkan <hakanak...@gmail.com>
> Cc: Ingo Molnar <mi...@kernel.org>
> Cc: Kevin Hilman <khil...@ti.com>
> Cc: Max Krasnyansky <m...@qualcomm.com>
> Cc: Paul E. McKenney <paul...@linux.vnet.ibm.com>
> Cc: Peter Zijlstra <pet...@infradead.org>
> Cc: Stephen Hemminger <shemmin...@vyatta.com>
> Cc: Steven Rostedt <rost...@goodmis.org>
> Cc: Sven-Thorsten Dietrich <thebigcorporat...@gmail.com>
> Cc: Thomas Gleixner <t...@linutronix.de>
> ---
>  include/linux/cpuset.h   |    2 ++
>  kernel/cpuset.c          |   25 +++++++++++++++++++++++--
>  kernel/time/tick-sched.c |    8 ++++++++
>  3 files changed, 33 insertions(+), 2 deletions(-)
> 
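
(As I read it, the flow this patch wires up is

	update_nohz_cpus()
	    -> cpu_adaptive_nohz_ref for the CPU drops to 0
	    -> cpu_exit_nohz(cpu)
	        -> smp_call_function_single(cpu, cpuset_exit_nohz_interrupt, ...)
	            -> cpuset_exit_nohz_interrupt()        [runs in IPI context]
	                -> tick_nohz_restart_adaptive()    if the tick was stopped

so the IPI is only sent once the last adaptive-nohz cpuset covering the
CPU drops its reference.)
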
> diff --git a/include/linux/cpuset.h b/include/linux/cpuset.h
> index 7e7eb41..631968b 100644
> --- a/include/linux/cpuset.h
> +++ b/include/linux/cpuset.h
> @@ -260,6 +260,8 @@ static inline bool cpuset_adaptive_nohz(void)
>        */
>       return cpuset_cpu_adaptive_nohz(smp_processor_id());
>  }
> +
> +extern void cpuset_exit_nohz_interrupt(void *unused);
>  #else
>  static inline bool cpuset_cpu_adaptive_nohz(int cpu) { return false; }
>  static inline bool cpuset_adaptive_nohz(void) { return false; }
> diff --git a/kernel/cpuset.c b/kernel/cpuset.c
> index 6319d8e..1b67e5b 100644
> --- a/kernel/cpuset.c
> +++ b/kernel/cpuset.c
> @@ -1200,6 +1200,14 @@ static void cpuset_change_flag(struct task_struct *tsk,
>  
>  DEFINE_PER_CPU(atomic_t, cpu_adaptive_nohz_ref);
>  
> +static void cpu_exit_nohz(int cpu)
> +{
> +     preempt_disable();
> +     smp_call_function_single(cpu, cpuset_exit_nohz_interrupt,
> +                              NULL, true);
> +     preempt_enable();
> +}
> +
>  static void update_nohz_cpus(struct cpuset *old_cs, struct cpuset *cs)
>  {
>       int cpu;
> @@ -1211,9 +1219,22 @@ static void update_nohz_cpus(struct cpuset *old_cs, struct cpuset *cs)
>       for_each_cpu(cpu, cs->cpus_allowed) {
>               atomic_t *ref = &per_cpu(cpu_adaptive_nohz_ref, cpu);
>               if (is_adaptive_nohz(cs))
> -                     atomic_inc(ref);
> +                     val = atomic_inc_return(ref);
>               else
> -                     atomic_dec(ref);
> +                     val = atomic_dec_return(ref);
> +
> +             if (!val) {
> +                     /*
> +                      * The update to cpu_adaptive_nohz_ref must be
> +                      * visible right away. So that once we restart the tick
> +                      * from the IPI, it won't be stopped again due to cache
> +                      * update lag.
> +                      * FIXME: We probably need more to ensure this value is really
> +                      * visible right away.

What more do you want? stomp_machine()??

> +                      */
> +                     smp_mb();

The atomic_inc_return() and atomic_dec_return() already imply a
smp_mb().

Later patches change this code, so I won't dwell on this patch too much.
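
For the record though, the explicit barrier could simply go away. A minimal
(untested) sketch of what that hunk would then look like:

		if (is_adaptive_nohz(cs))
			val = atomic_inc_return(ref);	/* implies full smp_mb() */
		else
			val = atomic_dec_return(ref);	/* implies full smp_mb() */

		if (!val) {
			/* ref dropped to zero: IPI the CPU so it restarts its tick */
			cpu_exit_nohz(cpu);
		}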


> +                     cpu_exit_nohz(cpu);
> +             }
>       }
>  }
>  #else
> diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c
> index 0a5e650..de7de68 100644
> --- a/kernel/time/tick-sched.c
> +++ b/kernel/time/tick-sched.c
> @@ -884,6 +884,14 @@ void tick_nohz_check_adaptive(void)
>       }
>  }
>  
> +void cpuset_exit_nohz_interrupt(void *unused)
> +{
> +     struct tick_sched *ts = &__get_cpu_var(tick_cpu_sched);
> +
> +     if (ts->tick_stopped && !is_idle_task(current))
> +             tick_nohz_restart_adaptive();

BTW, what a confusing name. "restart_adaptive()"? It sounds like we are
going to restart the adaptive code, like restarting NOHZ.
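
For example,

	if (ts->tick_stopped && !is_idle_task(current))
		tick_nohz_restart_adaptive();

reads (to me) like it re-enables the adaptive nohz machinery, when what it
actually does is kick the periodic tick back on because this CPU is leaving
adaptive nohz mode.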

-- Steve

> +}
> +
>  void tick_nohz_post_schedule(void)
>  {
>       struct tick_sched *ts = &__get_cpu_var(tick_cpu_sched);

