On Tue, Apr 18, 2017 at 01:11:10PM +0200, Thomas Gleixner wrote:
> +static u64 tmigr_set_cpu_inactive(struct tmigr_group *group,
> + struct tmigr_group *child,
> + struct tmigr_event *evt,
> + unsigned int cpu
On Wed, Apr 19, 2017 at 11:43:01AM +0200, Thomas Gleixner wrote:
> On Wed, 19 Apr 2017, Peter Zijlstra wrote:
> > > +done:
> > > + raw_spin_unlock(&group->lock);
> > > + return nextevt;
> > > +}
> >
> > Would it be very onerous to rewrite that into regular loops? That avoids
> > us having to think (and worry) about blowing our stack.
On Wed, Apr 19, 2017 at 11:44:45AM +0200, Peter Zijlstra wrote:
> On Wed, Apr 19, 2017 at 11:09:14AM +0200, Peter Zijlstra wrote:
>
> > Would it be very onerous to rewrite that into regular loops? That avoids
> > us having to think (and worry) about blowing our stack.
>
> void walk_groups(bool (*up)(void *), void (*down)(void *), void *data)
On Wed, Apr 19, 2017 at 11:09:14AM +0200, Peter Zijlstra wrote:
> Would it be very onerous to rewrite that into regular loops? That avoids
> us having to think (and worry) about blowing our stack.
void walk_groups(bool (*up)(void *), void (*down)(void *), void *data)
{
struct tmigr_cpu *t
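A minimal user-space sketch of the iterative walk Peter is suggesting. The structure layout, field names, and callbacks below are assumptions for illustration, not the kernel's actual tmigr code; only the loop shape is the point:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Simplified stand-in for the kernel's tmigr hierarchy; the real
 * struct tmigr_group carries locks and event queues -- this sketch
 * keeps only the parent link needed to show the loop shape. */
struct tmigr_group {
	struct tmigr_group *parent;
	unsigned int level;
};

/*
 * Iterative hierarchy walk: climb parent pointers in a plain loop
 * instead of recursing, so stack depth stays constant no matter how
 * many levels the hierarchy has. The up() callback returns false to
 * stop the climb early.
 */
static unsigned int walk_up(struct tmigr_group *group,
			    bool (*up)(struct tmigr_group *, void *),
			    void *data)
{
	unsigned int visited = 0;

	while (group) {
		visited++;
		if (!up(group, data))
			break;
		group = group->parent;
	}
	return visited;
}

/* Example callback: record the deepest level seen, keep climbing. */
static bool note_level(struct tmigr_group *g, void *data)
{
	unsigned int *max = data;

	if (g->level > *max)
		*max = g->level;
	return true;
}
```

The recursion-to-loop conversion works here because the hierarchy walk is effectively tail-recursive: each step only needs the current group and the shared `data` cursor, so no per-level state has to live on the stack.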
On Wed, 19 Apr 2017, Peter Zijlstra wrote:
> > +done:
> > + raw_spin_unlock(&group->lock);
> > + return nextevt;
> > +}
>
> Would it be very onerous to rewrite that into regular loops? That avoids
> us having to think (and worry) about blowing our stack.
The issue is that this is walking a hierarchy
On Tue, Apr 18, 2017 at 01:11:10PM +0200, Thomas Gleixner wrote:
> +#ifdef CONFIG_SMP
> +static u64
> +tick_tmigr_idle(struct tick_sched *ts, u64 next_global, u64 next_local)
> +{
> + ts->tmigr_idle = 1;
> +
> + /*
> + * If next_global is after next_local, event does not have to
> +
On Tue, Apr 18, 2017 at 01:11:10PM +0200, Thomas Gleixner wrote:
> +static void __tmigr_handle_remote(struct tmigr_group *group, unsigned int cpu,
> + u64 now, unsigned long jif, bool walkup)
> +{
> + struct timerqueue_node *tmr;
> + struct tmigr_group *parent;
On Wed, 19 Apr 2017, Peter Zijlstra wrote:
> On Wed, Apr 19, 2017 at 10:31:08AM +0200, Thomas Gleixner wrote:
> > On Wed, 19 Apr 2017, Peter Zijlstra wrote:
> > > On Tue, Apr 18, 2017 at 01:11:10PM +0200, Thomas Gleixner wrote:
>
> > > > + }
> > > > + /* Allocate and set up a new group */
On Tue, Apr 18, 2017 at 01:11:10PM +0200, Thomas Gleixner wrote:
> + raw_spin_lock_nested(&parent->lock, parent->level);
> + raw_spin_lock_nested(&group->lock, group->level);
> + raw_spin_lock_nested(&group->lock, group->level);
And not a comment on the locking order and why this
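One way to answer Peter's complaint is to write the rule down. The sketch below encodes a lock-ordering rule the quoted `raw_spin_lock_nested()` calls imply but never document: parent (higher-level) lock before child. The struct and helper are illustrative assumptions, not the kernel's code:

```c
#include <assert.h>
#include <stddef.h>

/* Reduced group: in the quoted patch the hierarchy level doubles as
 * the lockdep subclass passed to raw_spin_lock_nested(). */
struct tmigr_group {
	unsigned int level;	/* 0 = leaf, increases toward the root */
};

/*
 * Lock-ordering rule (assumed from the parent-before-child order of
 * the quoted calls): when two group locks must be held at once,
 * always take the one higher in the hierarchy first. A total order
 * like this is what makes the nesting deadlock-free.
 */
static struct tmigr_group *lock_first(struct tmigr_group *a,
				      struct tmigr_group *b)
{
	return a->level >= b->level ? a : b;
}
```

Using a distinct lockdep subclass per level, as the quoted code does, tells lockdep that taking two locks of the same lock class is intentional, but only a stated ordering rule proves it cannot deadlock.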
On Wed, Apr 19, 2017 at 10:31:08AM +0200, Thomas Gleixner wrote:
> On Wed, 19 Apr 2017, Peter Zijlstra wrote:
> > On Tue, Apr 18, 2017 at 01:11:10PM +0200, Thomas Gleixner wrote:
> > > + }
> > > + /* Allocate and set up a new group */
> > > + group = kzalloc_node(sizeof(*group), GFP_KERNEL, node);
On Wed, 19 Apr 2017, Peter Zijlstra wrote:
> On Tue, Apr 18, 2017 at 01:11:10PM +0200, Thomas Gleixner wrote:
> > +static struct tmigr_group *tmigr_get_group(unsigned int node, unsigned int lvl)
> > +{
> > + struct tmigr_group *group;
> > +
> > + /* Try to attach to an existing group first */
On Tue, Apr 18, 2017 at 01:11:10PM +0200, Thomas Gleixner wrote:
> +static struct tmigr_group *tmigr_get_group(unsigned int node, unsigned int lvl)
> +{
> + struct tmigr_group *group;
> +
> + /* Try to attach to an existing group first */
> + list_for_each_entry(group, &tmigr_level_
On Tue, Apr 18, 2017 at 01:11:10PM +0200, Thomas Gleixner wrote:
> +struct tmigr_cpu {
> + raw_spinlock_t lock;
> + bool online;
> + struct tmigr_event cpuevt;
> + struct tmigr_group *tmgroup;
> +};
My pet hatred; bool in composite types.
On Tue, Apr 18, 2017 at 01:11:10PM +0200, Thomas Gleixner wrote:
> +struct tmigr_group {
> + raw_spinlock_t lock;
> + unsigned int active;
> + unsigned int migrator;
> + struct timerqueue_head events;
> + struct tmigr_event groupevt;
> +
On Tue, Apr 18, 2017 at 01:11:10PM +0200, Thomas Gleixner wrote:
> @@ -1689,11 +1708,16 @@ static void run_timer_base(int index, bo
> */
> static __latent_entropy void run_timer_softirq(struct softirq_action *h)
> {
> + struct timer_base *base = this_cpu_ptr(&timer_bases[BASE_LOCAL]);
Does
On Tue, Apr 18, 2017 at 01:11:10PM +0200, Thomas Gleixner wrote:
> +++ b/kernel/time/timer.c
> @@ -185,6 +186,10 @@ EXPORT_SYMBOL(jiffies_64);
> #define WHEEL_SIZE (LVL_SIZE * LVL_DEPTH)
>
> #ifdef CONFIG_NO_HZ_COMMON
> +/*
> + * If multiple bases need to be locked, use the base ordering for
Placing timers at enqueue time on a target CPU based on dubious heuristics
does not make any sense:
1) Most timer wheel timers are canceled or rearmed before they expire.
2) The heuristics to predict which CPU will be busy when the timer expires
are wrong by definition.
So we waste precious