On Fri, 6 Nov 2015 15:49:29 -0800
Jacob Pan wrote:
> > Check the softirq stuff before calling throttle ?
>
> yes, played with it, but it seems there are other cases causing pending
> softirqs in idle in addition to throttle. I still haven't figured it
> out; this problem only shows up in heavy ir
On Fri, 6 Nov 2015 08:45:10 +0100
Peter Zijlstra wrote:
> On Thu, Nov 05, 2015 at 03:36:25PM -0800, Jacob Pan wrote:
>
> > I did some testing with the code below, it shows random
> > [ 150.442597] NOHZ: local_softirq_pending 02
> > [ 153.032673] NOHZ: local_softirq_pending 202
> > [ 153.203785] NOHZ: local_softirq_pending 202
> > [ 153.206486] NOHZ: local_softir
On Thu, 5 Nov 2015 14:59:52 +0100
Peter Zijlstra wrote:
> On Tue, Nov 03, 2015 at 02:31:20PM +0100, Peter Zijlstra wrote:
>
> > > @@ -5136,6 +5148,16 @@ pick_next_task_fair(struct rq *rq, struct task_struct *prev)
> > >  struct task_struct *p;
> > >  int new_tasks;
> > >
> > > +#ifdef CONFIG_CFS_IDLE_INJECT
On Thu, 5 Nov 2015 11:09:22 +0100
Peter Zijlstra wrote:
> > Before:
> > CPU0 __||| || |___| || || |_
> > CPU1 _||| || |___| || |___
> >
> > After:
> >
> > CPU0 __||| || |___| || || |_
> > CPU1 __||| || |___| || |___
> >
> > Th
On Thu, Nov 05, 2015 at 07:28:50AM -0800, Arjan van de Ven wrote:
> well we have this as a driver right now that does not touch hot paths,
> but it seems you and tglx also hate that approach with a passion
The current code is/was broken, but when I tried fixing it, tglx
objected to the entire
On Thu, Nov 05, 2015 at 03:33:32PM +0100, Peter Zijlstra wrote:
> > idle injection is a last ditch effort in thermal management
It just grates at me a bit that we have to touch hot paths for such
scenarios :/
On 11/5/2015 2:09 AM, Peter Zijlstra wrote:
> I can see such a scheme having a fairly big impact on latency, esp. with
> forced idleness such as this. That's not going to be popular for many
> workloads.
idle injection is a last ditch effort in thermal management, before
this gets used the hardware
On Tue, Nov 03, 2015 at 02:31:20PM +0100, Peter Zijlstra wrote:
> > @@ -5136,6 +5148,16 @@ pick_next_task_fair(struct rq *rq, struct task_struct *prev)
> >  struct task_struct *p;
> >  int new_tasks;
> >
> > +#ifdef CONFIG_CFS_IDLE_INJECT
> > +	if (cfs_rq->force_throttled &&
> > +
On Tue, Nov 03, 2015 at 08:45:01AM -0800, Jacob Pan wrote:
> Fair enough, I will give that a try, I guess it would be costly
> and hard to scale if we were to dequeue/enqueue every se for every
> period of injection plus locking. Let me get some data first.
Yeah, don't go dequeue/enqueue everythin
> @@ -5136,6 +5148,16 @@ pick_next_task_fair(struct rq *rq, struct task_struct *prev)
>  struct task_struct *p;
>  int new_tasks;
>
> +#ifdef CONFIG_CFS_IDLE_INJECT
> +	if (cfs_rq->force_throttled &&
> +	    !idle_cpu(cpu_of(rq)) &&
> +	    !unlikely(local_softirq
With increasingly constrained power and thermal budgets, it is often
necessary to cap power via throttling. Throttling individual CPUs or
devices at random times can help with power capping, but may not be
optimal in terms of energy efficiency.
In general, the optimal solution in terms of energy efficienc