I was analyzing the LPC 2020 discussion regarding the Latency-nice interface and
have the following points to initiate further discussion:
1. There was consensus that having an interface like "Latency-nice" to
provide scheduler hints about a task's latency requirements can be very useful.
2. There are two use-cases regarding
Hi Vincent,
On 7/20/20 8:50 PM, Vincent Guittot wrote:
> Hi Parth,
>
> On Thu, 9 Jul 2020 at 14:09, Parth Shah wrote:
>>
>>> A) Name:
>>
>> Small background task packing
>>
>>> B) Target behaviour:
>>
>> All fair task wakeup follo
> A) Name:
Small background task packing
> B) Target behaviour:
All fair-task wakeups follow a procedure of finding an idle CPU and
waking the task on that idle CPU. There are two major wakeup paths:
1. Slow-path: wake up the task on an idle CPU which is in the shallowest
idle state, by searching
On 5/13/20 3:11 PM, Parth Shah wrote:
>
>
> On 5/11/20 4:43 PM, Dietmar Eggemann wrote:
>> On 28/02/2020 10:07, Parth Shah wrote:
>>> Introduce the latency_nice attribute to sched_attr and provide a
>>> mechanism to change the value with the use of sched_
On 5/11/20 4:43 PM, Dietmar Eggemann wrote:
> On 28/02/2020 10:07, Parth Shah wrote:
>> Introduce the latency_nice attribute to sched_attr and provide a
>> mechanism to change the value with the use of sched_setattr/sched_getattr
>> syscall.
>>
>> Also add new
On 5/9/20 8:09 AM, Pavan Kondeti wrote:
> On Fri, May 08, 2020 at 04:45:16PM +0530, Parth Shah wrote:
>> Hi Pavan,
>>
>> Thanks for going through this patch-set.
>>
>> On 5/8/20 2:03 PM, Pavan Kondeti wrote:
>>> Hi Parth,
>>>
>>>
On 5/8/20 2:10 PM, Pavan Kondeti wrote:
> On Thu, May 07, 2020 at 07:07:20PM +0530, Parth Shah wrote:
>> The "nr_lat_sensitive" per_cpu variable provides hints on the possible
>> number of latency-sensitive tasks occupying the CPU. This hints further
>> helps in
Hi Pavan,
On 5/8/20 2:06 PM, Pavan Kondeti wrote:
> On Thu, May 07, 2020 at 07:07:22PM +0530, Parth Shah wrote:
>> Restrict the call to deeper idle states when the given CPU has been set for
>> the lowest latency requirements
>>
>> Signed-off-by: Parth Shah
>>
Hi Pavan,
Thanks for going through this patch-set.
On 5/8/20 2:03 PM, Pavan Kondeti wrote:
> Hi Parth,
>
> On Thu, May 07, 2020 at 07:07:21PM +0530, Parth Shah wrote:
>> Monitor tasks at:
>> 1. wake_up_new_task() - forked tasks
>>
>> 2. set_task_cpu() - task
counter upon re-marking the task
with latency_nice > -20.
4. finish_task_switch() - dying task
Signed-off-by: Parth Shah
---
kernel/sched/core.c | 30 --
kernel/sched/sched.h | 5 +
2 files changed, 33 insertions(+), 2 deletions(-)
diff --git a/kernel/sc
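For illustration, a minimal sketch of the accounting these hook points imply, assuming the per-CPU nr_lat_sensitive counter introduced elsewhere in this series; the helper names and the exact latency_nice == -20 threshold test are assumptions, not the patch itself:
```
/* Illustrative sketch only -- not the actual patch. */
#include <linux/sched.h>
#include <linux/percpu-defs.h>

/* Tasks marked with latency_nice == -20 are treated as latency sensitive. */
static inline bool task_is_lat_sensitive(struct task_struct *p)
{
	return p->latency_nice == -20;	/* field proposed by this series */
}

/* Called from paths such as wake_up_new_task()/set_task_cpu(). */
static inline void lat_sensitive_inc(int cpu, struct task_struct *p)
{
	if (task_is_lat_sensitive(p))
		per_cpu(nr_lat_sensitive, cpu)++;
}

/* Called when the task migrates away or dies (finish_task_switch()). */
static inline void lat_sensitive_dec(int cpu, struct task_struct *p)
{
	if (task_is_lat_sensitive(p))
		per_cpu(nr_lat_sensitive, cpu)--;
}
```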
a negative value of nr_lat_sensitive is found.
Signed-off-by: Parth Shah
---
kernel/sched/idle.c | 7 ++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
index 85d72a6e2521..7aa0775e69c0 100644
--- a/kernel/sched/idle.c
+++ b/kernel/sched
Restrict the call to deeper idle states when the given CPU has been set for
the lowest latency requirements
Signed-off-by: Parth Shah
---
kernel/sched/idle.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
index b743bf38f08f
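A sketch of the gate this description suggests in kernel/sched/idle.c, assuming the per-CPU nr_lat_sensitive counter from this series; the shallow-state fallback helper is hypothetical:
```
/* Illustrative sketch only -- not the actual patch. */
static void do_idle_gate(void)
{
	int cpu = smp_processor_id();

	/*
	 * If latency-sensitive tasks are accounted to this CPU, skip the
	 * cpuidle governor's deep-state selection and take the shallowest
	 * idle state instead, so wakeup latency stays low.
	 */
	if (per_cpu(nr_lat_sensitive, cpu) > 0)
		enter_shallowest_idle_state();	/* hypothetical helper */
	else
		cpuidle_idle_call();
}
```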
Avg. Energy (Watts) | 9.8 | 29.6 (+302%) | 27.7 (+282%) |
*trans. completed = Total transactions processed (Higher is better)
Parth Shah (4):
sched/core: Introduce per_cpu counter to track latency sensiti
The "nr_lat_sensitive" per_cpu variable provides hints on the possible
number of latency-sensitive tasks occupying the CPU. This hints further
helps in inhibiting the CPUIDLE governor from calling deeper IDLE states
(next patches includes this).
Signed-off-by: Parth Shah
---
kernel/sc
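A sketch of what this patch introduces, per the description above: the per-CPU counter plus a read-side accessor (the accessor name is an assumption):
```
/* Illustrative sketch only -- not the actual patch. */
#include <linux/percpu-defs.h>

/* Count of latency-sensitive tasks currently accounted to each CPU. */
DEFINE_PER_CPU(int, nr_lat_sensitive);

/* Read-side helper that the cpuidle gate in later patches could use. */
static inline int cpu_nr_lat_sensitive(int cpu)
{
	return per_cpu(nr_lat_sensitive, cpu);
}
```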
On 10/16/19 5:26 PM, Vincent Guittot wrote:
> On Wed, 16 Oct 2019 at 09:21, Parth Shah wrote:
>>
>>
>>
>> On 9/19/19 1:03 PM, Vincent Guittot wrote:
>>
>> [...]
>>
>>> Signed-off-by: V
On 9/19/19 1:03 PM, Vincent Guittot wrote:
[...]
> Signed-off-by: Vincent Guittot
> ---
> kernel/sched/fair.c | 585 ++--
> 1 file changed, 380 insertions(+), 205 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> ind
On 9/19/19 1:03 PM, Vincent Guittot wrote:
> Several wrong task placements have been raised with the current load
> balance algorithm, but their fixes are not always straightforward and
> end up using biased values to force migrations. A cleanup and rework
> of the load balance will help to
On 10/9/19 7:56 PM, Dietmar Eggemann wrote:
> On 09/10/2019 10:57, Parth Shah wrote:
>
> [...]
>
>>> On 07/10/2019 18:53, Parth Shah wrote:
>>>>
>>>>
>>>> On 10/7/19 5:49 PM, Vincent Guittot wrote:
>>>>> On Mon, 7 Oct 2
On 10/9/19 5:04 PM, Vincent Guittot wrote:
> On Wed, 9 Oct 2019 at 11:23, Parth Shah wrote:
>>
>>
>>
>> On 10/8/19 6:58 PM, Hillf Danton wrote:
>>>
>>> On Mon, 7 Oct 2019 14:00:49 +0530 Parth Shah wrote:
>>>> +/*
>>>> +
On 10/8/19 6:58 PM, Hillf Danton wrote:
>
> On Mon, 7 Oct 2019 14:00:49 +0530 Parth Shah wrote:
>> +/*
>> + * Try to find a non-idle core in the system based on a few heuristics:
>> + * - Keep track of overutilized (>80% util) and busy (>12.5% util) CPUs
>>
On 10/8/19 10:22 PM, Dietmar Eggemann wrote:
> [- Quentin Perret ]
> [+ Quentin Perret ]
>
> See commit c193a3ffc282 ("mailmap: Update email address for Quentin Perret")
>
Noted, thanks for notifying me.
> On 07/10/2019 18:53, Parth Shah wrote:
>>
>>
On 10/8/19 9:50 PM, Vincent Guittot wrote:
> On Mon, 7 Oct 2019 at 18:54, Parth Shah wrote:
>>
>>
>>
>> On 10/7/19 5:49 PM, Vincent Guittot wrote:
>>> On Mon, 7 Oct 2019 at 10:31, Parth Shah wrote:
>>>>
>>>> The algorithm finds the
On 10/7/19 5:49 PM, Vincent Guittot wrote:
> On Mon, 7 Oct 2019 at 10:31, Parth Shah wrote:
>>
>> The algorithm finds the first non-idle core in the system and tries to
>> place the task on an idle CPU in the chosen core. To maintain
>> cache hotness, the work of finding
On 10/2/19 9:41 PM, David Laight wrote:
> From: Parth Shah
>> Sent: 30 September 2019 11:44
> ...
>> 5> Separating AVX512 tasks and latency sensitive tasks on separate cores
>> ( -Tim Chen )
>>
Use the get/put methods to add/remove the use of TurboSched support, such
that the feature is turned on only in the presence of at least one
classified small background task.
Signed-off-by: Parth Shah
---
kernel/sched/core.c | 7 +++
1 file changed, 7 insertions(+)
diff --git a/kernel/sched
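The get side of this pattern is quoted verbatim in Peter Zijlstra's reply further down this page; the put side below is a symmetric reconstruction, so treat this as a sketch rather than the patch itself:
```
#include <linux/jump_label.h>
#include <linux/spinlock.h>

DEFINE_STATIC_KEY_FALSE(__turbo_sched_enabled);
static DEFINE_SPINLOCK(turbo_sched_lock);
static int turbo_sched_count;

/* Enable the feature when the first classified task appears. */
void turbo_sched_get(void)
{
	spin_lock(&turbo_sched_lock);
	if (!turbo_sched_count++)
		static_branch_enable(&__turbo_sched_enabled);
	spin_unlock(&turbo_sched_lock);
}

/* Disable the feature when the last classified task goes away. */
void turbo_sched_put(void)
{
	spin_lock(&turbo_sched_lock);
	if (!--turbo_sched_count)
		static_branch_disable(&__turbo_sched_enabled);
	spin_unlock(&turbo_sched_lock);
}
```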
interface from userspace which uses the sched_setattr syscall to mark such
tasks.
The scheduler may use this as a hint to pack such tasks on a smaller number
of cores.
Signed-off-by: Parth Shah
---
include/linux/sched.h | 1 +
include/uapi/linux/sched.h | 4 +++-
kernel/sched/core.c | 9
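For a concrete picture of the userspace side, here is a runnable sketch using the mainline uclamp fields of struct sched_attr; the v1 cover letter later on this page classifies jitter via sched_util_max, while later revisions of the series use a different attribute, so the exact field is an assumption:
```
#define _GNU_SOURCE
#include <stdint.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

#define SCHED_FLAG_UTIL_CLAMP_MAX 0x40	/* value from the uapi headers */

struct sched_attr {
	uint32_t size;
	uint32_t sched_policy;
	uint64_t sched_flags;
	int32_t  sched_nice;
	uint32_t sched_priority;
	uint64_t sched_runtime, sched_deadline, sched_period;
	uint32_t sched_util_min;
	uint32_t sched_util_max;
};

int main(void)
{
	struct sched_attr attr = {
		.size = sizeof(attr),
		.sched_flags = SCHED_FLAG_UTIL_CLAMP_MAX,
		/* Low utilization clamp marks this task as background/jitter. */
		.sched_util_max = 128,
	};

	if (syscall(SYS_sched_setattr, 0 /* self */, &attr, 0))
		perror("sched_setattr");
	return 0;
}
```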
with multiple NUMA domains, the Turbo frequency can be
sustained within the NUMA domain without being affected by other
NUMA domains. For such cases, arch_turbo_domain can be tuned to change the
domain for the non-idle core search.
Signed-off-by: Parth Shah
---
kernel/sched/fair.c | 11 ++-
1 file
within the NUMA domain.
Signed-off-by: Parth Shah
---
arch/powerpc/include/asm/topology.h | 3 +++
arch/powerpc/kernel/smp.c | 7 +++
2 files changed, 10 insertions(+)
diff --git a/arch/powerpc/include/asm/topology.h
b/arch/powerpc/include/asm/topology.h
index f85e2b01c3df
CPU gives a sufficient heuristic that the CPU is
doing enough work and will not become idle in the near future.
Signed-off-by: Parth Shah
---
kernel/sched/core.c | 3 ++
kernel/sched/fair.c | 95 -
2 files changed, 97 insertions(+), 1 deletion(-)
diff --git a/kernel/sched
track of the tasks using the
TurboSched feature and also refcount classified background tasks. This
allows enabling the feature when the first task is classified as background
noise and, similarly, disabling the feature when the last such task is unset.
Signed-off-by: Parth Shah
---
kernel/sched/core.c | 20
uclamp support to energy_compute()")
References
==
[1]. https://lkml.org/lkml/2019/9/30/215
[2]. https://github.com/parthsl/tools/blob/master/benchmarks/turbo_bench.c
Parth Shah (6):
sched/core: Add manual background task classification using
sched_setattr syscall
sched: I
ore search for the tasks which require the least
latency. The userland providing hints to the scheduler by tagging such
tasks is a solution proposed in the community and has shown positive
results [1].
2> TurboSched
( -Parth Shah )
TurboSched [2] tries to minimize the number of
On 9/19/19 8:13 PM, Qais Yousef wrote:
> On 09/18/19 18:11, Parth Shah wrote:
>> Hello everyone,
>>
>> As per the discussion in LPC2019, a new per-task property like latency-nice
>> can be useful in certain scenarios. The scheduler can take better decisions
>> by
On 9/18/19 9:12 PM, Valentin Schneider wrote:
> On 18/09/2019 15:18, Patrick Bellasi wrote:
>>> 1. Name: What should be the name for such attr for all the possible
>>> usecases?
>>> =
>>> Latency nice is the proposed name as of now where the lower value indicates
>>> that the task d
On 9/18/19 10:46 PM, Tim Chen wrote:
> On 9/18/19 5:41 AM, Parth Shah wrote:
>> Hello everyone,
>>
>> As per the discussion in LPC2019, a new per-task property like latency-nice
>> can be useful in certain scenarios. The scheduler can take better decisions
>> by
On 9/18/19 7:48 PM, Patrick Bellasi wrote:
>
> On Wed, Sep 18, 2019 at 13:41:04 +0100, Parth Shah wrote...
>
>> Hello everyone,
>
> Hi Parth,
> thanks for starting this discussion.
>
> [ + patrick.bell...@matbug.net ] my new email address, since with
>
Hello everyone,
As per the discussion in LPC2019, a new per-task property like latency-nice
can be useful in certain scenarios. The scheduler can take better decisions
by knowing the latency requirement of a task from the end user itself.
There has already been an effort from Subhra for introducing Task
On 9/6/19 7:43 PM, Valentin Schneider wrote:
> On 06/09/2019 13:45, Parth Shah wrote:
>> I guess there is a use-case in the case of thermal throttling.
>> If a task is heating up the core then in ideal scenarios POWER systems
>> throttle
>> down to rated frequency
On 9/5/19 6:37 PM, Patrick Bellasi wrote:
>
> On Thu, Sep 05, 2019 at 12:46:37 +0100, Valentin Schneider wrote...
>
>> On 05/09/2019 12:18, Patrick Bellasi wrote:
There's a few things wrong there; I really feel that if we call it nice,
it should be like nice. Otherwise we should call
On 9/5/19 3:15 PM, Patrick Bellasi wrote:
>
> On Thu, Sep 05, 2019 at 09:31:27 +0100, Peter Zijlstra wrote...
>
>> On Fri, Aug 30, 2019 at 10:49:36AM -0700, subhra mazumdar wrote:
>>> Add Cgroup interface for latency-nice. Each CPU Cgroup adds a new file
>>> "latency-nice" which is shared by a
On 9/5/19 3:41 PM, Patrick Bellasi wrote:
>
> On Thu, Sep 05, 2019 at 07:15:34 +0100, Parth Shah wrote...
>
>> On 9/4/19 11:02 PM, Tim Chen wrote:
>>> On 8/30/19 10:49 AM, subhra mazumdar wrote:
>>>> Add Cgroup interface for latency-nice. Each CPU Cgrou
On 8/30/19 11:19 PM, subhra mazumdar wrote:
> Rotate the cpu search window for a better spread of threads. This will ensure
> an idle cpu will quickly be found if one exists.
>
> Signed-off-by: subhra mazumdar
> ---
> kernel/sched/fair.c | 10 --
> 1 file changed, 8 insertions(+), 2 del
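A sketch of the rotation being described, under the assumption that the real patch keeps a per-LLC rotor rather than the single static used here to keep the sketch short:
```
/* Illustrative sketch only -- not the actual patch. */
static int select_idle_cpu_rotate(struct task_struct *p,
				  struct sched_domain *sd)
{
	static unsigned int rotor;	/* per-LLC in the real patch */
	unsigned int start = rotor % nr_cpu_ids;
	int cpu;

	/* Rotate the origin so consecutive wakeups probe different CPUs. */
	rotor = start + 1;

	for_each_cpu_wrap(cpu, sched_domain_span(sd), start) {
		if (!cpumask_test_cpu(cpu, p->cpus_ptr))
			continue;
		if (available_idle_cpu(cpu))
			return cpu;
	}
	return -1;
}
```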
On 8/30/19 11:19 PM, subhra mazumdar wrote:
> Put upper and lower limits on the CPU search in select_idle_cpu. The lower
> limit is set to the number of CPUs in a core while the upper limit is derived
> from the latency-nice of the thread. This ensures for any architecture we will
> usually search beyond a
On 9/4/19 11:02 PM, Tim Chen wrote:
> On 8/30/19 10:49 AM, subhra mazumdar wrote:
>> Add Cgroup interface for latency-nice. Each CPU Cgroup adds a new file
>> "latency-nice" which is shared by all the threads in that Cgroup.
>
>
> Subhra,
>
> Thanks for posting the patchset. Having a latency
Hi Subhra,
On 8/30/19 11:19 PM, subhra mazumdar wrote:
> Introduce a new per-task property latency-nice for controlling scalability
> in scheduler idle CPU search path. Valid latency-nice values are from 1 to
> 100 indicating 1% to 100% search of the LLC domain in select_idle_cpu. New
> CPU cgroup f
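A sketch of the scan bound this version proposes: latency-nice in [1, 100] selects that percentage of the LLC to scan in select_idle_cpu, with the lower bound of one core's worth of CPUs taken from the companion patch; p->latency_nice is the field the series introduces, not a mainline one:
```
/* Illustrative sketch only -- not the actual patch. */
static int sis_scan_limit(struct task_struct *p, struct sched_domain *sd,
			  int target)
{
	int per_core = cpumask_weight(cpu_smt_mask(target));
	int nr = (sd->span_weight * p->latency_nice) / 100;

	/* Never scan less than one core's worth of CPUs. */
	return max(nr, per_core);
}
```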
On 7/31/19 11:02 PM, Pavel Machek wrote:
> Hi!
>
Abstract
Modern servers allow multiple cores to run at a range of frequencies
higher than the rated range of frequencies. But the power budget of the system
inhibits sustaining these higher frequencies for lo
On 7/28/19 7:01 PM, Pavel Machek wrote:
> Hi!
>
>> Abstract
>>
>>
>> Modern servers allow multiple cores to run at a range of frequencies
>> higher than the rated range of frequencies. But the power budget of the system
>> inhibits sustaining these higher frequencies for longer durations
aggregated utilization of
<12.5%, it may go idle soon, and hence packing on such a core should be
avoided. The experiment showed that keeping this threshold at 12.5% gives
better decision capability for not selecting a core which will idle out
soon.
Signed-off-by: Parth Shah
---
kernel/sched/core.c |
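A sketch of the core-selection filter the thresholds above describe, together with the >80% "busy" cutoff mentioned elsewhere in these threads; core_util() is a hypothetical helper summing cpu_util() over the core's SMT siblings:
```
/* Illustrative sketch only -- not the actual patch. */
static bool core_is_packing_candidate(int cpu)
{
	unsigned long util = core_util(cpu);		/* hypothetical */
	unsigned long cap = arch_scale_cpu_capacity(cpu);

	if (util > cap * 80 / 100)	/* >80%: overutilized, skip */
		return false;
	if (util < cap / 8)		/* <12.5%: will idle out soon, skip */
		return false;
	return true;
}
```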
Tune arch_scale_core_capacity for the powerpc architecture by scaling
capacity w.r.t. the number of online SMT threads in the core such that for
SMT-4, core capacity is 1.5x the capacity of a sibling thread.
Signed-off-by: Parth Shah
---
arch/powerpc/include/asm/topology.h | 4
arch/powerpc/kernel
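A sketch of the scaling being described, assuming SMT-4 is the only mode that gets the 1.5x treatment; how intermediate SMT modes are handled is not visible in this excerpt:
```
/* Illustrative sketch only -- not the actual patch. */
static unsigned long scale_core_capacity(int cpu, unsigned long smt_cap)
{
	int threads = cpumask_weight(cpu_smt_mask(cpu));

	/* SMT-4: core capacity is 1.5x a single sibling's capacity. */
	if (threads == 4)
		return smt_cap + smt_cap / 2;

	return smt_cap;
}
```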
Use the get/put methods to add/remove the use of TurboSched support, such
that the feature is turned on only if there is at least one jitter task.
Signed-off-by: Parth Shah
---
kernel/sched/core.c | 7 +++
1 file changed, 7 insertions(+)
diff --git a/kernel/sched/core.c b/kernel/sched
NUMA domains, the Turbo frequency
can be sustained within the NUMA domain without being affected by other
NUMA domains. For such cases, arch_turbo_domain can be tuned to change the
domain for the non-idle core search.
Signed-off-by: Parth Shah
---
kernel/sched/fair.c | 10 +-
1 file changed, 9
track of the tasks using the
TurboSched feature and also refcount jitter tasks. This allows enabling the
feature when the first task is classified as jitter and, similarly,
disabling the feature when the last such task is unset.
Signed-off-by: Parth Shah
---
kernel/sched/core.c | 20
uses the sched_setattr syscall to mark tasks as jitter.
Signed-off-by: Parth Shah
---
include/linux/sched.h | 1 +
include/uapi/linux/sched.h | 4 +++-
kernel/sched/core.c | 9 +
3 files changed, 13 insertions(+), 1 deletion(-)
diff --git a/include/linux/sched.h b/include/linux
ind a workaround for limiting task packing. I'm working on finding a
solution for the same, but would like to get the community's response first
to have a better view.
Signed-off-by: Parth Shah
---
kernel/sched/fair.c | 19 +++
1 file changed, 19 insertions(+)
di
Provide a powerpc architecture-specific implementation for
defining the turbo domain so that the search for a core is bound within
the NUMA domain. This provides a way to decrease the search time for specific
architectures where we know the domain for the power budget.
Signed-off-by: Parth Shah
busy cores
Series can be applied on the top of tip/sched/core at
commit af24bde8df20 ("sched/uclamp: Add uclamp support to energy_compute()")
Parth Shah (8):
sched/core: Add manual jitter classification using sched_setattr
syscall
sched: Introduce switch to enable Turb
On 7/9/19 5:38 AM, Subhra Mazumdar wrote:
>
> On 7/8/19 10:24 AM, Parth Shah wrote:
>> When searching for an idle_sibling, the scheduler first iterates to search
>> for an idle core and then for an idle CPU. By maintaining the idle CPU mask
>> while iterating through idl
On 7/8/19 1:38 PM, Peter Zijlstra wrote:
> On Mon, Jul 08, 2019 at 10:24:30AM +0530, Parth Shah wrote:
>> When searching for an idle_sibling, the scheduler first iterates to search
>> for an idle core and then for an idle CPU. By maintaining the idle CPU mask
>> while iterati
Optimize the idle-CPU search by marking CPUs already found non-idle during
the idle-core search. This reduces the iteration count when subsequently
searching for idle CPUs.
Signed-off-by: Parth Shah
---
kernel/sched/core.c | 3 +++
kernel/sched/fair.c | 13 +
2
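A sketch of the optimization, assuming the shared per-CPU select_idle_mask (renamed to an iterator mask by the companion patch) carries the knowledge of busy CPUs from select_idle_core into select_idle_cpu:
```
/* Illustrative sketch only -- not the actual patch. */
static int select_idle_core_sketch(struct task_struct *p,
				   struct sched_domain *sd, int target)
{
	struct cpumask *cpus = this_cpu_cpumask_var_ptr(select_idle_mask);
	int core, cpu;

	cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);

	for_each_cpu_wrap(core, cpus, target) {
		bool idle = true;

		for_each_cpu(cpu, cpu_smt_mask(core)) {
			if (!available_idle_cpu(cpu)) {
				idle = false;
				/* Remember busy CPUs so the later idle-CPU
				 * scan does not revisit them. */
				cpumask_clear_cpu(cpu, cpus);
			}
		}
		if (idle)
			return core;
	}
	return -1;	/* 'cpus' now excludes the CPUs seen busy */
}
```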
le_mask to reuse the name in the next patch
- Patch 02: Optimize the wakeup fast path
Parth Shah (2):
sched/fair: Rename select_idle_mask to iterator_mask
sched/fair: Optimize idle CPU search
kernel/sched/core.c | 3 +++
kernel/sched/fair.c | 15 ++-
2 files changed, 13 insertions(+), 5
ich can be used locally for CPU iteration.
A subsequent patch uses select_idle_mask to keep track of the idle CPUs,
which can be shared across function calls.
Signed-off-by: Parth Shah
---
kernel/sched/core.c | 4 ++--
kernel/sched/fair.c | 4 ++--
2 files changed, 4 insertions(+), 4 deletion
On 7/2/19 2:07 AM, Subhra Mazumdar wrote:
>
Also, systems like POWER9 have sd_llc as a pair of cores only. So they
won't benefit from the limits, and hence hiding your code in
select_idle_cpu
behind static keys would be much preferred.
>>> If it doesn't hurt then I don't see
Hi,
On 7/3/19 9:22 AM, Subhra Mazumdar wrote:
>
> On 7/2/19 1:54 AM, Patrick Bellasi wrote:
>> Wondering if searching and preempting needs will ever be conflicting?
>> I guess the winning point is that we don't commit behaviors to
>> userspace, but just abstract concepts which are turned into bia
On 6/29/19 3:59 AM, Subhra Mazumdar wrote:
>
> On 6/28/19 12:01 PM, Parth Shah wrote:
>>
>> On 6/27/19 6:59 AM, subhra mazumdar wrote:
>>> Use SIS_CORE to disable idle core search. For some workloads
>>> select_idle_core becomes a scalability bottleneck,
On 6/27/19 6:59 AM, subhra mazumdar wrote:
> Use SIS_CORE to disable idle core search. For some workloads
> select_idle_core becomes a scalability bottleneck, removing it improves
> throughput. Also there are workloads where disabling it can hurt latency,
> so we need to have an option.
>
> Signed
On 6/27/19 6:59 AM, subhra mazumdar wrote:
> Put upper and lower limits on the cpu search of select_idle_cpu. The lower
> limit is the number of cpus in a core while the upper limit is twice that.
> This ensures for any architecture we will usually search beyond a core. The upper limit
> also helps in keepin
Hi Subhra,
I ran your patch series on IBM POWER systems and this is what I have observed.
On 6/27/19 6:59 AM, subhra mazumdar wrote:
> Rotate the cpu search window for a better spread of threads. This will ensure
> an idle cpu will quickly be found if one exists.
>
> Signed-off-by: subhra mazumdar
Hi Patrick,
Thank you for taking interest at the patch set.
On 6/28/19 6:44 PM, Patrick Bellasi wrote:
> On 25-Jun 10:07, Parth Shah wrote:
>
> [...]
>
>> Implementation
>> ==
>>
>> These patches use the UCLAMP mechanism [2] to clamp utilizat
find a workaround for limiting task packing. I'm working on finding a
solution for the same, but would like to get the community's response first
to have a better view.
Signed-off-by: Parth Shah
---
kernel/sched/fair.c | 19 +++
1 file changed, 19 insertions(+)
di
methods to keep track of the tasks using
the TurboSched feature. This allows enabling the feature when the first
task is classified as jitter and, similarly, disabling the feature when the
last such task is unset.
Signed-off-by: Parth Shah
---
kernel/sched/core.c | 20
kernel/sched/sched.h
This patch tunes arch_scale_core_capacity for the powerpc arch by scaling
capacity w.r.t. the number of online SMT threads in the core such that for
SMT-4, core capacity is 1.5x the capacity of a sibling thread.
Signed-off-by: Parth Shah
---
arch/powerpc/include/asm/topology.h | 4
arch/powerpc
Use the get/put methods to add/remove the use of TurboSched support, such
that the feature is turned on only if there is at least one jitter task.
Signed-off-by: Parth Shah
---
kernel/sched/core.c | 10 --
1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/kernel/sched/core.c
with multiple NUMA domains, the Turbo frequency
can be sustained within the NUMA domain without being affected by other
NUMA domains. For such cases, arch_turbo_domain can be tuned to change the
domain for the non-idle core search.
Signed-off-by: Parth Shah
---
kernel/sched/fair.c | 10 +-
1 file changed
aggregated utilization of
<12.5%, it may go idle soon, and hence packing on such a core should be
avoided. The experiment showed that keeping this threshold at 12.5% gives
better decision capability for not selecting a core which will idle out
soon.
Signed-off-by: Parth Shah
---
kernel/sched/fair.c |
This patch provides a powerpc architecture-specific implementation for
defining the turbo domain so that the search for a core is bound within
the NUMA domain. This provides a way to decrease the search time for specific
architectures.
Signed-off-by: Parth Shah
---
arch/powerpc/include/asm
si, Add utilization clamping support"
https://lkml.org/lkml/2019/5/15/212
Parth Shah (8):
sched/core: Add manual jitter classification using sched_setattr
syscall
sched: Introduce switch to enable TurboSched mode
sched/core: Update turbo_sched count only when required
sched/fa
`sched_setattr` syscall to set the
sched_util_max attribute of the task, which is used to classify the task as
jitter.
Use Case with turbo_bench.c
===
```
i=8;
./turbo_bench -t 30 -h $i -n $((2*i)) -j
```
This spawns 2*i threads in total: i CPU-bound threads and i jitter threads.
Signe
On 5/15/19 9:59 PM, Peter Zijlstra wrote:
> On Wed, May 15, 2019 at 07:23:17PM +0530, Parth Shah wrote:
>
>> Subject: [RFCv2 1/6] sched/core: Add manual jitter classification from
>> cgroup interface
>
> How can this be v2 ?! I've never seen v1.
>
On 5/15/19 10:14 PM, Peter Zijlstra wrote:
> On Wed, May 15, 2019 at 07:23:22PM +0530, Parth Shah wrote:
>> This patch specifies the sched domain to search for a non-idle core.
>>
>> The select_non_idle_core searches for the non-idle cores across the whole
>> syst
On 5/15/19 10:00 PM, Peter Zijlstra wrote:
> On Wed, May 15, 2019 at 07:23:18PM +0530, Parth Shah wrote:
>> +void turbo_sched_get(void)
>> +{
>> +	spin_lock(&turbo_sched_lock);
>> +	if (!turbo_sched_count++)
>> +		static_branch_enable(&__
On 5/15/19 10:18 PM, Peter Zijlstra wrote:
> On Wed, May 15, 2019 at 07:23:16PM +0530, Parth Shah wrote:
>> Abstract
>>
>>
>> Modern servers allow multiple cores to run at a range of
>> frequencies higher than the rated range of frequencies. But
. Since a core with aggregated
utilization of <12.5% may go idle soon, packing on such a core
should be avoided. The experiment showed that keeping this threshold at
12.5% gives better decision capability for not selecting a core which will
idle out soon.
Signed-off-by: Parth S
Use the get/put methods to add/remove the use of TurboSched support from
the cgroup.
Signed-off-by: Parth Shah
---
kernel/sched/core.c | 7 +--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index facbedd2554e..4c55b5399985 100644
.
This patch provides an architecture-specific implementation for defining
the turbo domain so that the search for a core is bound within the NUMA domain.
Signed-off-by: Parth Shah
---
arch/powerpc/include/asm/topology.h | 3 +++
arch/powerpc/kernel/smp.c | 5 +
kernel/sched/fair.c
methods to keep track of the cgroups using
the TurboSched feature. This allows enabling the feature when the first
cgroup is classified as jitter and, similarly, disabling the feature on
removal of the last such cgroup.
Signed-off-by: Parth Shah
---
kernel/sched/core.c | 20
kernel/sched
ocs;
```
Signed-off-by: Parth Shah
---
kernel/sched/core.c | 9 +
kernel/sched/sched.h | 1 +
2 files changed, 10 insertions(+)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index d42c0f5eefa9..77aa4aee4478 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -7192,
workload generator"
https://github.com/parthsl/tools/blob/master/benchmarks/turbo_bench.c
[3] "Patrick Bellasi, Add utilization clamping support"
https://lore.kernel.org/lkml/20190402104153.25404-1-patrick.bell...@arm.com/
Parth Shah (6):
sched/core: Add manual jitter clas
decision time, which can be eliminated by keeping track of online
CPUs during CPU hotplug.
Signed-off-by: Parth Shah
---
arch/powerpc/include/asm/topology.h | 4
arch/powerpc/kernel/smp.c | 32 +
kernel/sched/fair.c | 19
Hello Jean,
Thank you for your response.
So, can we consider this patch?
Regards,
Parth Y Shah
On Fri, Aug 3, 2018 at 6:06 PM Jean Delvare wrote:
> On Fri, 3 Aug 2018 14:50:43 +0530, Parth Y Shah wrote:
> > Assignment of any variable should be kept outside the if statement
>
> Actually there