On Wed, Feb 26, 2025 at 07:50:57AM +0100, Andrea Righi wrote:
> Add a selftest to validate the behavior of the NUMA-aware scheduler
> functionality, including idle CPU selection within nodes, per-node
> DSQs, and CPU-to-node mapping.
>
> Signed-off-by: Andrea Righi
Applied t
Add a selftest to validate the behavior of the NUMA-aware scheduler
functionality, including idle CPU selection within nodes, per-node
DSQs, and CPU-to-node mapping.
Signed-off-by: Andrea Righi
---
tools/testing/selftests/sched_ext/Makefile | 1 +
tools/testing/selftests/sched_ext
On Wed, Oct 09, 2024 at 04:44:24PM +0100, Pavel Begunkov wrote:
> On 10/8/24 16:18, John Ogness wrote:
> > On 2024-10-04, Petr Mladek wrote:
> > > On Fri 2024-10-04 02:08:52, Breno Leitao wrote:
> > > > =====================================================
> > > > WARNING: HARDIR
On 10/8/24 16:18, John Ogness wrote:
On 2024-10-04, Petr Mladek wrote:
On Fri 2024-10-04 02:08:52, Breno Leitao wrote:
=====================================================
WARNING: HARDIRQ-safe -> HARDIRQ-unsafe lock order detected
6.12.0-rc1-kbuilder-virtme-00033-g
On 2024-10-04, Petr Mladek wrote:
> On Fri 2024-10-04 02:08:52, Breno Leitao wrote:
>> =====================================================
>> WARNING: HARDIRQ-safe -> HARDIRQ-unsafe lock order detected
>> 6.12.0-rc1-kbuilder-virtme-00033-gd4ac164bde7a #50 Not tainted
>> -----------------------------------------------------
unaligned' of
> > > > git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs")
> > >
> > > This looks like the normal lockdep splat you get when the scheduler does
> > > printk. I suspect you tripped a WARN, but since you only provided the
> > This looks like the normal lockdep splat you get when the scheduler does
> > printk. I suspect you tripped a WARN, but since you only provided the
> > lockdep output and not the whole log, I cannot tell.
>
> Thanks for the quick answer. I didn't see a warning bef
've bisected the problem, and weirdly enough, this problem started to
> > show up after an unrelated(?) change in the scheduler:
> >
> > 52e11f6df293e816a ("sched/fair: Implement delayed dequeue")
> >
> > At this time, I have the impression that
q-unsafe "_xmit_ETHER#2" lock is
> acquired in virtnet_poll_tx() while holding the HARDIRQ-irq-safe lock, and
> lockdep doesn't like it much.
>
> I've bisected the problem, and weirdly enough, this problem started to
> show up after an unrelated(?) change in the schedul
e HARDIRQ-irq-safe lock, and
lockdep doesn't like it much.
I've bisected the problem, and weirdly enough, this problem started to
show up after an unrelated(?) change in the scheduler:
52e11f6df293e816a ("sched/fair: Implement delayed dequeue")
At this time, I have the impress
On Mon, 29 Jul 2024 at 14:27, Peter Zijlstra wrote:
>
> On Mon, Jul 29, 2024 at 01:46:09PM +0200, Radoslaw Zielonek wrote:
> > I am currently working on a syzbot-reported bug where bpf
> > is called from trace_sched_switch. In this scenario, we are still within
> > th
On Mon, Jul 29, 2024 at 01:46:09PM +0200, Radoslaw Zielonek wrote:
> I am currently working on a syzbot-reported bug where bpf
> is called from trace_sched_switch. In this scenario, we are still within
> the scheduler context, and calling printk can create a deadlock.
>
> I am unce
I am currently working on a syzbot-reported bug where bpf
is called from trace_sched_switch. In this scenario, we are still within
the scheduler context, and calling printk can create a deadlock.
I am uncertain about the best approach to fix this issue.
Should we simply forbid such calls, or
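For context, the kernel already carries one mitigation for exactly this class of deadlock: printk_deferred(), which hands the message to irq_work instead of taking console locks while scheduler locks may be held. A minimal sketch (report_from_sched_context() is a hypothetical caller, not code from this thread):

#include <linux/printk.h>

/* Hypothetical helper: imagine it is called with rq->lock held. */
static void report_from_sched_context(int cpu)
{
	/* Defers console output past the critical section. */
	printk_deferred(KERN_WARNING "sched: anomaly on CPU %d\n", cpu);
}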
On Thu, Dec 21, 2023 at 8:57 PM Rafael J. Wysocki wrote:
>
> On Thu, Dec 21, 2023 at 4:24 PM Vincent Guittot wrote:
> >
> > Provide the scheduler with feedback about the temporary max available
> > capacity. Unlike arch_update_thermal_pressure, this doesn'
On Thu, Dec 21, 2023 at 4:24 PM Vincent Guittot wrote:
>
> Provide the scheduler with feedback about the temporary max available
> capacity. Unlike arch_update_thermal_pressure, this doesn't need to be
> filtered as the pressure will happen for dozens of ms or more.
>
>
Provide the scheduler with feedback about the temporary max available
capacity. Unlike arch_update_thermal_pressure, this doesn't need to be
filtered as the pressure will happen for dozens of ms or more.
Signed-off-by: Vincent Guittot
---
drivers/cpufreq/cpufreq.c
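A rough sketch of the mechanism the commit message describes, with assumed names throughout (update_capacity_pressure() and the capacity_pressure per-CPU variable are illustrative, not the patch's code; arch_scale_cpu_capacity() and the cpufreq policy fields are real kernel interfaces):

static DEFINE_PER_CPU(unsigned long, capacity_pressure);	/* assumed */

static void update_capacity_pressure(struct cpufreq_policy *policy)
{
	int cpu = cpumask_first(policy->related_cpus);
	unsigned long max_cap = arch_scale_cpu_capacity(cpu);
	unsigned long capped_cap, pressure;

	/* capacity still reachable under the current policy->max cap */
	capped_cap = mult_frac(max_cap, policy->max, policy->cpuinfo.max_freq);
	pressure = max_cap - capped_cap;

	/* unfiltered: the cap is expected to last dozens of ms or more */
	for_each_cpu(cpu, policy->related_cpus)
		per_cpu(capacity_pressure, cpu) = pressure;
}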
Following the consolidation and cleanup of CPU capacity in [1], this series
reworks how the scheduler gets the pressures on CPUs. We need to take into
account all pressures applied by cpufreq on the compute capacity of a CPU
for dozens of ms or more, and not only the cpufreq cooling device or HW
acity in [1], this series
reworks how the scheduler gets the pressures on CPUs. We need to take into
account all pressures applied by cpufreq on the compute capacity of a CPU
for dozens of ms or more, and not only the cpufreq cooling device or HW
mitigations. We split the pressure applied on CPU'
On Thu, 14 Dec 2023 at 10:20, Lukasz Luba wrote:
>
>
>
> On 12/12/23 14:27, Vincent Guittot wrote:
> > Provide the scheduler with feedback about the temporary max available
> > capacity. Unlike arch_update_thermal_pressure, this doesn't need to be
> > filte
y->max
Agree, cpufreq sysfs scaling_max_freq is also important to handle
in this new design. Currently we don't reflect that as reduced CPU
capacity in the scheduler. There was discussion when I proposed to feed
that CPU frequency reduction into thermal_pressure [1].
The same applie
ue to cpufreq cooling or from userspace, we end up limiting the
> >> maximum possible frequency, will this routine always get called?
> >
> > Yes, any update of a FREQ_QOS_MAX ends up calling cpufreq_set_policy()
> > to update the policy->max
> >
>
> Agree, cpu
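A small sketch of the flow described above, i.e. how any frequency cap funnels through FREQ_QOS_MAX into cpufreq_set_policy() (cap_init() and cap_cpu_frequency() are hypothetical wrappers; the freq_qos_* calls and FREQ_QOS_MAX_DEFAULT_VALUE are the real API):

#include <linux/cpufreq.h>
#include <linux/pm_qos.h>

static struct freq_qos_request cap_req;

/* once, at init: register a max-frequency request on the policy */
static int cap_init(struct cpufreq_policy *policy)
{
	return freq_qos_add_request(&policy->constraints, &cap_req,
				    FREQ_QOS_MAX, FREQ_QOS_MAX_DEFAULT_VALUE);
}

/* from cooling or a userspace-driven cap: any update of FREQ_QOS_MAX
 * re-aggregates the constraints and ends up in cpufreq_set_policy(),
 * which re-resolves policy->max */
static void cap_cpu_frequency(s32 khz)
{
	freq_qos_update_request(&cap_req, khz);
}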
On 12/12/23 14:27, Vincent Guittot wrote:
Provide the scheduler with feedback about the temporary max available
capacity. Unlike arch_update_thermal_pressure, this doesn't need to be
filtered as the pressure will happen for dozens of ms or more.
Signed-off-by: Vincent Guittot
---
dr
Currently we don't reflect that as reduced CPU
capacity in the scheduler. There was discussion when I proposed to feed
that CPU frequency reduction into thermal_pressure [1].
The same applies to DTPM, which is currently missing its proper
impact on the reduced CPU capacity in the scheduler.
w the scheduler gets the pressures on CPUs. We need to take into
account all pressures applied by cpufreq on the compute capacity of a CPU
for dozens of ms or more, and not only the cpufreq cooling device or HW
mitigations. We split the pressure applied on CPU's capacity in 2 parts:
- one from cpufreq an
On Thu, 14 Dec 2023 at 09:21, Lukasz Luba wrote:
>
> Hi Vincent,
>
> I've been waiting for this feature, thanks!
>
>
> On 12/12/23 14:27, Vincent Guittot wrote:
> > Following the consolidation and cleanup of CPU capacity in [1], this series
> > reworks how th
Hi Vincent,
I've been waiting for this feature, thanks!
On 12/12/23 14:27, Vincent Guittot wrote:
Following the consolidation and cleanup of CPU capacity in [1], this series
reworks how the scheduler gets the pressures on CPUs. We need to take into
account all pressures applied by cpufr
On Thu, 14 Dec 2023 at 06:43, Viresh Kumar wrote:
>
> On 12-12-23, 15:27, Vincent Guittot wrote:
> > @@ -2618,6 +2663,9 @@ static int cpufreq_set_policy(struct cpufreq_policy *policy,
> > policy->max = __resolve_freq(policy, policy->max, CPUFREQ_RELATION_H);
> > trace_cpu_frequenc
On 12-12-23, 15:27, Vincent Guittot wrote:
> @@ -2618,6 +2663,9 @@ static int cpufreq_set_policy(struct cpufreq_policy *policy,
> policy->max = __resolve_freq(policy, policy->max, CPUFREQ_RELATION_H);
> trace_cpu_frequency_limits(policy);
>
> + cpus = policy->related_cpus;
> +
On 13-12-23, 16:41, Tim Chen wrote:
> Seems like the pressure value computed from the first CPU applies to all CPUs.
> Will this be valid for non-homogeneous CPUs that could have different
> max_freq and max_capacity?
They will be part of different cpufreq policies, and so it will work
fine.
On Tue, 2023-12-12 at 15:27 +0100, Vincent Guittot wrote:
> Provide the scheduler with feedback about the temporary max available
> capacity. Unlike arch_update_thermal_pressure, this doesn't need to be
> filtered as the pressure will happen for dozens of ms or more.
>
> Si
On Wed, 13 Dec 2023 at 08:17, Viresh Kumar wrote:
>
> On 12-12-23, 15:27, Vincent Guittot wrote:
> > Provide the scheduler with feedback about the temporary max available
> > capacity. Unlike arch_update_thermal_pressure, this doesn't need to be
> > filtered as the pr
On 12-12-23, 15:27, Vincent Guittot wrote:
> Provide the scheduler with feedback about the temporary max available
> capacity. Unlike arch_update_thermal_pressure, this doesn't need to be
> filtered as the pressure will happen for dozens of ms or more.
>
> Signed-off
Provide the scheduler with feedback about the temporary max available
capacity. Unlike arch_update_thermal_pressure, this doesn't need to be
filtered as the pressure will happen for dozens of ms or more.
Signed-off-by: Vincent Guittot
---
drivers/cpufreq/cpufreq.c
Following the consolidation and cleanup of CPU capacity in [1], this series
reworks how the scheduler gets the pressures on CPUs. We need to take into
account all pressures applied by cpufreq on the compute capacity of a CPU
for dozens of ms or more, and not only the cpufreq cooling device or HW
Hi Lev,
kernel test robot noticed the following build warnings:
https://git-scm.com/docs/git-format-patch#_base_tree_information]
url:
https://github.com/intel-lab-lkp/linux/commits/Lev-Pantiukhin/ipvs-add-a-stateless-type-of-service-and-a-stateless-Maglev-hashing-scheduler/20231204-232344
Hello,
On Mon, 4 Dec 2023, Lev Pantiukhin wrote:
> +#define IP_VS_SVC_F_STATELESS	0x0040		/* stateless scheduling */
I have another idea for the traffic that does not
need per-client state. We need some per-dest cp to forward the packet.
If we replace the cp->
patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]
url:
https://github.com/intel-lab-lkp/linux/commits/Lev-Pantiukhin/ipvs-add-a-stateless-type-of-service-and-a-stateless-Maglev-hashing-scheduler/20231204-232344
bas
Hello,
On Mon, 4 Dec 2023, Lev Pantiukhin wrote:
> Maglev Hashing Stateless
>
>
> Introduction
>
>
> This patch to the Linux kernel provides the following changes to IPVS:
>
> 1. Adds a new type (IP_VS_SVC_F_STATELESS) of
Maglev Hashing Stateless
Introduction
This patch to the Linux kernel provides the following changes to IPVS:
1. Adds a new type (IP_VS_SVC_F_STATELESS) of scheduler that computes the
need for connection entry addition;
2. Adds a new mhs (Maglev Hashing
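For readers unfamiliar with Maglev hashing, a minimal sketch of the lookup-table population from the Maglev paper (this is the published algorithm in plain C, not the patch's IPVS code; MAGLEV_M and the struct names are illustrative):

#define MAGLEV_M 65537	/* table size; must be prime */

struct mh_dest {
	unsigned int offset;	/* h1(name) % M */
	unsigned int skip;	/* h2(name) % (M - 1) + 1 */
	unsigned int next;	/* position in this dest's preference list */
};

/* Fill table[] so each slot maps to one of n dests (requires n >= 1). */
static void mh_populate(int table[MAGLEV_M], struct mh_dest *d, int n)
{
	int filled = 0, i;

	for (i = 0; i < MAGLEV_M; i++)
		table[i] = -1;
	for (i = 0; i < n; i++)
		d[i].next = 0;

	while (filled < MAGLEV_M) {
		for (i = 0; i < n && filled < MAGLEV_M; i++) {
			/* preference list: (offset + j * skip) % M */
			unsigned int c = (d[i].offset +
				(unsigned long long)d[i].next * d[i].skip) % MAGLEV_M;

			while (table[c] >= 0) {	/* slot taken, try the next one */
				d[i].next++;
				c = (d[i].offset +
				     (unsigned long long)d[i].next * d[i].skip) % MAGLEV_M;
			}
			table[c] = i;
			d[i].next++;
			filled++;
		}
	}
}

A packet is then steered by table[hash(flow) % MAGLEV_M], which is what makes the scheduling stateless: any node computing the same table picks the same destination without needing a connection entry.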
> Subject: Re: [RFC PATCH v5 4/4] scheduler: Add cluster scheduler level for x86
>
> On 3/23/21 4:21 PM, Song Bao Hua (Barry Song) wrote:
> >> On 3/18/21 9:16 PM, Barry Song wrote:
> >>> From: Tim
tasks are woken up
>>> in the same L2 cluster, we will benefit from keeping tasks
>>> related to each other and likely sharing data in the same L2
>>> cluster.
>>>
>>> Add CPU masks of CPUs sharing the L2 cache so we can build such
>>> L2 cluster s
On kunpeng920, cpus within one cluster can communicate with each other
much faster than cpus across different clusters. A simple hackbench
can prove that.
hackbench running on 4 cpus in a single cluster and 4 cpus in
different clusters shows a large contrast:
(1) within a cluster:
root@ubuntu:~# t
.
Also with cluster scheduling policy where tasks are woken up
in the same L2 cluster, we will benefit from keeping tasks
related to each other and likely sharing data in the same L2
cluster.
Add CPU masks of CPUs sharing the L2 cache so we can build such
L2 cluster scheduler domain.
Signed-off-by
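For orientation, this is roughly how a cluster level ends up slotted into the scheduler's default topology table in mainline, between SMT and MC (shown for illustration, not necessarily this patchset's exact hunk):

static struct sched_domain_topology_level default_topology[] = {
#ifdef CONFIG_SCHED_SMT
	{ cpu_smt_mask, cpu_smt_flags, SD_INIT_NAME(SMT) },
#endif
#ifdef CONFIG_SCHED_CLUSTER
	{ cpu_clustergroup_mask, cpu_cluster_flags, SD_INIT_NAME(CLS) },
#endif
#ifdef CONFIG_SCHED_MC
	{ cpu_coregroup_mask, cpu_core_flags, SD_INIT_NAME(MC) },
#endif
	{ cpu_cpu_mask, SD_INIT_NAME(DIE) },
	{ NULL, },
};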
i-core CPU chips at a cost of slightly
increased overhead in some places. If unsure say N here.
+config SCHED_CLUSTER
+	bool "Cluster scheduler support"
+	help
+	  Cluster scheduler support improves the CPU scheduler's decision
+	  making when dealin
modification of the wakeup
path is the root of the hackbench improvement; especially with g=14 where
there should not be many idle CPUs with 14*40 tasks on at most 32 CPUs."
-v5:
* split "add scheduler level for clusters" into two patches to evaluate the
impact of spreading
Fixes the following W=1 kernel build warning(s):
drivers/gpu/drm/scheduler/sched_entity.c:204: warning: expecting prototype for drm_sched_entity_kill_jobs(). Prototype was for drm_sched_entity_kill_jobs_cb() instead
drivers/gpu/drm/scheduler/sched_entity.c:262: warning: expecting prototype
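A minimal illustration of what these W=1 warnings mean and how such a fix typically looks (a sketch, not the actual hunk): the name on the kernel-doc's first line must match the function that follows it.

/**
 * drm_sched_entity_kill_jobs_cb - helper for killing remaining jobs
 * @f: signaled fence
 * @cb: our callback structure
 *
 * Naming drm_sched_entity_kill_jobs_cb() here, rather than the stale
 * drm_sched_entity_kill_jobs(), is what silences the warning.
 */
static void drm_sched_entity_kill_jobs_cb(struct dma_fence *f,
					  struct dma_fence_cb *cb)
{
	/* body elided in this sketch */
}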
RCU_SCHEDULER_RUNNING is set when scheduling is available.
That signal is used to check and queue a "monitor work"
to reclaim freed objects (if there are any) during the boot-up phase.
We have it because the main path of the kvfree_rcu() call can
not queue the work until the sched
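A sketch of that gate (the helper is illustrative; rcu_scheduler_active, RCU_SCHEDULER_RUNNING, and the kvfree_rcu() monitor work are real kernel names):

/* Only queue the reclaim work once the scheduler can actually run it. */
static void maybe_queue_monitor(struct kfree_rcu_cpu *krcp)
{
	if (rcu_scheduler_active == RCU_SCHEDULER_RUNNING)
		queue_delayed_work(system_wq, &krcp->monitor_work, HZ);
}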
On 4/13/21 3:45 AM, Song Bao Hua (Barry Song) wrote:
>
>
>
> Right now in the main cases of using wake_affine to achieve
> better performance, processes are actually bound within one
> NUMA node which is also an LLC in kunpeng920.
>
> Probably LLC=NUMA is also true for X86 Jacobsville, Tim?
In ge
> Subject: Re: [RFC PATCH v3 0/2] scheduler: expose the topology of clusters and
>
On Mon, 2021-04-05 at 09:04 +0200, Greg Kroah-Hartman wrote:
> On Wed, Mar 31, 2021 at 04:30:55PM +0800, Chunfeng Yun wrote:
> > cc Yaqii Wu
> >
> > I'll test it , thanks
>
> Did you test this series and find any problems? If not, I'll go queue
> these up...
Yes, found an issue on the start-spl
On Wed, Mar 31, 2021 at 04:30:55PM +0800, Chunfeng Yun wrote:
> cc Yaqii Wu
>
> I'll test it, thanks
Did you test this series and find any problems? If not, I'll go queue
these up...
thanks,
greg k-h
cc Yaqii Wu
I'll test it, thanks
On Tue, 2021-03-30 at 16:06 +0800, Ikjoon Jang wrote:
> Remove unnecessary variables in check_sch_bw().
> No functional changes, just for better readability.
>
> Signed-off-by: Ikjoon Jang
> ---
>
> drivers/usb/host/xhci-mtk-sch.c | 52 +-
Remove unnecessary variables in check_sch_bw().
No functional changes, just for better readability.
Signed-off-by: Ikjoon Jang
---
drivers/usb/host/xhci-mtk-sch.c | 52 +
1 file changed, 21 insertions(+), 31 deletions(-)
diff --git a/drivers/usb/host/xhci-mtk-sc
s/assymetry/asymmetry/
Signed-off-by: Bhaskar Chowdhury
---
Documentation/scheduler/sched-nice-design.rst | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/Documentation/scheduler/sched-nice-design.rst
b/Documentation/scheduler/sched-nice-design.rst
index 0571f1b47e64
s/simultanously/simultaneously/
Signed-off-by: Bhaskar Chowdhury
---
Documentation/scheduler/sched-bwc.rst | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/Documentation/scheduler/sched-bwc.rst
b/Documentation/scheduler/sched-bwc.rst
index 845eee659199..a7f9be925ab8 100644
> Subject: Re: [RFC PATCH v5 4/4] scheduler: Add cluster scheduler level for x86
>
> On 3/18/21 9:16 PM, Barry Song wrote:
> > From: Tim Chen
> >
> > There are x86 CPU architectures (e.g. Jacobsville) where L2 cache
>
er.
>
> Add CPU masks of CPUs sharing the L2 cache so we can build such
> L2 cluster scheduler domain.
>
> Signed-off-by: Tim Chen
> Signed-off-by: Barry Song
Barry,
Can you also add this chunk to the patch?
Thanks.
Tim
diff --git a/arch/x86/include/asm/topology.h b/arch
> Subject: [RFC PATCH v5 3/4] scheduler: scan idle cpu in cluster before scanning the whole llc
>
> On kunpeng920, cpus within one cluster can communicate with each other
> much faster than cpus across different clusters. A simple ha
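A minimal sketch of the idea in that subject line, i.e. preferring an idle CPU in the waker's cluster before falling back to the rest of the LLC (the helper name and structure are assumed, not the patch's code; a stack cpumask is fine for a sketch but not for real kernel code):

static int scan_cluster_then_llc(struct task_struct *p, int target)
{
	struct cpumask mask;
	int cpu;

	/* 1) idle CPU sharing the target's cluster */
	cpumask_and(&mask, cpu_clustergroup_mask(target), p->cpus_ptr);
	for_each_cpu_wrap(cpu, &mask, target + 1)
		if (available_idle_cpu(cpu))
			return cpu;

	/* 2) the rest of the LLC, minus the cluster already scanned */
	cpumask_andnot(&mask, cpu_coregroup_mask(target),
		       cpu_clustergroup_mask(target));
	cpumask_and(&mask, &mask, p->cpus_ptr);
	for_each_cpu_wrap(cpu, &mask, target + 1)
		if (available_idle_cpu(cpu))
			return cpu;

	return -1;	/* caller falls back to prev/target */
}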
[truncated ASCII diagram of the cluster topology]
-v5:
* split "add scheduler level for clusters" into two patches to evaluate the
impact of spreading and gathering separately;
* add a tracepoint of select_idle_cpu
reased overhead in some places. If unsure say N here.
+config SCHED_CLUSTER
+	bool "Cluster scheduler support"
+	help
+	  Cluster scheduler support improves the CPU scheduler's decision
+	  making when dealing with machines that have clusters (sharing interna
> Subject: Re: [RFC PATCH v4 2/3] scheduler: add scheduler level for clusters
>
> On Tue, Mar 02, 2021 at 11:59:39AM +1300, Barry Song wrote:
> > diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> > index 88a2e2b..d805e59 100644
>
> It seems sensible that the more CPUs we get in the cluster, the more
> we need the kernel to be aware of its existence.
>
> Tim, is it possible for you to bring up the cpu_cluster_mask and
> cluster_sibling for x86 so that the topology can be represented
> in sysfs and be use
> Subject: [Linuxarm] Re: [RFC PATCH v4 3/3] scheduler: Add cluster scheduler level for x86
>
> On 3/2/21 2:30 AM, Peter Zijlstra wrote:
> > On Tue, Mar 02, 2021 at 11:59:40AM +1300, Barry Song wrote:
> >> From: Tim Chen
> Subject: Re: [RFC PATCH v4 2/3] scheduler: add scheduler level for clusters
>
> On Tue, 2 Mar 202
9568b..158b0fa 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -971,6 +971,13 @@ config SCHED_MC
> making when dealing with multi-core CPU chips at a cost of slightly
> increased overhead in some places. If unsure say N here.
>
> +config
On 3/2/21 2:30 AM, Peter Zijlstra wrote:
> On Tue, Mar 02, 2021 at 11:59:40AM +1300, Barry Song wrote:
>> From: Tim Chen
>>
>> There are x86 CPU architectures (e.g. Jacobsville) where L2 cache
>> is shared among a cluster of cores instead of being exclusive
>> to a single core.
>
> Isn't tha
On Tue, Mar 02, 2021 at 11:59:39AM +1300, Barry Song wrote:
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 88a2e2b..d805e59 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -7797,6 +7797,16 @@ int sched_cpu_activate(unsigned int cpu)
> if (cpumask_weight(c
On Tue, Mar 02, 2021 at 11:59:40AM +1300, Barry Song wrote:
> From: Tim Chen
>
> There are x86 CPU architectures (e.g. Jacobsville) where L2 cache
> is shared among a cluster of cores instead of being exclusive
> to a single core.
Isn't that most atoms one way or another? Tremont seems to have
ith multi-core CPU chips at a cost of slightly
increased overhead in some places. If unsure say N here.
+config SCHED_CLUSTER
+	bool "Cluster scheduler support"
+	help
+	  Cluster scheduler support improves the CPU scheduler's decision
+	  m
tasks
* avoided the iteration of sched_domain by moving to static_key (addressing
Vincent's comment); see the sketch after the shortlog below
* used acpi_cpu_id for acpi_find_processor_node (addressing Masa's comment)
Barry Song (1):
scheduler: add scheduler level for clusters
Jonathan Cameron (1):
topology: Represent c
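The changelog above mentions replacing the sched_domain iteration with a static_key; a minimal sketch of that pattern (names assumed, not the patch's hunk):

DEFINE_STATIC_KEY_FALSE(sched_cluster_present);

/* flipped once, when domain rebuild finds a CLS level: */
static void note_cluster_domain(void)
{
	static_branch_enable(&sched_cluster_present);
}

/* the wakeup fast path stays a patched-out branch on machines
 * without clusters: */
static inline bool want_cluster_scan(void)
{
	return static_branch_unlikely(&sched_cluster_present);
}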
The pull request you sent on Wed, 17 Feb 2021 14:43:23 +0100:
> git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git
> sched-core-2021-02-17
has been merged into torvalds/linux.git:
https://git.kernel.org/torvalds/c/657bd90c93146a929c69cd43addf2804eb70c926
Thank you!
--
Deet-doot-dot, I
Linus,
Please pull the latest sched/core git tree from:
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git
sched-core-2021-02-17
# HEAD: c5e6fc08feb2b88dc5dac2f3c817e1c2a4cafda4 sched,x86: Allow !PREEMPT_DYNAMIC
Scheduler updates for v5.12:
[ NOTE: unfortunately this tree had
d domain for x86.
Thanks.
Tim
-- >8 --
From 9189e489b019e110ee6e9d4183e243e48f44ff25 Mon Sep 17 00:00:00 2001
From: Tim Chen
Date: Tue, 16 Feb 2021 08:24:39 -0800
Subject: [RFC PATCH] scheduler: Add cluster scheduler level for x86
> Subject: Re: [RFC PATCH v3 0/2] scheduler: expose the topology of clusters and
> add cluster schedu
> Subject: Re: [RFC PATCH v3 0/2] scheduler: expose the topology of clusters and
> add c
> Subject: Re: [RFC PATCH v3 0/2] scheduler: expose the topology of clusters and
>
On 11/01/2021 10:28, Morten Rasmussen wrote:
> On Fri, Jan 08, 2021 at 12:22:41PM -0800, Tim Chen wrote:
>>
>>
>> On 1/8/21 7:12 AM, Morten Rasmussen wrote:
>>> On Thu, Jan 07, 2021 at 03:16:47PM -0800, Tim Chen wrote:
On 1/6/21 12:30 AM, Barry Song wrote:
[...]
>> I think it is going to dep
4:56 PM Sagi Grimberg wrote:
> >>
> >>
> >> >>>> But if you think this has a better home, I'm assuming that the guys
> >> >>>> will be open to that.
> >> >>>
> >> >>> Also see the reply from Ming. I
On Fri, Jan 08, 2021 at 12:22:41PM -0800, Tim Chen wrote:
>
>
> On 1/8/21 7:12 AM, Morten Rasmussen wrote:
> > On Thu, Jan 07, 2021 at 03:16:47PM -0800, Tim Chen wrote:
> >> On 1/6/21 12:30 AM, Barry Song wrote:
> >>> ARM64 server chip Kunpeng 920 has 6 clusters in each NUMA node, and each
> >>>
> Subject: Re: [RFC PATCH v3 0/2] scheduler: expose the topology of clusters and
> ad
On 1/8/21 7:12 AM, Morten Rasmussen wrote:
> On Thu, Jan 07, 2021 at 03:16:47PM -0800, Tim Chen wrote:
>> On 1/6/21 12:30 AM, Barry Song wrote:
>>> ARM64 server chip Kunpeng 920 has 6 clusters in each NUMA node, and each
>>> cluster has 4 cpus. All clusters share L3 cache data while each cluster
On Thu, Jan 07, 2021 at 03:16:47PM -0800, Tim Chen wrote:
> On 1/6/21 12:30 AM, Barry Song wrote:
> > ARM64 server chip Kunpeng 920 has 6 clusters in each NUMA node, and each
> > cluster has 4 cpus. All clusters share L3 cache data while each cluster
> > has local L3 tag. On the other hand, each cl
From: Tim Chen
Date: Wed, 19 Aug 2020 16:22:35 -0700
Subject: [RFC PATCH 1/2] sched: Add L2 cache cpu mask
There are x86 CPU architectures (e.g. Jacobsville) where L2 cache
is shared among a group of cores instead of being exclusive
to a single core.
To prevent oversub
> Subject: Re: [RFC PATCH v3 2/2] scheduler: add scheduler level for clusters
>
> On Wed, 6 Jan 2021 at 09:35, Barry Song wrote:
> >
> > ARM64 server chip Kunpeng 920 has 6 clusters in each NUMA node, and each
> > cluster ha
utside the cluster:
> target cpu
> 19 -> 17
> 13 -> 15
> 23 -> 20
> 23 -> 20
> 19 -> 17
> 13 -> 15
> 16 -> 17
> 19 -> 17
> 7 -> 5
> 10 -> 11
> 23 -> 20
> *23 -> 4
> ...
>
> Signed-off-by: Barr
igned-off-by: Barry Song
---
-v3:
- rebased against 5.11-rc2
- with respect to the comments of Valentin Schneider, Peter Zijlstra,
Vincent Guittot and Mel Gorman etc.
* moved the scheduler changes from arm64 to the common place for all
architectures.
* added SD_SHARE_CLS_
524
The score is 4.285 vs 5.524, shorter time means better performance.
All this testing implies that we should let the Linux scheduler use
this topology to make better load balancing and WAKE_AFFINE decisions.
However, the current scheduler is completely unaware of clusters.
This patchset expos
Just reminding people I'm still around and maintaining this patchset.
Announcing a new -ck release, 5.10-ck1 with the latest version of the
Multiple Queue Skiplist Scheduler, version 0.205. These are patches
designed to improve system responsiveness and interactivity with
specific emphasis o
The pull request you sent on Sun, 27 Dec 2020 10:16:01 +0100:
> git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git
> sched-urgent-2020-12-27
has been merged into torvalds/linux.git:
https://git.kernel.org/torvalds/c/3b80dee70eaa5f9a120db058c30cc8e63c443571
Thank you!
--
Deet-doot-dot,
Linus,
Please pull the latest sched/urgent git tree from:
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git
sched-urgent-2020-12-27
# HEAD: ae7927023243dcc7389b2d59b16c09cbbeaecc36 sched: Optimize finish_lock_switch()
Fix a context switch performance regression.
Thanks,
On 08-12-20, 15:50, Peter Zijlstra wrote:
> On Tue, Dec 08, 2020 at 09:46:54AM +0530, Viresh Kumar wrote:
> > Viresh Kumar (3):
> > sched/core: Move schedutil_cpu_util() to core.c
> > sched/core: Rename schedutil_cpu_util() and allow rest of the kernel
> > to use it
> > thermal: cpufreq_c