Re: [PATCH sched_ext/for-6.15] selftests/sched_ext: Add NUMA-aware scheduler test

2025-02-26 Thread Tejun Heo
On Wed, Feb 26, 2025 at 07:50:57AM +0100, Andrea Righi wrote: > Add a selftest to validate the behavior of the NUMA-aware scheduler > functionalities, including idle CPU selection within nodes, per-node > DSQs and CPU to node mapping. > > Signed-off-by: Andrea Righi Applied t

[PATCH sched_ext/for-6.15] selftests/sched_ext: Add NUMA-aware scheduler test

2025-02-25 Thread Andrea Righi
Add a selftest to validate the behavior of the NUMA-aware scheduler functionalities, including idle CPU selection within nodes, per-node DSQs and CPU to node mapping. Signed-off-by: Andrea Righi --- tools/testing/selftests/sched_ext/Makefile | 1 + tools/testing/selftests/sched_ext
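
For context, a minimal userspace sketch of the kind of CPU-to-node mapping check such a test performs (illustrative only, built on libnuma rather than the selftest's sched_ext/BPF machinery):

    #include <numa.h>
    #include <stdio.h>

    int main(void)
    {
            int cpu, node;

            if (numa_available() < 0) {
                    fprintf(stderr, "NUMA not available\n");
                    return 1;
            }
            /* Every configured CPU must map to a valid NUMA node. */
            for (cpu = 0; cpu < numa_num_configured_cpus(); cpu++) {
                    node = numa_node_of_cpu(cpu);
                    if (node < 0)
                            continue; /* CPU not present/online */
                    printf("cpu%d -> node%d\n", cpu, node);
            }
            return 0;
    }

Build with -lnuma; the real selftest additionally exercises idle CPU selection within a node and per-node DSQs from BPF.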

Re: 6.12-rc1: Lockdep regression bissected (virtio-net/console/scheduler)

2024-10-09 Thread Breno Leitao
On Wed, Oct 09, 2024 at 04:44:24PM +0100, Pavel Begunkov wrote: > On 10/8/24 16:18, John Ogness wrote: > > On 2024-10-04, Petr Mladek wrote: > > > On Fri 2024-10-04 02:08:52, Breno Leitao wrote: > > > > = > > > > WARNING: HARDIR

Re: 6.12-rc1: Lockdep regression bissected (virtio-net/console/scheduler)

2024-10-09 Thread Pavel Begunkov
On 10/8/24 16:18, John Ogness wrote: On 2024-10-04, Petr Mladek wrote: On Fri 2024-10-04 02:08:52, Breno Leitao wrote: = WARNING: HARDIRQ-safe -> HARDIRQ-unsafe lock order detected 6.12.0-rc1-kbuilder-virtme-00033-g

Re: 6.12-rc1: Lockdep regression bissected (virtio-net/console/scheduler)

2024-10-08 Thread John Ogness
On 2024-10-04, Petr Mladek wrote: > On Fri 2024-10-04 02:08:52, Breno Leitao wrote: >> = >> WARNING: HARDIRQ-safe -> HARDIRQ-unsafe lock order detected >> 6.12.0-rc1-kbuilder-virtme-00033-gd4ac164bde7a #50 Not tainted >> -

Re: 6.12-rc1: Lockdep regression bissected (virtio-net/console/scheduler)

2024-10-04 Thread Petr Mladek
unaligned' of > > > > git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs") > > > > > > This looks like the normal lockdep splat you get when the scheduler does > > > printk. I suspect you tripped a WARN, but since you only provided the

Re: 6.12-rc1: Lockdep regression bissected (virtio-net/console/scheduler)

2024-10-04 Thread Breno Leitao
> > This looks like the normal lockdep splat you get when the scheduler does > > printk. I suspect you tripped a WARN, but since you only provided the > > lockdep output and not the whole log, I cannot tell. > > Thanks for the quick answer. I didn't see a warning bef

Re: 6.12-rc1: Lockdep regression bissected (virtio-net/console/scheduler)

2024-10-03 Thread Breno Leitao
've bisected the problem, and weirdly enough, this problem started to > > show up after an unrelated(?) change in the scheduler: > > > > 52e11f6df293e816a ("sched/fair: Implement delayed dequeue") > > > > At this time, I have the impression that

Re: 6.12-rc1: Lockdep regression bissected (virtio-net/console/scheduler)

2024-10-03 Thread Peter Zijlstra
q-unsafe "_xmit_ETHER#2" lock is > acquired in virtnet_poll_tx() while holding the HARDIRQ-irq-safe, and > lockdep doesn't like it much. > > I've bisected the problem, and weirdly enough, this problem started to > show up after an unrelated(?) change in the schedul

Re: 6.12-rc1: Lockdep regression bissected (virtio-net/console/scheduler)

2024-10-03 Thread Breno Leitao
q-unsafe "_xmit_ETHER#2" lock is > acquired in virtnet_poll_tx() while holding the HARDIRQ-irq-safe, and > lockdep doesn't like it much. > > I've bisected the problem, and weirdly enough, this problem started to > show up after an unrelated(?) change in the schedul

6.12-rc1: Lockdep regression bissected (virtio-net/console/scheduler)

2024-10-03 Thread Breno Leitao
e HARDIRQ-irq-safe, and lockdep doesn't like it much. I've bisected the problem, and weirdly enough, this problem started to show up after an unrelated(?) change in the scheduler: 52e11f6df293e816a ("sched/fair: Implement delayed dequeue") At this time, I have the impress
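
For readers unfamiliar with this class of splat, here is the generic pattern lockdep is warning about, an illustrative sketch only, not code from this report. A lock becomes HARDIRQ-safe once it is taken in hard-irq context; taking a HARDIRQ-unsafe lock while holding a HARDIRQ-safe one creates a potential deadlock with an interrupt:

    #include <linux/spinlock.h>
    #include <linux/interrupt.h>

    static DEFINE_SPINLOCK(lock_a); /* taken in hard-irq context: HARDIRQ-safe */
    static DEFINE_SPINLOCK(lock_b); /* only taken with irqs on: HARDIRQ-unsafe */

    static irqreturn_t demo_irq_handler(int irq, void *data)
    {
            spin_lock(&lock_a);     /* lockdep marks lock_a HARDIRQ-safe */
            spin_unlock(&lock_a);
            return IRQ_HANDLED;
    }

    static void demo_inversion(void)
    {
            spin_lock_irq(&lock_a);
            spin_lock(&lock_b);     /* HARDIRQ-safe -> HARDIRQ-unsafe: splat */
            spin_unlock(&lock_b);
            spin_unlock_irq(&lock_a);
    }

If another CPU holds lock_b when the interrupt taking lock_a fires on this path, the two CPUs can deadlock, which is why lockdep flags the ordering even before any hang occurs.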

Re: [RFC] Printk deadlock in bpf trace called from scheduler context

2024-07-29 Thread Marco Elver
On Mon, 29 Jul 2024 at 14:27, Peter Zijlstra wrote: > > On Mon, Jul 29, 2024 at 01:46:09PM +0200, Radoslaw Zielonek wrote: > > I am currently working on a syzbot-reported bug where bpf > > is called from trace_sched_switch. In this scenario, we are still within > > th

Re: [RFC] Printk deadlock in bpf trace called from scheduler context

2024-07-29 Thread Peter Zijlstra
On Mon, Jul 29, 2024 at 01:46:09PM +0200, Radoslaw Zielonek wrote: > I am currently working on a syzbot-reported bug where bpf > is called from trace_sched_switch. In this scenario, we are still within > the scheduler context, and calling printk can create a deadlock. > > I am unce

[RFC] Printk deadlock in bpf trace called from scheduler context

2024-07-29 Thread Radoslaw Zielonek
I am currently working on a syzbot-reported bug where bpf is called from trace_sched_switch. In this scenario, we are still within the scheduler context, and calling printk can create a deadlock. I am uncertain about the best approach to fix this issue. Should we simply forbid such calls, or
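
One mitigation commonly cited for this situation, a sketch of the usual workaround rather than the fix chosen for this report, is printk_deferred(), which only writes to the log buffer and pushes console output to irq_work instead of taking console locks inline:

    /* From scheduler context (e.g. under the rq lock in a tracepoint),
     * plain printk() can recurse into locks held by the current path;
     * printk_deferred() avoids the inline console path entirely. */
    printk_deferred(KERN_WARNING "scheduler-context message, cpu=%d\n",
                    raw_smp_processor_id());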

Re: [PATCH v2 1/5] cpufreq: Add a cpufreq pressure feedback for the scheduler

2023-12-21 Thread Rafael J. Wysocki
On Thu, Dec 21, 2023 at 8:57 PM Rafael J. Wysocki wrote: > > On Thu, Dec 21, 2023 at 4:24 PM Vincent Guittot > wrote: > > > > Provide the scheduler with feedback about the temporary max available > > capacity. Unlike arch_update_thermal_pressure, this doesn'

Re: [PATCH v2 1/5] cpufreq: Add a cpufreq pressure feedback for the scheduler

2023-12-21 Thread Rafael J. Wysocki
On Thu, Dec 21, 2023 at 4:24 PM Vincent Guittot wrote: > > Provide the scheduler with feedback about the temporary max available > capacity. Unlike arch_update_thermal_pressure, this doesn't need to be > filtered as the pressure will happen for dozens of ms or more. > >

[PATCH v2 1/5] cpufreq: Add a cpufreq pressure feedback for the scheduler

2023-12-21 Thread Vincent Guittot
Provide the scheduler with feedback about the temporary max available capacity. Unlike arch_update_thermal_pressure, this doesn't need to be filtered as the pressure will happen for dozens of ms or more. Signed-off-by: Vincent Guittot --- drivers/cpufreq/cpufreq.c
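
The arithmetic behind the reported pressure is roughly the following. This is an illustrative helper; the function name and its exact placement are not necessarily those of the posted patch:

    #include <linux/math.h>

    /* Capacity lost to a frequency cap: capped_freq is the policy's new
     * maximum after cpufreq_set_policy(), max_freq the hardware maximum.
     * pressure = max_capacity - max_capacity * capped_freq / max_freq */
    static unsigned long cpufreq_pressure_value(unsigned long max_capacity,
                                                unsigned long capped_freq,
                                                unsigned long max_freq)
    {
            unsigned long capacity = mult_frac(max_capacity, capped_freq,
                                               max_freq);

            return max_capacity - capacity;
    }

Because caps from FREQ_QOS_MAX, cooling devices, or userspace all funnel through the policy update, a single hook there covers every source of pressure, which is the point of the series.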

[PATCH v2 0/5] Rework system pressure interface to the scheduler

2023-12-21 Thread Vincent Guittot
Following the consolidation and cleanup of CPU capacity in [1], this series reworks how the scheduler gets the pressures on CPUs. We need to take into account all pressures applied by cpufreq on the compute capacity of a CPU for dozens of ms or more and not only cpufreq cooling device or HW

Re: [PATCH 0/5] Rework system pressure interface to the scheduler

2023-12-15 Thread Lukasz Luba
acity in [1], this series reworks how the scheduler gets the pressures on CPUs. We need to take into account all pressures applied by cpufreq on the compute capacity of a CPU for dozens of ms or more and not only cpufreq cooling device or HW mitigations. We split the pressure applied on CPU'

Re: [PATCH 1/4] cpufreq: Add a cpufreq pressure feedback for the scheduler

2023-12-14 Thread Vincent Guittot
On Thu, 14 Dec 2023 at 10:20, Lukasz Luba wrote: > > > > On 12/12/23 14:27, Vincent Guittot wrote: > > Provide the scheduler with feedback about the temporary max available > > capacity. Unlike arch_update_thermal_pressure, this doesn't need to be > > filte

Re: [PATCH 1/4] cpufreq: Add a cpufreq pressure feedback for the scheduler

2023-12-14 Thread Lukasz Luba
y->max Agree, cpufreq sysfs scaling_max_freq is also important to handle in this new design. Currently we don't reflect that as reduced CPU capacity in the scheduler. There was discussion when I proposed to feed that CPU frequency reduction into thermal_pressure [1]. The same applie

Re: [PATCH 1/4] cpufreq: Add a cpufreq pressure feedback for the scheduler

2023-12-14 Thread Rafael J. Wysocki
ue to cpufreq cooling or from userspace, we end up limiting the > >> maximum possible frequency, will this routine always get called ? > > > > Yes, any update of a FREQ_QOS_MAX ends up calling cpufreq_set_policy() > > to update the policy->max > > > > Agree, cpu

Re: [PATCH 1/4] cpufreq: Add a cpufreq pressure feedback for the scheduler

2023-12-14 Thread Lukasz Luba
On 12/12/23 14:27, Vincent Guittot wrote: Provide the scheduler with feedback about the temporary max available capacity. Unlike arch_update_thermal_pressure, this doesn't need to be filtered as the pressure will happen for dozens of ms or more. Signed-off-by: Vincent Guittot --- dr

Re: [PATCH 1/4] cpufreq: Add a cpufreq pressure feedback for the scheduler

2023-12-14 Thread Lukasz Luba
Currently we don't reflect that as reduced CPU capacity in the scheduler. There was discussion when I proposed to feed that CPU frequency reduction into thermal_pressure [1]. The same applies to DTPM, which currently lacks the proper impact on the reduced CPU capacity in the scheduler.

Re: [PATCH 0/5] Rework system pressure interface to the scheduler

2023-12-14 Thread Lukasz Luba
w the scheduler gets the pressures on CPUs. We need to take into account all pressures applied by cpufreq on the compute capacity of a CPU for dozens of ms or more and not only cpufreq cooling device or HW mitigations. We split the pressure applied on CPU's capacity in 2 parts: - one from cpufreq an

Re: [PATCH 0/5] Rework system pressure interface to the scheduler

2023-12-14 Thread Vincent Guittot
On Thu, 14 Dec 2023 at 09:21, Lukasz Luba wrote: > > Hi Vincent, > > I've been waiting for this feature, thanks! > > > On 12/12/23 14:27, Vincent Guittot wrote: > > Following the consolidation and cleanup of CPU capacity in [1], this series > > reworks how th

Re: [PATCH 0/5] Rework system pressure interface to the scheduler

2023-12-14 Thread Lukasz Luba
Hi Vincent, I've been waiting for this feature, thanks! On 12/12/23 14:27, Vincent Guittot wrote: Following the consolidation and cleanup of CPU capacity in [1], this series reworks how the scheduler gets the pressures on CPUs. We need to take into account all pressures applied by cpufr

Re: [PATCH 1/4] cpufreq: Add a cpufreq pressure feedback for the scheduler

2023-12-13 Thread Vincent Guittot
On Thu, 14 Dec 2023 at 06:43, Viresh Kumar wrote: > > On 12-12-23, 15:27, Vincent Guittot wrote: > > @@ -2618,6 +2663,9 @@ static int cpufreq_set_policy(struct cpufreq_policy > > *policy, > > policy->max = __resolve_freq(policy, policy->max, CPUFREQ_RELATION_H); > > trace_cpu_frequenc

Re: [PATCH 1/4] cpufreq: Add a cpufreq pressure feedback for the scheduler

2023-12-13 Thread Viresh Kumar
On 12-12-23, 15:27, Vincent Guittot wrote: > @@ -2618,6 +2663,9 @@ static int cpufreq_set_policy(struct cpufreq_policy > *policy, > policy->max = __resolve_freq(policy, policy->max, CPUFREQ_RELATION_H); > trace_cpu_frequency_limits(policy); > > + cpus = policy->related_cpus; > +

Re: [PATCH 1/4] cpufreq: Add a cpufreq pressure feedback for the scheduler

2023-12-13 Thread Viresh Kumar
On 13-12-23, 16:41, Tim Chen wrote: > Seems like the pressure value computed from the first CPU applies to all CPUs. > Will this be valid for non-homogeneous CPUs that could have different > max_freq and max_capacity? They will be part of different cpufreq policies and so it will work fine. -- vir

Re: [PATCH 1/4] cpufreq: Add a cpufreq pressure feedback for the scheduler

2023-12-13 Thread Tim Chen
On Tue, 2023-12-12 at 15:27 +0100, Vincent Guittot wrote: > Provide the scheduler with feedback about the temporary max available > capacity. Unlike arch_update_thermal_pressure, this doesn't need to be > filtered as the pressure will happen for dozens of ms or more. > > Si

Re: [PATCH 1/4] cpufreq: Add a cpufreq pressure feedback for the scheduler

2023-12-13 Thread Vincent Guittot
On Wed, 13 Dec 2023 at 08:17, Viresh Kumar wrote: > > On 12-12-23, 15:27, Vincent Guittot wrote: > > Provide the scheduler with feedback about the temporary max available > > capacity. Unlike arch_update_thermal_pressure, this doesn't need to be > > filtered as the pr

Re: [PATCH 1/4] cpufreq: Add a cpufreq pressure feedback for the scheduler

2023-12-12 Thread Viresh Kumar
On 12-12-23, 15:27, Vincent Guittot wrote: > Provide the scheduler with feedback about the temporary max available > capacity. Unlike arch_update_thermal_pressure, this doesn't need to be > filtered as the pressure will happen for dozens of ms or more. > > Signed-off

[PATCH 1/4] cpufreq: Add a cpufreq pressure feedback for the scheduler

2023-12-12 Thread Vincent Guittot
Provide the scheduler with feedback about the temporary max available capacity. Unlike arch_update_thermal_pressure, this doesn't need to be filtered as the pressure will happen for dozens of ms or more. Signed-off-by: Vincent Guittot --- drivers/cpufreq/cpufreq.c

[PATCH 0/5] Rework system pressure interface to the scheduler

2023-12-12 Thread Vincent Guittot
Following the consolidation and cleanup of CPU capacity in [1], this series reworks how the scheduler gets the pressures on CPUs. We need to take into account all pressures applied by cpufreq on the compute capacity of a CPU for dozens of ms or more and not only cpufreq cooling device or HW

Re: [PATCH] ipvs: add a stateless type of service and a stateless Maglev hashing scheduler

2023-12-06 Thread Dan Carpenter
Hi Lev, kernel test robot noticed the following build warnings: https://git-scm.com/docs/git-format-patch#_base_tree_information] url: https://github.com/intel-lab-lkp/linux/commits/Lev-Pantiukhin/ipvs-add-a-stateless-type-of-service-and-a-stateless-Maglev-hashing-scheduler/20231204-232344

Re: [PATCH] ipvs: add a stateless type of service and a stateless Maglev hashing scheduler

2023-12-06 Thread Julian Anastasov
Hello, On Mon, 4 Dec 2023, Lev Pantiukhin wrote: > +#define IP_VS_SVC_F_STATELESS 0x0040 /* stateless scheduling */ I have another idea for the traffic that does not need per-client state. We need some per-dest cp to forward the packet. If we replace the cp->

Re: [PATCH] ipvs: add a stateless type of service and a stateless Maglev hashing scheduler

2023-12-06 Thread kernel test robot
patch, we suggest to use '--base' as documented in https://git-scm.com/docs/git-format-patch#_base_tree_information] url: https://github.com/intel-lab-lkp/linux/commits/Lev-Pantiukhin/ipvs-add-a-stateless-type-of-service-and-a-stateless-Maglev-hashing-scheduler/20231204-232344 bas

Re: [PATCH] ipvs: add a stateless type of service and a stateless Maglev hashing scheduler

2023-12-05 Thread Julian Anastasov
Hello, On Mon, 4 Dec 2023, Lev Pantiukhin wrote: > Maglev Hashing Stateless > > Introduction > > This patch to the Linux kernel provides the following changes to IPVS: > > 1. Adds a new type (IP_VS_SVC_F_STATELESS) of

[PATCH] ipvs: add a stateless type of service and a stateless Maglev hashing scheduler

2023-12-04 Thread Lev Pantiukhin
Maglev Hashing Stateless. Introduction: This patch to the Linux kernel provides the following changes to IPVS: 1. Adds a new type (IP_VS_SVC_F_STATELESS) of scheduler that computes the need for connection entry addition; 2. Adds a new mhs (Maglev Hashing
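
For background, the core of Maglev hashing is a consistent lookup table populated from per-backend permutations. The sketch below follows the published Maglev algorithm rather than the patch's code; the hash functions and sizes are placeholders:

    #include <stdint.h>
    #include <string.h>

    #define M 13    /* lookup-table size, must be prime */
    #define N 3     /* number of real servers */

    /* offset[i] = h1(name_i) % M, skip[i] = h2(name_i) % (M - 1) + 1 */
    static void maglev_populate(const uint32_t offset[N],
                                const uint32_t skip[N], int entry[M])
    {
            uint32_t next[N] = { 0 };
            int filled = 0;

            memset(entry, -1, M * sizeof(entry[0]));
            for (;;) {
                    for (int i = 0; i < N; i++) {
                            uint32_t c = (offset[i] + next[i] * skip[i]) % M;

                            while (entry[c] >= 0) { /* preferred slot taken */
                                    next[i]++;
                                    c = (offset[i] + next[i] * skip[i]) % M;
                            }
                            entry[c] = i; /* slot c maps to server i */
                            next[i]++;
                            if (++filled == M)
                                    return; /* table full */
                    }
            }
    }

A packet then picks entry[hash(5-tuple) % M], so no per-connection state is needed as long as the server set is stable, which is what makes a stateless service type attractive.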

RE: [RFC PATCH v5 4/4] scheduler: Add cluster scheduler level for x86

2021-04-20 Thread Song Bao Hua (Barry Song)
Subject: Re: [RFC PATCH v5 4/4] scheduler: Add cluster scheduler level for x86 > > On 3/23/21 4:21 PM, Song Bao Hua (Barry Song) wrote: > >> On 3/18/21 9:16 PM, Barry Song wrote: > >>> From: Tim

Re: [RFC PATCH v5 4/4] scheduler: Add cluster scheduler level for x86

2021-04-20 Thread Tim Chen
tasks are woken up >>> in the same L2 cluster, we will benefit from keeping tasks >>> related to each other and likely sharing data in the same L2 >>> cluster. >>> >>> Add CPU masks of CPUs sharing the L2 cache so we can build such >>> L2 cluster s

[RFC PATCH v6 3/4] scheduler: scan idle cpu in cluster for tasks within one LLC

2021-04-19 Thread Barry Song
On kunpeng920, cpus within one cluster can communicate with each other much faster than cpus across different clusters. A simple hackbench can prove that. hackbench running on 4 cpus in single one cluster and 4 cpus in different clusters shows a large contrast: (1) within a cluster: root@ubuntu:~# t
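
The shape of the change, heavily simplified: scan the target's cluster for an idle CPU before falling back to the full-LLC scan. This sketch uses the kernel's cluster topology helpers; the real patch wires this into select_idle_cpu():

    /* Illustrative only: prefer an idle CPU in the same cluster. */
    static int select_idle_cpu_cluster(struct task_struct *p, int target)
    {
            int cpu;

            for_each_cpu_wrap(cpu, cpu_clustergroup_mask(target), target) {
                    if (cpumask_test_cpu(cpu, p->cpus_ptr) &&
                        available_idle_cpu(cpu))
                            return cpu; /* stay inside the fast cluster */
            }
            return -1; /* fall back to scanning the whole LLC */
    }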

[RFC PATCH v6 4/4] scheduler: Add cluster scheduler level for x86

2021-04-19 Thread Barry Song
. Also with cluster scheduling policy where tasks are woken up in the same L2 cluster, we will benefit from keeping tasks related to each other and likely sharing data in the same L2 cluster. Add CPU masks of CPUs sharing the L2 cache so we can build such L2 cluster scheduler domain. Signed-off-by

[RFC PATCH v6 2/4] scheduler: add scheduler level for clusters

2021-04-19 Thread Barry Song
i-core CPU chips at a cost of slightly increased overhead in some places. If unsure say N here. +config SCHED_CLUSTER + bool "Cluster scheduler support" + help + Cluster scheduler support improves the CPU scheduler's decision + making when dealin
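
The Kconfig option above gates a new level in the scheduler's default topology table. As eventually merged it sits between SMT and MC, roughly as follows (simplified from kernel/sched/topology.c):

    static struct sched_domain_topology_level default_topology[] = {
    #ifdef CONFIG_SCHED_SMT
            { cpu_smt_mask, cpu_smt_flags, SD_INIT_NAME(SMT) },
    #endif
    #ifdef CONFIG_SCHED_CLUSTER
            /* the new level: CPUs sharing a cluster (e.g. an L2 or L3 tag) */
            { cpu_clustergroup_mask, cpu_cluster_flags, SD_INIT_NAME(CLS) },
    #endif
    #ifdef CONFIG_SCHED_MC
            { cpu_coregroup_mask, cpu_core_flags, SD_INIT_NAME(MC) },
    #endif
            { cpu_cpu_mask, SD_INIT_NAME(DIE) },
            { NULL, },
    };

The ordering matters: each level's mask must be a subset of the next, so CLS has to nest between the SMT siblings and the LLC-wide MC domain.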

[RFC PATCH v6 0/4] scheduler: expose the topology of clusters and add cluster scheduler

2021-04-19 Thread Barry Song
modification of the wakeup path is the root of the hackbench improvement; especially with g=14 where there should not be many idle CPUs with 14*40 tasks on at most 32 CPUs." -v5: * split "add scheduler level for clusters" into two patches to evaluate the impact of spreading

[PATCH 24/40] drm/scheduler/sched_entity: Fix some function name disparity

2021-04-16 Thread Lee Jones
Fixes the following W=1 kernel build warning(s): drivers/gpu/drm/scheduler/sched_entity.c:204: warning: expecting prototype for drm_sched_entity_kill_jobs(). Prototype was for drm_sched_entity_kill_jobs_cb() instead drivers/gpu/drm/scheduler/sched_entity.c:262: warning: expecting prototype

[PATCH v2 3/5] kvfree_rcu: Add a bulk-list check when a scheduler is run

2021-04-15 Thread Uladzislau Rezki (Sony)
RCU_SCHEDULER_RUNNING is set once scheduling is available. That signal is used to check for and queue a "monitor work" to reclaim freed objects (if any) during the boot-up phase. We have it because the main path of the kvfree_rcu() call cannot queue the work until the sched
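
The gist of the check being added, as a simplified sketch rather than the exact hunk; the field names follow the kvfree_rcu code of that era:

    /* Only queue the monitor work once the scheduler is running;
     * before that, freed objects simply accumulate on the bulk lists
     * and are reclaimed once the work can actually be scheduled. */
    static void kvfree_queue_monitor(struct kfree_rcu_cpu *krcp)
    {
            if (rcu_scheduler_active == RCU_SCHEDULER_RUNNING &&
                !krcp->monitor_todo) {
                    krcp->monitor_todo = true;
                    schedule_delayed_work(&krcp->monitor_work,
                                          KFREE_DRAIN_JIFFIES);
            }
    }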

[PATCH 4/6] kvfree_rcu: add a bulk-list check when a scheduler is run

2021-04-14 Thread Uladzislau Rezki (Sony)
RCU_SCHEDULER_RUNNING is set once scheduling is available. That signal is used to check for and queue a "monitor work" to reclaim freed objects (if any) during the boot-up phase. We have it because the main path of the kvfree_rcu() call cannot queue the work until the sched

Re: [RFC PATCH v3 0/2] scheduler: expose the topology of clusters and add cluster scheduler

2021-04-13 Thread Tim Chen
On 4/13/21 3:45 AM, Song Bao Hua (Barry Song) wrote: > > > > Right now in the main cases of using wake_affine to achieve > better performance, processes are actually bound within one > numa which is also a LLC in kunpeng920. > > Probably LLC=NUMA is also true for X86 Jacobsville, Tim? In ge

RE: [RFC PATCH v3 0/2] scheduler: expose the topology of clusters and add cluster scheduler

2021-04-13 Thread Song Bao Hua (Barry Song)
Subject: Re: [RFC PATCH v3 0/2] scheduler: expose the topology of clusters and add cluster scheduler

Re: [PATCH 1/2] usb: xhci-mtk: remove unnecessary assignments in periodic TT scheduler

2021-04-05 Thread Chunfeng Yun
On Mon, 2021-04-05 at 09:04 +0200, Greg Kroah-Hartman wrote: > On Wed, Mar 31, 2021 at 04:30:55PM +0800, Chunfeng Yun wrote: > > cc Yaqii Wu > > > > I'll test it, thanks > > Did you test this series and find any problems? If not, I'll go queue > these up... Yes, found an issue on the start-spl

Re: [PATCH 1/2] usb: xhci-mtk: remove unnecessary assignments in periodic TT scheduler

2021-04-05 Thread Greg Kroah-Hartman
On Wed, Mar 31, 2021 at 04:30:55PM +0800, Chunfeng Yun wrote: > cc Yaqii Wu > > I'll test it, thanks Did you test this series and find any problems? If not, I'll go queue these up... thanks, greg k-h

RE: [RFC PATCH v5 4/4] scheduler: Add cluster scheduler level for x86

2021-03-31 Thread Song Bao Hua (Barry Song)

Re: [PATCH 1/2] usb: xhci-mtk: remove unnecessary assignments in periodic TT scheduler

2021-03-31 Thread Chunfeng Yun
cc Yaqii Wu I'll test it, thanks On Tue, 2021-03-30 at 16:06 +0800, Ikjoon Jang wrote: > Remove unnecessary variables in check_sch_bw(). > No functional changes, just for better readability. > > Signed-off-by: Ikjoon Jang > --- > > drivers/usb/host/xhci-mtk-sch.c | 52 +-

[PATCH 1/2] usb: xhci-mtk: remove unnecessary assignments in periodic TT scheduler

2021-03-30 Thread Ikjoon Jang
Remove unnecessary variables in check_sch_bw(). No functional changes, just for better readability. Signed-off-by: Ikjoon Jang --- drivers/usb/host/xhci-mtk-sch.c | 52 + 1 file changed, 21 insertions(+), 31 deletions(-) diff --git a/drivers/usb/host/xhci-mtk-sc

[PATCH 21/23] scheduler: sched-nice-design.rst: Fix a typo

2021-03-28 Thread Bhaskar Chowdhury
s/assymetry/asymmetry/ Signed-off-by: Bhaskar Chowdhury --- Documentation/scheduler/sched-nice-design.rst | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/Documentation/scheduler/sched-nice-design.rst b/Documentation/scheduler/sched-nice-design.rst index 0571f1b47e64

[PATCH 20/23] scheduler: sched-bwc.rst: Fix a typo

2021-03-28 Thread Bhaskar Chowdhury
s/simultanously/simultaneously/ Signed-off-by: Bhaskar Chowdhury --- Documentation/scheduler/sched-bwc.rst | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/Documentation/scheduler/sched-bwc.rst b/Documentation/scheduler/sched-bwc.rst index 845eee659199..a7f9be925ab8 100644

RE: [RFC PATCH v5 4/4] scheduler: Add cluster scheduler level for x86

2021-03-23 Thread Song Bao Hua (Barry Song)
Subject: Re: [RFC PATCH v5 4/4] scheduler: Add cluster scheduler level for x86 > > On 3/18/21 9:16 PM, Barry Song wrote: > > From: Tim Chen > > > > There are x86 CPU architectures (e.g. Jacobsville) where L2 cache >

Re: [RFC PATCH v5 4/4] scheduler: Add cluster scheduler level for x86

2021-03-23 Thread Tim Chen
er. > > Add CPU masks of CPUs sharing the L2 cache so we can build such > L2 cluster scheduler domain. > > Signed-off-by: Tim Chen > Signed-off-by: Barry Song Barry, Can you also add this chunk to the patch. Thanks. Tim diff --git a/arch/x86/include/asm/topology.h b/arch

RE: [RFC PATCH v5 3/4] scheduler: scan idle cpu in cluster before scanning the whole llc

2021-03-19 Thread Song Bao Hua (Barry Song)
Subject: [RFC PATCH v5 3/4] scheduler: scan idle cpu in cluster before scanning the whole llc > > On kunpeng920, cpus within one cluster can communicate with each other > much faster than cpus across different clusters. A simple ha

[RFC PATCH v5 4/4] scheduler: Add cluster scheduler level for x86

2021-03-18 Thread Barry Song
. Also with cluster scheduling policy where tasks are woken up in the same L2 cluster, we will benefit from keeping tasks related to each other and likely sharing data in the same L2 cluster. Add CPU masks of CPUs sharing the L2 cache so we can build such L2 cluster scheduler domain. Signed-off-by

[RFC PATCH v5 0/4] scheduler: expose the topology of clusters and add cluster scheduler

2021-03-18 Thread Barry Song
[truncated ASCII diagram of the cluster topology] -v5: * split "add scheduler level for clusters" into two patches to evaluate the impact of spreading and gathering separately; * add a tracepoint of select_idle_cpu

[RFC PATCH v5 3/4] scheduler: scan idle cpu in cluster before scanning the whole llc

2021-03-18 Thread Barry Song
On kunpeng920, cpus within one cluster can communicate with each other much faster than cpus across different clusters. A simple hackbench can prove that. hackbench running on 4 cpus in single one cluster and 4 cpus in different clusters shows a large contrast: (1) within a cluster: root@ubuntu:~# t

[RFC PATCH v5 2/4] scheduler: add scheduler level for clusters

2021-03-18 Thread Barry Song
reased overhead in some places. If unsure say N here. +config SCHED_CLUSTER + bool "Cluster scheduler support" + help + Cluster scheduler support improves the CPU scheduler's decision + making when dealing with machines that have clusters (sharing interna

RE: [RFC PATCH v4 2/3] scheduler: add scheduler level for clusters

2021-03-16 Thread Song Bao Hua (Barry Song)
Subject: Re: [RFC PATCH v4 2/3] scheduler: add scheduler level for clusters > > On Tue, Mar 02, 2021 at 11:59:39AM +1300, Barry Song wrote: > > diff --git a/kernel/sched/core.c b/kernel/sched/core.c > > index 88a2e2b..d805e59 100644 >

Re: [Linuxarm] Re: [RFC PATCH v4 3/3] scheduler: Add cluster scheduler level for x86

2021-03-15 Thread Tim Chen
> It seems sensible the more CPU we get in the cluster, the more > we need the kernel to be aware of its existence. > > Tim, it is possible for you to bring up the cpu_cluster_mask and > cluster_sibling for x86 so that the topology can be represented > in sysfs and be use

RE: [Linuxarm] Re: [RFC PATCH v4 3/3] scheduler: Add cluster scheduler level for x86

2021-03-08 Thread Song Bao Hua (Barry Song)
Subject: [Linuxarm] Re: [RFC PATCH v4 3/3] scheduler: Add cluster scheduler level for x86 > > On 3/2/21 2:30 AM, Peter Zijlstra wrote: > > On Tue, Mar 02, 2021 at 11:59:40AM +1300, Barry Song wrote: > >> From: Tim Chen

RE: [RFC PATCH v4 2/3] scheduler: add scheduler level for clusters

2021-03-08 Thread Song Bao Hua (Barry Song)
Subject: Re: [RFC PATCH v4 2/3] scheduler: add scheduler level for clusters > > On Tue, 2 Mar 202

Re: [RFC PATCH v4 2/3] scheduler: add scheduler level for clusters

2021-03-08 Thread Vincent Guittot
9568b..158b0fa 100644 > --- a/arch/arm64/Kconfig > +++ b/arch/arm64/Kconfig > @@ -971,6 +971,13 @@ config SCHED_MC > making when dealing with multi-core CPU chips at a cost of slightly > increased overhead in some places. If unsure say N here. > > +config

Re: [RFC PATCH v4 3/3] scheduler: Add cluster scheduler level for x86

2021-03-03 Thread Tim Chen
On 3/2/21 2:30 AM, Peter Zijlstra wrote: > On Tue, Mar 02, 2021 at 11:59:40AM +1300, Barry Song wrote: >> From: Tim Chen >> >> There are x86 CPU architectures (e.g. Jacobsville) where L2 cache >> is shared among a cluster of cores instead of being exclusive >> to one single core. > > Isn't tha

Re: [RFC PATCH v4 2/3] scheduler: add scheduler level for clusters

2021-03-02 Thread Peter Zijlstra
On Tue, Mar 02, 2021 at 11:59:39AM +1300, Barry Song wrote: > diff --git a/kernel/sched/core.c b/kernel/sched/core.c > index 88a2e2b..d805e59 100644 > --- a/kernel/sched/core.c > +++ b/kernel/sched/core.c > @@ -7797,6 +7797,16 @@ int sched_cpu_activate(unsigned int cpu) > if (cpumask_weight(c

Re: [RFC PATCH v4 3/3] scheduler: Add cluster scheduler level for x86

2021-03-02 Thread Peter Zijlstra
On Tue, Mar 02, 2021 at 11:59:40AM +1300, Barry Song wrote: > From: Tim Chen > > There are x86 CPU architectures (e.g. Jacobsville) where L2 cache > is shared among a cluster of cores instead of being exclusive > to one single core. Isn't that most atoms one way or another? Tremont seems to have

[RFC PATCH v4 3/3] scheduler: Add cluster scheduler level for x86

2021-03-01 Thread Barry Song
. Also with cluster scheduling policy where tasks are woken up in the same L2 cluster, we will benefit from keeping tasks related to each other and likely sharing data in the same L2 cluster. Add CPU masks of CPUs sharing the L2 cache so we can build such L2 cluster scheduler domain. Signed-off-by

[RFC PATCH v4 2/3] scheduler: add scheduler level for clusters

2021-03-01 Thread Barry Song
ith multi-core CPU chips at a cost of slightly increased overhead in some places. If unsure say N here. +config SCHED_CLUSTER + bool "Cluster scheduler support" + help + Cluster scheduler support improves the CPU scheduler's decision + m

[RFC PATCH v4 0/3] scheduler: expose the topology of clusters and add cluster scheduler

2021-03-01 Thread Barry Song
tasks * avoided the iteration of sched_domain by moving to static_key (addressing Vincent's comment) * used acpi_cpu_id for acpi_find_processor_node (addressing Masa's comment) Barry Song (1): scheduler: add scheduler level for clusters Jonathan Cameron (1): topology: Represent c

Re: [GIT PULL] scheduler updates for v5.12

2021-02-21 Thread pr-tracker-bot
The pull request you sent on Wed, 17 Feb 2021 14:43:23 +0100: > git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git > sched-core-2021-02-17 has been merged into torvalds/linux.git: https://git.kernel.org/torvalds/c/657bd90c93146a929c69cd43addf2804eb70c926 Thank you! -- Deet-doot-dot, I

[GIT PULL] scheduler updates for v5.12

2021-02-17 Thread Ingo Molnar
Linus, Please pull the latest sched/core git tree from: git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git sched-core-2021-02-17 # HEAD: c5e6fc08feb2b88dc5dac2f3c817e1c2a4cafda4 sched,x86: Allow !PREEMPT_DYNAMIC Scheduler updates for v5.12: [ NOTE: unfortunately this tree had

Re: [RFC PATCH v3 0/2] scheduler: expose the topology of clusters and add cluster scheduler

2021-02-16 Thread Tim Chen
d domain for x86. Thanks. Tim >8-- From 9189e489b019e110ee6e9d4183e243e48f44ff25 Mon Sep 17 00:00:00 2001 From: Tim Chen Date: Tue, 16 Feb 2021 08:24:39 -0800 Subject: [RFC PATCH] scheduler: Add cluster scheduler level for x86 Cc: Jonathan Ca

RE: [RFC PATCH v3 0/2] scheduler: expose the topology of clusters and add cluster scheduler

2021-02-03 Thread Song Bao Hua (Barry Song)
Subject: Re: [RFC PATCH v3 0/2] scheduler: expose the topology of clusters and add cluster schedu

Re: [RFC PATCH v3 0/2] scheduler: expose the topology of clusters and add cluster scheduler

2021-01-26 Thread Dietmar Eggemann

RE: [RFC PATCH v3 0/2] scheduler: expose the topology of clusters and add cluster scheduler

2021-01-25 Thread Song Bao Hua (Barry Song)
Subject: Re: [RFC PATCH v3 0/2] scheduler: expose the topology of clusters and add c

RE: [RFC PATCH v3 0/2] scheduler: expose the topology of clusters and add cluster scheduler

2021-01-25 Thread Song Bao Hua (Barry Song)
Subject: Re: [RFC PATCH v3 0/2] scheduler: expose the topology of clusters and

Re: [RFC PATCH v3 0/2] scheduler: expose the topology of clusters and add cluster scheduler

2021-01-12 Thread Dietmar Eggemann

Re: [RFC PATCH v3 0/2] scheduler: expose the topology of clusters and add cluster scheduler

2021-01-12 Thread Dietmar Eggemann
On 11/01/2021 10:28, Morten Rasmussen wrote: > On Fri, Jan 08, 2021 at 12:22:41PM -0800, Tim Chen wrote: >> >> >> On 1/8/21 7:12 AM, Morten Rasmussen wrote: >>> On Thu, Jan 07, 2021 at 03:16:47PM -0800, Tim Chen wrote: On 1/6/21 12:30 AM, Barry Song wrote: [...] >> I think it is going to dep

Re: [PATCH] iosched: Add i10 I/O Scheduler

2021-01-11 Thread Rachit Agarwal
4:56 PM Sagi Grimberg wrote: >>>> But if you think this has a better home, I'm assuming that the guys >>>> will be open to that. >>> Also see the reply from Ming. I

Re: [RFC PATCH v3 0/2] scheduler: expose the topology of clusters and add cluster scheduler

2021-01-11 Thread Morten Rasmussen
On Fri, Jan 08, 2021 at 12:22:41PM -0800, Tim Chen wrote: > > > On 1/8/21 7:12 AM, Morten Rasmussen wrote: > > On Thu, Jan 07, 2021 at 03:16:47PM -0800, Tim Chen wrote: > >> On 1/6/21 12:30 AM, Barry Song wrote: > >>> ARM64 server chip Kunpeng 920 has 6 clusters in each NUMA node, and each > >>>

RE: [RFC PATCH v3 0/2] scheduler: expose the topology of clusters and add cluster scheduler

2021-01-08 Thread Song Bao Hua (Barry Song)
Subject: Re: [RFC PATCH v3 0/2] scheduler: expose the topology of clusters and ad

Re: [RFC PATCH v3 0/2] scheduler: expose the topology of clusters and add cluster scheduler

2021-01-08 Thread Tim Chen
On 1/8/21 7:12 AM, Morten Rasmussen wrote: > On Thu, Jan 07, 2021 at 03:16:47PM -0800, Tim Chen wrote: >> On 1/6/21 12:30 AM, Barry Song wrote: >>> ARM64 server chip Kunpeng 920 has 6 clusters in each NUMA node, and each >>> cluster has 4 cpus. All clusters share L3 cache data while each cluster

Re: [RFC PATCH v3 0/2] scheduler: expose the topology of clusters and add cluster scheduler

2021-01-08 Thread Morten Rasmussen
On Thu, Jan 07, 2021 at 03:16:47PM -0800, Tim Chen wrote: > On 1/6/21 12:30 AM, Barry Song wrote: > > ARM64 server chip Kunpeng 920 has 6 clusters in each NUMA node, and each > > cluster has 4 cpus. All clusters share L3 cache data while each cluster > > has local L3 tag. On the other hand, each cl

Re: [RFC PATCH v3 0/2] scheduler: expose the topology of clusters and add cluster scheduler

2021-01-07 Thread Tim Chen
From: Tim Chen Date: Wed, 19 Aug 2020 16:22:35 -0700 Subject: [RFC PATCH 1/2] sched: Add L2 cache cpu mask There are x86 CPU architectures (e.g. Jacobsville) where L2 cache is shared among a group of cores instead of being exclusive to one single core. To prevent oversub

RE: [RFC PATCH v3 2/2] scheduler: add scheduler level for clusters

2021-01-06 Thread Song Bao Hua (Barry Song)
Subject: Re: [RFC PATCH v3 2/2] scheduler: add scheduler level for clusters > > On Wed, 6 Jan 2021 at 09:35, Barry Song wrote: > > > > ARM64 server chip Kunpeng 920 has 6 clusters in each NUMA node, and each > > cluster ha

Re: [RFC PATCH v3 2/2] scheduler: add scheduler level for clusters

2021-01-06 Thread Vincent Guittot
utside the cluster: > target cpu > 19 -> 17 > 13 -> 15 > 23 -> 20 > 23 -> 20 > 19 -> 17 > 13 -> 15 > 16 -> 17 > 19 -> 17 > 7 -> 5 > 10 -> 11 > 23 -> 20 > *23 -> 4 > ... > > Signed-off-by: Barr

[RFC PATCH v3 2/2] scheduler: add scheduler level for clusters

2021-01-06 Thread Barry Song
igned-off-by: Barry Song --- -v3: - rebased against 5.11-rc2 - with respect to the comments of Valentin Schneider, Peter Zijlstra, Vincent Guittot and Mel Gorman etc. * moved the scheduler changes from arm64 to the common place for all architectures. * added SD_SHARE_CLS_

[RFC PATCH v3 0/2] scheduler: expose the topology of clusters and add cluster scheduler

2021-01-06 Thread Barry Song
524 The score is 4.285 vs 5.524; shorter time means better performance. All this testing implies that we should let the Linux scheduler use this topology to make better load balancing and WAKE_AFFINE decisions. However, the current scheduler has no idea of clusters. This patchset expos

Linux-5.10-ck1, MuQSS CPU scheduler v0.205

2021-01-03 Thread Con Kolivas
Just reminding people I'm still around and maintaining this patchset. Announcing a new -ck release, 5.10-ck1 with the latest version of the Multiple Queue Skiplist Scheduler, version 0.205 These are patches designed to improve system responsiveness and interactivity with specific emphasis o

Re: [GIT PULL] scheduler fix

2020-12-27 Thread pr-tracker-bot
The pull request you sent on Sun, 27 Dec 2020 10:16:01 +0100: > git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git > sched-urgent-2020-12-27 has been merged into torvalds/linux.git: https://git.kernel.org/torvalds/c/3b80dee70eaa5f9a120db058c30cc8e63c443571 Thank you! -- Deet-doot-dot,

[GIT PULL] scheduler fix

2020-12-27 Thread Ingo Molnar
Linus, Please pull the latest sched/urgent git tree from: git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git sched-urgent-2020-12-27 # HEAD: ae7927023243dcc7389b2d59b16c09cbbeaecc36 sched: Optimize finish_lock_switch() Fix a context switch performance regression. Thanks,

RE: [RFC PATCH v2 2/2] scheduler: add scheduler level for clusters

2020-12-09 Thread Song Bao Hua (Barry Song)

Re: [PATCH V5 0/3] cpufreq_cooling: Get effective CPU utilization from scheduler

2020-12-08 Thread Viresh Kumar
On 08-12-20, 15:50, Peter Zijlstra wrote: > On Tue, Dec 08, 2020 at 09:46:54AM +0530, Viresh Kumar wrote: > > Viresh Kumar (3): > > sched/core: Move schedutil_cpu_util() to core.c > > sched/core: Rename schedutil_cpu_util() and allow rest of the kernel > > to use it > > thermal: cpufreq_c
