Hi,
On 06/08/18 11:20, Vincent Guittot wrote:
> Hi Valentin,
>
> On Tue, 31 Jul 2018 at 14:33, Valentin Schneider
> wrote:
>>
>> Hi,
>>
>> On 31/07/18 13:17, Vincent Guittot wrote:
>> [...]
>>>>
>>>> This can easily happen w
Hi,
On 24/07/18 13:25, Quentin Perret wrote:
> Add another member to the family of per-cpu sched_domain shortcut
> pointers. This one, sd_ea, points to the lowest level at which energy
> aware scheduling should be used.
>
> Generally speaking, the largest opportunity to save energy via scheduling
Hi,
On 09/07/18 16:08, Morten Rasmussen wrote:
> On Fri, Jul 06, 2018 at 12:18:27PM +0200, Vincent Guittot wrote:
>> Hi Morten,
>>
>> On Wed, 4 Jul 2018 at 12:18, Morten Rasmussen
>> wrote:
>>> [...]
>> As already said, I'm not convinced by the proposal, which seems quite
>> complex and also add
Hi Peter,
On 31/07/18 13:00, Peter Zijlstra wrote:
>
>
> Aside from the first patch, which I posted the change on, I've picked up
> until 10. I think that other SD_ASYM patch-set replaces 11 and 12,
> right?
>
11 is no longer needed, but AFAICT we still need 12 - we don't want
PREFER_SIBLING to
Hi,
On 31/07/18 13:17, Vincent Guittot wrote:
> On Fri, 6 Jul 2018 at 16:31, Morten Rasmussen
> wrote:
>>
>> On Fri, Jul 06, 2018 at 12:18:17PM +0200, Vincent Guittot wrote:
>>> [...]
>>
>> Scheduling one task per cpu when n_task == n_cpus on asymmetric
>> topologies is generally broken already
(Resending because I snuck in some HTML... Apologies)
On 01/30/2018 08:32 AM, Vincent Guittot wrote:
On 29 January 2018 at 20:31, Valentin Schneider
wrote:
Hi Vincent, Peter,
I've been running some tests on your patches (Peter's base + the 2 from
Vincent). The results themselves
Hi Vincent, Peter,
I've been running some tests on your patches (Peter's base + the 2 from
Vincent). The results themselves are hosted at [1].
The base of those tests is the same: a task ("accumulator") is run for 5
seconds (arbitrary value) to accumulate some load, then goes to sleep
for .5 s
On 26/09/18 11:33, Vincent Guittot wrote:
> On Wed, 26 Sep 2018 at 11:35, Valentin Schneider
> [...]
>>> Can you give us details about the use case that you care about ?
>>>
>>
>> It's the same as I presented last week - devlib (some python target
>
The alignment of the condition is off, clean that up.
Also, logical operators have lower precedence than bitwise/relational
operators, so remove one layer of parentheses to make the condition a
bit simpler to follow.
Signed-off-by: Valentin Schneider
---
kernel/sched/fair.c | 6 +++---
1 file
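To illustrate the precedence point with a throwaway example (made-up
variables, not the actual fair.c condition; the same holds for bitwise
& / | versus && / ||):

#include <stdio.h>

int main(void)
{
	int nr_running = 2, capacity = 800, threshold = 1000;

	/*
	 * Relational operators bind tighter than && / ||, so the extra
	 * layer of parentheses changes nothing:
	 */
	int with_parens    = ((nr_running > 1) && (capacity < threshold));
	int without_parens = (nr_running > 1 && capacity < threshold);

	printf("%d %d\n", with_parens, without_parens);	/* prints "1 1" */
	return 0;
}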
ease the balance interval when going
through a newidle balance.
This is a similar approach to what is done in commit 58b26c4c0257
("sched: Increment cache_nice_tries only on periodic lb"), where we
disregard newidle balance and rely on periodic balance for more stable
results.
Signed-
On 19/11/2018 17:31, Steven Sistare wrote:
[...]
>>> +#define IF_SMP(statement) statement
>>> +
>>
>> I'm not too hot on those IF_SMP() macros. Since you're not introducing
>> any other user for them, what about an inline function for rq->idle_stamp
>> setting? When it's mapped to an empty statem
On 19/11/2018 17:33, Steven Sistare wrote:
[...]
>>
>> Thinking about misfit stealing, we can't use the sd_llc_shared's because
>> on big.LITTLE misfit migrations happen across LLC domains.
>>
>> I was thinking of adding a misfit sparsemask to the root_domain, but
>> then I thought we could do the
On 19/11/2018 17:32, Steven Sistare wrote:
> On 11/9/2018 12:38 PM, Valentin Schneider wrote:
>> Hi Steve,
>>
>> On 09/11/2018 12:50, Steve Sistare wrote:
>> [...]
>>> @@ -482,6 +484,10 @@ static void update_top_cache_domain(int cpu)
&
Hi Steve,
On 09/11/2018 12:50, Steve Sistare wrote:
> From: Steve Sistare
>
> Define and initialize a sparse bitmap of overloaded CPUs, per
> last-level-cache scheduling domain, for use by the CFS scheduling class.
> Save a pointer to cfs_overload_cpus in the rq for efficient access.
>
> Signed
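Roughly, the idea can be modelled in userspace like this (a toy sketch,
not the kernel code: single LLC, no locking or atomics, all struct
layouts invented; only the cfs_overload_cpus / overload_{set,clear}
names come from the series):

#include <stdio.h>

#define NR_CPUS 8

struct llc_shared {
	unsigned long overload_mask;	/* one bit per CPU in the LLC */
};

struct rq_model {
	int cpu;
	struct llc_shared *cfs_overload_cpus;	/* cached shortcut */
};

static struct llc_shared llc;		/* single LLC for the sketch */
static struct rq_model rqs[NR_CPUS];

static void overload_set(struct rq_model *rq)
{
	rq->cfs_overload_cpus->overload_mask |= 1UL << rq->cpu;
}

static void overload_clear(struct rq_model *rq)
{
	rq->cfs_overload_cpus->overload_mask &= ~(1UL << rq->cpu);
}

int main(void)
{
	for (int i = 0; i < NR_CPUS; i++)
		rqs[i] = (struct rq_model){ .cpu = i, .cfs_overload_cpus = &llc };

	overload_set(&rqs[2]);
	overload_set(&rqs[5]);
	overload_clear(&rqs[2]);

	/* an idle CPU would scan this mask for a victim to steal from */
	printf("overloaded CPUs: %#lx\n", llc.overload_mask);
	return 0;
}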
Hi Steve,
On 06/12/2018 16:40, Steven Sistare wrote:
> [...]
>>
>> Ah yes, that would work. Thing is, I had excluded having the misfit masks
>> being in the sd_llc_shareds, since from a logical standpoint they don't
>> really belong there.
>>
>> With asymmetric CPU capacities we kind of disregard
Hi Steve,
On 06/12/2018 21:28, Steve Sistare wrote:
[...]
FYI git gets lost when it comes to applying this one on tip/sched/core
(v4.20-rc5 based), but first applying it on rc1 then rebasing the stack
on rc5 works fine.
[...]
Hi Steve,
On 06/12/2018 21:28, Steve Sistare wrote:
[...]
> @@ -6392,6 +6422,7 @@ static int wake_cap(struct task_struct *p, int cpu, int prev_cpu)
> static int
> select_task_rq_fair(struct task_struct *p, int prev_cpu, int sd_flag, int wake_flags)
> {
> + unsigned long time = schedst
Hi Steve,
On 06/12/2018 21:28, Steve Sistare wrote:
[...]
> @@ -1621,7 +1626,22 @@ static void __sdt_free(const struct cpumask *cpu_map)
>
> static int sd_llc_alloc(struct sched_domain *sd)
> {
> - /* Allocate sd->shared data here. Empty for now. */
> + struct sched_domain_shared *sds
Hi Steve,
On 06/12/2018 21:28, Steve Sistare wrote:
[...]
> @@ -3724,6 +3725,28 @@ static inline void update_misfit_status(struct task_struct *p, struct rq *rq)
> rq->misfit_task_load = task_h_load(p);
> }
>
> +static void overload_clear(struct rq *rq)
Nitpicky nit: cfs_overload_{clea
Hi Steve,
On 06/12/2018 21:28, Steve Sistare wrote:
[...]
> @@ -6778,20 +6791,22 @@ static void check_preempt_wakeup(struct rq *rq, struct task_struct *p, int wake_
> update_misfit_status(NULL, rq);
>
> /*
> - * We must set idle_stamp _before_ calling idle_balance(), such tha
(new_tasks == 0 &&
+ (!static_branch_unlikely(&sched_energy_present) ||
READ_ONCE(rq->rd->overutilized))
new_tasks = try_steal(rq, rf);
if (new_tasks)
-8<-
It all looks good from my end - if things were to go wrong on big.LITTLE
platforms it
On 07/12/2018 22:35, Steven Sistare wrote:
[...]
>>> + if (!sds->cfs_overload_cpus) {
>>> + mask = sparsemask_alloc_node(nr_cpu_ids, 3, flags, nid);
>> ^^ ^^^
>> (1)(2)
>>
>> (1): I
On 07/12/2018 22:36, Steven Sistare wrote:
> On 12/7/2018 3:21 PM, Valentin Schneider wrote:
>> Hi Steve,
>>
>> On 06/12/2018 21:28, Steve Sistare wrote:
>> [...]
>>> @@ -6778,20 +6791,22 @@ static void check_preempt_wakeup(struct rq *rq,
On 07/12/2018 22:35, Steven Sistare wrote:
[...]
>>> @@ -4468,8 +4495,12 @@ static void throttle_cfs_rq(struct cfs_rq *cfs_rq)
>>> dequeue = 0;
>>> }
>>>
>>> - if (!se)
>>> + if (!se) {
>>> sub_nr_running(rq, task_delta);
>>> + if (prev_nr >= 2 &&
On 03/12/2018 08:33, Peter Zijlstra wrote:
> On Sat, Dec 01, 2018 at 05:09:36PM +0800, Wen Yang wrote:
>> Fix the following warnings reported by coccinelle:
>> kernel//sched/fair.c:7958:3-12: WARNING: Assignment of bool to 0/1
>>
Duh, Patrick raised that one to me last week but I got caught up in
Hi Steve,
On 26/11/2018 19:06, Steven Sistare wrote:
> [...]
>> Mmm I was thinking we could abuse the wrap() and start at
>> (fls(prev_span) + 1), but we're not guaranteed to have contiguous spans -
>> the Arm Juno for instance has [0, 3, 4], [1, 2] as MC-level domains, so
>> that goes down the dr
Hi Steve,
On 09/11/2018 12:50, Steve Sistare wrote:
[...]
> @@ -482,6 +484,10 @@ static void update_top_cache_domain(int cpu)
> dirty_sched_domain_sysctl(cpu);
> destroy_sched_domains(tmp);
>
> + sd = highest_flag_domain(cpu, SD_SHARE_PKG_RESOURCES);
> + cfs_overload_cpus = (
Hi Steve,
On 09/11/2018 12:50, Steve Sistare wrote:
> Move the update of idle_stamp from idle_balance to the call site in
> pick_next_task_fair, to prepare for a future patch that adds work to
> pick_next_task_fair which must be included in the idle_stamp interval.
> No functional change.
>
> Sig
Hi,
On 04/07/18 15:24, Matt Fleming wrote:
> It's possible that the CPU doing nohz idle balance hasn't had its own
> load updated for many seconds. This can lead to huge deltas between
> rq->avg_stamp and rq->clock when rebalancing, and has been seen to
> cause the following crash:
>
> divide er
On 05/07/18 14:27, Matt Fleming wrote:
> On Thu, 05 Jul, at 11:10:42AM, Valentin Schneider wrote:
>> Hi,
>>
>> On 04/07/18 15:24, Matt Fleming wrote:
>>> It's possible that the CPU doing nohz idle balance hasn't had its own
>>> load updated for m
On 06/07/18 12:41, Peter Zijlstra wrote:
> On Thu, Jun 28, 2018 at 12:40:39PM +0100, Quentin Perret wrote:
>> @@ -698,6 +698,9 @@ struct root_domain {
>> /* Indicate more than one runnable task for any CPU */
>> bool overload;
>
> While there, make that an int.
That's
Hi,
On 13/06/18 05:13, Tony Lindgren wrote:
> * John Stultz [180612 22:15]:
>> Hey Folks,
>> I noticed with linus/master wifi wasn't coming up on HiKey960. I
>> bisected it down and it seems to be due to:
>>
>> 60f36637bbbd ("wlcore: sdio: allow pm to handle sdio power") and
>> 728a9dc61f13 ("
Hi,
On 13/06/18 16:13, Ryan Grachek wrote:
> These properties are required for compatibility with runtime PM.
> Without these properties, MMC host controller will not be aware
> of power capabilities. When the wlcore driver attempts to power
> on the device, it will erroneously fail with -EACCES.
et_sync(-13)
Tested-by: Valentin Schneider
> Signed-off-by: Ryan Grachek
> ---
> arch/arm64/boot/dts/hisilicon/hi3660-hikey960.dts | 2 ++
> 1 file changed, 2 insertions(+)
>
> diff --git a/arch/arm64/boot/dts/hisilicon/hi3660-hikey960.dts
> b/arch/arm64/boot/dts/hisili
Hi,
On 18/05/18 12:29, Peter Zijlstra wrote:
> On Fri, May 18, 2018 at 11:57:42AM +0100, Patrick Bellasi wrote:
>> Thus, my simple (maybe dumb) questions are:
>> - why can't we just fold turbo boost frequency into the existing concepts?
>> - what are the limitations of such a "simple" approach?
>
413201ae578820fb63f8a811f6c4e), and confirmed that I don't get
it on my setup when checking out the direct parent of
693350a7998018391852c48f68956cf0f855b2b9.
Tried out the diff and the board doesn't seem to die anymore, so FWIW:
Tested-by: Valentin Schneider
> Out of interest -- do you know if Hikey960 is u
Hi,
On 17/08/18 11:27, Matt Fleming wrote:
> On Thu, 05 Jul, at 05:54:02PM, Valentin Schneider wrote:
>> On 05/07/18 14:27, Matt Fleming wrote:
>>> On Thu, 05 Jul, at 11:10:42AM, Valentin Schneider wrote:
>>>> Hi,
>>>>
>>>> On 04/07/18 15:24
On 18/12/2018 09:32, Vincent Guittot wrote:
[...]
> In this asym packing case, it has nothing to do with pinned tasks and
> that's the root cause of the problem:
> the active balance triggered by asym packing is wrongly assumed to be
> an active balance due to pinned task(s) and the load balance in
On 18/12/2018 08:17, Vincent Guittot wrote:
[...]
>> That change looks fine. However, you're mentioning newidle load_balance()
>> not being triggered - you'd want to set root_domain->overload for any
>> newidle pull to happen, probably with something like this:
>
> It's not needed in this case bec
On 18/12/2018 13:23, Vincent Guittot wrote:
[...]
>> Ah, I think I get it: you're saying that this balance_interval increase
>> is done because it is always assumed we do an active balance with
>> busiest->nr_running > 1 && pinned tasks, and that all that is left to
>> migrate after the active_bala
On 14/12/2018 16:01, Vincent Guittot wrote:
> When check_asym_packing() is triggered, the imbalance is set to :
> busiest_stat.avg_load * busiest_stat.group_capacity / SCHED_CAPACITY_SCALE
> busiest_stat.avg_load also comes from a division and the final rounding
> can make imbalance slightly lower
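For illustration, a tiny standalone program showing how the two
back-to-back integer divisions can round the resulting imbalance below
the group load (made-up numbers; avg_load is approximated as
group_load * SCHED_CAPACITY_SCALE / group_capacity, which is roughly how
update_sg_lb_stats() derives it):

#include <stdio.h>

#define SCHED_CAPACITY_SCALE 1024UL

int main(void)
{
	unsigned long group_load = 512, group_capacity = 1012;

	/* avg_load is itself the result of an integer division ... */
	unsigned long avg_load = group_load * SCHED_CAPACITY_SCALE / group_capacity;	/* 518 */

	/* ... so scaling it back down can round below the original load */
	unsigned long imbalance = avg_load * group_capacity / SCHED_CAPACITY_SCALE;	/* 511 */

	printf("group_load=%lu imbalance=%lu\n", group_load, imbalance);
	return 0;
}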
On 19/12/2018 08:27, Vincent Guittot wrote:
[...]
>> Wouldn't LBF_ALL_PINNED cover all relevant cases? It's set at the very top
>> of the 'if (busiest->nr_running > 1)' block and cleared whenever we find
>> at least one task we could pull, even if we don't pull it because of
>> other reasons in can
On 19/12/2018 08:32, Vincent Guittot wrote:
[...]
> This is another UC; asym packing is used at SMT level for now and we
> don't face this kind of problem. It has also been tested on a DynamiQ
> configuration, which is similar to SMT: 1 CPU per sched_group
> The legacy b.L one was not the main targe
On 19/12/2018 13:39, Vincent Guittot wrote:
[...]
>> I used that setup out of convenience for myself, but AFAICT that use-case
>> just stresses that issue.
>
> After looking at your UC in detail, your problem comes from the wl=1
> for cpu0 whereas there is no running task.
> But wl!=0 without r
On 19/12/2018 15:05, Vincent Guittot wrote:
[...]
>> True, I had a look at the trace and there doesn't seem to be any running
>> task on that CPU. That's a separate matter however - the rounding issues
>> can happen regardless of the wl values.
>
> But it means that the rounding fix +1 works and y
On 19/12/2018 15:20, Vincent Guittot wrote:
[...]
>> Oh yes, I never said it didn't work - I was doing some investigation on
>> the reason as to why we'd need this fix, because it wasn't explicit from
>> the commit message.
>>
>> The rounding errors are countered by the +1, yes, but I'd rather re
On 19/12/2018 13:29, Vincent Guittot wrote:
[...]
>> My point is that AFAICT the LBF_ALL_PINNED flag would cover all the cases
>> we care about, although the one you're mentioning is the only one I can
>> think of. In that case LBF_ALL_PINNED would never be cleared, so when we do
>> the active bala
This reverts commit 40fa3780bac2b654edf23f6b13f4e2dd550aea10.
Now that we have a system-wide muting of hotplug lockdep during init,
this is no longer needed.
Signed-off-by: Valentin Schneider
---
kernel/sched/core.c | 5 +----
1 file changed, 1 insertion(+), 4 deletions(-)
diff --git a/kernel
reverted patch.
[1]: https://lore.kernel.org/lkml/1543877121-4098-1-git-send-email-...@gmx.us/#t
Valentin Schneider (2):
cpu/hotplug: Mute hotplug lockdep during init
Revert "sched/core: Take the hotplug lock in sched_init_smp()"
kernel/cpu.c | 9 +
kernel/sched/
ck operations in the init codepaths, mute the
warnings until they start warning about real problems.
Suggested-by: Peter Zijlstra
Signed-off-by: Valentin Schneider
---
FYI Thomas Gleixner suggested using SYSTEM_SCHEDULING instead of
SYSTEM_RUNNING, but that seems to still be too early - sched_ini
Hi Vincent,
About time I had a look at this one...
On 14/12/2018 16:01, Vincent Guittot wrote:
> In case of active balance, we increase the balance interval to cover
> pinned-task cases not covered by all_pinned logic.
AFAIUI the balance increase is there to have plenty of time to
stop the task
Hi Vincent,
On 14/12/2018 16:01, Vincent Guittot wrote:
> newly idle load balance is not always triggered when a cpu becomes idle.
> This prevents the scheduler from getting a chance to migrate tasks for asym packing.
> Enable active migration because of asym packing during idle load balance too.
>
> Sig
Hi,
On 10/12/2018 14:07, Peter Zijlstra wrote:
[...]
> ---
> kernel/cpu.c | 2 ++
> 1 file changed, 2 insertions(+)
>
> diff --git a/kernel/cpu.c b/kernel/cpu.c
> index 91d5c38eb7e5..e1ee8caf28b5 100644
> --- a/kernel/cpu.c
> +++ b/kernel/cpu.c
> @@ -313,6 +313,8 @@ void cpus_write_unlock(void)
Hi,
On 10/12/2018 16:29, Steven Sistare wrote:
[...]
>> I have run some hackbench tests on my hikey arm64 octo cores with your
>> patchset. My original intent was to send a tested-by but I have some
>> performance regressions.
>> This hikey is the smp one and not the asymmetric hikey960 that Valen
349 |
| CPU0 | 1844000 | 377 | 377 |
|------+---------+-----+-----|
| CPU4 |  903000 | 249 | 248 |
| CPU4 | 1421000 | 249 | 394 |
| CPU4 | 1805000 | 500 | 500 |
| CPU4 | 2112000 | 499 | 583 |
| CPU4 | 2362000 | 653 | 654 |
We need this pretty badly, otherwise frequency settin
ROUND_CLOSEST(
> - sds->busiest_stat.avg_load * sds->busiest_stat.group_capacity,
> - SCHED_CAPACITY_SCALE);
> + env->imbalance = sds->busiest_stat.avg_load;
That should be group_load, not avg_load. With that fixed:
Reviewed-by: Valentin Schneider
>
> return 1;
> }
>
needs to force migrate tasks from busy but
>
Regarding just extending the condition to include idle balance:
Reviewed-by: Valentin Schneider
As in the previous thread, I'll still argue that if you want to *reliably*
exploit newidle balances to do asym packing active balancing, you should
add
On 20/12/2018 07:55, Vincent Guittot wrote:
> In case of active balance, we increase the balance interval to cover
> pinned-task cases not covered by all_pinned logic. Nevertheless, the
> active migration triggered by asym packing should be treated as the normal
> unbalanced case and reset the inte
Hi,
While running some hotplug torture test [1] on my Juno r0 I came across
the follow splat:
[ 716.561862] ------------[ cut here ]------------
[ 716.566451] refcount_t: underflow; use-after-free.
[ 716.571240] WARNING: CPU: 2 PID: 18 at lib/refcount.c:280 refcount_dec_not_one+0x9c/0xc0
[ 7
On 20/12/2018 14:33, Vincent Guittot wrote:
[...]
>> As in the previous thread, I'll still argue that if you want to *reliably*
>> exploit newidle balances to do asym packing active balancing, you should
>> add some logic to raise rq->rd->overload when we notice some asym packing
>> could be done,
On 20/12/2018 14:50, Vincent Guittot wrote:
[...]
>> So now we reset the interval for all active balances (except last active
>> balance case), even when it is done as a last resort because all other
>> tasks were pinned.
>>
>> Arguably the current code isn't much better (always increase the inter
On 21/12/2018 14:49, Vincent Guittot wrote:
[...]
> After looking at shed.c at this sha1, (sd->nr_balance_failed >
> sd->cache_nice_tries+2) was the only condition for doing active
> migration and as a result it was the only reason for doubling
> sd->balance_interval.
> My patch keeps exactly the
point indicator")
Signed-off-by: Valentin Schneider
---
kernel/sched/fair.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 6483834f1278..6d653947a829 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5083,7 +5083,6 @@ static inlin
nd to
> accomplish that, this patch also creates _opp_kref_release_unlocked()
> which can be called from this new helper with the opp_table lock already
> held.
>
> Cc: 4.20 # v4.20
> Reported-by: Valentin Schneider
> Fixes: 2a4eb7358aba ("OPP: Don't remove dynamic
On 26/10/2018 19:28, Steven Sistare wrote:
> On 10/26/2018 2:04 PM, Valentin Schneider wrote:
[...]
>>
>> I was thinking that perhaps we could have scenarios where some rq's
>> keep stealing tasks off of each other and we end up circulating tasks
>> between CPUs. N
Hi,
On 26/09/2018 16:12, Valentin Schneider wrote:
> The alignment of the condition is off, clean that up.
>
> Also, logical operators have lower precedence than bitwise/relational
> operators, so remove one layer of parentheses to make the condition a
> bit simpler to follow.
>
On 31/10/2018 15:43, Steven Sistare wrote:
> On 10/29/2018 3:34 PM, Valentin Schneider wrote:
[...]
>> Suppose you have 2 rq's sharing a workload of 3 tasks. You get one rq with
>> nr_running == 1 (r_1) and one rq with nr_running == 2 (r_2).
>>
>> As soon as th
Hi folks,
I was cleaning up some hotplug torture test, and happened to run that on my
HiKey960 which resulted in a failure.
Turns out just a few hotplug operations are needed to trigger this, so I
boiled it down to this small script:
for ((i = 0; i < 4; i++)); do
echo "OFF $i"
echo 0 > /
On 14/10/2018 19:00, Valentin Schneider wrote:
> [...]
This was initially sent on Sunday but only got delivered today. In the
meantime I dug a bit deeper and I think it has to do with the firmware
implementation of PSCI. I raised a ticket for ATF ([1]).
[1]: https://github.com/ARM-software
Hi Steve,
On 05/11/2018 20:07, Steve Sistare wrote:
[...]
> The patch series is based on kernel 4.19.0-rc7. It compiles, boots, and
> runs with/without each of CONFIG_SCHED_SMT, CONFIG_SMP, CONFIG_SCHED_DEBUG,
> and CONFIG_PREEMPT. It runs without error with CONFIG_DEBUG_PREEMPT +
> CONFIG_SLUB_
On 31/10/2018 19:14, Peter Zijlstra wrote:
> On Mon, Oct 29, 2018 at 07:34:50PM +0000, Valentin Schneider wrote:
>> On a sidenote, I find it a bit odd that the exec_start threshold depends on
>> sysctl_sched_migration_cost, which to me is more about idle_balance() cost
>> t
e semantics of underlying
callees and make lockdep happy, take the hotplug lock in
sched_init_smp(). This also satisfies the comment atop
sched_init_domains() that says "Callers must hold the hotplug lock".
Reported-by: Sudeep Holla
Tested-by: Sudeep Holla
Signed-off-by: Valentin Schn
Hi,
On 22/10/2018 20:07, Steven Sistare wrote:
> On 10/22/2018 1:04 PM, Peter Zijlstra wrote:
[...]
>
> We could delete idle_balance() and use stealing exclusively for handling
> new idle. For each sd level, stealing would look for an overloaded CPU
> in the overloaded bitmap(s) that overlap tha
On 24/10/2018 20:27, Steven Sistare wrote:
[...]
> Hi Valentin,
>
> Asymmetric systems could maintain a separate bitmap for misfits; set a bit
> when a CPU goes on CPU, clear it going off. When a fast CPU goes new idle,
> it would first search the misfits mask, then search cfs_overload_cpus.
>
Hi Steve,
On 22/10/2018 15:59, Steve Sistare wrote:
[...]
> @@ -6740,8 +6744,19 @@ static void check_preempt_wakeup(struct rq *rq, struct task_struct *p, int wake_
> return p;
>
> idle:
> + /*
> + * We must set idle_stamp _before_ calling idle_balance(), such that we
> +
Hi Steve,
On 22/10/2018 15:59, Steve Sistare wrote:
[...]
> @@ -9683,6 +9698,141 @@ void trigger_load_balance(struct rq *rq)
> nohz_balancer_kick(rq);
> }
>
> +/*
> + * Search the runnable tasks in @cfs_rq in order of next to run, and find
> + * the first one that can be migrated to @dst_
Hi Steve,
On 22/10/2018 15:59, Steve Sistare wrote:
> Define a simpler version of can_migrate_task called can_migrate_task_llc
> which does not require a struct lb_env argument, and judges whether a
> migration from one CPU to another within the same LLC should be allowed.
>
> Signed-off-by: Stev
Hi,
On 19/10/2018 09:02, Ingo Molnar wrote:
>
> * Thara Gopinath wrote:
[...]
> So what unifies RT and DL utilization is that those are all direct task
> loads independent of external factors.
>
> Thermal load is more of a complex physical property of the combination of
> various internal
On 05/02/24 14:39, Mark Rutland wrote:
> [adding Valentin]
>
Thanks!
> On Mon, Feb 05, 2024 at 08:06:09AM -0500, Steven Rostedt wrote:
>> On Mon, 5 Feb 2024 10:28:57 +
>> Mark Rutland wrote:
>>
>> > > I try to write below:
>> > > echo 'target_cpus == 11 && reason == "Function call interrupts
On 06/02/24 16:42, richard clark wrote:
> On Tue, Feb 6, 2024 at 12:05 AM Valentin Schneider
> wrote:
>>
>> The CPUS{} thingie only works with an event field that is either declared as
>> a
>> cpumask (__cpumask) or a scalar. That's not the case for ipi_raise
On 13/02/21 13:50, Peter Zijlstra wrote:
> When affine_move_task(p) is called on a running task @p, which is not
> otherwise already changing affinity, we'll first set
> p->migration_pending and then do:
>
>stop_one_cpu(cpu_of_rq(rq), migration_cpu_stop, &arg);
>
> This then gets us to migr
G are likely to face similar issues.
Signed-off-by: Lingutla Chandrasekhar
[Use kthread_is_per_cpu() rather than p->nr_cpus_allowed]
[Reword changelog]
Signed-off-by: Valentin Schneider
Reviewed-by: Vincent Guittot
Reviewed-by: Dietmar Eggemann
---
kernel/sched/fair.c | 4
1 file
.scheduler.misfit.StaggeredFinishes
[2]: http://lore.kernel.org/r/20210217120854.1280-1-clingu...@codeaurora.org
[3]: http://lore.kernel.org/r/20210223023004.GB25487@xsang-OptiPlex-9020
Cheers,
Valentin
Lingutla Chandrasekhar (1):
sched/fair: Ignore percpu threads for imbalance pulls
Valentin
active balancing, and use it in
can_migrate_task(). Remove the sd->nr_balance_failed write that served the
same purpose. Cleanup the LBF_DST_PINNED active balance special case.
Signed-off-by: Valentin Schneider
Reviewed-by: Vincent Guittot
Reviewed-by: Dietmar Eggemann
---
kerne
pulled by a known env->dst_cpu, whose capacity can be anywhere within the
local group's capacity extrema.
While at it, replace group_smaller_{min, max}_cpu_capacity() with
comparisons of the source group's min/max capacity and the destination
CPU's capacity.
Signed-off-by: Valentin
lta is lost in the noise (and there is quite a lot of
noise, unfortunately).
I'm still looking for something I can benchmark on the eMAG to get some
GICv3 results.
Links
=====
[1]:
https://lore.kernel.org/lkml/1414235215-10468-1-git-send-email-marc.zyng...@arm.com/
Valentin Schneider (
ASKS_FLOW, to denote chips with such
behaviour. Add a new IRQ data flag, IRQD_IRQ_FLOW_MASKED, to keep this
flow-induced mask state separate from regular mask / unmask operations
(IRQD_IRQ_MASKED).
Signed-off-by: Valentin Schneider
---
include/linux/irq.h | 10 ++
kernel/irq/chip.c
The newly-added IRQCHIP_AUTOMASKS_FLOW flag requires some additional
bookkeeping around chip->{irq_ack, irq_eoi}() calls. Define wrappers around
those chip callbacks to drive the IRQD_IRQ_FLOW_MASKED state of an IRQ when
the chip has the IRQCHIP_AUTOMASKS_FLOW flag.
Signed-off-by: Valen
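A toy userspace model of that wrapper idea (not the kernel
implementation; struct layouts and flag values are invented, only the
IRQCHIP_AUTOMASKS_FLOW / IRQD_IRQ_FLOW_MASKED names come from the
description above): the ack wrapper marks the IRQ as flow-masked when
the chip auto-masks on ack, and the eoi wrapper clears that state again.

#include <stdio.h>

#define IRQCHIP_AUTOMASKS_FLOW	0x1	/* chip flag (assumed value) */
#define IRQD_IRQ_FLOW_MASKED	0x2	/* per-IRQ state bit (assumed value) */

struct irq_chip_model {
	unsigned int flags;
	void (*irq_ack)(int irq);
	void (*irq_eoi)(int irq);
};

struct irq_data_model {
	int irq;
	unsigned int state;
	struct irq_chip_model *chip;
};

/* Wrappers: drive the flow-masked state around the chip callbacks */
static void chip_ack(struct irq_data_model *d)
{
	d->chip->irq_ack(d->irq);
	if (d->chip->flags & IRQCHIP_AUTOMASKS_FLOW)
		d->state |= IRQD_IRQ_FLOW_MASKED;
}

static void chip_eoi(struct irq_data_model *d)
{
	d->chip->irq_eoi(d->irq);
	if (d->chip->flags & IRQCHIP_AUTOMASKS_FLOW)
		d->state &= ~IRQD_IRQ_FLOW_MASKED;
}

static void ack(int irq) { printf("ack %d\n", irq); }
static void eoi(int irq) { printf("eoi %d\n", irq); }

int main(void)
{
	struct irq_chip_model chip = { IRQCHIP_AUTOMASKS_FLOW, ack, eoi };
	struct irq_data_model d = { .irq = 27, .chip = &chip };

	chip_ack(&d);
	printf("flow-masked: %d\n", !!(d.state & IRQD_IRQ_FLOW_MASKED));
	chip_eoi(&d);
	printf("flow-masked: %d\n", !!(d.state & IRQD_IRQ_FLOW_MASKED));
	return 0;
}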
nctionality.
Signed-off-by: Valentin Schneider
---
kernel/irq/chip.c | 12 +++-
1 file changed, 7 insertions(+), 5 deletions(-)
diff --git a/kernel/irq/chip.c b/kernel/irq/chip.c
index 046b4486c88c..936ef247b13d 100644
--- a/kernel/irq/chip.c
+++ b/kernel/irq/chip.c
@@ -429,10 +429,12 @@ st
ollowed by a single ->irq_eoi(). No more, no less, and in that
order.
Introduce a new flow handler which guarantees said ack / eoi pairing. Note
that it is strikingly similar to handle_fasteoi_mask_irq() for now, but
will be further modified in later patches.
Signed-off-by: Valentin Schneider
--
A subsequent patch will let IRQs end up in irq_finalize_oneshot() without
IRQD_IRQ_MASKED, but with IRQD_IRQ_FLOW_MASKED set instead. Let such IRQs
receive their final ->irq_eoi().
Signed-off-by: Valentin Schneider
---
kernel/irq/manage.c | 2 +-
1 file changed, 1 insertion(+), 1 delet
are bounded by a final eoi_irq() - this is the case for chips with
IRQCHIP_AUTOMASKS_FLOW and IRQCHIP_EOI_THREADED.
Make handle_strict_flow_irq() leverage IRQCHIP_AUTOMASKS_FLOW and issue an
ack_irq() rather than a mask_ack_irq() when possible.
Signed-off-by: Valentin Schneider
---
kernel/irq/chi
Subsequent patches will make the gic-v3 irqchip use an ->irq_ack()
callback. As a preparation, make the NMI flow handlers call said callback
if it is available.
Since this departs from the fasteoi scheme of only issuing a suffix
->eoi(), rename the NMI flow handlers.
Signed-off-by: Va
X: what about pMSI and fMSI ?
Signed-off-by: Valentin Schneider
---
drivers/irqchip/irq-gic-v3-its-pci-msi.c | 1 +
drivers/irqchip/irq-gic-v3-its.c | 1 +
2 files changed, 2 insertions(+)
diff --git a/drivers/irqchip/irq-gic-v3-its-pci-msi.c
b/drivers/irqchip/irq-gic-v3-its-p
->irq_mask() call.
Despite not having an Active state, LPIs are made to use
handle_strict_flow_irq() as well. This lets them re-use
gic_eoimode1_chip.irq_ack() as Priority Drop, rather than special-case them
in gic_handle_irq().
EOImode=0 handling remains unchanged.
Signed-off-by: Valentin Schnei
->irq_mask() call.
EOImode=0 handling remains unchanged.
Signed-off-by: Valentin Schneider
---
drivers/irqchip/irq-gic.c | 14 +++---
1 file changed, 11 insertions(+), 3 deletions(-)
diff --git a/drivers/irqchip/irq-gic.c b/drivers/irqchip/irq-gic.c
index b1d9c22caf2e..4919478c3e41 100
Hi,
On 02/28/2018 07:46 AM, Gaku Inami wrote:
> Hi
>
[...]
>
> Tested-by: Gaku Inami
>
> I tested as same in other SoC. It looks fine.
>
Thanks for testing on your side!
Hi,
On 03/04/18 13:17, Vincent Guittot wrote:
> Hi Valentin,
>
[...]
>>
>> I believe ASYM_PACKING behaves better here because the workload is only
>> sysbench threads. As stated above, since task utilization is disregarded, I
>
> It behaves better because it doesn't wait for the task's utilizati
Hi,
LGTM. Tiny inline comment but TBH might not be worth it.
FWIW: Reviewed-by: Valentin Schneider
On 26/04/18 11:30, Viresh Kumar wrote:
> Rearrange select_task_rq_fair() a bit to avoid executing some
> conditional statements in few specific code-paths. That gets rid of the
> got
Hi,
On 27/04/18 15:04, Jiada Wang wrote:
> Hi
>
> with this patch, if enable CONFIG_DEBUG_ATOMIC_SLEEP=y,
> then I am getting following BUG report during early startup
>
Thanks for bringing that up.
> Backtrace caused by [1] during early kernel startup:
> [ 5.325288] CPU: All CPU(s) started at