On 06/10/24 20:20, Qais Yousef wrote:
> Make rt_task() return true only for RT class and add new realtime_task() to
> return true for RT and DL classes to avoid some confusion the old API can
> cause.
I am not aware of any pending review comments for this series. Is it ready to be
On 06/05/24 16:07, Daniel Bristot de Oliveira wrote:
> On 6/5/24 15:24, Qais Yousef wrote:
> >>> But rt is a shortened version of realtime, and so it is making *it less*
> >>> clear that we also have DL here.
> >> Can SCHED_DL be considered a real-time sch
Some find the name realtime overloaded. Use rt_or_dl() as an
alternative, hopefully better, name.
Suggested-by: Daniel Bristot de Oliveira
Signed-off-by: Qais Yousef
---
fs/bcachefs/six.c | 2 +-
fs/select.c | 2 +-
include/linux/ioprio.h | 2
{rt, realtime, dl}_{task, prio}() functions' return value is actually
a bool. Convert their return type to reflect that.
Suggested-by: Steven Rostedt (Google)
Reviewed-by: Sebastian Andrzej Siewior
Reviewed-by: Steven Rostedt (Google)
Reviewed-by: Metin Kaya
Signed-off-by: Qais Y
0527234508.1062360-1-qyou...@layalina.io/
v4 discussion:
https://lore.kernel.org/lkml/20240601213309.1262206-1-qyou...@layalina.io/
v5 discussion:
https://lore.kernel.org/lkml/20240604144228.1356121-1-qyou...@layalina.io/
Qais Yousef (3):
sched/rt: Clean up usage of rt_task()
sched/rt, dl: Convert functio
d-by: Phil Auld
Reviewed-by: Steven Rostedt (Google)
Reviewed-by: Sebastian Andrzej Siewior
Signed-off-by: Qais Yousef
---
fs/bcachefs/six.c | 2 +-
fs/select.c | 2 +-
include/linux/ioprio.h | 2 +-
include/linux/sched/deadline.h | 6 -
On 06/05/24 11:32, Sebastian Andrzej Siewior wrote:
> On 2024-06-04 17:57:46 [+0200], Daniel Bristot de Oliveira wrote:
> > On 6/4/24 16:42, Qais Yousef wrote:
> > > - (wakeup_rt && !dl_task(p) && !rt_task(p)) ||
> > > + (wakeup_rt &
{rt, realtime, dl}_{task, prio}() functions' return value is actually
a bool. Convert their return type to reflect that.
Suggested-by: Steven Rostedt (Google)
Signed-off-by: Qais Yousef
---
include/linux/sched/deadline.h | 8 +++-
include/linux/sched/rt.h | 16 ++--
2
d-by: Phil Auld
Reviewed-by: Steven Rostedt (Google)
Signed-off-by: Qais Yousef
---
fs/bcachefs/six.c | 2 +-
fs/select.c | 2 +-
include/linux/ioprio.h | 2 +-
include/linux/sched/deadline.h | 6 --
include/linux/sched/prio.h | 1
l.org/lkml/20240527234508.1062360-1-qyou...@layalina.io/
v4 discussion:
https://lore.kernel.org/lkml/20240601213309.1262206-1-qyou...@layalina.io/
Qais Yousef (2):
sched/rt: Clean up usage of rt_task()
sched/rt, dl: Convert functions to return bool
fs/bcachefs/six.c | 2 +-
f
On 06/03/24 08:33, Metin Kaya wrote:
> On 01/06/2024 10:33 pm, Qais Yousef wrote:
> > {rt, realtime, dl}_{task, prio}() functions return value is actually
> > a bool. Convert their return type to reflect that.
> >
> > Suggested-by: Steven Rostedt (Google)
>
On 05/31/24 08:30, Sebastian Andrzej Siewior wrote:
> On 2024-05-30 12:10:44 [+0100], Qais Yousef wrote:
> > > This is not consistent because IMHO the clock setup & slack should be
> > > handled equally. So I am asking the sched folks for a policy and I am
> > &g
d-by: Phil Auld
Reviewed-by: Steven Rostedt (Google)
Signed-off-by: Qais Yousef
---
fs/bcachefs/six.c | 2 +-
fs/select.c | 2 +-
include/linux/ioprio.h | 2 +-
include/linux/sched/deadline.h | 6 --
include/linux/sched/prio.h | 1
{rt, realtime, dl}_{task, prio}() functions' return value is actually
a bool. Convert their return type to reflect that.
Suggested-by: Steven Rostedt (Google)
Signed-off-by: Qais Yousef
---
include/linux/sched/deadline.h | 8
include/linux/sched/rt.h | 16
2
989-1-qyou...@layalina.io/
v2 discussion:
https://lore.kernel.org/lkml/20240515220536.823145-1-qyou...@layalina.io/
v3 discussion:
https://lore.kernel.org/lkml/20240527234508.1062360-1-qyou...@layalina.io/
Qais Yousef (2):
sched/rt: Clean up usage of rt_task()
sched/rt, dl: Convert functions
On 05/29/24 12:55, Sebastian Andrzej Siewior wrote:
> On 2024-05-29 11:34:09 [+0100], Qais Yousef wrote:
> > > behaviour. But then it is insistent which matters only in the RT case.
> > > Puh. Any sched folks regarding policy?
> >
> > I am not sure I understood yo
On 05/29/24 10:29, Sebastian Andrzej Siewior wrote:
> On 2024-05-27 18:26:50 [+0100], Qais Yousef wrote:
> > > In order to be PI-boosted you need to acquire a lock and the only lock
> > > you can sleep while acquired without generating a warning is a mutex_t
> > > (o
On 05/29/24 09:34, Sebastian Andrzej Siewior wrote:
> On 2024-05-28 00:45:08 [+0100], Qais Yousef wrote:
> > diff --git a/include/linux/sched/deadline.h b/include/linux/sched/deadline.h
> > index 5cb88b748ad6..87d2370dd3db 100644
> > --- a/include/linux/sched/deadline.h
&g
{rt, realtime, dl}_{task, prio}() functions' return value is actually
a bool. Convert their return type to reflect that.
Suggested-by: Steven Rostedt (Google)
Signed-off-by: Qais Yousef
---
include/linux/sched/deadline.h | 4 ++--
include/linux/sched/rt.h | 8
2 files changed, 6
As Sebastian explained in [1], we need only look at the policy to decide
if we need to remove the slack, because PI-boosted tasks should not
sleep.
[1] https://lore.kernel.org/lkml/20240521110035.kriwl...@linutronix.de/
Suggested-by: Sebastian Andrzej Siewior
Signed-off-by: Qais Yousef
d-by: Phil Auld
Reviewed-by: Steven Rostedt (Google)
Signed-off-by: Qais Yousef
---
fs/bcachefs/six.c | 2 +-
fs/select.c | 2 +-
include/linux/ioprio.h | 2 +-
include/linux/sched/deadline.h | 6 --
include/linux/sched/prio.h | 1
some rt_task()
users.
v1 discussion:
https://lore.kernel.org/lkml/20240514234112.792989-1-qyou...@layalina.io/
v2 discussion:
https://lore.kernel.org/lkml/20240515220536.823145-1-qyou...@layalina.io/
Qais Yousef (3):
sched/rt: Clean up usage of rt_task()
hrtimer: Convert
On 05/23/24 11:45, Steven Rostedt wrote:
> On Wed, 15 May 2024 23:05:36 +0100
> Qais Yousef wrote:
> > diff --git a/include/linux/sched/deadline.h b/include/linux/sched/deadline.h
> > index df3aca89d4f5..5cb88b748ad6 100644
> > --- a/include/linux/sched/deadline.h
>
On 05/21/24 13:00, Sebastian Andrzej Siewior wrote:
> On 2024-05-15 23:05:36 [+0100], Qais Yousef wrote:
> > rt_task() checks if a task has RT priority. But depends on your
> > dictionary, this could mean it belongs to RT class, or is a 'realtime'
> > task, w
d-by: Phil Auld
Signed-off-by: Qais Yousef
---
Changes since v1:
* Use realtime_task_policy() instead of task_has_realtime_policy() (Peter)
* Improve commit message readability about replacing some rt_task()
users.
v1 discussion:
https://lore.kernel.org/lkml/2024051423411
audit the users and
replace the ones that required the old behavior with the new realtime_task(),
which returns true for RT and DL classes. Introduce a similar
realtime_prio() to create the same distinction relative to rt_prio(), and
update the users that required the old behavior to use the new function.
"""
> Reviewed-by: Phil Auld
Thanks for having a look!
Cheers
--
Qais Yousef
On 05/15/24 07:20, Phil Auld wrote:
> On Wed, May 15, 2024 at 10:32:38AM +0200 Peter Zijlstra wrote:
> > On Tue, May 14, 2024 at 07:58:51PM -0400, Phil Auld wrote:
> > >
> > > Hi Qais,
> > >
> > > On Wed, May 15, 2024 at 12:41:12AM +0100 Qais Yousef
On 05/15/24 10:32, Peter Zijlstra wrote:
> On Tue, May 14, 2024 at 07:58:51PM -0400, Phil Auld wrote:
> >
> > Hi Qais,
> >
> > On Wed, May 15, 2024 at 12:41:12AM +0100 Qais Yousef wrote:
> > > rt_task() checks if a task has RT priority. But depends on your
ltime() to task_has_realtime_policy() as the old name
is confusing against the new realtime_task().
No functional changes were intended.
[1]
https://lore.kernel.org/lkml/20240506100509.gl40...@noisy.programming.kicks-ass.net/
Signed-off-by: Qais Yousef
---
fs/select.c
On 04/19/21 14:54, Florian Fainelli wrote:
>
>
> On 4/12/2021 4:08 AM, Qais Yousef wrote:
> > Hi Alexander
> >
> > Fixing Ard's email as the Linaro one keeps bouncing back. Please fix that in
> > your future postings.
> >
> > On 04/12/21 08:2
ff-by: Dongli Zhang
> ---
I don't see the harm in adding the debug if some find it useful.
FWIW
Reviewed-by: Qais Yousef
Cheers
--
Qais Yousef
> Changed since v1 RFC:
> - use pr_debug() but not pr_err_once() (suggested by Qais Yousef)
> - print log for cpuhp_down_call
On 04/12/21 12:55, Peter Zijlstra wrote:
> On Sun, Mar 21, 2021 at 07:30:37PM +0000, Qais Yousef wrote:
> > On 03/10/21 15:53, Peter Zijlstra wrote:
> > > --- a/kernel/cpu.c
> > > +++ b/kernel/cpu.c
> > > @@ -160,6 +160,9 @@ static int cpuhp_invoke_callback(un
Hi Alexander
Fixing Ard's email as the Linaro one keeps bouncing back. Please fix that in
your future postings.
On 04/12/21 08:28, Alexander Sverdlin wrote:
> Hi!
>
> On 09/04/2021 17:33, Qais Yousef wrote:
> > I still think the ifdefery in patch 3 is ugly. Any reason my su
dif
+
There's an ifdef, followed by code that embeds
!IS_ENABLED(CONFIG_ARM_MODULE_PLTS), followed by another ifdef :-/
And there was no need to make the new warn arg visible all the way to
ftrace_call_replace() and all of its users.
FWIW
Tested-by: Qais Yousef
If this gets accepted as-is, I'll send a patch to improve on this.
Thanks
--
Qais Yousef
nce() but I think this can fail for legitimate
reasons and is not necessarily strictly always an error?
Thanks
--
Qais Yousef
o that, then cpu_up_down_serialize_trainwrecks() can be called from
cpu_device_up/down() which implies !task_frozen.
Can't remember now if Alexey moved the uevent() handling out of the loop for
efficiency reasons or was seeing something else. I doubt it was the latter.
Thanks
--
Qais Yousef
On 03/24/21 17:33, Alexander Sverdlin wrote:
> Hello Qais,
>
> On 24/03/2021 16:57, Qais Yousef wrote:
> >>> FWIW my main concern is about duplicating the range check in
> >>> ftrace_call_replace() and using magic values that already exist in
> >>>
Hi Florian
On 03/23/21 20:37, Florian Fainelli wrote:
> Hi Qais,
>
> On 3/23/2021 3:22 PM, Qais Yousef wrote:
> > Hi Alexander
> >
> > On 03/22/21 18:02, Alexander Sverdlin wrote:
> >> Hi Qais,
> >>
> >> On 22/03/2021 17:32, Qais Yousef wr
Hey Alexander
On 03/24/21 10:04, Alexander Sverdlin wrote:
> Hi Qais,
>
> On 23/03/2021 23:22, Qais Yousef wrote:
> >>> Yes you're right. I was a bit optimistic on CONFIG_DYNAMIC_FTRACE will
> >>> imply
> >>> CONFIG_ARM_MODULE_PLTS is enabled
Hi Alexander
On 03/22/21 18:02, Alexander Sverdlin wrote:
> Hi Qais,
>
> On 22/03/2021 17:32, Qais Yousef wrote:
> > Yes you're right. I was a bit optimistic on CONFIG_DYNAMIC_FTRACE will imply
> > CONFIG_ARM_MODULE_PLTS is enabled too.
> >
> > It only h
On 03/22/21 11:01, Steven Rostedt wrote:
> On Sun, 21 Mar 2021 19:06:11 +
> Qais Yousef wrote:
>
> > #ifdef CONFIG_DYNAMIC_FTRACE
> > struct dyn_arch_ftrace {
> > -#ifdef CONFIG_ARM_MODULE_PLTS
> > struct module *mod;
> > -#endif
> >
st->fail = CPUHP_INVALID;
> return -EAGAIN;
Thanks
--
Qais Yousef
Hi Alexander
On 03/14/21 22:02, Qais Yousef wrote:
> I fixed Ard's email as it kept bouncing back.
>
> +CC Linus Walleij
>
> On 03/12/21 10:35, Florian Fainelli wrote:
> > On 3/12/21 9:24 AM, Qais Yousef wrote:
> > > Hi Alexander
> > >
>
I fixed Ard's email as it kept bouncing back.
+CC Linus Walleij
On 03/12/21 10:35, Florian Fainelli wrote:
> On 3/12/21 9:24 AM, Qais Yousef wrote:
> > Hi Alexander
> >
> > On 03/10/21 18:17, Alexander Sverdlin wrote:
> >> Hi!
> >>
> >> On
the patch):
>
> https://www.spinics.net/lists/arm-kernel/msg878599.html
I am testing with your module. I can't reproduce the problem you describe with
it as I stated.
I will try to spend more time on it on the weekend.
Thanks
--
Qais Yousef
On 03/05/21 15:41, Valentin Schneider wrote:
> On 05/03/21 15:56, Peter Zijlstra wrote:
> > On Sat, Dec 26, 2020 at 01:54:45PM +0000, Qais Yousef wrote:
> >>
> >> > +static inline struct task_struct *get_push_task(struct rq *rq)
> >> > +{
> &
On 03/08/21 08:58, Alexander Sverdlin wrote:
> Hi!
>
> On 07/03/2021 18:26, Qais Yousef wrote:
> > I tried on 5.12-rc2 and 5.11 but couldn't reproduce the problem using your
I still can't reproduce on 5.12-rc2.
I do have CONFIG_ARM_MODULE_PLTS=y. Do you need to do som
dr);
> +
> +#ifdef CONFIG_ARM_MODULE_PLTS
> + if (!new) {
> + struct module *mod = rec->arch.mod;
> +
> + if (mod) {
What would happen if !new and !mod?
> + aaddr = get_module_plt(mod, ip, aaddr);
> + new = ftrace_call_replace(ip, aaddr);
I assume we're guaranteed to have a sensible value returned in 'new' here?
Thanks
--
Qais Yousef
> + }
> + }
> +#endif
On 03/05/21 15:41, Valentin Schneider wrote:
> On 05/03/21 15:56, Peter Zijlstra wrote:
> > On Sat, Dec 26, 2020 at 01:54:45PM +0000, Qais Yousef wrote:
> >>
> >> > +static inline struct task_struct *get_push_task(struct rq *rq)
> >> > +{
> &
On 03/05/21 15:56, Peter Zijlstra wrote:
> On Sat, Dec 26, 2020 at 01:54:45PM +0000, Qais Yousef wrote:
> > Hi Peter
> >
> > Apologies for the late comments on the patch.
>
> Ha!, it seems I too need to apologize for never having actually found
> your reply ;-)
No
oad |= update_nohz_stats(rq);
I think Dietmar commented on this on v1. There's a change in behavior here
AFAICT. Worth expanding the changelog to explain that this will be rate limited
and why it's okay? It'll help a lost soul like me who doesn't have the ins and
outs of this code carved in their head :-)
Thanks
--
Qais Yousef
>
> /*
>* If time for next balance is due,
> --
> 2.17.1
>
nohz_idle_balance(cpu_rq(cpu), NOHZ_STATS_KICK, CPU_IDLE);
> +}
nit: need_resched() implies this_cpu, but the function signature implies you
could operate on any CPU. Do need_resched() outside this function or make
the function read smp_processor_id() itself without taking an arg?
Thanks
--
Qais Yousef
build-tested
> Cc: Stephen Rothwell
> Cc: Thomas Gleixner
> Cc: Greg Kroah-Hartman
> Cc: Qais Yousef
> ---
> drivers/virt/acrn/hsm.c | 9 +
> 1 file changed, 9 insertions(+)
>
> diff --git a/drivers/virt/acrn/hsm.c b/drivers/virt/acrn/hsm.c
> index 1f6b7c54a
quot;)
> Reported-by: Randy Dunlap
> Signed-off-by: Shuo Liu
> Acked-by: Randy Dunlap # build-tested
> Cc: Stephen Rothwell
> Cc: Thomas Gleixner
> Cc: Greg Kroah-Hartman
> Cc: Qais Yousef
> ---
Reviewed-by: Qais Yousef
Thanks!
--
Qais Yousef
> include/linux/c
hotplug() after onlining the
> cpu in cpu_device_up() and in cpuhp_smt_enable().
>
> Co-analyzed-by: Joshua Baker
> Signed-off-by: Alexey Klimov
> ---
This looks good to me.
Reviewed-by: Qais Yousef
Thanks
--
Qais Yousef
Hi Thorsten
On 02/15/21 06:55, Thorsten Leemhuis wrote:
> Hi! Many thx for looking into this, much appreciated!
>
> Am 14.02.21 um 17:00 schrieb Qais Yousef:
> > On 02/10/21 06:48, Thorsten Leemhuis wrote:
> >
> >> - * If the failure includes a stack dum
eadable, which is explained in
> -admin-guide/bug-hunting.rst.
> +Note, if you can't get this to work, simply skip this step and mention the
> +reason for it in the report. If you're lucky, it might not be needed. And if it
> +is, someone might help you to get things going. Also be aware this is just one
> +of several ways to decode kernel stack traces. Sometimes different steps will
> +be required to retrieve the relevant details. Don't worry about that, if that's
> +needed in your case, developers will tell you what to do.
Ah, you already clarify nicely here that this is a good-to-have rather than
a must-have, as I was trying to allude to above :-)
This looks good to me in general. With the above minor nits fixed, feel free to
add my
Reviewed-by: Qais Yousef
Thanks!
--
Qais Yousef
>
>
> Special care for regressions
> --
> 2.29.2
>
his is that you'll now get a bunch of notifications
> across things like suspend/hybernate.
And the resume latency will incur 5-30ms * nr_cpu_ids.
Since you just care about device_online(), isn't cpu_device_up() a better place
for the wait? This function is a special helper for device_online(), leaving the
suspend/resume and kexec paths free from having to do this unnecessary wait.
Thanks
--
Qais Yousef
id
> interaction was intentional.
The '+1' was added in that comment. The 'original' code was just resetting
nr_balance_failed to cache_nice_tries, so that we don't do another one too
soon, I think.
With this change, no active balance is allowed until later. Which makes sense.
I can't see why we would have allowed another kick sooner tbh. But as you say,
this is an ancient piece of logic.
I agree I can't see a reason to worry about this (potential) change of
behavior.
Thanks
--
Qais Yousef
On 02/03/21 18:59, Valentin Schneider wrote:
> On 03/02/21 17:23, Qais Yousef wrote:
> > On 01/27/21 19:30, Valentin Schneider wrote:
> >> Fiddling some more with a TLA+ model of set_cpus_allowed_ptr() & friends
> >> unearthed one more outstanding i
> > > if (pulled_task)
> > > this_rq->idle_stamp = 0;
> > > + else
> > > + nohz_newidle_balance(this_rq);
> >
> > Since nohz_newidle_balance() will not do any real work now, I couldn't
> > figure
> > out what moving this here achieves. Fault from my end to parse the change
> > most
> > likely :-)
>
> The goal is to schedule the update only if we are about to be idle and
> nothing else has been queued in the meantime
I see. This shortcoming already existed and isn't *strictly* related to moving
the update of blocked load out of newidle balance.
Thanks
--
Qais Yousef
ight?
If I didn't miss something, then dest_cpu should be CPU0 too, not CPU1 and the
task should be moved back to CPU0 as expected?
Thanks
--
Qais Yousef
>task_rq(p) == rq && pending
> __migrate_task(dest_
balance(struct rq *this_rq, struct
> rq_flags *rf)
>
> if (pulled_task)
> this_rq->idle_stamp = 0;
> + else
> + nohz_newidle_balance(this_rq);
Since nohz_newidle_balance() will not do any real work now, I couldn't figure
out what moving this here achieves. Most likely a failure on my end to parse
the change :-)
Joel can still test this patch as is of course. This is just an early review
since I already spent the time trying to understand it.
Thanks
--
Qais Yousef
>
> rq_repin_lock(this_rq, rf);
>
> --
> 2.17.1
tive_balance() too, no? We enter
this path because need_active_balance() returned true; one of the conditions it
checks for is
return unlikely(sd->nr_balance_failed > sd->cache_nice_tries+2);
So since we used to reset nr_balance_failed to cache_nice_tries+1, the above
conditio
r is already doing a humongous amount of work, but turning
> those checks into NOPs for those who don't need it is fairly
> straightforward, so do that.
>
> Suggested-by: Rik van Riel
> Signed-off-by: Valentin Schneider
> ---
Reviewed-by: Qais Yousef
Thanks
--
Qais Y
led to pull anything and the src_rq has a misfit task, but
> + * the busiest group_type was higher than group_misfit_task, try to
> + * go for a misfit active balance anyway.
> + */
> + if ((env->idle != CPU_NOT_IDLE) &&
> + env->src_rq->misfit_task_load &&
> + cpu_capacity_greater(env->dst_cpu, env->src_cpu))
> + return 1;
> +
Reviewed-by: Qais Yousef
Thanks
--
Qais Yousef
> return 0;
> }
>
> --
> 2.27.0
>
* For ASYM_CPUCAPACITY domains with misfit tasks we
> - * simply seek the "biggest" misfit task.
> + * simply seek the "biggest" misfit task we can
> + * accommodate.
>
ery busy task that is running on the biggest cpu this will always
return true.
> + cpu_capacity_greater(env->dst_cpu, env->src_cpu))
But this will save us from triggering unnecessary migration.
We could swap them and optimize for this particular case, but tbh this is the
type of mic
gt; single check:
>
> capacity_greater(, );
>
> This has the added benefit of returning false if the misfit task CPU's is
> heavily pressured, but there are no better candidates for migration.
>
> Signed-off-by: Valentin Schneider
> ---
check_cpu_capacity() call looks redundant
p's
> capacity extrema.
>
> Replace group_smaller_{min, max}_cpu_capacity() with comparisons of the
> source group's min/max capacity and the destination CPU's capacity.
>
> Signed-off-by: Valentin Schneider
> ---
Reviewed-by: Qais Yo
ly few lines below we have
return capacity_greater(ref->sgc->max_capacity, sg->sgc->max_capacity);
which passes 'ref->...' as cap, which can be confusing when looking at @ref in
the function signature.
Either way, this LGTM
Reviewed-by: Qais Yousef
Thanks
--
Qais
capacities less
> than
> 5% apart.
One more margin is a cause of apprehension to me. But in this case I think it
is the appropriate thing to do now. I can't think of a scenario where this
could hurt.
Thanks
--
Qais Yousef
fined rate_limit_us value.
And tweaked the way we call cpufreq_update_util() from
update_blocked_averages() too so that we first update blocked load on all cpus,
then we ask for the frequency update. Combined with above this should result to
a single call to sugov_update_shared() for each po
On 01/26/21 17:23, Peter Zijlstra wrote:
> On Wed, Jan 27, 2021 at 12:58:33AM +0900, Sergey Senozhatsky wrote:
> > On (21/01/26 14:59), Qais Yousef wrote:
>
> > > # [67628.388606] hrtimer: interrupt took 304720 ns
> > > [67628
On 01/25/21 14:23, Vincent Guittot wrote:
> On Fri, 22 Jan 2021 at 19:39, Qais Yousef wrote:
> >
> > On 01/22/21 17:56, Vincent Guittot wrote:
> > > > ---
> > > > kernel/sched/fair.c | 2 +-
> > > > 1 file changed, 1 insertion(+), 1 deletion(-)
On 01/26/21 13:46, Sergey Senozhatsky wrote:
> On (21/01/23 23:37), Qais Yousef wrote:
> >
> > I hit a pr_warn() inside hrtimer_interrupt() which lead to a BUG: Invalid
> > wait
> > context splat.
> >
> > The problem wasn't reproducible but I think th
On 01/25/21 12:04, John Ogness wrote:
> On 2021-01-25, Peter Zijlstra wrote:
> > On Sat, Jan 23, 2021 at 11:37:40PM +0000, Qais Yousef wrote:
> >> To allow users in code where printk is not allowed.
> >>
> >> Signed-off-by: Qais Yousef
> &
printk is not allowed in this context and causes a BUG: Invalid wait context.
Signed-off-by: Qais Yousef
---
kernel/time/hrtimer.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/kernel/time/hrtimer.c b/kernel/time/hrtimer.c
index 743c852e10f2..2d9b7cf1d5e2 100644
--- a
g; but the name
ended up not much shorter, and I'm not sure the wrappers are a win overall.
Since I've already done it, I'm sticking to it in this post. But will be happy
to drop it and just open code the printk_deferred_once(KERN_WARN, ...) in
hrtimer_interrupt() instead.
Thanks
Qais
To allow users in code where printk is not allowed.
Signed-off-by: Qais Yousef
---
include/linux/printk.h | 24
1 file changed, 24 insertions(+)
diff --git a/include/linux/printk.h b/include/linux/printk.h
index fe7eb2351610..92c0978c7b44 100644
--- a/include/linux
te_blocked_fair() expensive, and it seems
to always report that something has decayed, so we end up with a call to
sugov_update_shared() every time.
I think we should limit the expensive call to update_blocked_averages() but
I honestly don't know what would be the right way to do it
Very good catch! Yes, this missed the reentered kprobe case.
>
> Acked-by: Masami Hiramatsu
Thanks!
>
> >
> > Fixes: ba090f9cafd5 ("arm64: kprobes: Remove redundant kprobe_step_ctx")
> > Signed-off-by: Qais Yousef
> > ---
> >
> > Anoth
e the problem.
Fixes: ba090f9cafd5 ("arm64: kprobes: Remove redundant kprobe_step_ctx")
Signed-off-by: Qais Yousef
---
Another change in behavior I noticed is that before ba090f9cafd5 ("arm64:
kprobes: Remove redundant kprobe_step_ctx") if 'cur' was NULL we wouldn
On 01/14/21 12:45, Qais Yousef wrote:
> Hi
>
> I hit this splat today
>
> # [67628.388606] hrtimer: interrupt took 304720 ns
> [67628.393546]
> [67628.393550] =
> [67628.393554] [ BUG: Invalid wait context ]
>
On 01/19/21 17:50, Quentin Perret wrote:
> On Tuesday 19 Jan 2021 at 17:42:44 (+), Qais Yousef wrote:
> > Hmm IIUC you want to still tag it as misfit so it'll be balanced within the
> > little cores in case there's another core with more spare capacity, right?
>
On 01/19/21 16:55, Quentin Perret wrote:
> On Tuesday 19 Jan 2021 at 16:40:27 (+), Qais Yousef wrote:
> > On 01/19/21 15:35, Quentin Perret wrote:
> > > Do you mean failing the sched_setaffinity syscall if e.g. the task
> > > has a min clamp that is higher than
On 01/19/21 15:35, Quentin Perret wrote:
> On Tuesday 19 Jan 2021 at 12:07:55 (+), Qais Yousef wrote:
> > If the task is pinned to a cpu, setting the misfit status means that
> > we'll unnecessarily continuously attempt to migrate the task but fail.
> >
> > Th
Documentation/bpf/bpf_design_QA.rst to document this contract.
Acked-by: Yonghong Song
Signed-off-by: Qais Yousef
---
Documentation/bpf/bpf_design_QA.rst | 6 ++
include/trace/bpf_probe.h | 12 ++--
2 files changed, 16 insertions(+), 2 deletions(-)
diff --git a
Reuse module_attach infrastructure to add a new bare tracepoint to check
we can attach to it as a raw tracepoint.
Signed-off-by: Qais Yousef
---
.../bpf/bpf_testmod/bpf_testmod-events.h | 6 +
.../selftests/bpf/bpf_testmod/bpf_testmod.c | 21 ++-
.../selftests/bpf
ts in tracefs created with them.
Qais Yousef (2):
trace: bpf: Allow bpf to attach to bare tracepoints
selftests: bpf: Add a new test for bare tracepoints
Documentation/bpf/bpf_design_QA.rst | 6 +
include/trace/bpf_probe.h | 12 +++--
.../bpf/bpf_te
7; load-balance type")
Signed-off-by: Qais Yousef
---
kernel/sched/fair.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 197a51473e0c..9379a481dd8c 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4060,7 +406
< 0, "testmod_file_open", "failed: %d\n", err))
> return err;
>
> My above rewrite intends to use "err" during final "return" statement,
> so I put assignment of "err = -errno" inside the CHECK branch.
> But there are different ways to implement this properly.
Okay I see now. Sorry I missed your point initially. I will fix and send v3.
Thanks
--
Qais Yousef
On 01/16/21 18:11, Yonghong Song wrote:
>
>
> On 1/16/21 10:21 AM, Qais Yousef wrote:
> > Reuse module_attach infrastructure to add a new bare tracepoint to check
> > we can attach to it as a raw tracepoint.
> >
> > Signed-off-by: Qais Yousef
> > --
tracepoints are declared
with TRACE_EVENT().
BPF can attach to these tracepoints as RAW_TRACEPOINT() only, as there are no
events in tracefs created with them.
Qais Yousef (2):
trace: bpf: Allow bpf to attach to bare tracepoints
selftests: bpf: Add a new test for bare tracepoints
Documentatio
Documentation/bpf/bpf_design_QA.rst to document this contract.
Signed-off-by: Qais Yousef
---
Documentation/bpf/bpf_design_QA.rst | 6 ++
include/trace/bpf_probe.h | 12 ++--
2 files changed, 16 insertions(+), 2 deletions(-)
diff --git a/Documentation/bpf/bpf_design_QA.rst
b
Reuse module_attach infrastructure to add a new bare tracepoint to check
we can attach to it as a raw tracepoint.
Signed-off-by: Qais Yousef
---
.../bpf/bpf_testmod/bpf_testmod-events.h | 6 +
.../selftests/bpf/bpf_testmod/bpf_testmod.c | 21 ++-
.../selftests/bpf
On 01/13/21 17:40, Jean-Philippe Brucker wrote:
> On Wed, Jan 13, 2021 at 10:21:31AM +0000, Qais Yousef wrote:
> > On 01/12/21 12:07, Andrii Nakryiko wrote:
> > > > > > $ sudo ./test_progs -v -t module_attach
> > > > >
> > > > > u
end up here. Not sure if there's an
appropriate or easy fix for that.
But for the sake of documenting at least, sending this report to LKML.
It was a random occurrence and not something I can reproduce.
Thanks
--
Qais Yousef
have all necessary FTRACE options enabled,
including DYNAMIC_FTRACE. I think I did try enabling fault injection too just
in case. I have CONFIG_FAULT_INJECTION=y and CONFIG_FUNCTION_ERROR_INJECTION=y.
I will look at the CI config and see if I can figure it out.
I will likely get a chance to look at all of this and send v2 over the
weekend.
Thanks
--
Qais Yousef
o you know what is the possible reason?
Yeah I did a last minute fix to address a checkpatch.pl error and my
verification of the change wasn't good enough obviously.
If you're keen to try out I can send you a patch with the fix. I should send v2
by the weekend too.
Thanks for having a look.
Cheers
--
Qais Yousef