On Thu, May 08, 2025 at 02:43:11PM +0800, Z qiang wrote:
> On Thu, May 8, 2025 at 12:25 AM Frederic Weisbecker
> wrote:
> > On a second thought, isn't "rdp == this_cpu_ptr(&rcu_data)" enough?
>
> If CONFIG_DEBUG_PREEMPT=y, the following code will
give final
> green signal. Then I'll pull this particular one.
>
> One thing I was wondering -- it would be really nice if preemptible() itself
> checked for softirq_count() by default. Or adding something like a
> really_preemptible() which checks for both CONFIG_PREEMPT_COUNT and
> softirq_count() along with preemptible(). I feel like this always comes
> back to bite us in different ways, and not knowing the atomicity context
> complicates various code paths.
>
> Maybe a summer holidays project? ;)
I thought about that too, but I think this is semantically incorrect:
In PREEMPT_RT, softirqs are actually preemptible.
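For illustration, a minimal sketch of such a helper (hypothetical, not in
mainline), which also shows why it would be wrong on PREEMPT_RT:

	/* Hypothetical helper from the discussion above -- not in mainline. */
	static inline bool really_preemptible(void)
	{
		/*
		 * Wrong on PREEMPT_RT: softirq_count() can be non-zero
		 * while the softirq itself remains preemptible.
		 */
		return IS_ENABLED(CONFIG_PREEMPT_COUNT) &&
		       preemptible() && !softirq_count();
	}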
Thanks.
>
> - Joel
>
--
Frederic Weisbecker
SUSE Labs
if the CPU is completely offline.
But if the current CPU is looking at the local rdp, it means it is online
and the rdp can't be concurrently [de]offloaded, right?
Thanks.
> rcu_current_is_nocb_kthread(rdp)),
> "Unsafe read of RCU_NOCB offloaded state"
> --
> 2.17.1
>
>
--
Frederic Weisbecker
SUSE Labs
s point to take it in for 6.16, though I've also stored it
> in
> my rcu/dev branch for Neeraj's 6.17 PR, just in case :)
I'm fine either way. To me it's neither too late nor too early :-)
Thanks.
>
> - Joel
>
>
--
Frederic Weisbecker
SUSE Labs
On Wed, Apr 30, 2025 at 02:20:31AM +, Joel Fernandes wrote:
>
>
> > On Apr 29, 2025, at 9:44 AM, Frederic Weisbecker
> > wrote:
>
> Hi Frederic,
> These all look good to me. Do you wish for
> these to go into the upcoming merge window or
> can I push
is expected to be very short. However, #VMEXIT and other
hazards can get in the way. Report long delays; 10 jiffies is
already considered a high threshold.
Reported-by: Paul E. McKenney
Reviewed-by: Paul E. McKenney
Signed-off-by: Frederic Weisbecker
---
kernel/rcu/tree_exp.h | 10 ++
1
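A minimal sketch of the kind of reporting described above (variable names
hypothetical; the actual change lives in kernel/rcu/tree_exp.h):

	unsigned long begin = jiffies;

	/* ... IPI the target CPU and wait for its expedited QS report ... */

	if (time_after(jiffies, begin + 10))
		pr_alert("rcu: expedited QS request delayed by %lu jiffies\n",
			 jiffies - begin);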
ns to be OK, but an accident is waiting to happen.
For all those reasons, remove this optimization that doesn't look worth
keeping around.
Reported-by: Paul E. McKenney
Signed-off-by: Frederic Weisbecker
Reviewed-by: Paul E. McKenney
Signed-off-by: Frederic Weisbecker
---
kernel/rcu/tree.c
t CPU will
be reported on its behalf by the RCU exp kworker.
Provide an assertion to verify those expectations.
Reviewed-by: Paul E. McKenney
Signed-off-by: Frederic Weisbecker
---
kernel/rcu/tree.c | 6 ++
1 file changed, 6 insertions(+)
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
Signed-off-by: Frederic Weisbecker
---
kernel/rcu/tree_plugin.h | 1 -
1 file changed, 1 deletion(-)
diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
index 3c0b686f..d51cc7a5dfc7 100644
--- a/kernel/rcu/tree_plugin.h
+++ b/kernel/rcu/tree_plugin.h
@@ -534,7 +
ul E. McKenney
Signed-off-by: Frederic Weisbecker
---
kernel/rcu/tree_exp.h | 14 +++---
1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/kernel/rcu/tree_exp.h b/kernel/rcu/tree_exp.h
index c36c7d5575ca..2fa7aa9155bd 100644
--- a/kernel/rcu/tree_exp.h
+++ b/kernel/rcu/tree_e
Hi,
No real updates in this set. Just collected a few tags.
Thanks.
Frederic Weisbecker (5):
rcu/exp: Protect against early QS report
rcu/exp: Remove confusing needless full barrier on task unblock
rcu/exp: Remove needless CPU up quiescent state report
rcu/exp: Warn on QS requested on dying CPU
accordingly.
Signed-off-by: Frederic Weisbecker
---
kernel/rcu/tree.c | 27 +--
1 file changed, 17 insertions(+), 10 deletions(-)
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 90d0214c05c7..cbcd579c5630 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -3
(!(IS_ENABLED(CONFIG_PREEMPT_COUNT) && preemptible()) &&
> rdp == this_cpu_ptr(&rcu_data)) ||
> - rcu_current_is_nocb_kthread(rdp)),
> + rcu_current_is_nocb_kthread(rdp) ||
> + (IS_ENABLED(CONFIG_PREEMPT_RT) &&
&
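For context, the check being extended above is the lockdep assertion in
rcu_rdp_is_offloaded() (kernel/rcu/tree_plugin.h). Its mainline form reads
roughly as follows; the patch adds an IS_ENABLED(CONFIG_PREEMPT_RT) clause
whose body is truncated in the excerpt:

	RCU_LOCKDEP_WARN(
		!(lockdep_is_held(&rcu_state.barrier_mutex) ||
		  (IS_ENABLED(CONFIG_HOTPLUG_CPU) && lockdep_is_cpus_held()) ||
		  lockdep_is_held(&rdp->nocb_lock) ||
		  (!(IS_ENABLED(CONFIG_PREEMPT_COUNT) && preemptible()) &&
		   rdp == this_cpu_ptr(&rcu_data)) ||
		  rcu_current_is_nocb_kthread(rdp)),
		"Unsafe read of RCU_NOCB offloaded state");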
On Sat, Mar 22, 2025 at 03:40:53PM +0100, Joel Fernandes wrote:
>
>
> On 3/22/2025 3:20 PM, Joel Fernandes wrote:
> >
> > On 3/22/2025 11:25 AM, Frederic Weisbecker wrote:
> >> On Sat, Mar 22, 2025 at 03:06:08AM +0100, Joel Fernandes wrote:
> >>
> > >> ->gp_seq = 1
> > >> WRITE_ONCE(rnp->gp_seq, rcu_state.gp_seq);
> > >>
> > >> This can happen due to get_state_synchronize_rcu_full() sampling
> > >> rcu_state.gp_seq_polled, however the poll_state_synchronize_rcu_full()
> > >> sampling
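The excerpt truncates here. For reference, the two APIs pair like this (a
minimal usage sketch; obj and its kfree() are placeholders):

	struct rcu_gp_oldstate rgos;

	get_state_synchronize_rcu_full(&rgos);	/* snapshot current GP state */
	/* ... unlink obj from the reader-visible structure ... */
	if (!poll_state_synchronize_rcu_full(&rgos))
		synchronize_rcu();		/* GP not yet done: wait */
	kfree(obj);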
On Sat, Mar 22, 2025 at 06:00:13PM +0100, Joel Fernandes wrote:
>
>
> On 3/18/2025 2:56 PM, Frederic Weisbecker wrote:
> > RCU relies on the context tracking nesting counter in order to determine
> > if it is running in an extended quiescent state.
> >
> > How
On Wed, Apr 02, 2025 at 06:53:24AM +, Kuyo Chang (張建文) wrote:
> Hi,
>
> By reviewing get_nohz_timer_target(), it looks like it can make an offline
> CPU visible as a timer candidate; maybe this patch could fix it?
>
>
> [PATCH] sched/core: Exclude offline CPUs from the timer candidates
>
> Th
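A sketch of the idea (loop shape borrowed from mainline
get_nohz_timer_target(); the added cpu_online() test is the proposed fix):

	for_each_cpu_and(i, sched_domain_span(sd),
			 housekeeping_cpumask(HK_TYPE_TIMER)) {
		if (cpu == i)
			continue;
		/* Proposed: never pick a CPU that has gone offline */
		if (!cpu_online(i))
			continue;
		if (!idle_cpu(i))
			return i;
	}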
On Tue, Apr 01, 2025 at 12:30:40PM -0400, Joel Fernandes wrote:
> Hello, Frederic,
>
> On Tue, 1 Apr 2025 16:27:36 GMT, Frederic Weisbecker wrote:
> > On Mon, Mar 31, 2025 at 02:29:52PM -0700, Paul E. McKenney wrote:
> > > The disagreement is a feature, at least up t
On Mon, Mar 31, 2025 at 02:29:52PM -0700, Paul E. McKenney wrote:
> The disagreement is a feature, at least up to a point. That feature
> allows CPUs to go idle for long periods without RCU having to bother
> them or to mess with their per-CPU data (give or take ->gpwrap). It also
> allows per
On Mon, Mar 31, 2025 at 11:28:47AM -0700, Paul E. McKenney wrote:
> > So I'm unfortunately asking again if it wouldn't be a good idea to have a
> > single
> > global state counter that lives in the root node so that we don't have it
> > duplicated in rcu_state.gp_seq.
> >
> > This involves som
Hi Walter Chang,
On Wed, Mar 26, 2025 at 05:46:38AM +, Walter Chang (張維哲) wrote:
> On Tue, 2025-01-21 at 09:08 -0800, Paul E. McKenney wrote:
> > On Sat, Jan 18, 2025 at 12:24:33AM +0100, Frederic Weisbecker wrote:
> > > hrtimers are migrated away from the dying CPU to
On Sat, Mar 22, 2025 at 03:06:08AM +0100, Joel Fernandes wrote:
> Insomnia kicked in, so 3 am reply here (Zurich local time) ;-):
>
> On 3/20/2025 3:15 PM, Frederic Weisbecker wrote:
> > On Wed, Mar 19, 2025 at 03:38:31PM -0400, Joel Fernandes wrote:
> >> On Tue, Ma
On Wed, Mar 19, 2025 at 03:38:31PM -0400, Joel Fernandes wrote:
> On Tue, Mar 18, 2025 at 11:37:38AM -0700, Paul E. McKenney wrote:
> > On Tue, Mar 18, 2025 at 02:56:18PM +0100, Frederic Weisbecker wrote:
> > > The numbers used in rcu_seq_done_exact() lack some explanation
On Tue, Mar 18, 2025 at 10:22:33AM -0700, Paul E. McKenney wrote:
> On Fri, Mar 14, 2025 at 03:36:42PM +0100, Frederic Weisbecker wrote:
> > A CPU within hotplug operations can make the RCU exp kworker lagging if:
> >
> > * The dying CPU is running after CPUHP_TE
On Tue, Mar 18, 2025 at 10:21:48AM -0700, Paul E. McKenney wrote:
> On Fri, Mar 14, 2025 at 03:36:41PM +0100, Frederic Weisbecker wrote:
> > It is not possible to send an IPI to a dying CPU that has passed the
> > CPUHP_TEARDOWN_CPU stage. Remaining unhandled IPIs are h
On Tue, Mar 18, 2025 at 10:18:12AM -0700, Paul E. McKenney wrote:
> On Fri, Mar 14, 2025 at 03:36:39PM +0100, Frederic Weisbecker wrote:
> > A full memory barrier in the RCU-PREEMPT task unblock path advertises
> > to order the context switch (or rather the accesses prior to
>
On Tue, Mar 18, 2025 at 10:17:16AM -0700, Paul E. McKenney wrote:
> On Fri, Mar 14, 2025 at 03:36:38PM +0100, Frederic Weisbecker wrote:
> > When a grace period is started, the ->expmask of each node is set up
> > from sync_exp_reset_tree(). Then later on each leaf node also
re accurate indicators available.
Clarify and robustify accordingly.
Signed-off-by: Frederic Weisbecker
---
kernel/rcu/tree.c | 27 +--
1 file changed, 17 insertions(+), 10 deletions(-)
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 79dced5fb72e..90c43061c981 100644
;gp_seq, rcu_state.gp_seq);
Add a comment about those expectations to clarify the magic within
the relevant function.
Signed-off-by: Frederic Weisbecker
---
kernel/rcu/rcu.h | 7 +++
1 file changed, 7 insertions(+)
diff --git a/kernel/rcu/rcu.h b/kernel/rcu/rcu.h
index eed2951a4962..7a
Hi,
Just a bunch of things I had lagging in my local queue.
Thanks.
Frederic Weisbecker (2):
rcu: Comment on the extraneous delta test on rcu_seq_done_exact()
rcu: Robustify rcu_is_cpu_rrupt_from_idle()
kernel/rcu/rcu.h | 7 +++
kernel/rcu/tree.c | 27 +--
2
eport.
Reported-by: Paul E. McKenney
Signed-off-by: Frederic Weisbecker
---
kernel/rcu/tree_nocb.h | 3 +++
kernel/rcu/tree_stall.h | 3 +--
2 files changed, 4 insertions(+), 2 deletions(-)
diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
index 4a954ecf1c36..56baa78c6e85 100644
On Sun, Mar 16, 2025 at 10:23:45AM -0400, Joel Fernandes wrote:
> >> A small side effect of this patch could be:
> >>
> >> In the existing code, if, between the sync_exp_reset_tree() and the
> >> __sync_rcu_exp_select_node_cpus(), a pre-existing reader unblocked and
> >> completed, then I thin
On Sat, Mar 15, 2025 at 07:59:45PM -0400, Joel Fernandes wrote:
> Hi Frederic,
>
> On Fri, Mar 14, 2025 at 03:36:38PM +0100, Frederic Weisbecker wrote:
> > When a grace period is started, the ->expmask of each node is set up
> > from sync_exp_reset_tree(). Then late
ns to be OK, but an accident is waiting to happen.
For all those reasons, remove this optimization that doesn't look worth
keeping around.
Reported-by: Paul E. McKenney
Signed-off-by: Frederic Weisbecker
Reviewed-by: Paul E. McKenney
---
kernel/rcu/tree.c | 2 --
ker
t CPU will
be reported on its behalf by the RCU exp kworker.
Provide an assertion to verify those expectations.
Signed-off-by: Frederic Weisbecker
---
kernel/rcu/tree.c | 6 ++
1 file changed, 6 insertions(+)
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 3fe68057d8b4..79dced5
On Mon, Mar 03, 2025 at 12:10:50PM -0800, Paul E. McKenney wrote:
> On Fri, Feb 14, 2025 at 12:25:59AM +0100, Frederic Weisbecker wrote:
> > A CPU coming online checks for an ongoing grace period and reports
> > a quiescent state accordingly if needed. This special treatment tha
is expected to be very short. However, #VMEXIT and other
hazards can get in the way. Report long delays; 10 jiffies is
already considered a high threshold.
Reported-by: Paul E. McKenney
Signed-off-by: Frederic Weisbecker
---
kernel/rcu/tree_exp.h | 10 ++
1 file changed, 10 insertions
g the
root rnp to wait/check for the GP completion.
3) The (perhaps redundant, given steps 1) and 2)) smp_mb() in rcu_seq_end()
before the grace period completes.
This makes the explicit barrier in this place superfluous. Therefore
remove it, as it is confusing.
Signed-off-by: Frederic Weisb
side critical section before
sync_exp_reset_tree() is called and is then unblocked between
sync_exp_reset_tree() and __sync_rcu_exp_select_node_cpus(), the QS
won't be reported because no RCU exp IPI had been issued to request it
through the setting of srdp->cpu_no_qs.b.exp.
Signed-off-b
Hi,
Changes in this version:
* [1/5] Explain why it's fine if a task unblocks between
sync_exp_reset_tree() and __sync_rcu_exp_select_node_cpus(), per Paul's
suggestion.
* [3/5] Add Paul's reviewed-by tag
* [4/5] and [5/5] are new patches after discussion.
Frederic Weisbe
On Fri, Feb 14, 2025 at 01:10:43AM -0800, Paul E. McKenney wrote:
> On Fri, Feb 14, 2025 at 12:25:57AM +0100, Frederic Weisbecker wrote:
> > When a grace period is started, the ->expmask of each node is set up
> > from sync_exp_reset_tree(). Then later on each leaf node also
On Wed, Feb 26, 2025 at 10:26:34AM -0500, Joel Fernandes wrote:
>
>
> On 2/26/2025 10:04 AM, Paul E. McKenney wrote:
> >>> I was wondering if you could also point to the fastpath that this is
> >>> racing
> >>> with, it is not immediately clear (to me) what this smp_mb() is pairing
> >>> wit
On Tue, Feb 25, 2025 at 04:59:08PM -0500, Joel Fernandes wrote:
> On Fri, Feb 14, 2025 at 12:25:58AM +0100, Frederic Weisbecker wrote:
> > A full memory barrier in the RCU-PREEMPT task unblock path advertises
> > to order the context switch (or rather the accesses prior to
>
On Wed, Feb 19, 2025 at 06:58:36AM -0800, Paul E. McKenney wrote:
> On Sat, Feb 15, 2025 at 11:23:45PM +0100, Frederic Weisbecker wrote:
> > > Before. There was also some buggy debug code in play. Also, to get the
> > > failure, it was necessary to make TREE03 disable
On Wed, Feb 19, 2025 at 07:55:05AM -0800, Paul E. McKenney wrote:
> > > diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> > > index 86935fe00397..40d6090a33f5 100644
> > > --- a/kernel/rcu/tree.c
> > > +++ b/kernel/rcu/tree.c
> > > @@ -4347,6 +4347,12 @@ void rcutree_report_cpu_dead(void)
> >
On Sat, Feb 15, 2025 at 02:38:04AM -0800, Paul E. McKenney wrote:
> On Fri, Feb 14, 2025 at 01:10:52PM +0100, Frederic Weisbecker wrote:
> > On Fri, Feb 14, 2025 at 01:01:56AM -0800, Paul E. McKenney wrote:
> > > On Fri, Feb 14, 2025 at 12:25:59AM +0100, Frederic Weisbecke
On Fri, Feb 14, 2025 at 01:01:56AM -0800, Paul E. McKenney wrote:
> On Fri, Feb 14, 2025 at 12:25:59AM +0100, Frederic Weisbecker wrote:
> > A CPU coming online checks for an ongoing grace period and reports
> > a quiescent state accordingly if needed. This special treatment tha
g the
root rnp to wait/check for the GP completion.
3) The (perhaps redundant, given steps 1) and 2)) smp_mb() in rcu_seq_end()
before the grace period completes.
This makes the explicit barrier in this place superfluous. Therefore
remove it, as it is confusing.
Signed-off-by: Frederic Weisb
ns to be OK, but an accident is waiting to happen.
For all those reasons, remove this optimization that doesn't look worth
keeping around.
Reported-by: Paul E. McKenney
Signed-off-by: Frederic Weisbecker
---
kernel/rcu/tree.c | 2 --
kernel/rcu/tree_exp.h | 45 ++---
e
reported and propagated while ignoring tasks that blocked _before_ the
start of the grace period.
Prevent such trouble from happening in the future and initialize both the
quiescent states mask to report and the blocked tasks head from within the
same node-locked block.
Signed-off-by: Frederic Weisbecker
-
Here are a few updates for expedited RCU. Some were inspired by debates
with Paul while he was investigating the issue that got eventually
fixed by "rcu: Fix get_state_synchronize_rcu_full() GP-start detection".
Frederic Weisbecker (3):
rcu/exp: Protect against early QS report
rcu/e
On Fri, Jan 24, 2025 at 04:01:55PM -0800, Paul E. McKenney wrote:
> On Sat, Jan 25, 2025 at 12:03:58AM +0100, Frederic Weisbecker wrote:
> > On Fri, Dec 13, 2024 at 11:49:49AM -0800, Paul E. McKenney wrote:
> > > diff --git a/kernel/rcu/rcu.h b/kernel/rcu/rcu.h
> &g
On Fri, Dec 13, 2024 at 11:49:49AM -0800, Paul E. McKenney wrote:
> diff --git a/kernel/rcu/rcu.h b/kernel/rcu/rcu.h
> index 2f9c9272cd486..d2a91f705a4ab 100644
> --- a/kernel/rcu/rcu.h
> +++ b/kernel/rcu/rcu.h
> @@ -162,7 +162,7 @@ static inline bool rcu_seq_done_exact(unsigned long *sp,
> unsigned long s)
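For reference, the helper under discussion reads roughly like this
(reconstructed from memory of mainline kernel/rcu/rcu.h, so details may
differ; the hunk above widens the guard window):

	static inline bool rcu_seq_done_exact(unsigned long *sp, unsigned long s)
	{
		unsigned long cur_s = READ_ONCE(*sp);

		return ULONG_CMP_GE(cur_s, s) ||
		       ULONG_CMP_LT(cur_s, s - (2 * RCU_SEQ_STATE_MASK + 1));
	}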
On Fri, Jan 24, 2025 at 11:40:54AM -0800, Paul E. McKenney wrote:
> > > > I'm wondering, what prevents us from removing rcu_state.gp_seq and
> > > > relying only on
> > > > the root node for the global state?
> > >
> > > One scenario comes to mind immediately. There may be others.
> > >
> > >
On Fri, Jan 24, 2025 at 07:58:20AM -0800, Paul E. McKenney wrote:
> On Fri, Jan 24, 2025 at 03:49:24PM +0100, Frederic Weisbecker wrote:
> > On Fri, Dec 13, 2024 at 11:49:49AM -0800, Paul E. McKenney wrote:
> > > The get_state_synchronize_rcu_full() and poll_state_sy
On Thu, Jan 23, 2025 at 08:49:47PM -0500, Joel Fernandes wrote:
> On Thu, Dec 12, 2024 at 7:59 PM Paul E. McKenney wrote:
> >
> > The get_state_synchronize_rcu_full() and poll_state_synchronize_rcu_full()
> > functions use the root rcu_node structure's ->gp_seq field to detect
> > the beginning
On Fri, Dec 13, 2024 at 11:49:49AM -0800, Paul E. McKenney wrote:
> The get_state_synchronize_rcu_full() and poll_state_synchronize_rcu_full()
> functions use the root rcu_node structure's ->gp_seq field to detect
> the beginnings and ends of grace periods, respectively. This choice is
> necess
and remove their halfway working ad-hoc
affinity implementation
* Implement kthreads preferred affinity
* Unify kthread worker and kthread APIs' style
* Convert RCU kthreads to the new API and remove the ad-hoc affinity
implementation.
----
outgoing CPU
earlier")
Closes: 20241213203739.1519801-1-usamaarif...@gmail.com
Signed-off-by: Frederic Weisbecker
Signed-off-by: Paul E. McKenney
---
include/linux/hrtimer_defs.h | 1 +
kernel/time/hrtimer.c | 92 +---
2 files changed, 75 insertions(+), 18
On Thu, Jan 16, 2025 at 11:59:48AM +0100, Thomas Gleixner wrote:
> On Tue, Dec 31 2024 at 18:07, Frederic Weisbecker wrote:
> > hrtimers are migrated away from the dying CPU to any online target at
> > the CPUHP_AP_HRTIMERS_DYING stage in order not to delay bandwidth timers
>
On Fri, Jan 03, 2025 at 03:27:03PM +, Will Deacon wrote:
> On Wed, Dec 11, 2024 at 04:40:23PM +0100, Frederic Weisbecker wrote:
> > +const struct cpumask *task_cpu_fallback_mask(struct task_struct *p)
> > +{
> > + if (!static_branch_unlikely(&a
This reverts commit f7345ccc62a4b880cf76458db5f320725f28e400.
swake_up_one_online() has been removed because hrtimers can now assign
a proper online target to hrtimers queued from offline CPUs. Therefore
remove the related hackery.
Reviewed-by: Usama Arif
Signed-off-by: Frederic Weisbecker
It's now OK to perform a wake-up from an offline CPU because the
resulting armed scheduler bandwidth hrtimers are now correctly targeted
by the hrtimer infrastructure.
Remove the obsolete hackery.
Reviewed-by: Usama Arif
Signed-off-by: Frederic Weisbecker
---
kernel/rcu/tree.c
queued from an offline CPU.
This will also allow reverting all the above disgraceful RCU hacks.
Reported-by: Vlad Poenaru
Reported-by: Usama Arif
Fixes: 5c0930ccaad5 ("hrtimers: Push pending hrtimers away from outgoing CPU
earlier")
Closes: 20241213203739.1519801-1-usamaarif...@gmail.com
a newlines
Frederic Weisbecker (3):
hrtimers: Force migrate away hrtimers queued after
CPUHP_AP_HRTIMERS_DYING
rcu: Remove swake_up_one_online() bandaid
Revert "rcu/nocb: Fix rcuog wake-up from offline softirq"
include/linux/hrtimer_defs.h | 1 +
kernel/rcu/tr
This reverts commit f7345ccc62a4b880cf76458db5f320725f28e400.
swake_up_one_online() has been removed because hrtimers can now assign
a proper online target to hrtimers queued from offline CPUs. Therefore
remove the related hackery.
Signed-off-by: Frederic Weisbecker
---
kernel/rcu/tree_nocb.h
...@gmail.com
Signed-off-by: Frederic Weisbecker
Signed-off-by: Paul E. McKenney
---
include/linux/hrtimer_defs.h | 1 +
kernel/time/hrtimer.c | 55 +---
2 files changed, 52 insertions(+), 4 deletions(-)
diff --git a/include/linux/hrtimer_defs.h b/incl
It's now OK to perform a wake-up from an offline CPU because the
resulting armed scheduler bandwidth hrtimers are now correctly targeted
by the hrtimer infrastructure.
Remove the obsolete hackery.
Signed-off-by: Frederic Weisbecker
---
kernel/rcu/tree.c
PLUG_CPU=n (folded #ifdeffery by Paul)
- Remove the unconditional base lock within the IPI when both nohz and
high resolution are off. There is really nothing to do for the IPI in
that case.
Frederic Weisbecker (3):
hrtimers: Force migrate away hrtimers queued after
CPUHP_AP_HRTIMERS_D
On Thu, Dec 19, 2024 at 10:00:12PM +0300, Usama Arif wrote:
> > @@ -1240,6 +1280,12 @@ static int __hrtimer_start_range_ns(struct hrtimer
> > *timer, ktime_t tim,
> >
> > hrtimer_set_expires_range_ns(timer, tim, delta_ns);
> >
> > + if (unlikely(!this_cpu_base->online)) {
> > +
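The hunk truncates above. Conceptually (a hedged sketch, not the verbatim
patch), the added branch avoids keeping the timer on the now-offline local
base:

	if (unlikely(!this_cpu_base->online)) {
		/*
		 * Local CPU base is offline: do not force-queue locally,
		 * let base switching pick an online target instead.
		 */
		force_local = false;
	}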
On Fri, Dec 20, 2024 at 03:19:31PM -0800, Paul E. McKenney wrote:
> On Thu, Dec 19, 2024 at 09:42:48AM -0800, Paul E. McKenney wrote:
> > On Wed, Dec 18, 2024 at 05:50:05PM +0100, Frederic Weisbecker wrote:
> > > 5c0930ccaad5 ("hrtimers: Push pending hrtimers
This reverts commit f7345ccc62a4b880cf76458db5f320725f28e400.
swake_up_one_online() has been removed because hrtimers can now assign
a proper online target to hrtimers queued from offline CPUs. Therefore
remove the related hackery.
Signed-off-by: Frederic Weisbecker
---
kernel/rcu/tree_nocb.h
It's now OK to perform a wake-up from an offline CPU because the
resulting armed scheduler bandwidth hrtimers are now correctly targeted
by the hrtimer infrastructure.
Remove the obsolete hackery.
Signed-off-by: Frederic Weisbecker
---
kernel/rcu/tree.c
arif...@gmail.com
Signed-off-by: Frederic Weisbecker
---
include/linux/hrtimer_defs.h | 1 +
kernel/time/hrtimer.c | 60 +++-
2 files changed, 54 insertions(+), 7 deletions(-)
diff --git a/include/linux/hrtimer_defs.h b/include/linux/hrtimer_defs.h
index c3
onfined to RCU. But not anymore, as it is spreading to the hotplug code
itself
(https://lore.kernel.org/all/20241213203739.1519801-1-usamaarif...@gmail.com/)
Instead of introducing yet another hack, fix the problem in
hrtimers for everyone.
Frederic Weisbecker (3):
hrtimers: Force migrate awa
On Thu, Dec 12, 2024 at 10:42:13AM -0800, Paul E. McKenney wrote:
> From: Frederic Weisbecker
>
> It's more convenient to benefit from the fallthrough feature of
> switch / case to handle the timer state machine. Also a new state is
> about to be added that will take adva
On Thu, Dec 12, 2024 at 10:42:14AM -0800, Paul E. McKenney wrote:
> From: Frederic Weisbecker
>
> After a CPU has set itself offline and before it eventually calls
> rcutree_report_cpu_dead(), there are still opportunities for callbacks
> to be enqueued, for example from an
On Fri, Dec 13, 2024 at 08:33:45PM +, Usama Arif wrote:
> The following warning is being encountered at boot time:
>
>WARNING: CPU: 94 PID: 588 at kernel/time/hrtimer.c:1086
> hrtimer_start_range_ns+0x289/0x2d0
>Modules linked in:
>CPU: 94 UID: 0 PID: 58
Now that kthreads have an infrastructure to handle preferred affinity
against CPU hotplug and housekeeping cpumask, convert RCU exp workers to
use it instead of handling all the constraints by themselves.
Acked-by: Paul E. McKenney
Signed-off-by: Frederic Weisbecker
---
kernel/rcu/tree.c | 105
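As an illustration of the new infrastructure (usage sketch; the thread
function and node are hypothetical):

	struct task_struct *t;

	t = kthread_create(my_worker_fn, NULL, "my_worker");
	if (!IS_ERR(t)) {
		/* Preferred affinity; hotplug/housekeeping handled by kthread core */
		kthread_affine_preferred(t, cpumask_of_node(my_node));
		wake_up_process(t);
	}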
hread names.
Unify the behaviours and convert kthread_create_worker_on_cpu() to
use the printf behaviour of kthread_create_on_cpu().
Signed-off-by: Frederic Weisbecker
---
fs/erofs/zdata.c| 2 +-
include/linux/kthread.h | 21 +++
kernel/kthread.c
Now that kthreads have an infrastructure to handle preferred affinity
against CPU hotplug and housekeeping cpumask, convert RCU boost to use
it instead of handling all the constraints by itself.
Acked-by: Paul E. McKenney
Signed-off-by: Frederic Weisbecker
---
kernel/rcu/tree.c| 27
kthread_bind_mask() or
kthread_affine_preferred() before starting it.
Consolidate the behaviours and introduce kthread_run_worker[_on_cpu]()
that behaves just like kthread_run(). kthread_create_worker[_on_cpu]()
will now only create a kthread worker without starting it.
Signed-off-by: Frederic
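A sketch of the resulting API split (worker names hypothetical):

	/* Create only: the caller may affine the worker before it runs... */
	struct kthread_worker *kw = kthread_create_worker(0, "my_worker");
	if (!IS_ERR(kw)) {
		kthread_affine_preferred(kw->task, cpumask_of_node(0));
		wake_up_process(kw->task);
	}

	/* ...or create and start in one go, just like kthread_run(): */
	struct kthread_worker *kw2 = kthread_run_worker(0, "my_worker2");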
task is woken up) automatically by the scheduler to other housekeepers
within the preferred affinity or, as a last resort, to all
housekeepers from other nodes.
Acked-by: Vlastimil Babka
Signed-off-by: Frederic Weisbecker
---
include/linux/kthread.h | 1 +
kernel/kthread.c
d_mask(), reporting potential misuse of the API.
Upcoming patches will make further use of this facility.
Acked-by: Vlastimil Babka
Signed-off-by: Frederic Weisbecker
---
kernel/kthread.c | 7 +++
1 file changed, 7 insertions(+)
diff --git a/kernel/kthread.c b/kernel/kthread.c
index a5ac612b1609..
disturbed by
unbound kthreads or even detached pinned user tasks.
Make the fallback affinity setting aware of nohz_full.
Suggested-by: Michal Hocko
Signed-off-by: Frederic Weisbecker
---
arch/arm64/include/asm/cpufeature.h | 1 +
arch/arm64/include/asm/mmu_context.h | 2 ++
arch/ar
the
same node or, as a last resort, to all housekeepers from other nodes.
Acked-by: Vlastimil Babka
Signed-off-by: Frederic Weisbecker
---
include/linux/cpuhotplug.h | 1 +
kernel/kthread.c | 106 -
2 files changed, 106 insertions(+), 1 deletio
On Fri, Nov 15, 2024 at 11:01:25AM +0800, Mingcong Bai wrote:
> Hi Frederic,
>
>
>
> > Just in case, Mingcong Bai can you test the following patch without the
> > revert and see if it triggers something?
> >
> > diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> > index 35949ec1f935..b4f8e
erface.
- Remove leftover function declaration
Baruch Siach (1):
doc: rcu: update printed dynticks counter bits
Frederic Weisbecker (1):
Merge branches 'rcu/fixes', 'rcu/nocb', 'rcu/torture', 'rcu/stall' and
'rc
On Mon, Nov 11, 2024 at 01:07:16PM +0530, Neeraj Upadhyay wrote:
>
> > diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
> > index 16865475120b..2605dd234a13 100644
> > --- a/kernel/rcu/tree_nocb.h
> > +++ b/kernel/rcu/tree_nocb.h
> > @@ -891,7 +891,18 @@ static void nocb_cb_wait(str
Cc: Andrii Nakryiko
Cc: Peter Zijlstra
Cc: Kent Overstreet
Cc:
Reviewed-by: Neeraj Upadhyay
Signed-off-by: Frederic Weisbecker
---
kernel/rcu/refscale.c | 37 ++---
1 file changed, 34 insertions(+), 3 deletions(-)
diff --git a/kernel/rcu/refscale.c b/kernel/rcu/
er Zijlstra
Cc: Kent Overstreet
Cc:
Reviewed-by: Neeraj Upadhyay
Signed-off-by: Frederic Weisbecker
---
.../admin-guide/kernel-parameters.txt | 8 +
kernel/rcu/rcutorture.c | 30 ++-
2 files changed, 30 insertions(+), 8 deletions(-)
diff --git
d_mask(), reporting potential misuse of the API.
Upcoming patches will make further use of this facility.
Acked-by: Vlastimil Babka
Signed-off-by: Frederic Weisbecker
---
kernel/kthread.c | 7 +++
1 file changed, 7 insertions(+)
diff --git a/kernel/kthread.c b/kernel/kthread.c
index 9bb36897b6c6..
hread names.
Unify the behaviours and convert kthread_create_worker_on_cpu() to
use the printf behaviour of kthread_create_on_cpu().
Signed-off-by: Frederic Weisbecker
---
fs/erofs/zdata.c| 2 +-
include/linux/kthread.h | 21 +++
kernel/kthread.c
From: "Paul E. McKenney"
The rcu_gp_might_be_stalled() function is no longer used, so this commit
removes it.
Signed-off-by: Paul E. McKenney
Reviewed-by: Joel Fernandes (Google)
Signed-off-by: Frederic Weisbecker
---
include/linux/rcutiny.h | 1 -
include/linux/rcutree.h | 1
is also a drive-by
white-space fixup!
Signed-off-by: Paul E. McKenney
Cc: Alexei Starovoitov
Cc: Andrii Nakryiko
Cc: Peter Zijlstra
Cc: Kent Overstreet
Cc:
Reviewed-by: Neeraj Upadhyay
Signed-off-by: Frederic Weisbecker
---
include/linux/srcu.h | 21 ++---
include/linux/
Hello,
Please find below the RCU stall patches targeted for the upcoming
merge window.
Paul E. McKenney (3):
rcu: Delete unused rcu_gp_might_be_stalled() function
rcu: Stop stall warning from dumping stacks if grace period ends
rcu: Finer-grained grace-period-end checks in rcu_dump_cpu_stac
task is woken up) automatically by the scheduler to other housekeepers
within the preferred affinity or, as a last resort, to all
housekeepers from other nodes.
Acked-by: Vlastimil Babka
Signed-off-by: Frederic Weisbecker
---
include/linux/kthread.h | 1 +
kernel/kthread.c
street
Cc:
Reviewed-by: Neeraj Upadhyay
Signed-off-by: Frederic Weisbecker
---
tools/testing/selftests/rcutorture/configs/rcu/CFLIST | 1 +
tools/testing/selftests/rcutorture/configs/rcu/SRCU-L | 10 ++
.../selftests/rcutorture/configs/rcu/SRCU-L.boot | 3 +++
.../selftests/rcutortu
Now that kthreads have an infrastructure to handle preferred affinity
against CPU hotplug and housekeeping cpumask, convert RCU boost to use
it instead of handling all the constraints by itself.
Acked-by: Paul E. McKenney
Signed-off-by: Frederic Weisbecker
---
kernel/rcu/tree.c| 27
From: "Paul E. McKenney"
Where RCU is watching is where it is OK to invoke rcu_read_lock().
Reported-by: Andrii Nakryiko
Signed-off-by: Paul E. McKenney
Acked-by: Andrii Nakryiko
Reviewed-by: Neeraj Upadhyay
Signed-off-by: Frederic Weisbecker
---
include/linux/srcu.h | 3 +