outgoing CPU
earlier")
Closes: 20241213203739.1519801-1-usamaarif...@gmail.com
Signed-off-by: Frederic Weisbecker
Signed-off-by: Paul E. McKenney
---
include/linux/hrtimer_defs.h | 1 +
kernel/time/hrtimer.c| 92 +---
2 files changed, 75 insertions(+), 18
Le Thu, Jan 16, 2025 at 11:59:48AM +0100, Thomas Gleixner a écrit :
> On Tue, Dec 31 2024 at 18:07, Frederic Weisbecker wrote:
> > hrtimers are migrated away from the dying CPU to any online target at
> > the CPUHP_AP_HRTIMERS_DYING stage in order not to delay bandwidth timers
>
Le Fri, Jan 03, 2025 at 03:27:03PM +, Will Deacon a écrit :
> On Wed, Dec 11, 2024 at 04:40:23PM +0100, Frederic Weisbecker wrote:
> > +const struct cpumask *task_cpu_fallback_mask(struct task_struct *p)
> > +{
> > + if (!static_branch_unlikely(&a
This reverts commit f7345ccc62a4b880cf76458db5f320725f28e400.
swake_up_one_online() has been removed because hrtimers can now assign
a proper online target to hrtimers queued from offline CPUs. Therefore
remove the related hackery.
Reviewed-by: Usama Arif
Signed-off-by: Frederic Weisbecker
It's now OK to perform a wake-up from an offline CPU because the
resulting armed scheduler bandwidth hrtimers are now correctly targeted
by the hrtimer infrastructure.
Remove the obsolete hackery.
Reviewed-by: Usama Arif
Signed-off-by: Frederic Weisbecker
---
kernel/rcu/tree.c
queued from an offline CPU.
This will also allow reverting all of the above disgraceful RCU hacks.
Reported-by: Vlad Poenaru
Reported-by: Usama Arif
Fixes: 5c0930ccaad5 ("hrtimers: Push pending hrtimers away from outgoing CPU
earlier")
Closes: 20241213203739.1519801-1-usamaarif...@gmail.com
Frederic Weisbecker (3):
hrtimers: Force migrate away hrtimers queued after
CPUHP_AP_HRTIMERS_DYING
rcu: Remove swake_up_one_online() bandaid
Revert "rcu/nocb: Fix rcuog wake-up from offline softirq"
include/linux/hrtimer_defs.h | 1 +
kernel/rcu/tr
This reverts commit f7345ccc62a4b880cf76458db5f320725f28e400.
swake_up_one_online() has been removed because hrtimers can now assign
a proper online target to hrtimers queued from offline CPUs. Therefore
remove the related hackery.
Signed-off-by: Frederic Weisbecker
---
kernel/rcu/tree_nocb.h
...@gmail.com
Signed-off-by: Frederic Weisbecker
Signed-off-by: Paul E. McKenney
---
include/linux/hrtimer_defs.h | 1 +
kernel/time/hrtimer.c| 55 +---
2 files changed, 52 insertions(+), 4 deletions(-)
diff --git a/include/linux/hrtimer_defs.h b/incl
It's now OK to perform a wake-up from an offline CPU because the
resulting armed scheduler bandwidth hrtimers are now correctly targeted
by the hrtimer infrastructure.
Remove the obsolete hackery.
Signed-off-by: Frederic Weisbecker
---
kernel/rcu/tree.c
PLUG_CPU=n (folded #ifdeffery by Paul)
_ Remove the unconditional base lock within the IPI when both nohz and
high resolution are off. There is really nothing to do for the IPI in
such a case.
Frederic Weisbecker (3):
hrtimers: Force migrate away hrtimers queued after
CPUHP_AP_HRTIMERS_D
Le Thu, Dec 19, 2024 at 10:00:12PM +0300, Usama Arif a écrit :
> > @@ -1240,6 +1280,12 @@ static int __hrtimer_start_range_ns(struct hrtimer
> > *timer, ktime_t tim,
> >
> > hrtimer_set_expires_range_ns(timer, tim, delta_ns);
> >
> > + if (unlikely(!this_cpu_base->online)) {
> > +
Le Fri, Dec 20, 2024 at 03:19:31PM -0800, Paul E. McKenney a écrit :
> On Thu, Dec 19, 2024 at 09:42:48AM -0800, Paul E. McKenney wrote:
> > On Wed, Dec 18, 2024 at 05:50:05PM +0100, Frederic Weisbecker wrote:
> > > 5c0930ccaad5 ("hrtimers: Push pending hrtimers
arif...@gmail.com
Signed-off-by: Frederic Weisbecker
---
include/linux/hrtimer_defs.h | 1 +
kernel/time/hrtimer.c| 60 +++-
2 files changed, 54 insertions(+), 7 deletions(-)
diff --git a/include/linux/hrtimer_defs.h b/include/linux/hrtimer_defs.h
index c3
onfined to RCU. But not anymore as it is spreading to hotplug code
itself
(https://lore.kernel.org/all/20241213203739.1519801-1-usamaarif...@gmail.com/)
Instead of introducing yet another hack, fix the problem in
hrtimers for everyone.
Frederic Weisbecker (3):
hrtimers: Force migrate awa
Le Thu, Dec 12, 2024 at 10:42:13AM -0800, Paul E. McKenney a écrit :
> From: Frederic Weisbecker
>
> It's more convenient to benefit from the fallthrough feature of
> switch / case to handle the timer state machine. Also a new state is
> about to be added that will take adva
Le Thu, Dec 12, 2024 at 10:42:14AM -0800, Paul E. McKenney a écrit :
> From: Frederic Weisbecker
>
> After a CPU has set itself offline and before it eventually calls
> rcutree_report_cpu_dead(), there are still opportunities for callbacks
> to be enqueued, for example from an
Le Fri, Dec 13, 2024 at 08:33:45PM +, Usama Arif a écrit :
> The following warning is being encountered at boot time:
>
>WARNING: CPU: 94 PID: 588 at kernel/time/hrtimer.c:1086
> hrtimer_start_range_ns+0x289/0x2d0
>Modules linked in:
>CPU: 94 UID: 0 PID: 58
Now that kthreads have an infrastructure to handle preferred affinity
against CPU hotplug and housekeeping cpumask, convert RCU exp workers to
use it instead of handling all the constraints by itself.
Acked-by: Paul E. McKenney
Signed-off-by: Frederic Weisbecker
---
kernel/rcu/tree.c | 105
hread names.
Unify the behaviours and convert kthread_create_worker_on_cpu() to
use the printf behaviour of kthread_create_on_cpu().
Signed-off-by: Frederic Weisbecker
---
fs/erofs/zdata.c| 2 +-
include/linux/kthread.h | 21 +++
kernel/kthread.c
Now that kthreads have an infrastructure to handle preferred affinity
against CPU hotplug and housekeeping cpumask, convert RCU boost to use
it instead of handling all the constraints by itself.
Acked-by: Paul E. McKenney
Signed-off-by: Frederic Weisbecker
---
kernel/rcu/tree.c| 27
kthread_bind_mask() or
kthread_affine_preferred() before starting it.
Consolidate the behaviours and introduce kthread_run_worker[_on_cpu]()
that behaves just like kthread_run(). kthread_create_worker[_on_cpu]()
will now only create a kthread worker without starting it.
Signed-off-by: Frederic
task is woken up) automatically by the scheduler to other housekeepers
within the preferred affinity or, as a last resort, to all
housekeepers from other nodes.
Acked-by: Vlastimil Babka
Signed-off-by: Frederic Weisbecker
---
include/linux/kthread.h | 1 +
kernel/kthread.c
d_mask(), reporting potential misuse of the API.
Upcoming patches will make further use of this facility.
Acked-by: Vlastimil Babka
Signed-off-by: Frederic Weisbecker
---
kernel/kthread.c | 7 +++
1 file changed, 7 insertions(+)
diff --git a/kernel/kthread.c b/kernel/kthread.c
index a5ac612b1609..
disturbed by
unbound kthreads or even detached pinned user tasks.
Make the fallback affinity setting aware of nohz_full.
Suggested-by: Michal Hocko
Signed-off-by: Frederic Weisbecker
---
arch/arm64/include/asm/cpufeature.h | 1 +
arch/arm64/include/asm/mmu_context.h | 2 ++
arch/ar
the
same node or, as a last resort, to all housekeepers from other nodes.
Acked-by: Vlastimil Babka
Signed-off-by: Frederic Weisbecker
---
include/linux/cpuhotplug.h | 1 +
kernel/kthread.c | 106 -
2 files changed, 106 insertions(+), 1 deletio
Le Fri, Nov 15, 2024 at 11:01:25AM +0800, Mingcong Bai a écrit :
> Hi Frederic,
>
>
>
> > Just in case, Mingcong Bai can you test the following patch without the
> > revert and see if it triggers something?
> >
> > diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> > index 35949ec1f935..b4f8e
erface.
- Remove leftover function declaration
Baruch Siach (1):
doc: rcu: update printed dynticks counter bits
Frederic Weisbecker (1):
Merge branches 'rcu/fixes', 'rcu/nocb', 'rcu/torture', 'rcu/stall' and
'rc
Le Mon, Nov 11, 2024 at 01:07:16PM +0530, Neeraj Upadhyay a écrit :
>
> > diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
> > index 16865475120b..2605dd234a13 100644
> > --- a/kernel/rcu/tree_nocb.h
> > +++ b/kernel/rcu/tree_nocb.h
> > @@ -891,7 +891,18 @@ static void nocb_cb_wait(str
Cc: Andrii Nakryiko
Cc: Peter Zijlstra
Cc: Kent Overstreet
Cc:
Reviewed-by: Neeraj Upadhyay
Signed-off-by: Frederic Weisbecker
---
kernel/rcu/refscale.c | 37 ++---
1 file changed, 34 insertions(+), 3 deletions(-)
diff --git a/kernel/rcu/refscale.c b/kernel/rcu/
er Zijlstra
Cc: Kent Overstreet
Cc:
Reviewed-by: Neeraj Upadhyay
Signed-off-by: Frederic Weisbecker
---
.../admin-guide/kernel-parameters.txt | 8 +
kernel/rcu/rcutorture.c | 30 ++-
2 files changed, 30 insertions(+), 8 deletions(-)
diff --git
d_mask(), reporting potential misuse of the API.
Upcoming patches will make further use of this facility.
Acked-by: Vlastimil Babka
Signed-off-by: Frederic Weisbecker
---
kernel/kthread.c | 7 +++
1 file changed, 7 insertions(+)
diff --git a/kernel/kthread.c b/kernel/kthread.c
index 9bb36897b6c6..
From: "Paul E. McKenney"
The rcu_gp_might_be_stalled() function is no longer used, so this commit
removes it.
Signed-off-by: Paul E. McKenney
Reviewed-by: Joel Fernandes (Google)
Signed-off-by: Frederic Weisbecker
---
include/linux/rcutiny.h | 1 -
include/linux/rcutree.h | 1
is also a drive-by
white-space fixup!
Signed-off-by: Paul E. McKenney
Cc: Alexei Starovoitov
Cc: Andrii Nakryiko
Cc: Peter Zijlstra
Cc: Kent Overstreet
Cc:
Reviewed-by: Neeraj Upadhyay
Signed-off-by: Frederic Weisbecker
---
include/linux/srcu.h | 21 ++---
include/linux/
Hello,
Please find below the RCU stall patches targeted for the upcoming
merge window.
Paul E. McKenney (3):
rcu: Delete unused rcu_gp_might_be_stalled() function
rcu: Stop stall warning from dumping stacks if grace period ends
rcu: Finer-grained grace-period-end checks in rcu_dump_cpu_stac
street
Cc:
Reviewed-by: Neeraj Upadhyay
Signed-off-by: Frederic Weisbecker
---
tools/testing/selftests/rcutorture/configs/rcu/CFLIST | 1 +
tools/testing/selftests/rcutorture/configs/rcu/SRCU-L | 10 ++
.../selftests/rcutorture/configs/rcu/SRCU-L.boot | 3 +++
.../selftests/rcutortu
From: "Paul E. McKenney"
Where RCU is watching is where it is OK to invoke rcu_read_lock().
Reported-by: Andrii Nakryiko
Signed-off-by: Paul E. McKenney
Acked-by: Andrii Nakryiko
Reviewed-by: Neeraj Upadhyay
Signed-off-by: Frederic Weisbecker
---
include/linux/srcu.h | 3 +
eviewed-by: Neeraj Upadhyay
Signed-off-by: Frederic Weisbecker
---
Documentation/admin-guide/kernel-parameters.txt | 4 ++--
kernel/rcu/rcutorture.c | 7 +++
2 files changed, 9 insertions(+), 2 deletions(-)
diff --git a/Documentation/admin-guide/kernel-paramet
ay
Signed-off-by: Frederic Weisbecker
---
kernel/rcu/rcutorture.c | 28 ++--
1 file changed, 14 insertions(+), 14 deletions(-)
diff --git a/kernel/rcu/rcutorture.c b/kernel/rcu/rcutorture.c
index bb75dbf5c800..f96ab98f8182 100644
--- a/kernel/rcu/rcutorture.c
+++ b/
: Alexei Starovoitov
Signed-off-by: Paul E. McKenney
Cc: Alexei Starovoitov
Cc: Andrii Nakryiko
Cc: Peter Zijlstra
Cc: Kent Overstreet
Cc:
Reviewed-by: Neeraj Upadhyay
Signed-off-by: Frederic Weisbecker
---
include/linux/srcutree.h | 39 ++
kernel/rc
g feedback. ]
[ paulmck: Apply kernel test robot feedback. ]
Signed-off-by: Paul E. McKenney
Tested-by: kernel test robot
Cc: Alexei Starovoitov
Cc: Andrii Nakryiko
Cc: Peter Zijlstra
Cc: Kent Overstreet
Cc:
Reviewed-by: Neeraj Upadhyay
Signed-off-by: Frederic Weisbecker
---
include/linux/srcu.
by: Neeraj Upadhyay
Signed-off-by: Frederic Weisbecker
---
include/linux/srcu.h | 20
include/linux/srcutree.h | 4
kernel/rcu/srcutree.c| 21 +++--
3 files changed, 23 insertions(+), 22 deletions(-)
diff --git a/include/linux/srcu.h b/include/linux/sr
From: "Paul E. McKenney"
This commit adds some additional usage constraints to the kernel-doc
headers of srcu_read_lock() and srcu_read_lock_nmi_safe().
Suggested-by: Andrii Nakryiko
Signed-off-by: Paul E. McKenney
Reviewed-by: Neeraj Upadhyay
Signed-off-by: Frederic Weisbecker
--
t Overstreet
Cc:
Reviewed-by: Neeraj Upadhyay
Signed-off-by: Frederic Weisbecker
---
kernel/rcu/srcutree.c | 20 ++--
1 file changed, 10 insertions(+), 10 deletions(-)
diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c
index 9774bc500de5..b85da944d794 100644
--- a/kernel/r
n of light-weight
(as in memory-barrier-free) readers.
Signed-off-by: Paul E. McKenney
Cc: Alexei Starovoitov
Cc: Andrii Nakryiko
Cc: Peter Zijlstra
Cc: Kent Overstreet
Cc:
Reviewed-by: Neeraj Upadhyay
Signed-off-by: Frederic Weisbecker
---
kernel/rcu/srcutree.c | 7 ---
1 file
Cc: Peter Zijlstra
Cc: Kent Overstreet
Cc:
Reviewed-by: Neeraj Upadhyay
Signed-off-by: Frederic Weisbecker
---
kernel/rcu/srcutree.c | 14 --
1 file changed, 12 insertions(+), 2 deletions(-)
diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c
index 2fe0abade9c0..5b1a315f7
om what it currently does to why
it does it, this latter being more future-proof.
Signed-off-by: Paul E. McKenney
Cc: Alexei Starovoitov
Cc: Andrii Nakryiko
Cc: Peter Zijlstra
Cc: Kent Overstreet
Cc:
Reviewed-by: Neeraj Upadhyay
Signed-off-by: Frederic Weisbecker
---
kernel/rcu/srcut
more sense to force them to be equal
using BUILD_BUG_ON().
Signed-off-by: Zhen Lei
Signed-off-by: Neeraj Upadhyay
Signed-off-by: Frederic Weisbecker
---
kernel/rcu/srcutree.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c
Hello,
Please find below the SRCU patches targeted for the upcoming
merge window.
Paul E. McKenney (15):
srcu: Rename srcu_might_be_idle() to srcu_should_expedite()
srcu: Introduce srcu_gp_is_expedited() helper function
srcu: Renaming in preparation for additional reader flavor
srcu: Bit
a small number of CPUs are stalling
the current grace period, which means that the ->lock need be acquired
only for a small fraction of the rcu_node structures.
[ paulmck: Apply Dan Carpenter feedback. ]
Signed-off-by: Paul E. McKenney
Reviewed-by: Joel Fernandes (Google)
Signed-off-by: Frederi
-off-by: Paul E. McKenney
Signed-off-by: Frederic Weisbecker
---
kernel/rcu/tree_stall.h | 17 +++--
1 file changed, 11 insertions(+), 6 deletions(-)
diff --git a/kernel/rcu/tree_stall.h b/kernel/rcu/tree_stall.h
index d7cdd535e50b..b530844becf8 100644
--- a/kernel/rcu/tree_stall.h
+
disturbed by
unbound kthreads or even detached pinned user tasks.
Make the fallback affinity setting aware of nohz_full. ARM64 is a
special case and its last resort EL0 32bits capable CPU can be updated
as housekeeping CPUs appear on boot.
Suggested-by: Michal Hocko
Signed-off-by: Frederic
Le Fri, Nov 08, 2024 at 07:14:41AM -0800, Paul E. McKenney a écrit :
> On Fri, Nov 08, 2024 at 02:46:16PM +0100, Frederic Weisbecker wrote:
> > Le Fri, Nov 08, 2024 at 12:29:40AM +0800, Mingcong Bai a écrit :
> > > Hi Frederic,
> > >
> > >
> > &
Le Fri, Nov 08, 2024 at 12:29:40AM +0800, Mingcong Bai a écrit :
> Hi Frederic,
>
>
>
> > Sorry for the lag, I still don't understand how this specific commit
> > can produce this issue. Can you please retry with and without this
> > commit
> > reverted?
>
> Just tested v6.12-rc6 with and witho
Le Thu, Nov 07, 2024 at 10:10:37AM +0100, Thorsten Leemhuis a écrit :
> On 05.11.24 08:17, Mingcong Bai wrote:
> > (CC-ing the laptop's owner so that she might help with further testing...)
> > 在 2024-10-23 18:22,Linux regression tracking (Thorsten Leemhuis) 写道:
> >>
s.
Signed-off-by: Uladzislau Rezki (Sony)
Reviewed-by: Paul E. McKenney
Signed-off-by: Frederic Weisbecker
---
kernel/rcu/rcuscale.c | 2 --
1 file changed, 2 deletions(-)
diff --git a/kernel/rcu/rcuscale.c b/kernel/rcu/rcuscale.c
index de7d511e6be4..1d8bb603c289 100644
--- a/kernel/rcu/rcusca
situation, and notes that all CPUs
have passed through a quiescent state.
Signed-off-by: Paul E. McKenney
Signed-off-by: Frederic Weisbecker
---
kernel/rcu/rcutorture.c | 9 +++--
1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/kernel/rcu/rcutorture.c b/kernel/rcu/rcu
free tests")
Signed-off-by: Uladzislau Rezki (Sony)
Reviewed-by: Paul E. McKenney
Signed-off-by: Frederic Weisbecker
---
kernel/rcu/rcuscale.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/kernel/rcu/rcuscale.c b/kernel/rcu/rcuscale.c
index 6d37596deb1f..de7d511e6
periods in the other rcuscale
guest OSes, and also allows the thermal warm-up period required to obtain
consistent results from one test to the next.
Signed-off-by: Paul E. McKenney
Signed-off-by: Frederic Weisbecker
---
kernel/rcu/refscale.c | 17 +
1 file changed, 17 insertion
ptr() call.
Signed-off-by: Paul E. McKenney
Signed-off-by: Frederic Weisbecker
---
kernel/rcu/refscale.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/rcu/refscale.c b/kernel/rcu/refscale.c
index 0db9db73f57f..25910ebe95c0 100644
--- a/kernel/rcu/refscale.c
+++ b/
prevent it from running
taskset on its guest OSes.
Signed-off-by: Paul E. McKenney
Signed-off-by: Frederic Weisbecker
---
.../rcutorture/bin/kvm-test-1-run-batch.sh| 43 ++-
tools/testing/selftests/rcutorture/bin/kvm.sh | 6 +++
2 files changed, 29 insertions(+), 20 deletion
Hello,
Please find below the RCU NOCB patches targeted for the upcoming
merge window.
Paul E. McKenney (4):
torture: Add --no-affinity parameter to kvm.sh
refscale: Correct affinity check
rcuscale: Add guest_os_delay module parameter
rcutorture: Avoid printing cpu=-1 for no-fault RCU boos
o 0 and spare
the callback enqueue, or rcuo will observe the new callback and keep
rdp->nocb_cb_sleep to false.
Therefore check rdp->nocb_cb_sleep before parking to make sure no
further rcu_barrier() is waiting on the rdp.
Fixes: 1fcb932c8b5c ("rcu/nocb: Simplify (de-)offloading state machin
From: Yue Haibing
Commit 17351eb59abd ("rcu/nocb: Simplify (de-)offloading state machine")
removed the implementation but left the declaration.
Signed-off-by: Yue Haibing
Reviewed-by: Frederic Weisbecker
Reviewed-by: "Paul E. McKenney"
Signed-off-by: Neeraj Upadhyay
Sig
Hello,
Please find below the RCU NOCB patches targeted for the upcoming
merge window.
Yue Haibing (1):
rcu: Remove unused declaration rcu_segcblist_offload()
Zqiang (1):
rcu/nocb: Fix missed RCU barrier on deoffloading
kernel/rcu/rcu_segcblist.h | 1 -
kernel/rcu/tree_nocb.h | 13
Reported-by: syzbot+061d370693bdd99f9...@syzkaller.appspotmail.com
Link: https://lore.kernel.org/lkml/ZxZ68KmHDQYU0yfD@pc636/T/
Fixes: 8fc5494ad5fa ("rcu/kvfree: Move need_offload_krc() out of krcp->lock")
Signed-off-by: Uladzislau Rezki (Sony)
Signed-off-by: Frederic Weisbecker
---
kerne
by: Paul E. McKenney
Signed-off-by: Michal Schmidt
Signed-off-by: Frederic Weisbecker
---
kernel/rcu/srcutiny.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/rcu/srcutiny.c b/kernel/rcu/srcutiny.c
index 549c03336ee9..4dcbf8aa80ff 100644
--- a/kernel/rcu/srcutiny.c
+++ b/
Signed-off-by: Frederic Weisbecker
---
kernel/rcu/tasks.h | 17 +
1 file changed, 1 insertion(+), 16 deletions(-)
diff --git a/kernel/rcu/tasks.h b/kernel/rcu/tasks.h
index dd9730fd44fb..c789d994e7eb 100644
--- a/kernel/rcu/tasks.h
+++ b/kernel/rcu/tasks.h
@@ -1541,22 +1541,7
Cc: Alexei Starovoitov
Cc: Andrii Nakryiko
Cc: Peter Zijlstra
Cc: Kent Overstreet
Cc:
Signed-off-by: Frederic Weisbecker
---
Documentation/admin-guide/kernel-parameters.txt | 5 -
1 file changed, 5 deletions(-)
diff --git a/Documentation/admin-guide/kernel-parameters.txt
b/Documentation/a
From: "Paul E. McKenney"
This commit tests the ->start_poll() and ->start_poll_full() functions
with interrupts disabled, but only for RCU variants setting the
->start_poll_irqsoff flag.
Signed-off-by: Paul E. McKenney
Signed-off-by: Frederic Weisbecker
---
kernel/rc
estriction. However,
there is no need for this restriction, as can be seen in call_rcu(),
which does wakeups when interrupts are disabled.
This commit therefore removes the lockdep assertion and the comments.
Reported-by: Kent Overstreet
Signed-off-by: Paul E. McKenney
Signed-off-by: Frederic
Signed-off-by: Paul E. McKenney
Cc: Peter Zijlstra
Signed-off-by: Neeraj Upadhyay
Signed-off-by: Frederic Weisbecker
---
kernel/rcu/tasks.h | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/kernel/rcu/tasks.h b/kernel/rcu/tasks.h
index 6333f4ccf024..dd9730fd44fb 100644
--- a/
Hello,
Please find below the general RCU fixes targeted for the upcoming
merge window.
Michal Schmidt (1):
rcu/srcutiny: don't return before reenabling preemption
Paul E. McKenney (6):
doc: Add rcuog kthreads to kernel-per-CPU-kthreads.rst
rcu: Allow short-circuiting of synchronize_rcu_tas
From: "Paul E. McKenney"
This commit adds the rcuog kthreads to the list of callback-offloading
kthreads that can be affinitied away from worker CPUs.
Signed-off-by: Paul E. McKenney
Signed-off-by: Frederic Weisbecker
---
Documentation/admin-guide/kernel-per-CPU-kthreads.rst | 2
Le Tue, Oct 15, 2024 at 09:11:05AM -0700, Paul E. McKenney a écrit :
> This patch adds srcu_read_lock_lite() and srcu_read_unlock_lite(), which
> dispense with the read-side smp_mb() but also are restricted to code
> regions that RCU is watching. If a given srcu_struct structure uses
> srcu_read_l
On Thu, Oct 31, 2024 at 10:42:45AM +0100, Thomas Gleixner wrote:
> On Thu, Oct 31 2024 at 14:10, Naresh Kamboju wrote:
> > The QEMU-ARM64 boot has failed with the Linux next-20241031 tag.
> > The boot log shows warnings at clockevents_register_device and followed
> > by rcu_preempt detected stalls.
Hi,
On Thu, Oct 31, 2024 at 02:10:14PM +0530, Naresh Kamboju wrote:
> The QEMU-ARM64 boot has failed with the Linux next-20241031 tag.
> The boot log shows warnings at clockevents_register_device and followed
> by rcu_preempt detected stalls.
>
> However, the system did not proceed far enough to
Le Tue, Oct 29, 2024 at 02:52:31PM +0100, Sebastian Andrzej Siewior a écrit :
> On 2024-10-28 15:01:55 [+0100], Frederic Weisbecker wrote:
> > > diff --git a/include/linux/interrupt.h b/include/linux/interrupt.h
> > > index 457151f9f263d..9637af78087f3 100644
> > > -
this possible and more flexible, drive the offlineable decision
from the cpuhotplug callbacks themselves.
Signed-off-by: Frederic Weisbecker
---
arch/arm64/kernel/cpufeature.c | 32 ++--
1 file changed, 18 insertions(+), 14 deletions(-)
diff --git a/arch/arm64/kernel
Le Mon, Oct 28, 2024 at 04:25:15PM +, Will Deacon a écrit :
> > If nohz_full= isn't used then
> > it's cpu_possible_mask). If there is a housekeeping CPU supporting el0
> > 32bits
> > then it will be disallowed to be ever offlined. But if the first mismatching
> > CPU supporting el0 that pops
Le Thu, Oct 24, 2024 at 01:28:24PM -0700, Paul E. McKenney a écrit :
> On Thu, Oct 24, 2024 at 06:45:58PM +0200, Uladzislau Rezki (Sony) wrote:
> > There are two places where WARN_ON_ONCE() is called two times
> > in the error paths. One which is encapsulated into if() condition
> > and another one
>
> [ junxiao.ch...@intel.com: Ensure ktimersd gets woken up even if a
> softirq is currently served. ]
>
> Reviewed-by: Paul E. McKenney [rcutorture]
> Signed-off-by: Sebastian Andrzej Siewior
Reviewed-by: Frederic Weisbecker
Just a few nits:
> ---
> i
g and let
> softirq be invoked on return from interrupt.
>
> Use __raise_softirq_irqoff() to raise the softirq.
>
> Signed-off-by: Sebastian Andrzej Siewior
Reviewed-by: Frederic Weisbecker
turn from
> interrupt.
>
> Use __raise_softirq_irqoff() to raise the softirq.
>
> Signed-off-by: Sebastian Andrzej Siewior
Reviewed-by: Frederic Weisbecker
Le Wed, Oct 23, 2024 at 12:52:57PM +0200, Sebastian Andrzej Siewior a écrit :
> On 2024-10-23 08:30:18 [+0200], To Frederic Weisbecker wrote:
> > > > > > +void raise_timer_softirq(void)
> > > > > > +{
> > > > > > + unsigned long
Le Tue, Oct 22, 2024 at 12:53:07PM +0200, Uladzislau Rezki (Sony) a écrit :
> KCSAN reports a data race when access the krcp->monitor_work.timer.expires
> variable in the schedule_delayed_monitor_work() function:
>
>
> BUG: KCSAN: data-race in __mod_timer / kvfree_call_rcu
>
> read to 0x8882
Le Wed, Oct 23, 2024 at 08:30:14AM +0200, Sebastian Andrzej Siewior a écrit :
> On 2024-10-23 00:27:34 [+0200], Frederic Weisbecker wrote:
> > > Try again without the "ksoftirqd will collect it all" since this won't
> > > happen since the revert I mentioned.
Hi Thorsten,
First, thanks for letting us know.
Le Wed, Oct 23, 2024 at 10:27:18AM +0200, Linux regression tracking (Thorsten
Leemhuis) a écrit :
> Hi, Thorsten here, the Linux kernel's regression tracker.
>
> Frederic, I noticed a report about a regression in bugzilla.kernel.org
> that appears
Le Tue, Oct 22, 2024 at 05:34:21PM +0200, Sebastian Andrzej Siewior a écrit :
> On 2024-10-22 15:28:56 [+0200], Frederic Weisbecker wrote:
> > > Once the ksoftirqd is marked as pending (or is running) it will collect
> > > all raised softirqs. This in turn means that a sof
ock on cpus_write_lock
>
> The above scenario will not only trigger WARN_ON_ONCE(), but also
> trigger deadlock, this commit therefore check rdp->nocb_cb_sleep
> flags before invoke kthread_parkme(), and the kthread_parkme() is
> not invoke until there are no pending callbacks and
Le Fri, Oct 04, 2024 at 12:17:04PM +0200, Sebastian Andrzej Siewior a écrit :
> A timer/ hrtimer softirq is raised in-IRQ context. With threaded
> interrupts enabled or on PREEMPT_RT this leads to waking the ksoftirqd
> for the processing of the softirq.
It took me some time to understand the actu