run is the value one, that is, single-call runs.
This facility is intended for diagnostic use only, and should be avoided
on production systems.
Cc: John Stultz
Cc: Thomas Gleixner
Cc: Stephen Boyd
Cc: Jonathan Corbet
Cc: Mark Rutland
Cc: Marc Zyngier
Cc: Andi Kleen
[ paulmck: Apply Rik van Riel feedback. ]
le marking will be apparent.
Cc: John Stultz
Cc: Thomas Gleixner
Cc: Stephen Boyd
Cc: Jonathan Corbet
Cc: Mark Rutland
Cc: Marc Zyngier
Cc: Andi Kleen
Reported-by: Chris Mason
[ paulmck: Per-clocksource retries per Neeraj Upadhyay feedback. ]
[ paulmck: Don't reset injectfail per Neeraj Upadhyay feedback. ]
From: "Paul E. McKenney"
Code that checks for clock desynchronization must itself be tested, so
this commit creates a new clocksource.inject_delay_shift_percpu= kernel
boot parameter that adds or subtracts a large value from the check read,
using the specified bit of the CPU ID to determine whether to add or
to subtract.
Cc: John Stultz
Cc: Thomas Gleixner
Cc: Stephen Boyd
Cc: Jonathan Corbet
Cc: Mark Rutland
Cc: Marc Zyngier
Cc: Andi Kleen
Reported-by: Chris Mason
[ paulmck: Apply Randy Dunlap feedback. ]
Signed-off-by: Paul E. McKenney
---
Documentation/admin-gu
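For concreteness, a hedged usage sketch: selecting bit 3 of the CPU ID
(the bit number here is only an example) would make CPUs with that bit
set add the large value while the remaining CPUs subtract it:

	clocksource.inject_delay_shift_percpu=3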
Cc: Jonathan Corbet
Cc: Mark Rutland
Cc: Marc Zyngier
Cc: Andi Kleen
Reported-by: Chris Mason
[ paulmck: Add "static" to clocksource_verify_one_cpu() per kernel test robot
feedback. ]
Signed-off-by: Paul E. McKenney
---
arch/x86/kernel/kvmclock.c | 2 +-
arch/x86/kernel/tsc.c | 3 +-
include/
From: "Paul E. McKenney"
Although smp_call_function() has the advantage of simplicity, using
it to check for cross-CPU clock desynchronization means that any CPU
being slow reduces the sensitivity of the checking across all CPUs.
And it is not uncommon for smp_call_function() latencies to be in t
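A rough sketch of the per-CPU alternative this describes (the wrapper
name and the comparison details are assumptions, not the actual patch;
only clocksource_verify_one_cpu() is named in the series):

	static void clocksource_verify_percpu_sketch(struct clocksource *cs)
	{
		int cpu;

		/* Preemption assumed disabled by the caller. */
		for_each_online_cpu(cpu) {
			if (cpu == smp_processor_id())
				continue;
			/* Read one remote CPU's clock, waiting for completion. */
			smp_call_function_single(cpu, clocksource_verify_one_cpu,
						 cs, 1);
			/* ... then compare that reading against locally
			   captured before/after bounds ... */
		}
	}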
From: "Paul E. McKenney"
A kernel built with CONFIG_RCU_STRICT_GRACE_PERIOD=y needs a quiescent
state to appear very shortly after a CPU has noticed a new grace period.
Placing an RCU reader immediately after this point is ineffective because
this normally happens in softirq context, which acts a
From: "Paul E. McKenney"
The ->rcu_read_unlock_special.b.need_qs field in the task_struct
structure indicates that the RCU core needs a quiescent state from the
corresponding task. The __rcu_read_unlock() function checks this (via
an eventual call to rcu_preempt_deferred_qs_irqrestore()), and if
From: "Paul E. McKenney"
The goal of this series is to increase the probability of tools like
KASAN detecting that an RCU-protected pointer was used outside of its
RCU read-side critical section. Thus far, the approach has been to make
grace periods and callback processing happen faster. Anothe
uested via rcutree.rcu_unlock_delay. This commit also adds a call
to rcu_read_unlock_strict() from the CONFIG_PREEMPT=n instance of
__rcu_read_unlock().
[ paulmck: Fixed bug located by kernel test robot ]
Reported-by: Jann Horn
Signed-off-by: Paul E. McKenney
---
include/linux/rcupdate.h | 7 +++
kernel/
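Assuming the usual CONFIG_PREEMPT=n definition in include/linux/rcupdate.h,
the added call would sit roughly as follows (a sketch, not the exact hunk;
rcu_read_unlock_strict() degenerates to a no-op in non-strict builds):

	static inline void __rcu_read_unlock(void)
	{
		preempt_enable();
		rcu_read_unlock_strict();
	}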
From: "Paul E. McKenney"
If there are idle CPUs, RCU's grace-period kthread will wait several
jiffies before even thinking about polling them. This promotes
efficiency, which is normally a good thing, but when the kernel
has been built with CONFIG_RCU_STRICT_GRACE_PERIOD=y, we care more
about sh
From: "Paul E. McKenney"
Currently, each CPU discovers the end of a given grace period on its
own time, which is again good for efficiency but bad for fast grace
periods, given that it is things like kfree() within the RCU callbacks
that will cause trouble for pointers leaked from RCU read-side c
From: "Paul E. McKenney"
Currently, each CPU discovers the beginning of a given grace period
on its own time, which is again good for efficiency but bad for fast
grace periods. This commit therefore uses on_each_cpu() to IPI each
CPU after grace-period initialization in order to inform each CPU
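The shape of the change is presumably an IPI broadcast just after
grace-period initialization, along these lines (handler name assumed
for illustration):

	if (IS_ENABLED(CONFIG_RCU_STRICT_GRACE_PERIOD))
		on_each_cpu(rcu_strict_gp_boundary, NULL, 0);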
From: "Paul E. McKenney"
The rcu_preempt_deferred_qs_irqrestore() function is invoked at
the end of an RCU read-side critical section (for example, directly
from rcu_read_unlock()) and, if .need_qs is set, invokes rcu_qs() to
report the new quiescent state. This works, except that rcu_qs() only
From: "Paul E. McKenney"
The value of DEFAULT_RCU_BLIMIT is normally set to 10, the idea being to
avoid needless response-time degradation due to RCU callback invocation.
However, when CONFIG_RCU_STRICT_GRACE_PERIOD=y it is better to avoid
throttling callback execution in order to better detect p
From: "Paul E. McKenney"
Because strict RCU grace periods will complete more quickly, they will
experience greater lock contention on each leaf rcu_node structure's
->lock. This commit therefore reduces the leaf fanout in order to reduce
this lock contention.
Note that this also has the effect
From: "Paul E. McKenney"
A given CPU normally notes a new grace period during one RCU_SOFTIRQ,
but avoids reporting the corresponding quiescent state until some later
RCU_SOFTIRQ. This leisurely approach improves efficiency by increasing
the number of update requests served by each grace period,
From: "Paul E. McKenney"
People running automated tests have asked for a way to make RCU minimize
grace-period duration in order to increase the probability of KASAN
detecting a pointer being improperly leaked from an RCU read-side critical
section, for example, like this:
rcu_read_lock(
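The example is cut off above; the pattern presumably being caught is of
this general form, with the pointer leaking past the read-side critical
section (do_something_with() is a stand-in name):

	rcu_read_lock();
	p = rcu_dereference(gp);
	rcu_read_unlock();
	do_something_with(p); /* BUG: p may reference freed memory. */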
From: "Paul E. McKenney"
Although smp_call_function() has the advantage of simplicity, using
it to check for cross-CPU clock desynchronization means that any CPU
being slow reduces the sensitivity of the checking across all CPUs.
And it is not uncommon for smp_call_function() latencies to be in t
athan Corbet
Cc: Mark Rutland
Cc: Marc Zyngier
Reported-by: Chris Mason
[ paulmck: Add "static" to clocksource_verify_one_cpu() per kernel test robot
feedback. ]
Signed-off-by: Paul E. McKenney
---
arch/x86/kernel/kvmclock.c | 2 +-
arch/x86/kernel/tsc.c | 3 +-
include/
From: "Paul E. McKenney"
Code that checks for clock desynchronization must itself be tested, so
this commit creates a new clocksource.inject_delay_shift_percpu= kernel
boot parameter that adds or subtracts a large value from the check read,
using the specified bit of the CPU ID to determine wheth
le marking will be apparent.
Cc: John Stultz
Cc: Thomas Gleixner
Cc: Stephen Boyd
Cc: Jonathan Corbet
Cc: Mark Rutland
Cc: Marc Zyngier
Reported-by: Chris Mason
[ paulmck: Per-clocksource retries per Neeraj Upadhyay feedback. ]
[ paulmck: Don't reset injectfail per Neeraj Upadhyay fee
run is the value one, that is single-call runs.
This facility is intended for diagnostic use only, and should be avoided
on production systems.
Cc: John Stultz
Cc: Thomas Gleixner
Cc: Stephen Boyd
Cc: Jonathan Corbet
Cc: Mark Rutland
Cc: Marc Zyngier
[ paulmck: Apply Rik van Riel feedback.
ing of the allocation-time stack trace.
Cc: Christoph Lameter
Cc: Pekka Enberg
Cc: David Rientjes
Cc: Joonsoo Kim
Cc: Andrew Morton
Cc:
Reported-by: Andrii Nakryiko
[ paulmck: Convert to printing and change names per Joonsoo Kim. ]
[ paulmck: Move slab definition per Stephen Rothwell and kbuild
From: "Paul E. McKenney"
The debug-object double-free checks in __call_rcu() print out the
RCU callback function, which is usually sufficient to track down the
double free. However, all uses of things like queue_rcu_work() will
have the same RCU callback function (rcu_work_rcufn() in this case),
From: "Paul E. McKenney"
This commit makes mem_dump_obj() call out NULL and zero-sized pointers
specially instead of classifying them as non-paged memory.
Cc: Christoph Lameter
Cc: Pekka Enberg
Cc: David Rientjes
Cc: Joonsoo Kim
Cc: Andrew Morton
Cc:
Reported-by: Andrii Nakryiko
Acked-by:
From: "Paul E. McKenney"
This commit adds vmalloc() support to mem_dump_obj(). Note that the
vmalloc_dump_obj() function combines the checking and dumping, in
contrast with the split between kmem_valid_obj() and kmem_dump_obj().
The reason for the difference is that the checking in the vmalloc()
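In caller-side terms, the split versus fused shapes would look roughly
like this (assumed control flow, not the exact mem_dump_obj() body):

	if (kmem_valid_obj(object))		/* slab: check first ... */
		kmem_dump_obj(object);		/* ... then dump */
	else if (vmalloc_dump_obj(object))
		;				/* vmalloc: checked and dumped in one call */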
From: "Paul E. McKenney"
This commit adds the starting address and number of pages to the vmalloc()
information dumped by way of vmalloc_dump_obj().
Cc: Andrew Morton
Cc: Joonsoo Kim
Cc:
Reported-by: Andrii Nakryiko
Suggested-by: Vlastimil Babka
Signed-off-by: Paul E. McKenney
---
mm/vmal
From: "Paul E. McKenney"
Reference-count underflow for percpu_ref is detected in the RCU callback
percpu_ref_switch_to_atomic_rcu(), and the resulting warning does not
print anything allowing easy identification of which percpu_ref use
case is underflowing. This is of course not normally a probl
From: "Paul E. McKenney"
This commit makes mem_dump_obj() call out NULL and zero-sized pointers
specially instead of classifying them as non-paged memory.
Cc: Christoph Lameter
Cc: Pekka Enberg
Cc: David Rientjes
Cc: Joonsoo Kim
Cc: Andrew Morton
Cc:
Reported-by: Andrii Nakryiko
Signed-of
From: "Paul E. McKenney"
The debug-object double-free checks in __call_rcu() print out the
RCU callback function, which is usually sufficient to track down the
double free. However, all uses of things like queue_rcu_work() will
have the same RCU callback function (rcu_work_rcufn() in this case),
From: "Paul E. McKenney"
Reference-count underflow for percpu_ref is detected in the RCU callback
percpu_ref_switch_to_atomic_rcu(), and the resulting warning does not
print anything allowing easy identification of which percpu_ref use
case is underflowing. This is of course not normally a probl
From: "Paul E. McKenney"
This commit adds vmalloc() support to mem_dump_obj(). Note that the
vmalloc_dump_obj() function combines the checking and dumping, in
contrast with the split between kmem_valid_obj() and kmem_dump_obj().
The reason for the difference is that the checking in the vmalloc()
ing of the allocation-time stack trace.
Cc: Christoph Lameter
Cc: Pekka Enberg
Cc: David Rientjes
Cc: Joonsoo Kim
Cc: Andrew Morton
Cc:
Reported-by: Andrii Nakryiko
[ paulmck: Convert to printing and change names per Joonsoo Kim. ]
[ paulmck: Move slab definition per Stephen Rothwell and kbuild
From: "Paul E. McKenney"
The priority level of the rcuo kthreads is the system administrator's
responsibility, but kernels that priority-boost RCU readers probably need
the rcuo kthreads running at the rcutree.kthread_prio level. This commit
therefore sets these kthreads to that priority level a
From: "Paul E. McKenney"
Currently, rcutorture refuses to test RCU priority boosting in
CONFIG_HOTPLUG_CPU=y kernels, which are the only kind normally built on
x86 these days. This commit therefore updates rcutorture's tests of RCU
priority boosting to make them safe for CPU hotplug. However, t
From: "Paul E. McKenney"
Historically, a task that has been subjected to RCU priority boosting is
deboosted at rcu_read_unlock() time. However, with the advent of deferred
quiescent states, if the outermost rcu_read_unlock() was invoked with
either bottom halves, interrupts, or preemption disabl
From: "Paul E. McKenney"
TREE03 tests RCU priority boosting, which is a real-time feature.
It would also be good if it tested something closer to what is
actually used by the real-time folks. This commit therefore adds
tree.use_softirq=0 to the TREE03 kernel boot parameters in TREE03.boot.
Sign
From: Akira Yokosawa
The hlist_nulls_for_each_entry_rcu() docbook header references the
atomic_ops.rst file, which was removed in commit f0400a77ebdc ("atomic:
Delete obsolete documentation"). This commit therefore substitutes a
section in memory-barriers.txt discussing the use of barrier() in l
From: Frederic Weisbecker
Cc: Rafael J. Wysocki
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Cc: Ingo Molnar
Signed-off-by: Frederic Weisbecker
Signed-off-by: Paul E. McKenney
---
kernel/rcu/tree.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index da6f
From: Zhouyi Zhou
In rcu_nmi_enter(), there is an erroneous instrumentation_end() in the
second branch of the "if" statement. Oddly enough, "objtool check -f
vmlinux.o" fails to complain because it is unable to correctly cover
all cases. Instead, objtool visits the third branch first, which mar
From: Neeraj Upadhyay
The condition in the trace_rcu_grace_period() in rcutree_dying_cpu() is
backwards, so that it uses the string "cpuofl" when the offline CPU is
blocking the current grace period and "cpuofl-bgp" otherwise. Given that
the "-bgp" stands for "blocking grace period", this is at
From: Mauro Carvalho Chehab
After commit 5130b8fd0690 ("rcu: Introduce kfree_rcu() single-argument macro"),
kernel-doc now emits two warnings:
./include/linux/rcupdate.h:884: warning: Excess function parameter
'ptr' description in 'kfree_rcu'
./include/linux/rcupdate.h:884: warn
From: "Paul E. McKenney"
After interrupts have been enabled at boot but before some random point
in early_initcall() processing, softirq processing is unreliable.
If softirq sees a need to push softirq-handler invocation to ksoftirqd
during this time, then those handlers can be delayed until the ksoft
From: Sangmoon Kim
This commit adds a trace event which allows tracing the beginnings of RCU
CPU stall warnings on systems where sysctl_panic_on_rcu_stall is disabled.
The first parameter is the name of RCU flavor like other trace events.
The second parameter indicates whether this is a stall of
From: "Paul E. McKenney"
Because preemptible RCU's __rcu_read_unlock() is an external function,
the rough equivalent of an implicit barrier() is inserted by the compiler.
Except that there is a direct call to __rcu_read_unlock() in that same
file, and compilers are getting to the point where they
From: "Paul E. McKenney"
This commit replaces "Steve" with his real name, which is "Stephen".
Reported-by: Stephen Hemminger
Signed-off-by: Paul E. McKenney
---
Documentation/RCU/RTFP.txt | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/Documentation/RCU/RTFP.txt b/Docu
au Rezki
Cc: Peter Zijlstra
Cc: Thomas Gleixner
[ paulmck: Remove unneeded check per Sebastian Siewior feedback. ]
Signed-off-by: Paul E. McKenney
---
kernel/softirq.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/softirq.c b/kernel/softirq.c
index 9908ec4a..bad14ca 10
From: "Uladzislau Rezki (Sony)"
The single-argument variant of kfree_rcu() is currently not
tested by any member of the rcutorture test suite. This
commit therefore adds rcuscale code to test it. This
testing is controlled by two new boolean module parameters,
kfree_rcu_test_single and kfree_rcu
From: "Paul E. McKenney"
This commit applies the __GFP_NOMEMALLOC gfp flag to memory allocations
carried out by the single-argument variant of kvfree_rcu(), thus
preventing this can-sleep code path from dipping into the emergency reserves.
Acked-by: Michal Hocko
Suggested-by: Michal Hocko
Signed
From: "Uladzislau Rezki (Sony)"
__GFP_RETRY_MAYFAIL can spend quite a bit of time reclaiming, and this
can be wasted effort given that there is a fallback code path in case
memory allocation fails.
__GFP_NORETRY does perform some light-weight reclaim, but it will fail
under OOM conditions, allow
From: "Uladzislau Rezki (Sony)"
Running an rcuscale stress suite can lead to an "Out of memory" condition
on the system. This can happen under high memory pressure with a small amount
of physical memory.
For example, a KVM test configuration with 64 CPUs and 512 megabytes
can result in OOM when running rcuscal
From: "Paul E. McKenney"
The krc_this_cpu_unlock() function does a raw_spin_unlock() immediately
followed by a local_irq_restore(). This commit saves a line of code by
merging them into a raw_spin_unlock_irqrestore(). This transformation
also reduces scheduling latency because raw_spin_unlock_i
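The transformation itself is the classic pairing shown below (the
krcp->lock field name is assumed here):

	/* Before: */
	raw_spin_unlock(&krcp->lock);
	local_irq_restore(flags);

	/* After: */
	raw_spin_unlock_irqrestore(&krcp->lock, flags);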
u(), given that the double-argument variant cannot directly
invoke the allocator.
[ paulmck: Add add_ptr_to_bulk_krc_lock header comment per Michal Hocko. ]
Signed-off-by: Uladzislau Rezki (Sony)
Signed-off-by: Paul E. McKenney
---
kernel/rcu/tree.c | 42 ++-
From: Paul Gortmaker
There are inputs to bitmap_parselist() that would probably never
be entered manually by a person, but might result from some kind of
automated input generator. Things like ranges of length 1, or group
lengths longer than nbits, overlaps, or offsets of zero.
Adding these tes
From: Paul Gortmaker
While this is done for all bitmaps, the original use case in mind was
for CPU masks and cpulist_parse() as described below.
It seems that a common configuration is to use the 1st couple cores for
housekeeping tasks. This tends to leave the remaining ones to form a
pool of s
From: Paul Gortmaker
With the core bitmap support now accepting "N" as a placeholder for
the end of the bitmap, "all" can be represented as "0-N" and has the
advantage of not being specific to RCU (or any other subsystem).
So deprecate the use of "all" by removing documentation references
to it.
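A kernel command line previously using "all" would thus migrate as in
this example (parameter chosen for illustration):

	rcu_nocbs=all	# deprecated, RCU-specific spelling
	rcu_nocbs=0-N	# generic bitmap spelling, same meaning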
From: "Paul E. McKenney"
This commit uses the shiny new "all" and "N" cpumask options to decouple
the "nohz_full" and "rcu_nocbs" kernel boot parameters in the TREE04.boot
and TREE08.boot files from the CONFIG_NR_CPUS options in the TREE04 and
TREE08 files.
Reported-by: Paul Gortmaker
Signed-of
From: Paul Gortmaker
These are copies of existing tests, with just 31 --> N. This ensures
the recently added "N" alias transparently works in any normally
numeric fields of a region specification.
Cc: Yury Norov
Cc: Rasmus Villemoes
Cc: Andy Shevchenko
Acked-by: Yury Norov
Signed-off-by: Pa
From: Paul Gortmaker
This block of tests was meant to find/flag incorrect use of the ":"
and "/" separators (syntax errors) and invalid (zero) group len.
However they were specified with an 8 bit width and 32 bit operations,
so they really contained two errors (EINVAL and ERANGE).
Promote them
From: Paul Gortmaker
Add tests that specify a valid range, but one that is outside the
width of the bitmap to which it is to be applied. These should
trigger an -ERANGE response from the code.
Cc: Yury Norov
Cc: Rasmus Villemoes
Cc: Andy Shevchenko
Acked-by: Yury Norov
Reviewed-by: Andy
From: Paul Gortmaker
This will reduce parameter passing and enable using nbits as part
of future dynamic region parameter parsing.
Cc: Yury Norov
Cc: Rasmus Villemoes
Cc: Andy Shevchenko
Suggested-by: Yury Norov
Acked-by: Yury Norov
Reviewed-by: Andy Shevchenko
Signed-off-by: Paul Gortmake
From: Paul Gortmaker
It makes sense to do all the checks in check_region() and not 1/2
in check_region and 1/2 in set_region.
Since set_region is called immediately after check_region, the net
effect on runtime is zero, but it gets rid of an if (...) return...
Cc: Yury Norov
Cc: Rasmus Villemo
From: Frederic Weisbecker
Enqueuing a local timer after the tick has been stopped will result in
the timer being ignored until the next random interrupt.
Perform sanity checks to report these situations.
Cc: Peter Zijlstra
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Rafael J. Wysocki
Signed-off-
From: Frederic Weisbecker
Provide CONFIG_PROVE_RCU sanity checks to ensure we are always reading
the offloaded state of an rdp in a safe and stable way and prevent
its value from being changed under us. We must either hold the barrier mutex,
the cpu-hotplug lock (read or write) or the nocb lock.
From: Frederic Weisbecker
The nocb_cb_wait() function first sets the rdp->nocb_cb_sleep flag to
true after invoking the callbacks, and then sets it back to false if
it finds more callbacks that are ready to invoke.
This is confusing and will become unsafe if this flag is ever read
locklessly.
From: Frederic Weisbecker
This commit explains why softirqs need to be disabled while invoking
callbacks, even when callback processing has been offloaded. After
all, invoking callbacks concurrently is one thing, but concurrently
invoking the same callback is quite another.
Reported-by: Boqun F
From: Frederic Weisbecker
At the start of a CPU-hotplug operation, the incoming CPU's callback
list can be in a number of states:
1. Disabled and empty. This is the case when the boot CPU has
not invoked call_rcu(), when a non-boot CPU first comes online,
and when a non-off
From: Frederic Weisbecker
It makes no sense to de-offload an offline CPU because that CPU will never
invoke any remaining callbacks. It also makes little sense to offload an
offline CPU because any pending RCU callbacks were migrated when that CPU
went offline. Yes, it is in theory possible to
From: Frederic Weisbecker
The name nocb_gp_update_state() is unenlightening, so this commit changes
it to nocb_gp_update_state_deoffloading(). This function now does what
its name says, updates state and returns true if the CPU corresponding to
the specified rcu_data structure is in the process
From: Frederic Weisbecker
Currently, the bypass is flushed at the very last moment in the
deoffloading procedure. However, this approach leads to a larger state
space than would be preferred. This commit therefore disables the
bypass as soon as the deoffloading procedure begins, then flushes it
From: Frederic Weisbecker
Those tracing calls don't need to be under ->nocb_lock. This commit
therefore moves them outside of that lock.
Signed-off-by: Frederic Weisbecker
Cc: Josh Triplett
Cc: Lai Jiangshan
Cc: Joel Fernandes
Cc: Neeraj Upadhyay
Cc: Boqun Feng
Signed-off-by: Paul E. McKe
From: Jiapeng Chong
RCU triggers the following sparse warning:
kernel/rcu/tree_plugin.h:1497:5: warning: symbol
'nocb_nobypass_lim_per_jiffy' was not declared. Should it be static?
This commit therefore makes this variable static.
Reported-by: Abaci Robot
Reviewed-by: Frederic Weisbecker
Signed-off-by:
From: Frederic Weisbecker
This sequence of events can lead to a failure to requeue a CPU's
->nocb_timer:
1. There are no callbacks queued for any CPU covered by CPU 0-2's
->nocb_gp_kthread. Note that ->nocb_gp_kthread is associated
with CPU 0.
2. CPU 1 enqueues its fi
From: Frederic Weisbecker
This commit removes a stale comment claiming that the cblist must be
empty before changing the offloading state. This claim was correct back
when the offloaded state was defined exclusively at boot.
Reported-by: Paul E. McKenney
Signed-off-by: Frederic Weisbecker
Cc:
From: "Paul E. McKenney"
There is a need for a non-blocking polling interface for RCU grace
periods, so this commit supplies start_poll_synchronize_rcu() and
poll_state_synchronize_rcu() for this purpose. Note that the existing
get_state_synchronize_rcu() may be used if future grace periods are
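A hedged usage sketch of the new pair (cookie type per the description;
error handling and the surrounding update-side logic elided):

	unsigned long cookie;

	cookie = start_poll_synchronize_rcu();	/* start a GP if needed */
	/* ... other work ... */
	if (poll_state_synchronize_rcu(cookie))
		kfree(old_p);	/* a full grace period has elapsed */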
From: "Paul E. McKenney"
This commit causes rcutorture to test the new start_poll_synchronize_rcu()
and poll_state_synchronize_rcu() functions. Because of the difficulty of
determining the nature of a synchronous RCU grace period (expedited or not),
the test that insisted that poll_state_synchronize_rc
From: "Paul E. McKenney"
There is a need for a non-blocking polling interface for RCU grace
periods, so this commit supplies start_poll_synchronize_rcu() and
poll_state_synchronize_rcu() for this purpose. Note that the existing
get_state_synchronize_rcu() may be used if future grace periods are
From: "Paul E. McKenney"
In kernels built with CONFIG_RCU_STRICT_GRACE_PERIOD=y, every grace
period is an expedited grace period. However, rcu_read_unlock_special()
does not treat them that way, instead allowing the deferred quiescent
state to be reported whenever. This commit therefore adds a
From: "Paul E. McKenney"
Historically, a task that has been subjected to RCU priority boosting is
deboosted at rcu_read_unlock() time. However, with the advent of deferred
quiescent states, if the outermost rcu_read_unlock() was invoked with
either bottom halves, interrupts, or preemption disabl
From: "Paul E. McKenney"
Currently, rcutorture refuses to test RCU priority boosting in
CONFIG_HOTPLUG_CPU=y kernels, which are the only kind normally built on
x86 these days. This commit therefore updates rcutorture's tests of RCU
priority boosting to make them safe for CPU hotplug. However, t
From: "Paul E. McKenney"
TREE03 tests RCU priority boosting, which is a real-time feature.
It would also be good if it tested something closer to what is
actually used by the real-time folks. This commit therefore adds
tree.use_softirq=0 to the TREE03 kernel boot parameters in TREE03.boot.
Sign
From: Lukas Bulwahn
The command 'find ./kernel/rcu/ | xargs ./scripts/kernel-doc -none'
reported an issue with the kernel-doc of struct rcu_tasks.
This commit rectifies the kernel-doc, such that no issues remain for
./kernel/rcu/.
Signed-off-by: Lukas Bulwahn
Signed-off-by: Paul E. McKenney
-
y: Mathieu Desnoyers
[ paulmck: Fix commit log per Mathieu Desnoyers feedback. ]
Signed-off-by: Paul E. McKenney
---
kernel/rcu/tasks.h | 36
1 file changed, 36 insertions(+)
diff --git a/kernel/rcu/tasks.h b/kernel/rcu/tasks.h
index 17c8ebe..f818357 10
From: Stephen Zhang
This commit replaces a hard-coded "rcu_torture_stall" string in a
pr_alert() format with "%s" and __func__.
Signed-off-by: Stephen Zhang
Signed-off-by: Paul E. McKenney
---
kernel/rcu/rcutorture.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/ke
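The hunks are presumably of this shape (strings illustrative only):

	- pr_alert("rcu_torture_stall begin.\n");
	+ pr_alert("%s begin.\n", __func__);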
From: Stephen Zhang
This commit replaces a hard-coded "torture_init_begin" string in
a pr_alert() format with "%s" and __func__.
Signed-off-by: Stephen Zhang
Signed-off-by: Paul E. McKenney
---
kernel/torture.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/kernel/t
From: "Paul E. McKenney"
If the build fails when running multiple instances of a given rcutorture
scenario, for example, using the kvm.sh --configs "8*RUDE01" argument,
the build will be rerun an additional seven times. This is in some sense
correct, but it can waste significant time. This comm
From: "Paul E. McKenney"
The current jitter.sh script expects cpumask bits to fit into whatever
the awk interpreter uses for an integer, which clearly does not hold for
even medium-sized systems these days. This means that on a large system,
only the first 32 or 64 CPUs (depending) are subjected
From: "Paul E. McKenney"
Yes, I do recall a time when 512MB of memory was a lot of mass storage,
much less main memory, but the rcuscale kvfree_rcu() testing invoked by
torture.sh can sometimes exceed it on large systems, resulting in OOM.
This commit therefore causes torture.sh to pass the "--me
From: "Paul E. McKenney"
In some environments, the torture-testing use of virtualization is
inconvenient. In such cases, the modprobe and rmmod commands may be used
to do torture testing, but significant setup is required to build, boot,
and modprobe a kernel so as to match a given torture-test
From: "Paul E. McKenney"
Given large numbers of threads, the quantity of torture-test output is
sufficient to sometimes result in RCU CPU stall warnings. The probability
of these stall warnings was greatly reduced by batching the output,
but the warnings were not eliminated. However, the actual
From: "Paul E. McKenney"
The testid.txt file was intended for occasional in extremis use, but
now that the new "bare-metal" file references it, it might see more use.
This commit therefore labels sections of output and adds spacing to make
it easier to see what needs to be done to make a bare-met
From: "Paul E. McKenney"
This commit records the process IDs of the kvm-test-1-run.sh and
kvm-test-1-run-qemu.sh scripts to ease monitoring of remotely running
instances of these scripts.
Signed-off-by: Paul E. McKenney
---
tools/testing/selftests/rcutorture/bin/kvm-test-1-run-qemu.sh | 2 ++
From: "Paul E. McKenney"
Given large numbers of threads, the quantity of torture-test output is
sufficient to sometimes result in RCU CPU stall warnings. The probability
of these stall warnings was greatly reduced by batching the output,
but the warnings were not eliminated. However, the actual
From: "Paul E. McKenney"
Currently the bN.ready and bN.wait files are placed in the
rcutorture directory, which really is not at all a good place
for run-specific files. This commit therefore renames these
files to build.ready and build.wait and then moves them into the
scenario directories with
From: "Paul E. McKenney"
Remote rcutorture testing requires that jitter.sh continue to be
invoked from the generated script for local runs, but that it instead
be invoked on the remote system for distributed runs. This argues
for common jitterstart and jitterstop scripts. But it would be good
f