Viro
Signed-off-by: Boqun Feng
---
drivers/staging/lustre/lustre/llite/dir.c | 60 ++
.../staging/lustre/lustre/llite/llite_internal.h | 2 +-
drivers/staging/lustre/lustre/llite/namei.c | 2 +-
3 files changed, 18 insertions(+), 46 deletions(-)
diff --
kernel file name copy from userland, it's better to use
getname and putname if possible.
To be able to use these functions all over the kernel, the symbols 'getname'
and 'putname' are exported, and comments on their behavior and
constraints are added.
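As a rough illustration (not part of the patch), this is what a caller using the
exported getname()/putname() pair could look like; the function name below is
hypothetical:

#include <linux/fs.h>	/* struct filename, getname(), putname() */
#include <linux/err.h>

/* Hypothetical caller: copy a pathname from userland via getname()/putname()
 * instead of open-coding __getname() + strncpy_from_user(). */
static int example_handle_user_path(const char __user *upath)
{
	struct filename *name;

	name = getname(upath);	/* handles -EFAULT and -ENAMETOOLONG for us */
	if (IS_ERR(name))
		return PTR_ERR(name);

	/* ... use name->name as the NUL-terminated kernel copy ... */

	putname(name);		/* release the name */
	return 0;
}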
Suggested-by: Al Viro
do path name lookups.
OK, good point. Thank you!
Regards,
Boqun Feng
Hi Al,
On Sun, Apr 12, 2015 at 02:13:18AM +0100, Al Viro wrote:
>
> BTW, looking at the __getname() callers... Lustre one sure as hell looks
> bogus:
> char *tmp = __getname();
>
> if (!tmp)
> return ERR_PTR(-ENOMEM);
>
> len = strncpy_from_user(tmp, fil
On Fri, Apr 17, 2015 at 08:35:30PM +0800, Boqun Feng wrote:
> Hi Al,
>
> On Sun, Apr 12, 2015 at 02:13:18AM +0100, Al Viro wrote:
> >
> > BTW, looking at the __getname() callers... Lustre one sure as hell looks
> > bogus:
> > char *tmp = __get
Hi Al,
On Sun, Apr 12, 2015 at 12:56:55AM +0100, Al Viro wrote:
> On Tue, Apr 07, 2015 at 04:38:26PM +0800, Boqun Feng wrote:
> > Ping again...
>
> What exactly does it buy us? You need a pathname just a bit under 4Kb, which,
> with all due respect, is an extremely rare case.
t,
> no matter what.
Thank you for your response. As long as the actual size of the result is not
a power of 2, the problem will not happen. (Maybe add a comment before
struct filename.)
Regards,
Boqun Feng
Hi Yuyang,
We have tested your V7 patchset as follows:
On Intel(R) Xeon(R) CPU X5690 (12 cores), run 12 stress and 6 dbench.
Results show that the usage of some CPUs is sometimes less than 50%.
We would like to test your V8 patchset, but I can neither find it in a
lkml archive, nor in my lkml subsc
On Wed, Jun 17, 2015 at 11:06:50AM +0800, Boqun Feng wrote:
> Hi Yuyang,
>
> I've run the test as follows on tip/master without and with your
> patchset:
>
> On a 12-core system (Intel(R) Xeon(R) CPU X5690 @ 3.47GHz)
> run stress --cpu 12
> run dbench 1
Sorry, I for
Hi Yuyang,
On Wed, Jun 17, 2015 at 11:11:01AM +0800, Yuyang Du wrote:
> Hi,
>
> The sched_debug is informative, lets first give it some analysis.
>
> The workload is 12 CPU hogging tasks (always runnable) and 1 dbench
> task doing fs ops (70% runnable) running at the same time.
>
> Actually, th
Hi Yuyang,
On Tue, Jun 16, 2015 at 03:26:05AM +0800, Yuyang Du wrote:
> @@ -5977,36 +5786,6 @@ static void attach_tasks(struct lb_env *env)
> }
>
> #ifdef CONFIG_FAIR_GROUP_SCHED
> -/*
> - * update tg->load_weight by folding this cpu's load_avg
> - */
> -static void __update_blocked_averages_c
Hi Yuyang,
On Fri, Jun 19, 2015 at 07:05:54AM +0800, Yuyang Du wrote:
> On Fri, Jun 19, 2015 at 02:00:38PM +0800, Boqun Feng wrote:
> > However, update_cfs_rq_load_avg() only updates cfs_rq->avg, the change
> > won't be contributed or aggregated to cfs_rq's parent in
Hi Yuyang,
On Fri, Jun 19, 2015 at 11:11:16AM +0800, Yuyang Du wrote:
> On Fri, Jun 19, 2015 at 03:57:24PM +0800, Boqun Feng wrote:
> > >
> > > This rewrite patch does not NEED to aggregate entity's load to cfs_rq,
> > > but rather directly update the cfs_r
On Wed, Feb 28, 2018 at 03:13:54PM -0500, Alan Stern wrote:
> This patch reorganizes the definition of rb in the Linux Kernel Memory
> Consistency Model. The relation is now expressed in terms of
> rcu-fence, which consists of a sequence of gp and rscs links separated
> by rcu-link links, in which
On Wed, Feb 28, 2018 at 08:49:37PM -0800, Paul E. McKenney wrote:
> On Thu, Mar 01, 2018 at 09:55:31AM +0800, Boqun Feng wrote:
> > On Wed, Feb 28, 2018 at 03:13:54PM -0500, Alan Stern wrote:
> > > This patch reorganizes the definition of rb in the Linux Kernel Memory
> > &
On Tue, Mar 27, 2018 at 12:05:23PM -0400, Mathieu Desnoyers wrote:
[...]
> Changes since v11:
>
> - Replace task struct rseq_preempt, rseq_signal, and rseq_migrate
> bool by u32 rseq_event_mask.
[...]
> @@ -979,6 +980,17 @@ struct task_struct {
> unsigned long numa_pages_
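(Not part of the quoted series, but a self-contained sketch of the bool-to-bitmask
pattern being described above; names are modeled on the changelog, not the exact
kernel definitions.)

#include <stdbool.h>
#include <stdint.h>

/* One u32 event mask replaces three separate bool fields. */
enum {
	EXAMPLE_RSEQ_EVENT_PREEMPT_BIT	= 0,
	EXAMPLE_RSEQ_EVENT_SIGNAL_BIT	= 1,
	EXAMPLE_RSEQ_EVENT_MIGRATE_BIT	= 2,
};

struct example_task {
	uint32_t rseq_event_mask;	/* replaces the three bools */
};

static inline void example_set_event(struct example_task *t, unsigned int bit)
{
	t->rseq_event_mask |= 1u << bit;
}

static inline bool example_test_and_clear_events(struct example_task *t)
{
	bool pending = t->rseq_event_mask != 0;

	t->rseq_event_mask = 0;	/* a single store clears all pending events */
	return pending;
}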
On Tue, Apr 17, 2018 at 07:01:10AM -0700, Matthew Wilcox wrote:
> On Thu, Apr 05, 2018 at 02:58:06PM +0300, Kirill Tkhai wrote:
> > I observed the following deadlock between them:
> >
> > [task 1] [task 2] [task 3]
> > kill_fasync()
(Copy more people)
On Wed, Apr 11, 2018 at 09:50:51PM +0800, Boqun Feng wrote:
> This patch adds the documentation piece for the reasoning of deadlock
> detection related to recursive read locks. The following sections are
> added:
>
> * Explain what is a recursive read lock, an
On Wed, May 16, 2018 at 04:13:16PM -0400, Mathieu Desnoyers wrote:
> - On May 16, 2018, at 12:18 PM, Peter Zijlstra pet...@infradead.org wrote:
>
> > On Mon, Apr 30, 2018 at 06:44:26PM -0400, Mathieu Desnoyers wrote:
> >> diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
> >> index c32a
On Fri, Apr 26, 2019, at 3:06 PM, Yuyang Du wrote:
> Thanks for review.
>
> On Fri, 26 Apr 2019 at 04:03, Peter Zijlstra wrote:
> >
> > On Wed, Apr 24, 2019 at 06:19:30PM +0800, Yuyang Du wrote:
> > > In mark_lock_irq(), the following checks are performed:
> > >
> > >--
On Wed, Jan 27, 2021 at 12:23:45PM -0800, Michael Kelley wrote:
> STIMER0 interrupts are most naturally modeled as per-cpu IRQs. But
> because x86/x64 doesn't have per-cpu IRQs, the core STIMER0 interrupt
> handling machinery is done in code under arch/x86 and Linux IRQs are
> not used. Adding supp
On Thu, Feb 18, 2021 at 03:16:29PM -0800, Michael Kelley wrote:
[...]
> +
> +/*
> + * Get the value of a single VP register. One version
> + * returns just 64 bits and another returns the full 128 bits.
> + * The two versions are separate to avoid complicating the
> + * calling sequence for the mo
On Thu, Aug 06, 2020 at 10:05:44AM -0700, Peter Oskolkov wrote:
> Based on Google-internal RSEQ work done by
> Paul Turner and Andrew Hunter.
>
> This patch adds a selftest for MEMBARRIER_CMD_PRIVATE_RESTART_RSEQ_ON_CPU.
> The test quite often fails without the previous patch in this patchset,
> b
, and this is useful, especially for the lockdep
development selftest, so we keep this via a variable to force switching
lock annotation for read_lock().
Signed-off-by: Boqun Feng
---
include/linux/lockdep.h | 23 ++-
kernel/locking/lockdep.c | 14 ++
lib/lo
Hi Peter and Waiman,
As promised, this is the updated version of my previous lockdep patchset
for recursive read lock support. It's based on v5.8. Previous versions
can be found at:
V1: https://marc.info/?l=linux-kernel&m=150393341825453
V2: https://marc.info/?l=linux-kernel&m=150468649417950
V3:
structure.
Suggested-by: Peter Zijlstra
Signed-off-by: Boqun Feng
---
include/linux/lockdep.h | 2 +-
kernel/locking/lockdep.c | 6 +++---
2 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
index 6b7cb390f19f..b85973515f84 100644
--- a
s in mark_lock_accessed() and lock_accessed() are
removed, because after this modification, we may call these two
functions on @source_entry of __bfs(), which may not be the entry in
"list_entries"
Signed-off-by: Boqun Feng
---
kernel/locking/lockdep.c | 61 +++-
types of dependencies, and
the definition of strong paths.
* Proof that a closed strong path is both sufficient and necessary
for deadlock detection with recursive read locks involved. The
proof also explains why we call the path "strong"
Signed-off-by:
return value
of __bfs() and its friends; this improves the readability of the code
and, further, could help if we want to extend the BFS.
Signed-off-by: Boqun Feng
---
kernel/locking/lockdep.c | 155 ++-
1 file changed, 89 insertions(+), 66 deletions
, so if either
B -> A is -(E*)-> or A -> .. -> B is -(*N)->, the circle A -> .. -> B ->
A is strong, otherwise not. So we introduce a new match function
hlock_conflict() to replace the class_equal() for the deadlock check in
check_noncircular().
Si
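For illustration, a self-contained model of the check described above (hypothetical,
simplified types; the real hlock_conflict() operates on lockdep's held_lock and
lock_list structures):

#include <stdbool.h>

/* B is the already-held lock whose new B -> A dependency closes the circle;
 * the BFS has found a path A -> .. -> B. */
struct example_path_end {
	int	class;		/* lock class at the end of the found path */
	bool	only_xr;	/* path can only end in a recursive-read dep, -(*R)-> */
};

struct example_held {
	int	class;		/* lock class of B */
	bool	is_read;	/* B is read-held, so B -> A would be -(S*)-> */
};

static bool example_hlock_conflict(const struct example_path_end *entry,
				   const struct example_held *hlock)
{
	if (entry->class != hlock->class)
		return false;		/* the path does not reach B, no circle */
	/* Strong circle: B -> A is -(E*)->, or A -> .. -> B ends in -(*N)->. */
	return !hlock->is_read || !entry->only_xr;
}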
aintain the addition of different types of
dependencies.
Signed-off-by: Boqun Feng
---
include/linux/lockdep.h | 2 +
kernel/locking/lockdep.c | 92 ++--
2 files changed, 90 insertions(+), 4 deletions(-)
diff --git a/include/linux/lockdep.h b/include/linux
ggested-by: Peter Zijlstra
Signed-off-by: Boqun Feng
---
kernel/locking/lockdep.c | 20 ++--
1 file changed, 10 insertions(+), 10 deletions(-)
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index bb8b7e42c154..62f7f88e3673 100644
--- a/kernel/locking/lockdep.c
++
y (with a correct ->only_xr); to do so, we introduce some
helper functions, which also clean up the __bfs() root
initialization code a little.
Signed-off-by: Boqun Feng
---
include/linux/lockdep.h | 2 +
kernel/locking/lockdep.c | 113 ---
2 files cha
Since we have all the fundamentals to handle recursive read locks, we now
add them into the dependency graph.
Signed-off-by: Boqun Feng
---
kernel/locking/lockdep.c | 19 ++-
1 file changed, 2 insertions(+), 17 deletions(-)
diff --git a/kernel/locking/lockdep.c b/kernel/locking
weaker than it) or the end point of the path is N
(which is not weaker than anything).
Signed-off-by: Boqun Feng
---
kernel/locking/lockdep.c | 47 +---
1 file changed, 44 insertions(+), 3 deletions(-)
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockd
n the new usage of the
lock is READ.
Besides, adjust usage_match() and usage_accumulate() for the recursive read
lock changes.
Signed-off-by: Boqun Feng
---
kernel/locking/lockdep.c | 183 +--
1 file changed, 138 insertions(+), 45 deletions(-)
diff --git a
or
on detecting recursive read lock related deadlocks.
Signed-off-by: Boqun Feng
---
lib/locking-selftest.c | 47 ++
1 file changed, 47 insertions(+)
diff --git a/lib/locking-selftest.c b/lib/locking-selftest.c
index caadc4dd3368..002d1ec09852 100644
---
e chainkeys, the chain_hlocks
array now stores the "hlock_id"s rather than lock_class indexes.
Signed-off-by: Boqun Feng
---
kernel/locking/lockdep.c | 53 ++--
1 file changed, 35 insertions(+), 18 deletions(-)
diff --git a/kernel/locking/lockdep.c b/kerne
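As a self-contained sketch of the "hlock_id" idea (illustrative bit layout, not the
kernel's exact encoding): pack the lock class index and the read state into one
16-bit value, so chain_hlocks[] can tell read and write acquisitions of the same
class apart.

#include <stdint.h>

#define EXAMPLE_CLASS_BITS	13	/* assumed width, for illustration only */
#define EXAMPLE_CLASS_MASK	((1u << EXAMPLE_CLASS_BITS) - 1)

static inline uint16_t example_hlock_id(unsigned int class_idx, unsigned int read)
{
	return (uint16_t)(class_idx | (read << EXAMPLE_CLASS_BITS));
}

static inline unsigned int example_hlock_class_idx(uint16_t id)
{
	return id & EXAMPLE_CLASS_MASK;
}

static inline unsigned int example_hlock_read(uint16_t id)
{
	return id >> EXAMPLE_CLASS_BITS;
}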
Now since we can handle recursive read related irq inversion deadlocks
correctly, uncomment the irq_read_recursion2 and add more testcases.
Signed-off-by: Boqun Feng
---
lib/locking-selftest.c | 59 +-
1 file changed, 47 insertions(+), 12 deletions
not deadlock.
Those selftest cases are valuable for the development of recursive read
related deadlock detection support.
Signed-off-by: Boqun Feng
---
lib/locking-selftest.c | 161 +
1 file changed, 161 insertions(+)
diff --git a/lib/locking-selft
This reverts commit d82fed75294229abc9d757f08a4817febae6c4f4.
Since we can now handle mixed read-write deadlock detection well, the
deadlocks in the self tests are detected as expected, so there is no need
for this work-around.
Signed-off-by: Boqun Feng
---
lib/locking-selftest.c | 8
1 file changed, 8
Add a test case that shows USED_IN_*_READ and ENABLE_*_READ can cause a
deadlock too.
Signed-off-by: Boqun Feng
---
lib/locking-selftest.c | 55 ++
1 file changed, 55 insertions(+)
diff --git a/lib/locking-selftest.c b/lib/locking-selftest.c
index
read_lock(&rwlock)
write_lock(&rwlock)
read_lock(&rwlock) spin_lock_irq(&slock)
, which is not a deadlock, as the read_lock() on P0 can get the lock
because it can use the unfair fastpath.
Signed-off-by: Boqun Feng
---
lib/locking-selftest.c | 104 +
Hi Dmitry,
On Fri, Dec 18, 2020 at 12:27:04PM +0100, Dmitry Vyukov wrote:
> On Fri, Dec 18, 2020 at 2:30 AM Boqun Feng wrote:
> >
> > On Thu, Dec 17, 2020 at 07:21:18AM -0800, Paul E. McKenney wrote:
> > > On Thu, Dec 17, 2020 at 11:03:20AM +0100, Daniel Vetter wrote
On Tue, Dec 22, 2020 at 01:52:50PM +1000, Nicholas Piggin wrote:
> Excerpts from Boqun Feng's message of November 14, 2020 1:30 am:
> > Hi Nicholas,
> >
> > On Wed, Nov 11, 2020 at 09:07:23PM +1000, Nicholas Piggin wrote:
> >> All the cool kids are doing it.
> >>
> >> Signed-off-by: Nicholas Pigg
Hi Frederic,
On Fri, Nov 13, 2020 at 01:13:15PM +0100, Frederic Weisbecker wrote:
> This keeps growing up. Rest assured, most of it is debug code and sanity
> checks.
>
> Boqun Feng found that holding rnp lock while updating the offloaded
> state of an rdp isn't needed, and
annotations can be added to detect bugs like
using a mutex in an RCU callback.
Signed-off-by: Boqun Feng
---
kernel/rcu/update.c | 8 ++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/kernel/rcu/update.c b/kernel/rcu/update.c
index 39334d2d2b37..dd59e6412f61 100644
--- a/kernel
-by: Boqun Feng
---
lib/locking-selftest.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/lib/locking-selftest.c b/lib/locking-selftest.c
index afa7d4bb291f..0af91a07fd18 100644
--- a/lib/locking-selftest.c
+++ b/lib/locking-selftest.c
@@ -186,6 +186,7 @@ static void init_shared_classes(void
look at the comment of patch #4 in case I
miss something subtle.
Suggestion and comments are welcome!
Regards,
Boqun
Boqun Feng (4):
lockdep/selftest: Make HARDIRQ context threaded
lockdep: Allow wait context checking with empty ->held_locks
rcu/lockdep: Annotate the rcu_callback_
n
why it's still working without it. The idea is that if we currently
don't hold any lock, then the current context is the only one we should
use to check.
Signed-off-by: Boqun Feng
---
kernel/locking/lockdep.c | 6 +-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a
context checking makes more sense for that configuration.
Signed-off-by: Boqun Feng
---
lib/locking-selftest.c | 232 +
1 file changed, 232 insertions(+)
diff --git a/lib/locking-selftest.c b/lib/locking-selftest.c
index 0af91a07fd18..c00ef4e69637 100644
On Tue, Dec 08, 2020 at 03:33:24PM +0100, Peter Zijlstra wrote:
> On Tue, Dec 08, 2020 at 06:31:12PM +0800, Boqun Feng wrote:
> > These tests are added for two purposes:
> >
> > * Test the implementation of wait context checks and related
> > annotations.
>
On Wed, Jan 06, 2021 at 08:49:32PM +, Dexuan Cui wrote:
> > From: Michael Kelley
> > Sent: Wednesday, January 6, 2021 9:38 AM
> > From: Dexuan Cui
> > Sent: Tuesday, December 22, 2020 4:12 PM
> > >
> > > When a Linux VM runs on Hyper-V, if the host toolstack doesn't support
> > > hibernation
which selects CONFIG_PCI_DOMAINS_GENERIC), we
introduce arch-specific pci sysdata (similar to the one for x86) for
ARM64, and allow the core PCI code to detect the type of sysdata at
runtime. The latter is achieved by adding a pci_ops::use_arch_sysdata
field.
Originally-by: Sunil Muthuswamy
Signed-off-b
rch-specific sysdata (rather
than pci_config_window) for CONFIG_PCI_DOMAINS_GENERIC=y architectures.
This allows us to reuse the existing code for the Hyper-V PCI controller.
This is simply a proposal; I'm open to any suggestions.
Thanks!
Regards,
Boqun
Boqun Feng (2):
arm64: PCI: Allow use arch-s
Use the newly introduced ->use_arch_sysdata to tell the PCI core that we
still use the arch-specific sysdata to set up root PCI buses on
CONFIG_PCI_DOMAINS_GENERIC=y architectures; this is preparation for
Hyper-V ARM64 guest virtual PCI support.
Signed-off-by: Boqun Feng (Microsoft)
---
drivers/
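Purely as an illustration of the proposal (->use_arch_sysdata is the field this
series suggests, not a mainline pci_ops member; all types below are trimmed-down
stand-ins):

#include <stdbool.h>

struct example_pci_ops {
	/* ... config-space accessors ... */
	unsigned int use_arch_sysdata : 1;	/* proposed: sysdata is arch-specific */
};

struct example_pci_bus {
	const struct example_pci_ops *ops;
	void *sysdata;	/* arch-specific sysdata or a pci_config_window */
};

/* Core code could branch on the flag instead of assuming pci_config_window. */
static bool example_bus_uses_arch_sysdata(const struct example_pci_bus *bus)
{
	return bus->ops->use_arch_sysdata;
}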
On Thu, Mar 04, 2021 at 11:11:42AM -0500, Alan Stern wrote:
> On Thu, Mar 04, 2021 at 02:33:32PM +0800, Boqun Feng wrote:
>
> > Right, I was thinking about something unrelated.. but how about the
> > following case:
> >
> > local_v = &y;
> > r1 =
On Wed, Mar 03, 2021 at 03:22:46PM -0500, Alan Stern wrote:
> On Wed, Mar 03, 2021 at 09:40:22AM -0800, Paul E. McKenney wrote:
> > On Wed, Mar 03, 2021 at 12:12:21PM -0500, Alan Stern wrote:
>
> > > Local variables absolutely should be treated just like CPU registers, if
> > > possible. In fact
On Wed, Mar 03, 2021 at 10:13:22PM -0500, Alan Stern wrote:
> On Thu, Mar 04, 2021 at 09:26:31AM +0800, Boqun Feng wrote:
> > On Wed, Mar 03, 2021 at 03:22:46PM -0500, Alan Stern wrote:
>
> > > Which brings us back to the case of the
> > >
> > > dep
Hi Arnd,
On Sat, Mar 20, 2021 at 05:09:10PM +0100, Arnd Bergmann wrote:
> On Sat, Mar 20, 2021 at 1:54 PM Arnd Bergmann wrote:
> > I actually still have a (not really tested) patch series to clean up
> > the pci host bridge registration, and this should make this a lot easier
> > to add on t
han me ;-) Or maybe we can use Rust-for-Linux or
linux-toolchains list to discuss.
[...]
> > - Boqun Feng is working hard on the different options for
> > threading abstractions and has reviewed most of the `sync` PRs.
>
> Boqun, I know you're familiar with LKMM, can you plea
Hi,
On Wed, Mar 31, 2021 at 02:30:32PM +, guo...@kernel.org wrote:
> From: Guo Ren
>
> Some architectures don't have a sub-word swap atomic instruction;
> they only have the full-word one.
>
> The sub-word swap only improves the performance when:
> NR_CPUS < 16K
> * 0- 7: locked byte
> *
On Wed, Mar 31, 2021 at 11:22:35PM +0800, Guo Ren wrote:
> On Mon, Mar 29, 2021 at 8:50 PM Peter Zijlstra wrote:
> >
> > On Mon, Mar 29, 2021 at 08:01:41PM +0800, Guo Ren wrote:
> > > u32 a = 0x55aa66bb;
> > > u16 *ptr = &a;
> > >
> > > CPU0 CPU1
> > > = =
Hi Waiman,
Just a question out of curiosity: how did this problem stay hidden for so long?
;-) Because IIUC, both locktorture and ww_mutex_lock have been there for
a while, so why didn't we spot this earlier?
I ask just to make sure we don't introduce the problem because of some
subtle problems in lock(dep
On Wed, Mar 17, 2021 at 10:54:17PM -0400, Waiman Long wrote:
> On 3/17/21 10:24 PM, Boqun Feng wrote:
> > Hi Waiman,
> >
> > Just a question out of curiosity: how did this problem stay hidden for so long?
> > ;-) Because IIUC, both locktorture and ww_mutex_lock have been th
On Sun, Apr 04, 2021 at 09:30:38PM -0700, Paul E. McKenney wrote:
> On Sun, Apr 04, 2021 at 09:01:25PM -0700, Paul E. McKenney wrote:
> > On Mon, Apr 05, 2021 at 04:08:55AM +0100, Matthew Wilcox wrote:
> > > On Sun, Apr 04, 2021 at 02:40:30PM -0700, Paul E. McKenney wrote:
> > > > On Sun, Apr 04, 2
On Mon, Apr 05, 2021 at 10:27:52AM -0700, Paul E. McKenney wrote:
> On Mon, Apr 05, 2021 at 01:23:30PM +0800, Boqun Feng wrote:
> > On Sun, Apr 04, 2021 at 09:30:38PM -0700, Paul E. McKenney wrote:
> > > On Sun, Apr 04, 2021 at 09:01:25PM -0700, Paul E. McKenney wrote:
> >
On Mon, Apr 05, 2021 at 04:38:07PM -0700, Paul E. McKenney wrote:
> On Tue, Apr 06, 2021 at 07:25:44AM +0800, Boqun Feng wrote:
> > On Mon, Apr 05, 2021 at 10:27:52AM -0700, Paul E. McKenney wrote:
> > > On Mon, Apr 05, 2021 at 01:23:30PM +0800, Boqun Feng wrote:
> > > &
nctional change.
>
> Signed-off-by: Michael Kelley
Reviewed-by: Boqun Feng
Regards,
Boqun
> ---
> arch/x86/hyperv/hv_init.c | 22 --
> arch/x86/include/asm/mshyperv.h | 5 -
> drivers/hv/hv.c | 36
rch specific module. But C doesn't provide
> a way to extend enum types. As a compromise, move the entire
> definition into an arch neutral module, to avoid duplicating the
> arch neutral values for x86/x64 and for ARM64.
>
> No functional change.
>
> Signed-off-by: Mi
> code) have been cleaned up in a separate patch series.
>
> No functional change.
>
> Signed-off-by: Michael Kelley
Reviewed-by: Boqun Feng
Regards,
Boqun
> ---
> arch/x86/hyperv/hv_init.c | 2 +-
> arch/x86/include/asm/hyperv-tlfs.h | 102
> +++
; under arch/arm64.
>
> No functional change.
>
> Signed-off-by: Michael Kelley
Reviewed-by: Boqun Feng
Regards,
Boqun
> ---
> arch/x86/hyperv/hv_init.c | 27 ---
> drivers/hv/vmbus_drv.c | 24 +++-
> includ
No functional change.
>
> Signed-off-by: Michael Kelley
Reviewed-by: Boqun Feng
Regards,
Boqun
> ---
> arch/x86/include/asm/mshyperv.h | 3 ---
> drivers/hv/hv.c | 12 +++-
> 2 files changed, 11 insertions(+), 4 deletions(-)
>
> diff --git a/arch/x
inux per-cpu IRQ allocation into the main VMbus
> driver, and bypassing it in the x86/x64 exception case. For x86/x64,
> special case code is retained under arch/x86, but no VMbus interrupt
> handling code is needed under arch/arm64.
>
> No functional change.
>
> Signed-off-b
o the architecture.
>
> No functional change.
>
> Signed-off-by: Michael Kelley
Reviewed-by: Boqun Feng
Regards,
Boqun
> ---
> arch/x86/include/asm/mshyperv.h| 4
> drivers/clocksource/hyperv_timer.c | 10 --
> 2 files changed, 8 insertions(+), 6
on than to the architecture.
>
> No functional change.
>
> Signed-off-by: Michael Kelley
Reviewed-by: Boqun Feng
Regards,
Boqun
> ---
> arch/x86/include/asm/mshyperv.h| 11 ---
> drivers/clocksource/hyperv_timer.c | 21 +
> 2 files changed,
On Wed, Jan 27, 2021 at 12:23:44PM -0800, Michael Kelley wrote:
> On x86/x64, the TSC clocksource is available in a Hyper-V VM only if
> Hyper-V provides the TSC_INVARIANT flag. The rating on the Hyper-V
> Reference TSC page clocksource is currently set so that it will not
> override the TSC clocks
't have the problem.
Regards,
Boqun
On Tue, Nov 17, 2020 at 07:56:44AM -0500, Sasha Levin wrote:
> From: Boqun Feng
>
> [ Upstream commit d61fc96a37603384cd531622c1e89de1096b5123 ]
>
> Chris Wilson reported a problem spotted by check_chain_key(): a chain
> key got changed in validat
Hi Matthew,
On Mon, Nov 16, 2020 at 03:37:29PM +, Matthew Wilcox wrote:
[...]
>
> It's not just about lockdep for semaphores. Mutexes will spin if the
> current owner is still running, so to convert an interrupt-released
> semaphore to a mutex, we need a way to mark the mutex as being releas
Hi Frederic,
On Tue, Dec 08, 2020 at 11:04:38PM +0100, Frederic Weisbecker wrote:
> On Tue, Dec 08, 2020 at 10:24:09AM -0800, Paul E. McKenney wrote:
> > > It reduces the code scope running with BH disabled.
> > > Also narrowing down helps to understand what it actually protects.
> >
> > I though
On Wed, Dec 09, 2020 at 05:21:58PM -0800, Paul E. McKenney wrote:
> On Tue, Dec 08, 2020 at 01:51:04PM +0100, Frederic Weisbecker wrote:
> > Hi Boqun Feng,
> >
> > On Tue, Dec 08, 2020 at 10:41:31AM +0800, Boqun Feng wrote:
> > > Hi Frederic,
> > >
> >
Hi Steven,
On Mon, Nov 23, 2020 at 01:42:27PM -0500, Steven Rostedt wrote:
> On Mon, 23 Nov 2020 11:28:12 -0500
> Steven Rostedt wrote:
>
> > I noticed:
> >
> >
> > [ 237.650900] enabling event benchmark_event
> >
> > In both traces. Could you disable CONFIG_TRACEPOINT_BENCHMARK and see if
>
Hi Richard,
On Sun, Dec 06, 2020 at 11:59:16PM +0100, Richard Weinberger wrote:
> Hi!
>
> With both KMEMLEAK and KASAN enabled, I'm facing the following lockdep
> splat at random times on Linus' tree as of today.
> Sometimes it happens at bootup, sometimes much later when userspace has
> started
start_secondary+0x11d/0x150
> > secondary_startup_64_no_verify+0xa6/0xab
> >
> > This lockdep splat is reported after:
> > commit e918188611f0 ("locking: More accurate annotations for read_lock()")
> >
> > To clarify:
> > - read-locks are re
On Thu, Dec 17, 2020 at 07:21:18AM -0800, Paul E. McKenney wrote:
> On Thu, Dec 17, 2020 at 11:03:20AM +0100, Daniel Vetter wrote:
> > On Wed, Dec 16, 2020 at 5:16 PM Paul E. McKenney wrote:
> > >
> > > On Wed, Dec 16, 2020 at 10:52:06AM +0100, Daniel Vetter wrote:
> > > > On Wed, Dec 16, 2020 at
Hi Naresh,
On Wed, Dec 02, 2020 at 10:15:44AM +0530, Naresh Kamboju wrote:
> While running kselftests on arm64 db410c platform "BUG: Invalid wait context"
> noticed at different runs this specific platform running stable-rc 5.9.12-rc1.
>
> While running these two test cases we have noticed this B
gp_wait()
> > > is going to check again the bypass state and rearm the bypass timer if
> > > necessary.
> > >
> > > Signed-off-by: Frederic Weisbecker
> > > Cc: Josh Triplett
> > > Cc: Lai Jiangshan
> > > Cc: Joel Fernandes
> >
On Sat, Mar 06, 2021 at 09:39:54PM +0100, Marc Kleine-Budde wrote:
> Hello *,
>
> On 02.11.2020 11:41:52, Andrea Righi wrote:
> > We have the following potential deadlock condition:
> >
> >
> > WARNING: possible irq lock inversion depende
Hi Paul,
On Tue, Jan 19, 2021 at 08:32:33PM -0800, paul...@kernel.org wrote:
> From: "Paul E. McKenney"
>
> Historically, a task that has been subjected to RCU priority boosting is
> deboosted at rcu_read_unlock() time. However, with the advent of deferred
> quiescent states, if the outermost r
On Tue, Jan 26, 2021 at 08:40:24PM -0800, Paul E. McKenney wrote:
> On Wed, Jan 27, 2021 at 10:42:35AM +0800, Boqun Feng wrote:
> > Hi Paul,
> >
> > On Tue, Jan 19, 2021 at 08:32:33PM -0800, paul...@kernel.org wrote:
> > > From: "Paul E. McKenney"
> >
On Wed, Jan 27, 2021 at 11:18:31AM -0800, Paul E. McKenney wrote:
> On Wed, Jan 27, 2021 at 03:05:24PM +0800, Boqun Feng wrote:
> > On Tue, Jan 26, 2021 at 08:40:24PM -0800, Paul E. McKenney wrote:
> > > On Wed, Jan 27, 2021 at 10:42:35AM +0800, Boqun Feng wrot
> Fixes: d21987d709e8 ("video: hyperv: hyperv_fb: Support deferred IO for
> Hyper-V frame buffer driver")
> Fixes: 3a6fb6c4255c ("video: hyperv: hyperv_fb: Use physical memory for fb on
> HyperV Gen 1 VMs.")
> Cc: Wei Hu
> Cc: Boqun Feng
> Signed-off-by: Dexuan
, such as {cmp,}xchg, may be built in
linux/atomic.h, which means simply including asm/cmpxchg.h may not get
the definitions of all the {cmp,}xchg variants. Therefore, we should
privatize the inclusions of asm/cmpxchg.h to keep it only included in
arch/* and replace the inclusions outside wit
Hi Will,
On Wed, Aug 26, 2015 at 11:41:00AM +0100, Will Deacon wrote:
> Hi Boqun,
>
> On Wed, Aug 26, 2015 at 05:28:34AM +0100, Boqun Feng wrote:
> > On Thu, Aug 06, 2015 at 05:54:36PM +0100, Will Deacon wrote:
> > > Will Deacon (8):
> > > atomics: add acquire
definitions of all the {cmp,}xchg variants. Therefore, we should
privatize the inclusions of asm/cmpxchg.h to keep it only included in
arch/* and replace the inclusions outside with linux/atomic.h
Acked-by: Will Deacon
Signed-off-by: Boqun Feng
---
Documentation/atomic_ops.txt | 4
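A minimal example of the include rule stated above for code outside arch/*; the
function itself is generic and not taken from the patch:

#include <linux/atomic.h>	/* pulls in xchg(), cmpxchg() and their variants */

static int example_claim_once(int *flag)
{
	/* returns the previous value; 0 means we were the first claimant */
	return xchg(flag, 1);
}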
Commit ("rcu: Create a synchronize_rcu_mult()") in linux-rcu.git#rcu/next
branch has introduced rcu_callback_t as the type for rcu callback
functions, and call_rcu_func_t has been around for a while. This patch
series uses rcu_callback_t and call_rcu_func_t to save a few lines of
code.
This
can also help cscope to generate a better database for
code reading.
Signed-off-by: Boqun Feng
---
include/linux/rcupdate.h | 10 +-
include/linux/rcutiny.h | 2 +-
include/linux/rcutree.h | 2 +-
kernel/rcu/rcutorture.c | 4 ++--
kernel/rcu/srcu.c| 2 +-
kernel/rcu/tiny.c
We have had call_rcu_func_t for quite a while, but we still use explicit
function pointer type equivalents in some places; this patch replaces
these equivalent types with call_rcu_func_t to gain better readability.
Signed-off-by: Boqun Feng
---
kernel/rcu/rcutorture.c | 2 +-
kernel/rcu/tree.h
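A minimal sketch of how the two typedefs mentioned in these changelogs are used,
assuming their usual kernel shapes (the object and helpers below are illustrative
only):

#include <linux/kernel.h>	/* container_of() */
#include <linux/rcupdate.h>	/* rcu_callback_t, call_rcu_func_t, struct rcu_head */
#include <linux/slab.h>

struct example_obj {
	int value;
	struct rcu_head rcu;
};

/* An RCU callback has the rcu_callback_t shape: void (*)(struct rcu_head *). */
static void example_free_cb(struct rcu_head *head)
{
	kfree(container_of(head, struct example_obj, rcu));
}

/* Callers that want to work with call_rcu() or a similar primitive can take
 * it as a call_rcu_func_t argument instead of an open-coded function pointer
 * type. */
static void example_defer_free(struct example_obj *obj, call_rcu_func_t crf)
{
	crf(&obj->rcu, example_free_cb);
}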
Hi Peter and Ingo,
I'm now learning the lockdep code and find that nested locks may not
be handled correctly because we fail to take held_lock merging into
consideration. I came up with an example that I hope explains my
concern. Please consider this lock/unlock sequence; I also put a pa
Please consider this lock/unlock sequence, I also put a pa
Hi Yuyang,
On Mon, Jul 27, 2015 at 02:43:25AM +0800, Yuyang Du wrote:
> Hi Boqun,
>
> On Tue, Jul 21, 2015 at 06:29:56PM +0800, Boqun Feng wrote:
> > The point is that you have already tracked the sum of runnable_load_avg
> > and blocked_load_avg in cfs_rq->avg.load_avg.