> > 5.12.0-rc5-syzkaller #0 Not tainted
> > kernel/sched/core.c:8294 Illegal context switch in RCU-bh read-side
> > critical section!
> >
> > other info that might help us debug this:
> >
> > rcu_scheduler_active = 2, debug_locks = 0
> > no locks held by systemd-udevd/4825.
I think we have something that's taking the RCU read lock in
(soft?) interrupt context and not releasing it properly in all
situations. This thread doesn't ha…
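A minimal sketch of the class of bug being described: anything that can sleep (and thus context-switch) must not be called inside an RCU-bh read-side critical section. Names below are illustrative, not taken from the report.

#include <linux/rcupdate.h>
#include <linux/mutex.h>

static DEFINE_MUTEX(demo_lock);

static void demo_buggy_path(void)
{
	rcu_read_lock_bh();	/* enter RCU-bh read-side critical section */
	mutex_lock(&demo_lock);	/* BUG: may sleep -> "Illegal context switch" */
	mutex_unlock(&demo_lock);
	rcu_read_unlock_bh();	/* a path that misses this unlock makes every
				 * later sleeper trip the same warning */
}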
If a perf interrupt hits under a spin lock and we end up calling
SELinux hook functions in the PMI handler, this could cause a deadlock.
Since the purpose of this security hook is to control access to
perf_event_open(), it is not right to call this in interrupt context.
The paranoid checks in…
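A hedged sketch of the constraint being argued: a sleeping security check belongs in the perf_event_open() syscall path, never in the PMI/NMI path. The hook call below is a hypothetical stand-in, not the actual LSM interface.

#include <linux/preempt.h>
#include <linux/bug.h>
#include <linux/errno.h>

/* hypothetical permission check; stands in for the SELinux hook */
int demo_perf_security_check(void);

static int demo_gate_perf_open(void)
{
	/* sleeping hooks must only run in process (syscall) context */
	if (WARN_ON_ONCE(in_interrupt() || in_nmi()))
		return -EACCES;
	return demo_perf_security_check();
}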
From: James Smart
[ Upstream commit 19fce0470f05031e6af36e49ce222d0f0050d432 ]
Recent patches changed calling sequences. nvme_fc_abort_outstanding_ios
used to be called from a timeout or work context. Now it is being called
in an io completion context, which can be an interrupt handler.
Unfortunately…
From: Lars-Peter Clausen
[ Upstream commit 0178297c1e6898e2197fe169ef3be723e019b971 ]
On PREEMPT_RT enabled kernels unmarked hrtimers are moved into soft
interrupt expiry mode by default.
The IIO hrtimer-trigger needs to run in hard interrupt context since it
will end up calling…
…context.
+ *
+ * Now the TRANSITION bit breaks the above slightly. The TRANSITION bit
+ * is set when a recursion is detected at the current context, and if
+ * the TRANSITION bit is already set, it will fail the recursion.
+ * This is needed because there's a lag between the changing of
+ * interrupt context and updating the preempt count. In this case,
+ * a false positive will be found. To handle this, one extra recursion
+ * is allowed, and this is done by the TRANSITION bit. If the TRANSITION
+ * bit is a…
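Sketched below is the mechanism this comment describes: per-context recursion bits plus one extra TRANSITION bit that absorbs the false positive in the irq-entry window. This mirrors the idea only; it is not the kernel's exact trace recursion code.

#include <linux/bitops.h>

enum { CTX_NORMAL, CTX_SOFTIRQ, CTX_IRQ, CTX_NMI, CTX_TRANSITION };

/* Returns the bit we claimed, or -1 if this is a real recursion. */
static int recursion_trylock(unsigned long *bits, int ctx)
{
	if (!test_and_set_bit(ctx, bits))
		return ctx;
	/*
	 * Recursion seen at this context: allow exactly one extra level,
	 * covering the lag between taking the interrupt and the preempt
	 * count being updated.
	 */
	if (!test_and_set_bit(CTX_TRANSITION, bits))
		return CTX_TRANSITION;
	return -1;
}

static void recursion_unlock(unsigned long *bits, int bit)
{
	clear_bit(bit, bits);
}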
From: He Zhe
[ Upstream commit edec6e015a02003c2af0ce82c54ea016b5a9e3f0 ]
apic->lapic_timer.timer was initialized with HRTIMER_MODE_ABS_HARD but
started later with HRTIMER_MODE_ABS, which may cause the following warning
in PREEMPT_RT kernel.
WARNING: CPU: 1 PID: 2957 at kernel/time/hrtimer.c:11…
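A minimal sketch of the mismatch the patch fixes: the mode passed to hrtimer_start() must agree with the mode used at init. The variable below is a stand-in for the lapic_timer field named above.

#include <linux/hrtimer.h>

static struct hrtimer demo_timer;	/* stands in for apic->lapic_timer.timer */

static void demo_arm(ktime_t expire)
{
	hrtimer_init(&demo_timer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS_HARD);
	/* ... */
	hrtimer_start(&demo_timer, expire,
		      HRTIMER_MODE_ABS_HARD);	/* fix: was HRTIMER_MODE_ABS */
}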
> …have it be subject to throttling. What are we going to run when the
> idle task is no longer eligible to run?
>
> (it might all work out by accident, but ISTR we had a whole bunch of fun
> in the earlier days of RT due to things like that)
I'm thinking if a mutex_trylock() happens…
Hi Eric,
> What is this patch supposed to be doing?
> What bug is it fixing?
This information is part of the first message of this mail thread.
The patch was intended for the active discussion in this thread,
not for a broad review.
A short summary: In the rt kernel, a panic in an interrupt context does
not start the dump-capture kernel, because there is a mutex_trylock in
__crash_kexec. If this is called in interrupt context, it always fails.
In the non-rt kernel calling…
> The mutex_trylock can still be used, because it is only in syscall
> context and not in interrupt context.
What is this patch supposed to be doing?
What bug is it fixing?
A BUG_ON that triggers inside of BUG_ONs seems not just suspect but
outright impossible to make use of.
I get the feeling…
From: Jussi Kivilinna
[ Upstream commit 279e89b2281af3b1a9f04906e157992c19c9f163 ]
batadv_bla_send_claim() gets called from worker thread context through
batadv_bla_periodic_work(), thus netif_rx_ni needs to be used in that
case. This fixes "NOHZ: local_softirq_pending 08" log messages seen
when…
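The pattern this fix applies, sketched for pre-5.18 kernels (where netif_rx() was for interrupt context and netif_rx_ni() for process context; later kernels fold both into netif_rx()):

#include <linux/netdevice.h>

static void demo_deliver(struct sk_buff *skb)
{
	if (in_interrupt())
		netif_rx(skb);		/* interrupt/softirq context */
	else
		netif_rx_ni(skb);	/* process context, e.g. a worker */
}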
…only in syscall context and not in interrupt context.

Jörg
---
 kernel/kexec.c          |  8 ++--
 kernel/kexec_core.c     | 86 +++--
 kernel/kexec_file.c     |  4 +-
 kernel/kexec_internal.h |  6 ++-
 4 files changed, 69 insertions(+), 35 deletions(-)
diff --git a…
Hi Peter,
On 9/7/2020 6:23 PM, pet...@infradead.org wrote:
According to the original comment in __crash_kexec, the mutex was used
to prevent a sys_kexec_load while crash_kexec is executed. Your proposed
patch does not lock the mutex in crash_kexec.
Sure, but any mutex taker will (spin) wait for…
On 07/09/20 12:41, pet...@infradead.org wrote:
> So conceptually there's the problem that idle must always be runnable,
> and the moment you boost it, it becomes subject to a different
> scheduling class.
>
> Imagine for example what happens when we boost it to RT and then have it
> be subject to…
On Mon, Sep 07, 2020 at 02:03:09PM +0200, Joerg Vehlow wrote:
> On 9/7/2020 1:46 PM, pet...@infradead.org wrote:
> > I think it's too complicated for what is needed, did you see my
> > suggestion from a year ago? Did I miss something obvious?
> This one?
> https://lore.kernel.org/linux-fsdevel/20191219090535.gv2...@hirez.programming.kicks-ass.net/
> I think it may b…
On Sat, Aug 22, 2020 at 07:49:28PM -0400, Steven Rostedt wrote:
> From this email:
>
> > The problem happens when that owner is the idle task; this can happen
> > when the irq/softirq hits the idle task. In that case the contending
> > mutex_lock() will try and PI boost the idle task, and that is…
Hi,
I guess there is currently no other way than to use something like Steven
proposed. I implemented and tested the attached patch with a module
that triggers the soft lockup detection, and it works as expected.
I did not use inline functions, but normal functions implemented in
kexec_core, bec…
This patchset implements memcg-based memory accounting of
allocations made from an interrupt context.
Historically, such allocations were passed unaccounted mostly
because charging the memory cgroup of the current process wasn't
an option. Performance was likely a reason too.
On Sat, 22 Aug 2020 14:32:52 +0200
pet...@infradead.org wrote:
> On Fri, Aug 21, 2020 at 05:03:34PM -0400, Steven Rostedt wrote:
>
> > > Sigh. Is it too hard to make mutex_trylock() usable from interrupt
> > > context?
> >
> > That's a question for Thomas and Peter Z.
On Fri, Aug 21, 2020 at 05:03:34PM -0400, Steven Rostedt wrote:
> > Sigh. Is it too hard to make mutex_trylock() usable from interrupt
> > context?
>
> That's a question for Thomas and Peter Z.
You should really know that too; the TL;DR answer is it's fun…
…I thought about reverting to the xchg approach, but that seems to be
not a good solution anymore, because the mutex is used in many places,
a lot with waiting locks, and I guess that would require spinning now,
if we do this with bare xchg.
Instead I thought about using a spinlock, because they are supposed
to be used in interrupt context as well, if I understand the
documentation correctly ([1]).
@RT developers
Unfortunately the rt patches seem to interpret it a bit differently, and
spin_trylock uses __rt_mutex_trylock again, with the same consequences as
with the current code…
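For reference, a minimal sketch of the xchg-style lock the thread keeps coming back to (the pre-8c5a1cf0ad3a kexec approach): an atomic flag whose trylock is safe from any context, at the cost of offering no waiting semantics.

#include <linux/atomic.h>
#include <linux/types.h>

static atomic_t demo_kexec_lock = ATOMIC_INIT(0);

static bool demo_trylock(void)
{
	return atomic_xchg(&demo_kexec_lock, 1) == 0; /* true if acquired */
}

static void demo_unlock(void)
{
	atomic_set(&demo_kexec_lock, 0);
}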
On Wed, 22 Jul 2020 06:30:53 +0200 Joerg Vehlow wrote:
> >> About 12 years ago this was not implemented using a mutex, but using xchg.
> >> See: 8c5a1cf0ad3ac5fcdf51314a63b16a440870f6a2
> > Yes, that commit is wrong, because mutex_trylock() is not to be taken in…
On Wed, 22 Jul 2020 06:30:53 +0200
Joerg Vehlow wrote:
> Hi Andrew,
>
> it's been two months now and no reaction from you. Maybe you did not see
> this mail from Steven.
> Please look at this issue.
>
Perhaps you need to send the report again without the RT (just [BUG])
to get Andrew's attention…
…According to rt_mutex_trylock documentation, it is not allowed to call
this function from an irq handler, but panic can be called from
everywhere, and thus rt_mutex_trylock can be called from everywhere.
Actually, even mutex_trylock has the comment that it is not supposed to
be used from interrupt context, but it still locks the mutex. I guess
this could also be a bug in the non-rt kernel.
I found this problem using a test module that triggers the softlockup
detection. It is a pretty simple module that creates a kthread that
disables preemption, spins 60 seconds in an endless loop…
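A sketch of what such a test module could look like, reconstructed from the description above (not the poster's actual module):

#include <linux/module.h>
#include <linux/kthread.h>
#include <linux/delay.h>

static int demo_spin_fn(void *unused)
{
	preempt_disable();
	mdelay(60 * 1000);	/* busy-wait ~60s with preemption off:
				 * trips the soft-lockup detector */
	preempt_enable();
	return 0;
}

static int __init demo_init(void)
{
	kthread_run(demo_spin_fn, NULL, "softlockup-demo");
	return 0;
}
module_init(demo_init);
MODULE_LICENSE("GPL");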
On Sun, May 3, 2020 at 2:31 PM Chris Wilson wrote:
>
> Query whether or not we are in a legal context for using SIMD, before
> using SSE4.2 registers.
>
> Suggested-by: Jason A. Donenfeld
> Signed-off-by: Chris Wilson
> ---
>  drivers/gpu/drm/i915/i915_memcpy.c | 4 ++++
>  1 file changed, 4 insertions(+)
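The shape of the guard being added, sketched. may_use_simd() is the generic helper for exactly this question; whether the i915 patch uses precisely this call is an assumption here.

#include <asm/simd.h>	/* may_use_simd() */
#include <linux/types.h>

static bool demo_can_use_sse42(void)
{
	/* false in contexts where the FPU/SIMD state must not be touched,
	 * e.g. most interrupt contexts */
	return may_use_simd();
}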
https://git.kernel.org/tip/71fed982d63cb2bb88db6f36059e3b14a7913846
Author:        Sebastian Andrzej Siewior
AuthorDate:    Fri, 23 Aug 2019 13:38:45 +02:00
Committer:     Thomas Gleixner
CommitterDate: Wed, 28 Aug 2019 13:01:26 +02:00

tick: Mark sched_timer to expire in hard interrupt context

sched_timer must be initialized with the _HARD mode suffix to ensure expiry
in hard interrupt context on RT.
The previous conversion to HARD expiry mode…
The sched_timer should be initialized with the _HARD suffix. Most of
this already happened in commit
902a9f9c50905 ("tick: Mark tick related hrtimers to expiry in hard
interrupt context")
but this one instance has been missed.
Signed-off-by: Sebastian Andrzej Siewior
---
 k…
On Thu, Aug 15, 2019 at 08:26:43PM +0200, Paolo Bonzini wrote:
> Oh, I see. Sorry I didn't understand the question. In the case of KVM,
> there's simply no code that runs in interrupt context and needs to use
> virtual addresses.
>
> In fact, there's no code that ru…
Hi all,
I was looking at the function hva_to_pfn_fast (in virt/kvm/kvm_main.c),
which is executed in an atomic context (even in non-atomic context,
since hva_to_pfn_fast is much faster than hva_to_pfn_slow).
My question is: can this be executed in an interrupt context?
The motivation for this question is that in an interrupt context, we
cannot assume "current" to be the task_struct of the process of
interest. __get_user_pages_fast ass…

> No, it cannot, for the reason you mention below.
> Paolo
Hmm.. Well, I expected the answer to be KVM-specific. Because…
tick: Mark tick related hrtimers to expiry in hard interrupt context
The tick related hrtimers, which drive the scheduler tick and hrtimer based
broadcasting are required to expire in hard interrupt context for obvious
reasons.
Mark them so PREEMPT_RT kernels won't move them to soft interrupt expiry.
Make…
KVM: LAPIC: Mark hrtimer to expire in hard interrupt context
On PREEMPT_RT enabled kernels unmarked hrtimers are moved into soft
interrupt expiry mode by default.
While that's not a functional requirement for the KVM local APIC timer
emulation, it's a latency issue which can be avoided by marking the timer
so hard interrupt context expiry is enforced.
No functional change.
[ tglx: Split out from larger combo patch. Add changelog. ]
…: Mark watchdog_hrtimer to expire in hard interrupt context
The watchdog hrtimer must expire in hard interrupt context even on
PREEMPT_RT=y kernels as otherwise the hard/softlockup detection logic would
not work.
No functional change.
[ tglx: Split out from larger combo patch. Added changelog ]
…: Mark hrtimers to expire in hard interrupt context
To guarantee that the multiplexing mechanism and the hrtimer driven events
work on PREEMPT_RT enabled kernels it's required that the related hrtimers
expire in hard interrupt context. Mark them so PREEMPT_RT kernels won't
defer them to soft interrupt context.
No functional change.
…: Mark hrtimers to expire in hard interrupt context
The scheduler related hrtimers need to expire in hard interrupt context
even on PREEMPT_RT enabled kernels. Mark them as such.
No functional change.
[ tglx: Split out from larger combo patch. Add changelog. ]
Signed-off-by: Sebastian Andrzej Siewior
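The code change in all of these patches has the same one-line shape, sketched here with an illustrative timer:

#include <linux/hrtimer.h>

static struct hrtimer demo_hrtimer;

static void demo_setup(void)
{
	/* the _HARD suffix keeps expiry in hard interrupt context; on
	 * PREEMPT_RT, unmarked hrtimers are deferred to softirq expiry */
	hrtimer_init(&demo_hrtimer, CLOCK_MONOTONIC,
		     HRTIMER_MODE_REL_HARD);	/* was HRTIMER_MODE_REL */
}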
From: Thierry Escande
When the remote DSP invocation is interrupted by the user, the
associated DMA buffer can be freed in interrupt context, causing a
kernel BUG.
This patch adds a worker thread associated to the fastrpc context. It
is scheduled in the rpmsg callback to decrease its refcount…
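The fix's shape, sketched with illustrative names: the interrupt-context rpmsg callback only schedules work; the actual refcount drop/free happens in the worker, in process context.

#include <linux/workqueue.h>

struct demo_ctx {
	struct work_struct put_work;
};

static void demo_put_worker(struct work_struct *work)
{
	struct demo_ctx *ctx = container_of(work, struct demo_ctx, put_work);

	/* process context: safe to drop the refcount / free DMA buffers */
	(void)ctx;
}

static void demo_ctx_init(struct demo_ctx *ctx)
{
	INIT_WORK(&ctx->put_work, demo_put_worker);
}

static void demo_rpmsg_callback(struct demo_ctx *ctx)
{
	schedule_work(&ctx->put_work);	/* safe from interrupt context */
}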
4.17-stable review patch. If anyone has any objections, please let me know.
--
From: Kirill Tkhai
[ Upstream commit 7a107c0f55a3b4c6f84a4323df5610360bde1684 ]
I observed the following deadlock between them:
[task 1]                [task 2]                [task 3]…
4.9-stable review patch. If anyone has any objections, please let me know.
--
From: Kirill Tkhai
[ Upstream commit 7a107c0f55a3b4c6f84a4323df5610360bde1684 ]
I observed the following deadlock between them:
[task 1]                [task 2]                [task 3]…
4.14-stable review patch. If anyone has any objections, please let me know.
--
From: Kirill Tkhai
[ Upstream commit 7a107c0f55a3b4c6f84a4323df5610360bde1684 ]
I observed the following deadlock between them:
[task 1]                [task 2]                [task 3]…
Lockdep warning is seen when the driver is handling a topology change
event after drive removal.
Starting a queue eventually enables irqs, which throws a lockdep warning in
scsi_request_fn. This change makes starting queues async.
[ cut here ]
WARNING: CPU: 0 PID: 0 at kernel/lockde…
On Tue, Apr 17, 2018 at 07:01:10AM -0700, Matthew Wilcox wrote:
> On Thu, Apr 05, 2018 at 02:58:06PM +0300, Kirill Tkhai wrote:
> > I observed the following deadlock between them:
> >
> > [task 1]                [task 2]                [task 3]
> > kill_fasync()…
On 18.04.2018 23:00, Jeff Layton wrote:
> On Tue, 2018-04-17 at 17:15 +0300, Kirill Tkhai wrote:
>> On 17.04.2018 17:01, Matthew Wilcox wrote:
>>> On Thu, Apr 05, 2018 at 02:58:06PM +0300, Kirill Tkhai wrote:
>>>> I observed the following deadlock between them:
>>>> [task 1]…
On Tue, 2018-04-17 at 17:15 +0300, Kirill Tkhai wrote:
> On 17.04.2018 17:01, Matthew Wilcox wrote:
> > On Thu, Apr 05, 2018 at 02:58:06PM +0300, Kirill Tkhai wrote:
> > > I observed the following deadlock between them:
> > >
> > > [task 1]                [task 2]…
On 17.04.2018 17:01, Matthew Wilcox wrote:
> On Thu, Apr 05, 2018 at 02:58:06PM +0300, Kirill Tkhai wrote:
>> I observed the following deadlock between them:
>>
>> [task 1]                [task 2]                [task 3]
>> kill_fasync()           mm_update_next_owner()…
On Thu, Apr 05, 2018 at 02:58:06PM +0300, Kirill Tkhai wrote:
> I observed the following deadlock between them:
>
> [task 1]                [task 2]                [task 3]
> kill_fasync()           mm_update_next_owner()
>                                                 copy_process()
> spin_lock_irqsave(&fa->fa_lock)…
>>>> 	*fp = fa->fa_next;
>>>> 	call_rcu(&fa->fa_rcu, fasync_free_rcu);
>>>> @@ -912,13 +912,13 @@ struct fasync_struct *fasync_insert_entry(int fd, struct file *filp, struct fasy…
>>>> 	if (fa->fa_file != filp)
>>>> 		continue;
>>>> -	spin_lock_irq(&fa->fa_lock);
>>>> +	write_lock_irq(&fa->fa_lock);
>>>> 	fa->fa_fd = fd;
>>>> -	spin_unlock_irq(&fa->fa_lock);
>>>> +…
>> -	spin_lock_init(&new->fa_lock);
>> +	rwlock_init(&new->fa_lock);
>> 	new->magic = FASYNC_MAGIC;
>> 	new->fa_file = filp;
>> 	new->fa_fd = fd;
>> @@ -981,14 +981,13 @@ static void kill_fasync_rcu(struct fasync_struct…
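The effect of the conversion, sketched with an illustrative function (not the patch itself): the interrupt-time signal path only reads the entry, so with an rwlock it can share the lock with other readers instead of excluding them the way the spinlock did.

#include <linux/spinlock.h>

struct demo_fasync {
	rwlock_t lock;		/* was spinlock_t */
	int fd;
};

static void demo_kill_fasync(struct demo_fasync *fa)
{
	unsigned long flags;

	/* readers can nest; irq-time delivery no longer excludes
	 * other read-side holders of the lock */
	read_lock_irqsave(&fa->lock, flags);
	/* send_sigio(...) would go here */
	read_unlock_irqrestore(&fa->lock, flags);
}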
On Thu, 2018-04-05 at 14:58 +0300, Kirill Tkhai wrote:
> I observed the following deadlock between them:
>
> [task 1]                [task 2]                [task 3]
> kill_fasync()           mm_update_next_owner()
>                                                 copy_process()
> spin_lock_irqsave(&fa->fa_lock)…
Hi,
almost two weeks have passed, and there is no reaction.
Jeff, Bruce, what is your point of view?
Just to underline, the problem is because of rw_lock fairness, which
does not allow a reader to take a read-locked lock when there is
already a writer that has called write_lock(). See queued_read_lock…
I observed the following deadlock between them:
[task 1]                         [task 2]                  [task 3]
kill_fasync()                    mm_update_next_owner()
                                                           copy_process()
spin_lock_irqsave(&fa->fa_lock)  read_lock(&tasklist_lock)
                                                           write_lock_irq(&tasklist_lock)…
Driver writers can benefit from knowing if/when callbacks are called in
interrupt context. But it is not completely obvious here, so document it.
Signed-off-by: Luca Ceresoli
Cc: Laurent Pinchart
Cc: Pawel Osciak
Cc: Marek Szyprowski
Cc: Kyungmin Park
Cc: Mauro Carvalho Chehab
---
Changes v1…
The rockchip DRM driver is quite careful to disable interrupts
when taking a lock that is also taken in interrupt context,
which is a good thing.
What is a bit over the top is to use spin_lock_irqsave when
already in interrupt context, as you cannot take another
interrupt again, and disabling…
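The review point in code form, as a minimal sketch assuming a driver-private lock:

#include <linux/interrupt.h>
#include <linux/spinlock.h>

struct demo_dev {
	spinlock_t lock;
};

static irqreturn_t demo_irq_handler(int irq, void *data)
{
	struct demo_dev *d = data;

	spin_lock(&d->lock);	/* already in irq context: plain lock is
				 * enough, no need for spin_lock_irqsave */
	/* ... */
	spin_unlock(&d->lock);
	return IRQ_HANDLED;
}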
3.16.51-rc1 review patch. If anyone has any objections, please let me know.
--
From: Jorgen Hansen
commit 4ef7ea9195ea73262cd9730fb54e1eb726da157b upstream.
In the vsock vmci_transport driver, sock_put wasn't safe to call
in interrupt context, since that may call the vsock destructor
which in turn calls several functions that should only be called
from process context…
…device. This happens because irqfd_wakeup() calls
srcu_read_lock(&kvm->irq_srcu) in interrupt context, while a worker
thread does the same inside kvm_set_irq(). If the interrupt happens
while the worker thread is executing __srcu_read_lock(), updates to
the Classic SRCU ->lock_count[] field o…