Re: Something is leaking RCU holds from interrupt context

2021-04-06 Thread Peter Zijlstra
> > kernel/sched/core.c:8294 Illegal context switch in RCU-bh read-side > > critical section! > > > > other info that might help us debug this: > > > > > > rcu_scheduler_active = 2, debug_locks = 0 > > no locks held by systemd-udevd

Re: Something is leaking RCU holds from interrupt context

2021-04-04 Thread Paul E. McKenney
> > 5.12.0-rc5-syzkaller #0 Not tainted > > > > - > > > > kernel/sched/core.c:8294 Illegal context switch in RCU-bh read-side > > > > critical section! > > > > > > > > other info that might help us debug this

Re: Something is leaking RCU holds from interrupt context

2021-04-04 Thread Matthew Wilcox
> critical section! > > > > > > other info that might help us debug this: > > > > > > > > > rcu_scheduler_active = 2, debug_locks = 0 > > > no locks held by systemd-udevd/4825. > > > > I think we have something that's taking

Re: Something is leaking RCU holds from interrupt context

2021-04-04 Thread Paul E. McKenney
> > kernel/sched/core.c:8294 Illegal context switch in RCU-bh read-side > > critical section! > > > > other info that might help us debug this: > > > > > > rcu_scheduler_active = 2, debug_locks = 0 > > no locks held by systemd-udevd

Something is leaking RCU holds from interrupt context

2021-04-04 Thread Matthew Wilcox
other info that might help us debug this: > > > rcu_scheduler_active = 2, debug_locks = 0 > no locks held by systemd-udevd/4825. I think we have something that's taking the RCU read lock in (soft?) interrupt context and not releasing it properly in all situations. This thread doesn't have
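
The bug class being hunted here can be pictured with a minimal, purely illustrative sketch (not code from the thread; the struct, pointer, and helper names are invented): an RCU read-side critical section opened in (soft)irq context that is not closed on every return path.

    #include <linux/rcupdate.h>

    struct foo { int x; };
    static struct foo __rcu *global_ptr;   /* made-up shared pointer */
    static void consume(struct foo *f) { (void)f; }

    static void example_softirq_handler(void)
    {
            struct foo *obj;

            rcu_read_lock();
            obj = rcu_dereference(global_ptr);
            if (!obj)
                    return;            /* BUG: leaves the RCU read-side section held */
            consume(obj);
            rcu_read_unlock();         /* only this path balances the lock */
    }

A leak like the early return above would show up exactly as the scheduler complaining about a context switch inside a read-side critical section that nobody appears to hold.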

[PATCH 5.10 002/290] powerpc/perf: Fix handling of privilege level checks in perf interrupt context

2021-03-15 Thread gregkh
If a perf interrupt hits under a spin lock and if we end up in calling selinux hook functions in PMI handler, this could cause a dead lock. Since the purpose of this security hook is to control access to perf_event_open(), it is not right to call this in interrupt context. The paranoid checks in

[PATCH 5.11 003/306] powerpc/perf: Fix handling of privilege level checks in perf interrupt context

2021-03-15 Thread gregkh
If a perf interrupt hits under a spin lock and if we end up in calling selinux hook functions in PMI handler, this could cause a dead lock. Since the purpose of this security hook is to control access to perf_event_open(), it is not right to call this in interrupt context. The paranoid checks in

[PATCH 5.10 103/152] nvme-fc: avoid calling _nvme_fc_abort_outstanding_ios from interrupt context

2021-01-18 Thread Greg Kroah-Hartman
From: James Smart [ Upstream commit 19fce0470f05031e6af36e49ce222d0f0050d432 ] Recent patches changed calling sequences. nvme_fc_abort_outstanding_ios used to be called from a timeout or work context. Now it is being called in an io completion context, which can be an interrupt handler. Unfortunately

[PATCH 5.4 214/453] iio: hrtimer-trigger: Mark hrtimer to expire in hard interrupt context

2020-12-28 Thread Greg Kroah-Hartman
From: Lars-Peter Clausen [ Upstream commit 0178297c1e6898e2197fe169ef3be723e019b971 ] On PREEMPT_RT enabled kernels unmarked hrtimers are moved into soft interrupt expiry mode by default. The IIO hrtimer-trigger needs to run in hard interrupt context since it will end up calling
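
For readers unfamiliar with the marking these backports apply, here is a hedged sketch (invented names, not the IIO driver itself) of initializing and starting an hrtimer in hard-expiry mode so PREEMPT_RT does not move it to softirq:

    #include <linux/hrtimer.h>
    #include <linux/ktime.h>

    static struct hrtimer example_timer;

    static enum hrtimer_restart example_timer_fn(struct hrtimer *t)
    {
            /* with a _HARD mode this runs in hard interrupt context, even on RT */
            return HRTIMER_NORESTART;
    }

    static void example_timer_setup(void)
    {
            hrtimer_init(&example_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_HARD);
            example_timer.function = example_timer_fn;
            hrtimer_start(&example_timer, ms_to_ktime(10), HRTIMER_MODE_REL_HARD);
    }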

[PATCH 5.10 290/717] iio: hrtimer-trigger: Mark hrtimer to expire in hard interrupt context

2020-12-28 Thread Greg Kroah-Hartman
From: Lars-Peter Clausen [ Upstream commit 0178297c1e6898e2197fe169ef3be723e019b971 ] On PREEMPT_RT enabled kernels unmarked hrtimers are moved into soft interrupt expiry mode by default. The IIO hrtimer-trigger needs to run in hard interrupt context since it will end up calling

[PATCH 4.14 02/85] ring-buffer: Fix recursion protection transitions between interrupt context

2020-11-17 Thread Greg Kroah-Hartman
the current context, and if + * the TRANSITION bit is already set, it will fail the recursion. + * This is needed because there's a lag between the changing of + * interrupt context and updating the preempt count. In this case, + * a false positive will be found. To handle this, one extra recursion

[PATCH 4.9 02/78] ring-buffer: Fix recursion protection transitions between interrupt context

2020-11-17 Thread Greg Kroah-Hartman
the current context, and if + * the TRANSITION bit is already set, it will fail the recursion. + * This is needed because there's a lag between the changing of + * interrupt context and updating the preempt count. In this case, + * a false positive will be found. To handle this, one extra recursion

[PATCH 4.4 01/64] ring-buffer: Fix recursion protection transitions between interrupt context

2020-11-17 Thread Greg Kroah-Hartman
the current context, and if + * the TRANSITION bit is already set, it will fail the recursion. + * This is needed because there's a lag between the changing of + * interrupt context and updating the preempt count. In this case, + * a false positive will be found. To handle this, one extra recursion

[PATCH 5.9 060/133] ring-buffer: Fix recursion protection transitions between interrupt context

2020-11-09 Thread Greg Kroah-Hartman
Now the TRANSITION bit breaks the above slightly. The TRANSITION bit + * is set when a recursion is detected at the current context, and if + * the TRANSITION bit is already set, it will fail the recursion. + * This is needed because there's a lag between the changing of + * interrupt context

[PATCH 5.4 34/85] ring-buffer: Fix recursion protection transitions between interrupt context

2020-11-09 Thread Greg Kroah-Hartman
context. + * + * Now the TRANSITION bit breaks the above slightly. The TRANSITION bit + * is set when a recursion is detected at the current context, and if + * the TRANSITION bit is already set, it will fail the recursion. + * This is needed because there's a lag between the changing of + * inter

[PATCH 4.19 41/71] ring-buffer: Fix recursion protection transitions between interrupt context

2020-11-09 Thread Greg Kroah-Hartman
context. + * + * Now the TRANSITION bit breaks the above slightly. The TRANSITION bit + * is set when a recursion is detected at the current context, and if + * the TRANSITION bit is already set, it will fail the recursion. + * This is needed because there's a lag between the changing of + * inter

[for-linus][PATCH 2/4] ring-buffer: Fix recursion protection transitions between interrupt context

2020-11-05 Thread Steven Rostedt
recursion. + * This is needed because there's a lag between the changing of + * interrupt context and updating the preempt count. In this case, + * a false positive will be found. To handle this, one extra recursion + * is allowed, and this is done by the TRANSITION bit. If the TRANSITION + * bit is a

[PATCH 5.4 179/388] KVM: LAPIC: Mark hrtimer for period or oneshot mode to expire in hard interrupt context

2020-09-29 Thread Greg Kroah-Hartman
From: He Zhe [ Upstream commit edec6e015a02003c2af0ce82c54ea016b5a9e3f0 ] apic->lapic_timer.timer was initialized with HRTIMER_MODE_ABS_HARD but started later with HRTIMER_MODE_ABS, which may cause the following warning in PREEMPT_RT kernel. WARNING: CPU: 1 PID: 2957 at kernel/time/hrtimer.c:11
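
The mismatch this backport fixes can be summarized with a hedged sketch (illustrative struct, not the actual KVM code): a timer initialized in a _HARD mode must also be started in the matching _HARD mode, otherwise hrtimer.c warns on PREEMPT_RT.

    #include <linux/hrtimer.h>

    struct example_lapic {
            struct hrtimer timer;          /* stand-in for lapic_timer.timer */
    };

    static void example_restart(struct example_lapic *l, ktime_t expires)
    {
            /* init used HRTIMER_MODE_ABS_HARD, so the start must match;
             * starting with plain HRTIMER_MODE_ABS trips the hrtimer WARN on RT */
            hrtimer_start(&l->timer, expires, HRTIMER_MODE_ABS_HARD);
    }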

[PATCH AUTOSEL 5.4 185/330] KVM: LAPIC: Mark hrtimer for period or oneshot mode to expire in hard interrupt context

2020-09-17 Thread Sasha Levin
From: He Zhe [ Upstream commit edec6e015a02003c2af0ce82c54ea016b5a9e3f0 ] apic->lapic_timer.timer was initialized with HRTIMER_MODE_ABS_HARD but started later with HRTIMER_MODE_ABS, which may cause the following warning in PREEMPT_RT kernel. WARNING: CPU: 1 PID: 2957 at kernel/time/hrtimer.c:11

Re: [BUG RT] dump-capture kernel not executed for panic in interrupt context

2020-09-14 Thread Steven Rostedt
have it > be subject to throttling. What are we going to run when the idle task > is no longer eligible to run. > > (it might all work out by accident, but ISTR we had a whole bunch of fun > in the earlier days of RT due to things like that) I'm thinking if a mutex_trylock() happens

Re: [BUG RT] dump-capture kernel not executed for panic in interrupt context

2020-09-14 Thread Eric W. Biederman
this thread, > not for a broad review. > A short summary: In the rt kernel, a panic in an interrupt context does > not start the dump-capture kernel, because there is a mutex_trylock in > __crash_kexec. If this is called in interrupt context, it always fails. > In the non-rt kernel calling

Re: [BUG RT] dump-capture kernel not executed for panic in interrupt context

2020-09-13 Thread Joerg Vehlow
Hi Eric, What is this patch supposed to be doing? What bug is it fixing? This information is in the first message of this mail thread. The patch was intended for the active discussion in this thread, not for a broad review. A short summary: In the rt kernel, a panic in an interrupt context

Re: [BUG RT] dump-capture kernel not executed for panic in interrupt context

2020-09-11 Thread Eric W. Biederman
> > The mutex_trylock can still be used, because it is only in syscall context and > no interrupt context. What is this patch supposed to be doing? What bug is it fixing? A BUG_ON that triggers inside of BUG_ONs seems not just suspect but outright impossible to make use of. I get the feeling

[PATCH 4.4 11/62] batman-adv: bla: use netif_rx_ni when not in interrupt context

2020-09-11 Thread Greg Kroah-Hartman
From: Jussi Kivilinna [ Upstream commit 279e89b2281af3b1a9f04906e157992c19c9f163 ] batadv_bla_send_claim() gets called from worker thread context through batadv_bla_periodic_work(), thus netif_rx_ni needs to be used in that case. This fixes "NOHZ: local_softirq_pending 08" log messages seen when
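
As a hedged sketch of the pattern behind this fix (the helper name is made up, and this reflects the netif_rx_ni() API of these kernel versions): netif_rx() is meant for interrupt context, while process/worker context should use netif_rx_ni() so pending softirqs get processed.

    #include <linux/netdevice.h>
    #include <linux/skbuff.h>
    #include <linux/preempt.h>

    /* illustrative helper, not the batman-adv code itself */
    static void example_deliver(struct sk_buff *skb)
    {
            if (in_interrupt())
                    netif_rx(skb);      /* hard/soft interrupt context */
            else
                    netif_rx_ni(skb);   /* process context, e.g. a worker thread */
    }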

[PATCH 4.9 11/71] batman-adv: bla: use netif_rx_ni when not in interrupt context

2020-09-11 Thread Greg Kroah-Hartman
From: Jussi Kivilinna [ Upstream commit 279e89b2281af3b1a9f04906e157992c19c9f163 ] batadv_bla_send_claim() gets called from worker thread context through batadv_bla_periodic_work(), thus netif_rx_ni needs to be used in that case. This fixes "NOHZ: local_softirq_pending 08" log messages seen when

Re: [BUG RT] dump-capture kernel not executed for panic in interrupt context

2020-09-08 Thread Joerg Vehlow
only in syscall context and no interrupt context. Jörg ---  kernel/kexec.c  |  8 ++--  kernel/kexec_core.c | 86 +++--  kernel/kexec_file.c |  4 +-  kernel/kexec_internal.h |  6 ++-  4 files changed, 69 insertions(+), 35 deletions(-) diff --git a

[PATCH 5.8 042/186] batman-adv: bla: use netif_rx_ni when not in interrupt context

2020-09-08 Thread Greg Kroah-Hartman
From: Jussi Kivilinna [ Upstream commit 279e89b2281af3b1a9f04906e157992c19c9f163 ] batadv_bla_send_claim() gets called from worker thread context through batadv_bla_periodic_work(), thus netif_rx_ni needs to be used in that case. This fixes "NOHZ: local_softirq_pending 08" log messages seen when

[PATCH 4.19 18/88] batman-adv: bla: use netif_rx_ni when not in interrupt context

2020-09-08 Thread Greg Kroah-Hartman
From: Jussi Kivilinna [ Upstream commit 279e89b2281af3b1a9f04906e157992c19c9f163 ] batadv_bla_send_claim() gets called from worker thread context through batadv_bla_periodic_work(), thus netif_rx_ni needs to be used in that case. This fixes "NOHZ: local_softirq_pending 08" log messages seen when

[PATCH 5.4 025/129] batman-adv: bla: use netif_rx_ni when not in interrupt context

2020-09-08 Thread Greg Kroah-Hartman
From: Jussi Kivilinna [ Upstream commit 279e89b2281af3b1a9f04906e157992c19c9f163 ] batadv_bla_send_claim() gets called from worker thread context through batadv_bla_periodic_work(), thus netif_rx_ni needs to be used in that case. This fixes "NOHZ: local_softirq_pending 08" log messages seen when

[PATCH 4.14 14/65] batman-adv: bla: use netif_rx_ni when not in interrupt context

2020-09-08 Thread Greg Kroah-Hartman
From: Jussi Kivilinna [ Upstream commit 279e89b2281af3b1a9f04906e157992c19c9f163 ] batadv_bla_send_claim() gets called from worker thread context through batadv_bla_periodic_work(), thus netif_rx_ni needs to be used in that case. This fixes "NOHZ: local_softirq_pending 08" log messages seen when

Re: [BUG RT] dump-capture kernel not executed for panic in interrupt context

2020-09-07 Thread Joerg Vehlow
Hi Peter On 9/7/2020 6:23 PM, pet...@infradead.org wrote: According to the original comment in __crash_kexec, the mutex was used to prevent a sys_kexec_load, while crash_kexec is executed. Your proposed patch does not lock the mutex in crash_kexec. Sure, but any mutex taker will (spin) wait for

Re: [BUG RT] dump-capture kernel not executed for panic in interrupt context

2020-09-07 Thread Valentin Schneider
On 07/09/20 12:41, pet...@infradead.org wrote: > So conceptually there's the problem that idle must always be runnable, > and the moment you boost it, it becomes subject to a different > scheduling class. > > Imagine for example what happens when we boost it to RT and then have it > be subject to

Re: [BUG RT] dump-capture kernel not executed for panic in interrupt context

2020-09-07 Thread peterz
On Mon, Sep 07, 2020 at 02:03:09PM +0200, Joerg Vehlow wrote: > > > On 9/7/2020 1:46 PM, pet...@infradead.org wrote: > > I think it's too complicated for what is needed, did you see my > > suggestion from a year ago? Did I miss something obvious? > > > This one? > https://lore.kernel.org/linux-

Re: [BUG RT] dump-capture kernel not executed for panic in interrupt context

2020-09-07 Thread Joerg Vehlow
On 9/7/2020 1:46 PM, pet...@infradead.org wrote: I think it's too complicated for what is needed, did you see my suggestion from a year ago? Did I miss something obvious? This one? https://lore.kernel.org/linux-fsdevel/20191219090535.gv2...@hirez.programming.kicks-ass.net/ I think it may be

Re: [BUG RT] dump-capture kernel not executed for panic in interrupt context

2020-09-07 Thread peterz
On Mon, Sep 07, 2020 at 12:51:37PM +0200, Joerg Vehlow wrote: > Hi, > > I guess there is currently no other way than to use something like Steven > proposed. I implemented and tested the attached patch with a module, > that triggers the soft lockup detection and it works as expected. > I did not use

Re: [BUG RT] dump-capture kernel not executed for panic in interrupt context

2020-09-07 Thread peterz
On Sat, Aug 22, 2020 at 07:49:28PM -0400, Steven Rostedt wrote: > From this email: > > > The problem happens when that owner is the idle task, this can happen > > when the irq/softirq hits the idle task, in that case the contending > > mutex_lock() will try and PI boost the idle task, and that is

Re: [BUG RT] dump-capture kernel not executed for panic in interrupt context

2020-09-07 Thread Joerg Vehlow
Hi, I guess there is currently no other way than to use something like Steven proposed. I implemented and tested the attached patch with a module, that triggers the soft lockup detection and it works as expected. I did not use inline functions, but normal functions implemented in kexec_core, because

[PATCH v1 0/4] mm: kmem: kernel memory accounting in an interrupt context

2020-08-27 Thread Roman Gushchin
This patchset implements memcg-based memory accounting of allocations made from an interrupt context. Historically, such allocations were passed unaccounted mostly because charging the memory cgroup of the current process wasn't an option. Performance was likely another reason.
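
A hedged illustration of what the series is about (not code from the patchset; the helper name is invented): an atomic allocation made from interrupt context that is flagged for cgroup accounting, a charge that was previously simply skipped in that context.

    #include <linux/slab.h>
    #include <linux/gfp.h>

    static void *example_irq_alloc(size_t len)
    {
            /* GFP_ATOMIC because we may not sleep in interrupt context;
             * __GFP_ACCOUNT asks for the allocation to be charged to a memcg */
            return kmalloc(len, GFP_ATOMIC | __GFP_ACCOUNT);
    }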

[PATCH RFC 0/4] mm: kmem: kernel memory accounting in an interrupt context

2020-08-27 Thread Roman Gushchin
This patchset implements memcg-based memory accounting of allocations made from an interrupt context. Historically, such allocations were passed unaccounted mostly because charging the memory cgroup of the current process wasn't an option. Performance was likely another reason.

Re: [BUG RT] dump-capture kernel not executed for panic in interrupt context

2020-08-22 Thread Steven Rostedt
On Sat, 22 Aug 2020 14:32:52 +0200 pet...@infradead.org wrote: > On Fri, Aug 21, 2020 at 05:03:34PM -0400, Steven Rostedt wrote: > > > > Sigh. Is it too hard to make mutex_trylock() usable from interrupt > > > context? > > > > > > That's a question

Re: [BUG RT] dump-capture kernel not executed for panic in interrupt context

2020-08-22 Thread peterz
On Fri, Aug 21, 2020 at 05:03:34PM -0400, Steven Rostedt wrote: > > Sigh. Is it too hard to make mutex_trylock() usable from interrupt > > context? > > > That's a question for Thomas and Peter Z. You should really know that too, the TL;DR answer is it's fun

Re: [BUG RT] dump-capture kernel not executed for panic in interrupt context

2020-08-21 Thread Steven Rostedt
waiting locks and I guess that would require spinning now, > > > if we do this with bare xchg. > > > > > > Instead I thought about using a spinlock, because they are supposed > > > to be used in interrupt context as well, if I understand the documentation

Re: [BUG RT] dump-capture kernel not executed for panic in interrupt context

2020-08-21 Thread Andrew Morton
Going back to the xchg approach, but that seems to be > > not a good solution anymore, because the mutex is used in many places, > > a lot with waiting locks and I guess that would require spinning now, > > if we do this with bare xchg. > > > > Instead I thought about using

Re: [BUG RT] dump-capture kernel not executed for panic in interrupt context

2020-08-21 Thread Steven Rostedt
> a lot with waiting locks and I guess that would require spinning now, > if we do this with bare xchg. > > Instead I thought about using a spinlock, because they are supposed > to be used in interrupt context as well, if I understand the documentation > correctly ([1]). > @RT developers

Re: [BUG RT] dump-capture kernel not executed for panic in interrupt context

2020-08-21 Thread Joerg Vehlow
because they are supposed to be used in interrupt context as well, if I understand the documentation correctly ([1]). @RT developers Unfortunately the rt patches seem to interpret it a bit differently and spin_trylock uses __rt_mutex_trylock again, with the same consequences as with the current code

Re: [BUG RT] dump-capture kernel not executed for panic in interrupt context

2020-07-27 Thread Andrew Morton
On Wed, 22 Jul 2020 06:30:53 +0200 Joerg Vehlow wrote: > >> About 12 years ago this was not implemented using a mutex, but using xchg. > >> See: 8c5a1cf0ad3ac5fcdf51314a63b16a440870f6a2 > > Yes, that commit is wrong, because mutex_trylock() is not to be taken in >
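
For readers following the xchg discussion, a hedged sketch (names invented, not the kexec code) of what an xchg-style exclusion looks like, and why it is usable from interrupt context where a mutex or rt_mutex trylock is not:

    #include <linux/atomic.h>
    #include <linux/types.h>

    static atomic_t example_kexec_busy = ATOMIC_INIT(0);

    static bool example_trylock_any_context(void)
    {
            /* returns true if we won the flag; safe even from hard irq context */
            return atomic_xchg(&example_kexec_busy, 1) == 0;
    }

    static void example_unlock_any_context(void)
    {
            atomic_set(&example_kexec_busy, 0);
    }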

Re: [BUG RT] dump-capture kernel not executed for panic in interrupt context

2020-07-22 Thread Steven Rostedt
On Wed, 22 Jul 2020 06:30:53 +0200 Joerg Vehlow wrote: > Hi Andrew, > > it's been two months now and no reaction from you. Maybe you did not see > this mail from Steven. > Please look at this issue. Perhaps you need to send the report again without the RT (just [BUG]) to get Andrew's attention

Re: [BUG RT] dump-capture kernel not executed for panic in interrupt context

2020-07-21 Thread Joerg Vehlow
rt_mutex_trylock can be called from everywhere. Actually even mutex_trylock has the comment, that it is not supposed to be used from interrupt context, but it still locks the mutex. I guess this could also be a bug in the non-rt kernel. I found this problem using a test module, that triggers the softlock

Re: [BUG RT] dump-capture kernel not executed for panic in interrupt context

2020-05-28 Thread Steven Rostedt
locking failed. > > According to rt_mutex_trylock documentation, it is not allowed to call this > function from an irq handler, but panic can be called from everywhere > and thus > rt_mutex_trylock can be called from everywhere. Actually even > mutex_trylock has > the comment, that it

[BUG RT] dump-capture kernel not executed for panic in interrupt context

2020-05-28 Thread Joerg Vehlow
from interrupt context, but it still locks the mutex. I guess this could also be a bug in the non-rt kernel. I found this problem using a test module, that triggers the softlock detection. It is a pretty simple module, that creates a kthread, that disables preemption, spins 60 seconds in an endless loop

Re: [PATCH] drm/i915: Avoid using simd from interrupt context

2020-05-03 Thread Jason A. Donenfeld
On Sun, May 3, 2020 at 2:31 PM Chris Wilson wrote: > > Query whether or not we are in a legal context for using SIMD, before > using SSE4.2 registers. > > Suggested-by: Jason A. Donenfeld > Signed-off-by: Chris Wilson > --- > drivers/gpu/drm/i915/i915_memcpy.c | 4 > 1 file changed, 4 insertions(+)
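
A hedged sketch of such a guard using the generic x86 helpers (not necessarily what the i915 patch ended up doing; function and call sites are illustrative): check whether SIMD is legal in the current context before touching SSE registers, and fall back to a plain copy otherwise.

    #include <linux/string.h>
    #include <asm/simd.h>        /* may_use_simd() */
    #include <asm/fpu/api.h>     /* kernel_fpu_begin()/kernel_fpu_end() */

    static void example_copy(void *dst, const void *src, size_t len)
    {
            if (may_use_simd()) {
                    kernel_fpu_begin();
                    memcpy(dst, src, len);   /* stand-in for an SSE4.2 copy loop */
                    kernel_fpu_end();
            } else {
                    memcpy(dst, src, len);   /* plain copy is always legal, e.g. in irq context */
            }
    }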

Re: [tip: timers/core] tick: Mark sched_timer to expire in hard interrupt context

2019-08-28 Thread Frederic Weisbecker
https://git.kernel.org/tip/71fed982d63cb2bb88db6f36059e3b14a7913846 > Author: Sebastian Andrzej Siewior > AuthorDate: Fri, 23 Aug 2019 13:38:45 +02:00 > Committer: Thomas Gleixner > CommitterDate: Wed, 28 Aug 2019 13:01:26 +02:00 > > tick: Mark sched_timer to expire in hard interrupt context >

[tip: timers/core] tick: Mark sched_timer to expire in hard interrupt context

2019-08-28 Thread tip-bot2 for Sebastian Andrzej Siewior
+02:00 Committer: Thomas Gleixner CommitterDate: Wed, 28 Aug 2019 13:01:26 +02:00 tick: Mark sched_timer to expire in hard interrupt context sched_timer must be initialized with the _HARD mode suffix to ensure expiry in hard interrupt context on RT. The previous conversion to HARD expiry mode

[PATCH 2/2] tick: Mark sched_timer in hard interrupt context

2019-08-23 Thread Sebastian Andrzej Siewior
The sched_timer should be initialized with the _HARD suffix. Most of this already happened in commit 902a9f9c50905 ("tick: Mark tick related hrtimers to expiry in hard interrupt context") but this one instance has been missed. Signed-off-by: Sebastian Andrzej Siewior --- k

Re: [Question-kvm] Can hva_to_pfn_fast be executed in interrupt context?

2019-08-20 Thread Bharath Vedartham
On Thu, Aug 15, 2019 at 08:26:43PM +0200, Paolo Bonzini wrote: > Oh, I see. Sorry I didn't understand the question. In the case of KVM, > there's simply no code that runs in interrupt context and needs to use > virtual addresses. > > In fact, there's no code that runs

Re: [Question-kvm] Can hva_to_pfn_fast be executed in interrupt context?

2019-08-15 Thread Bharath Vedartham
even in non-atomic context, since > > hva_to_pfn_fast is much faster than hva_to_pfn_slow). > > > > My question is can this be executed in an interrupt context? > > No, it cannot for the reason you mention below. > > Paolo hmm.. Well I expected the answer to be kvm specific. Because

Re: [Question-kvm] Can hva_to_pfn_fast be executed in interrupt context?

2019-08-13 Thread Paolo Bonzini
My question is can this be executed in an interrupt context? No, it cannot for the reason you mention below. Paolo > The motivation for this question is that in an interrupt context, we cannot > assume "current" to be the task_struct of the process of interest. > __get_user_pages_fast assumes

[Question-kvm] Can hva_to_pfn_fast be executed in interrupt context?

2019-08-13 Thread Bharath Vedartham
Hi all, I was looking at the function hva_to_pfn_fast (in virt/kvm/kvm_main.c) which is executed in an atomic context (even in non-atomic context, since hva_to_pfn_fast is much faster than hva_to_pfn_slow). My question is can this be executed in an interrupt context? The motivation for this
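
The constraint being asked about can be stated with a small hedged sketch (illustrative helper, not KVM code): in interrupt context, current is just whatever task happened to be interrupted, so walking current->mm has no well-defined meaning there.

    #include <linux/sched.h>
    #include <linux/preempt.h>

    static bool example_can_walk_current_mm(void)
    {
            if (in_interrupt())
                    return false;            /* current is the interrupted task, not ours */
            return current->mm != NULL;      /* kernel threads carry no user mm */
    }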

[tip:timers/core] tick: Mark tick related hrtimers to expiry in hard interrupt context

2019-08-01 Thread tip-bot for Sebastian Andrzej Siewior
: Mark tick related hrtimers to expiry in hard interrupt context The tick related hrtimers, which drive the scheduler tick and hrtimer based broadcasting are required to expire in hard interrupt context for obvious reasons. Mark them so PREEMPT_RT kernels wont move them to soft interrupt expiry. Make

[tip:timers/core] KVM: LAPIC: Mark hrtimer to expire in hard interrupt context

2019-08-01 Thread tip-bot for Sebastian Andrzej Siewior
: LAPIC: Mark hrtimer to expire in hard interrupt context On PREEMPT_RT enabled kernels unmarked hrtimers are moved into soft interrupt expiry mode by default. While that's not a functional requirement for the KVM local APIC timer emulation, it's a latency issue which can be avoided by marking

[tip:timers/core] watchdog: Mark watchdog_hrtimer to expire in hard interrupt context

2019-08-01 Thread tip-bot for Sebastian Andrzej Siewior
: Mark watchdog_hrtimer to expire in hard interrupt context The watchdog hrtimer must expire in hard interrupt context even on PREEMPT_RT=y kernels as otherwise the hard/softlockup detection logic would not work. No functional change. [ tglx: Split out from larger combo patch. Added changelog

[tip:timers/core] perf/core: Mark hrtimers to expire in hard interrupt context

2019-08-01 Thread tip-bot for Sebastian Andrzej Siewior
: Mark hrtimers to expire in hard interrupt context To guarantee that the multiplexing mechanism and the hrtimer driven events work on PREEMPT_RT enabled kernels it's required that the related hrtimers expire in hard interrupt context. Mark them so PREEMPT_RT kernels wont defer them to

[tip:timers/core] sched: Mark hrtimers to expire in hard interrupt context

2019-08-01 Thread tip-bot for Sebastian Andrzej Siewior
: Mark hrtimers to expire in hard interrupt context The scheduler related hrtimers need to expire in hard interrupt context even on PREEMPT_RT enabled kernels. Mark then as such. No functional change. [ tglx: Split out from larger combo patch. Add changelog. ] Signed-off-by: Sebastian Andrzej

[tip:timers/core] tick: Mark tick related hrtimers to expiry in hard interrupt context

2019-08-01 Thread tip-bot for Sebastian Andrzej Siewior
: Mark tick related hrtimers to expiry in hard interrupt context The tick related hrtimers, which drive the scheduler tick and hrtimer based broadcasting are required to expire in hard interrupt context for obvious reasons. Mark them so PREEMPT_RT kernels wont move them to soft interrupt expiry. Make

[tip:timers/core] KVM: LAPIC: Mark hrtimer to expire in hard interrupt context

2019-08-01 Thread tip-bot for Sebastian Andrzej Siewior
: LAPIC: Mark hrtimer to expire in hard interrupt context On PREEMPT_RT enabled kernels unmarked hrtimers are moved into soft interrupt expiry mode by default. While that's not a functional requirement for the KVM local APIC timer emulation, it's a latency issue which can be avoided by marking

[tip:timers/core] watchdog: Mark watchdog_hrtimer to expire in hard interrupt context

2019-08-01 Thread tip-bot for Sebastian Andrzej Siewior
: Mark watchdog_hrtimer to expire in hard interrupt context The watchdog hrtimer must expire in hard interrupt context even on PREEMPT_RT=y kernels as otherwise the hard/softlockup detection logic would not work. No functional change. [ tglx: Split out from larger combo patch. Added changelog

[tip:timers/core] perf/core: Mark hrtimers to expire in hard interrupt context

2019-08-01 Thread tip-bot for Sebastian Andrzej Siewior
: Mark hrtimers to expire in hard interrupt context To guarantee that the multiplexing mechanism and the hrtimer driven events work on PREEMPT_RT enabled kernels it's required that the related hrtimers expire in hard interrupt context. Mark them so PREEMPT_RT kernels wont defer them to

[tip:timers/core] sched: Mark hrtimers to expire in hard interrupt context

2019-08-01 Thread tip-bot for Sebastian Andrzej Siewior
: Mark hrtimers to expire in hard interrupt context The scheduler related hrtimers need to expire in hard interrupt context even on PREEMPT_RT enabled kernels. Mark then as such. No functional change. [ tglx: Split out from larger combo patch. Add changelog. ] Signed-off-by: Sebastian Andrzej

[tip:timers/core] KVM: LAPIC: Mark hrtimer to expire in hard interrupt context

2019-07-30 Thread tip-bot for Sebastian Andrzej Siewior
: LAPIC: Mark hrtimer to expire in hard interrupt context On PREEMPT_RT enabled kernels unmarked hrtimers are moved into soft interrupt expiry mode by default. While that's not a functional requirement for the KVM local APIC timer emulation, it's a latency issue which can be avoided by marking

[tip:timers/core] tick: Mark tick related hrtimers to expiry in hard interrupt context

2019-07-30 Thread tip-bot for Sebastian Andrzej Siewior
: Mark tick related hrtimers to expiry in hard interrupt context The tick related hrtimers, which drive the scheduler tick and hrtimer based broadcasting are required to expire in hard interrupt context for obvious reasons. Mark them so PREEMPT_RT kernels wont move them to soft interrupt expiry. Make

[tip:timers/core] watchdog: Mark watchdog_hrtimer to expire in hard interrupt context

2019-07-30 Thread tip-bot for Sebastian Andrzej Siewior
: Mark watchdog_hrtimer to expire in hard interrupt context The watchdog hrtimer must expire in hard interrupt context even on PREEMPT_RT=y kernels as otherwise the hard/softlockup detection logic would not work. No functional change. [ tglx: Split out from larger combo patch. Added changelog

[tip:timers/core] perf/core: Mark hrtimers to expire in hard interrupt context

2019-07-30 Thread tip-bot for Thomas Gleixner
hrtimers to expire in hard interrupt context To guarantee that the multiplexing mechanism and the hrtimer driven events work on PREEMPT_RT enabled kernels it's required that the related hrtimers expire in hard interrupt context. Mark them so PREEMPT_RT kernels wont defer them to soft interrupt context.

[tip:timers/core] sched: Mark hrtimers to expire in hard interrupt context

2019-07-30 Thread tip-bot for Thomas Gleixner
hrtimers to expire in hard interrupt context The scheduler related hrtimers need to expire in hard interrupt context even on PREEMPT_RT enabled kernels. Mark then as such. No functional change. [ tglx: Split out from larger combo patch. Add changelog. ] Signed-off-by: Sebastian Andrzej Siewior Signed

Re: [patch 07/12] KVM: LAPIC: Mark hrtimer to expire in hard interrupt context

2019-07-26 Thread Paolo Bonzini
emulation, it's a latency issue which can be avoided by marking the timer > so hard interrupt context expiry is enforced. > > No functional change. > > [ tglx: Split out from larger combo patch. Add changelog. ] > > Signed-off-by: Sebastian Andrzej Siewior > Signed-off-by:

[patch 04/12] sched: Mark hrtimers to expire in hard interrupt context

2019-07-26 Thread Thomas Gleixner
From: Thomas Gleixner The scheduler related hrtimers need to expire in hard interrupt context even on PREEMPT_RT enabled kernels. Mark then as such. No functional change. [ tglx: Split out from larger combo patch. Add changelog. ] Signed-off-by: Sebastian Andrzej Siewior Signed-off-by

[patch 05/12] perf/core: Mark hrtimers to expire in hard interrupt context

2019-07-26 Thread Thomas Gleixner
From: Thomas Gleixner To guarantee that the multiplexing mechanism and the hrtimer driven events work on PREEMPT_RT enabled kernels it's required that the related hrtimers expire in hard interrupt context. Mark them so PREEMPT_RT kernels wont defer them to soft interrupt context. No functional change.

[patch 08/12] tick: Mark tick related hrtimers to expiry in hard interrupt context

2019-07-26 Thread Thomas Gleixner
From: Sebastian Andrzej Siewior The tick related hrtimers, which drive the scheduler tick and hrtimer based broadcasting are required to expire in hard interrupt context for obvious reasons. Mark them so PREEMPT_RT kernels wont move them to soft interrupt expiry. No functional change. [ tglx

[patch 07/12] KVM: LAPIC: Mark hrtimer to expire in hard interrupt context

2019-07-26 Thread Thomas Gleixner
timer so hard interrupt context expiry is enforced. No functional change. [ tglx: Split out from larger combo patch. Add changelog. ] Signed-off-by: Sebastian Andrzej Siewior Signed-off-by: Thomas Gleixner Cc: k...@vger.kernel.org Cc: Paolo Bonzini --- arch/x86/kvm/lapic.c |2 +- 1 file changed, 1

[patch 06/12] watchdog: Mark watchdog_hrtimer to expire in hard interrupt context

2019-07-26 Thread Thomas Gleixner
From: Sebastian Andrzej Siewior The watchdog hrtimer must expire in hard interrupt context even on PREEMPT_RT=y kernels as otherwise the hard/softlockup detection logic would not work. No functional change. [ tglx: Split out from larger combo patch. Added changelog ] Signed-off-by: Sebastian

[PATCH 1/8] misc: fastrpc: Avoid free of DMA buffer in interrupt context

2019-03-07 Thread Srinivas Kandagatla
From: Thierry Escande When the remote DSP invocation is interrupted by the user, the associated DMA buffer can be freed in interrupt context causing a kernel BUG. This patch adds a worker thread associated to the fastrpc context. It is scheduled in the rpmsg callback to decrease its refcount
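
A hedged sketch of the general pattern described here (invented names, not the fastrpc code): work that must not run in interrupt context, such as freeing a buffer, is packaged into a work_struct and scheduled from the callback so it runs later in process context.

    #include <linux/kernel.h>
    #include <linux/workqueue.h>
    #include <linux/slab.h>

    struct example_ctx {
            struct work_struct put_work;
            void *buf;                      /* stand-in for the DMA buffer */
    };

    static void example_put_worker(struct work_struct *w)
    {
            struct example_ctx *ctx = container_of(w, struct example_ctx, put_work);

            kfree(ctx->buf);                /* now in process context, safe to free */
            kfree(ctx);
    }

    /* called from the (interrupt-context) callback */
    static void example_callback(struct example_ctx *ctx)
    {
            INIT_WORK(&ctx->put_work, example_put_worker);
            schedule_work(&ctx->put_work);
    }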

[PATCH 4.17 241/336] fasync: Fix deadlock between task-context and interrupt-context kill_fasync()

2018-08-01 Thread Greg Kroah-Hartman
4.17-stable review patch. If anyone has any objections, please let me know. -- From: Kirill Tkhai [ Upstream commit 7a107c0f55a3b4c6f84a4323df5610360bde1684 ] I observed the following deadlock between them: [task 1] [task 2] [t

[PATCH 4.9 115/144] fasync: Fix deadlock between task-context and interrupt-context kill_fasync()

2018-08-01 Thread Greg Kroah-Hartman
4.9-stable review patch. If anyone has any objections, please let me know. -- From: Kirill Tkhai [ Upstream commit 7a107c0f55a3b4c6f84a4323df5610360bde1684 ] I observed the following deadlock between them: [task 1] [task 2] [ta

[PATCH 4.14 179/246] fasync: Fix deadlock between task-context and interrupt-context kill_fasync()

2018-08-01 Thread Greg Kroah-Hartman
4.14-stable review patch. If anyone has any objections, please let me know. -- From: Kirill Tkhai [ Upstream commit 7a107c0f55a3b4c6f84a4323df5610360bde1684 ] I observed the following deadlock between them: [task 1] [task 2] [t

[RFC] [PATCH] Do not start queue from interrupt context

2018-04-30 Thread Vikram Auradkar
Lockdep warning is seen when driver is handling a topology change event after drive removal. Starting queue eventually enables irq, which throws lockdep warning in scsi_request_fn. This change makes starting queues async. [ cut here ] WARNING: CPU: 0 PID: 0 at kernel/lockde

Re: [PATCH] fasync: Fix deadlock between task-context and interrupt-context kill_fasync()

2018-04-27 Thread Boqun Feng
On Tue, Apr 17, 2018 at 07:01:10AM -0700, Matthew Wilcox wrote: > On Thu, Apr 05, 2018 at 02:58:06PM +0300, Kirill Tkhai wrote: > > I observed the following deadlock between them: > > > > [task 1] [task 2] [task 3] > > kill_fasync()

Re: [PATCH] fasync: Fix deadlock between task-context and interrupt-context kill_fasync()

2018-04-18 Thread Kirill Tkhai
On 18.04.2018 23:00, Jeff Layton wrote: > On Tue, 2018-04-17 at 17:15 +0300, Kirill Tkhai wrote: >> On 17.04.2018 17:01, Matthew Wilcox wrote: >>> On Thu, Apr 05, 2018 at 02:58:06PM +0300, Kirill Tkhai wrote: I observed the following deadlock between them: [task 1]

Re: [PATCH] fasync: Fix deadlock between task-context and interrupt-context kill_fasync()

2018-04-18 Thread Jeff Layton
On Tue, 2018-04-17 at 17:15 +0300, Kirill Tkhai wrote: > On 17.04.2018 17:01, Matthew Wilcox wrote: > > On Thu, Apr 05, 2018 at 02:58:06PM +0300, Kirill Tkhai wrote: > > > I observed the following deadlock between them: > > > > > > [task 1] [task 2]

Re: [PATCH] fasync: Fix deadlock between task-context and interrupt-context kill_fasync()

2018-04-17 Thread Kirill Tkhai
On 17.04.2018 17:01, Matthew Wilcox wrote: > On Thu, Apr 05, 2018 at 02:58:06PM +0300, Kirill Tkhai wrote: >> I observed the following deadlock between them: >> >> [task 1] [task 2] [task 3] >> kill_fasync() mm_update_next_owner()

Re: [PATCH] fasync: Fix deadlock between task-context and interrupt-context kill_fasync()

2018-04-17 Thread Matthew Wilcox
On Thu, Apr 05, 2018 at 02:58:06PM +0300, Kirill Tkhai wrote: > I observed the following deadlock between them: > > [task 1] [task 2] [task 3] > kill_fasync() mm_update_next_owner() > copy_process() > spin_lock_irqsave

Re: [PATCH] fasync: Fix deadlock between task-context and interrupt-context kill_fasync()

2018-04-17 Thread Kirill Tkhai
> *fp = fa->fa_next; >>>>call_rcu(&fa->fa_rcu, fasync_free_rcu); >>>> @@ -912,13 +912,13 @@ struct fasync_struct *fasync_insert_entry(int fd, >>>> struct file *filp, struct fasy >>>>if (fa->fa_file != filp) >

Re: [PATCH] fasync: Fix deadlock between task-context and interrupt-context kill_fasync()

2018-04-17 Thread Jeff Layton
> continue; > > > > > > - spin_lock_irq(&fa->fa_lock); > > > + write_lock_irq(&fa->fa_lock); > > > fa->fa_fd = fd; > > > - spin_unlock_irq(&fa->fa_lock); > > > +

Re: [PATCH] fasync: Fix deadlock between task-context and interrupt-context kill_fasync()

2018-04-17 Thread Kirill Tkhai
spin_lock_init(&new->fa_lock); >> +rwlock_init(&new->fa_lock); >> new->magic = FASYNC_MAGIC; >> new->fa_file = filp; >> new->fa_fd = fd; >> @@ -981,14 +981,13 @@ static void kill_fasync_rcu(struct fasync_struct

Re: [PATCH] fasync: Fix deadlock between task-context and interrupt-context kill_fasync()

2018-04-17 Thread Jeff Layton
On Thu, 2018-04-05 at 14:58 +0300, Kirill Tkhai wrote: > I observed the following deadlock between them: > > [task 1] [task 2] [task 3] > kill_fasync() mm_update_next_owner() > copy_process() > spin_lock_irqsave(&fa->

Re: [PATCH] fasync: Fix deadlock between task-context and interrupt-context kill_fasync()

2018-04-17 Thread Kirill Tkhai
Hi, almost two weeks have passed with no reaction. Jeff, Bruce, what is your point of view? Just to underline, the problem is because of rw_lock fairness, which does not allow a reader to take a read-locked lock when there is already a writer that called write_lock(). See queued_read_lock

[PATCH] fasync: Fix deadlock between task-context and interrupt-context kill_fasync()

2018-04-05 Thread Kirill Tkhai
I observed the following deadlock between them: [task 1] [task 2] [task 3] kill_fasync() mm_update_next_owner() copy_process() spin_lock_irqsave(&fa->fa_lock) read_lock(&tasklist_lock) write_lock_irq(&tasklist_lock)
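
The shape of the fix, as the diff fragments quoted in the replies suggest, is a spinlock-to-rwlock conversion; a hedged, simplified sketch (invented names, not the actual fs/fcntl.c code):

    #include <linux/spinlock.h>

    struct example_fasync {
            rwlock_t lock;                  /* was: spinlock_t; init with rwlock_init() */
            int fd;
    };

    /* interrupt-context notifier: readers no longer exclude each other */
    static void example_kill_fasync(struct example_fasync *fa)
    {
            unsigned long flags;

            read_lock_irqsave(&fa->lock, flags);
            /* deliver the SIGIO-style notification here */
            read_unlock_irqrestore(&fa->lock, flags);
    }

    /* task-context update path takes the write side */
    static void example_update_fd(struct example_fasync *fa, int fd)
    {
            write_lock_irq(&fa->lock);
            fa->fd = fd;
            write_unlock_irq(&fa->lock);
    }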

[PATCH v2 3/3] media: vb2-core: vb2_ops: document non-interrupt-context calling

2018-03-08 Thread Luca Ceresoli
Driver writers can benefit in knowing if/when callbacks are called in interrupt context. But it is not completely obvious here, so document it. Signed-off-by: Luca Ceresoli Cc: Laurent Pinchart Cc: Pawel Osciak Cc: Marek Szyprowski Cc: Kyungmin Park Cc: Mauro Carvalho Chehab --- Changes v1

[PATCH 3/3] drm/rockchip: Don't use spin_lock_irqsave in interrupt context

2018-02-20 Thread Marc Zyngier
The rockchip DRM driver is quite careful to disable interrupts when taking a lock that is also taken in interrupt context, which is a good thing. What is a bit over the top is to use spin_lock_irqsave when already in interrupt context, as you cannot take another interrupt again, and disabling
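
A hedged generic sketch of the point being made (not the rockchip code; the lock and functions are illustrative): inside the hard interrupt handler interrupts are already disabled on this CPU, so a plain spin_lock() suffices, while the _irqsave form belongs in contexts that can still be interrupted.

    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(example_lock);

    /* hard interrupt handler: interrupts are already off here */
    static void example_irq_handler(void)
    {
            spin_lock(&example_lock);
            /* touch state shared with process context */
            spin_unlock(&example_lock);
    }

    /* process context: must block the interrupt while holding the lock */
    static void example_task_path(void)
    {
            unsigned long flags;

            spin_lock_irqsave(&example_lock, flags);
            /* touch the same shared state */
            spin_unlock_irqrestore(&example_lock, flags);
    }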

[PATCH 3.16 129/133] VSOCK: sock_put wasn't safe to call in interrupt context

2017-11-21 Thread Ben Hutchings
3.16.51-rc1 review patch. If anyone has any objections, please let me know. -- From: Jorgen Hansen commit 4ef7ea9195ea73262cd9730fb54e1eb726da157b upstream. In the vsock vmci_transport driver, sock_put wasn't safe to call in interrupt context, since that may call the

[PATCH stable-3.16 1/3] VSOCK: sock_put wasn't safe to call in interrupt context

2017-09-13 Thread Michal Hocko
From: Jorgen Hansen commit 4ef7ea9195ea73262cd9730fb54e1eb726da157b upstream. In the vsock vmci_transport driver, sock_put wasn't safe to call in interrupt context, since that may call the vsock destructor which in turn calls several functions that should only be called from process context

[PATCH 3.16 174/233] srcu: Allow use of Classic SRCU from both process and interrupt context

2017-09-09 Thread Ben Hutchings
device. This happens because irqfd_wakeup() calls srcu_read_lock(&kvm->irq_srcu) in interrupt context, while a worker thread does the same inside kvm_set_irq(). If the interrupt happens while the worker thread is executing __srcu_read_lock(), updates to the Classic SRCU ->lock_count[] field o
