On Wed, 2008-07-16 at 15:51 +0900, Hidetoshi Seto wrote:
> If stop_machine() is invoked while one of the onlined CPUs is locked
> up for some reason, stop_machine() cannot finish its work because the
> locked-up CPU cannot stop. This means all other healthy CPUs
> will be blocked indefinitely by one dead CPU.
>
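For illustration, a minimal sketch of the rendezvous that hangs (simplified pseudologic, not the kernel's actual stop_machine() implementation):

    /*
     * Simplified sketch: stop_machine() queues stopper work on every
     * online CPU and waits for all of them to arrive.  A CPU wedged
     * with IRQs off never runs its stopper thread, so this spins
     * forever and the healthy CPUs stay parked with it.
     */
    static void stop_machine_rendezvous(atomic_t *thread_ack)
    {
            /* each arriving CPU does atomic_dec(thread_ack) */
            while (atomic_read(thread_ack) > 0)
                    cpu_relax();    /* no timeout: one dead CPU blocks all */
    }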
On Fri, 2008-07-25 at 10:55 -0700, Jeremy Fitzhardinge wrote:
> I'm thinking about ways to improve the Xen balloon driver. This is the
> driver which allows the guest domain to expand or contract by either
> asking for more memory from the hypervisor, or giving unneeded memory
> back. From the
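The balloon idea in a minimal hypothetical sketch (give_page_to_hypervisor() is a made-up name, not the real driver API):

    /* Hypothetical sketch of ballooning, not the actual Xen driver:
     * to shrink the guest, steal pages from its allocator and hand
     * them back to the hypervisor; growing works the other way round. */
    static void balloon_inflate(unsigned long nr_pages)
    {
            while (nr_pages--) {
                    struct page *page = alloc_page(GFP_KERNEL);
                    if (!page)
                            break;                 /* no spare memory left */
                    give_page_to_hypervisor(page); /* hypothetical hypercall wrapper */
            }
    }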
On Thu, 2008-11-06 at 11:01 -0500, Vivek Goyal wrote:
> > Does this still require I use dm, or does it also work on regular block
> > devices? Patch 4/4 isn't quite clear on this.
>
> No. You don't have to use dm. It will simply work on regular devices. We
> shall have to put a few lines of code fo
On Thu, 2008-11-06 at 10:30 -0500, [EMAIL PROTECTED] wrote:
> Hi,
>
> If you are not already tired of so many io controller implementations, here
> is another one.
>
> This is a very early, very crude implementation to get early feedback to see
> if this approach makes any sense or not.
>
> This c
On Thu, 2008-11-06 at 11:39 -0500, Vivek Goyal wrote:
> On Thu, Nov 06, 2008 at 05:16:13PM +0100, Peter Zijlstra wrote:
> > On Thu, 2008-11-06 at 11:01 -0500, Vivek Goyal wrote:
> >
> > > > Does this still require I use dm, or does it also work on regular block
>
On Thu, 2008-11-06 at 11:57 -0500, Rik van Riel wrote:
> Peter Zijlstra wrote:
>
> > The only real issue I can see is with linear volumes, but those are
> > stupid anyway - none of the gains but all the risks.
>
> Linear volumes may well be the most common ones.
>
On Fri, 2008-11-07 at 11:41 +1100, Dave Chinner wrote:
> On Thu, Nov 06, 2008 at 06:11:27PM +0100, Peter Zijlstra wrote:
> > On Thu, 2008-11-06 at 11:57 -0500, Rik van Riel wrote:
> > > Peter Zijlstra wrote:
> > >
> > > > The only real issue I can se
Hi,
Just wondering,.. have you lot looked at the recently posted BFQ
patches?
BFQ looks like a very promising elevator; it has tighter bounds than
CFQ and already does the cgroup thing.
On Fri, 2008-11-14 at 13:58 +0900, Satoshi UCHIDA wrote:
> > I think Satoshi's cfq controller patches also do not seem to be considering
> > A, B, C, D and E to be at the same level; instead it treats cgroup "/", D and E
> > at the same level and tries to do proportional BW division among these.
> >
These patches never seem to have made it onto LKML?!
On Mon, 2007-08-20 at 15:13 +0200, Laurent Vivier wrote:
> The aim of these four patches is to introduce Virtual Machine time accounting.
>
> _Ingo_, as these patches modify files of the scheduler, could you have a look
> at them, please?
>
On Tue, 2009-08-04 at 17:07 +0200, Martin Schwidefsky wrote:
> On Tue, 04 Aug 2009 16:16:38 +0200
> Peter Zijlstra wrote:
>
> > These patches never seem to have made it onto LKML?!
> >
> > On Mon, 2007-08-20 at 15:13 +0200, Laurent Vivier wrote:
> > >
On Tue, 2009-08-04 at 19:29 +0200, Martin Schwidefsky wrote:
> > So it's going to split user time into user and guest. Does that really
> > make sense? For the host kernel it really is just another user process,
> > no?
>
> The code (at least in parts) is already upstream. Look at the
> account_gu
On Wed, 2008-01-23 at 21:53 +0900, Ryo Tsuruta wrote:
> Hi everyone,
>
> I'm happy to announce that I've implemented a Block I/O bandwidth controller.
> The controller is designed to be of use in a cgroup or virtual machine
> environment. The current approach is that the controller is implemented
On Mon, Jul 24, 2023 at 05:43:10PM +0800, Qi Zheng wrote:
> +void shrinker_unregister(struct shrinker *shrinker)
> +{
> + struct dentry *debugfs_entry;
> + int debugfs_id;
> +
> + if (!shrinker || !(shrinker->flags & SHRINKER_REGISTERED))
> + return;
> +
> + down_write(
On Thu, Jun 08, 2023 at 04:03:31PM +0200, Juergen Gross wrote:
> As a preparation for replacing paravirt patching completely by
> alternative patching, move some backend functions and #defines to
> alternative code and header.
>
> Signed-off-by: Juergen Gross
Acked-by: Peter
On Thu, Jun 08, 2023 at 04:03:33PM +0200, Juergen Gross wrote:
> Instead of stacking alternative and paravirt patching, use the new
> ALT_FLAG_CALL flag to switch those mixed calls to pure alternative
> handling.
>
> This eliminates the need to be careful regarding the sequence of
> alternative an
tches, one
introducing ALT_NOT_XEN and then a second with the rest.
Regardless,
Acked-by: Peter Zijlstra (Intel)
> arch/x86/tools/relocs.c | 2 +-
> 7 files changed, 3 insertions(+), 178 deletions(-)
More - more better! :-)
Acked-by: Peter Zijlstra (Intel)
On Wed, Jul 10, 2013 at 01:33:25PM +0300, Gleb Natapov wrote:
Here's an idea: trim the damn email ;-) -- not only directed at Gleb.
> > Ingo, Gleb,
> >
> > From the results perspective, Andrew Theurer, Vinod's test results are
> > pro-pvspinlock.
> > Could you please help me to know what will ma
On Tue, Jul 16, 2013 at 09:02:15AM +0300, Gleb Natapov wrote:
> BTW can NMI handler take spinlocks?
No -- that is, yes you can by using trylock, but you still shouldn't.
> If it can what happens if NMI is
> delivered in a section protected by local_irq_save()/local_irq_restore()?
You deadlock.
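To make both answers concrete, a hedged sketch of the deadlock (illustrative, not from the thread):

    static DEFINE_SPINLOCK(lock);

    void driver_path(void)
    {
            unsigned long flags;

            local_irq_save(flags);  /* masks IRQs, but NOT NMIs */
            spin_lock(&lock);
            /* <-- NMI delivered here */
            spin_unlock(&lock);
            local_irq_restore(flags);
    }

    void nmi_handler(void)
    {
            spin_lock(&lock);       /* spins forever: the lock owner cannot
                                     * run until this NMI returns -> deadlock.
                                     * spin_trylock() would at least fail
                                     * gracefully instead of hanging. */
            spin_unlock(&lock);
    }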
On Fri, Mar 04, 2022 at 08:27:45PM +0200, Adrian Hunter wrote:
> On 04/03/2022 15:41, Peter Zijlstra wrote:
> > On Mon, Feb 14, 2022 at 01:09:06PM +0200, Adrian Hunter wrote:
> >> Currently, when Intel PT is used within a VM guest, it is not possible to
> >> make use o
On Mon, Mar 07, 2022 at 11:06:46AM +0100, Juergen Gross wrote:
> > diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
> > index 4420499f7bb4..a1f179ed39bf 100644
> > --- a/arch/x86/kernel/paravirt.c
> > +++ b/arch/x86/kernel/paravirt.c
> > @@ -145,6 +145,15 @@ DEFINE_STATIC_CALL(
On Mon, Mar 07, 2022 at 02:36:03PM +0200, Adrian Hunter wrote:
> > diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
> > index 4420499f7bb4..a1f179ed39bf 100644
> > --- a/arch/x86/kernel/paravirt.c
> > +++ b/arch/x86/kernel/paravirt.c
> > @@ -145,6 +145,15 @@ DEFINE_STATIC_CALL(
ug exception already do.
>
> Juergen Gross (4):
> x86/xen: use specific Xen pv interrupt entry for MCE
> x86/xen: use specific Xen pv interrupt entry for DF
> x86/pv: switch SWAPGS to ALTERNATIVE
> x86/xen: drop USERGS_SYSRET64 paravirt call
Looks 'sane
On Fri, Nov 20, 2020 at 12:46:23PM +0100, Juergen Gross wrote:
> +static __always_inline void arch_local_irq_restore(unsigned long flags)
> +{
> + if (!arch_irqs_disabled_flags(flags))
> + arch_local_irq_enable();
> +}
If someone were to write horrible code like:
local_irq
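The mail is cut off here; the "horrible code" is presumably a save/enable/restore pattern like this sketch (a reconstruction, not the original text):

    unsigned long flags;

    local_irq_disable();
    local_irq_save(flags);      /* flags records "disabled" */
    local_irq_enable();
    local_irq_restore(flags);   /* must disable IRQs again, but the
                                 * implementation above only ever enables,
                                 * so IRQs are silently left on here */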
On Fri, Nov 20, 2020 at 12:46:24PM +0100, Juergen Gross wrote:
> The time pvops functions are the only ones left which might be
> used in 32-bit mode and which return a 64-bit value.
>
> Switch them to use the static_call() mechanism instead of pvops, as
> this allows quite some simplification of
On Fri, Nov 20, 2020 at 12:46:26PM +0100, Juergen Gross wrote:
> +#define PVOP_CALL(rettype, op, clbr, call_clbr, extra_clbr, ...) \
> ({ \
> PVOP_CALL_ARGS; \
>
On Fri, Nov 20, 2020 at 12:46:18PM +0100, Juergen Gross wrote:
> 30 files changed, 325 insertions(+), 598 deletions(-)
Much awesome! I'll try and get that objtool thing sorted.
On Fri, Nov 20, 2020 at 01:53:42PM +0100, Peter Zijlstra wrote:
> On Fri, Nov 20, 2020 at 12:46:18PM +0100, Juergen Gross wrote:
> > 30 files changed, 325 insertions(+), 598 deletions(-)
>
> Much awesome! I'll try and get that objtool thing sorted.
This seems to work
On Tue, Dec 15, 2020 at 12:42:45PM +0100, Jürgen Groß wrote:
> Peter,
>
> On 23.11.20 14:43, Peter Zijlstra wrote:
> > On Fri, Nov 20, 2020 at 01:53:42PM +0100, Peter Zijlstra wrote:
> > > On Fri, Nov 20, 2020 at 12:46:18PM +0100, Juergen Gross wrote:
> > > >
On Tue, Dec 15, 2020 at 03:18:34PM +0100, Peter Zijlstra wrote:
> Ah, I was waiting for Josh to have an opinion (and then sorta forgot
> about the whole thing again). Let me refresh and provide at least a
> Changelog.
How's this then?
---
Subject: objtool: Alternatives vs ORC, the
On Tue, Dec 15, 2020 at 06:38:02PM -0600, Josh Poimboeuf wrote:
> On Tue, Dec 15, 2020 at 03:54:08PM +0100, Peter Zijlstra wrote:
> > The problem is that a single instance of unwind information (ORC) must
> > capture and correctly unwind all alternatives. Since the trivially
> &g
On Wed, Dec 16, 2020 at 10:56:05AM -0600, Josh Poimboeuf wrote:
> On Wed, Dec 16, 2020 at 09:40:59AM +0100, Peter Zijlstra wrote:
> > > Could we make it easier by caching the shared
> > > per-alt-group CFI state somewhere along the way?
> >
> > Yes, but when
On Tue, Feb 09, 2021 at 02:16:49PM -0800, Nadav Amit wrote:
> @@ -816,8 +821,8 @@ STATIC_NOPV void native_flush_tlb_others(const struct
> cpumask *cpumask,
>* doing a speculative memory access.
>*/
> if (info->freed_tables) {
> - smp_call_function_many(cpumask, fl
th direct ones. In a further step this could
> be switched to static_call(), too.
Acked-by: Peter Zijlstra (Intel)
I've rebased my objtool/retpoline branch on top of this, will post
if/when this hits tip. Negative alternative works like a charm.
On Wed, May 19, 2021 at 03:52:48PM +0200, Joerg Roedel wrote:
> --- a/arch/x86/kernel/sev.c
> +++ b/arch/x86/kernel/sev.c
> @@ -1343,9 +1343,10 @@ DEFINE_IDTENTRY_VC_SAFE_STACK(exc_vmm_communication)
> return;
> }
>
> + instrumentation_begin();
> +
> irq_state = irqe
On Wed, May 19, 2021 at 09:13:08PM +0200, Joerg Roedel wrote:
> Hi Peter,
>
> thanks for your review.
>
> On Wed, May 19, 2021 at 07:54:50PM +0200, Peter Zijlstra wrote:
> > On Wed, May 19, 2021 at 03:52:48PM +0200, Joerg Roedel wrote:
> > > --- a/arch/x86/ker
On Tue, Jun 08, 2021 at 11:54:36AM +0200, Joerg Roedel wrote:
> From: Joerg Roedel
>
> Use irqentry_enter() and irqentry_exit() to track the runtime state of
> the #VC handler. The reason it ran in NMI mode was solely to make sure
> nothing interrupts the handler while the GHCB is in use.
>
> Th
Bah, I suppose the trouble is that this SEV crap requires PARAVIRT?
I should really get around to fixing noinstr validation with PARAVIRT on
:-(
On Thu, Jun 10, 2021 at 11:11:38AM +0200, Joerg Roedel wrote:
> +static void vc_handle_from_kernel(struct pt_regs *regs, unsigned long
> error_code)
On Mon, Jun 14, 2021 at 03:53:24PM +0200, Joerg Roedel wrote:
> --- a/arch/x86/entry/entry_64.S
> +++ b/arch/x86/entry/entry_64.S
> @@ -506,7 +506,7 @@ SYM_CODE_START(\asmsym)
>
> movq%rsp, %rdi /* pt_regs pointer */
>
> - call\cfunc
> + callkernel_\cfunc
On Wed, Jun 16, 2021 at 08:49:12PM +0200, Joerg Roedel wrote:
> static void sev_es_ap_hlt_loop(void)
> {
> struct ghcb_state state;
> + unsigned long flags;
> struct ghcb *ghcb;
>
> - ghcb = sev_es_get_ghcb(&state);
> + local_irq_save(flags);
> +
> + ghcb = __sev_ge
On Wed, Jun 16, 2021 at 08:49:12PM +0200, Joerg Roedel wrote:
> @@ -514,7 +523,7 @@ void noinstr __sev_es_nmi_complete(void)
> struct ghcb_state state;
> struct ghcb *ghcb;
>
> - ghcb = sev_es_get_ghcb(&state);
> + ghcb = __sev_get_ghcb(&state);
>
> vc_ghcb_invalidat
On Fri, Jun 18, 2021 at 10:17:54AM +0200, Joerg Roedel wrote:
> On Thu, Jun 17, 2021 at 05:38:46PM +0200, Peter Zijlstra wrote:
> > I'm getting (with all of v6.1 applied):
> >
> > vmlinux.o: warning: objtool: __sev_es_nmi_complete()+0x1bf: call to panic()
> &
On Fri, Jun 18, 2021 at 01:54:07PM +0200, Joerg Roedel wrote:
> Joerg Roedel (2):
> x86/sev: Make sure IRQs are disabled while GHCB is active
> x86/sev: Split up runtime #VC handler for correct state tracking
Acked-by: Peter Zijlst
On Mon, Sep 13, 2021 at 11:36:24PM +0200, Thomas Gleixner wrote:
> That's the real problem and for that your barrier is at the wrong place
> because you want to make sure that those stores are visible before the
> store to intx_soft_enabled becomes visible, i.e. this should be:
>
>
> /*
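The quote is truncated; a hedged sketch of the ordering Thomas describes (intx_soft_enabled is from the thread, the surrounding code is illustrative):

    /* publisher: make the earlier stores visible before the flag */
    setup_irq_state(vp_dev);               /* illustrative earlier stores */
    smp_wmb();                             /* order them before the flag  */
    WRITE_ONCE(vp_dev->intx_soft_enabled, true);

    /* IRQ handler: pair the barrier on the read side */
    if (!READ_ONCE(vp_dev->intx_soft_enabled))
            return IRQ_NONE;
    smp_rmb();                             /* pairs with smp_wmb() above  */
    /* ... now safe to look at the earlier stores ... */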
; arch/x86/xen/xen-ops.h| 4 +-
> 8 files changed, 53 insertions(+), 201 deletions(-)
That looks awesome, I'm totally in favour of deleting code :-)
Acked-by: Peter Zijlstra (Intel)
On Wed, Nov 09, 2022 at 02:44:18PM +0100, Juergen Gross wrote:
> There are some paravirt assembler functions which are sharing a common
> pattern. Introduce a macro DEFINE_PARAVIRT_ASM() for creating them.
>
> Note that this macro includes explicit alignment of the generated
> functions, leadi
Sorry; things keep getting in the way of finishing this :/
As such, I need a bit of time to get on track again.
On Tue, Oct 04, 2022 at 01:03:57PM +0200, Ulf Hansson wrote:
> > --- a/drivers/acpi/processor_idle.c
> > +++ b/drivers/acpi/processor_idle.c
> > @@ -1200,6 +1200,8 @@ static int acp
Doing RCU-idle outside the driver, only to then temporarily enable it
again, at least twice, before going idle is daft.
That is, once implicitly through the cpu_pm_*() calls and once
explicitly doing ct_irq_*_irqon().
Signed-off-by: Peter Zijlstra (Intel)
Reviewed-by: Anup Patel
Reviewed-by
Ever since commit d3afc7f12987 ("arm64: Allow IPIs to be handled as
normal interrupts") this function is called in regular IRQ context.
Signed-off-by: Peter Zijlstra (Intel)
Acked-by: Mark Rutland
Acked-by: Marc Zyngier
Acked-by: Rafael J. Wysocki
Acked-by: Frederic Weisbecker
From: Tony Lindgren
OMAP4 uses full SoC suspend modes as idle states, as such it needs the
whole power-domain and clock-domain code from the idle path.
All that code is not suitable to run with RCU disabled, as such push
RCU-idle deeper still.
Signed-off-by: Tony Lindgren
Signed-off-by: Peter
cpuidle_state::enter() methods should be IRQ invariant.
Additionally make sure to use raw_local_irq_*() methods since this
cpuidle callback will be called with RCU already disabled.
Signed-off-by: Peter Zijlstra (Intel)
Reviewed-by: Rafael J. Wysocki
Reviewed-by: Frederic Weisbecker
Tested-by
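A minimal sketch of an IRQ-invariant ->enter() using the raw accessors (illustrative driver code, not from the series):

    static int my_state_enter(struct cpuidle_device *dev,
                              struct cpuidle_driver *drv, int index)
    {
            /* RCU is already off here; plain local_irq_*() would trace,
             * and tracing needs RCU, hence the raw variants. */
            raw_local_irq_enable();
            my_wait_for_interrupt();        /* hypothetical low-power wait */
            raw_local_irq_disable();        /* leave IRQs as we found them */
            return index;
    }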
Doing RCU-idle outside the driver, only to then temporarily enable it
again, at least twice, before going idle is daft.
Notably once implicitly through the cpu_pm_*() calls and once
explicitly doing RCU_NONIDLE().
Signed-off-by: Peter Zijlstra (Intel)
Reviewed-by: Frederic Weisbecker
Acked-by
Doing RCU-idle outside the driver, only to then temporarily enable it
again, some *four* times, before going idle is daft.
Notably three times explicitly using RCU_NONIDLE() and once implicitly
through cpu_pm_*().
Signed-off-by: Peter Zijlstra (Intel)
Reviewed-by: Frederic Weisbecker
Reviewed
e for smp_cross_call() tracepoints"), that
cpuidle_enter_state_coupled() already had RCU disabled, but that's
long been fixed by commit 1098582a0f6c ("sched,idle,rcu: Push rcu_idle
deeper into the idle path").
Signed-off-by: Peter Zijlstra (Intel)
Reviewed-by: Ulf Hansson
Ac
Doing RCU-idle outside the driver, only to then temporarily enable it
again, at least twice, before going idle is daft.
Notably once implicitly through the cpu_pm_*() calls and once
explicitly doing ct_irq_*_irqon().
Signed-off-by: Peter Zijlstra (Intel)
Reviewed-by: Frederic Weisbecker
Idle code is very like entry code in that RCU isn't available. As
such, add a little validation.
Signed-off-by: Peter Zijlstra (Intel)
Acked-by: Geert Uytterhoeven
Acked-by: Rafael J. Wysocki
Acked-by: Frederic Weisbecker
Tested-by: Tony Lindgren
Tested-by: Ulf Hansson
---
arch/
() leaves .noinstr.text section
Signed-off-by: Peter Zijlstra (Intel)
Acked-by: Rafael J. Wysocki
Acked-by: Frederic Weisbecker
Tested-by: Tony Lindgren
Tested-by: Ulf Hansson
---
arch/x86/include/asm/shared/io.h |4 ++--
drivers/acpi/processor_idle.c|2 +-
include/linux/cpumask.h
All the idle routines are called with RCU disabled, as such there must
not be any tracing inside.
While there, clean up the io-port idle thing.
Signed-off-by: Peter Zijlstra (Intel)
Acked-by: Rafael J. Wysocki
Acked-by: Frederic Weisbecker
Tested-by: Tony Lindgren
Tested-by: Ulf Hansson
Doing RCU-idle outside the driver, only to then temporarily enable it
again before going idle is daft.
Notably: this converts all dt_init_idle_driver() and
__CPU_PM_CPU_IDLE_ENTER() users for they are inextricably intertwined.
Signed-off-by: Peter Zijlstra (Intel)
Acked-by: Rafael J. Wysocki
() leaves .noinstr.text section
Signed-off-by: Peter Zijlstra (Intel)
Reviewed-by: Srivatsa S. Bhat (VMware)
Reviewed-by: Juergen Gross
Acked-by: Rafael J. Wysocki
Acked-by: Frederic Weisbecker
Tested-by: Tony Lindgren
Tested-by: Ulf Hansson
---
arch/x86/include/asm/paravirt.h |6
The PM notifiers should no longer be run with RCU disabled (per the
previous patches), as such this hack is no longer required either.
Signed-off-by: Peter Zijlstra (Intel)
Acked-by: Rafael J. Wysocki
Acked-by: Frederic Weisbecker
Tested-by: Tony Lindgren
Tested-by: Ulf Hansson
---
drivers
complicated idle states for the cpuidle driver.
Signed-off-by: Peter Zijlstra (Intel)
Acked-by: Tony Lindgren
Acked-by: Rafael J. Wysocki
Acked-by: Frederic Weisbecker
Tested-by: Tony Lindgren
Tested-by: Ulf Hansson
---
arch/arm/mach-omap2/pm34xx.c |2 +-
1 file changed, 1 insertion(+), 1
section
vmlinux.o: warning: objtool: intel_idle+0x78: call to
test_ti_thread_flag.constprop.0() leaves .noinstr.text section
vmlinux.o: warning: objtool: acpi_safe_halt+0xf: call to
test_ti_thread_flag.constprop.0() leaves .noinstr.text section
Signed-off-by: Peter Zijlstra (Intel)
Acked-by
__monitor.constprop.0()
leaves .noinstr.text section
vmlinux.o: warning: objtool: mwait_idle+0x88: call to clflush() leaves
.noinstr.text section
Signed-off-by: Peter Zijlstra (Intel)
Acked-by: Rafael J. Wysocki
Acked-by: Frederic Weisbecker
Tested-by: Tony Lindgren
Tested-by: Ulf Hansson
---
arch/x86
Doing RCU-idle outside the driver, only to then temporarily enable it
again, at least twice, before going idle is daft.
Notably both cpu_pm_enter() and cpu_cluster_pm_enter() implicitly
re-enable RCU.
Signed-off-by: Peter Zijlstra (Intel)
Reviewed-by: Frederic Weisbecker
Acked-by: Rafael J
The perf_lopwr_cb() is called from the idle routines; there is no RCU
there, so we must not enter tracing.
Signed-off-by: Peter Zijlstra (Intel)
Acked-by: Rafael J. Wysocki
Acked-by: Frederic Weisbecker
Tested-by: Tony Lindgren
Tested-by: Ulf Hansson
---
arch/x86/events/amd/brs.c
Per commit 56e62a737028 ("s390: convert to generic entry") the last
and only callers of trace_hardirqs_{on,off}_caller() went away, clean
up.
Cc: Sven Schnelle
Signed-off-by: Peter Zijlstra (Intel)
---
kernel/trace/trace_preemptirq.c | 29 -
1 file c
x false positive RCU splats due to
incorrect hardirqs state")
Signed-off-by: Peter Zijlstra (Intel)
Acked-by: Rafael J. Wysocki
Acked-by: Frederic Weisbecker
Tested-by: Tony Lindgren
Tested-by: Ulf Hansson
---
drivers/idle/intel_idle.c |8 +---
1 file changed, 1 insertion(+), 7 d
The problem, per commit fc98c3c8c9dc ("printk: use rcuidle console
tracepoint"), was printk usage from the cpuidle path where RCU was
already disabled.
Per the patches earlier in this series, this is no longer the case.
Signed-off-by: Peter Zijlstra (Intel)
Reviewed-by: Sergey S
nable-disable' dance.
Therefore, push this IRQ disabling into the idle function, meaning
that those architectures can avoid the pointless IRQ state flipping.
Signed-off-by: Peter Zijlstra (Intel)
Reviewed-by: Gautham R. Shenoy
Acked-by: Mark Rutland [arm64]
Acked-by: Rafael J. Wysocki
Ack
The whole disable-RCU, enable-IRQS dance is very intricate since
changing IRQ state is traced, which depends on RCU.
Add two helpers for the cpuidle case that mirror the entry code.
Signed-off-by: Peter Zijlstra (Intel)
Acked-by: Rafael J. Wysocki
Acked-by: Frederic Weisbecker
Tested-by: Tony
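Upstream these became ct_cpuidle_enter()/ct_cpuidle_exit(); a much-simplified sketch of the enter side, assuming those names (details elided, see the kernel source):

    void noinstr ct_cpuidle_enter(void)
    {
            lockdep_assert_irqs_disabled();
            /* like entry code: inform tracing/lockdep about the IRQ
             * state while RCU is still watching ... (elided) ... */
            ct_idle_enter();        /* and only then stop RCU */
    }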
Make cpuidle_enter_state() consistent with the s2idle variant and
verify ->enter() always returns with interrupts disabled.
Signed-off-by: Peter Zijlstra (Intel)
Acked-by: Rafael J. Wysocki
Acked-by: Frederic Weisbecker
Tested-by: Tony Lindgren
Tested-by: Ulf Hansson
---
drivers/cpui
Typical boot time setup; no need to suffer an indirect call for that.
Signed-off-by: Peter Zijlstra (Intel)
Reviewed-by: Frederic Weisbecker
Reviewed-by: Rafael J. Wysocki
Tested-by: Tony Lindgren
Tested-by: Ulf Hansson
---
arch/x86/kernel/process.c | 50
vmlinux.o: warning: objtool: intel_idle_ibrs+0x17: call to spec_ctrl_current()
leaves .noinstr.text section
vmlinux.o: warning: objtool: intel_idle_ibrs+0x27: call to wrmsrl.constprop.0()
leaves .noinstr.text section
Signed-off-by: Peter Zijlstra (Intel)
Acked-by: Rafael J. Wysocki
Acked-by
Now that arch_cpu_idle() is expected to return with IRQs disabled,
avoid the useless STI/CLI dance.
Per the specs this is supposed to work, but nobody has yet relied upon
this behaviour so broken implementations are possible.
Signed-off-by: Peter Zijlstra (Intel)
Acked-by: Rafael J. Wysocki
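The spec behaviour in question is presumably MWAIT's "treat masked interrupts as break events" (ECX bit 0); a hedged sketch of idling with IRQs kept disabled throughout:

    static __cpuidle void my_mwait_idle(void)
    {
            /* With MWAIT_ECX_INTERRUPT_BREAK, a pending interrupt wakes
             * MWAIT even though IRQs stay masked, so no STI/CLI dance is
             * needed around the idle instruction. */
            __monitor(&current_thread_info()->flags, 0, 0);
            if (!need_resched())
                    __mwait(0, MWAIT_ECX_INTERRUPT_BREAK);
    }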
vmlinux.o: warning: objtool: acpi_idle_enter_s2idle+0x45: call to
__this_cpu_preempt_check() leaves .noinstr.text section
Signed-off-by: Peter Zijlstra (Intel)
Acked-by: Rafael J. Wysocki
Acked-by: Frederic Weisbecker
Tested-by: Tony Lindgren
Tested-by: Ulf Hansson
---
include/linux/percpu
Doing RCU-idle outside the driver, only to then temporarily enable it
again before going idle is daft.
Notably the cpu_pm_*() calls implicitly re-enable RCU for a bit.
Signed-off-by: Peter Zijlstra (Intel)
Reviewed-by: Frederic Weisbecker
Acked-by: Rafael J. Wysocki
Tested-by: Tony Lindgren
from NMI context again.
Signed-off-by: Peter Zijlstra (Intel)
---
kernel/trace/trace_preemptirq.c | 21 +
1 file changed, 13 insertions(+), 8 deletions(-)
--- a/kernel/trace/trace_preemptirq.c
+++ b/kernel/trace/trace_preemptirq.c
@@ -20,6 +20,15 @@
static DEFINE_PER_CPU(int,
ARCH_WANTS_NO_INSTR (a superset of CONFIG_GENERIC_ENTRY) disallows any
and all tracing when RCU isn't enabled.
Signed-off-by: Peter Zijlstra (Intel)
Acked-by: Rafael J. Wysocki
Acked-by: Frederic Weisbecker
Tested-by: Tony Lindgren
Tested-by: Ulf Hansson
---
include/linux/tracepo
All callers should still have RCU enabled.
Signed-off-by: Peter Zijlstra (Intel)
Reviewed-by: Ulf Hansson
Acked-by: Mark Rutland
Acked-by: Rafael J. Wysocki
Acked-by: Frederic Weisbecker
Tested-by: Tony Lindgren
Tested-by: Ulf Hansson
---
kernel/cpu_pm.c |9 -
1 file changed
/restore and whitelist the thing.
Signed-off-by: Peter Zijlstra (Intel)
Acked-by: Rafael J. Wysocki
Acked-by: Frederic Weisbecker
Tested-by: Tony Lindgren
Tested-by: Ulf Hansson
---
lib/ubsan.c |5 -
tools/objtool/check.c |1 +
2 files changed, 5 insertions(+), 1 deletion
No callers left that have already disabled RCU.
Signed-off-by: Peter Zijlstra (Intel)
Acked-by: Mark Rutland
Acked-by: Rafael J. Wysocki
Acked-by: Frederic Weisbecker
Tested-by: Tony Lindgren
Tested-by: Ulf Hansson
---
kernel/time/tick-broadcast-hrtimer.c | 29
Doing RCU-idle outside the driver, only to then temporarily enable it
again before going idle is daft.
Notably the cpu_pm_*() calls implicitly re-enable RCU for a bit.
Signed-off-by: Peter Zijlstra (Intel)
Reviewed-by: Frederic Weisbecker
Reviewed-by: Tony Lindgren
Acked-by: Rafael J. Wysocki
Tracing (kprobes included) and other compiler instrumentation relies
on a normal kernel runtime. Therefore all functions that disable RCU
should be noinstr, as should all functions that are called while RCU
is disabled.
Signed-off-by: Peter Zijlstra (Intel)
---
drivers/cpuidle/cpuidle.c | 37
For testing purposes.
Signed-off-by: Peter Zijlstra (Intel)
Acked-by: Rafael J. Wysocki
Acked-by: Frederic Weisbecker
Tested-by: Tony Lindgren
Tested-by: Ulf Hansson
---
drivers/idle/intel_idle.c |7 ++-
1 file changed, 6 insertions(+), 1 deletion(-)
--- a/drivers/idle/intel_idle.c
ave a cpuidle driver; but adding one would be the
recourse to (re)gain the other idle states.
Suggested-by: Tony Lindgren
Signed-off-by: Peter Zijlstra (Intel)
Acked-by: Rafael J. Wysocki
Acked-by: Frederic Weisbecker
Tested-by: Tony Lindgren
Tested-by: Ulf Hansson
---
arch/arm/mach-omap2/pm2
()
leaves .noinstr.text section
Signed-off-by: Peter Zijlstra (Intel)
Acked-by: Rafael J. Wysocki
Acked-by: Frederic Weisbecker
Tested-by: Tony Lindgren
Tested-by: Ulf Hansson
---
arch/x86/include/asm/fpu/xcr.h |4 ++--
arch/x86/include/asm/special_insns.h |2 +-
arch/x86/kernel/fpu
memcpy() leaves
.noinstr.text section
Remove the weak aliases to ensure nobody hijacks these functions and
add them to the noinstr section.
Signed-off-by: Peter Zijlstra (Intel)
Acked-by: Rafael J. Wysocki
Acked-by: Frederic Weisbecker
Tested-by: Tony Lindgren
Tested-by: Ulf Hansson
---
arch
The __cpuidle functions will become a noinstr class, as such they need
explicit annotations.
Signed-off-by: Peter Zijlstra (Intel)
Reviewed-by: Rafael J. Wysocki
Acked-by: Frederic Weisbecker
Tested-by: Tony Lindgren
Tested-by: Ulf Hansson
---
drivers/cpuidle/poll_state.c |6 +-
1
For all cpuidle drivers that use CPUIDLE_FLAG_RCU_IDLE, ensure that
all functions that call ct_cpuidle_enter() are marked __cpuidle.
( due to lack of noinstr validation on these platforms it is entirely
possible this isn't complete )
Signed-off-by: Peter Zijlstra (Intel)
---
arch/arm
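A hedged sketch of the rule being applied: everything on the path that calls ct_cpuidle_enter() carries the __cpuidle marking (the driver and my_low_power_wait() are illustrative):

    static __cpuidle int my_rcu_idle_enter(struct cpuidle_device *dev,
                                           struct cpuidle_driver *drv, int idx)
    {
            ct_cpuidle_enter();     /* RCU off: no tracing from here on */
            my_low_power_wait();    /* hypothetical; must also be __cpuidle */
            ct_cpuidle_exit();
            return idx;
    }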
Signed-off-by: Peter Zijlstra (Intel)
Acked-by: Rafael J. Wysocki
Acked-by: Frederic Weisbecker
Tested-by: Tony Lindgren
Tested-by: Ulf Hansson
---
arch/x86/include/asm/nospec-branch.h |2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- a/arch/x86/include/asm/nospec-branch.h
+++ b
Hi All!
The (hopefully) final respin of cpuidle vs rcu cleanup patches. Barring any
objections I'll be queueing these patches in tip/sched/core in the next few
days.
v2: https://lkml.kernel.org/r/20220919095939.761690...@infradead.org
These here patches clean up the mess that is cpuidle vs rcuid
OMAP was the one and only user.
Signed-off-by: Peter Zijlstra (Intel)
Reviewed-by: Ulf Hansson
Acked-by: Rafael J. Wysocki
Acked-by: Frederic Weisbecker
Tested-by: Tony Lindgren
Tested-by: Ulf Hansson
---
arch/arm/mach-omap2/powerdomain.c | 10 +-
drivers/base/power/runtime.c
For all cpuidle drivers that do not use CPUIDLE_FLAG_RCU_IDLE (iow,
the simple ones) make sure all the functions are marked __cpuidle.
( due to lack of noinstr validation on these platforms it is entirely
possible this isn't complete )
Signed-off-by: Peter Zijlstra (Intel)
---
arc
OMAP3 uses full SoC suspend modes as idle states, as such it needs the
whole power-domain and clock-domain code from the idle path.
All that code is not suitable to run with RCU disabled, as such push
RCU-idle deeper still.
Signed-off-by: Peter Zijlstra (Intel)
Reviewed-by: Tony Lindgren
Acked
OMAP was the one and only user.
Signed-off-by: Peter Zijlstra (Intel)
Reviewed-by: Ulf Hansson
Acked-by: Rafael J. Wysocki
Acked-by: Frederic Weisbecker
Tested-by: Tony Lindgren
Tested-by: Ulf Hansson
---
drivers/clk/clk.c |8
1 file changed, 4 insertions(+), 4 deletions
.noinstr.text section
Signed-off-by: Peter Zijlstra (Intel)
Acked-by: Rafael J. Wysocki
Acked-by: Frederic Weisbecker
Tested-by: Tony Lindgren
Tested-by: Ulf Hansson
---
arch/x86/boot/compressed/vmlinux.lds.S |1 +
arch/x86/coco/tdx/tdcall.S |2 ++
arch/x86/coco/tdx/tdx.c
Add a few words on noinstr / __cpuidle usage.
Signed-off-by: Peter Zijlstra (Intel)
---
drivers/cpuidle/cpuidle.c | 12
include/linux/compiler_types.h | 10 ++
2 files changed, 22 insertions(+)
--- a/drivers/cpuidle/cpuidle.c
+++ b/drivers/cpuidle/cpuidle.c
vmlinux.o: warning: objtool: __ct_user_enter+0x72: call to
__kasan_check_write() leaves .noinstr.text section
vmlinux.o: warning: objtool: __ct_user_exit+0x47: call to __kasan_check_write()
leaves .noinstr.text section
Signed-off-by: Peter Zijlstra (Intel)
---
kernel/context_tracking.c | 12