On 22/01/25 01:22, Frederic Weisbecker wrote:
> On Tue, Jan 14, 2025 at 06:51:35PM +0100, Valentin Schneider wrote:
>> ct_nmi_{enter, exit}() only touches the RCU watching counter and doesn't
>> modify the actual CT state part of context_tracking.state. This means that
>> upon receiving an IRQ while idle, the CT_STATE_IDLE -> CT_STATE_KERNEL
>> transition only happens upon ct_cpuidle_exit().
>
On 21/01/25 18:00, Uladzislau Rezki wrote:
>> >
>> > As noted before, we defer flushing for vmalloc. We have a lazy-threshold
>> > which can be exposed (if you need it) via sysfs for tuning. So, we can
>> > add it.
>> >
>>
>> In a CPU isolation / NOHZ_FULL context, isolated CPUs will be running a
On 20/01/25 12:15, Uladzislau Rezki wrote:
> On Fri, Jan 17, 2025 at 06:00:30PM +0100, Valentin Schneider wrote:
>> On 17/01/25 17:11, Uladzislau Rezki wrote:
>> > On Fri, Jan 17, 2025 at 04:25:45PM +0100, Valentin Schneider wrote:
>> >> On 14/01/25 19:16, Jann Horn wrote:
On 17/01/25 09:15, Sean Christopherson wrote:
> On Fri, Jan 17, 2025, Valentin Schneider wrote:
>> On 14/01/25 13:13, Sean Christopherson wrote:
>> > On Tue, Jan 14, 2025, Valentin Schneider wrote:
>> >> +/**
>> >> + * is_kernel_noinstr_text - checks if the pointer address is located in
>> >> the .noinstr section
On 17/01/25 17:11, Uladzislau Rezki wrote:
> On Fri, Jan 17, 2025 at 04:25:45PM +0100, Valentin Schneider wrote:
>> On 14/01/25 19:16, Jann Horn wrote:
>> > On Tue, Jan 14, 2025 at 6:51 PM Valentin Schneider
>> > wrote:
>> >> vunmap()s issued from housekeeping CPUs are a relatively common
>> >> source of interference for isolated NOHZ_FULL CPUs.
On 17/01/25 16:52, Jann Horn wrote:
> On Fri, Jan 17, 2025 at 4:25 PM Valentin Schneider
> wrote:
>> On 14/01/25 19:16, Jann Horn wrote:
>> > On Tue, Jan 14, 2025 at 6:51 PM Valentin Schneider
>> > wrote:
>> >> vunmap()s issued from housekeeping CPUs are a relatively common
>> >> source of interference for isolated NOHZ_FULL CPUs.
On 14/01/25 19:16, Jann Horn wrote:
> On Tue, Jan 14, 2025 at 6:51 PM Valentin Schneider
> wrote:
>> vunmap()s issued from housekeeping CPUs are a relatively common source of
>> interference for isolated NOHZ_FULL CPUs, as they are hit by the
>> flush_tlb_kernel_range() IPIs.
On 14/01/25 13:45, Dave Hansen wrote:
> On 1/14/25 09:51, Valentin Schneider wrote:
>> +	cr4 = this_cpu_read(cpu_tlbstate.cr4);
>> +	asm volatile("mov %0,%%cr4": : "r" (cr4 ^ X86_CR4_PGE) : "memory");
>> +	asm volatile("mov %0,%%cr4": : "r" (cr4) : "memory");
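For reference, toggling CR4.PGE is the classic way to flush every TLB entry,
global ones included; a commented sketch in the shape of mainline's
native_flush_tlb_global() (details may differ from the actual patch):

	unsigned long flags, cr4;

	/* No reentrancy while CR4 is being toggled. */
	raw_local_irq_save(flags);
	cr4 = this_cpu_read(cpu_tlbstate.cr4);
	/* Clearing CR4.PGE flushes the whole TLB, global entries included. */
	asm volatile("mov %0,%%cr4" : : "r" (cr4 ^ X86_CR4_PGE) : "memory");
	/* Writing the original value back re-enables global pages. */
	asm volatile("mov %0,%%cr4" : : "r" (cr4) : "memory");
	raw_local_irq_restore(flags);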
On 14/01/25 13:48, Sean Christopherson wrote:
> On Tue, Jan 14, 2025, Sean Christopherson wrote:
>> On Tue, Jan 14, 2025, Valentin Schneider wrote:
>> > +/**
>> > + * is_kernel_noinstr_text - checks if the pointer address is located in
>> > the .noinstr section
>>
On 14/01/25 13:19, Sean Christopherson wrote:
> Please use "KVM: VMX:" for the scope.
>
> On Tue, Jan 14, 2025, Valentin Schneider wrote:
>> Later commits will cause objtool to warn about static keys being used in
>> .noinstr sections in order to safely defer instruction patching IPIs.
On 14/01/25 13:13, Sean Christopherson wrote:
> On Tue, Jan 14, 2025, Valentin Schneider wrote:
>> text_poke_bp_batch() sends IPIs to all online CPUs to synchronize
>> them vs the newly patched instruction. CPUs that are executing in userspace
>> do not need this synchronization to happen immediately.
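The deferral this series builds towards can be sketched like so (the helper
names ct_set_cpu_work() and CT_WORK_SYNC are illustrative of the approach,
not necessarily the final API):

	static bool text_poke_sync_needs_ipi(int cpu, void *info)
	{
		/*
		 * Try to flag deferred work for @cpu: this only succeeds
		 * while that CPU runs in userspace (CT_STATE_USER). If it
		 * succeeds, the sync_core() happens on that CPU's next
		 * kernel entry and no IPI is needed now.
		 */
		return !ct_set_cpu_work(cpu, CT_WORK_SYNC);
	}

	/* Only IPI the CPUs for which the work couldn't be deferred. */
	on_each_cpu_cond(text_poke_sync_needs_ipi, do_sync_core, NULL, true);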
CT_STATE_KERNEL in the ct_state prevents queuing deferred work.
Later commits introduce the bit:callback mappings.
Link: https://lore.kernel.org/all/20210929151723.162004...@infradead.org/
Signed-off-by: Nicolas Saenz Julienne
Signed-off-by: Valentin Schneider
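A minimal sketch of the shape such a bit:callback mapping can take (enum and
function names are illustrative; the real definitions arrive in the later
commits mentioned above):

	enum ct_work {
		CT_WORK_SYNC = BIT(0),	/* sync_core() after text patching */
		CT_WORK_TLBI = BIT(1),	/* deferred kernel TLB flush */
	};

	/* Run on kernel entry, before the deferred-work bits are cleared. */
	static noinstr void ct_work_flush(unsigned long work)
	{
		if (work & CT_WORK_SYNC)
			sync_core();
		if (work & CT_WORK_TLBI)
			__flush_tlb_all();
	}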
using __native_flush_tlb_global() / native_write_cr4() and have the
ASM directly inlined in the native function. For the Xen stuff,
__always_inline a handful of helpers.
Not-signed-off-by: Peter Zijlstra
[Changelog faff]
Signed-off-by: Valentin Schneider
---
arch/x86/include/asm/invpcid.h
Peter Zijlstra (Intel)
Signed-off-by: Nicolas Saenz Julienne
Signed-off-by: Valentin Schneider
---
arch/x86/include/asm/context_tracking_work.h | 6 ++--
arch/x86/include/asm/text-patching.h | 1 +
arch/x86/kernel/alternative.c | 38
arch/x86/kernel
Signed-off-by: Valentin Schneider
---
include/linux/context_tracking_state.h | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/include/linux/context_tracking_state.h
b/include/linux/context_tracking_state.h
index 0b81248aa03e2..eb2149b20baef 100644
--- a/include/linux/context_tracking_state.h
The static call is only ever updated in
__init xen_time_setup_guest()
so mark it appropriately as __ro_after_init.
Signed-off-by: Valentin Schneider
---
arch/arm/kernel/paravirt.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/arm/kernel/paravirt.c b/arch/arm/kernel/paravirt.c
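The change itself is presumably the usual one-liner, switching to the
read-only-after-init static call helper this series introduces (a sketch,
not the verbatim hunk):

	-DEFINE_STATIC_CALL(pv_steal_clock, native_steal_clock);
	+DEFINE_STATIC_CALL_RO(pv_steal_clock, native_steal_clock);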
[...] CT_STATE_IDLE
[...] /!\ CT_STATE_IDLE here while we're really in kernelspace! /!\
ct_cpuidle_exit()
    state = state + CT_STATE_KERNEL - CT_STATE_IDLE
Signed-off-by: Valentin Schneider
---
kernel/context_tracking.c | 22 +++++++++++++++++++---
1 file changed, 19 insertions(+), 3 deletions(-)
diff --git a/kernel/context_tracking.c b/kernel/context_tracking.c
/performance), but the backing
mechanism is identical.
Add a default-no option to enable IPI deferral with NO_HZ_IDLE.
Signed-off-by: Valentin Schneider
---
kernel/time/Kconfig | 16 +++++++++++++++-
1 file changed, 15 insertions(+), 1 deletion(-)
diff --git a/kernel/time/Kconfig b/kernel/time/Kconfig
including user mappings), this only happens when reaching an
invalidation range threshold where it is cheaper to do a full flush than to
individually invalidate each page in the range via INVLPG. IOW, it doesn't
*require* invalidating user mappings, and thus remains safe to defer until
a later kernel entry.
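For context, mainline's flush_tlb_kernel_range() implements that threshold
roughly as follows (simplified; the flush_tlb_info setup describing the
[start, end) range is elided):

	/* Past the ceiling, one full flush beats per-page INVLPGs. */
	if (end == TLB_FLUSH_ALL ||
	    (end - start) > tlb_single_page_flush_ceiling << PAGE_SHIFT) {
		/* Full flush, global entries included. */
		on_each_cpu(do_flush_tlb_all, NULL, 1);
	} else {
		/* INVLPG each page of the range on every CPU. */
		on_each_cpu(do_kernel_range_flush, &info, 1);
	}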
Later patches will require issuing a __flush_tlb_all() from noinstr code.
Both __flush_tlb_local() and __flush_tlb_global() are now
noinstr-compliant, so __flush_tlb_all() can be made noinstr itself.
Signed-off-by: Valentin Schneider
---
arch/x86/include/asm/tlbflush.h | 2 +-
arch/x86/mm/tlb.c
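The end state is presumably mainline's helper with the added qualifier (a
sketch):

	noinstr void __flush_tlb_all(void)
	{
		if (cpu_feature_enabled(X86_FEATURE_PGE))
			/* Flushes global entries too (INVPCID or CR4.PGE toggle). */
			__flush_tlb_global();
		else
			flush_tlb_local();
	}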
Simply __always_inline'ing
invalidate_user_asid() gets us there.
Signed-off-by: Valentin Schneider
---
arch/x86/include/asm/paravirt.h | 2 +-
arch/x86/mm/tlb.c | 4 ++--
2 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
sched_clock_running is only ever enabled in the __init functions
sched_clock_init() and sched_clock_init_late(), and is never disabled. Mark
it __ro_after_init.
Signed-off-by: Valentin Schneider
---
kernel/sched/clock.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched/clock.c b/kernel/sched/clock.c
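Presumably the usual one-liner, using the read-only static key initializer
that already exists in mainline (a sketch, not the verbatim hunk):

	-static DEFINE_STATIC_KEY_FALSE(sched_clock_running);
	+static DEFINE_STATIC_KEY_FALSE_RO(sched_clock_running);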
text_poke_sync()
IPI has little benefit for this key, as NOHZ_FULL is incompatible with an
unstable TSC anyway.
Mark it to let objtool know not to warn about it.
Signed-off-by: Valentin Schneider
---
kernel/sched/clock.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/kernel/sched/clock.c b/kernel/sched/clock.c
Signed-off-by: Valentin Schneider
---
kernel/context_tracking.c | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/kernel/context_tracking.c b/kernel/context_tracking.c
index 938c48952d265..a61498a8425e2 100644
--- a/kernel/context_tracking.c
+++ b/kernel/context_tracking.c
@@ -31,7
Signed-off-by: Valentin Schneider
---
include/linux/jump_label.h | 17 +++--
include/linux/objtool.h | 7 ++
include/linux/static_call.h | 3 +
tools/objtool/Documentation/objtool.txt | 34 +
tools/objtool/check.c | 92
It is not expected that it
will be flipped during latency-sensitive operations, and thus shouldn't be
a source of interference wrt the text patching IPI.
Mark it to let objtool know not to warn about it.
Reported-by: Josh Poimboeuf
Signed-off-by: Valentin Schneider
---
kernel/stackleak.c
It is not expected that they will be
flipped during latency-sensitive operations, and thus shouldn't be a source
of interference wrt the text patching IPI.
Mark it to let objtool know not to warn about it.
Reported-by: Josh Poimboeuf
Signed-off-by: Valentin Schneider
---
arch/x86/kvm/vmx/vmx.c | 11 +
little
benefit for this key, as hotplug implies eventually going through
takedown_cpu() -> stop_machine_cpuslocked() which is going to cause
interference on all online CPUs anyway.
Mark it to let objtool know not to warn about it.
Signed-off-by: Valentin Schneider
---
arch/x86/kernel/
The static call is only ever updated in
__init pv_time_init()
__init xen_time_setup_guest()
so mark it appropriately as __ro_after_init.
Signed-off-by: Valentin Schneider
---
arch/arm64/kernel/paravirt.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/arm64/kernel/paravirt.c b/arch/arm64/kernel/paravirt.c
Signed-off-by: Valentin Schneider
Acked-by: Josh Poimboeuf
---
tools/objtool/check.c | 14 ++++++++------
1 file changed, 8 insertions(+), 6 deletions(-)
diff --git a/tools/objtool/check.c b/tools/objtool/check.c
index 76060da755b5c..b35763f05a548 100644
--- a/tools/objtool/check.c
+++ b/tools/objtool/check.c
o kvm_sched_clock_init() <- __init kvmclock_init()
o hv_setup_sched_clock() <- __init hv_init_tsc_clocksource()
IOW purely init context, and can thus be marked as __ro_after_init.
Reported-by: Josh Poimboeuf
Signed-off-by: Valentin Schneider
---
arch/x86/kernel/paravirt.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
I had to look into objtool itself to understand what this warning was
about; make it more explicit.
Signed-off-by: Valentin Schneider
Acked-by: Josh Poimboeuf
---
tools/objtool/check.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/objtool/check.c b/tools/objtool/check.c
The static call is only ever updated in
__init pv_time_init()
__init xen_time_setup_guest()
so mark it appropriately as __ro_after_init.
Signed-off-by: Valentin Schneider
---
arch/loongarch/kernel/paravirt.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/loongarch/kernel/paravirt.c b/arch/loongarch/kernel/paravirt.c
The static call is only ever updated in
__init pv_time_init()
__init xen_init_time_common()
__init vmware_paravirt_ops_setup()
__init xen_time_setup_guest()
so mark it appropriately as __ro_after_init.
Signed-off-by: Valentin Schneider
---
arch/x86/kernel/paravirt.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
RCU_EXPERT. While at it, add a comment to
explain the layout of context_tracking->state.
Link: http://lore.kernel.org/r/4c2cb573-168f-4806-b1d9-164e8276e66a@paulmck-laptop
Suggested-by: Paul E. McKenney
Signed-off-by: Valentin Schneider
Reviewed-by: Paul E. McKenney
---
include/linux/context_tracking_state.h
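The layout being documented is, in rough ASCII form (bit widths illustrative):

	/*
	 * Rough layout of context_tracking.state:
	 *
	 *      MSB                                 LSB
	 *      +----------------------------+-------+
	 *      |    RCU watching counter    | state |
	 *      +----------------------------+-------+
	 *
	 * The low bits hold one of the CT_STATE_* values (KERNEL, IDLE,
	 * USER, GUEST); the remaining bits form the RCU watching counter,
	 * whose width the new RCU_EXPERT option makes configurable.
	 */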
From: Josh Poimboeuf
Deferring a code patching IPI is unsafe if the patched code is in a
noinstr region. In that case the text poke code must trigger an
immediate IPI to all CPUs, which can rudely interrupt an isolated NO_HZ
CPU running in userspace.
Some noinstr static branches may really need to be patched at runtime,
despite the resulting disruption. Add DEFINE_STATIC_KEY_*_NOINSTR() variants
for those.
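Judging by the later KVM patch, usage would look something like this (the
_NOINSTR variant is what this series proposes, not a mainline macro;
vmx_l1d_should_flush is one of the keys it gets applied to):

	/*
	 * Flipping this key IPIs all CPUs immediately, .noinstr or not;
	 * the annotation tells objtool the disruption is accepted.
	 */
	static DEFINE_STATIC_KEY_FALSE_NOINSTR(vmx_l1d_should_flush);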
__ro_after_init.
Reported-by: Josh Poimboeuf
Signed-off-by: Valentin Schneider
---
arch/x86/events/amd/brs.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/events/amd/brs.c b/arch/x86/events/amd/brs.c
index 780acd3dff22a..e2ff03af15d82 100644
--- a/arch/x86/events/amd
init context, and can thus be marked as __ro_after_init.
Reported-by: Josh Poimboeuf
Signed-off-by: Valentin Schneider
---
arch/x86/kernel/process.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index f63f8fd00a
Link: http://lore.kernel.org/r/4c2cb573-168f-4806-b1d9-164e8276e66a@paulmck-laptop
Suggested-by: Paul E. McKenney
Signed-off-by: Valentin Schneider
Reviewed-by: Paul E. McKenney
---
tools/testing/selftests/rcutorture/configs/rcu/TREE04 | 1 +
1 file changed, 1 insertion(+)
diff --git a/tools/testing/selftests/rcutorture/configs/rcu/TREE04
b/tools/testing/selftests/rcutorture/configs/rcu/TREE04
The static call is only ever updated in:
__init pv_time_init()
__init xen_time_setup_guest()
so mark it appropriately as __ro_after_init.
Signed-off-by: Valentin Schneider
---
arch/riscv/kernel/paravirt.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/riscv/kernel/paravirt.c b/arch/riscv/kernel/paravirt.c
[...] (Peter, Frederic)
o Added an RCU_EXPERT config for the RCU dynticks counter size, and added an
rcutorture case for a low-size counter (Paul)
o Fixed flush_tlb_kernel_range_deferrable() definition
Josh Poimboeuf (3):
jump_label: Add annotations for validating noinstr usage
static_call: Add read-only-after-init static calls
From: Josh Poimboeuf
Deferring a code patching IPI is unsafe if the patched code is in a
noinstr region. In that case the text poke code must trigger an
immediate IPI to all CPUs, which can rudely interrupt an isolated NO_HZ
CPU running in userspace.
If a noinstr static call only needs to be patched during boot, its key can
be made read-only after init.
On 24/11/21 00:19, Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
wrote:
>> -----Original Message-----
>> From: Valentin Schneider [mailto:valentin.schnei...@arm.com]
>> For my own education, this is talking about *host* CPU hotplug, right?
>>
>
> It
oke
> cpu_initialize_context() again should it have failed earlier. I *think*
> this is okay and would allow bringing up the CPU again should the memory
> allocation in cpu_initialize_context() fail.
Virt stuff notwithstanding, that looks OK to me.
Reviewed-by: Valentin Schneider
That said, AF