> On 20-Oct-2022, at 8:10 PM, Peter Zijlstra wrote:
>
> On Thu, Oct 20, 2022 at 12:36:56PM +0530, Athira Rajeev wrote:
>> commit 838d9bb62d13 ("perf: Use sample_flags for raw_data")
>> added a check for PERF_SAMPLE_RAW in sample_flags in
>> perf_prepare_sample(). But while copying the sample in
On 19/10/2022 at 08:34, ruanjinjie wrote:
> When building the Linux kernel, I encountered the following warnings:
>
> ./arch/powerpc/sysdev/mpic_msgr.c:230:38: warning: cast removes address space
> '__iomem' of expression
> ./arch/powerpc/sysdev/mpic_msgr.c:230:27: warning: incorrect type in
> assignment
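The usual fix for this class of sparse warning is to carry the __iomem
address-space annotation through the pointer arithmetic instead of casting
it away. A minimal sketch, assuming illustrative names and an illustrative
offset rather than the actual mpic_msgr.c code:

/* Before: sparse warns because the cast drops __iomem */
msgr->mer = (u32 *)((u32)msgr->base + MER_OFFSET);

/* After: keep __iomem on every intermediate pointer type */
msgr->mer = (u32 __iomem *)((u8 __iomem *)msgr->base + MER_OFFSET);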
On 20/10/2022 at 19:29, Naveen N. Rao wrote:
> Many of these headers are not necessary since those are included
> indirectly, or the code using those headers has been removed.
It is usually not a good idea to omit headers just because they are
already included indirectly. If one day for some
> On 18-Oct-2022, at 2:26 PM, Athira Rajeev wrote:
>
> Perf stat with the CSV output option prints an extra empty
> string as the first field in the metrics output line.
> Sample output below:
>
> # ./perf stat -x, --per-socket -a -C 1 ls
> S0,1,1.78,msec,cpu-clock,1785146,100.00,0.973,CPUs u
> On 18-Oct-2022, at 2:26 PM, Athira Rajeev wrote:
>
> In perf stat with the CSV output option, the number of fields
> in the metrics output does not match the number of fields
> in other event output lines.
>
> Sample output below after applying patch to fix
> printing os->prefix.
>
> # ./perf
On 24/10/2022 at 06:33, Russell Currey wrote:
> On Sun, 2022-10-23 at 20:44 +0800, KaiLong Wang wrote:
>> Fix the following coccicheck warning:
>>
>> arch/powerpc/xmon/xmon.c:2987: WARNING opportunity for min()
>> arch/powerpc/xmon/xmon.c:2583: WARNING opportunity for min()
>>
>> Signed-off-by:
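For context, the rewrite coccicheck suggests looks roughly like the
following; names are illustrative, not the actual xmon.c lines. Note that
min() requires both operands to have the same type, otherwise min_t() must
be used:

#include <linux/minmax.h>

static size_t clamp_len(size_t count, size_t size)
{
        /* open-coded form the warning points at: */
        /* return (count < size) ? count : size; */

        /* equivalent, using the helper: */
        return min(count, size);
}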
On 25/10/2022 at 06:44, Benjamin Gray wrote:
> Detect and abort __do_patch_instruction() when there is no text_poke_area,
> which implies there is no patching address. This allows patch_instruction()
> to fail gracefully and let the caller decide what to do, as opposed to
> the current behaviour
On 25/10/2022 at 06:44, Benjamin Gray wrote:
> BUG_ON() when failing to initialise the code patching window is
> excessive, as most critical patching happens during boot before strict
> RWX control is enabled. Failure to patch after boot is not inherently
> fatal, so aborting the kernel is bett
On 25/10/2022 at 06:44, Benjamin Gray wrote:
> Verifies that if the instruction patching did not return an error then
> the value stored at the given address to patch is now equal to the
> instruction we patched it to.
Why do we need that verification? Until now it wasn't necessary, can
you
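For reference, a minimal sketch of what such a readback verification could
look like, assuming powerpc's ppc_inst_read()/ppc_inst_equal() helpers;
this is illustrative, not the patch itself:

/* inside __do_patch_instruction(), roughly: */
err = __patch_instruction(addr, instr, patch_addr);

/* Only report success if the new instruction is actually in place. */
if (!err && !ppc_inst_equal(ppc_inst_read(addr), instr))
        err = -EINVAL;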
On 25/10/2022 at 06:44, Benjamin Gray wrote:
> Adds a local TLB flush operation that works given an mm_struct, VA to
> flush, and page size representation. Most implementations mirror the
> surrounding code. The book3s/32/tlbflush.h implementation is left as
> a WARN_ONCE_ON because it is more
On 25/10/2022 at 06:44, Benjamin Gray wrote:
> From: "Christopher M. Riedl"
>
> x86 supports the notion of a temporary mm which restricts access to
> temporary PTEs to a single CPU. A temporary mm is useful for situations
> where a CPU needs to perform sensitive operations (such as patching a
On 02/11/2022 at 10:43, Christophe Leroy wrote:
On 25/10/2022 at 06:44, Benjamin Gray wrote:
Verifies that if the instruction patching did not return an error then
the value stored at the given address to patch is now equal to the
instruction we patched it to.
Why do we need that verification
On 25/10/2022 at 06:44, Benjamin Gray wrote:
> With the temp mm context support, there are CPU local variables to hold
> the patch address and pte. Use these in the non-temp mm path as well
> instead of adding a level of indirection through the text_poke_area
> vm_struct and pointer chasing the
On Fri, Oct 21, 2022 at 10:01:34PM +0300, Andy Shevchenko wrote:
> On Wed, Oct 05, 2022 at 06:29:45PM +0300, Andy Shevchenko wrote:
> > One more user outside of GPIO library and pin control folders needs
> > to be updated to use fwnode instead of of_node. To make this easier
> > introduce a helper
On Tue, Oct 18, 2022 at 03:40:14PM +0800, Kefeng Wang wrote:
> Most architectures (except arm64/x86/sparc) simply return 1 for
> kern_addr_valid(), which is only used in read_kcore(), and it
> calls copy_from_kernel_nofault() which could check whether the
> address is a valid kernel address, so no n
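A minimal sketch of the point being made: copy_from_kernel_nofault()
already fails safely on a bad kernel address, so a separate
kern_addr_valid() check buys nothing. The helper below is illustrative:

#include <linux/uaccess.h>

static bool kernel_addr_readable(const void *addr)
{
        char buf[64];

        /* Returns a negative error instead of faulting on a bad address. */
        return copy_from_kernel_nofault(buf, addr, sizeof(buf)) == 0;
}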
On 28/10/2022 at 16:33, Sathvika Vasireddy wrote:
> In a subsequent patch, we would want to annotate powerpc assembly functions
> with SYM_FUNC_START_LOCAL macro. This macro depends on __ALIGN macro.
>
> The default expansion of __ALIGN macro is:
> #define __ALIGN .align 4,0x90
>
On 28/10/2022 at 16:33, Sathvika Vasireddy wrote:
> This patchset enables and implements objtool --mcount
> option on powerpc. This applies atop powerpc/merge branch.
>
> Changelog:
>
>
> v5:
>
> * Patch 02/16 - Add Reviewed-by tag from Christophe Leroy
>
> * Patch 03/16 - Fix merge co
On 28/10/2022 at 16:33, Sathvika Vasireddy wrote:
> From: Christophe Leroy
>
> Fix several annotations in assembly files on PPC32.
>
> Tested-by: Naveen N. Rao
> Reviewed-by: Naveen N. Rao
> Acked-by: Josh Poimboeuf
> Signed-off-by: Christophe Leroy
> [Sathvika Vasireddy: Changed subject
On 29/10/2022 at 14:26, Yang Yingliang wrote:
> The OF node returned by of_find_compatible_node() or for_each_child_of_node()
> comes with its refcount incremented, so of_node_put() needs to be called
> after using it to avoid a refcount leak.
Is it necessary to do of_node_put() so often? Can't it be done
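For illustration, the pattern the patch enforces; the compatible string is
made up:

#include <linux/of.h>

static void example_lookup(void)
{
        struct device_node *np;

        np = of_find_compatible_node(NULL, NULL, "vendor,example-device");
        if (np) {
                /* ... use np ... */
                of_node_put(np);  /* balance the reference the lookup took */
        }
}

for_each_child_of_node() drops the previous child's reference on each
iteration itself, but code that breaks out of the loop early must call
of_node_put() on the current child.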
On 31/10/2022 at 01:45, chenlifu wrote:
>
> On 2022/8/19 21:06, Chen Lifu wrote:
>> 1. ppc_override_l2cr and ppc_override_l2cr_value are
On 2022/11/2 21:45, Christophe Leroy wrote:
On 29/10/2022 at 14:26, Yang Yingliang wrote:
The OF node returned by of_find_compatible_node() or for_each_child_of_node()
comes with its refcount incremented, so of_node_put() needs to be called after
using it to avoid a refcount leak.
Is that necessary to do of_
On 01/11/2022 at 02:54, Bo Liu wrote:
>
> The current code provokes some kernel-doc warnings:
> arch/powerpc/kernel/process.c:1
On 01/11/2022 at 23:12, Pali Rohár wrote:
> On Sunday 09 October 2022 13:06:52 Pali Rohár wrote:
>> On Monday 29 August 2022 10:54:51 Pali Rohár wrote:
>>> On Sunday 28 August 2022 17:43:53 Christophe Leroy wrote:
On 28/08/2022 at 19:41, Pali Rohár wrote:
> On Sunday 28 August 2022 1
On Fri, Oct 28, 2022 at 6:14 PM Luck, Tony wrote:
>
> >> +vfrom = kmap_local_page(from);
> >> +vto = kmap_local_page(to);
> >> +ret = copy_mc_to_kernel(vto, vfrom, PAGE_SIZE);
> >
> > In copy_user_highpage(), kmsan_unpoison_memory(page_address(to), PAGE_SIZE)
> > is done after the copy
On Wed, Nov 2, 2022 at 3:27 PM Alexander Potapenko wrote:
>
> On Fri, Oct 28, 2022 at 6:14 PM Luck, Tony wrote:
> >
> > >> +vfrom = kmap_local_page(from);
> > >> +vto = kmap_local_page(to);
> > >> +ret = copy_mc_to_kernel(vto, vfrom, PAGE_SIZE);
> > >
> > > In copy_user_highpage(), km
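Putting the quoted fragments together, a hedged sketch of the pattern
under discussion; whether and where the KMSAN call belongs is exactly what
the thread is debating, and the function name here is illustrative:

static int copy_mc_highpage(struct page *to, struct page *from)
{
        void *vfrom, *vto;
        unsigned long ret;

        vfrom = kmap_local_page(from);
        vto = kmap_local_page(to);
        ret = copy_mc_to_kernel(vto, vfrom, PAGE_SIZE);
        if (!ret)
                /* mark the destination initialized only on success */
                kmsan_unpoison_memory(page_address(to), PAGE_SIZE);
        kunmap_local(vto);
        kunmap_local(vfrom);

        return ret ? -EFAULT : 0;
}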
Background
==========
Detecting IPI *reception* is relatively easy, e.g. using
trace_irq_handler_{entry,exit} or even just function-trace
flush_smp_call_function_queue() for SMP calls.
Figuring out their *origin* is trickier, as there is no generic tracepoint tied
to e.g. smp_call_function():
From: "Steven Rostedt (Google)"
The trace events have a __bitmask field that can be used for anything
that requires bitmasks. Although currently it is only used for CPU
masks, it could be used in the future for any type of bitmasks.
There is some user space tooling that wants to know if a field
trace_ipi_raise() is unsuitable for generically tracing IPI sources due to
its "reason" argument being an uninformative string (on arm64 all you get
is "Function call interrupts" for SMP calls).
Add a variant of it that exports a target CPU, a callsite and a
callback.
Signed-off-by: Valen
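A hedged sketch of what such a tracepoint definition can look like, built
on the __bitmask field machinery mentioned above; field names are
illustrative and need not match the series' exact definition:

TRACE_EVENT(ipi_send_cpumask,

        TP_PROTO(const struct cpumask *cpumask, unsigned long callsite,
                 void *callback),

        TP_ARGS(cpumask, callsite, callback),

        TP_STRUCT__entry(
                __bitmask(cpumask, num_possible_cpus())
                __field(void *, callsite)
                __field(void *, callback)
        ),

        TP_fast_assign(
                __assign_bitmask(cpumask, cpumask_bits(cpumask),
                                 num_possible_cpus());
                __entry->callsite = (void *)callsite;
                __entry->callback = callback;
        ),

        TP_printk("cpumask=%s callsite=%pS callback=%pS",
                  __get_bitmask(cpumask), __entry->callsite,
                  __entry->callback)
);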
send_call_function_single_ipi() is the thing that sends IPIs at the bottom
of smp_call_function*() via either generic_exec_single() or
smp_call_function_many_cond(). Give it an IPI-related tracepoint.
Note that this ends up tracing any IPI sent via __smp_call_single_queue(),
which covers __ttwu_qu
This simply wraps around the arch function and prepends it with a
tracepoint, similar to send_call_function_single_ipi().
Signed-off-by: Valentin Schneider
---
kernel/smp.c | 9 ++++++++-
1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/kernel/smp.c b/kernel/smp.c
index e2ca1e2f31274
IPIs sent to remote CPUs via irq_work_queue_on() are now covered by
trace_ipi_send_cpumask(), add another instance of the tracepoint to cover
self-IPIs.
Signed-off-by: Valentin Schneider
---
kernel/irq_work.c | 14 +++++++++++++-
1 file changed, 13 insertions(+), 1 deletion(-)
diff --git a/kern
To be able to trace invocations of smp_send_reschedule(), rename the
arch-specific definitions of it to arch_smp_send_reschedule() and wrap it
into an smp_send_reschedule() that contains a tracepoint.
Signed-off-by: Valentin Schneider
[csky bits]
Acked-by: Guo Ren
---
arch/alpha/kernel/smp.c
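A minimal sketch of the wrapping described above, assuming the
trace_ipi_send_cpumask tracepoint introduced earlier in the series:

/* arch code: the old smp_send_reschedule(), renamed */
void arch_smp_send_reschedule(int cpu);

/* generic code: the common entry point gains the tracepoint */
void smp_send_reschedule(int cpu)
{
        trace_ipi_send_cpumask(cpumask_of(cpu), _RET_IP_, NULL);
        arch_smp_send_reschedule(cpu);
}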
Accessing the call_single_queue hasn't involved a spinlock since 2014:
6897fc22ea01 ("kernel: use lockless list for smp_call_function_single")
The llist operations (namely cmpxchg() and xchg()) provide similar ordering
guarantees, update the comment to lessen confusion.
Signed-off-by: Valentin
The newly-introduced ipi_send_cpumask tracepoint has a "callback" parameter
which so far has only been fed with NULL.
While CSD_TYPE_SYNC/ASYNC and CSD_TYPE_IRQ_WORK share a similar backing
struct layout (meaning their callback func can be accessed without caring
about the actual CSD type), CSD_TY
On Mon, Oct 31, 2022 at 11:03:27AM +0100, Andrew Jones wrote:
> Currently (after the revert of 78e5a3399421)
After the revert?
That commit is still in the latest Linus tree.
> with DEBUG_PER_CPU_MAPS we'll get a warning splat when the cpu is
> outside the range [-1, nr_cpu_ids)
Yah, that range
This series is based on mm-unstable.
As discussed in my talk at LPC, we can reuse the same mechanism for
deciding whether to map a pte writable when upgrading permissions via
mprotect() -- e.g., PROT_READ -> PROT_READ|PROT_WRITE -- to replace the
savedwrite infrastructure used for NUMA hinting faults
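For concreteness, the userspace pattern the series targets; a hedged
sketch, not code from the cover letter:

#include <sys/mman.h>

int main(void)
{
        char *p = mmap(NULL, 4096, PROT_READ,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        /* Upgrade R -> RW: the kernel decides whether the PTE can be
         * made writable immediately or only on a later write fault. */
        mprotect(p, 4096, PROT_READ | PROT_WRITE);
        return 0;
}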
From: Nadav Amit
Anonymous pages might have the dirty bit clear, but this should not
prevent mprotect from making them writable if they are exclusive.
Therefore, skip the test whether the page is dirty in this case.
Note that there are already other ways to get a writable PTE mapping an
anonymous
We want to replicate this code for handling PMDs soon.
(1) No need to crash the kernel; warning and rejecting is good enough. As
this will no longer get optimized out, drop the pte_write() check: no
harm would be done.
(2) Add a comment why PROT_NONE mapped pages are excluded.
(3) Add a
Let's replicate what we have for PTEs in can_change_pte_writable() also
for PMDs.
While this might look like a pure performance improvement, we'll use this to
get rid of savedwrite handling in do_huge_pmd_numa_page() next. Place
do_huge_pmd_numa_page() strategically for that purpose.
Note tha
commit b191f9b106ea ("mm: numa: preserve PTE write permissions across a
NUMA hinting fault") added remembering write permissions using ordinary
pte_write() for PROT_NONE mapped pages to avoid write faults when
remapping the page !PROT_NONE on NUMA hinting faults.
That commit noted:
The patch
NUMA hinting no longer uses savedwrite, let's rip it out.
... and while at it, drop __pte_write() and __pmd_write() on ppc64.
Signed-off-by: David Hildenbrand
---
arch/powerpc/include/asm/book3s/64/pgtable.h | 80 +---
arch/powerpc/kvm/book3s_hv_rm_mmu.c | 2 +-
includ
Let's extend the test to cover the possible mprotect() optimization when
removing write-protection. mprotect() must not allow write-access to a
COW-shared page by accident.
Signed-off-by: David Hildenbrand
---
tools/testing/selftests/vm/anon_cow.c | 49 +--
1 file changed
On Nov 2, 2022, at 12:12 PM, David Hildenbrand wrote:
>
> commit b191f9b106ea ("mm: numa: preserve PTE write permissions across a
> NUMA hinting fault") added remembering write permissions using ordinary
> pte_write() for PROT_NONE mapped pages to avoid write faults when
> re
On Wed, 2022-11-02 at 09:36 +, Christophe Leroy wrote:
> On 25/10/2022 at 06:44, Benjamin Gray wrote:
> > Detect and abort __do_patch_instruction() when there is no
> > text_poke_area,
> > which implies there is no patching address. This allows
> > patch_instruction()
> > to fail gracefully a
On Wed, 2022-11-02 at 09:38 +, Christophe Leroy wrote:
> On 25/10/2022 at 06:44, Benjamin Gray wrote:
> > diff --git a/arch/powerpc/lib/code-patching.c
> > b/arch/powerpc/lib/code-patching.c
> > index 54e145247643..3b3b09d5d2e1 100644
> > --- a/arch/powerpc/lib/code-patching.c
> > +++ b/arch/
Zero user state in GPRs (assign to zero) to reduce the influence of user
registers on speculation within kernel syscall handlers. Clears occur
at the very beginning of the sc and scv 0 interrupt handlers, with
restores occurring following the execution of the syscall handler.
Zero GPRs r0, r2-r11,
Add Kconfig option for enabling clearing of registers on arrival in an
interrupt handler. This reduces the speculation influence of registers
on kernel internals. The option will be consumed by 64-bit systems that
feature speculation and wish to implement this mitigation.
This patch only introduce
Zero GPRS r14-r31 on entry into the kernel for interrupt sources to
limit influence of user-space values in potential speculation gadgets.
Prior to this commit, all other GPRs are reassigned during the common
prologue to interrupt handlers and so need not be zeroised explicitly.
This may be done s
Cause pseries platforms to default to zeroising all potentially user-defined
registers when entering the kernel by means of any interrupt source,
reducing user-influence of the kernel and the likelihood of producing
speculation gadgets. Interrupt sources include syscalls.
Signed-off-by: Rohan McLu
On Wed, 2022-11-02 at 09:43 +, Christophe Leroy wrote:
> On 25/10/2022 at 06:44, Benjamin Gray wrote:
> > Verifies that if the instruction patching did not return an error
> > then
> > the value stored at the given address to patch is now equal to the
> > instruction we patched it to.
>
> Wh
On Wed, 2022-11-02 at 11:13 +0100, Christophe Leroy wrote:
> On 02/11/2022 at 10:43, Christophe Leroy wrote:
> > On 25/10/2022 at 06:44, Benjamin Gray wrote:
> > > Verifies that if the instruction patching did not return an error
> > > then
> > > the value stored at the given address to patch i
Non-x86 folks, please test on hardware when possible. I made a _lot_ of
mistakes when moving code around. Thankfully, x86 was the trickiest code
to deal with, and I'm fairly confident that I found all the bugs I
introduced via testing. But the number of mistakes I made and found on
x86 makes me
Register /dev/kvm, i.e. expose KVM to userspace, only after all other
setup has completed. Once /dev/kvm is exposed, userspace can start
invoking KVM ioctls, creating VMs, etc... If userspace creates a VM
before KVM is done with its configuration, bad things may happen, e.g.
KVM will fail to prop
Move initialization of KVM's IRQ FD workqueue below arch hardware setup
as a step towards consolidating arch "init" and "hardware setup", and
eventually towards dropping the hooks entirely. There is no dependency
on the workqueue being created before hardware setup; the workqueue is
used only when
Allocate cpus_hardware_enabled after arch hardware setup so that arch
"init" and "hardware setup" are called back-to-back and thus can be
combined in a future patch. cpus_hardware_enabled is never used before
kvm_create_vm(), i.e. doesn't have a dependency on hardware setup and
only needs to be
Move the call to kvm_vfio_ops_exit() further up kvm_exit() to try and
bring some amount of symmetry to the setup order in kvm_init(), and more
importantly so that the arch hooks are invoked dead last by kvm_exit().
This will allow arch code to move away from the arch hooks without any
change in ord
In preparation for folding kvm_arch_hardware_setup() into kvm_arch_init(),
unwind initialization one step at a time instead of simply calling
kvm_arch_exit(). Using kvm_arch_exit() regardless of which initialization
step failed relies on all affected state playing nice with being undone
even if sa
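A generic sketch of the unwinding style described, with hypothetical step
names rather than KVM's actual functions:

static int setup_everything(void)
{
        int r;

        r = init_step_one();
        if (r)
                return r;

        r = init_step_two();
        if (r)
                goto out_undo_one;

        r = init_step_three();
        if (r)
                goto out_undo_two;

        return 0;

out_undo_two:
        undo_step_two();
out_undo_one:
        undo_step_one();
        return r;
}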
Now that kvm_arch_hardware_setup() is called immediately after
kvm_arch_init(), fold the guts of kvm_arch_hardware_(un)setup() into
kvm_arch_{init,exit}() as a step towards dropping one of the hooks.
No functional change intended.
Signed-off-by: Sean Christopherson
---
arch/s390/kvm/kvm-s390.c
Move kvm_arch_init()'s call to kvm_timer_init() down a few lines below
the XCR0 configuration code. A future patch will move hardware setup
into kvm_arch_init() and slot in vendor hardware setup before the call
to kvm_timer_init() so that timer initialization (among other stuff)
doesn't need to be
Now that kvm_arch_hardware_setup() is called immediately after
kvm_arch_init(), fold the guts of kvm_arch_hardware_(un)setup() into
kvm_arch_{init,exit}() as a step towards dropping one of the hooks.
To avoid having to unwind various setup, e.g. registration of several
notifiers, slot in the vendor
Drop kvm_arch_hardware_setup() and kvm_arch_hardware_unsetup() now that
all implementations are nops.
No functional change intended.
Signed-off-by: Sean Christopherson
---
arch/arm64/include/asm/kvm_host.h | 1 -
arch/arm64/kvm/arm.c | 5 -
arch/mips/include/asm/kvm_host.
To make it obvious that KVM doesn't have a lurking bug, clean up eVMCS
enabling if kvm_init() fails even though the enabling doesn't strictly
need to be unwound. eVMCS enabling only toggles values that are fully
contained in the VMX module, i.e. it's technically ok to leave the values
as-is since t
Move the guts of kvm_arch_init() to a new helper, kvm_x86_vendor_init(),
so that VMX can do _all_ arch and vendor initialization before calling
kvm_init(). Calling kvm_init() must be the _very_ last step during init,
as kvm_init() exposes /dev/kvm to userspace, i.e. allows creating VMs.
No functi
Call kvm_init() only after _all_ setup is complete, as kvm_init() exposes
/dev/kvm to userspace and thus allows userspace to create VMs (and call
other ioctls). E.g. KVM will encounter a NULL pointer when attempting to
add a vCPU to the per-CPU loaded_vmcss_on_cpu list if userspace is able to
crea
Acquire a new mutex, vendor_module_lock, in kvm_x86_vendor_init() while
doing hardware setup to ensure that concurrent calls are fully serialized.
KVM rejects attempts to load vendor modules if a different module has
already been loaded, but doesn't handle the case where multiple vendor
modules are
From: Marc Zyngier
For a number of historical reasons, the KVM/arm64 hotplug setup is pretty
complicated, and we have two extra CPUHP notifiers for vGIC and timers.
It looks pretty pointless, and gets in the way of further changes.
So let's just expose some helpers that can be called from the co
Teardown hypervisor mode if vector slot setup fails in order to avoid
leaking any allocations done by init_hyp_mode().
Fixes: b881cdce77b4 ("KVM: arm64: Allocate hyp vectors statically")
Signed-off-by: Sean Christopherson
---
arch/arm64/kvm/arm.c | 15 ---
1 file changed, 8 insertion
Undo everything done by init_subsystems() if a later initialization step
fails, i.e. unregister perf callbacks in addition to unregistering the
power management notifier.
Fixes: bfa79a805454 ("KVM: arm64: Elevate hypervisor mappings creation at EL2")
Signed-off-by: Sean Christopherson
---
arch/a
Move arm/arch specific initialization directly into arm's module_init(),
now called kvm_arm_init(), instead of bouncing through kvm_init() to
reach kvm_arch_init(). Invoking kvm_arch_init() is the very first action
performed by kvm_init(), i.e. this is a glorified nop.
Making kvm_arch_init() a nop
Tag kvm_arm_init() and its unique helper as __init, and tag data that is
only ever modified under the kvm_arm_init() umbrella as read-only after
init.
Opportunistically name the boolean param in kvm_timer_hyp_init()'s
prototype to match its definition.
Signed-off-by: Sean Christopherson
---
arc
Now that KVM no longer supports trap-and-emulate (see commit 45c7e8af4a5e
"MIPS: Remove KVM_TE support"), hardcode the MIPS callbacks to the
virtualization callbacks.
Hardcoding the callbacks eliminates the technically-unnecessary check on
non-NULL kvm_mips_callbacks in kvm_arch_init(). MIPS has n
Invoke kvm_mips_emulation_init() directly from kvm_mips_init() instead
of bouncing through kvm_init()=>kvm_arch_init(). Functionally, this is
a glorified nop as invoking kvm_arch_init() is the very first action
performed by kvm_init().
Emptying kvm_arch_init() will allow dropping the hook entirely.
Call kvm_init() only after _all_ setup is complete, as kvm_init() exposes
/dev/kvm to userspace and thus allows userspace to create VMs (and call
other ioctls).
Signed-off-by: Sean Christopherson
---
arch/mips/kvm/mips.c | 9 +
1 file changed, 5 insertions(+), 4 deletions(-)
diff --git
Fold the guts of kvm_arch_init() into riscv_kvm_init() instead of
bouncing through kvm_init()=>kvm_arch_init(). Functionally, this is a
glorified nop as invoking kvm_arch_init() is the very first action
performed by kvm_init().
Moving setup to riscv_kvm_init(), which is tagged __init, will allow
Now that KVM setup is handled directly in riscv_kvm_init(), tag functions
and data that are used/set only during init with __init/__ro_after_init.
Signed-off-by: Sean Christopherson
---
arch/riscv/include/asm/kvm_host.h | 6 +++---
arch/riscv/kvm/mmu.c | 12 ++--
arch/riscv
Move KVM PPC's compatibility checks to their respective module_init()
hooks; there's no need to wait until KVM's common compat check, nor is
there a need to perform the check on every CPU (provided by common KVM's
hook), as the compatibility checks operate on global data.
arch/powerpc/include/as
Move the guts of kvm_arch_init() into a new helper, __kvm_s390_init(),
and invoke the new helper directly from kvm_s390_init() instead of
bouncing through kvm_init(). Invoking kvm_arch_init() is the very
first action performed by kvm_init(), i.e. this is a glorified nop.
Moving setup to __kvm_s39
Tag __kvm_s390_init() and its unique helpers as __init. These functions
are only ever called during module_init(), but could not be tagged
accordingly while they were invoked from the common kvm_arch_init(),
which is not __init because of x86.
Signed-off-by: Sean Christopherson
---
arch/s390/kv
Drop kvm_arch_init() and kvm_arch_exit() now that all implementations
are nops.
No functional change intended.
Signed-off-by: Sean Christopherson
---
arch/arm64/kvm/arm.c | 11 ---
arch/mips/kvm/mips.c| 10 --
arch/powerpc/include/asm/kvm_host.h |
Tag vmcs_config and vmx_capability structs as __init; the canonical
configuration is generated during hardware_setup() and must never be
modified after that point.
Signed-off-by: Sean Christopherson
---
arch/x86/kvm/vmx/capabilities.h | 4 ++--
arch/x86/kvm/vmx/vmx.c | 4 ++--
2 files c
Move the CPU compatibility checks to pure x86 code, i.e. drop x86's use
of the common kvm_x86_check_cpu_compat() arch hook. x86 is the only
architecture that "needs" to do per-CPU compatibility checks, moving
the logic to x86 will allow dropping the common code, and will also
give x86 more control
Drop kvm_arch_check_processor_compat() and its support code now that all
architecture implementations are nops.
Signed-off-by: Sean Christopherson
---
arch/arm64/kvm/arm.c | 7 +--
arch/mips/kvm/mips.c | 7 +--
arch/powerpc/kvm/book3s.c | 2 +-
arch/powerpc/kvm/e500.c
Use KBUILD_MODNAME to specify the vendor module name instead of manually
writing out the name to make it a bit more obvious that the name isn't
completely arbitrary. A future patch will also use KBUILD_MODNAME to
define pr_fmt, at which point using KBUILD_MODNAME for kvm_x86_ops.name
further reinf
Define pr_fmt using KBUILD_MODNAME for all KVM x86 code so that printks
use consistent formatting across common x86, Intel, and AMD code. In
addition to providing consistent print formatting, using KBUILD_MODNAME,
e.g. kvm_amd and kvm_intel, allows referencing SVM and VMX (and SEV and
SGX and ...)
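The idiom itself is standard kernel practice; for reference:

/* Must be defined before any #include that pulls in printk.h,
 * i.e. at the very top of the source file. */
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt

#include <linux/kvm_host.h>

/* pr_err("foo\n") now prints e.g. "kvm_intel: foo" */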
Do basic VMX/SVM support checks directly in vendor code instead of
implementing them via kvm_x86_ops hooks. Beyond the superficial benefit
of providing common messages, which isn't even clearly a net positive
since vendor code can provide more precise/detailed messages, there's
zero advantage to b
Reorder code in vmx.c so that the VMX support check helpers reside above
the hardware enabling helpers, which will allow KVM to perform support
checks during hardware enabling (in a future patch).
No functional change intended.
Signed-off-by: Sean Christopherson
---
arch/x86/kvm/vmx/vmx.c | 212
Check that SVM is supported and enabled in the processor compatibility
checks. SVM already checks for support during hardware enabling,
i.e. this doesn't really add new functionality. The net effect is that
KVM will refuse to load if a CPU doesn't have SVM fully enabled, as
opposed to failing KVM
From: Chao Gao
Do compatibility checks when enabling hardware to effectively add
compatibility checks when onlining a CPU. Abort enabling, i.e. the
online process, if the (hotplugged) CPU is incompatible with the known
good setup.
At init time, KVM does compatibility checks to ensure that all o
From: Chao Gao
The CPU STARTING section doesn't allow callbacks to fail. Move KVM's
hotplug callback to ONLINE section so that it can abort onlining a CPU in
certain cases to avoid potentially breaking VMs running on existing CPUs.
For example, when KVM fails to enable hardware virtualization on
From: Chao Gao
Disable CPU hotplug during hardware_enable_all() to prevent the corner
case where the following sequence occurs:
1. A hotplugged CPU marks itself online in cpu_online_mask
2. The hotplugged CPU enables interrupt before invoking KVM's ONLINE
callback
3. hardware_enabl
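A minimal sketch of the shape of the fix; the inner helper name is
illustrative, not necessarily KVM's actual function:

static int hardware_enable_all(void)
{
        int r;

        /*
         * Exclude CPU hotplug so no CPU can come online between
         * enabling hardware on all current CPUs and updating the
         * usage count.
         */
        cpus_read_lock();
        r = __hardware_enable_all();
        cpus_read_unlock();

        return r;
}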
From: Isaku Yamahata
Drop kvm_count_lock and instead protect kvm_usage_count with kvm_lock now
that KVM hooks CPU hotplug during the ONLINE phase, which can sleep.
Previously, KVM hooked the STARTING phase, which is not allowed to sleep
and thus could not take kvm_lock (a mutex).
Explicitly disa
From: Isaku Yamahata
Drop the superfluous invocation of hardware_disable_nolock() during
kvm_exit(), as it's nothing more than a glorified nop.
KVM automatically disables hardware on all CPUs when the last VM is
destroyed, and kvm_exit() cannot be called until the last VM goes
away as the callin
Use a per-CPU variable instead of a shared bitmap to track which CPUs
have successfully enabled virtualization hardware. Using a per-CPU bool
avoids the need for an additional allocation, and arguably yields easier
to read code. Using a bitmap would be advantageous if KVM used it to
avoid generat
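A hedged sketch of the per-CPU tracking described above, simplified from
whatever the patch actually does:

static DEFINE_PER_CPU(bool, hardware_enabled);

static int __hardware_enable_nolock(void)
{
        /* Nothing to do if this CPU already enabled virtualization. */
        if (__this_cpu_read(hardware_enabled))
                return 0;

        if (kvm_arch_hardware_enable())
                return -EIO;

        __this_cpu_write(hardware_enabled, true);
        return 0;
}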
From: Isaku Yamahata
Rework detecting hardware enabling errors to use a local variable in the
"enable all" path to track whether or not enabling was successful across
all CPUs. Using a global variable complicates paths that enable hardware
only on the current CPU, e.g. kvm_resume() and kvm_onlin
Register the suspend/resume notifier hooks at the same time KVM registers
its reboot notifier so that all the code in kvm_init() that deals with
enabling/disabling hardware is bundled together. Opportunistically move
KVM's implementations to reside near the reboot notifier code for the
same reason.
Allow architectures to opt out of the generic hardware enabling logic,
and opt out on both s390 and PPC, which don't need to manually enable
virtualization as it's always on (when available).
In addition to letting s390 and PPC drop a bit of dead code, this will
hopefully also allow ARM to clean u
On Mon, Oct 31, 2022 at 10:07:32PM +1000, Nicholas Piggin wrote:
> The elf_check_arch() function is also used to test compatibility of
> usermode binaries. Kernel modules may have more specific requirements,
> for example powerpc would like to test for ABI version compatibility.
>
> Add a weak mod
On Mon, Oct 31, 2022 at 10:07:31PM +1000, Nicholas Piggin wrote:
> Luis, let us know if you would be okay for patch 1 to be merged via
> powerpc, or if you would prefer to take it in the module tree (or maybe
> you object to the code in the first place).
Looks good to me, and nothing on my radar which would cause a conflict
Christophe Leroy writes:
> On 28/10/2022 at 16:33, Sathvika Vasireddy wrote:
>> In a subsequent patch, we would want to annotate powerpc assembly functions
>> with SYM_FUNC_START_LOCAL macro. This macro depends on __ALIGN macro.
>>
>> The default expansion of __ALIGN macro is:
>> #defi
Thorsten Leemhuis writes:
> [Note: this mail is primarily sent for documentation purposes and/or for
> regzbot, my Linux kernel regression tracking bot. That's why I removed
> most or all folks from the list of recipients, but left any that looked
> like a mailing lists. These mails usually contai