Hi Marc,
On 08/04/2015 06:44 PM, Marc Zyngier wrote:
> On 04/08/15 17:21, Eric Auger wrote:
>> Hi Marc,
>> On 07/24/2015 05:55 PM, Marc Zyngier wrote:
>>> Virtual interrupts mapped to a HW interrupt should only be triggered
>>> from inside the kernel. Otherwise, you could end up confusing the
>>> k
On 05/08/15 08:32, Eric Auger wrote:
> Hi Marc,
> On 08/04/2015 06:44 PM, Marc Zyngier wrote:
>> On 04/08/15 17:21, Eric Auger wrote:
>>> Hi Marc,
>>> On 07/24/2015 05:55 PM, Marc Zyngier wrote:
Virtual interrupts mapped to a HW interrupt should only be triggered
from inside the kernel. O
On 04/08/2015 18:58, Alex Williamson wrote:
> The patch was munged on commit to re-order these tests resulting in
> excessive warnings when trying to do device assignment. Return to
> original ordering: https://lkml.org/lkml/2015/7/15/769
>
> Fixes: 3e5d2fdceda1 ("KVM: MTRR: simplify kvm_mtrr_g
On 05/08/2015 06:04, Xiao Guangrong wrote:
> - for_each_shadow_entry_lockless(vcpu, addr, iterator, spte)
> + for_each_shadow_entry_lockless(vcpu, addr, iterator, spte) {
> + leaf = iterator.level;
> +
> + if (!root)
> + root = leaf;
> +
> +
On Wed, Aug 05, 2015 at 10:44:09AM +0100, Marc Zyngier wrote:
> On 05/08/15 08:32, Eric Auger wrote:
> > Hi Marc,
> > On 08/04/2015 06:44 PM, Marc Zyngier wrote:
> >> On 04/08/15 17:21, Eric Auger wrote:
> >>> Hi Marc,
> >>> On 07/24/2015 05:55 PM, Marc Zyngier wrote:
> Virtual interrupts mapp
Before commit 662d9715840aef44dcb573b0f9fab9e8319c868a it was possible to
compile the kernel without vGIC and vTimer support. The commit message
mentions the possibility of detecting vGIC support at runtime, but this has
never been implemented.
This patch introduces a runtime check, restoring the lost functionality.
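For reference, a minimal sketch of what such a runtime check could look like
around kvm_vgic_hyp_init(); the vgic_present flag and the exact error codes
treated as "vGIC not present" are assumptions, not necessarily the patch itself:

	/*
	 * Hypothetical shape of the runtime check: if the vGIC cannot be
	 * initialised, remember that instead of failing KVM init, and later
	 * report KVM_CAP_IRQCHIP (and anything needing it) as unavailable.
	 */
	err = kvm_vgic_hyp_init();
	switch (err) {
	case 0:
		vgic_present = true;
		break;
	case -ENODEV:
	case -ENXIO:
		vgic_present = false;	/* fall back to userspace GIC emulation */
		break;
	default:
		return err;		/* a real error is still fatal */
	}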
This patch set brings back functionality which was broken in v4.0.
Unfortunately, because of the restrictions of such hardware it is impossible
to take advantage of the virtual architected timer, so a guest running in
this restricted mode has to use some memory-mapped timer. But it is
still better
Now at least ARM is able to determine whether the machine has
virtualization support for irqchip or not at runtime. Obviously,
irqfd requires irqchip.
Signed-off-by: Pavel Fedin
---
This has to be verified with PowerPC and S390 maintainers. I've got the
impression that KVM_CAP_IRQCHIP is forgotten
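From the userspace side, the point is that irqchip support becomes a runtime
property rather than a compile-time one; a minimal probe looks roughly like:

	/* kvm_fd is an open /dev/kvm; a VM fd works for the check as well. */
	if (ioctl(kvm_fd, KVM_CHECK_EXTENSION, KVM_CAP_IRQCHIP) > 0 &&
	    ioctl(kvm_fd, KVM_CHECK_EXTENSION, KVM_CAP_IRQFD) > 0) {
		/* in-kernel irqchip available, irqfd can be used */
	} else {
		/* fall back to userspace GIC emulation, no irqfd */
	}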
Makes QEMU work again with the kernel-irqchip=off option, allowing GIC
emulation in userspace.
Previously kvm_vgic_map_resources() used to include irqchip_in_kernel()
check, and vgic_v2_map_resources() still has it, but now vm_ops are not
initialized before kvm_vgic_create(). Therefore kvm_v
On 08/05/2015 12:53 PM, Christoffer Dall wrote:
> On Wed, Aug 05, 2015 at 10:44:09AM +0100, Marc Zyngier wrote:
>> On 05/08/15 08:32, Eric Auger wrote:
>>> Hi Marc,
>>> On 08/04/2015 06:44 PM, Marc Zyngier wrote:
On 04/08/15 17:21, Eric Auger wrote:
> Hi Marc,
> On 07/24/2015 05:55 PM,
Hi Alex,
On 07/16/2015 11:26 PM, Alex Williamson wrote:
> When a physical I/O device is assigned to a virtual machine through
> facilities like VFIO and KVM, the interrupt for the device generally
> bounces through the host system before being injected into the VM.
> However, hardware technologies
When userspace wants KVM to exit to userspace, it sends a signal.
This has the disadvantage of requiring a change to the signal mask, because
the signal needs to be blocked in userspace to stay pending when sending
to self.
Using a request flag allows us to shave 200-300 cycles from every
userspace e
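A rough sketch of the mechanism (KVM_REQ_EXIT is the request introduced by
this series; the exit_reason handling shown here is illustrative, not the
exact patch):

	/* In the vcpu run loop, before entering the guest: */
	if (kvm_check_request(KVM_REQ_EXIT, vcpu)) {
		vcpu->run->exit_reason = KVM_EXIT_REQUEST;	/* illustrative name */
		return 0;	/* return from KVM_RUN to userspace */
	}

	/* From another thread, instead of sending a signal: */
	kvm_make_request(KVM_REQ_EXIT, vcpu);
	kvm_vcpu_kick(vcpu);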
A VCPU with a given vcpu->vcpu_id has the highest probability of being stored
in kvm->vcpus[vcpu->vcpu_id]. The other common case, sparse sequential
vcpu_ids, is more likely to find a match downwards from vcpu->vcpu_id.
Random distribution does not matter, so we first search slots
[vcpu->vcpu_id..0] and then slots (
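A minimal sketch of the described search order (the function name is
hypothetical; it only illustrates the two-pass walk over the vcpus array):

	static struct kvm_vcpu *find_vcpu_by_id(struct kvm *kvm, int vcpu_id)
	{
		struct kvm_vcpu *vcpu;
		int online = atomic_read(&kvm->online_vcpus);
		int i;

		/* Pass 1: slots [vcpu_id..0], covers the dense in-order case. */
		for (i = min(vcpu_id, online - 1); i >= 0; i--) {
			vcpu = kvm_get_vcpu(kvm, i);
			if (vcpu && vcpu->vcpu_id == vcpu_id)
				return vcpu;
		}
		/* Pass 2: remaining slots above vcpu_id, for sparse ids. */
		for (i = vcpu_id + 1; i < online; i++) {
			vcpu = kvm_get_vcpu(kvm, i);
			if (vcpu && vcpu->vcpu_id == vcpu_id)
				return vcpu;
		}
		return NULL;
	}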
We are still interested in the number of exits userspace requested, and
signal_exits doesn't cover that anymore.
Signed-off-by: Radim Krčmář
---
arch/x86/include/asm/kvm_host.h | 1 +
arch/x86/kvm/x86.c | 2 ++
2 files changed, 3 insertions(+)
diff --git a/arch/x86/include/asm/kvm_h
We want to have requests abstracted from bit operations.
Signed-off-by: Radim Krčmář
---
arch/x86/kvm/vmx.c | 2 +-
include/linux/kvm_host.h | 7 ++-
2 files changed, 7 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 217f66343dc8..17514fe7d2cb
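The abstraction presumably ends up as small static inlines in kvm_host.h so
that callers like vmx.c stop open-coding test_bit() on vcpu->requests; a
sketch along these lines (the name of the non-clearing helper is a guess):

	static inline void kvm_make_request(int req, struct kvm_vcpu *vcpu)
	{
		set_bit(req, &vcpu->requests);
	}

	/* Guessed name: a non-clearing test for places that only peek. */
	static inline bool kvm_has_request(int req, struct kvm_vcpu *vcpu)
	{
		return test_bit(req, &vcpu->requests);
	}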
The guest can use KVM_USER_EXIT instead of signal-based exiting to
userspace. Availability depends on KVM_CAP_USER_EXIT.
Only x86 is implemented so far.
It would be cleaner to use 'unsigned long' to store the vcpu_id, but I
really don't like its variable size, and 'u64' will be the same/bigger for
f
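For context, a hypothetical userspace caller of the proposed interface; the
ioctl never landed upstream in this form, so the argument layout shown here
(a struct kvm_user_exit containing only reserved fields) is an assumption:

	struct kvm_user_exit exit = {};	/* layout assumed: padding/reserved only */

	if (ioctl(vm_fd, KVM_CHECK_EXTENSION, KVM_CAP_USER_EXIT) > 0)
		ioctl(vcpu_fd, KVM_USER_EXIT, &exit);	/* force the vCPU back to userspace */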
QEMU uses SIGUSR1 to force a userspace exit and also to queue an early
exit before calling VCPU_RUN -- the signal is blocked in user space and
temporarily unblocked in VCPU_RUN.
The temporary unblocking by sigprocmask() in kvm_arch_vcpu_ioctl_run()
takes a shared siglock, which leads to cacheline bo
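The kernel side of that dance looks roughly like the existing
kvm_arch_vcpu_ioctl_run() code (paraphrased, not part of this series); each
sigprocmask() call is what takes the shared siglock:

	sigset_t sigsaved;

	if (vcpu->sigset_active)	/* mask installed via KVM_SET_SIGNAL_MASK */
		sigprocmask(SIG_SETMASK, &vcpu->sigset, &sigsaved);

	r = vcpu_run(vcpu);		/* SIGUSR1 delivered here forces an -EINTR exit */

	if (vcpu->sigset_active)
		sigprocmask(SIG_SETMASK, &sigsaved, NULL);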
On 05/08/2015 15:21, Radim Krčmář wrote:
> + kvm_for_each_vcpu(idx, vcpu, kvm)
> + if (vcpu->vcpu_id == vcpu_id) {
> + kvm_make_request(KVM_REQ_EXIT, vcpu);
> + kvm_vcpu_kick(vcpu);
> +
> + return 0;
> + }
> +
2015-08-05 15:29+0200, Paolo Bonzini:
> On 05/08/2015 15:21, Radim Krčmář wrote:
>> +kvm_for_each_vcpu(idx, vcpu, kvm)
>> +if (vcpu->vcpu_id == vcpu_id) {
>> +kvm_make_request(KVM_REQ_EXIT, vcpu);
>> +kvm_vcpu_kick(vcpu);
>> +
>> +
On 05/08/2015 15:34, Radim Krčmář wrote:
> 2015-08-05 15:29+0200, Paolo Bonzini:
>> On 05/08/2015 15:21, Radim Krčmář wrote:
>>> + kvm_for_each_vcpu(idx, vcpu, kvm)
>>> + if (vcpu->vcpu_id == vcpu_id) {
>>> + kvm_make_request(KVM_REQ_EXIT, vcpu);
>>> +
2015-08-05 15:38+0200, Paolo Bonzini:
> On 05/08/2015 15:34, Radim Krčmář wrote:
>> vcpu ioctl should only be issued by the vcpu thread so it would
>> significantly limit use.
>
> That's a general limitation, but you can lift it for particular ioctls.
>
> See in particular this:
>
> #if defined(
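The hunk being referred to, near the top of kvm_vcpu_ioctl() in
virt/kvm/kvm_main.c, looks roughly like this in kernels of this era:

	#if defined(CONFIG_S390) || defined(CONFIG_PPC) || defined(CONFIG_MIPS)
		/*
		 * Special cases: vcpu ioctls that are asynchronous to vcpu execution,
		 * so vcpu_load() would break it.
		 */
		if (ioctl == KVM_S390_INTERRUPT || ioctl == KVM_S390_IRQ ||
		    ioctl == KVM_INTERRUPT)
			return kvm_arch_vcpu_ioctl(filp, ioctl, arg);
	#endif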
On Wed, Aug 05, 2015 at 01:47:27PM +0200, Eric Auger wrote:
> On 08/05/2015 12:53 PM, Christoffer Dall wrote:
> > On Wed, Aug 05, 2015 at 10:44:09AM +0100, Marc Zyngier wrote:
> >> On 05/08/15 08:32, Eric Auger wrote:
> >>> Hi Marc,
> >>> On 08/04/2015 06:44 PM, Marc Zyngier wrote:
> On 04/08/
On 05/08/2015 16:44, Nicholas Krause wrote:
> This fixes error handling in the function kvm_lapic_sync_from_vapic
> by checking whether the call to kvm_read_guest_cached has returned an
> error code, signalling to its caller that the call to this function
> has failed, and due to this we must immediately retur
Linus,
The following changes since commit 956325bd55bb020e574129c443a2c2c66a8316e7:
  Merge tag 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/dledford/rdma (2015-07-28 14:20:16 -0700)
are available in the git repository at:
  git://git.kernel.org/pub/scm/virt/kvm/kvm.git tags/
From: Steve Rutherford
First patch in a series which enables the relocation of the
PIC/IOAPIC to userspace.
Adds capability KVM_CAP_SPLIT_IRQCHIP;
KVM_CAP_SPLIT_IRQCHIP enables the construction of LAPICs without the
rest of the irqchip.
Compile tested for x86.
Signed-off-by: Steve Rutherford
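For reference, enabling the capability from userspace would look something
like the sketch below; the meaning of args[0] (number of IOAPIC routes
reserved for the userspace IOAPIC) matches the interface that was eventually
merged and may differ in this RFC:

	struct kvm_enable_cap cap = {
		.cap = KVM_CAP_SPLIT_IRQCHIP,
		.args[0] = 24,	/* IOAPIC pins handled by the userspace IOAPIC */
	};

	/* Typically done right after KVM_CREATE_VM, before any vCPU exists. */
	if (ioctl(vm_fd, KVM_ENABLE_CAP, &cap) < 0)
		perror("KVM_ENABLE_CAP(KVM_CAP_SPLIT_IRQCHIP)");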
From: Steve Rutherford
In order to enable userspace PIC support, the userspace PIC needs to
be able to inject local interrupts even when the APICs are in the
kernel.
KVM_INTERRUPT now supports sending local interrupts to an APIC when
APICs are in the kernel.
The ready_for_interrupt_request flag
From: Steve Rutherford
In order to support a userspace IOAPIC interacting with an in kernel
APIC, the EOI exit bitmaps need to be configurable.
If the IOAPIC is in userspace (i.e. the irqchip has been split), the
EOI exit bitmaps will be set whenever the GSI Routes are configured.
In particular,
From: Steve Rutherford
Adds KVM_EXIT_IOAPIC_EOI which allows the kernel to EOI
level-triggered IOAPIC interrupts.
Uses a per VCPU exit bitmap to decide whether or not the IOAPIC needs
to be informed (which is identical to the EOI_EXIT_BITMAP field used
by modern x86 processors, but can also be u
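On the userspace side this becomes one more case in the KVM_RUN exit handler,
roughly as below; the eoi.vector field name is from the interface that was
merged later, and the handler called is whatever the userspace IOAPIC model
provides (e.g. QEMU's ioapic_eoi_broadcast()):

	case KVM_EXIT_IOAPIC_EOI:
		/*
		 * run is the mmap'ed struct kvm_run. A level-triggered interrupt
		 * routed through the userspace IOAPIC was EOI'd in the in-kernel
		 * LAPIC; propagate the EOI to the userspace model.
		 */
		ioapic_eoi_broadcast(run->eoi.vector);
		break;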
Avoid pointer chasing and memory barriers, and simplify the code
when split irqchip (LAPIC in kernel, IOAPIC/PIC in userspace)
is introduced.
Signed-off-by: Paolo Bonzini
---
arch/x86/kvm/irq.c | 6 +++---
arch/x86/kvm/irq.h | 8
arch/x86/kvm/lapic.c | 4 ++--
arch/x86/kvm/mmu.c
This will avoid an unnecessary trip to ->kvm and from there to the VPIC.
Signed-off-by: Paolo Bonzini
---
arch/x86/include/asm/kvm_host.h | 2 +-
arch/x86/kvm/irq.c | 2 +-
arch/x86/kvm/lapic.c| 4 ++--
arch/x86/kvm/lapic.h| 4 ++--
arch/x86/kvm/svm.c
The interrupt window is currently checked twice, once in vmx.c/svm.c and
once in dm_request_for_irq_injection. The only difference is the extra
check for kvm_arch_interrupt_allowed in dm_request_for_irq_injection,
and the different return value (EINTR/KVM_EXIT_INTR for vmx.c/svm.c vs.
0/KVM_EXIT_I
I am going to push the memory barrier fixes to kvm/next.
The rest of the series is here for review. This includes cleanups from
myself and the bulk of the code from Steve.
Paolo
Paolo Bonzini (5):
KVM: x86: set TMR when the interrupt is accepted
KVM: x86: store IOAPIC-handled vectors in eac
Do not compute TMR in advance. Instead, set the TMR just before the interrupt
is accepted into the IRR. This limits the coupling between IOAPIC and LAPIC.
Signed-off-by: Paolo Bonzini
---
arch/x86/kvm/ioapic.c | 9 ++---
arch/x86/kvm/ioapic.h | 3 +--
arch/x86/kvm/lapic.c | 19 +
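The gist of the lapic.c change, paraphrased as a sketch of __apic_accept_irq()
(apic_set_vector()/apic_clear_vector() and APIC_TMR are existing helpers and
register offsets; the exact hunk may differ):

	case APIC_DM_FIXED:
		if (unlikely(trig_mode && !level))
			break;

		/*
		 * Record the trigger mode in the TMR right before the vector is
		 * accepted into the IRR, instead of precomputing the TMR from the
		 * IOAPIC redirection table.
		 */
		if (trig_mode)
			apic_set_vector(vector, apic->regs + APIC_TMR);
		else
			apic_clear_vector(vector, apic->regs + APIC_TMR);

		/* ...then deliver into the IRR as before (apic_set_irr() or the
		 * posted-interrupt path). */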
We can reuse the algorithm that computes the EOI exit bitmap to figure
out which vectors are handled by the IOAPIC. The only difference
between the two is for edge-triggered interrupts other than IRQ8
that have no notifiers active; however, the IOAPIC does not have to
do anything special for these
On 16/07/15 22:29, Mario Smarduch wrote:
> This patch only saves and restores FP/SIMD registers on Guest access. To do
> this, the cptr_el2 FP/SIMD trap is set on Guest entry and later checked on exit.
> lmbench, hackbench show significant improvements; for 30-50% of exits the FP/SIMD
> context is not saved/re
We are still interested in the number of exits userspace requested, and
signal_exits doesn't cover that anymore.
Signed-off-by: Radim Krčmář
---
v2: move request_exits debug counter patch right after introduction of
KVM_REQ_EXIT [3/5]
arch/x86/include/asm/kvm_host.h | 1 +
arch/x86/kvm/x86.
When userspace wants KVM to exit to userspace, it sends a signal.
This has the disadvantage of requiring a change to the signal mask, because
the signal needs to be blocked in userspace to stay pending when sending
to self.
Using a request flag allows us to shave 200-300 cycles from every
userspace e
I find the switch easier to read and modify.
Signed-off-by: Radim Krčmář
---
v2: new
virt/kvm/kvm_main.c | 9 ++---
1 file changed, 6 insertions(+), 3 deletions(-)
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index d7ffe6090520..71598554deed 100644
--- a/virt/kvm/kvm_main.c
+++
The guest can use KVM_USER_EXIT instead of signal-based exiting to
userspace. Availability depends on KVM_CAP_USER_EXIT.
Only x86 is implemented so far.
Signed-off-by: Radim Krčmář
---
v2:
* use vcpu ioctl instead of vm one [4/5]
* shrink kvm_user_exit from 64 to 32 bytes [4/5]
Document
On Wed, 2015-08-05 at 15:23 +0200, Eric Auger wrote:
> Hi Alex,
> On 07/16/2015 11:26 PM, Alex Williamson wrote:
> > When a physical I/O device is assigned to a virtual machine through
> > facilities like VFIO and KVM, the interrupt for the device generally
> > bounces through the host system befor
v2:
* move request_exits debug counter patch right after introduction of
KVM_REQ_EXIT [3/5]
* use vcpu ioctl instead of vm one [4/5]
* shrink kvm_user_exit from 64 to 32 bytes [4/5]
* new [5/5]
QEMU uses SIGUSR1 to force a userspace exit and also to queue an early
exit before calling VCPU_R
On 05/08/2015 18:33, Radim Krčmář wrote:
> The guest can use KVM_USER_EXIT instead of a signal-based exiting to
> userspace. Availability depends on KVM_CAP_USER_EXIT.
> Only x86 is implemented so far.
>
> Signed-off-by: Radim Krčmář
> ---
> v2:
> * use vcpu ioctl instead of vm one [4/5]
>
We want to have requests abstracted from bit operations.
Signed-off-by: Radim Krčmář
---
arch/x86/kvm/vmx.c | 2 +-
include/linux/kvm_host.h | 7 ++-
2 files changed, 7 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 217f66343dc8..17514fe7d2cb
On 05/08/2015 18:48, Nicholas Krause wrote:
> This fixes the error handling in the function vgic_v3_probe
> when calling the function kvm_register_device_ops, by checking
> if the call to this function has returned an error code and, if
> so, jumping to the label out with goto to clean up no longer requ
On Wed, Jul 08, 2015 at 05:19:10PM +0100, Marc Zyngier wrote:
> --- /dev/null
> +++ b/arch/arm64/kvm/vhe-macros.h
> @@ -0,0 +1,36 @@
> +/*
> + * Copyright (C) 2015 - ARM Ltd
> + * Author: Marc Zyngier
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under t
When a physical I/O device is assigned to a virtual machine through
facilities like VFIO and KVM, the interrupt for the device generally
bounces through the host system before being injected into the VM.
However, hardware technologies exist that often allow the host to be
bypassed for some of these
> -Original Message-
> From: Alex Williamson [mailto:alex.william...@redhat.com]
> Sent: Thursday, August 06, 2015 6:08 AM
> To: linux-ker...@vger.kernel.org; kvm@vger.kernel.org
> Cc: eric.au...@st.com; eric.au...@linaro.org; j...@8bytes.org;
> avi.kiv...@gmail.com; pbonz...@redhat.com;
On 08/05/2015 06:12 PM, Paolo Bonzini wrote:
> On 05/08/2015 06:04, Xiao Guangrong wrote:
>> - for_each_shadow_entry_lockless(vcpu, addr, iterator, spte)
>> + for_each_shadow_entry_lockless(vcpu, addr, iterator, spte) {
>> + leaf = iterator.level;
>> +
>> + if (!root)
Hi Paolo & Juan,
I found that some of the kvm_vcpu_ioctl operations take more than 10ms with
the 3.10.0-229.el7.x86_64 kernel, which prolongs the VM service downtime during
live migration by about 20~30ms.
This happens when doing the KVM_KVMCLOCK_CTRL ioctl. It's worse if more VCPUs
are used by g
The synchronize_rcu() call is a time-consuming operation; the upstream kernel
still has some issues, and the KVM_RUN ioctl will take more than 10ms when
resuming the VM after migration.
Liang
> -Original Message-
> From: Li, Liang Z
> Sent: Thursday, August 06, 2015 11:47 AM
> To: 'Paolo Bonzin
Please ignore the following statement in the braces:
{ The synchronize_rcu() call is a time-consuming operation; the upstream kernel
still has some issues, and the KVM_RUN ioctl will take more than 10ms when
resuming the VM after migration. }
The upstream kernel does not have such an issue, only the rhel kerne
> -Original Message-
> From: linux-kernel-ow...@vger.kernel.org
> [mailto:linux-kernel-ow...@vger.kernel.org] On Behalf Of Paolo Bonzini
> Sent: Wednesday, August 05, 2015 11:24 PM
> To: linux-ker...@vger.kernel.org; kvm@vger.kernel.org
> Cc: Steve Rutherford; rkrc...@redhat.com
> Subject