Sean Christopherson writes:
...
>> -if ((emulation_type & EMULTYPE_VMWARE_GP) &&
>> -!is_vmware_backdoor_opcode(ctxt)) {
>> -kvm_queue_exception_e(vcpu, GP_VECTOR, 0);
>> -return 1;
>> +if (emulation_type & EMULTYPE_PARAVIRT_GP) {
>> +vminstr = i
Andy Lutomirski writes:
...
> #endif diff --git a/arch/x86/kvm/mmu/mmu.c
> b/arch/x86/kvm/mmu/mmu.c index 6d16481aa29d..c5c4aaf01a1a 100644
> --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@
> -50,6 +50,7 @@ #include #include #include
> +#include #include
>
ate->hdr.vmx.preemption_timer_deadline;
>> - }
>> + } else
>> + vmx->nested.has_preemption_timer_deadline = false;
>
> Doesn't the coding standard require braces around the else clause?
>
I think so... for if/else where at least one of them is multiline.
> Reviewed-by: Jim Mattson
Looks good to me,
Reviewed-by: Bandan Das
Linus Torvalds writes:
> On Sat, Sep 7, 2019 at 12:17 PM Linus Torvalds
> wrote:
>>
>> I'm really not clear on why it's a good idea to clear the LDR bits on
>> shutdown, and commit 558682b52919 ("x86/apic: Include the LDR when
>> clearing out APIC registers") just looks pointless. And now it has
Stephen Rothwell writes:
> Hi all,
>
> In commit
>
> bae3a8d3308e ("x86/apic: Do not initialize LDR and DFR for bigsmp")
>
> Fixes tag
>
> Fixes: db7b9e9f26b8 ("[PATCH] Clustered APIC setup for >8 CPU systems")
>
> has these problem(s):
>
> - Target SHA1 does not exist
>
I tried to dig thi
Thomas Gleixner writes:
> On Tue, 27 Aug 2019, Bandan Das wrote:
>> kbuild test robot writes:
>>
>> > tree:
>> > https://kernel.googlesource.com/pub/scm/linux/kernel/git/tip/tip.git
>> > x86/urgent
>> > head: c
kbuild test robot writes:
> tree: https://kernel.googlesource.com/pub/scm/linux/kernel/git/tip/tip.git
> x86/urgent
> head: cfa16294b1c5b320c0a0e1cac37c784b92366c87
> commit: cfa16294b1c5b320c0a0e1cac37c784b92366c87 [3/3] x86/apic: Include the
> LDR when clearing out APIC registers
> config
The following commit has been merged into the x86/urgent branch of tip:
Commit-ID: bae3a8d3308ee69a7dbdf145911b18dfda8ade0d
Gitweb:
https://git.kernel.org/tip/bae3a8d3308ee69a7dbdf145911b18dfda8ade0d
Author: Bandan Das
AuthorDate: Mon, 26 Aug 2019 06:15:12 -04:00

Committer
The following commit has been merged into the x86/urgent branch of tip:
Commit-ID: 558682b5291937a70748d36fd9ba757fb25b99ae
Gitweb:
https://git.kernel.org/tip/558682b5291937a70748d36fd9ba757fb25b99ae
Author: Bandan Das
AuthorDate: Mon, 26 Aug 2019 06:15:13 -04:00
Committer
The following commit has been merged into the x86/urgent branch of tip:
Commit-ID: 9cfe98a6dbfb2a72ae29831e57b406eab7668da8
Gitweb:
https://git.kernel.org/tip/9cfe98a6dbfb2a72ae29831e57b406eab7668da8
Author: Bandan Das
AuthorDate: Mon, 26 Aug 2019 06:15:12 -04:00
Committer
The following commit has been merged into the x86/urgent branch of tip:
Commit-ID: cfa16294b1c5b320c0a0e1cac37c784b92366c87
Gitweb:
https://git.kernel.org/tip/cfa16294b1c5b320c0a0e1cac37c784b92366c87
Author: Bandan Das
AuthorDate: Mon, 26 Aug 2019 06:15:13 -04:00
Committer
Signed-off-by: Bandan Das
---
arch/x86/kernel/apic/apic.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/arch/x86/kernel/apic/apic.c b/arch/x86/kernel/apic/apic.c
index aa5495d0f478..e75f3782b915 100644
--- a/arch/x86/kernel/apic/apic.c
+++ b/arch/x86/kernel/apic/apic.c
@@ -1179,6 +1179,10 @@
in the guest during kdump initialization.
Note that this change isn't intended to workaround the kvm lapic bug;
bigsmp should correctly stay away from initializing LDR.
Suggested-by: Thomas Gleixner
Signed-off-by: Bandan Das
---
arch/x86/kernel/apic/bigsmp_32.c | 24 ++
haven't been enabled, a simple guest only change can be to
just clear out the LDR.
Bandan Das (2):
x86/apic: Do not initialize LDR and DFR for bigsmp
x86/apic: Include the LDR when clearing out APIC registers
arch/x86/kernel/apic/apic.c | 4
arch/x86/kernel/apic/bigsmp
Thomas Gleixner writes:
> Bandan,
>
> On Wed, 21 Aug 2019, Bandan Das wrote:
>> Thomas Gleixner writes:
>> So, in KVM: if we make sure that the logical destination map isn't filled up
>> if the virtual
>> apic is not enabled by software, it really doe
Thomas Gleixner writes:
> Bandan,
>
> On Mon, 19 Aug 2019, Bandan Das wrote:
>> Thomas Gleixner writes:
>> > On Wed, 14 Aug 2019, Bandan Das wrote:
>> >> On a 32 bit RHEL6 guest with greater than 8 cpus, the
>> >> kdump kernel hangs when calibr
Hi Thomas,
Thomas Gleixner writes:
> Bandan,
>
> On Wed, 14 Aug 2019, Bandan Das wrote:
>> On a 32 bit RHEL6 guest with greater than 8 cpus, the
>> kdump kernel hangs when calibrating apic. This happens
>> because when apic initializes bigsmp, it also initializes LDR
m the guest while building the logical destination map
even for inactive vcpus. While KVM apic can be fixed to ignore apics
that haven't been enabled, a simple guest only change can be to
just clear out the LDR.
Signed-off-by: Bandan Das
---
arch/x86/kernel/apic/apic.c | 4
1 file
There's a default warning message that gets printed, however,
there are various failure conditions:
- a msr read can fail
- a msr write can fail
- a msr has an unexpected value
- all msrs have unexpected values (disable PMU)
Lastly, use %llx to silence checkpatch
Signed-off-by: Banda
Hi Peter,
Peter Zijlstra writes:
> On Fri, Apr 12, 2019 at 03:09:17PM -0400, Bandan Das wrote:
>>
>> There's a default warning message that gets printed, however,
>> there are various failure conditions:
>> - a msr read can fail
>> - a msr write can fa
or message in
virtualized environment") completely removed printing the msr in
question but these messages could be helpful for debugging vPMUs as
well. Add them back and change them to pr_debugs, this keeps the
behavior the same for baremetal.
Lastly, use %llx to silence checkpatch
Signed-off
David Hildenbrand writes:
...
>> vmx->nested.cached_vmcs12 = kmalloc(VMCS12_SIZE, GFP_KERNEL);
>> @@ -10325,36 +10321,43 @@ static inline bool
>> nested_vmx_merge_msr_bitmap(struct kvm_vcpu *vcpu,
>> /* This shortcut is ok because we support only x2APIC MSRs so far. */
>> if (!nest
Christian König writes:
> Hi Bandas,
>
> thanks for the patch, but this is a known issue with a fix already on
> the way into the next -rc.
Oh great! Thank you, do you have a pointer to the patch so that I can test?
> Regards,
> Christian.
>
> Am 07.12.2017 um 09:00 schrieb
there will be no way to break out of the loop when enabling 64bit BAR.
Add checks and exit the loop in these cases without attempting to enable
BAR.
Signed-off-by: Bandan Das
---
arch/x86/pci/fixup.c | 7 ++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/arch/x86/pci/fixup.c
David Hildenbrand writes:
...
>> v1:
>> https://lkml.org/lkml/2017/6/29/958
>>
>> Bandan Das (3):
>> KVM: vmx: Enable VMFUNCs
>> KVM: nVMX: Enable VMFUNC for the L1 hypervisor
>> KVM: nVMX: Emulate EPTP switching for the L1 hypervisor
>>
&
Enable VMFUNC in the secondary execution controls. This simplifies the
changes necessary to expose it to nested hypervisors. VMFUNCs still
cause #UD when invoked.
Signed-off-by: Paolo Bonzini
Signed-off-by: Bandan Das
---
arch/x86/include/asm/vmx.h | 3 +++
arch/x86/kvm/vmx.c | 22
When L2 uses vmfunc, L0 utilizes the associated vmexit to
emulate a switching of the ept pointer by reloading the
guest MMU.
Signed-off-by: Paolo Bonzini
Signed-off-by: Bandan Das
---
arch/x86/include/asm/vmx.h | 6 +++
arch/x86/kvm/vmx.c | 124
Expose VMFUNC in MSRs and VMCS fields. No actual VMFUNCs are enabled.
Signed-off-by: Paolo Bonzini
Signed-off-by: Bandan Das
---
arch/x86/kvm/vmx.c | 53 +++--
1 file changed, 51 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kvm/vmx.c b
Paolo Bonzini writes:
> On 03/08/2017 13:39, David Hildenbrand wrote:
>>> + /* AD, if set, should be supported */
>>> + if ((address & VMX_EPT_AD_ENABLE_BIT)) {
>>> + if (!enable_ept_ad_bits)
>>> + return false;
>> In theory (I guess) we would have to check here if
u reload.
These patches expose eptp switching/vmfunc to the nested hypervisor.
vmfunc is enabled in the secondary controls for the host and is
exposed to the nested hypervisor. However, if the nested hypervisor
decides to use eptp switching, L0 emulates it.
v1:
https://lkml.org/lkml/2017/6/29/958
Enable VMFUNC in the secondary execution controls. This simplifies the
changes necessary to expose it to nested hypervisors. VMFUNCs still
cause #UD when invoked.
Signed-off-by: Paolo Bonzini
Signed-off-by: Bandan Das
---
arch/x86/include/asm/vmx.h | 3 +++
arch/x86/kvm/vmx.c | 22
When L2 uses vmfunc, L0 utilizes the associated vmexit to
emulate a switching of the ept pointer by reloading the
guest MMU.
Signed-off-by: Paolo Bonzini
Signed-off-by: Bandan Das
---
arch/x86/include/asm/vmx.h | 6 +++
arch/x86/kvm/vmx.c | 130
Expose VMFUNC in MSRs and VMCS fields. No actual VMFUNCs are enabled.
Signed-off-by: Paolo Bonzini
Signed-off-by: Bandan Das
---
arch/x86/kvm/vmx.c | 53 +++--
1 file changed, 51 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kvm/vmx.c b
d in the secondary controls for the host and is
exposed to the nested hypervisor. However, if the nested hypervisor
decides to use eptp switching, L0 emulates it.
v1:
https://lkml.org/lkml/2017/6/29/958
Bandan Das (3):
KVM: vmx: Enable VMFUNCs
KVM: nVMX: Enable VMFUNC for the L1 hypervisor
K
Radim Krčmář writes:
> 2017-07-28 15:52-0400, Bandan Das:
>> When L2 uses vmfunc, L0 utilizes the associated vmexit to
>> emulate a switching of the ept pointer by reloading the
>> guest MMU.
>>
>> Signed-off-by: Paolo Bonzini
>> Signed-off-by: Bandan D
Hi David,
David Hildenbrand writes:
>> +static inline bool nested_cpu_has_eptp_switching(struct vmcs12 *vmcs12)
>> +{
>> +return nested_cpu_has_vmfunc(vmcs12) &&
>> +(vmcs12->vm_function_control &
>> + VMX_VMFUNC_EPTP_SWITCHING);
>> +}
>> +
>> static inline bool is_n
Jintack Lim writes:
...
>>
>> I'll share my experiment setup shortly.
>
> I summarized my experiment setup here.
>
> https://github.com/columbia/nesting-pub/wiki/Nested-virtualization-on-ARM-setup
Thanks Jintack! I was able to test L2 boot up with these instructions.
Next, I will try to run some
When L2 uses vmfunc, L0 utilizes the associated vmexit to
emulate a switching of the ept pointer by reloading the
guest MMU.
Signed-off-by: Paolo Bonzini
Signed-off-by: Bandan Das
---
arch/x86/include/asm/vmx.h | 6 +++
arch/x86/kvm/vmx.c | 124
Expose VMFUNC in MSRs and VMCS fields. No actual VMFUNCs are enabled.
Signed-off-by: Paolo Bonzini
Signed-off-by: Bandan Das
---
arch/x86/kvm/vmx.c | 53 +++--
1 file changed, 51 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kvm/vmx.c b
ndary controls for the host and is
exposed to the nested hypervisor. However, if the nested hypervisor
decides to use eptp switching, L0 emulates it.
v1:
https://lkml.org/lkml/2017/6/29/958
Bandan Das (3):
KVM: vmx: Enable VMFUNCs
KVM: nVMX: Enable VMFUNC for the L1 hypervisor
KVM: nVMX: Em
Enable VMFUNC in the secondary execution controls. This simplifies the
changes necessary to expose it to nested hypervisors. VMFUNCs still
cause #UD when invoked.
Signed-off-by: Paolo Bonzini
Signed-off-by: Bandan Das
---
arch/x86/include/asm/vmx.h | 3 +++
arch/x86/kvm/vmx.c | 22
Radim Krčmář writes:
> 2017-07-17 13:58-0400, Bandan Das:
>> Radim Krčmář writes:
>> ...
>>>> > and no other mentions of a VM exit, so I think that the VM exit happens
>>>> > only under these conditions:
>>>> >
>>>&g
Radim Krčmář writes:
...
>> > and no other mentions of a VM exit, so I think that the VM exit happens
>> > only under these conditions:
>> >
>> > — The EPT memory type (bits 2:0) must be a value supported by the
>> > processor as indicated in the IA32_VMX_EPT_VPID_CAP MSR (see
>> > Appen
David Hildenbrand writes:
+ /*
+ * If the (L2) guest does a vmfunc to the currently
+ * active ept pointer, we don't have to do anything else
+ */
+ if (vmcs12->ept_pointer != address) {
+ if (address >> cpuid_maxphyaddr(vcpu) ||
+
Radim Krčmář writes:
...
>> Why do you think it's a bug ?
>
> SDM defines a different behavior and hardware doesn't do that either.
> There are only two reasons for a VMFUNC VM exit from EPTP switching:
>
> 1) ECX > 0
> 2) EPTP would cause VM entry to fail if in VMCS.EPT_POINTER
>
> KVM can fail
Radim Krčmář writes:
...
>> > Thanks, we're not here to judge the guest, but to provide a bare-metal
>> > experience. :)
>>
>> There are certain cases where we do. For example, when L2 instruction emulation
>> fails we decide to kill L2 instead of injecting the error to L1 and let it
>> handle
>> t
Radim Krčmář writes:
> 2017-07-11 16:34-0400, Bandan Das:
>> Radim Krčmář writes:
>>
>> > 2017-07-11 15:50-0400, Bandan Das:
>> >> Radim Krčmář writes:
>> >> > 2017-07-11 14:24-0400, Bandan Das:
>> >> >> Bandan Das writ
Radim Krčmář writes:
> 2017-07-11 15:38-0400, Bandan Das:
>> Radim Krčmář writes:
>>
>> > 2017-07-11 14:35-0400, Bandan Das:
>> >> Jim Mattson writes:
>> >> ...
>> >> >>> I can find the definition for an vmexit i
Radim Krčmář writes:
> 2017-07-11 15:50-0400, Bandan Das:
>> Radim Krčmář writes:
>> > 2017-07-11 14:24-0400, Bandan Das:
>> >> Bandan Das writes:
>> >> > If there's a triple fault, I think it's a good idea to inject it
>> >>
Radim Krčmář writes:
> 2017-07-11 14:24-0400, Bandan Das:
>> Bandan Das writes:
>> > If there's a triple fault, I think it's a good idea to inject it
>> > back. Basically, there's no need to take care of damage c
Radim Krčmář writes:
> 2017-07-11 14:35-0400, Bandan Das:
>> Jim Mattson writes:
>> ...
>> >>> I can find the definition for an vmexit in case of index >=
>> >>> VMFUNC_EPTP_ENTRIES, but not for !vmcs12->eptp_list_address in the SDM.
>&g
Radim Krčmář writes:
> 2017-07-11 14:05-0400, Bandan Das:
>> Radim Krčmář writes:
>>
>> > [David did a great review, so I'll just point out things I noticed.]
>> >
>> > 2017-07-11 09:51+0200, David Hildenbrand:
>> >> On 10.07.20
Jim Mattson writes:
...
>>> I can find the definition for an vmexit in case of index >=
>>> VMFUNC_EPTP_ENTRIES, but not for !vmcs12->eptp_list_address in the SDM.
>>>
>>> Can you give me a hint?
>>
>> I don't think there is. Since, we are basically emulating eptp switching
>> for L2, this is a go
Bandan Das writes:
>>> + /*
>>> +* If the (L2) guest does a vmfunc to the currently
>>> +* active ept pointer, we don't have to do anything else
>>> +*/
>>> + if (vmcs12->ept_pointer != address) {
Radim Krčmář writes:
> [David did a great review, so I'll just point out things I noticed.]
>
> 2017-07-11 09:51+0200, David Hildenbrand:
>> On 10.07.2017 22:49, Bandan Das wrote:
>> > When L2 uses vmfunc, L0 utilizes the associated vmexit to
>> > emu
David Hildenbrand writes:
> On 10.07.2017 22:49, Bandan Das wrote:
>> When L2 uses vmfunc, L0 utilizes the associated vmexit to
>> emulate a switching of the ept pointer by reloading the
>> guest MMU.
>>
>> Signed-off-by: Paolo Bonzini
>> Signed-off-by:
Enable VMFUNC in the secondary execution controls. This simplifies the
changes necessary to expose it to nested hypervisors. VMFUNCs still
cause #UD when invoked.
Signed-off-by: Paolo Bonzini
Signed-off-by: Bandan Das
---
arch/x86/include/asm/vmx.h | 3 +++
arch/x86/kvm/vmx.c | 22
it.
v1:
https://lkml.org/lkml/2017/6/29/958
Bandan Das (3):
KVM: vmx: Enable VMFUNCs
KVM: nVMX: Enable VMFUNC for the L1 hypervisor
KVM: nVMX: Emulate EPTP switching for the L1 hypervisor
arch/x86/include/asm/vmx.h | 9
arch/x86/kvm/vmx.c | 125
When L2 uses vmfunc, L0 utilizes the associated vmexit to
emulate a switching of the ept pointer by reloading the
guest MMU.
Signed-off-by: Paolo Bonzini
Signed-off-by: Bandan Das
---
arch/x86/include/asm/vmx.h | 6 +
arch/x86/kvm/vmx.c | 58
Expose VMFUNC in MSRs and VMCS fields. No actual VMFUNCs are enabled.
Signed-off-by: Paolo Bonzini
Signed-off-by: Bandan Das
Reviewed-by: David Hildenbrand
---
arch/x86/kvm/vmx.c | 53 +++--
1 file changed, 51 insertions(+), 2 deletions(-)
diff
David Hildenbrand writes:
>> -kvm_queue_exception(vcpu, UD_VECTOR);
>> +struct vcpu_vmx *vmx = to_vmx(vcpu);
>> +struct vmcs12 *vmcs12;
>> +u32 function = vcpu->arch.regs[VCPU_REGS_RAX];
>> +
>> +/*
>> + * VMFUNC is only supported for nested guests, but we always enable th
switching/vmfunc to the nested hypervisor.
vmfunc is enabled in the secondary controls for the host and is
exposed to the nested hypervisor. However, if the nested hypervisor
decides to use eptp switching, L0 emulates it.
v1:
https://lkml.org/lkml/2017/6/29/958
Bandan Das (3):
KVM: vmx: Enable
Expose VMFUNC in MSRs and VMCS fields. No actual VMFUNCs are enabled.
Signed-off-by: Paolo Bonzini
Signed-off-by: Bandan Das
---
arch/x86/kvm/vmx.c | 53 +++--
1 file changed, 51 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kvm/vmx.c b
When L2 uses vmfunc, L0 utilizes the associated vmexit to
emulate a switching of the ept pointer by reloading the
guest MMU.
Signed-off-by: Paolo Bonzini
Signed-off-by: Bandan Das
---
arch/x86/include/asm/vmx.h | 6 +
arch/x86/kvm/vmx.c | 58
Enable VMFUNC in the secondary execution controls. This simplifies the
changes necessary to expose it to nested hypervisors. VMFUNCs still
cause #UD when invoked.
Signed-off-by: Paolo Bonzini
Signed-off-by: Bandan Das
Reviewed-by: David Hildenbrand
---
arch/x86/include/asm/vmx.h | 3
. However, if the nested hypervisor
decides to use eptp switching, L0 emulates it.
v1:
https://lkml.org/lkml/2017/6/29/958
Bandan Das (3):
KVM: vmx: Enable VMFUNCs
KVM: nVMX: Enable VMFUNC for the L1 hypervisor
KVM: nVMX: Emulate EPTP switching for the L1 hypervisor
arch/x86/include/asm
Enable VMFUNC in the secondary execution controls. This simplifies the
changes necessary to expose it to nested hypervisors. VMFUNCs still
cause #UD when invoked.
Signed-off-by: Paolo Bonzini
Signed-off-by: Bandan Das
---
arch/x86/include/asm/vmx.h | 3 +++
arch/x86/kvm/vmx.c | 22
When L2 uses vmfunc, L0 utilizes the associated vmexit to
emulate a switching of the ept pointer by reloading the
guest MMU.
Signed-off-by: Paolo Bonzini
Signed-off-by: Bandan Das
---
arch/x86/include/asm/vmx.h | 6 +
arch/x86/kvm/vmx.c | 55
Expose VMFUNC in MSRs and VMCS fields. No actual VMFUNCs are enabled.
Signed-off-by: Paolo Bonzini
Signed-off-by: Bandan Das
---
arch/x86/kvm/vmx.c | 53 +++--
1 file changed, 51 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kvm/vmx.c b
ection ?
Bandan
> On Thu, Jun 29, 2017 at 4:29 PM, Bandan Das wrote:
>> These patches expose eptp switching/vmfunc to the nested hypervisor. Testing
>> with
>> kvm-unit-tests seems to work ok.
>>
>> If the guest hypervisor enables vmfunc/eptp switching, a "sh
Hi Paolo,
Paolo Bonzini writes:
> - Original Message -
>> From: "Bandan Das"
>> To: k...@vger.kernel.org
>> Cc: pbonz...@redhat.com, linux-kernel@vger.kernel.org
>> Sent: Friday, June 30, 2017 1:29:55 AM
>> Subject: [PATCH 1/2] KVM
n a vmexit with exit reason 59. This hooks to handle_vmfunc()
to rewrite vmcs12->ept_pointer to reload the mmu and get a new root hpa.
This new shadow ept pointer is written to the shadow eptp list in the given
index. A next vmfunc call to switch to the given index would succeed without
an exit.
Advertise VMFUNC and EPTP switching function to the L1
hypervisor. Change nested_vmx_exit_handled() to return false
for VMFUNC so L0 can handle it.
Signed-off-by: Bandan Das
---
arch/x86/include/asm/vmx.h | 4
arch/x86/kvm/vmx.c | 18 ++
2 files changed, 22
list and subsequently, reload the mmu to resume L2.
On the next vmfunc(0, index) however, the processor will load the
entry without an exit.
Signed-off-by: Bandan Das
---
arch/x86/include/asm/vmx.h | 5 +++
arch/x86/kvm/vmx.c | 104 +
2 files
Jintack Lim writes:
> Compilation on 32bit arm architecture will fail without them.
...
>> It seems these functions are
>> defined separately in 32/64 bit specific header files. Or is it that
>> 64 bit compilation also depends on the 32 bit header file ?
>
> It's only for 32bit architecture. For
Hi Jintack,
Jintack Lim writes:
> Hi Bandan,
>
> On Tue, Jun 6, 2017 at 4:21 PM, Bandan Das wrote:
>> Jintack Lim writes:
>>
>>> Emulate taking an exception to the guest hypervisor running in the
>>> virtual EL2 as described in ARM ARM AArch64.TakeExc
Jintack Lim writes:
> Emulate taking an exception to the guest hypervisor running in the
> virtual EL2 as described in ARM ARM AArch64.TakeException().
ARM newbie here, I keep thinking of ARM ARM as a typo ;)
...
> +static inline int kvm_inject_nested_sync(struct kvm_vcpu *vcpu, u64 esr_el2)
> +
Jintack Lim writes:
> From: Christoffer Dall
>
> When running in virtual EL2 we use the shadow EL1 system register array
> for the save/restore process, so that hardware and especially the memory
> subsystem behaves as code written for EL2 expects while really running
> in EL1.
>
> This works g
Christoffer Dall writes:
> On Fri, Jun 02, 2017 at 01:36:23PM -0400, Bandan Das wrote:
>> Christoffer Dall writes:
>>
>> > On Thu, Jun 01, 2017 at 04:05:49PM -0400, Bandan Das wrote:
>> >> Jintack Lim writes:
>> >> ...
>> >> >
Christoffer Dall writes:
> On Thu, Jun 01, 2017 at 04:05:49PM -0400, Bandan Das wrote:
>> Jintack Lim writes:
>> ...
>> > +/**
>> > + * kvm_arm_setup_shadow_state -- prepare shadow state based on emulated
>> > mode
>> &g
Jintack Lim writes:
> From: Christoffer Dall
>
> Set up virtual EL2 context to hardware if the guest exception level is
> EL2.
>
> Signed-off-by: Christoffer Dall
> Signed-off-by: Jintack Lim
> ---
> arch/arm64/kvm/context.c | 32 ++--
> 1 file changed, 26 insertio
Jintack Lim writes:
...
> +/**
> + * kvm_arm_setup_shadow_state -- prepare shadow state based on emulated mode
> + * @vcpu: The VCPU pointer
> + */
> +void kvm_arm_setup_shadow_state(struct kvm_vcpu *vcpu)
> +{
> + struct kvm_cpu_context *ctxt = &vcpu->arch.ctxt;
> +
> + ctxt->hw_pstate =
"Huang, Kai" writes:
...
> Hi Bandan,
>
> I was just suggesting. You and Paolo still make the decision :)
Sure Kai, I don't mind the name change at all.
The maintainer has already picked this up and I don't think
the function name change is worth submitting a follow up.
Thank you very much for t
Paolo Bonzini writes:
...
>> Is the purpose of returning 1 to make upper layer code to inject PML
>> full VMEXIt to L1 in nested_ept_inject_page_fault?
>
> Yes, it triggers a fault
>>> +
>>> +gpa = vmcs_read64(GUEST_PHYSICAL_ADDRESS) & ~0xFFFull;
>>> +
>>> +page = nested_get_page(v
Hi Kai,
"Huang, Kai" writes:
> On 5/6/2017 7:25 AM, Bandan Das wrote:
>> When KVM updates accessed/dirty bits, this hook can be used
>> to invoke an arch specific function that implements/emulates
>> dirty logging such as PML.
>>
>> Signed-off-by:
Paolo Bonzini writes:
> On 05/05/2017 21:25, Bandan Das wrote:
>> v2:
>> 2/3: Clear out all bits except bit 12
>> 3/3: Slightly modify an existing comment, honor L0's
>> PML setting when clearing it for L1
>>
>> v1:
>> http://www.spinics
When KVM updates accessed/dirty bits, this hook can be used
to invoke an arch specific function that implements/emulates
dirty logging such as PML.
Signed-off-by: Bandan Das
---
arch/x86/include/asm/kvm_host.h | 2 ++
arch/x86/kvm/mmu.c | 15 +++
arch/x86/kvm/mmu.h
if L1
has enabled PML. If the PML index overflows, we change the
exit reason and run L1 to simulate a PML full event.
Signed-off-by: Bandan Das
---
arch/x86/kvm/vmx.c | 81 --
1 file changed, 79 insertions(+), 2 deletions(-)
diff --git a/arch
write the gpa to the
buffer provided by L1. If the index overflows, we just
change the exit reason before running L1.
Bandan Das (3):
kvm: x86: Add a hook for arch specific dirty logging emulation
nVMX: Implement emulated Page Modification Logging
nVMX: Advertise PML to L1 hypervisor
arc
Advertise the PML bit in vmcs12 but don't try to enable
it in hardware when running L2 since L0 is emulating it. Also,
preserve L0's settings for PML since it may still
want to log writes.
Signed-off-by: Bandan Das
---
arch/x86/kvm/vmx.c | 16 +++-
1 file changed, 11
Paolo Bonzini writes:
> On 04/05/2017 00:14, Bandan Das wrote:
>> Advertise the PML bit in vmcs12 but clear it out
>> before running L2 since we don't depend on hardware support
>> for PML emulation.
>>
>> Signed-off-by: Bandan Das
>> ---
>> a
Paolo Bonzini writes:
> On 04/05/2017 00:14, Bandan Das wrote:
>> +if (vmx->nested.pml_full) {
>> +exit_reason = EXIT_REASON_PML_FULL;
>> +vmx->nested.pml_full = false;
>> +} else if (fault->error_code & PFER
When KVM updates accessed/dirty bits, this hook can be used
to invoke an arch specific function that implements/emulates
dirty logging such as PML.
Signed-off-by: Bandan Das
---
arch/x86/include/asm/kvm_host.h | 2 ++
arch/x86/kvm/mmu.c | 15 +++
arch/x86/kvm/mmu.h
if L1
has enabled PML. If the PML index overflows, we change the
exit reason and run L1 to simulate a PML full event.
Signed-off-by: Bandan Das
---
arch/x86/kvm/vmx.c | 81 +-
1 file changed, 80 insertions(+), 1 deletion(-)
diff --git a/arch/x86
Advertise the PML bit in vmcs12 but clear it out
before running L2 since we don't depend on hardware support
for PML emulation.
Signed-off-by: Bandan Das
---
arch/x86/kvm/vmx.c | 6 +-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
These patches implement PML on top of EPT A/D emulation
(ae1e2d1082ae).
When dirty bit is being set, we write the gpa to the
buffer provided by L1. If the index overflows, we just
change the exit reason before running L1.
Bandan Das (3):
kvm: x86: Add a hook for arch specific dirty logging
Paolo Bonzini writes:
> - Original Message -
>> From: "Bandan Das"
>> To: "Paolo Bonzini"
>> Cc: linux-kernel@vger.kernel.org, k...@vger.kernel.org, da...@redhat.com
>> Sent: Wednesday, April 12, 2017 7:35:16 AM
>> Subject: Re: [
Paolo Bonzini writes:
...
> accessed_dirty = have_ad ? PT_GUEST_ACCESSED_MASK : 0;
> +
> + /*
> + * FIXME: on Intel processors, loads of the PDPTE registers for PAE
> paging
> + * by the MOV to CR instruction are treated as reads and do not cause
> the
> + * processor to
Paolo Bonzini writes:
> On 17/02/2017 01:45, Bandan Das wrote:
>> Paolo Bonzini writes:
>>
>>> The FPU is always active now when running KVM.
>>
>> The lazy code was a performance optimization, correct ?
>> Is this just dormant code and being remove
Paolo Bonzini writes:
> - Original Message -
>> From: "Bandan Das"
>> To: "Paolo Bonzini"
>> Cc: linux-kernel@vger.kernel.org, k...@vger.kernel.org
>> Sent: Friday, February 17, 2017 1:04:14 AM
>> Subject: Re: [PATCH] KVM: VMX