Christopherson
Signed-off-by: Uros Bizjak
Reviewed-by: Krish Sadhukhan
---
arch/x86/include/asm/kvm_host.h | 25 -
arch/x86/kvm/svm/sev.c | 2 --
arch/x86/kvm/svm/svm.c | 2 --
arch/x86/kvm/vmx/vmx.c | 4 +---
arch/x86/kvm/vmx/vmx_ops.h
+ ".pushsection .discard.instr_end\n" \
+ ".long 668b - .\n" \
+ ".popsection\n" \
+ "669:\n" \
+ _ASM_EXTABLE(666b, 667b)
+
#define KVM_DEFAULT_PLE_GAP 128
#define KVM_VMX_DEFAULT_PLE_WINDOW 4096
#define KVM_DEFAULT_PLE_WINDOW_GROW 2
Reviewed-by: Krish Sadhukhan
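Two kernel idioms meet in the hunk above: the ".discard.instr_end" pushsection is the expansion of the objtool instrumentation_end() annotation, and _ASM_EXTABLE(666b, 667b) records an exception-table entry pairing a potentially faulting instruction (label 666) with a fixup target (label 667), so a fault resumes at the fixup instead of oopsing. A minimal kernel-style sketch of the extable pattern, with a hypothetical vmread_sketch() helper (not the actual vmx_ops.h macro):

#include <asm/asm.h>	/* _ASM_EXTABLE */

/*
 * Sketch only: label 666 may fault (e.g. VMREAD outside VMX operation);
 * the exception table sends the fault handler to label 667, where we
 * fall through and return 'value' (pre-zeroed for the fault path).
 */
static inline unsigned long vmread_sketch(unsigned long field)
{
	unsigned long value = 0;

	asm volatile("666: vmread %1, %0\n\t"
		     "667:\n\t"
		     _ASM_EXTABLE(666b, 667b)
		     : "+r" (value)
		     : "r" (field)
		     : "cc");
	return value;
}

Separately, the PLE defaults quoted above drive pause-loop exiting: when a vCPU busy-waits past the window, KVM exits and enlarges it. A simplified sketch of a multiplicative grow with a saturating cap (PLE_WINDOW_MAX is hypothetical; the real helper in kvm_host.h differs in detail):

#define PLE_WINDOW_MAX	(1U << 30)	/* hypothetical cap */

/* Grow the PLE window by the grow modifier, saturating at the cap. */
static unsigned int grow_ple_window(unsigned int window)
{
	unsigned long long next =
		(unsigned long long)window * KVM_DEFAULT_PLE_WINDOW_GROW;

	return next > PLE_WINDOW_MAX ? PLE_WINDOW_MAX : (unsigned int)next;
}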
The following commit has been merged into the x86/cpu branch of tip:
Commit-ID: e1ebb2b49048c4767cfa0d8466f9c701e549fa5e
Gitweb:
https://git.kernel.org/tip/e1ebb2b49048c4767cfa0d8466f9c701e549fa5e
Author: Krish Sadhukhan
AuthorDate: Thu, 17 Sep 2020 21:20:38
Committer
The following commit has been merged into the x86/cpu branch of tip:
Commit-ID: 5866e9205b47a983a77ebc8654949f696342f2ab
Gitweb:
https://git.kernel.org/tip/5866e9205b47a983a77ebc8654949f696342f2ab
Author: Krish Sadhukhan
AuthorDate: Thu, 17 Sep 2020 21:20:36
Committer
The following commit has been merged into the x86/cpu branch of tip:
Commit-ID: 75d1cc0e05af579301ce4e49cf6399be4b4e6e76
Gitweb:
https://git.kernel.org/tip/75d1cc0e05af579301ce4e49cf6399be4b4e6e76
Author: Krish Sadhukhan
AuthorDate: Thu, 17 Sep 2020 21:20:37
Committer
The following commit has been merged into the x86/cpu branch of tip:
Commit-ID: f1f325183519ba25370765607e2732d6edf53de1
Gitweb:
https://git.kernel.org/tip/f1f325183519ba25370765607e2732d6edf53de1
Author: Krish Sadhukhan
AuthorDate: Thu, 17 Sep 2020 21:20:36
Committer
The following commit has been merged into the x86/cpu branch of tip:
Commit-ID: 789521fca70ec8cb98f7257b880405e46f8a47a1
Gitweb:
https://git.kernel.org/tip/789521fca70ec8cb98f7257b880405e46f8a47a1
Author: Krish Sadhukhan
AuthorDate: Thu, 17 Sep 2020 21:20:37
Committer
page.
Signed-off-by: Krish Sadhukhan
---
arch/x86/kvm/svm/sev.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 7bf7bf734979..3c9a45efdd4d 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -384,7 +
-by: Tom Lendacky
Signed-off-by: Krish Sadhukhan
---
arch/x86/mm/pat/set_memory.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index d1b2a889f035..40baa90e74f4 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch
-
enforced cache coherency is indicated by EAX[10] in CPUID leaf 0x8000001f.
Suggested-by: Tom Lendacky
Signed-off-by: Krish Sadhukhan
---
arch/x86/include/asm/cpufeatures.h | 1 +
arch/x86/kernel/cpu/scattered.c | 1 +
2 files changed, 2 insertions(+)
diff --git a/arch/x86/include/asm
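As a userspace illustration of the bit just described (a sketch using the compiler's <cpuid.h> helper, not the kernel's scattered-feature plumbing):

#include <cpuid.h>
#include <stdbool.h>
#include <stdio.h>

/* EAX[10] of CPUID leaf 0x8000001f: hardware-enforced cache coherency. */
static bool hw_enforced_cache_coherency(void)
{
	unsigned int eax, ebx, ecx, edx;

	if (!__get_cpuid(0x8000001f, &eax, &ebx, &ecx, &edx))
		return false;	/* extended leaf not implemented */
	return eax & (1U << 10);
}

int main(void)
{
	printf("hardware-enforced cache coherency: %s\n",
	       hw_enforced_cache_coherency() ? "yes" : "no");
	return 0;
}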
[PATCH 3/3 v4] KVM: SVM: Don't flush cache if hardware enforces cache coherency across encryption domains
arch/x86/include/asm/cpufeatures.h | 1 +
arch/x86/kernel/cpu/scattered.c | 1 +
arch/x86/kvm/svm/sev.c | 3 ++-
arch/x86/mm/pat/set_memory.c | 2 +-
4 files changed, 5 insertions(+), 2 deletions(-)
On 9/15/20 4:30 AM, lihaiwei.ker...@gmail.com wrote:
From: Haiwei Li
'exit_fastpath' isn't used anywhere else, so remove it.
Suggested-by: Krish Sadhukhan
Signed-off-by: Haiwei Li
---
arch/x86/kvm/svm/svm.c | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
diff -
used anywhere else and svm_vcpu_run()
doesn't return from anywhere else either.
Also, svm_exit_handlers_fastpath() doesn't have any other caller.
Should we get rid of it as well?
For your changes,
Reviewed-by: Krish Sadhukhan
On 9/11/20 12:36 PM, Dave Hansen wrote:
On 9/11/20 12:25 PM, Krish Sadhukhan wrote:
diff --git a/arch/x86/include/asm/cpufeatures.h
b/arch/x86/include/asm/cpufeatures.h
index 81335e6fe47d..0e5b27ee5931 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
++--
arch/x86/mm/mem_encrypt_identity.c | 4 ++--
arch/x86/mm/pat/set_memory.c | 2 +-
9 files changed, 21 insertions(+), 12 deletions(-)
Krish Sadhukhan (4):
x86: AMD: Replace numeric value for SME CPUID leaf with a #define
x86: AMD: Add hardware-enforced cache coherency
-by: Tom Lendacky
Signed-off-by: Krish Sadhukhan
---
arch/x86/mm/pat/set_memory.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index d1b2a889f035..78d5511c5edd 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch
Signed-off-by: Krish Sadhukhan
---
arch/x86/boot/compressed/mem_encrypt.S | 5 +++--
arch/x86/include/asm/cpufeatures.h | 5 +
arch/x86/kernel/cpu/amd.c | 2 +-
arch/x86/kernel/cpu/scattered.c | 4 ++--
arch/x86/kvm/cpuid.c | 2 +-
arch/x86/kvm/svm
page.
Signed-off-by: Krish Sadhukhan
---
arch/x86/kvm/svm/sev.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 402dc4234e39..8aa2209f2637 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -384,7 +
+ for (i = 0; i < n; i++) {
+ set_page_dirty_lock(pages[i]);
+ mark_page_accessed(pages[i]);
+ }
sev_unpin_memory(kvm, pages, n);
return ret;
}
Reviewed-by: Krish Sadhukhan
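The hunk quoted above marks each guest page dirty and accessed before the pin is dropped, so writes made while the pages were pinned are not lost to reclaim. A generic kernel-style sketch of the same pattern (release_written_pages() is hypothetical, and unpin_user_pages() stands in for whatever sev_unpin_memory() does internally):

#include <linux/mm.h>
#include <linux/swap.h>		/* mark_page_accessed() */

/* Release pages the guest may have written while they were pinned. */
static void release_written_pages(struct page **pages, unsigned long n)
{
	unsigned long i;

	for (i = 0; i < n; i++) {
		set_page_dirty_lock(pages[i]);	/* may sleep */
		mark_page_accessed(pages[i]);	/* keep the page young in the LRU */
	}
	unpin_user_pages(pages, n);		/* drop the GUP references */
}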
On 6/25/20 1:03 AM, Joerg Roedel wrote:
From: Joerg Roedel
Match the naming with other nested svm functions.
No functional changes.
Signed-off-by: Joerg Roedel
---
arch/x86/kvm/svm/svm.c | 6 +++---
arch/x86/kvm/svm/svm.h | 2 +-
2 files changed, 4 insertions(+), 4 deletions(-)
diff -
+++---
arch/x86/kvm/svm/svm.h| 20 +++---
5 files changed, 85 insertions(+), 85 deletions(-)
Reviewed-by: Krish Sadhukhan
On 5/29/20 8:39 AM, Paolo Bonzini wrote:
Similar to VMX, the state that is captured through the currently available
IOCTLs is a mix of L1 and L2 state, dependent on whether the L2 guest was
running at the moment when the process was interrupted to save its state.
In particular, the SVM-specifi
On 5/29/20 8:39 AM, Paolo Bonzini wrote:
According to the AMD manual, the effect of turning off EFER.SVME while a
guest is running is undefined. We make it leave guest mode immediately,
similar to the effect of clearing the VMX bit in MSR_IA32_FEAT_CTL.
I see that svm_set_efer() is called i
On 5/29/20 8:39 AM, Paolo Bonzini wrote:
There is only one GIF flag for the whole processor, so make sure it is not
clobbered
when switching to L2 (in which case we also have to include the
V_GIF_ENABLE_MASK,
lest we confuse enable_gif/disable_gif/gif_set). When going back, L1 could in
theor
On 5/29/20 8:39 AM, Paolo Bonzini wrote:
The control state changes on every L2->L0 vmexit, and we will have to
serialize it in the nested state. So keep it up to date in svm->nested.ctl
and just copy them back to the nested VMCB in nested_svm_vmexit.
Signed-off-by: Paolo Bonzini
---
arch/x
On 5/29/20 12:04 PM, Paolo Bonzini wrote:
On 29/05/20 20:10, Krish Sadhukhan wrote:
Unmapping the nested VMCB in enter_svm_guest_mode is a bit of a wart,
since the map is not used elsewhere in the function. There are
just two calls, so move it there.
The last sentence sounds a bit incomplete.
On 5/29/20 8:39 AM, Paolo Bonzini wrote:
Split out filling svm->vmcb.save and svm->vmcb.control before VMRUN.
Only the latter will be useful when restoring nested SVM state.
This patch introduces no semantic change, so the MMU setup is still
done in nested_prepare_vmcb_save. The next patch wi
On 5/29/20 8:39 AM, Paolo Bonzini wrote:
Unmapping the nested VMCB in enter_svm_guest_mode is a bit of a wart,
since the map is not used elsewhere in the function. There are
just two calls, so move it there.
The last sentence sounds a bit incomplete.
Also, does it make sense to mention the r
On 5/29/20 8:39 AM, Paolo Bonzini wrote:
svm_load_mmu_pgd is delaying the write of GUEST_CR3 to prepare_vmcs02
Did you mean to say enter_svm_guest_mode here?
as
an optimization, but this is only correct before the nested vmentry.
If userspace is modifying CR3 with KVM_SET_SREGS after the VM
On 5/26/20 10:22 AM, Paolo Bonzini wrote:
The usual drill at this point, except there is no code to remove because this
case was not handled at all.
Signed-off-by: Paolo Bonzini
---
arch/x86/kvm/svm/nested.c | 27 +++
1 file changed, 27 insertions(+)
diff --git a/a
On 5/26/20 10:22 AM, Paolo Bonzini wrote:
In case an interrupt arrives after nested.check_events but before the
call to kvm_cpu_has_injectable_intr, we could end up enabling the interrupt
window even if the interrupt is actually going to be a vmexit. This is
useless rather than harmful, but it
+++
arch/x86/kvm/x86.c | 4
2 files changed, 7 insertions(+)
Nit: The added 'break' statement in patch #2 is not required.
Reviewed-by: Krish Sadhukhan
kvm/svm/svm.c | 11 ++-
arch/x86/kvm/svm/svm.h | 28 +---
5 files changed, 116 insertions(+), 65 deletions(-)
Reviewed-by: Krish Sadhukhan
On 5/15/20 10:41 AM, Paolo Bonzini wrote:
When restoring SVM nested state, the control state will be stored already
in svm->nested by KVM_SET_NESTED_STATE. We will not need to fish it out of
L1's VMCB. Pull everything into a separate function so that it is
documented which fields are needed.
On 10/15/19 6:27 PM, Xiaoyao Li wrote:
On 10/16/2019 6:05 AM, Krish Sadhukhan wrote:
On 10/15/2019 09:40 AM, Xiaoyao Li wrote:
Rename {vmx,nested_vmx}_vcpu_setup to {vmx,nested_vmx}_vmcs_setup,
to match what they really do.
No functional change.
Signed-off-by: Xiaoyao Li
---
arch/x86
On 10/15/2019 09:40 AM, Xiaoyao Li wrote:
Move the MSR bitmap setup codes to vmx_vmcs_setup() and only setup them
when hardware has msr_bitmap capability.
Signed-off-by: Xiaoyao Li
---
arch/x86/kvm/vmx/vmx.c | 39 ---
1 file changed, 20 insertions(+), 19 deletions(-)
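The guard being described is a plain capability check; a sketch of the shape (assuming KVM's cpu_has_vmx_msr_bitmap() helper, with an illustrative body rather than the patch itself):

/* Sketch: point the VMCS at an MSR bitmap only if the CPU has one. */
static void vmcs_setup_msr_bitmap_sketch(struct vcpu_vmx *vmx)
{
	if (!cpu_has_vmx_msr_bitmap())
		return;		/* every MSR access will exit unconditionally */

	vmcs_write64(MSR_BITMAP, __pa(vmx->vmcs01.msr_bitmap));
}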
On 10/15/2019 09:40 AM, Xiaoyao Li wrote:
Rename {vmx,nested_vmx}_vcpu_setup to {vmx,nested_vmx}_vmcs_setup,
to match what they really do.
No functional change.
Signed-off-by: Xiaoyao Li
---
arch/x86/kvm/vmx/nested.c | 2 +-
arch/x86/kvm/vmx/nested.h | 2 +-
arch/x86/kvm/vmx/vmx.c|
everything so that the vCPU's 64-bit mode is determined
directly from EFER_LMA and the VMCS checks are based on that, which
matches section 26.2.4 of the SDM.
Cc: Sean Christopherson
Cc: Jim Mattson
Cc: Krish Sadhukhan
Fixes: 5845038c111db27902bc220a4f70070fe945871c
Signed-off-by: Paolo Bonzini
and SVM.
Fixes: 74f169090b6f ("kvm/svm: Setup MCG_CAP on AMD properly")
Fixes: b31c114b82b2 ("KVM: X86: Provide a capability to disable PAUSE
intercepts")
Fixes: 411b44ba80ab ("svm: Implements update_pi_irte hook to setup posted
interrupt")
Cc: Krish Sadhukhan
Signed-off-by: Sean Christopherson
On 08/01/2019 09:46 AM, Sean Christopherson wrote:
Remove two stale checks for non-NULL ops now that they're implemented by
both VMX and SVM.
Fixes: 74f169090b6f ("kvm/svm: Setup MCG_CAP on AMD properly")
Fixes: b31c114b82b2 ("KVM: X86: Provide a capability to disable PAUSE
intercepts")
Signed-off-by: Sean Christopherson
0xFF00;
@@ -7804,6 +7806,8 @@ static int __init vmx_init(void)
}
#endif
+ host_x2apic_enabled = x2apic_enabled();
+
r = kvm_init(&vmx_x86_ops, sizeof(struct vcpu_vmx),
__alignof__(struct vcpu_vmx), THIS_MODULE);
if (r)
Reviewed-by: Krish Sadhukhan
On 06/06/2019 11:41 AM, Sean Christopherson wrote:
On Thu, Jun 06, 2019 at 05:24:12PM +0200, Paolo Bonzini wrote:
These function do not prepare the entire state of the vmcs02, only the
rarely needed parts. Rename them to make this clearer.
Signed-off-by: Paolo Bonzini
---
arch/x86/kvm/vm
*vmx)
vmcs_write64(TSC_MULTIPLIER, vmx->current_tsc_ratio);
}
+void dump_vmcs(void);
+
#endif /* __KVM_X86_VMX_H */
Reviewed-by: Krish Sadhukhan
("PostedIntrVec = 0x%02x\n", vmcs_read16(POSTED_INTR_NV));
if ((secondary_exec_control & SECONDARY_EXEC_ENABLE_EPT))
Reviewed-by: Krish Sadhukhan
+ if (data & 0xF8F8F8F8F8F8F8F8)
+ return false;
+ /* 0, 1, 4, 5, 6, 7 are valid values. */
+ return (data | ((data & 0x0202020202020202) << 1)) == data;
+}
+
#endif
Reviewed-by: Krish Sadhukhan
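The quoted check is a byte-parallel trick: the first mask rejects any PAT byte >= 8, and the second expression rejects 2 and 3, the only remaining values with bit 1 set but bit 2 clear, because OR-ing bit 1 shifted into bit 2 changes the value exactly in that case. A standalone sketch that can be compiled and run:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Each PAT byte must be one of 0, 1, 4, 5, 6, 7. */
static bool pat_valid(uint64_t data)
{
	if (data & 0xF8F8F8F8F8F8F8F8ull)	/* any byte >= 8? */
		return false;
	/* bit 1 set requires bit 2 set, ruling out 2 (010b) and 3 (011b) */
	return (data | ((data & 0x0202020202020202ull) << 1)) == data;
}

int main(void)
{
	printf("%d\n", pat_valid(0x0007040600070406ull));	/* 1: all bytes valid */
	printf("%d\n", pat_valid(0x0000000000000002ull));	/* 0: low byte is 2   */
	printf("%d\n", pat_valid(0x0000000000000008ull));	/* 0: byte >= 8       */
	return 0;
}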
On 10/20/2018 03:50 PM, KarimAllah Ahmed wrote:
The spec only requires the posted interrupt descriptor address to be
64-bytes aligned (i.e. bits[0:5] == 0). Using page_address_valid also
forces the address to be page aligned.
Only validate that the address does not cross the maximum physical
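In other words, the architectural requirement is only that bits [5:0] of the address be zero (64-byte alignment) and that the 64-byte descriptor fit below the CPU's maximum physical address. A sketch of that relaxed check (pi_desc_addr_valid() and the maxphyaddr parameter are illustrative, standing in for KVM's cpuid_maxphyaddr()):

#include <stdbool.h>
#include <stdint.h>

/* 64-byte aligned and entirely below the maximum physical address. */
static bool pi_desc_addr_valid(uint64_t addr, unsigned int maxphyaddr)
{
	if (addr & 0x3fULL)
		return false;			/* bits [5:0] must be zero */
	return (addr + 64) <= (1ULL << maxphyaddr);
}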
exit_qual))
return -EINVAL;
- if (kvm_state->flags & KVM_STATE_NESTED_RUN_PENDING)
- vmx->nested.nested_run_pending = 1;
-
vmx->nested.dirty_vmcs12 = true;
ret = enter_vmx_non_root_mode(vcpu, NULL);
if (ret)
Reviewed-by: Krish Sadhukhan
r_guest_kernel_gs_base);
+#else
+ vmcs_writel(HOST_FS_BASE, segment_base(vmx->host_state.fs_sel));
+ vmcs_writel(HOST_GS_BASE, segment_base(vmx->host_state.gs_sel));
#endif
if (boot_cpu_has(X86_FEATURE_MPX))
rdmsrl(MSR_IA32_BNDCFGS, vmx->host_state.msr_host_bndcfgs);
Reviewed-by: Krish Sadhukhan
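For context, segment_base() in the hunk above walks the GDT to recover a selector's base, which the descriptor scatters across three fields. A sketch of the reassembly (mirroring the kernel's get_desc_base(); the GDT lookup itself is omitted):

#include <asm/desc_defs.h>	/* struct desc_struct */

/* Reassemble a segment base from the split descriptor fields. */
static unsigned long desc_base_sketch(const struct desc_struct *d)
{
	return d->base0 | ((unsigned long)d->base1 << 16) |
	       ((unsigned long)d->base2 << 24);
}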
},
[CPUID_F_0_EDX] = { 0xf, 0, CPUID_EDX},
Reviewed-by: Krish Sadhukhan
ID_CONTROL_FIELD);
+
+ load_vmcs12_mmu_host_state(vcpu, vmcs12);
+
/*
* The emulated instruction was already skipped in
* nested_vmx_run, but the updated RIP was never
Reviewed-by: Krish Sadhukhan
On 11/02/2017 11:40 PM, Wanpeng Li wrote:
2017-11-03 14:31 GMT+08:00 Krish Sadhukhan :
On 11/02/2017 05:50 PM, Wanpeng Li wrote:
From: Wanpeng Li
According to the SDM, if the "load IA32_BNDCFGS" VM-entry control is 1, the following checks are performed on the field for the IA
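For reference, the SDM's checks on that field amount to: reserved bits 11:2 must be zero, and the bound-directory base in bits 63:12 must be canonical. A sketch assuming 48-bit canonical addresses (bndcfgs_valid() is illustrative, not KVM's code):

#include <stdbool.h>
#include <stdint.h>

/* Canonical for 48-bit linear addresses: bits 63:47 all equal bit 47. */
static bool is_canonical(uint64_t addr)
{
	return (uint64_t)(((int64_t)addr << 16) >> 16) == addr;
}

/* IA32_BNDCFGS: bit 0 EN, bit 1 BNDPRESERVE, bits 11:2 reserved,
 * bits 63:12 base address of the bound directory. */
static bool bndcfgs_valid(uint64_t val)
{
	if (val & 0xFFCull)		/* reserved bits must be zero */
		return false;
	return is_canonical(val & ~0xFFFull);
}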