On 30/11/20 14:35, Maxim Levitsky wrote:
This quirk reflects the fact that we currently treat MSR_IA32_TSC
and MSR_IA32_TSC_ADJUST accesses by the host (e.g. QEMU) differently
from accesses by the guest.

For a host-initiated MSR_IA32_TSC read we currently always return the L1 TSC
value, and for a host-initiated write we perform TSC synchronization.

For a host-initiated MSR_IA32_TSC_ADJUST write, we don't make the TSC 'jump'
as we should for this MSR.

When the hypervisor uses the new TSC GET/SET state ioctls, none of this is
needed; therefore keep the legacy behavior only behind a quirk
which the hypervisor can disable.
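
For illustration, disabling the quirk from userspace would go through the
existing KVM_CAP_DISABLE_QUIRKS mechanism. A minimal sketch (vm_fd is a
placeholder for an already-created KVM VM file descriptor):

  #include <linux/kvm.h>
  #include <sys/ioctl.h>

  /* Opt out of the legacy host-side TSC MSR behavior on this VM. */
  static int disable_tsc_host_access_quirk(int vm_fd)
  {
          struct kvm_enable_cap cap = {
                  .cap = KVM_CAP_DISABLE_QUIRKS,
                  .args[0] = KVM_X86_QUIRK_TSC_HOST_ACCESS,
          };

          /*
           * From here on, host-initiated MSR_IA32_TSC and
           * MSR_IA32_TSC_ADJUST accesses take the same paths as
           * guest-initiated ones.
           */
          return ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
  }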

Suggested-by: Paolo Bonzini <pbonz...@redhat.com>
Signed-off-by: Maxim Levitsky <mlevi...@redhat.com>

This needs to be covered by a variant of the existing selftests testcase (running the same guest code, but with different host code, of course).
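
A rough sketch of the host side, using raw ioctls inside the test's main()
(vm_fd/vcpu_fd setup elided, and the written value is a placeholder):

  struct kvm_enable_cap cap = {
          .cap = KVM_CAP_DISABLE_QUIRKS,
          .args[0] = KVM_X86_QUIRK_TSC_HOST_ACCESS,
  };
  ioctl(vm_fd, KVM_ENABLE_CAP, &cap);

  /* One-entry kvm_msrs buffer for MSR_IA32_TSC (0x10). */
  struct {
          struct kvm_msrs header;
          struct kvm_msr_entry entry;
  } msr = {
          .header = { .nmsrs = 1 },
          .entry  = { .index = 0x10, .data = 42 * 1000000000ULL },
  };

  /* With the quirk disabled, this write takes the guest path
   * (offset adjustment) instead of kvm_synchronize_tsc(), ... */
  ioctl(vcpu_fd, KVM_SET_MSRS, &msr);

  /* ... and this read returns the current-level TSC instead of
   * unconditionally L1's. */
  ioctl(vcpu_fd, KVM_GET_MSRS, &msr);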

Paolo

---
  arch/x86/include/uapi/asm/kvm.h |  1 +
  arch/x86/kvm/x86.c              | 19 ++++++++++++++-----
  2 files changed, 15 insertions(+), 5 deletions(-)

diff --git a/arch/x86/include/uapi/asm/kvm.h b/arch/x86/include/uapi/asm/kvm.h
index 8e76d3701db3f..2a60fc6674164 100644
--- a/arch/x86/include/uapi/asm/kvm.h
+++ b/arch/x86/include/uapi/asm/kvm.h
@@ -404,6 +404,7 @@ struct kvm_sync_regs {
  #define KVM_X86_QUIRK_LAPIC_MMIO_HOLE    (1 << 2)
  #define KVM_X86_QUIRK_OUT_7E_INC_RIP     (1 << 3)
  #define KVM_X86_QUIRK_MISC_ENABLE_NO_MWAIT (1 << 4)
+#define KVM_X86_QUIRK_TSC_HOST_ACCESS      (1 << 5)

  #define KVM_STATE_NESTED_FORMAT_VMX   0
  #define KVM_STATE_NESTED_FORMAT_SVM   1
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 4f0ae9cb14b8a..46a2111d54840 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -3091,7 +3091,8 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
                break;
        case MSR_IA32_TSC_ADJUST:
                if (guest_cpuid_has(vcpu, X86_FEATURE_TSC_ADJUST)) {
-                       if (!msr_info->host_initiated) {
+                       if (!msr_info->host_initiated ||
+                           !kvm_check_has_quirk(vcpu->kvm, KVM_X86_QUIRK_TSC_HOST_ACCESS)) {
                                s64 adj = data - vcpu->arch.ia32_tsc_adjust_msr;
                                adjust_tsc_offset_guest(vcpu, adj);
                        }
@@ -3118,7 +3119,8 @@ int kvm_set_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
                vcpu->arch.msr_ia32_power_ctl = data;
                break;
        case MSR_IA32_TSC:
-               if (msr_info->host_initiated) {
+               if (msr_info->host_initiated &&
+                   kvm_check_has_quirk(vcpu->kvm, KVM_X86_QUIRK_TSC_HOST_ACCESS)) {
                        kvm_synchronize_tsc(vcpu, data);
                } else {
                        u64 adj = kvm_compute_tsc_offset(vcpu, data) - vcpu->arch.l1_tsc_offset;
@@ -3409,17 +3411,24 @@ int kvm_get_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
                msr_info->data = vcpu->arch.msr_ia32_power_ctl;
                break;
        case MSR_IA32_TSC: {
+               u64 tsc_offset;
+
                /*
                 * Intel SDM states that MSR_IA32_TSC read adds the TSC offset
                 * even when not intercepted. AMD manual doesn't explicitly
                 * state this but appears to behave the same.
                 *
-                * On userspace reads and writes, however, we unconditionally
+                * On userspace reads and writes, however, when
+                * KVM_X86_QUIRK_TSC_HOST_ACCESS is enabled, we unconditionally
                 * return L1's TSC value to ensure backwards-compatible
                 * behavior for migration.
                 */
-               u64 tsc_offset = msr_info->host_initiated ? vcpu->arch.l1_tsc_offset :
-                                                           vcpu->arch.tsc_offset;
+
+               if (msr_info->host_initiated &&
+                   kvm_check_has_quirk(vcpu->kvm, KVM_X86_QUIRK_TSC_HOST_ACCESS))
+                       tsc_offset = vcpu->arch.l1_tsc_offset;
+               else
+                       tsc_offset = vcpu->arch.tsc_offset;
                msr_info->data = kvm_scale_tsc(vcpu, rdtsc()) + tsc_offset;
                break;

