On 2011-07-18 20:26, Marcelo Tosatti wrote:
> 
> On Fri, Jul 08, 2011 at 02:40:53PM -0400, Bandan Das wrote:
>> I have already discussed this a bit with Nadav, but I am hoping someone
>> else has other ideas/clues/suggestions/comments. With recent versions of
>> the kernel (the last I tried is 3.0-rc5 with the nVMX patches already
>> merged), my L1 guest always hangs when I start L2.
>>
>> My setup: the host, L1 and L2 are all FC15, with the host running 3.0-rc5.
>> When L1 is up and running, I start L2 from L1. Within a minute or two,
>> both L1 and L2 hang. However, if I run tracing on the host, I see:
>>
>> ...
>> qemu-kvm-19756 [013] 153774.856178: kvm_exit: reason APIC_ACCESS rip 0xffffffff81025098 info 1380 0
>> qemu-kvm-19756 [013] 153774.856189: kvm_exit: reason VMREAD rip 0xffffffffa00d5127 info 0 0
>> qemu-kvm-19756 [013] 153774.856191: kvm_exit: reason VMREAD rip 0xffffffffa00d5127 info 0 0
>> ...
>>
>> My point being that I only see kvm_exit messages but no kvm_entry. Does
>> this mean that the VCPUs are somehow stuck in L2?
>>
>> Anyway, since this setup was running fine for me on older kernels and I
>> couldn't identify any significant changes in nVMX, I sifted through the
>> other KVM changes and found this:
>>
>> --
>> commit 1aa8ceef0312a6aae7dd863a120a55f1637b361d
>> Author: Nikola Ciprich <extmaill...@linuxbox.cz>
>> Date:   Wed Mar 9 23:36:51 2011 +0100
>>
>>     KVM: fix kvmclock regression due to missing clock update
>>     
>>     commit 387b9f97750444728962b236987fbe8ee8cc4f8c moved kvm_request_guest_time_update(vcpu),
>>     breaking 32bit SMP guests using kvm-clock. Fix this by moving (new) clock update function
>>     to proper place.
>>     
>>     Signed-off-by: Nikola Ciprich <nikola.cipr...@linuxbox.cz>
>>     Acked-by: Zachary Amsden <zams...@redhat.com>
>>     Signed-off-by: Avi Kivity <a...@redhat.com>
>>
>> index 01f08a6..f1e4025 100644
>> --- a/arch/x86/kvm/x86.c
>> +++ b/arch/x86/kvm/x86.c
>> @@ -2127,8 +2127,8 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
>>                 if (check_tsc_unstable()) {
>>                         kvm_x86_ops->adjust_tsc_offset(vcpu, -tsc_delta);
>>                         vcpu->arch.tsc_catchup = 1;
>> -                       kvm_make_request(KVM_REQ_CLOCK_UPDATE, vcpu);
>>                 }
>> +               kvm_make_request(KVM_REQ_CLOCK_UPDATE, vcpu);
>>                 if (vcpu->cpu != cpu)
>>                         kvm_migrate_timers(vcpu);
>>                 vcpu->cpu = cpu;
>> --
>>
>> If I revert this change, my L1/L2 guests run fine. Of course this just
>> hides the bug: on my machine check_tsc_unstable() returns false, so with
>> the old code the clock update request is never made from vcpu_load at all.
>>
>> I found out from Nadav that when KVM decides to run L2, it writes
>> vmcs01->tsc_offset + vmcs12->tsc_offset to the active TSC_OFFSET, which
>> seems right. But I verified that if, instead, I write only
>> vmcs01->tsc_offset to TSC_OFFSET in prepare_vmcs02(), I don't see the bug
>> anymore.
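>>
>> To make the arithmetic concrete, here is a rough stand-alone sketch of my
>> understanding. The numbers are made up for illustration and nothing below
>> is taken from the real code; it only shows why a TSC value sampled while
>> L2 is active is off by vmcs12->tsc_offset when it is later compared
>> against L1's TSC (as kvm_arch_vcpu_load() does via last_guest_tsc):
>>
>> --
>> #include <stdio.h>
>> #include <stdint.h>
>>
>> int main(void)
>> {
>>     /* Illustrative values only, not taken from a real trace. */
>>     uint64_t host_tsc = 1000000;   /* what RDTSC returns on the host */
>>     int64_t offset01  = -400000;   /* vmcs01->tsc_offset: L0 -> L1   */
>>     int64_t offset12  = -300000;   /* vmcs12->tsc_offset: L1 -> L2   */
>>
>>     /* TSC as seen by L1, and as seen by L2 while vmcs02 is active. */
>>     uint64_t l1_tsc = host_tsc + offset01;
>>     uint64_t l2_tsc = host_tsc + offset01 + offset12;
>>
>>     /*
>>      * kvm_arch_vcpu_load() computes tsc - last_guest_tsc.  If
>>      * last_guest_tsc was sampled while the vcpu was in guest mode
>>      * (so it is really L2's TSC), the delta is wrong by exactly
>>      * vmcs12->tsc_offset.
>>      */
>>     printf("L1 TSC %llu, L2 TSC %llu\n",
>>            (unsigned long long)l1_tsc, (unsigned long long)l2_tsc);
>>     printf("delta against an L1 sample: %lld\n",
>>            (long long)(l1_tsc - l1_tsc));
>>     printf("delta against an L2 sample: %lld\n",
>>            (long long)(l1_tsc - l2_tsc));
>>     return 0;
>> }
>> --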
>>
>> Not sure where to go from here. I would appreciate if any one has any ideas.
>>
>>
>> Bandan
> 
> Using the guest's TSC value when performing TSC adjustments is wrong. Can
> you please try the following patch, which skips TSC adjustments if the
> vcpu is in guest mode.
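> 
> For reference, is_guest_mode() is the existing x86 KVM helper that reports
> whether the vcpu is currently executing a nested (L2) guest. Roughly, from
> arch/x86/include/asm/kvm_host.h:
> 
>     static inline bool is_guest_mode(struct kvm_vcpu *vcpu)
>     {
>             return vcpu->arch.hflags & HF_GUEST_MASK;
>     }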
> 
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 2b76ae3..44c90d1 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -1096,6 +1096,9 @@ static int kvm_guest_time_update(struct kvm_vcpu *v)
>       s64 kernel_ns, max_kernel_ns;
>       u64 tsc_timestamp;
>  
> +     if (is_guest_mode(v))
> +             return 0;
> +
>       /* Keep irq disabled to prevent changes to the clock */
>       local_irq_save(flags);
>       kvm_get_msr(v, MSR_IA32_TSC, &tsc_timestamp);
> @@ -2214,6 +2217,9 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
>               tsc_delta = !vcpu->arch.last_guest_tsc ? 0 :
>                            tsc - vcpu->arch.last_guest_tsc;
>  
> +             if (is_guest_mode(vcpu))
> +                     tsc_delta = 0;
> +
>               if (tsc_delta < 0)
>                       mark_tsc_unstable("KVM discovered backwards TSC");
>               if (check_tsc_unstable()) {
> @@ -2234,7 +2240,8 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
>  {
>       kvm_x86_ops->vcpu_put(vcpu);
>       kvm_put_guest_fpu(vcpu);
> -     kvm_get_msr(vcpu, MSR_IA32_TSC, &vcpu->arch.last_guest_tsc);
> +     if (!is_guest_mode(vcpu))
> +             kvm_get_msr(vcpu, MSR_IA32_TSC, &vcpu->arch.last_guest_tsc);
>  }
>  
>  static int is_efer_nx(void)
> @@ -5717,7 +5724,8 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
>       if (hw_breakpoint_active())
>               hw_breakpoint_restore();
>  
> -     kvm_get_msr(vcpu, MSR_IA32_TSC, &vcpu->arch.last_guest_tsc);
> +     if (!is_guest_mode(vcpu))
> +             kvm_get_msr(vcpu, MSR_IA32_TSC, &vcpu->arch.last_guest_tsc);
>  
>       vcpu->mode = OUTSIDE_GUEST_MODE;
>       smp_wmb();

That unfortunately does not fix the L1 lockups I get here - unless I
confine L1 to a single CPU. It looks like we are stuck processing a timer
IRQ (I don't have all symbols for the guest kernel ATM).

Jan
