On 2016/10/24 23:18, Paolo Bonzini wrote:
> On 24/10/2016 17:14, Radim Krčmář wrote:
>> 2016-10-24 16:39+0200, Paolo Bonzini:
>>> On 19/10/2016 19:24, Radim Krčmář wrote:
>>>> +	if (vcpu->arch.st.msr_val & KVM_MSR_ENABLED)
>>>> +		if (kvm_read_guest_cached(vcpu->kvm, &vcpu->arch.st.stime,
>>>> +					  &vcpu->arch.st.steal,
>>>> +					  sizeof(struct kvm_steal_time)) [...]
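A minimal sketch of what the quoted hunk appears to do when a vCPU is scheduled out: if the guest has enabled the steal-time MSR, read the cached steal-time record, mark it preempted, and write it back. The helper name kvm_steal_time_set_preempted and the "preempted" field are assumptions drawn from this excerpt, not a verbatim copy of the patch.

/* Sketch for arch/x86/kvm/x86.c; names beyond the quoted hunk are assumed. */
static void kvm_steal_time_set_preempted(struct kvm_vcpu *vcpu)
{
	struct kvm_steal_time *st = &vcpu->arch.st.steal;

	/* Nothing to do unless the guest enabled MSR_KVM_STEAL_TIME. */
	if (!(vcpu->arch.st.msr_val & KVM_MSR_ENABLED))
		return;

	/* Pull the guest's steal-time record through the gfn_to_hva cache. */
	if (kvm_read_guest_cached(vcpu->kvm, &vcpu->arch.st.stime,
				  st, sizeof(*st)))
		return;

	/* Tell the guest this vCPU is no longer running (assumed field). */
	st->preempted = 1;

	kvm_write_guest_cached(vcpu->kvm, &vcpu->arch.st.stime,
			       st, sizeof(*st));
}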
On 2016/10/20 01:24, Radim Krčmář wrote:
> 2016-10-19 06:20-0400, Pan Xinhui:
>> This is to fix some lock holder preemption issues. Some other lock
>> implementations do a spin loop before acquiring the lock itself.
>> Currently the kernel has an interface, bool vcpu_is_preempted(int cpu).
>> It takes the cpu as a parameter and returns true if that cpu is
>> preempted. The kernel can then break the spin loop based on the
>> return value of vcpu_is_preempted().
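A hedged illustration of how a spin loop could consume the interface described above: stop spinning on a lock owner once the CPU it last ran on is reported as preempted by the hypervisor. Only bool vcpu_is_preempted(int cpu) comes from the excerpt; spin_on_owner() and its surroundings are hypothetical.

/* Hypothetical caller; only vcpu_is_preempted() is from the series above. */
#include <linux/sched.h>	/* struct task_struct, task_cpu(), need_resched() */

static bool spin_on_owner(struct task_struct *owner)
{
	while (READ_ONCE(owner->on_cpu)) {
		if (need_resched())
			return false;

		/*
		 * If the owner's vCPU was preempted on the host, spinning
		 * only burns cycles; give up and go to sleep instead.
		 */
		if (vcpu_is_preempted(task_cpu(owner)))
			return false;

		cpu_relax();
	}
	return true;
}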