On 27/05/18 04:47, Tian, Kevin wrote:
>> From: Andrew Cooper [mailto:andrew.coop...@citrix.com]
>> Sent: Tuesday, May 22, 2018 7:21 PM
>>
>> At the moment, all modifications of the MSR lists are in current context.
>> However, future changes may need to put MSR_EFER into the lists from domctl
>> hypercall context.
>>
>> Plumb a struct vcpu parameter down through the infrastructure, and use
>> vmx_vmcs_{enter,exit}() for safe access to the VMCS in vmx_add_msr(). Use
>> assertions to ensure that access is either in current context, or while the
>> vcpu is paused.
>>
>> For now it is safe to require that remote accesses are under the domctl lock.
>> This will remain safe if/when the global domctl lock becomes per-domain.
>>
>> Note these expectations beside the fields in arch_vmx_struct, and reorder the
>> fields to avoid unnecessary padding.
>>
>> Signed-off-by: Andrew Cooper <andrew.coop...@citrix.com>
>> ---
>> CC: Jan Beulich <jbeul...@suse.com>
>> CC: Jun Nakajima <jun.nakaj...@intel.com>
>> CC: Kevin Tian <kevin.t...@intel.com>
>> CC: Wei Liu <wei.l...@citrix.com>
>> CC: Roger Pau Monné <roger....@citrix.com>
>>
>> To preempt any questions about spinlocks, the use of the MSR lists in the
>> return-to-guest path causes checklock failures for plain spinlocks (despite it
>> technically being safe to live here), and the call to alloc_xenheap_page()
>> makes it impossible to use irqsave/restore variants, due to the nested
>> acquisition of the heap lock.
> I don't understand the above. How does it relate to this patch?

It explains why no spinlock has been introduced to protect these accesses, and
why one can't be, in case someone reviewing the code asks "why not introduce a
spinlock?".
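
To make that concrete, any locking scheme would have to look roughly like the
sketch below.  This is purely illustrative - the lock, and the whole shape,
are hypothetical and not something this patch introduces:

    static DEFINE_SPINLOCK(msr_list_lock); /* hypothetical, for illustration */

    int vmx_add_msr(struct vcpu *v, uint32_t msr, enum vmx_msr_list_type type)
    {
        unsigned long flags;

        /*
         * A plain spin_lock() here trips the checklock machinery, because
         * the lists are also consumed on the return-to-guest path.
         */
        spin_lock_irqsave(&msr_list_lock, flags);

        if ( *ptr == NULL )
            /*
             * ... and the irqsave variant doesn't work either, because
             * alloc_xenheap_page() acquires the heap lock nested inside
             * this IRQ-disabled region.
             */
            *ptr = alloc_xenheap_page();

        /* ... insert the MSR into the list ... */

        spin_unlock_irqrestore(&msr_list_lock, flags);

        return 0;
    }

Requiring instead that remote accesses happen with the vcpu paused, under the
domctl lock, avoids needing a lock here at all.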

>> @@ -1333,12 +1335,14 @@ struct vmx_msr_entry *vmx_find_msr(uint32_t msr, enum vmx_msr_list_type type)
>>      return ((ent < end) && (ent->index == msr)) ? ent : NULL;
>>  }
>>
>> -int vmx_add_msr(uint32_t msr, enum vmx_msr_list_type type)
>> +int vmx_add_msr(struct vcpu *v, uint32_t msr, enum vmx_msr_list_type type)
>>  {
>> -    struct vcpu *curr = current;
>> -    struct arch_vmx_struct *arch_vmx = &curr->arch.hvm_vmx;
>> +    struct arch_vmx_struct *arch_vmx = &v->arch.hvm_vmx;
>>      struct vmx_msr_entry **ptr, *start = NULL, *ent, *end;
>>      unsigned int total;
>> +    int rc;
>> +
>> +    ASSERT(v == current || !vcpu_runnable(v));
>>
>>      switch ( type )
>>      {
>> @@ -1357,13 +1361,18 @@ int vmx_add_msr(uint32_t msr, enum vmx_msr_list_type type)
>>          return -EINVAL;
>>      }
>>
>> +    vmx_vmcs_enter(v);
>> +
> Why enter the VMCS so early, even before the possible page allocation?

Because the next thing the allocation path does is write to the MSR
load/save list fields.

The alternative would be to have an else on this if(), and a second
vmx_vmcs_enter() after the memory allocation, but as these are two one-time
allocations on uncontended paths, I didn't consider the added complexity worth
it.
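
For illustration, the shape the function ends up with is roughly as below
(error handling abbreviated, and the VMCS field writes reduced to comments
rather than the exact field names):

    ASSERT(v == current || !vcpu_runnable(v));

    /* switch ( type ) sets 'ptr' to the relevant list, as in the hunk above. */

    vmx_vmcs_enter(v);

    if ( *ptr == NULL )
    {
        /* One-time allocation of the page backing this list. */
        if ( (*ptr = alloc_xenheap_page()) == NULL )
        {
            rc = -ENOMEM;
            goto out;
        }

        /*
         * The new page's address is written straight into the VMCS, which
         * is why the VMCS is entered before, rather than after, the
         * allocation.
         */
    }

    /* ... insert the new entry, and update the list length in the VMCS ... */

 out:
    vmx_vmcs_exit(v);

    return rc;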

~Andrew
