Liang,
>>> On 24.02.16 at 08:04, wrote:
> I found the code path when creating the L2 guest:

Thanks for the analysis!

> (XEN) nvmx_handle_vmclear
> (XEN) nvmx_handle_vmptrld
> (XEN) map_io_bitmap_all
> (XEN) _map_io_bitmap
> (XEN) virtual_vmcs_enter
> (XEN) _map_io_bitmap
> (XEN) virtual_vmcs_enter
> (XEN) _map_msr_bitmap
> (XEN) virtual_vmcs_enter
> (XEN) nvmx_set_vmcs_pointer
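
[Editor's note] Putting this trace next to the debug observation in the 23.02 exchange below: each virtual_vmcs_enter() here is reached from _map_io_bitmap()/_map_msr_bitmap() while the L1 guest's VMPTRLD is being emulated, and nvmx_set_vmcs_pointer(), the point where the machine address presumably gets latched, only appears last. Since the patched virtual_vmcs_enter() loads the latched v->arch.hvm_vmx.vmcs_shadow_maddr, the earlier calls hand 0 to VMPTRLD. The following self-contained C model (function names taken from the trace; all bodies are invented for illustration and are not the real Xen code) reproduces that ordering; running it aborts on the assert at the first simulated VMPTRLD:

/* Minimal model of the ordering problem suggested by the trace.
 * Compile with: gcc -o order order.c && ./order */
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

typedef uint64_t paddr_t;

static paddr_t vmcs_shadow_maddr;        /* the latched machine address */

static void vmptrld(paddr_t addr)        /* stand-in for __vmptrld() */
{
    printf("VMPTRLD %#llx\n", (unsigned long long)addr);
    assert(addr != 0 && "VMPTRLD with an unlatched (zero) address");
}

static void virtual_vmcs_enter(void)
{
    vmptrld(vmcs_shadow_maddr);          /* patched code uses the latch */
}

static void _map_io_bitmap(void)  { virtual_vmcs_enter(); }
static void _map_msr_bitmap(void) { virtual_vmcs_enter(); }

static void map_io_bitmap_all(void)
{
    _map_io_bitmap();                    /* first I/O bitmap */
    _map_io_bitmap();                    /* second I/O bitmap */
}

static void nvmx_set_vmcs_pointer(void)
{
    vmcs_shadow_maddr = 0x123456000ULL;  /* pretend machine address */
}

int main(void)
{
    /* Call order as in the trace: bitmaps first, latch last. */
    map_io_bitmap_all();                 /* aborts here: latch is still 0 */
    _map_msr_bitmap();
    nvmx_set_vmcs_pointer();
    return 0;
}

Moving nvmx_set_vmcs_pointer() ahead of map_io_bitmap_all() in main() lets the model run to completion, i.e. latching before the bitmaps are mapped is one way out; whether the real fix took that exact shape is not visible in this excerpt.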
>>> On 23.02.16 at 09:34, wrote:
> I found some issues in your patch; see the comments below.

Thanks for getting back on this.

>> --- a/xen/arch/x86/hvm/vmx/vmcs.c
>> +++ b/xen/arch/x86/hvm/vmx/vmcs.c
>> @@ -932,37 +932,36 @@ void vmx_vmcs_switch(paddr_t from, paddr
>>      spin_unlock(&vmx->vmcs_lock);
>>  }
>>
>> -void virtual_vmcs_enter(void *vvmcs)
>> +void virtual_vmcs_enter(const struct vcpu *v)
>>  {
>> -    __vmptrld(pfn_to_paddr(domain_page_map_to_mfn(vvmcs)));
>> +    __vmptrld(v->arch.hvm_vmx.vmcs_shadow_maddr);
>
> Debug shows v->arch.hvm_vmx.vmcs_shadow_maddr will be 0 at …
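
[Editor's note] Given Liang's observation that the latch can still be 0 when virtual_vmcs_enter() runs, one cheap way to make such ordering bugs fail loudly at the source would be an assertion layered on the quoted hunk. This is an editorial sketch, not something proposed in the thread:

void virtual_vmcs_enter(const struct vcpu *v)
{
    /* The machine address must have been latched (e.g. while emulating
     * the L1 VMPTRLD) before anyone re-enters the virtual VMCS; the
     * trace above shows exactly that being violated. */
    ASSERT(v->arch.hvm_vmx.vmcs_shadow_maddr);

    __vmptrld(v->arch.hvm_vmx.vmcs_shadow_maddr);
}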
From: Jan Beulich [mailto:jbeul...@suse.com]
Sent: Monday, October 19, 2015 11:23 PM
Subject: [Xen-devel] [PATCH 3/3] vVMX: use latched VMCS machine address

Instead of calling domain_page_map_to_mfn() over and over, latch the
guest VMCS machine address unconditionally (i.e. independent of whether
VMCS shadowing is supported by the hardware).

Since this requires altering the parameters of __[gs]et_vmcs{,_real}()
(and hence all their callers) anyway, ta…
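
[Editor's note] Concretely, the latching described above amounts to doing the page translation once, when the virtual VMCS is mapped, and caching the result in the vcpu. A sketch of the intended lifecycle, assembled from the expressions visible in the quoted hunk (the exact latch and invalidate sites are an assumption, not taken from the patch):

/* While emulating the L1 VMPTRLD, once the vVMCS page is mapped:
 * translate once and cache the result (this is the "latch"). */
v->arch.hvm_vmx.vmcs_shadow_maddr =
    pfn_to_paddr(domain_page_map_to_mfn(vvmcs));

/* Every later virtual_vmcs_enter() then reuses the cached value
 * instead of re-running domain_page_map_to_mfn(): */
__vmptrld(v->arch.hvm_vmx.vmcs_shadow_maddr);

/* When the vVMCS goes away (e.g. on VMCLEAR emulation), the latch
 * must be invalidated, or a stale address would be loaded later. */
v->arch.hvm_vmx.vmcs_shadow_maddr = 0;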