> Subject: RE: [Xen-devel] Future support of 5-level paging in Xen:wq
>
> > From: Li, Liang Z
> > Sent: Tuesday, December 13, 2016 11:58 AM
> >
> > Hi All,
> >
> > We are now working on enabling 5 level paging & 5 level EPT for XEN.
> > We need
Hi All,
We are now working on enabling 5-level paging & 5-level EPT for Xen. We need
the community's opinion on the following aspects:
1. Should we enable 5-level paging for PV guests? (This is what you are discussing.)
2. Should we support both 5-level and 4-level paging in a single binary?
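(Background figures, not from the original mail: 4-level paging covers 48-bit
virtual addresses, 2^48 = 256 TiB, while 5-level paging (LA57) extends this to
57 bits, 2^57 = 128 PiB; likewise, 4-level EPT maps guest-physical addresses up
to 48 bits, and 5-level EPT extends that to 57 bits.)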
> >>> On 17.11.16 at 09:24, wrote:
> > There is a lot of code that tries to get the PTE flags repeatedly; why not
> > save the result and reuse it in the following code? It could help to
> > save some CPU cycles and make the code cleaner, no?
> >
> > I am not sure if this is the right direction, just chan
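A minimal sketch of the caching idea floated above (illustrative only; the
helper get_pte_flags_once() and PAGE_FLAGS_MASK are placeholders, not code
from the thread):

    /* Read the PTE once and keep its flags in a local variable. */
    static bool pte_present_and_writable(const void *table, unsigned long idx)
    {
        unsigned long flags = get_pte_flags_once(table, idx) & PAGE_FLAGS_MASK;

        /* Reuse the cached flags instead of re-reading the PTE per check. */
        return (flags & _PAGE_PRESENT) && (flags & _PAGE_RW);
    }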
> >> >>> On 30.03.16 at 09:28, wrote:
> >> > 2016-03-29 18:39 GMT+08:00 Jan Beulich :
> >> >> ---
> >> >> I assume this also addresses the issue which
> >> >>
> >> http://lists.xenproject.org/archives/html/xen-devel/2016-01/msg03189.
> >> html
> >> >> attempted to deal with in a not really accepta
> >>> On 01.04.16 at 09:40, wrote:
> > A couple of weeks ago, Jianzhong reported an issue: the SR-IOV NICs
> > (Intel 82599, 82571) don't work correctly in a Windows guest.
> > By debugging, we found that your patch, commit ID
> > ad28e42bd1d28d746988ed71654e8aa670629753, caused the regression.
Hi Jan,
A couple of weeks ago, Jianzhong reported an issue: the SR-IOV NICs (Intel
82599, 82571) don't work correctly in a Windows guest.
By debugging, we found that your patch, commit ID
ad28e42bd1d28d746988ed71654e8aa670629753, caused the regression.
Could you help take a look at which par
> >> > (XEN)nvmx_handle_vmclear
> >> > (XEN)nvmx_handle_vmptrld
> >> > (XEN)map_io_bitmap_all
> >> > (XEN)_map_io_bitmap
> >> > (XEN)virtual_vmcs_enter
> >> > (XEN)_map_io_bitmap
> >> > (XEN)virtual_vmcs_enter
> >> > (XEN)_map_msr_bitmap
> >> > (XEN)virtual_vmcs_enter
> >> > (XEN)nvmx_set_vmcs_poin
> thanks for the analysis!
>
> > (XEN)nvmx_handle_vmclear
> > (XEN)nvmx_handle_vmptrld
> > (XEN)map_io_bitmap_all
> > (XEN)_map_io_bitmap
> > (XEN)virtual_vmcs_enter
> > (XEN)_map_io_bitmap
> > (XEN)virtual_vmcs_enter
> > (XEN)_map_msr_bitmap
> > (XEN)virtual_vmcs_enter
> > (XEN)nvmx_set_vmcs_poin
> >>> On 24.02.16 at 08:04, wrote:
> > I found the code path when creating the L2 guest:
>
> thanks for the analysis!
>
> > (XEN)nvmx_handle_vmclear
> > (XEN)nvmx_handle_vmptrld
> > (XEN)map_io_bitmap_all
> > (XEN)_map_io_bitmap
> > (XEN)virtual_vmcs_enter
> > (XEN)_map_io_bitmap
> > (XEN)virtu
> >> -void virtual_vmcs_enter(void *vvmcs)
> >> +void virtual_vmcs_enter(const struct vcpu *v)
> >> {
> >> -__vmptrld(pfn_to_paddr(domain_page_map_to_mfn(vvmcs)));
> >> +__vmptrld(v->arch.hvm_vmx.vmcs_shadow_maddr);
> >
> > Debugging shows v->arch.hvm_vmx.vmcs_shadow_maddr will be 0 at
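A defensive sketch of the kind of check that would catch this early
(illustrative only, not the fix that came out of this thread):

    void virtual_vmcs_enter(const struct vcpu *v)
    {
        /* A value of 0 here means the shadow VMCS address was never set up
         * before _map_io_bitmap()/_map_msr_bitmap() called us. */
        ASSERT(v->arch.hvm_vmx.vmcs_shadow_maddr != 0);
        __vmptrld(v->arch.hvm_vmx.vmcs_shadow_maddr);
    }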
> Thanks for getting back on this.
>
> >> --- a/xen/arch/x86/hvm/vmx/vmcs.c
> >> +++ b/xen/arch/x86/hvm/vmx/vmcs.c
> >> @@ -932,37 +932,36 @@ void vmx_vmcs_switch(paddr_t from, paddr
> >> spin_unlock(&vmx->vmcs_lock);
> >> }
> >>
> >> -void virtual_vmcs_enter(void *vvmcs)
> >> +void virtual_
> From: Andrew Cooper [mailto:andrew.coop...@citrix.com]
> Sent: Monday, January 11, 2016 5:59 PM
> To: Wei Liu; Li, Liang Z
> Cc: ian.campb...@citrix.com; stefano.stabell...@eu.citrix.com;
> ian.jack...@eu.citrix.com; xen-devel@lists.xen.org; jbeul...@suse.com
> Subject: Re:
Hi Jan,
I found some issues in your patch, see the comments below.
> -Original Message-
> From: xen-devel-boun...@lists.xen.org [mailto:xen-devel-
> boun...@lists.xen.org] On Behalf Of Jan Beulich
> Sent: Monday, October 19, 2015 11:23 PM
> To: xen-devel
> Cc: Tian, Kevin; Nakajima, Jun
> >> We found dom0 will crash when booting on an HSW-EX server; the dom0
> >> kernel version is v4.4. By debugging I found that your patch
> >> 'x86/xen: discard RAM regions above the maximum reservation', whose
> commit ID is f5775e0b6116b7e2425ccf535243b21, caused the regression.
> The debug me
> This is a -EBUSY. Is there anything magic about mfn 188d903? It just looks
> like plain RAM in the E820 table.
> Have you got dom0 configured to use linear p2m mode? Without it, dom0 can
> only have a maximum of 512GB of RAM.
> ~Andrew
No special configuration for dom0; actually, the ser
> Cc: wei.l...@citrix.com; ian.campb...@citrix.com;
> stefano.stabell...@eu.citrix.com; ian.jack...@eu.citrix.com; xen-
> de...@lists.xen.org; jbeul...@suse.com
> Subject: Re: [Xen-devel] [PATCH] libxc: Expose the MPX cpuid flag to guest
>
> On Mon, Jan 11, 2016 at 04:52:10PM +0800, Liang Li wrote
> >>> On 14.01.16 at 03:26, wrote:
> > We find that when adding a VF with the command 'xl pci-assignable-add $BDF',
> > a warning message like this:
> > 'libxl: warning: libxl_pci.c:843:libxl__device_pci_assignable_add:
> > :03:10.1 not bound to a driver, will not be rebound'
> > always appears; our
Hi Wei,
We find that when adding a VF with the command 'xl pci-assignable-add $BDF', a warning
message like this:
'libxl: warning: libxl_pci.c:843:libxl__device_pci_assignable_add: :03:10.1
not bound to a driver, will not be rebound'
always appears; our QA team treats this as a bug.
By checking t
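Illustrative reproduction (the BDF below is a placeholder; the warning text is
quoted from the report above):

    # VF not bound to any host driver at this point
    $ xl pci-assignable-add 0000:03:10.1
    libxl: warning: libxl_pci.c:843:libxl__device_pci_assignable_add:
    0000:03:10.1 not bound to a driver, will not be rebound
    $ xl pci-assignable-list
    0000:03:10.1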
> On 11/01/16 09:05, Wei Liu wrote:
> > On Mon, Jan 11, 2016 at 04:52:10PM +0800, Liang Li wrote:
> >> If the hardware supports the memory protection extension, expose this
> >> feature to the guest by default. Users don't have to use a 'cpuid=' option
> >> in the config file to turn it on.
> >>
> >> Signed-off-by: L
> > Add pci = [ '$VF_BDF', '$VF_BDF', '$VF_BDF'] in
>
> This is a bit confusing: it is not actually correct to assign the same
> device, even
> an SR-IOV VF, multiple times, so these must all be different. More like:
>
> pci = [ '$VF_BDF1', '$VF_BDF2', '$VF_BDF3']
>
>
> > hvm guest configurati
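For completeness, a minimal HVM guest config fragment along those lines (all
values are placeholders; each pci entry must name a different VF, e.g. as
listed by 'xl pci-assignable-list'):

    name    = "sriov-guest"
    builder = "hvm"
    memory  = 4096
    vcpus   = 4
    pci     = [ '0000:03:10.0', '0000:03:10.2', '0000:03:10.4' ]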
> Cc: linux-ker...@vger.kernel.org; ian.campb...@citrix.com;
> wei.l...@citrix.com; xen-de...@lists.xenproject.org;
> net...@vger.kernel.org
> Subject: Re: [PATCH] xen-netback: remove duplicated function definition
>
> From: Liang Li
> Date: Sat, 4 Jul 2015 03:33:00 +0800
>
> > There are two du
> Please don't forget to Cc the maintainer (recently changed, now added).
>
> > @@ -1076,6 +1077,9 @@ void ept_sync_domain(struct p2m_domain *p2m)
> >
> > ASSERT(local_irq_is_enabled());
> >
> > +if ( nestedhvm_enabled(d) && !p2m_is_nestedp2m(p2m) ) {
> > +p2m_flush_nestedp2m(d);
Sorry, I used the wrong email address for Yang; please ignore. I will resend.
Liang
> -Original Message-
> From: Li, Liang Z
> Sent: Saturday, June 27, 2015 5:57 AM
> To: xen-devel@lists.xen.org
> Cc: t...@xen.org; k...@xen.org; jbeul...@suse.com;
> andrew.coop...@citr
> > > > xen/arch/x86/mm/p2m-ept.c | 4
> > > > 1 file changed, 4 insertions(+)
> > > >
> > > > diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-
> ept.c
> > > > index 5133eb6..26293a0 100644
> > > > --- a/xen/arch/x86/mm/p2m-ept.c
> > > > +++ b/xen/arch/x86/mm/p2m-ept.c
> > > > @@
> -Original Message-
> From: Tim Deegan [mailto:t...@xen.org]
> Sent: Thursday, June 11, 2015 10:20 PM
> To: Li, Liang Z
> Cc: xen-devel@lists.xen.org; k...@xen.org; jbeul...@suse.com;
> andrew.coop...@citrix.com; Tian, Kevin; Zhang, Yang Z
> Subject: Re: [RESEND
> -Original Message-
> From: Tim Deegan [mailto:t...@xen.org]
> Sent: Thursday, June 04, 2015 9:11 PM
> To: Li, Liang Z
> Cc: xen-devel@lists.xen.org; k...@xen.org; jbeul...@suse.com;
> andrew.coop...@citrix.com; Tian, Kevin; Zhang, Yang Z
> Subject: Re: [RESEND
> Tian, Kevin wrote on 2015-04-03:
> >> From: Tim Deegan [mailto:t...@xen.org]
> >> Sent: Thursday, March 26, 2015 7:10 PM
> >>
> >> Hi, VMX maintainers,
> >>
> >> I was looking at the nested EPT code while following up on Ed's email
> >> about altp2m design, and I can't see where nested-EPT entrie
>
> >>> On 16.04.15 at 22:49, wrote:
> > ... making the code better document itself. No functional change
> > intended.
> >
> > Signed-off-by: Liang Li
> > ---
>
> From looking at it I can't see what the difference to v1 is, and you also
> don't
> say anything in that regard here.
>
> Jan
In
>
> Much appreciated, thanks!
>
> Jan
I just sent the wrong patch, sorry! I will send the right one.
Liang
Sorry, invalid patch. Please ignore this.
> xen/arch/x86/hvm/vmx/vmx.c | 11 ++-
> 1 file changed, 6 insertions(+), 5 deletions(-)
>
> diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
> index 6c4f78c..5e90027 100644
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch
> On 13/04/2015 16:12, Liang Li wrote:
> > 2. Do the attach and detach operations with a time interval, e.g. 10s.
> >
> > The error message will not disappear on retry; in this case, it's
> > a bug.
> >
> > In 'xen_pt_region_add' and 'xen_pt_region_del', we should only
> > care about
> >> This would be easier to read as
> >>
> >> if ( cpu_has_vmx_vnmi &&
> >> (idtv_info & INTR_INFO_INTR_TYPE_MASK) ==
> >> (X86_EVENTTYPE_NMI << 8) )
> >
> > I was going to say something similar, but I think in the past Jan has
> > said that Liang's original is more in line with the coding sty
> On Tue, Apr 7, 2015 at 2:42 AM, mailing lists wrote:
> > Hi --
> >
> > I've been trying to get nested virtualization working with Xen so that
> > I could boot Windows and use Hyper-V related features, however I have
> > not had much success. Using Windows 8.1 or Windows 2012r2, I'm able
> > to
Hi All,
I found a way to reproduce the bug very easily by using
apic->send_IPI_all(NMI_VECTOR)
in L2 in a kernel module to trigger an NMI. And I have verified that the bug can
be fixed per Jan's
suggestion, 'the second half of vmx_idtv_reinject() needs to be done without
regard to
nes
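A minimal kernel-module sketch of that reproducer (the module boilerplate is
mine; the thread only mentions the apic->send_IPI_all(NMI_VECTOR) call, to be
issued from inside the L2 guest):

    #include <linux/module.h>
    #include <linux/init.h>
    #include <asm/apic.h>          /* apic, NMI_VECTOR */

    static int __init nmi_ipi_init(void)
    {
        /* Send an NMI IPI to all CPUs of the L2 guest on module load. */
        apic->send_IPI_all(NMI_VECTOR);
        return 0;
    }

    static void __exit nmi_ipi_exit(void)
    {
    }

    module_init(nmi_ipi_init);
    module_exit(nmi_ipi_exit);
    MODULE_LICENSE("GPL");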
> >>> On 19.01.15 at 10:00, wrote:
> > --- a/xen/arch/x86/apic.c
> > +++ b/xen/arch/x86/apic.c
> > @@ -915,6 +915,11 @@ void __init x2apic_bsp_setup(void)
> > return;
> > }
> > printk("x2APIC: Already enabled by BIOS: Ignoring cmdline
> > disable.\n");
> > +} els
> >>> On 22.01.15 at 08:44, wrote:
> > Tian, Kevin wrote on 2015-01-22:
> >>> From: Jan Beulich [mailto:jbeul...@suse.com]
> >>> Sent: Wednesday, January 21, 2015 6:31 PM
> >>>
>
> Yes, it's true. But I still don't understand why we do the
> flush_all only when iommu_enable is true.
> -Original Message-
> From: Jan Beulich [mailto:jbeul...@suse.com]
> Sent: Wednesday, January 21, 2015 7:22 PM
> To: Li, Liang Z
> Cc: Andrew Cooper; Dong, Eddie; Tian, Kevin; Zhang, Yang Z; xen-
> de...@lists.xen.org; k...@xen.org; Tim Deegan
> Subject: RE: [Xen-dev
> >> >> The flush_all function will consume about 8 milliseconds; in my test
> >> >> environment, the VM has 4 VCPUs, hvm_load_mtrr_msr() will be called
> >> >> four times, and in total consumes about 500 milliseconds. Obviously,
> >> >> there are too many flush_all calls.
> >> >>
>
> >> I found the restore process of the live migration is quite long, so I
> >> try to find out what's going on.
> >> By debugging, I found the most time-consuming part is restoring the VM's
> >> MTRR MSRs.
> >> The process is done in the function hvm_load_mtrr_msr(); it will call
> >> the m
Hi Jan,
I found the restore process of the live migration is quite long, so I tried to
find out what's going on.
By debugging, I found the most time-consuming part is restoring the VM's MTRR
MSRs.
The process is done in the function hvm_load_mtrr_msr(); it will call
memory_type_changed(), which
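A sketch of the batching idea implied by this observation (the helper
mtrr_state_set_raw() is hypothetical, and the actual fix discussed in the
thread may differ): update all of the vCPU's MTRR state first and call the
expensive memory_type_changed() once, instead of once per MSR.

    static void load_vcpu_mtrrs(struct domain *d, struct mtrr_state *m,
                                const uint64_t *msrs, unsigned int nr)
    {
        unsigned int i;

        for ( i = 0; i < nr; i++ )
            mtrr_state_set_raw(m, i, msrs[i]);  /* update state only, no flush */

        memory_type_changed(d);                 /* one flush_all instead of nr */
    }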
> > > Use the 'xl pci-attach $DomU $BDF' command to attach more than one
> > > PCI device to the guest, then detach the devices with 'xl
> > > pci-detach $DomU $BDF'; after that, re-attach these PCI devices
> > > again, and an error message will be reported like the following:
> > >
> > > libxl: error: li
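Illustrative command sequence for the report above (domain name and BDFs are
placeholders):

    $ xl pci-attach guest1 0000:03:10.0
    $ xl pci-attach guest1 0000:03:10.2
    $ xl pci-detach guest1 0000:03:10.0
    $ xl pci-detach guest1 0000:03:10.2
    $ xl pci-attach guest1 0000:03:10.0   # the libxl error is reported here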
>>> (cpu_has_page1gb && paging_mode_hap(d)) the change above is
>>> pointless. While, considering this, comments on
>>> v2 may have been misleading, you should have simply updated the patch
>>> description instead to clarify why the v2 change was okay even for the
>>> shadow mode case.
>> I c
>> -    if (!hvm_pse1gb_supported(d))
>> +    if (!hvm_pse1gb_supported(d) || paging_mode_shadow(d))
>>          *edx &= ~cpufeat_mask(X86_FEATURE_PAGE1GB);
>
> With
>
> #define hvm_pse1gb_supported(d) \
>     (cpu_has_page1gb && paging_mode_hap(d))
>
> the change above is pointless. While,
>>> patch is ok?
>>
>> No - Tim having confirmed that shadow mode doesn't support 1Gb pages,
> the feature clearly must not be made visible for shadow mode guests.
> Indeed. Liang, can you add the shadow mode check in the next version?
OK, I will do it and resend the patch.
> > libxl__device_exists will return 1 if more than one PCI device is
> > attached to the guest, no matter whether the BDFs are identical or not.
>
> That means this check is problematic. I think the original intention was to
> check on BDFs, however it wasn't thoroughly tested. Sorry.
>
> > I don'
> Originally the code allowed users to attach the same device more than
> once. It just stupidly overwrites xenstore entries. This is bogus as
> frontend will be very confused.
>
> Introduce a helper function to check if the device to be written to
> xenstore already exists. A new error code is als
> > > Konrad,
> > > this is another bug fix for QEMU: pci hotplug doesn't work when
> > > xen_platform_pci=0 without this.
> >
> > Yes.
> > >
> > > I think we should have it in 4.5. What do you think?
> >
> > Do you believe we should first get a Tested-by from the Intel QA folks?
> Liang at Intel
Now there is no need to set the "cpuid=" option in the config file
to expose the 1GB hugepage feature to the guest.
Signed-off-by: Li Liang
---
tools/libxc/xc_cpuid_x86.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/tools/libxc/xc_cpuid_x86.c b/tools/libxc/xc_cpuid_x86.c
index a