Hello,
In /docs/specs/libxc-migration-stream.pandoc, the "x86 HVM Guest" section
states:
"HVM\_PARAMS must precede HVM\_CONTEXT, as certain parameters can affect the
validity of architectural state in the context." (line 679)
However, from the code it looks like the HVM_CONTEXT record is sent an
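For reference, the stated ordering is mechanically checkable. A minimal
sketch of such a check, where the record-type values are illustrative
placeholders rather than the spec's real constants:

#include <stddef.h>
#include <stdbool.h>
#include <stdio.h>

/* Placeholder tags -- NOT the real record-type values from the spec. */
enum rec_type { REC_HVM_PARAMS = 100, REC_HVM_CONTEXT, REC_OTHER };

/* Returns 0 iff every HVM_CONTEXT is preceded by an HVM_PARAMS record. */
static int check_order(const enum rec_type *recs, size_t n)
{
    bool seen_params = false;

    for ( size_t i = 0; i < n; i++ )
    {
        if ( recs[i] == REC_HVM_PARAMS )
            seen_params = true;
        else if ( recs[i] == REC_HVM_CONTEXT && !seen_params )
            return -1;  /* context precedes the params it may depend on */
    }
    return 0;
}

int main(void)
{
    enum rec_type good[] = { REC_OTHER, REC_HVM_PARAMS, REC_HVM_CONTEXT };
    enum rec_type bad[]  = { REC_HVM_CONTEXT, REC_HVM_PARAMS };

    printf("good=%d bad=%d\n", check_order(good, 3), check_order(bad, 2));
    return 0;
}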
Hi, everyone.
I have a machine which has two NUMA nodes. NODE0 contains the memory
range from 0 to 0x18400MB and NODE1 contains the memory range from
0x18400MB to 0x1c400MB. The resources available to dom0 are restricted
by adding "dom0_mem=10G dom0_nodes=0 dom0_max_vcpus=48" to the Xen
command line. E
Hey Razvan,
the vm_event that is being generated by doing
VM_EVENT_FLAG_GET_NEXT_INTERRUPT sends almost all required information
about the interrupt to the listener to allow it to get reinjected,
except the instruction length. If the listener wants to reinject the
interrupt to the guest via xc_hvm_
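(The call truncated above is presumably xc_hvm_inject_trap(); a minimal
sketch of the reinjection under that assumption, using the libxc signature
of this era -- insn_len is a placeholder precisely because the event does
not carry it:)

#include <xenctrl.h>

/* vector/type/error_code/cr2 come from the vm_event; insn_len does not. */
static int reinject_interrupt(xc_interface *xch, domid_t domid, int vcpu,
                              uint32_t vector, uint32_t type,
                              uint32_t error_code, uint64_t cr2)
{
    uint32_t insn_len = 0;  /* placeholder: the missing piece */

    return xc_hvm_inject_trap(xch, domid, vcpu, vector, type,
                              error_code, insn_len, cr2);
}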
Dear Christopher,
On 18.07.17 12:08, Andrii Anisov wrote:
In order to build the required system, a board candidate should:
- have a CPU able to switch to EL2/Virtualization mode
- have public bootloader sources able to switch the CPU to
EL2/Virtualization mode
- have a Linux system
Hello,
I would like to test Xen on an ARM target with one Linux (as dom0) and one
Android (as domU). My goal is to prototype a system with one critical
application (on the Linux side) and one non-critical GUI application (on the
Android side).
If possible, I don't want to do porting work (driver writing)
Hi Juergen,
We run Xen as dom0 with Windows domU's.
Now with older 4.9.x kernels, we had BSOD's when we booted a domU with
old Xen drivers in it.
With the newest kernel (4.9.34), this seems to be resolved.
I was wondering what caused this. And that patch triggered my attention
:)
Thanks
Je
Hi All,
We had some issues with BSOD's on Windows at startup when using some old
Xen drivers inside Windows.
Now that we upgraded to the most recent 4.9.x kernel, the issue seems to
be resolved.
Could it be that the following commit fixes the issue:
https://git.kernel.org/pub/scm/linux/kernel/
Question has been answered. Thanks.
Hello all,
Summary
I am using the Xen hypervisor to run an HVM with a QEMU-backed disk. After I
start the HVM I use the QMP "query-block" command to see the devices of the VM.
Initially the command returns the disk that I set as part of the
configuration but after a few seconds the "query-block" command
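For anyone wanting to reproduce this, a minimal QMP client sketch; the
socket path follows libxl's usual qmp-libxl-<domid> naming, so treat it as
an assumption for your setup. Note that QMP requires the qmp_capabilities
handshake before any other command:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

static void send_cmd(int fd, const char *json)
{
    char buf[8192];
    ssize_t n;

    write(fd, json, strlen(json));
    /* Naive: assume one read() returns one complete JSON reply. */
    if ( (n = read(fd, buf, sizeof(buf) - 1)) > 0 )
    {
        buf[n] = '\0';
        puts(buf);
    }
}

int main(int argc, char **argv)
{
    const char *path = argc > 1 ? argv[1] : "/var/run/xen/qmp-libxl-1";
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    char greeting[8192];
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);

    strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);
    if ( fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 )
    {
        perror("connect");
        return 1;
    }
    read(fd, greeting, sizeof(greeting));          /* discard QMP banner */
    send_cmd(fd, "{\"execute\":\"qmp_capabilities\"}");
    send_cmd(fd, "{\"execute\":\"query-block\"}");  /* list block devices */
    close(fd);
    return 0;
}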
Hi Boris,
On Fri, Mar 31, 2017 at 12:01 PM, Boris Ostrovsky wrote:
>> When I program the general performance counter to trigger an overflow
>> interrupt, I set the following bits for the event selector register
>> and run a task to generate the L3 cache miss.
>> FLAG_ENABLE: 0x40UL
>> FLAG_INT: 0x10UL
>> FLAG_USR: 0x01UL
>> L3_ALLMISS_EVENT
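For reference, the architectural encoding of those bits in IA32_PERFEVTSELx
(Intel SDM) is EN = bit 22, INT = bit 20, USR = bit 16, and the
architectural LLC-miss event is event 0x2E with umask 0x41. A minimal
ring-0 sketch of programming counter 0 that way -- whether the guest's
wrmsr actually reaches hardware under vpmu is exactly the question here:

#include <stdint.h>

#define MSR_IA32_PERFEVTSEL0  0x186U
#define EVTSEL_EN             (1ULL << 22)  /* enable the counter */
#define EVTSEL_INT            (1ULL << 20)  /* raise a PMI on overflow */
#define EVTSEL_USR            (1ULL << 16)  /* count in user mode */
#define LLC_MISS_EVENT        0x2EULL       /* architectural LLC-miss event */
#define LLC_MISS_UMASK        (0x41ULL << 8)

/* wrmsr is privileged: this must run in ring 0, e.g. in a kernel module. */
static inline void wrmsr64(uint32_t msr, uint64_t val)
{
    asm volatile ( "wrmsr" :: "c" (msr),
                   "a" ((uint32_t)val), "d" ((uint32_t)(val >> 32)) );
}

static void program_llc_miss_overflow(void)
{
    wrmsr64(MSR_IA32_PERFEVTSEL0,
            EVTSEL_EN | EVTSEL_INT | EVTSEL_USR |
            LLC_MISS_EVENT | LLC_MISS_UMASK);
}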
>>> On 31.03.17 at 17:41, wrote:
> I'm wondering:
> How does Xen (vpmu) handle the general performance counter's overflow
> interrupt?
> Could you point me to the function handler, if Xen does handle it?
Two simple steps take you there: grep for LVTPC to find which vector
is being used (PMU_APIC
Hi Jan and Boris,
I'm Meng Xu from the University of Pennsylvania.
I'm wondering:
How does Xen (vpmu) handle the general performance counter's overflow interrupt?
Could you point me to the function handler, if Xen does handle it?
---What I want to achieve---
I'm looking at the real-time performa
> -Original Message-
> From: Konrad Rzeszutek Wilk
> Sent: Tuesday, March 21, 2017 08:19 AM
> To: Xuquan (Quan Xu); Venu Busireddy
> Cc: Jan Beulich; anthony.per...@citrix.com; george.dun...@eu.citrix.com;
> ian.jack...@eu.citrix.com; Fanhenglong; Kevin Tian; StefanoStabellini;
> xen-deve
..snip..
> support to pass-through large bar (pci-e bar > 4G) device..
Yes it does work.
>
> > I was assuming large BAR handling to work so far
> >(Konrad had done some adjustments there quite a while ago, from all I
> >recall).
> >
>
>
> _iirc_ what Konrad mentioned was using qemu-trad..
Ye
Hello,
I try to pass through a device with an 8G large BAR, such as the nvidia M60
(note1, pci-e info as below). It takes about '__15 seconds__' to update the
8G large BAR in QEMU::xen_pt_region_update()..
Specifically, it is xc_domain_memory_mapping() in xen_pt_region_update().
Dug into xc_domain_memory
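For scale: an 8G BAR is 8G/4K = 2,097,152 frames, all established by that
one call. A sketch of the call as the QEMU side ends up making it (libxc
signature; the gfn/mfn arguments are placeholders):

#include <xenctrl.h>

static int map_8g_bar(xc_interface *xch, uint32_t domid,
                      unsigned long first_gfn, unsigned long first_mfn)
{
    unsigned long nr_mfns = (8UL << 30) >> XC_PAGE_SHIFT;  /* 2,097,152 */

    return xc_domain_memory_mapping(xch, domid, first_gfn, first_mfn,
                                    nr_mfns, DPCI_ADD_MAPPING);
}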
Hi guys,
I'm confused about the behavior of the HLT instruction in VMX guest mode.
I set the "hlt exiting" bit to 0 in the VMCS, and the vcpu didn't vmexit when
executing HLT, as expected. However, I used powertop/cpupower on the host to
watch the pcpu's c-states, and it seems that the pcpu didn't enter C1/C1E state d
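For context, a sketch of what "setting hlt exiting to 0" looks like with
Xen's own names (xen/arch/x86/hvm/vmx); it only builds inside the Xen tree,
and the exact placement is an assumption, not a tested patch:

/* Stop intercepting HLT for this vcpu, so guest HLT no longer vmexits. */
static void disable_hlt_exiting(struct vcpu *v)
{
    v->arch.hvm_vmx.exec_control &= ~CPU_BASED_HLT_EXITING;
    vmx_update_cpu_exec_control(v);  /* writes CPU_BASED_VM_EXEC_CONTROL */
}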
Hi, all!
Can anyone please point me to what PVMMU is and why it is
only supported for x86, but not ARM?
What I am trying to do is manually balloon in/out pages allocated
by my driver (with {alloc|free}_xenballooned_pages everything
works just fine, but this API gives me pages, while I want to u
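(For reference, the API mentioned above as it looks in this era's kernels;
a minimal sketch of the flow that already works:)

#include <xen/balloon.h>

#define NR_PAGES 16

static struct page *pages[NR_PAGES];

/* Get NR_PAGES pages from the balloon driver, use them, hand them back. */
static int use_ballooned_pages(void)
{
    int rc = alloc_xenballooned_pages(NR_PAGES, pages);

    if (rc)
        return rc;

    /* ... use pages[], e.g. as targets for foreign mappings ... */

    free_xenballooned_pages(NR_PAGES, pages);
    return 0;
}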
On 08/12/16 17:06, Oleksandr Tyshchenko wrote:
Hi Julien,
Hi Oleksandr,
As discussed on IRC, I CCed xen-devel and Stefano.
We would like to hear your opinion about the proper way of porting a
kernel driver to Xen.
There is a Linux iommu driver "IPMMU VMSA" for supporting
VMSA-compatible IPMM
Hi,
Based on CVE-2015-7814 and commit 1ef01396fdff, 'arm: handle races between
relinquish_memory and free_domheap_pages'..
In relinquish_memory() [xen/arch/arm/domain.c, arm code],
when it couldn't get a reference -- someone is freeing this page and has already
committed to doing so, so no more to do
>>> On 07.10.16 at 17:41, wrote:
> There are a ton of calls to flush_area_local, and a good chunk of them
> with the idle vCPU being the active one when it is called. As for
> write_cr3, there are also a lot of calls there. When I added some
> debug output to observe just how many dom0 would take
On 23/09/2016 17:36, 조현권 wrote:
Hi
Hello,
Sorry for the late reply.
I am experimenting with Xen in my embedded system environment and have a
question about the next_module() function.
static paddr_t __init next_module(paddr_t s, paddr_t *end)
{
    struct bootmodules *mi = &bootinfo.modules;
>>> On 04.10.16 at 16:12, wrote:
> yes, I understand that is the case when you do need to flush a guest.
> And yes, there seem to be paths that require to bump the tag of a
> specific guest for certain events (mov-to-cr4 with paging mode changes
for example). What I'm poking at here is that w
>>> On 01.10.16 at 21:05, wrote:
> However, I've found two other sources that need more attention:
>
> In x86/flushtlb.c the function flush_area_local invalidates all guest
> TLBs as such:
>
> if ( flags & (FLUSH_TLB|FLUSH_TLB_GLOBAL) )
> {
> if ( order == 0 )
> {
> ...
>
Hi
I am experimenting with Xen in my embedded system environment and have a
question about the next_module() function.
static paddr_t __init next_module(paddr_t s, paddr_t *end)
{
    struct bootmodules *mi = &bootinfo.modules;
    paddr_t lowest = ~(paddr_t)0;
    int i;
    for ( i = 0; i < mi
>>> On 22.09.16 at 19:18, wrote:
> So I verified that when CPU-based load exiting is enabled, the TLB
> flush here is critical. Without it the guest kernel crashes at random
> points during boot. OTOH why does Xen trap every guest CR3 update
> unconditionally? While we have features such as the vm
>>> On 21.09.16 at 20:26, wrote:
> So reading through the Intel SDM the following bits are relevant here:
>
> 28.3.3.1
> Operations that Invalidate Cached Mappings
> The following operations invalidate cached mappings as indicated:
> ● Operations that architecturally invalidate entries in the TLB
>>> On 21.09.16 at 17:30, wrote:
> What I'm saying is that the guest OS should be in charge of managing
> its own TLB when VPID is in use. Whether it does flush the TLB or not
> is not of our concern. If it's a sane OS it will likely flush when it
> needs to, but we should not be jumping in and do
Hi all,
I'm trying to figure out the design decision regarding the handling of
guest MOV-TO-CR3 operations and TLB flushes. AFAICT since support for
VPID has been added to Xen, every guest MOV-TO-CR3 flushes the TLB
(vmx_cr_access -> hvm_mov_to_cr -> hvm_set_cr3 -> paging_update_cr3 ->
hap_update_c
Hi, I am a student studying Xen.
I have a question about the end_boot_allocator code.
Why does Xen read bootmem_region_list backwards?
for ( i = nr_bootmem_regions; i-- > 0; )
{ struct bootmem_region ... }
I think there would be no problem if it read the values forward,
because the resulting added page list is the same.
T
On 11/07/16 04:58, 소병철 wrote:
> Hello everyone :)
>
>
>
> I have a question about xen event channel. Is it possible to allocate
> two event channels in one xen frontend driver?
Yes. For example, netif can have a separate Tx and Rx event channel.
David
Hello everyone :)
I have a question about xen event channel. Is it possible to allocate two event channels in one xen frontend driver?
I want to bind two different irq handler to one xen frontend driver by using two event channels.
However, I'm
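(A minimal sketch of the two-channel setup, modeled on netfront's split
Tx/Rx mode; signatures as in this era's kernels, names illustrative:)

#include <linux/interrupt.h>
#include <xen/events.h>
#include <xen/xenbus.h>

static irqreturn_t tx_interrupt(int irq, void *dev_id) { return IRQ_HANDLED; }
static irqreturn_t rx_interrupt(int irq, void *dev_id) { return IRQ_HANDLED; }

/* Allocate two event channels on one frontend and bind a separate handler
 * to each; the ports would then be written to xenstore for the backend. */
static int setup_two_evtchns(struct xenbus_device *dev)
{
    int tx_evtchn, rx_evtchn, tx_irq, rx_irq, err;

    err = xenbus_alloc_evtchn(dev, &tx_evtchn);
    if (err)
        return err;
    err = xenbus_alloc_evtchn(dev, &rx_evtchn);
    if (err)
        return err;

    tx_irq = bind_evtchn_to_irqhandler(tx_evtchn, tx_interrupt, 0,
                                       "frontend-tx", dev);
    rx_irq = bind_evtchn_to_irqhandler(rx_evtchn, rx_interrupt, 0,
                                       "frontend-rx", dev);

    return (tx_irq < 0 || rx_irq < 0) ? -EINVAL : 0;
}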
Hi,
I was searching for a way to limit IOPS for the disks attached to XenServer
VMs, and I couldn't find one.
So, I was wondering whether XenServer actually supports limiting IOPS or not.
If yes, what are the ways, both user-facing and programmatic, to do so.
Thanks & Regards,
Ganesh.
Hi all,
I'm running Xen on NVIDIA Jetson TK1 board. The Xen code is from Ian's
repo.: git://xenbits.xen.org/people/ianc/xen.git with the commit point
c78d51660446d33dac4bb07c3c17e1d14d62ebc2
Right now, I can boot dom0 on Xen on the Jetson board. After the
system boots up, I can boot up a VM1 usin
Hi, I am studying Xen memory setup.
I am wondering about the job of scrub_heap_pages in setup_mm.
Looking at the code, it clears unallocated pages by setting them to 0 using
memset.
But this looks unnecessary, assuming an unallocated page will be written
with valid data when it is allocated.
Is there spe
Hello all,
Sorry to disturb you, but I really want to figure it out.
The Xen core dump of a Red Hat 6 HVM with PoD is unable to be used with the
crash utility.
I installed an HVM of Red Hat 6 using xen 4.7.0-rc2,
and the memory is set as below:
memory=1024
maxmem=4096
"xl dump-core" is executed, and the core is produced suc
Hi,
I have a quick question about using the Linux spin_lock() in Xen
environment to protect some host-wide shared (memory) resource among
VMs.
*** The question is as follows ***
Suppose I have two Linux VMs sharing the same spinlock_t lock (through
the sharing memory) on the same host. Suppose we
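(To make the setup concrete, a sketch of the scenario; the grant/mapping
plumbing is elided and the page is assumed already mapped in both VMs:)

#include <linux/spinlock.h>

/* Layout of the one page both VMs share. */
struct shared_area {
    spinlock_t lock;   /* the spinlock_t both VMs take */
    int counter;       /* stand-in for the host-wide shared resource */
};

static struct shared_area *shared;  /* = the mapped shared page, in each VM */

static void update_shared_resource(void)
{
    spin_lock(&shared->lock);   /* contended by vcpus of both VMs */
    shared->counter++;
    spin_unlock(&shared->lock);
}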