flight 57903 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/57903/
Failures :-/ but no regressions.
Regressions which are regarded as allowable (not blocking):
test-amd64-amd64-libvirt-xsm 11 guest-start fail REGR. vs. 57419
test-amd64-i386-libvirt
flight 57998 rumpuserxen real [real]
http://logs.test-lab.xenproject.org/osstest/logs/57998/
Regressions :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
build-amd64-rumpuserxen 5 rumpuserxen-build fail REGR. vs. 33866
build-i386-rumpuserxe
Hi Wei,
>
>
> * Improve RTDS scheduler (none)
It has two parts:
>
>Change RTDS from quantum driven to event driven
This is part 1, which only involves the hypervisor change;
part 2 adds per-vCPU parameter get/set support in the toolstack.
>
> - Dagaen Golomb, Meng Xu, Cho
flight 57912 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/57912/
Regressions :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-stop fail REGR. vs. 56492
test-amd64-i386-xl-qemuu-win
On Fri, Jun 05, 2015 at 07:50:18PM +0100, Al Viro wrote:
> Basically, we have
> i_mutex: file size changes, contents-affecting syscalls. Per-inode.
> truncate_mutex: block pointers changes. Per-inode.
> s_lock: block and inode bitmaps changes. Per-filesystem.
>
> For UFS it's
flight 57908 xen-4.5-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/57908/
Failures :-/ but no regressions.
Tests which are failing intermittently (not blocking):
test-amd64-amd64-rumpuserxen-amd64 15
rumpuserxen-demo-xenstorels/xenstorels.repeat fail in 57854 pass in 5790
At 18:21 +0100 on 05 Jun (1433528517), Andrew Cooper wrote:
> On 05/06/15 18:16, Stefano Stabellini wrote:
> > On Fri, 5 Jun 2015, Andrew Cooper wrote:
> >> On 05/06/15 17:43, Boris Ostrovsky wrote:
> >>> On 06/05/2015 12:16 PM, Roger Pau Monné wrote:
> El 03/06/15 a les 14.08, Jan Beulich ha
Hi Julien,
>When the property "clock-frequency" is present in the DT timer node, it means
>that the bootloader/firmware didn't correctly configure the
CNTFRQ/CNTFRQ_EL0 on each processor.
I did try this out, and it didn't affect my results. I don't understand why,
though :-)
What I see is tha
flight 57904 linux-3.18 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/57904/
Regressions :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
test-amd64-i386-xl-qemuu-win7-amd64 16 guest-stop fail in 57788 REGR. vs. 57312
Tests which are faili
On Wed, May 20, 2015 at 08:16:24AM -0500, Bjorn Helgaas wrote:
> On Tue, May 19, 2015 at 1:08 AM, Tina Ruchandani
> wrote:
> > struct timeval uses a 32-bit field for representing seconds,
> > which will overflow in the year 2038 and beyond. This patch replaces
> > struct timeval with 64-bit ktime_
On Fri, Jun 05, 2015 at 06:13:01PM +0100, Stefano Stabellini wrote:
> On Fri, 5 Jun 2015, Ian Campbell wrote:
> > On Fri, 2015-06-05 at 17:43 +0100, Wei Liu wrote:
> >
> > > 3. Add a libxl layer that wraps necessary information, take over
> > >Andrew's work on libxl migration v2. Having a lib
On 05/06/15 19:45, Konrad Rzeszutek Wilk wrote:
> On Thu, Jun 04, 2015 at 10:27:06PM +0800, yunfang tai wrote:
>> Hi all,
> Hey!
>> Recently, I have been testing TMEM support on Xen. I discovered that when
>> TMEM is enabled in Ubuntu 14.10 as a guest on Xen 4.1 & Xen 4.3, "xm save" & "xm
>> restore" f
flight 57928 rumpuserxen real [real]
http://logs.test-lab.xenproject.org/osstest/logs/57928/
Regressions :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
build-amd64-rumpuserxen 5 rumpuserxen-build fail REGR. vs. 33866
build-i386-rumpuserxe
On Fri, Jun 05, 2015 at 06:27:01PM +0200, Fabian Frederick wrote:
> You're asking to remove lock_ufs() in allocation and replace it by
> truncate_mutex. I guess you're talking about doing that on current rc
> (without s_lock restored).
>
> I tried a quick patch on rc trying to convert lock_ufs()/
On Fri, 2015-06-05 at 18:10 +0100, Stefano Stabellini wrote:
> On Fri, 5 Jun 2015, Wei Liu wrote:
> > Hi all
> >
> > This bug is now considered a blocker for 4.6 release.
> >
> > The premises of the problem remain the same (George's translated
> > version):
> >
> > 1. QEMU may need extra pages f
On Thu, Jun 04, 2015 at 10:27:06PM +0800, yunfang tai wrote:
> Hi all,
Hey!
> Recently, I have been testing TMEM support on Xen. I discovered that when
> TMEM is enabled in Ubuntu 14.10 as a guest on Xen 4.1 & Xen 4.3, "xm save" & "xm
> restore" failed after more than 1000 pages are put in persi
On 05/06/2015 17:09, Ian Campbell wrote:
+ * injection, ignoring level 2 & 3.
+ */
+    if ( gicv3_sgir_to_cpumask(&vcpu_mask, sgir) )
+    {
+        gprintk(XENLOG_WARNING, "Wrong affinity in SGI1R_EL register\n");
I don't think we need to log this. The guest has
On 05/06/2015 16:56, Ian Campbell wrote:
On Mon, 2015-06-01 at 20:56 +0800, Chen Baozi wrote:
From: Chen Baozi
There are 3 places to change:
* Initialise vMPIDR value in vcpu_initialise()
* Find the vCPU from vMPIDR affinity information when accessing GICD
registers in vGIC
* Find the vCPU
On Fri, Jun 05, 2015 at 06:46:35PM +0100, Andrew Cooper wrote:
> On 05/06/15 18:01, Wei Liu wrote:
> > This patch does following things:
> > 1. Document v1 format.
> > 2. Factor out function to handle QEMU restore data and function to
> >handle v1 blob for restore path.
> > 3. Refactor save fun
On Tue, Jun 02, 2015 at 02:58:17PM +, Simon Waterman wrote:
> Hi,
>
> We're hitting the kernel BUG below in one of our VMs running on Xen 4.4 and
> Linux kernel 3.13.0. We use the xl toolstack and are using PCI pass-through
> to pass network cards and a disk controller. It happens on a varie
On Fri, Jun 05, 2015 at 06:10:17PM +0100, Stefano Stabellini wrote:
> On Fri, 5 Jun 2015, Wei Liu wrote:
> > Hi all
> >
> > This bug is now considered a blocker for 4.6 release.
> >
> > The premises of the problem remain the same (George's translated
> > version):
> >
> > 1. QEMU may need extra
On 05/06/2015 17:31, Ian Campbell wrote:
On Fri, 2015-06-05 at 17:04 +0100, Julien Grall wrote:
On 05/06/15 16:49, Ian Campbell wrote:
On Mon, 2015-06-01 at 20:56 +0800, Chen Baozi wrote:
From: Chen Baozi
Currently it only supports up to 8 vCPUs. Increase the region to hold
up to 128 vCPUs
Hi Ian,
On 05/06/2015 13:35, Ian Campbell wrote:
On Fri, 2015-05-08 at 14:29 +0100, Julien Grall wrote:
Some versions of the GIC are able to support multiple versions of the
vGIC.
For instance, some versions of the GICv3 can also support GICv2.
Signed-off-by: Julien Grall
After my suggesti
On 05/06/15 18:01, Wei Liu wrote:
> This patch does following things:
> 1. Document v1 format.
> 2. Factor out function to handle QEMU restore data and function to
>handle v1 blob for restore path.
> 3. Refactor save function to generate different blobs in the order
>specified in format spe
On Fri, Jun 05, 2015 at 05:58:11PM +0100, Ian Campbell wrote:
> On Fri, 2015-06-05 at 17:43 +0100, Wei Liu wrote:
>
> > 3. Add a libxl layer that wraps necessary information, take over
> >Andrew's work on libxl migration v2. Having a libxl layer that's not
> >part of migration v2 is a was
On 05/06/15 18:16, Stefano Stabellini wrote:
> On Fri, 5 Jun 2015, Andrew Cooper wrote:
>> On 05/06/15 17:43, Boris Ostrovsky wrote:
>>> On 06/05/2015 12:16 PM, Roger Pau Monné wrote:
On 03/06/15 at 14:08, Jan Beulich wrote:
On 03.06.15 at 12:02, wrote:
>> On Tue, 2 Jun 20
On 06/05/2015 01:16 PM, Stefano Stabellini wrote:
On Fri, 5 Jun 2015, Andrew Cooper wrote:
On 05/06/15 17:43, Boris Ostrovsky wrote:
On 06/05/2015 12:16 PM, Roger Pau Monné wrote:
On 03/06/15 at 14:08, Jan Beulich wrote:
On 03.06.15 at 12:02, wrote:
On Tue, 2 Jun 2015, Andrew Cooper
On 05/06/15 17:58, Ian Campbell wrote:
> On Fri, 2015-06-05 at 17:43 +0100, Wei Liu wrote:
>
>> 3. Add a libxl layer that wraps necessary information, take over
>>Andrew's work on libxl migration v2. Having a libxl layer that's not
>>part of migration v2 is a waste of effort.
>>
>> There a
On 06/05/2015 06:53 AM, wei.l...@citrix.com wrote:
> * Alternate p2m: support multiple copies of host p2m (ok)
> - Ed White
>
Revised design doc should be posted early week of June 8th.
V2 of patch series should follow within a couple of weeks.
V2 is significantly changed based on list feedb
On Fri, 5 Jun 2015, Andrew Cooper wrote:
> On 05/06/15 17:43, Boris Ostrovsky wrote:
> > On 06/05/2015 12:16 PM, Roger Pau Monné wrote:
> >> On 03/06/15 at 14:08, Jan Beulich wrote:
> >> On 03.06.15 at 12:02, wrote:
> On Tue, 2 Jun 2015, Andrew Cooper wrote:
> > With my x86 mai
On Fri, 5 Jun 2015, Ian Campbell wrote:
> On Fri, 2015-06-05 at 17:43 +0100, Wei Liu wrote:
>
> > 3. Add a libxl layer that wraps necessary information, take over
> >Andrew's work on libxl migration v2. Having a libxl layer that's not
> >part of migration v2 is a waste of effort.
> >
> >
On Fri, Jun 5, 2015 at 10:08 PM, Ian Campbell wrote:
> On Fri, 2015-06-05 at 21:25 +0530, Vijay Kilari wrote:
>> Let Xen mark those phantom devices added using MAPD as dummy and
>> just emulate, rather than translate, ITS commands for these devices.
>
> But we think guests might use this mechanism
On Fri, 5 Jun 2015, Wei Liu wrote:
> Hi all
>
> This bug is now considered a blocker for 4.6 release.
>
> The premises of the problem remain the same (George's translated
> version):
>
> 1. QEMU may need extra pages from Xen to implement option ROMs, and so at
>the moment it calls set_max_me
This patch does the following things:
1. Document the v1 format.
2. Factor out a function to handle QEMU restore data and a function to
handle the v1 blob for the restore path.
3. Refactor the save function to generate different blobs in the order
specified in the format specification.
4. Change functions to use "goto o
On Fri, 2015-06-05 at 17:43 +0100, Wei Liu wrote:
> 3. Add a libxl layer that wraps necessary information, take over
>Andrew's work on libxl migration v2. Having a libxl layer that's not
>part of migration v2 is a waste of effort.
>
> There are several obstacles for libxl migration v2 at
On 05/06/15 17:43, Boris Ostrovsky wrote:
> On 06/05/2015 12:16 PM, Roger Pau Monné wrote:
>> On 03/06/15 at 14:08, Jan Beulich wrote:
>> On 03.06.15 at 12:02, wrote:
On Tue, 2 Jun 2015, Andrew Cooper wrote:
> With my x86 maintainer hat on, the following is an absolute
> mi
On 06/05/2015 12:21 PM, Stefano Stabellini wrote:
On Fri, 5 Jun 2015, Roger Pau Monné wrote:
On 03/06/15 at 14:08, Jan Beulich wrote:
On 03.06.15 at 12:02, wrote:
On Tue, 2 Jun 2015, Andrew Cooper wrote:
With my x86 maintainer hat on, the following is an absolute minimum set
of prereq
On Fri, 2015-06-05 at 11:48 +0100, Ian Campbell wrote:
> All the flights in the new colo seem to have been on fiano[01].
>
> But having looked at the page again the early success was all on fiano0
> while the later failures were all on fiano1.
>
> fiano[01] are supposedly identical hardware.
>
>
Hi all
This bug is now considered a blocker for 4.6 release.
The premises of the problem remain the same (George's translated
version):
1. QEMU may need extra pages from Xen to implement option ROMs, and so at
the moment it calls set_max_mem() to increase max_pages so that it can
allocate
On 06/05/2015 12:16 PM, Roger Pau Monné wrote:
On 03/06/15 at 14:08, Jan Beulich wrote:
On 03.06.15 at 12:02, wrote:
On Tue, 2 Jun 2015, Andrew Cooper wrote:
With my x86 maintainer hat on, the following is an absolute minimum set
of prerequisites for PVH.
* 32bit support
Could you ple
On 05/06/15 17:11, Jan Beulich wrote:
On 05.06.15 at 17:55, wrote:
>> On 05/06/15 15:51, Jan Beulich wrote:
>> On 02.06.15 at 18:26, wrote:
+/*
+ * max_maptrack_frames is per domain so each VCPU gets a share of
+ * the maximum, but allow at least one frame per
On Fri, 5 Jun 2015, Jan Beulich wrote:
> >>> On 05.06.15 at 13:32, wrote:
> >> --- a/hw/xen/xen_pt.c
> >> +++ b/hw/xen/xen_pt.c
> >> @@ -248,7 +248,9 @@ static void xen_pt_pci_write_config(PCID
> >>
> >> /* check unused BAR register */
> >> index = xen_pt_bar_offset_to_index(addr);
> >
On Fri, 2015-06-05 at 17:00 +0100, Julien Grall wrote:
> >> diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
> >> index 648ca08..b033c0b 100644
> >> --- a/tools/libxl/xl_cmdimpl.c
> >> +++ b/tools/libxl/xl_cmdimpl.c
> >> @@ -1298,6 +1298,18 @@ static void parse_config_data(const cha
On Fri, 2015-06-05 at 21:25 +0530, Vijay Kilari wrote:
> Let Xen mark those phantom devices added using MAPD as dummy and
> just emulate, rather than translate, ITS commands for these devices.
But we think guests might use this mechanism to drive completion
(instead of polling), so we have to trans
On 05/06/15 17:26, Ian Campbell wrote:
> On Mon, 2015-06-01 at 20:56 +0800, Chen Baozi wrote:
>> [...]
>> +#define GICV2_MAX_CPUS 8
>
> This and GICV3_MAX_CPUS don't seem very worthwhile, unless there are to
> be other uses of them.
>
> In fact, GICV3_MAX_CPUS is really MAX_VIRT_CPUS, through i
On 06/05/2015 12:09 PM, Jan Beulich wrote:
@@ -201,27 +202,56 @@ static inline void context_load(struct vcpu *v)
}
}
-static void amd_vpmu_load(struct vcpu *v)
+static int amd_vpmu_load(struct vcpu *v, bool_t from_guest)
{
struct vpmu_struct *vpmu = vcpu_vpmu(v);
-struct x
On 05/06/15 13:48, Ian Campbell wrote:
> On Fri, 2015-05-08 at 14:29 +0100, Julien Grall wrote:
>> * Modify the GICv3 driver to recognize a such device. I wasn't able
>> to find a register which tell if GICv2 is supported on GICv3. The only
>> way to find it seems to check if the DT node provid
On 06/05/2015 12:03 PM, Jan Beulich wrote:
On 29.05.15 at 20:42, wrote:
@@ -289,19 +302,24 @@ static int amd_vpmu_do_wrmsr(unsigned int msr, uint64_t
msr_content,
{
struct vcpu *v = current;
struct vpmu_struct *vpmu = vcpu_vpmu(v);
+unsigned int idx = 0;
+int type = get_p
On Fri, 2015-06-05 at 17:04 +0100, Julien Grall wrote:
> On 05/06/15 16:49, Ian Campbell wrote:
> > On Mon, 2015-06-01 at 20:56 +0800, Chen Baozi wrote:
> >> From: Chen Baozi
> >>
> >> Currently it only supports up to 8 vCPUs. Increase the region to hold
> >> up to 128 vCPUs, which is the maximum
On Mon, 2015-06-01 at 20:56 +0800, Chen Baozi wrote:
> [...]
> +#define GICV2_MAX_CPUS 8
This and GICV3_MAX_CPUS don't seem very worthwhile, unless there are to
be other uses of them.
In fact, GICV3_MAX_CPUS is really MAX_VIRT_CPUS, through its
association with the affinity mapping, i.e. if on
On Wed, 2015-06-03 at 09:35 -0400, Boris Ostrovsky wrote:
> > What I'm hearing from the x86 maintainers is that this is actually a
> > high priority and not a "nice to have cleanup".
> >
> >> I picked 32-bit support, Elena is looking into AMD
> > With the TODOs + these 2 being the things which the
On Mon, 2015-06-01 at 20:56 +0800, Chen Baozi wrote:
> From: Chen Baozi
>
> evtchn_init will call domain_max_vcpus to allocate poll_mask. On
> arm/arm64 platform, this number is determined by the vGIC the guest
> is going to use, which won't be initialised until arch_domain_create
> is called in
On Mon, 2015-06-01 at 20:56 +0800, Chen Baozi wrote:
> From: Chen Baozi
>
> After we have increased the size of GICR in address space for guest
> and made use of both AFF0 and AFF1 in (v)MPIDR, we are now able to
> support up to 4096 vCPUs in theory. However, it will cost 512M
> address space for
On Fri, 5 Jun 2015, Roger Pau Monné wrote:
> On 03/06/15 at 14:08, Jan Beulich wrote:
> On 03.06.15 at 12:02, wrote:
> >> On Tue, 2 Jun 2015, Andrew Cooper wrote:
> >>> With my x86 maintainer hat on, the following is an absolute minimum set
> >>> of prerequisites for PVH.
> >>>
> >>> *
On Mon, 2015-06-01 at 20:56 +0800, Chen Baozi wrote:
> From: Chen Baozi
>
> According to ARM CPUs bindings, the reg field should match the MPIDR's
> affinity bits. We will use AFF0 and AFF1 when constructing the reg value
> of the guest at the moment, for it is enough for the current max vcpu
> n
>>> On 05.06.15 at 18:11, wrote:
> On 05/06/15 16:38, Jan Beulich wrote:
> On 05.06.15 at 15:31, wrote:
>>> There is no need for top level sections for each of these, so they are
>>> subsumed into more-generic sections.
>>>
>>> .data.read_mostly and .lockprofile.data are moved to .data
>>>
>>> On 05.06.15 at 17:57, wrote:
> On 05/06/15 12:28, Jan Beulich wrote:
>> Qemu shouldn't be fiddling with this bit directly, as the hypervisor
>> may (and now does) use it for its own purposes. Provide it with a
>> replacement interface, allowing the hypervisor to track host and guest
>> masking
On 03/06/15 at 14:08, Jan Beulich wrote:
On 03.06.15 at 12:02, wrote:
>> On Tue, 2 Jun 2015, Andrew Cooper wrote:
>>> With my x86 maintainer hat on, the following is an absolute minimum set
>>> of prerequisites for PVH.
>>>
>>> * 32bit support
>>
>> Could you please explain why 32bit is
>>> On 05.06.15 at 18:00, wrote:
> On Fri, Jun 05, 2015 at 04:16:59PM +0100, Jan Beulich wrote:
>> >>> On 05.06.15 at 16:49, wrote:
>> > On Mon, May 18, 2015 at 01:41:34PM +0100, Jan Beulich wrote:
>> >> >>> On 15.05.15 at 21:44, wrote:
>> >> A general remark: Having worked on ELF on different o
On Mon, 2015-06-01 at 20:56 +0800, Chen Baozi wrote:
> +mpidr_aff = vcpuid_to_vaffinity(cpu);
> +DPRINT("Create cpu@%lx (logical CPUID: %d) node\n", mpidr_aff, cpu);
"PRIx64" again please. I think the hex vs. decimal here is to be
expected and ok by the way.
With that fixed: Acked
On Mon, 2015-06-01 at 20:56 +0800, Chen Baozi wrote:
> From: Chen Baozi
>
> According to ARM CPUs bindings, the reg field should match the MPIDR's
> affinity bits. We will use AFF0 and AFF1 when constructing the reg value
> of the guest at the moment, for it is enough for the current max vcpu
> n
>>> On 05.06.15 at 17:55, wrote:
> On 05/06/15 15:51, Jan Beulich wrote:
> On 02.06.15 at 18:26, wrote:
>>> +/*
>>> + * max_maptrack_frames is per domain so each VCPU gets a share of
>>> + * the maximum, but allow at least one frame per VCPU.
>>> + */
>>> +if ( v->maptrack
On 05/06/15 16:38, Jan Beulich wrote:
On 05.06.15 at 15:31, wrote:
>> There is no need for top level sections for each of these, so they are
>> subsumed into more-generic sections.
>>
>> .data.read_mostly and .lockprofile.data are moved to .data
>>
>> .init.setup, .initcall.init, .xsm_ini
> @@ -201,27 +202,56 @@ static inline void context_load(struct vcpu *v)
> }
> }
>
> -static void amd_vpmu_load(struct vcpu *v)
> +static int amd_vpmu_load(struct vcpu *v, bool_t from_guest)
> {
> struct vpmu_struct *vpmu = vcpu_vpmu(v);
> -struct xen_pmu_amd_ctxt *ctxt = vpmu->con
On Mon, 2015-06-01 at 20:56 +0800, Chen Baozi wrote:
> From: Chen Baozi
>
> To support more than 16 vCPUs, we have to calculate cpumask with AFF1
> field value in ICC_SGI1R_EL1.
>
> Signed-off-by: Chen Baozi
> ---
> xen/arch/arm/vgic-v3.c| 30 ++
> xen/i
On Mon, 2015-06-01 at 20:56 +0800, Chen Baozi wrote:
> From: Chen Baozi
>
> Use cpumask_t instead of unsigned long, which can express at most 64 CPUs.
> Add the {gicv2|gicv3}_sgir_to_cpumask helpers in the corresponding vGICs
> to translate GICD_SGIR/ICC_SGI1R_EL1 to vcpu_mask for vgic_to_sgi.
>
> S
On 05/06/15 16:49, Ian Campbell wrote:
> On Mon, 2015-06-01 at 20:56 +0800, Chen Baozi wrote:
>> From: Chen Baozi
>>
>> Currently it only supports up to 8 vCPUs. Increase the region to hold
>> up to 128 vCPUs, which is the maximum number that GIC-500 supports.
>>
>> Signed-off-by: Chen Baozi
>> R
>>> On 29.05.15 at 20:42, wrote:
> @@ -289,19 +302,24 @@ static int amd_vpmu_do_wrmsr(unsigned int msr, uint64_t
> msr_content,
> {
> struct vcpu *v = current;
> struct vpmu_struct *vpmu = vcpu_vpmu(v);
> +unsigned int idx = 0;
> +int type = get_pmu_reg_type(msr, &idx);
>
>
On Fri, Jun 05, 2015 at 04:16:59PM +0100, Jan Beulich wrote:
> >>> On 05.06.15 at 16:49, wrote:
> > On Mon, May 18, 2015 at 01:41:34PM +0100, Jan Beulich wrote:
> >> >>> On 15.05.15 at 21:44, wrote:
> >> > As such having the payload in an ELF file is the sensible way. We would
> >> > be
> >> > c
On 05/06/15 13:42, Ian Campbell wrote:
> On Fri, 2015-05-08 at 14:29 +0100, Julien Grall wrote:
>> A platform may have a GIC compatible with previous version of the
>> device.
>>
>> This allows virtualizing an unmodified OS on new hardware if the GIC
>> is compatible with an older version.
>>
>> Wh
On Mon, 2015-06-01 at 20:56 +0800, Chen Baozi wrote:
> From: Chen Baozi
>
> There are 3 places to change:
>
> * Initialise vMPIDR value in vcpu_initialise()
> * Find the vCPU from vMPIDR affinity information when accessing GICD
> registers in vGIC
> * Find the vCPU from vMPIDR affinity informa
On 05/06/15 12:28, Jan Beulich wrote:
> Qemu shouldn't be fiddling with this bit directly, as the hypervisor
> may (and now does) use it for its own purposes. Provide it with a
> replacement interface, allowing the hypervisor to track host and guest
> masking intentions independently (clearing the
On 05/06/15 15:51, Jan Beulich wrote:
On 02.06.15 at 18:26, wrote:
>> Performance analysis of aggregate network throughput with many VMs
>> shows that performance is significantly limited by contention on the
>> maptrack lock when obtaining/releasing maptrack handles from the free
>> list.
>>
On Fri, Jun 5, 2015 at 6:58 PM, Ian Campbell wrote:
> On Fri, 2015-06-05 at 18:11 +0530, Vijay Kilari wrote:
>
>> >>Here device table memory allocated by guest is used to lookup for the
>> >> device.
>> >> Why can't we avoid using the guest memory all together and just only
>> >> emulate
>>
On Mon, 2015-06-01 at 20:56 +0800, Chen Baozi wrote:
> From: Chen Baozi
>
> GICv3 restricts the maximum number of CPUs in affinity 0 (one
> cluster) to 16.
Please add the reference to why this is.
> That is to say, the upper 4 bits of affinity 0 are unused.
> Current implementation conside
On Fri, 2015-06-05 at 13:34 +0100, Julien Grall wrote:
> On 04/06/15 17:25, Joe Perches wrote:
> > On Thu, 2015-06-04 at 13:52 +0100, Julien Grall wrote:
> >> On 04/06/15 13:46, David Vrabel wrote:
> >>> On 04/06/15 13:45, Julien Grall wrote:
> On 03/06/15 18:06, Joe Perches wrote:
> > On
On Mon, 2015-06-01 at 20:56 +0800, Chen Baozi wrote:
> From: Chen Baozi
>
> Currently it only supports up to 8 vCPUs. Increase the region to hold
> up to 128 vCPUs, which is the maximum number that GIC-500 supports.
>
> Signed-off-by: Chen Baozi
> Reviewed-by: Julien Grall
Acked-by: Ian Campb
>>> On 05.06.15 at 17:39, wrote:
> On 05/06/15 12:26, Jan Beulich wrote:
>> Also make dmar_{read,write}q() actually do what their names suggest (we
>> don't need to be concerned of 32-bit restrictions anymore).
>>
>> Signed-off-by: Jan Beulich
>
> Your patch has a typo "don#t" which isn't presen
On Fri, 2015-06-05 at 16:29 +0100, Julien Grall wrote:
> On 05/06/15 13:26, Ian Campbell wrote:
> > On Fri, 2015-06-05 at 13:24 +0100, Ian Campbell wrote:
> >> On Fri, 2015-05-08 at 14:29 +0100, Julien Grall wrote:
> >>> There is a global check for page alignment within this function.
> >>>
> >>> S
On 05/06/15 12:26, Jan Beulich wrote:
> Also make dmar_{read,write}q() actually do what their names suggest (we
> don't need to be concerned of 32-bit restrictions anymore).
>
> Signed-off-by: Jan Beulich
Your patch has a typo "don#t" which isn't present in this commit message.
Otherwise, Review
>>> On 05.06.15 at 15:31, wrote:
> There is no need for top level sections for each of these, so they are
> subsumed into more-generic sections.
>
> .data.read_mostly and .lockprofile.data are moved to .data
>
> .init.setup, .initcall.init, .xsm_initcall.init are moved to .init.data
>
> Thi
On 05/06/15 12:25, Jan Beulich wrote:
> Now that we support it for our guests, let's do so ourselves too.
>
> Signed-off-by: Jan Beulich
Reviewed-by: Andrew Cooper
___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
On 05/06/15 12:24, Jan Beulich wrote:
> The specification explicitly provides for this, so we should have
> supported this from the beginning.
>
> Signed-off-by: Jan Beulich
Reviewed-by: Andrew Cooper
On 05/06/15 13:26, Ian Campbell wrote:
> On Fri, 2015-06-05 at 13:24 +0100, Ian Campbell wrote:
>> On Fri, 2015-05-08 at 14:29 +0100, Julien Grall wrote:
>>> There is a global check for page alignment within this function.
>>>
>>> Signed-off-by: Julien Grall
>>> Cc: Zoltan Kiss
>>
>> Acked-by: Ia
>>> On 05.06.15 at 16:28, wrote:
> On 06/05/2015 09:53 AM, wei.l...@citrix.com wrote:
>>
>> * VPMU - 'perf' support in Xen (good)
>> v21 posted
>> Need reviews/final ack.
>>- Boris Ostrovsky
>
> I posted a version last week with very few changes. Besides Jan's review
> I think it n
>>> On 05.06.15 at 17:00, wrote:
> On Wed, May 20, 2015 at 05:11:20PM +0200, Martin Pohlack wrote:
> * Xen, as it is now, has a couple of non-unique symbol names which will
>> make runtime symbol identification hard. Sometimes, static symbols
>> simply have the same name in C files, sometimes
flight 57895 xen-4.2-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/57895/
Regressions :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
test-amd64-i386-xend-winxpsp3 16 guest-stop fail REGR. vs. 53018
test-amd64-i386-x
On 05/06/15 13:23, Ian Campbell wrote:
> On Fri, 2015-05-08 at 14:29 +0100, Julien Grall wrote:
>> Make clear that the GIC interface is 4K and not rely on PAGE_SIZE == 4K.
>
> I'm not really sure about this, it seems like splitting hairs a bit too
> finely.
It's very confusing when you read the c
>>> On 05.06.15 at 16:49, wrote:
> On Mon, May 18, 2015 at 01:41:34PM +0100, Jan Beulich wrote:
>> >>> On 15.05.15 at 21:44, wrote:
>> > As such having the payload in an ELF file is the sensible way. We would be
>> > carrying the various set of structures (and data) in the ELF sections under
>> >
On Fri, 2015-06-05 at 15:37 +0100, Julien Grall wrote:
> On 05/06/15 15:08, Ian Campbell wrote:
> > On Mon, 2015-06-01 at 20:56 +0800, Chen Baozi wrote:
> >> From: Chen Baozi
> >>
> >> Currently the number of vcpus on arm64 with GICv3 is limited up to 8 due
> >> to the fixed size of redistributor
On 05/06/15 16:00, Konrad Rzeszutek Wilk wrote:
>> As you discussed, if you allocate hotpatch memory within ±2GB of the
>> > patch location, no further trampoline indirection is required, a
>> > 5-byte JMP does the trick on x86. We found that to be sufficient in
>> > our experiments.
> And worst
On 05/06/15 13:18, Ian Campbell wrote:
> On Fri, 2015-05-08 at 14:29 +0100, Julien Grall wrote:
>
> Subject: "messages printed"
>
>> - Print all the redistributor regions rather than only the first
>> one...
>> - Add # in the format to print 0x for hexadecimal. It's easier to
>> d
On Wed, May 20, 2015 at 05:11:20PM +0200, Martin Pohlack wrote:
> Hi,
>
> this looks very interesting.
Thank you!
>
> I have talked about an experimental Xen hotpatching design at Linux
> Plumbers Conference 2014 in Düsseldorf, slides are here:
>
> http://www.linuxplumbersconf.net/2014/ocw//sys
On 05/06/15 15:28, Boris Ostrovsky wrote:
>
>
>>
>> == Deferred ==
>>
>>
>> * IO-NUMA - hwloc and xl (none)
>> Andrew Cooper had an RFC patch for hwloc
>> add restrictions as to which devices cannot safely/functionally
>> be split apart.
>>- Boris Ostrovsky
>>
>
> I don't have any im
Hi Wei,
On 05/06/15 14:53, wei.l...@citrix.com wrote:
> === Hypervisor ARM ===
> * ARM GICv2 on GICv3 support (none)
(fair)
> - Julien Grall
> - Vijay Kilari
I'm the only one working on it...
Regards,
--
Julien Grall
>>> On 02.06.15 at 18:26, wrote:
> Performance analysis of aggregate network throughput with many VMs
> shows that performance is significantly limited by contention on the
> maptrack lock when obtaining/releasing maptrack handles from the free
> list.
>
> Instead of a single free list use a per-V
On Mon, May 18, 2015 at 08:54:22PM +0800, Liuqiming (John) wrote:
> Hi Konrad,
>
> Will this design include the hotpatch build tool chain?
Yes, that is certainly the idea.
> Such as how these .xplice_ sections are created? How to handle Xen symbols
> when creating the hotpatch ELF file?
Right now I am
On Mon, May 18, 2015 at 01:41:34PM +0100, Jan Beulich wrote:
> >>> On 15.05.15 at 21:44, wrote:
> > As such having the payload in an ELF file is the sensible way. We would be
> > carrying the various set of structures (and data) in the ELF sections under
> > different names and with definitions. T
>>> On 05.06.15 at 15:44, wrote:
> On 05/06/15 14:35, Jan Beulich wrote:
> On 02.06.15 at 18:26, wrote:
>>> --- a/xen/common/grant_table.c
>>> +++ b/xen/common/grant_table.c
>>> @@ -288,10 +288,10 @@ static inline void put_maptrack_handle(
>>> struct grant_table *t, int handle)
>>> {
>>
On 05/06/15 15:38, Jan Beulich wrote:
On 05.06.15 at 15:44, wrote:
>> On 05/06/15 14:35, Jan Beulich wrote:
>> On 02.06.15 at 18:26, wrote:
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -288,10 +288,10 @@ static inline void put_maptrack_handle(