Hi,
Has anyone run into an OVMF build failure? The OVMF is from the Xen default repo:
http://xenbits.xen.org/git-http/ovmf.git, at latest commit
af9785a9ed61daea52b47f0bf448f1f228beee1e, and the OS is x86_64 RHEL 6.6.
...
make[1]: ***
[/home/nightly/builds_xen_unstable/xen-src-bf0d4923-20151029/tools/firmware/
> -Original Message-
> From: Hu, Robert
> Sent: Monday, November 2, 2015 11:44 AM
> To: 'Ian Jackson' ;
> xen-de...@lists.xenproject.org
> Cc: Ian Campbell
> Subject: RE: [OSSTEST PATCH v14 PART 2 10-26/26] Nested HVM testing
>
> > -Original Message-
> > From: Ian Jackson [mailto:
Do the allocation of page tables in a separate function. This will
allow doing the allocation at different times of the boot preparation,
depending on the features the kernel supports.
Signed-off-by: Juergen Gross
---
grub-core/loader/i386/xen.c | 82
Modify the page table construction to allow multiple virtual regions
to be mapped. This is done as preparation for removing the p2m list
from the initial kernel mapping in order to support huge pv domains.
Using this capability also allows a cleaner approach for mapping the
relocator page.
The Xen hypervisor supports starting a dom0 with large memory (up to
the TB range) by not including the initrd and p2m list in the initial
kernel mapping. Especially the p2m list can grow larger than the
available virtual space in the initial mapping.
The started kernel is indicating the support o
Do the p2m list allocation of the kernel to be loaded in a separate
function. This will allow doing the p2m list allocation at different
times of the boot preparation, depending on the features the kernel
supports.
While at it, remove the superfluous setting of first_p2m_pfn and
nr_p2m_frames as
Modern pvops linux kernels support a p2m list not covered by the
kernel mapping. This capability is flagged by an elf-note specifying
the virtual address at which the kernel expects the p2m list to be
mapped.
In case the elf-note is set by the kernel, don't place the p2m list
into the kernel mapping
Modern pvops linux kernels support an initrd not covered by the initial
mapping. This capability is flagged by an elf-note.
In case the elf-note is set by the kernel, don't place the initrd into
the initial mapping. This will allow loading larger initrds and/or
supporting domains with larger memory, a
Do the allocation of special pages (start info, console and xenbus
ring buffers) in a separate function. This will allow doing the
allocation at different times of the boot preparation, depending on
the features the kernel supports.
Signed-off-by: Juergen Gross
---
grub-core/loader/i386/xen
flight 63400 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/63400/
Failures :-/ but no regressions.
Tests which are failing intermittently (not blocking):
test-armhf-armhf-xl-rtds 11 guest-startfail in 63375 pass in 63400
test-armhf-armhf-xl-xsm 16 guest-s
On 11/02/2015 12:49 PM, kbuild test robot wrote:
> Hi Bob,
>
> [auto build test ERROR on v4.3-rc7 -- if it's inappropriate base, please
> suggest rules for selecting the more suitable base]
>
> url:
> https://github.com/0day-ci/linux/commits/Bob-Liu/xen-block-multi-hardware-queues-rings-sup
Hi Bob,
[auto build test ERROR on v4.3-rc7 -- if it's inappropriate base, please
suggest rules for selecting the more suitable base]
url:
https://github.com/0day-ci/linux/commits/Bob-Liu/xen-block-multi-hardware-queues-rings-support/20151102-122806
config: x86_64-allyesconfig (attached as .c
Split per-ring information into a new structure "xen_blkif_ring", so that one
vbd device can be associated with one or more rings/hardware queues.
Introduce 'pers_gnts_lock' to protect the pool of persistent grants since we
may have multiple backend threads.
This patch is a preparation for supporting mul
Make persistent grants per-queue/ring instead of per-device, so that we can
drop the 'dev_lock' and get better scalability.
Signed-off-by: Bob Liu
---
drivers/block/xen-blkfront.c | 89 +---
1 file changed, 34 insertions(+), 55 deletions(-)
diff --git a/d
The backend advertises "multi-queue-max-queues" to the frontend, then gets the
negotiated number from "multi-queue-num-queues" written by blkfront.
Signed-off-by: Bob Liu
---
drivers/block/xen-blkback/blkback.c | 11 +++
drivers/block/xen-blkback/common.h | 1 +
drivers/block/xen-blkback/xenbus.c |
Note: These patches were based on original work of Arianna's internship for
GNOME's Outreach Program for Women.
After using the blk-mq API, a guest has more than one (nr_vcpus) software
request queue associated with each block front. These queues can be mapped
over several rings (hardware queues) to the
The per-device io_lock became a coarse-grained lock after multi-queues/rings
were introduced; this patch introduces a fine-grained ring_lock for each ring.
The old io_lock is renamed to dev_lock and only protects the ->grants list
which is shared by all rings.
Signed-off-by: Bob Liu
---
drivers
Split per-ring information into a new structure "blkfront_ring_info".
A ring is the representation of a hardware queue; every vbd device can be
associated with one or more rings depending on how many hardware queues/rings
are to be used.
This patch is a preparation for supporting real multi hardware queue
The number of hardware queues for xen/blkfront is set by the parameter
'max_queues' (default 4), while the max value xen/blkback supports is notified
through xenstore ("multi-queue-max-queues").
The negotiated number is the smaller of the two and is written back to
xenstore as "multi-queue-num-queues",
Preparatory patch for multiple hardware queues (rings). The number of
rings is unconditionally set to 1; a larger number will be enabled in the
next patch so as to keep every single patch small and readable.
Signed-off-by: Arianna Avanzini
Signed-off-by: Bob Liu
---
drivers/block/xen-blkback/common.h
Make pool of persistent grants and free pages per-queue/ring instead of
per-device to get better scalability.
Signed-off-by: Bob Liu
---
drivers/block/xen-blkback/blkback.c | 212 +---
drivers/block/xen-blkback/common.h | 32 +++---
drivers/block/xen-blkback/xen
Preparatory patch for multiple hardware queues (rings). The number of
rings is unconditionally set to 1; a larger number will be enabled in the
next patch so as to keep every single patch small and readable.
Signed-off-by: Bob Liu
---
drivers/block/xen-blkfront.c | 327 +---
Document the multi-queue/ring feature in terms of XenStore keys to be written by
the backend and by the frontend.
Signed-off-by: Bob Liu
--
v2:
Add descriptions together with multi-page ring buffer.
---
include/xen/interface/io/blkif.h | 48
1 file change
> -Original Message-
> From: Ian Jackson [mailto:ian.jack...@eu.citrix.com]
> Sent: Saturday, September 26, 2015 3:15 AM
> To: xen-de...@lists.xenproject.org
> Cc: Hu, Robert ; Ian Campbell
> ; Ian Jackson
> Subject: [OSSTEST PATCH v14 PART 2 10-26/26] Nested HVM testing
>
> This is the s
flight 63398 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/63398/
Regressions :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 16 guest-localmigrate/x10
fail REGR. vs. 59254
Liuyingdong wrote on 2015-10-31:
> Hi All
>
> We encountered a blue screen problem when live migrating a
> Win8.1/Win2012R2 64-bit VM from a V3 processor to a non-V3 processor
> sandbox; KVM does not have this problem.
>
> After looking into the MSR capabilities, we found XEN hypervisor
> exposed bit 39 an
This run is configured for baseline tests only.
flight 38236 xen-4.4-testing real [real]
http://osstest.xs.citrite.net/~osstest/testlogs/logs/38236/
Failures :-/ but no regressions.
Tests which did not succeed, but are not blocking:
test-amd64-i386-rumpuserxen-i386 1 build-check(1)
This run is configured for baseline tests only.
flight 38237 ovmf real [real]
http://osstest.xs.citrite.net/~osstest/testlogs/logs/38237/
Perfect :-)
All tests in this flight passed
version targeted for testing:
ovmf df60fb4cc2ca896fcea9e37b06c276d569f1a6b8
baseline version:
ovm
flight 63395 linux-3.14 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/63395/
Regressions :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
build-armhf-pvops 5 kernel-build fail REGR. vs. 62648
Tests which are failin
flight 63397 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/63397/
Regressions :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
build-armhf-libvirt 5 libvirt-build fail REGR. vs. 63340
Tests which did not succe
flight 63396 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/63396/
Perfect :-)
All tests in this flight passed
version targeted for testing:
ovmf df60fb4cc2ca896fcea9e37b06c276d569f1a6b8
baseline version:
ovmf 843f8ca01bc195cd077f13512fe285e8db9
flight 63391 linux-3.10 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/63391/
Regressions :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
build-armhf-pvops 5 kernel-build fail REGR. vs. 62642
Tests which are failin
flight 63384 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/63384/
Regressions :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
test-armhf-armhf-xl-xsm 16 guest-start/debian.repeat fail REGR. vs. 63363
Regressions which a
flight 63382 xen-4.4-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/63382/
Failures :-/ but no regressions.
Regressions which are regarded as allowable (not blocking):
test-armhf-armhf-xl-multivcpu 16 guest-start/debian.repeatfail like 63097
test-amd64-amd64-xl-qemuu-
flight 63381 xen-4.3-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/63381/
Regressions :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
test-amd64-amd64-migrupgrade 21 guest-migrate/src_host/dst_host fail REGR. vs.
63212
Regression
flight 63385 linux-mingo-tip-master real [real]
http://logs.test-lab.xenproject.org/osstest/logs/63385/
Regressions :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
build-amd64-pvops 5 kernel-build fail REGR. vs. 60684
build-i386
flight 63379 xen-4.6-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/63379/
Failures and problems with tests :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
test-armhf-armhf-xl-multivcpu 3 host-install(3)broken REGR. vs. 633
flight 63378 xen-4.5-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/63378/
Regressions :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-localmigrate/x10 fail REGR. vs.
63358
test-amd64-