flight 111656 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/111656/
Perfect :-)
All tests in this flight passed as required
version targeted for testing:
ovmf 3a3d62d2e66d7bec1b97f51c26eac5326e30ad94
baseline version:
ovmf c82fc2b555285306904c9
On Tue, Jul 11, 2017 at 01:07:46AM -0400, Brian Gerst wrote:
> > If I make the scattered feature support conditional on CONFIG_X86_64
> > (based on comment below) then cpu_has() will always be false unless
> > CONFIG_X86_64 is enabled. So this won't need to be wrapped by the
> > #ifdef.
>
> If you
On Mon, Jul 10, 2017 at 3:41 PM, Tom Lendacky wrote:
> On 7/8/2017 7:50 AM, Brian Gerst wrote:
>>
>> On Fri, Jul 7, 2017 at 9:38 AM, Tom Lendacky
>> wrote:
>>>
>>> Update the CPU features to include identifying and reporting on the
>>> Secure Memory Encryption (SME) feature. SME is identified by
On Mon, Jul 10, 2017 at 3:50 PM, Tom Lendacky wrote:
> On 7/8/2017 7:57 AM, Brian Gerst wrote:
>>
>> On Fri, Jul 7, 2017 at 9:39 AM, Tom Lendacky
>> wrote:
>>>
>>> Currently there is a check if the address being mapped is in the ISA
>>> range (is_ISA_range()), and if it is, then phys_to_virt() is
flight 111645 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/111645/
Regressions :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
test-amd64-amd64-xl-qemuu-win7-amd64 18 guest-start/win.repeat fail REGR. vs.
111506
Tests which
linking on Linux debian/stretch/arm[64] with libxen-4.8:
exec.o: In function `reclaim_ramblock':
qemu/exec.c:2071: undefined reference to `xen_invalidate_map_cache_entry'
exec.o: In function `qemu_map_ram_ptr':
qemu/exec.c:2177: undefined reference to `xen_map_cache'
qemu/exec.
Hi,
Those errors were triggered installing libxen v4.8 on debian Stretch
ARM (32b and 64b).
It seems QEMU only supports Xen on x86 hosts.
patch 1 disables PCI Passthrough if not on x86,
patch 2 disables xen_map_cache() on ARM, I don't think it is the correct
way to do it, then
patch 3 adds a few commen
linking on Linux debian/stretch/arm64 with libxen-4.8:
hw/xen/xen_pt.o: In function `xen_pt_pci_read_config':
qemu/hw/xen/xen_pt.c:206: undefined reference to `xen_shutdown_fatal_error'
hw/xen/xen_pt.o: In function `xen_igd_passthrough_isa_bridge_create':
qemu/hw/xen/xen_pt.c:698:
Signed-off-by: Philippe Mathieu-Daudé
---
hw/xen/xen_pt.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/hw/xen/xen_pt.c b/hw/xen/xen_pt.c
index 375efa68f6..21c32b0991 100644
--- a/hw/xen/xen_pt.c
+++ b/hw/xen/xen_pt.c
@@ -58,7 +58,7 @@
#include "hw/pci/pci.h"
#includ
Actually, qemu-xen has done this in igd_write_opregion() of
hw/xen/xen_pt_graphics.c, while qemu-xen-traditional lacks this, so I send
this patch to fix it.
thanks
> -Original Message-
> From: Pasi Kärkkäinen [mailto:pa...@iki.fi]
> Sent: Monday, July 10, 2017 11:00 PM
> To: Anthony PE
flight 111644 linux-3.18 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/111644/
Regressions :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
test-amd64-amd64-xl-qemut-win7-amd64 18 guest-start/win.repeat fail in 111523
REGR. vs. 110441
Tes
flight 111635 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/111635/
Regressions :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
test-amd64-amd64-xl-qcow2 7 xen-boot fail REGR. vs. 110515
test-amd64-amd64-xl
Saving/restoring the physmap to/from xenstore was introduced to
QEMU mainly in order to work around the VRAM region restore issue.
The sequence of restore operations implies that we should know
the effective guest VRAM address *before* we have the VRAM region
restored (which happens later). Unfortuna
This new call tries to update a requested map cache entry
according to the changes in the physmap. The call searches
for the entry, unmaps it, and maps it again at the same place using
a new guest address. If the mapping is dummy this call will
make it real.
This function makes use of a new xe
If we have a system with xenforeignmemory_map2() implemented
we don't need to save/restore physmap on suspend/restore
anymore. In case we resume a VM without physmap - try to
recreate the physmap during memory region restore phase and
remap map cache entries accordingly. The old code is left
for co
Non-functional change.
Signed-off-by: Igor Druzhinin
Reviewed-by: Stefano Stabellini
Reviewed-by: Paul Durrant
---
hw/i386/xen/xen-hvm.c | 57 ---
1 file changed, 31 insertions(+), 26 deletions(-)
diff --git a/hw/i386/xen/xen-hvm.c b/hw/i386/xen
Dummies are simple anonymous mappings that are placed instead
of regular foreign mappings in certain situations when we need
to postpone the actual mapping but still have to give a
memory region to QEMU to play with.
This is planned to be used for restore on Xen.
Signed-off-by: Igor Druzhinin
Rev
flight 111632 linux-next real [real]
http://logs.test-lab.xenproject.org/osstest/logs/111632/
Regressions :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
test-amd64-amd64-examine 7 reboot fail REGR. vs. 111580
test-amd64-amd64-pai
On Mon, Jul 10, 2017 at 03:35:33PM +0200, Olaf Hering wrote:
> On Mon, Jul 10, Konrad Rzeszutek Wilk wrote:
>
> > Soo I wrote some code for exactly this for Xen 4.4.4 , along with
> > creation of a PGM map to see the NUMA nodes locality.
>
> Are you planning to prepare that for staging at some po
Under certain circumstances normal xen-mapcache functioning may be broken
by guest's actions. This may lead to either QEMU performing exit() due to
a caught bad pointer (and with QEMU process gone the guest domain simply
appears hung afterwards) or actual use of the incorrect pointer inside
QEMU a
On 7/8/2017 7:57 AM, Brian Gerst wrote:
On Fri, Jul 7, 2017 at 9:39 AM, Tom Lendacky wrote:
Currently there is a check if the address being mapped is in the ISA
range (is_ISA_range()), and if it is, then phys_to_virt() is used to
perform the mapping. When SME is active, the default is to add pa
On 7/8/2017 7:50 AM, Brian Gerst wrote:
On Fri, Jul 7, 2017 at 9:38 AM, Tom Lendacky wrote:
Update the CPU features to include identifying and reporting on the
Secure Memory Encryption (SME) feature. SME is identified by CPUID
0x8000001f, but requires BIOS support to enable it (set bit 23 of
M
flight 111624 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/111624/
Regressions :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
test-arm64-arm64-libvirt-xsm 12 guest-start fail REGR. vs. 111403
test-armhf-armhf-
On Mon, Jul 10, 2017 at 11:07 AM, Petre Pircalabu
wrote:
> In case of a vm_event with the emulate_flags set, if the instruction
> cannot be emulated, the monitor should be notified instead of directly
> injecting a hw exception.
> This behavior can be used to re-execute an instruction not supporte
On 7/8/2017 4:24 AM, Ingo Molnar wrote:
* Tom Lendacky wrote:
This patch series provides support for AMD's new Secure Memory Encryption (SME)
feature.
I'm wondering, what's the typical performance hit to DRAM access latency when
SME
is enabled?
It's about an extra 10 cycles of DRAM lat
In case of a vm_event with the emulate_flags set, if the instruction
cannot be emulated, the monitor should be notified instead of directly
injecting a hw exception.
This behavior can be used to re-execute an instruction not supported by
the emulator using the real processor (e.g. altp2m) instead o
On 2017-07-10 01:52:27 -0600, Jan Beulich wrote:
> >>> On 07.07.17 at 20:11, wrote:
> > On 2017-07-06 02:45:18 -0600, Jan Beulich wrote:
> >> I think so, but I may be missing parts of your reasoning as to why
> >> hiding the device may be a good thing.
> >
> > Here is the rationale behind hiding
On Mon, Jul 10, 2017 at 04:49:18PM +0100, Peter Maydell wrote:
> On 5 July 2017 at 08:14, Paolo Bonzini wrote:
> > This will be useful when the functions are called, early in the configure
> > process, to filter out targets that do not support hardware acceleration.
> >
> > Signed-off-by: Paolo Bo
flight 111619 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/111619/
Regressions :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
test-amd64-amd64-xl-qemuu-win7-amd64 18 guest-start/win.repeat fail REGR. vs.
111506
Tests which
flight 111628 linux-3.18 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/111628/
Regressions :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
test-amd64-amd64-xl-qemut-win7-amd64 18 guest-start/win.repeat fail in 111523
REGR. vs. 110441
Tes
Hi,
Perhaps Anthony can review this patch (noticing you reviewed other igd related
patches recently..) ?
Thanks,
-- Pasi
On Tue, Jun 27, 2017 at 12:12:50PM +0800, Xiong Zhang wrote:
> Currently guest couldn't access host opregion when igd is passed through
> to guest with qemu-xen-traditiona
On Fri, Jul 07, 2017 at 12:07:58PM +0800, Xiong Zhang wrote:
> In igd passthrough environment, guest could only access opregion at the
> first bootup time. Once guest shutdown, later guest couldn't access
> opregion anymore.
> This is because qemu set emulated guest opregion base address to host
>
flight 111640 xtf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/111640/
Perfect :-)
All tests in this flight passed as required
version targeted for testing:
xtf d94ba594f2e680dc4f1d1026df38b8d0fb5a5dc1
baseline version:
xtf 48efc1044ba1348ad21db1
Hi Stefano,
Looks like this patch can be applied.
On Fri, Mar 24, 2017 at 01:40:25PM +, Paul Durrant wrote:
> Commit 090fa1c8 "add support for unplugging NVMe disks..." extended the
> existing disk unplug flag to cover NVMe disks as well as IDE and SCSI.
>
> The recent thread on the xen-dev
branch xen-unstable
xenbranch xen-unstable
job test-amd64-amd64-xl-pvh-intel
testid xen-boot
Tree: linux git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.g
On Mon, Jul 10, Konrad Rzeszutek Wilk wrote:
> Soo I wrote some code for exactly this for Xen 4.4.4 , along with
> creation of a PGM map to see the NUMA nodes locality.
Are you planning to prepare that for staging at some point? I have not
checked whether this series is already merged.
Olaf
> -Original Message-
> From: Roger Pau Monne [mailto:roger@citrix.com]
> Sent: 30 June 2017 16:01
> To: xen-de...@lists.xenproject.org
> Cc: boris.ostrov...@oracle.com; julien.gr...@arm.com;
> konrad.w...@oracle.com; Roger Pau Monne ; Jan
> Beulich ; Andrew Cooper
> ; Paul Durrant
> Su
> -Original Message-
> From: Roger Pau Monne [mailto:roger@citrix.com]
> Sent: 30 June 2017 16:01
> To: xen-de...@lists.xenproject.org
> Cc: boris.ostrov...@oracle.com; julien.gr...@arm.com;
> konrad.w...@oracle.com; Roger Pau Monne ; Ian
> Jackson ; Wei Liu ; Jan
> Beulich ; Andrew Coo
On Mon, Jul 10, 2017 at 04:41:35AM -0600, Jan Beulich wrote:
> >>> On 10.07.17 at 12:10, wrote:
> > I would like to verify on which NUMA node the PFNs used by a HVM guest
> > are located. Is there an API for that? Something like:
> >
> > foreach (pfn, domid)
> > mfns_per_node[pfn_to_node(pf
On Mon, Jul 10, 2017 at 1:25 AM, Yi Sun wrote:
> On 17-07-07 12:37:28, Meng Xu wrote:
>> > + Sample cache capacity bitmasks for a bitlength of 8 are shown below.
>> > Please
>> > + note that all (and only) contiguous '1' combinations are allowed (e.g.
>> > H,
>> > + 0FF0H, 003CH, etc.).
>
On Fri, Jul 7, 2017 at 1:56 PM, Oleksandr Grytsov wrote:
> On Fri, Jul 7, 2017 at 1:32 PM, Wei Liu wrote:
>> On Fri, Jul 07, 2017 at 01:29:39PM +0300, Oleksandr Grytsov wrote:
>> > Actually my first patch was probably done on the old codebase
>>> > which doesn't have locking in add function.
On Mon, Jul 10, 2017 at 3:22 PM, Oleksandr Grytsov wrote:
> On Thu, Jul 6, 2017 at 6:29 PM, Wei Liu wrote:
>> On Tue, Jun 27, 2017 at 01:03:19PM +0300, Oleksandr Grytsov wrote:
>>> From: Oleksandr Grytsov
>>>
>>> Add libxl__device_list, libxl__device_list_free.
>>> Device list is created from li
On Thu, Jul 6, 2017 at 6:29 PM, Wei Liu wrote:
> On Tue, Jun 27, 2017 at 01:03:19PM +0300, Oleksandr Grytsov wrote:
>> From: Oleksandr Grytsov
>>
>> Add libxl__device_list, libxl__device_list_free.
>> Device list is created from libxl xen store entries.
>> In order to fill libxl device structure
> -Original Message-
[snip]
> > > > +object_unparent(OBJECT(blkdev->iothread));
> > >
> > > Shouldn't this be object_unref?
> > >
> >
> > I don't think so. I think this is required to undo what was done by calling
> object_property_add_child() on the root object.
>
> Right, so if objec
On Mon, Jul 10, 2017 at 03:36:52AM -0600, Jan Beulich wrote:
On 10.07.17 at 03:17, wrote:
>> On Fri, Jul 07, 2017 at 09:57:47AM -0600, Jan Beulich wrote:
>> On 07.07.17 at 08:48, wrote:
+#define remote_pbl_operation_begin(flags) \
+({
On 7 July 2017 at 19:29, Stefano Stabellini wrote:
> The following changes since commit b11365867568ba954de667a0bfe0945b8f78d6bd:
>
> Merge remote-tracking branch 'remotes/borntraeger/tags/s390x-20170706' into
> staging (2017-07-06 11:42:59 +0100)
>
> are available in the git repository at:
>
>
flight 111611 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/111611/
Regressions :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
test-amd64-amd64-xl-pvh-intel 7 xen-boot fail REGR. vs. 110515
test-amd64-amd64-xl
>>> On 10.07.17 at 12:10, wrote:
> I would like to verify on which NUMA node the PFNs used by a HVM guest
> are located. Is there an API for that? Something like:
>
> foreach (pfn, domid)
> mfns_per_node[pfn_to_node(pfn)]++
> foreach (node)
> printk("%x %x\n", node, mfns_per_node[node
Real hardware wraps silently in most cases, so we should behave the
same. Also split real and VM86 mode handling, as the latter really
ought to have limit checks applied.
Signed-off-by: Jan Beulich
---
v3: Restore 32-bit wrap check for AMD.
v2: Extend to non-64-bit modes. Reduce 64-bit check to a
I would like to verify on which NUMA node the PFNs used by a HVM guest
are located. Is there an API for that? Something like:
foreach (pfn, domid)
mfns_per_node[pfn_to_node(pfn)]++
foreach (node)
printk("%x %x\n", node, mfns_per_node[node])
Olaf
On Sun, 9 Jul 2017, Peter Maydell wrote:
> Check the return status of the xen_host_pci_get_* functions we call in
> xen_pt_msix_init(), and fail device init if the reads failed rather than
> ploughing ahead. (Spotted by Coverity: CID 777338.)
>
> Signed-off-by: Peter Maydell
Reviewed-by: Stefano
>>> On 10.07.17 at 03:17, wrote:
> On Fri, Jul 07, 2017 at 09:57:47AM -0600, Jan Beulich wrote:
> On 07.07.17 at 08:48, wrote:
>>> +#define remote_pbl_operation_begin(flags) \
>>> +({ \
>>> +spin_lock_irqsave(&remo
Hello Meng Xu,
On 07.07.17 21:43, Meng Xu wrote:
Andrii,
If you encountered any question/difficulty in choosing the proper VCPU
parameters for your workload, please don't hesitate to ping me and
Dario.
Thank you. I'll keep you in touch when we have something specified.
--
*Andrii Anisov*
flight 71677 distros-debian-sid real [real]
http://osstest.xs.citrite.net/~osstest/testlogs/logs/71677/
Regressions :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
test-amd64-i386-amd64-sid-netboot-pygrub 11 guest-start fail REGR. vs. 71625
Tests whic
This run is configured for baseline tests only.
flight 71676 ovmf real [real]
http://osstest.xs.citrite.net/~osstest/testlogs/logs/71676/
Perfect :-)
All tests in this flight passed as required
version targeted for testing:
ovmf c82fc2b555285306904c9c1ed6524a85bee8841a
baseline v
When setting up the Xenstore watch for the memory target size the new
watch will fire at once. Don't try to reach the configured target size
by onlining new memory in this case, as the current memory size will
be smaller in almost all cases due to e.g. BIOS reserved pages.
Onlining new memory will
>>> On 07.07.17 at 20:11, wrote:
> On 2017-07-06 02:45:18 -0600, Jan Beulich wrote:
>> I think so, but I may be missing parts of your reasoning as to why
>> hiding the device may be a good thing.
>
> Here is the rationale behind hiding the erring device.
>
> If a device is misbehaving, one of th
While these are latent issues only for now, correct them right away:
- EVEX.V' (called RX in our code) needs to uniformly be 1,
- EVEX.R' (called R in our code) is uniformly being ignored.
Signed-off-by: Jan Beulich
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_
Recent changes to the SDM (and XED) have made clear that older hardware
raising #UD when the bit is set was really an erratum. Generalize the
so far AMD-only override.
Signed-off-by: Jan Beulich
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -5598,9
On 07/07/17 19:11, Thomas Gleixner wrote:
> On Fri, 7 Jul 2017, Thomas Gleixner wrote:
>
>> On Fri, 7 Jul 2017, Juergen Gross wrote:
>>
>>> Commit bf22ff45bed664aefb5c4e43029057a199b7070c ("genirq: Avoid
>>> unnecessary low level irq function calls") breaks Xen guest
>>> save/restore handling.
>>>
Going though the XED commits from the last couple of months made me
notice that VPINSRD, other than VPEXTRD, does not clear VEX.W for non-
64-bit modes, leading to an insertion of stray 32-bits of zero in case
the original instruction had the bit set.
Also remove a pointless fall-through in VPEXTR