Re: [Xen-devel] [PATCH v9 4/5] x86/PCI: Enable a 64bit BAR on AMD Family 15h (Models 30h-3fh) Processors v5

2017-11-23 Thread Christian König

Am 22.11.2017 um 18:27 schrieb Boris Ostrovsky:

On 11/22/2017 11:54 AM, Christian König wrote:

Am 22.11.2017 um 17:24 schrieb Boris Ostrovsky:

On 11/22/2017 05:09 AM, Christian König wrote:

Am 21.11.2017 um 23:26 schrieb Boris Ostrovsky:

On 11/21/2017 08:34 AM, Christian König wrote:

Hi Boris,

attached are two patches.

The first one is a trivial fix for the infinite loop issue, it now
correctly aborts the fixup when it can't find address space for the
root window.

The second is a workaround for your board. It simply checks if there
is exactly one Processor Function to apply this fix on.

Both are based on linus current master branch. Please test if they
fix
your issue.

Yes, they do fix it but that's because the feature is disabled.

Do you know what the actual problem was (on Xen)?

I still haven't understood what you actually did with Xen.

When you used PCI pass through with those devices then you have made a
major configuration error.

When the problem happened on dom0 then the explanation is most likely
that some PCI device ended up in the configured space, but the routing
was only setup correctly on one CPU socket.

The problem is that dom0 can be (and was, in my case) booted with less
than the full physical memory, and so the "rest" of the host memory is not
necessarily reflected in iomem. Your patch then tried to configure that
memory for MMIO and the system hung.

And so my guess is that this patch will break dom0 on a single-socket
system as well.

Oh, thanks!

I've thought about that possibility before, but wasn't able to find a
system which actually does that.

May I ask why the rest of the memory isn't reported to the OS?

That memory doesn't belong to the OS (dom0), it is owned by the hypervisor.


Sounds like I can't trust Linux resource management and probably need
to read the DRAM config to figure things out after all.


My question is whether what you are trying to do should ever be done for
a guest at all (any guest, not necessarily Xen).


The issue is probably that I don't know enough about Xen: What exactly 
is dom0? My understanding was that dom0 is the hypervisor, but that 
seems to be incorrect.


The issue is that under no circumstances *EVER* should a virtualized 
guest have access to the PCI devices marked as "Processor Function" on 
AMD platforms. Otherwise it is trivial to break out of the virtualization.


When dom0 is something like the system domain with all hardware access 
then the approach seems legitimate, but then the hypervisor should 
report the stolen memory to the OS using the e820 table.


When the hypervisor doesn't do that and the Linux kernel isn't aware 
that there is memory at a given location, mapping PCI space there will 
obviously crash the hypervisor.


Possible solutions as far as I can see are either disabling this feature 
when we detect that we are a Xen dom0, scanning the DRAM settings to 
update Linux resource handling, or fixing Xen to report stolen memory to 
the dom0 OS as reserved.
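The failure mode can be sketched with a toy model (hypothetical Python, not the actual fixup code; `find_mmio_window` and the memory maps are invented for illustration): a window search that only trusts the reported RAM map happily lands on top of host memory the hypervisor kept for itself.

```python
# Toy model of why the 64bit-BAR fixup can pick real RAM on a Xen dom0.
# The fixup looks for an unused window large enough for a new root-bridge
# MMIO window; under Xen, dom0's memory map only lists the RAM dom0 was
# booted with, so hypervisor-owned host RAM looks like a "free" gap.

def find_mmio_window(reported_ram, size, limit=1 << 40):
    """Return the lowest gap of `size` bytes not covered by reported RAM."""
    candidate = 0
    for start, end in sorted(reported_ram):
        if candidate + size <= start:
            return candidate
        candidate = max(candidate, end)
    return candidate if candidate + size <= limit else None

GiB = 1 << 30
host_ram = [(0, 4 * GiB), (4 * GiB, 64 * GiB)]   # what the hardware has
dom0_ram = [(0, 4 * GiB)]                        # what dom0 was given

# On bare metal the chosen window lies above all RAM; on dom0 the same
# search lands at the 4 GiB mark, on top of hypervisor-owned memory.
print(find_mmio_window(host_ram, 2 * GiB) // GiB)   # 64
print(find_mmio_window(dom0_ram, 2 * GiB) // GiB)   # 4
```

Reporting the stolen ranges as reserved in dom0's e820 would make the second search behave like the first.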


Opinions?

Thanks,
Christian.



-boris




___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xen.org/xen-devel


[Xen-devel] [qemu-mainline test] 116440: regressions - FAIL

2017-11-23 Thread osstest service owner
flight 116440 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/116440/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-xsm   6 xen-install  fail REGR. vs. 116190
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-saverestore fail REGR. vs. 116190
 test-armhf-armhf-xl-arndale  19 leak-check/check fail REGR. vs. 116190

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-xsm 14 saverestore-support-check fail like 116190
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop fail like 116190
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop fail like 116190
 test-armhf-armhf-libvirt 14 saverestore-support-check fail like 116190
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check fail like 116190
 test-amd64-amd64-xl-pvhv2-intel 12 guest-start fail never pass
 test-amd64-i386-libvirt-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt 13 migrate-support-check fail never pass
 test-amd64-amd64-xl-pvhv2-amd 12 guest-start fail never pass
 test-amd64-amd64-libvirt 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qcow2 12 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2 fail never pass
 test-armhf-armhf-libvirt-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-rtds 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-check fail never pass
 test-armhf-armhf-xl-credit2 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2 14 saverestore-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check fail never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-vhd 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-vhd 13 saverestore-support-check fail never pass
 test-armhf-armhf-xl-arndale 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale 14 saverestore-support-check fail never pass
 test-armhf-armhf-libvirt 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check fail never pass
 test-armhf-armhf-xl 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check fail never pass
 test-armhf-armhf-xl 14 saverestore-support-check fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check fail never pass
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install fail never pass
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-install fail never pass

version targeted for testing:
 qemuu a15d835f00dce270fd3194e83d9910f4b5b44ac0
baseline version:
 qemuu 1fa0f627d03cd0d0755924247cafeb42969016bf

Last test of basis   116190  2017-11-15 06:53:12 Z    8 days
Failing since        116227  2017-11-16 13:17:17 Z    6 days    8 attempts
Testing same since   116440  2017-11-22 09:32:36 Z    0 days    1 attempts


People who touched revisions under test:
  "Daniel P. Berrange" 
  Alberto Garcia 
  Alex Bennée 
  Alexey Kardashevskiy 
  Anton Nefedov 
  BALATON Zoltan 
  Christian Borntraeger 
  Cornelia Huck 
  Daniel Henrique Barboza 
  Daniel P. Berrange 
  Dariusz Stojaczyk 
  David Gibson 
  David Hildenbrand 
  Dou Liyang 
  Dr. David Alan Gilbert 
  Ed Swierk 
  Emilio G. Cota 
  Eric Blake 
  Gerd Hoffmann 
  Greg Kurz 
  Helge Deller 
  James Clarke 
  James Cowgill 
  Jason Wang 
  Jeff Cody 
  Jindrich Makovicka 
  Joel Stanley 
  John Paul Adrian Glaubitz 
  Kevin Wolf 
  linzhecheng 
  Mao Zhongyi 
  Marc-André Lureau 
  Marcel Apfelbaum 
  Maria Klimushenkova 
  Max Reitz 
  Michael Roth 
  Michael S. Tsirkin 
  Mike Nawrocki 
  Paolo Bonzini 
  Pavel Dovgalyuk 
  Peter Maydell 
  Philippe Mathieu-Daudé 
  Richard Henderson 
  Richard Henderson 
  Riku Voipio 
  Stefan Berger 
  Stefan Hajnoczi 
  Stefan Weil 
  Stefano Stabellini 
  Suraj Jitindar Singh 
  Thomas Huth 
  Vladimir Sementsov-Ogievskiy 
  Wang Guang 
  Wang Yong 
  Wanpeng Li 
  Wei Huang 
  Yongbok Kim 
  ZhiPeng Lu 

jobs:
 build-amd64-xsm 

Re: [Xen-devel] [PATCH v3 07/17] SUPPORT.md: Add virtual devices common to ARM and x86

2017-11-23 Thread Paul Durrant
> -Original Message-
> From: George Dunlap [mailto:george.dun...@citrix.com]
> Sent: 22 November 2017 19:20
> To: xen-devel@lists.xenproject.org
> Cc: George Dunlap ; Ian Jackson
> ; Wei Liu ; Andrew Cooper
> ; Jan Beulich ; Stefano
> Stabellini ; Konrad Wilk ;
> Tim (Xen.org) ; Roger Pau Monne ;
> Anthony Perard ; Paul Durrant
> ; Julien Grall 
> Subject: [PATCH v3 07/17] SUPPORT.md: Add virtual devices common to
> ARM and x86
> 
> Mostly PV protocols.
> 
> Signed-off-by: George Dunlap 

Reviewed-by: Paul Durrant 

> ---
> Changes since v2:
> - Define "having xl support" as a requirement for Tech Preview and
> Supported
> - ...and remove backend from xl support section
> - Add OpenBSD blkback
> - Fix Linux backend names
> - Remove non-existent implementation (PV USB Linux)
> - Remove support for PV keyboard in Windows (Fix in qemu tree didn't make
> it)
> 
> CC: Ian Jackson 
> CC: Wei Liu 
> CC: Andrew Cooper 
> CC: Jan Beulich 
> CC: Stefano Stabellini 
> CC: Konrad Wilk 
> CC: Tim Deegan 
> CC: Roger Pau Monne 
> CC: Anthony Perard 
> CC: Paul Durrant 
> CC: Julien Grall 
> ---
>  SUPPORT.md | 150
> ++
> +++
>  1 file changed, 150 insertions(+)
> 
> diff --git a/SUPPORT.md b/SUPPORT.md
> index dd3632b913..96c381fb55 100644
> --- a/SUPPORT.md
> +++ b/SUPPORT.md
> @@ -128,6 +128,10 @@ Output of information in machine-parseable JSON
> format
> 
>  Status: Supported
> 
> +### QEMU backend hotplugging for xl
> +
> +Status: Supported
> +
>  ## Toolstack/3rd party
> 
>  ### libvirt driver for xl
> @@ -223,6 +227,152 @@ which add paravirtualized functionality to HVM
> guests
>  for improved performance and scalability.
>  This includes exposing event channels to HVM guests.
> 
> +## Virtual driver support, guest side
> +
> +### Blkfront
> +
> +Status, Linux: Supported
> +Status, FreeBSD: Supported, Security support external
> +Status, NetBSD: Supported, Security support external
> +Status, OpenBSD: Supported, Security support external
> +Status, Windows: Supported
> +
> +Guest-side driver capable of speaking the Xen PV block protocol
> +
> +### Netfront
> +
> +Status, Linux: Supported
> +States, Windows: Supported
> +Status, FreeBSD: Supported, Security support external
> +Status, NetBSD: Supported, Security support external
> +Status, OpenBSD: Supported, Security support external
> +
> +Guest-side driver capable of speaking the Xen PV networking protocol
> +
> +### PV Framebuffer (frontend)
> +
> +Status, Linux (xen-fbfront): Supported
> +
> +Guest-side driver capable of speaking the Xen PV Framebuffer protocol
> +
> +### PV Console (frontend)
> +
> +Status, Linux (hvc_xen): Supported
> +Status, Windows: Supported
> +Status, FreeBSD: Supported, Security support external
> +Status, NetBSD: Supported, Security support external
> +
> +Guest-side driver capable of speaking the Xen PV console protocol
> +
> +### PV keyboard (frontend)
> +
> +Status, Linux (xen-kbdfront): Supported
> +
> +Guest-side driver capable of speaking the Xen PV keyboard protocol
> +
> +### PV USB (frontend)
> +
> +Status, Linux: Supported
> +
> +### PV SCSI protocol (frontend)
> +
> +Status, Linux: Supported, with caveats
> +
> +NB that while the PV SCSI backend is in Linux and tested regularly,
> +there is currently no xl support.
> +
> +### PV TPM (frontend)
> +
> +Status, Linux (xen-tpmfront): Tech Preview
> +
> +Guest-side driver capable of speaking the Xen PV TPM protocol
> +
> +### PV 9pfs frontend
> +
> +Status, Linux: Tech Preview
> +
> +Guest-side driver capable of speaking the Xen 9pfs protocol
> +
> +### PVCalls (frontend)
> +
> +Status, Linux: Tech Preview
> +
> +Guest-side driver capable of making pv system calls
> +
> +## Virtual device support, host side
> +
> +For host-side virtual device support,
> +"Supported" and "Tech preview" include xl/libxl support
> +unless otherwise noted.
> +
> +### Blkback
> +
> +Status, Linux (xen-blkback): Supported
> +Status, FreeBSD (blkback): Supported, Security support external
> +Status, NetBSD (xbdback): Supported, security support external
> +Status, QEMU (xen_disk): Supported
> +Status, Blktap2: Deprecated
> +
> +Host-side implementations of the Xen PV block protocol
> +
> +### Netback
> +
> +Status, Linux (xen-netback): Supported
> +Status, FreeBSD (netback): Supported, Security support external
> +Status, NetBSD (xennetback): Supported, Security support external
> +
> +Host-side implementations of Xen PV network protocol
> +
> +### PV Framebuffer (backend)
> +
> +Status, QEMU: Supported
> +
> +Host-side implementaiton of the Xen PV framebuffer protocol
> +
> +### PV Console (xenconsoled)
> +
> +Status: Supported
> +
> +Host-side implementation of the Xen PV console protocol
> +
> +### PV keyboard (backend)
> +
> +Status, QEMU: Supported
> +
> +Host-side implementation fo the Xen PV keyboar

[Xen-devel] [distros-debian-wheezy test] 72484: all pass

2017-11-23 Thread Platform Team regression test user
flight 72484 distros-debian-wheezy real [real]
http://osstest.xs.citrite.net/~osstest/testlogs/logs/72484/

Perfect :-)
All tests in this flight passed as required
baseline version:
 flight   72456

jobs:
 build-amd64  pass
 build-armhf  pass
 build-i386   pass
 build-amd64-pvopspass
 build-armhf-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-amd64-wheezy-netboot-pvgrub pass
 test-amd64-i386-i386-wheezy-netboot-pvgrub   pass
 test-amd64-i386-amd64-wheezy-netboot-pygrub  pass
 test-amd64-amd64-i386-wheezy-netboot-pygrub  pass



sg-report-flight on osstest.xs.citrite.net
logs: /home/osstest/logs
images: /home/osstest/images

Logs, config files, etc. are available at
http://osstest.xs.citrite.net/~osstest/testlogs/logs

Test harness code can be found at
http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Push not applicable.



Re: [Xen-devel] Linux as 32-bit Dom0?

2017-11-23 Thread Juergen Gross
On 22/11/17 15:48, Jan Beulich wrote:
 On 22.11.17 at 15:40,  wrote:
>> On 22/11/17 15:05, Jan Beulich wrote:
>>> Jürgen, Boris,
>>>
>>> am I trying something that's not allowed, but selectable via Kconfig?
>>> On system with multiple IO-APICs (I assume that's what triggers the
>>> problem) I get
>>>
>>> Kernel panic - not syncing: Max apic_id exceeded!
>>
>> Generally I don't think 32 bit dom0 is forbidden, but rarely used. I
>> wouldn't be too sad in case we'd decide to drop that support. ;-)
>>
>> Can you please be a little bit more specific?
>>
>> How many IOAPICs? From the code I guess this is an INTEL system with not
>> too recent IOAPIC versions (<0x14)?
>>
>> Having a little bit more of the boot log might help, too.
> 
> Full log attached, which should answer all questions. This is
> a Haswell system, so not too old an IO-APIC flavor I would say.

From this data I can't explain why the system is crashing.

Right now I have 3 possible explanations, all of which could be proved
by adding some printk statements in io_apic_get_unique_id(). Could you
please print the value returned by get_physical_broadcast() and the
complete apic_id_map right before the panic() call?

The possibilities IMHO are:
- the LAPIC version is limiting the number of available apicids
- apic_id_map is somehow filled up completely with all bits set
- a compiler bug leading to a false positive
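The possibilities above can be played with in a toy model of the id allocation (hypothetical Python; the real code is io_apic_get_unique_id() in arch/x86/kernel/apic/io_apic.c, and the names and values here are illustrative):

```python
def get_unique_apic_id(apic_id_map, broadcast):
    """Return the first free APIC id below the physical broadcast id."""
    for candidate in range(broadcast):
        if candidate not in apic_id_map:
            apic_id_map.add(candidate)
            return candidate
    raise RuntimeError("Max apic_id exceeded!")   # the observed panic

# With an xAPIC-style broadcast of 0xF only ids 0..14 are usable, so a
# low get_physical_broadcast() value, or an apic_id_map that is somehow
# already full, makes the very first IO-APIC registration panic:
apic_id_map = set(range(15))          # all usable ids already claimed
try:
    get_unique_apic_id(apic_id_map, 0xF)
except RuntimeError as err:
    print(err)                        # Max apic_id exceeded!
```

Printing `broadcast` and the map contents right before the raise is the moral equivalent of the printk instrumentation requested above.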


Juergen




Re: [Xen-devel] [PATCH 12/16] SUPPORT.md: Add Security-releated features

2017-11-23 Thread Jan Beulich
>>> On 22.11.17 at 18:13,  wrote:
> On 11/21/2017 08:52 AM, Jan Beulich wrote:
> On 13.11.17 at 16:41,  wrote:
>>> With the exception of driver domains, which depend on PCI passthrough,
>>> and will be introduced later.
>>>
>>> Signed-off-by: George Dunlap 
>> 
>> Shouldn't we also explicitly exclude tool stack disaggregation here,
>> with reference to XSA-77?
> 
> Well in this document, we already consider XSM "experimental"; that
> would seem to subsume the specific exclusions listed in XSA-77.
> 
> I've modified the "XSM & FLASK" as below; let me know what you think.
> 
> The other option would be to make separate entries for specific uses of
> XSM (i.e., "for simple domain restriction" vs "for domain disaggregation").
> 
>  -George
> 
> 
> ### XSM & FLASK
> 
> Status: Experimental
> 
> Compile time disabled.
> 
> Also note that using XSM
> to delegate various domain control hypercalls
> to particular other domains, rather than only permitting use by dom0,
> is also specifically excluded from security support for many hypercalls.
> Please see XSA-77 for more details.

That's fine with me.

Jan



Re: [Xen-devel] [PATCH 16/16] SUPPORT.md: Add limits RFC

2017-11-23 Thread Jan Beulich
>>> On 22.11.17 at 19:01,  wrote:

> 
>> On Nov 21, 2017, at 9:26 AM, Jan Beulich  wrote:
>>
> On 13.11.17 at 16:41,  wrote:
>>> +### Virtual CPUs
>>> +
>>> +Limit, x86 PV: 8192
>>> +Limit-security, x86 PV: 32
>>> +Limit, x86 HVM: 128
>>> +Limit-security, x86 HVM: 32
>>
>> Personally I consider the "Limit-security" numbers too low here, but
>> I have no proof that higher numbers will work _in all cases_.
> 
> You don’t have to have conclusive proof that the numbers work in all
> cases; we only need to have reasonable evidence that higher numbers are
> generally reliable.  To use US legal terminology, it’s “preponderance of
> evidence” (usually used in civil trials) rather than “beyond a
> reasonable doubt” (used in criminal trials).
> 
> In this case, there are credible claims that using more vcpus opens
> users up to a host DoS, and no evidence (or arguments) to the contrary.
>  I think it would be irresponsible, under those circumstances, to tell
> people that they should provide more vcpus to untrusted guests.
> 
> It wouldn’t be too hard to gather further evidence.  If someone
> competent spent a few days trying to crash a larger guest and failed,
> then that would be reason to think that perhaps larger numbers were safe.
> 
>>
>>> +### Virtual RAM
>>> +
>>> +Limit-security, x86 PV: 2047GiB
>>
>> I think this needs splitting for 64- and 32-bit (the latter can go up
>> to 168GiB only on hosts with no memory past the 168GiB boundary,
>> and up to 128GiB only on larger ones, without this being a processor
>> architecture limitation).
> 
> OK.  Below is an updated section.  It might be good to specify how large
> is "larger".

Well, simply anything with memory extending beyond the 168GiB
boundary, i.e. ...

> ---
> ### Virtual RAM
> 
> Limit-security, x86 PV 64-bit: 2047GiB
> Limit-security, x86 PV 32-bit: 168GiB (see below)
> Limit-security, x86 HVM: 1.5TiB
> Limit, ARM32: 16GiB
> Limit, ARM64: 1TiB
> 
> Note that there are no theoretical limits to 64-bit PV or HVM guest sizes
> other than those determined by the processor architecture.
> 
> All 32-bit PV guest memory must be under 168GiB;
> this means the total memory for all 32-bit PV guests cannot exceed 168GiB.
> On larger hosts, this limit is 128GiB.

... "On hosts with memory extending beyond 168GiB, this limit is
128GiB."
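As a back-of-the-envelope check (a sketch under the assumption that the bound comes from the compat machine-to-phys table — one 4-byte entry per 4 KiB machine page — which must fit in the address range a 32-bit guest can map):

```python
# Size of a compat M2P table covering a given amount of host memory,
# assuming one 4-byte entry per 4 KiB machine page.
GiB, MiB, KiB = 1 << 30, 1 << 20, 1 << 10

def m2p_table_size(host_memory):
    return host_memory // (4 * KiB) * 4   # entries * 4 bytes each

print(m2p_table_size(168 * GiB) // MiB)   # 168 -> a 168 MiB table
print(m2p_table_size(128 * GiB) // MiB)   # 128 -> a 128 MiB table
```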

>>> +### Event Channel FIFO ABI
>>> +
>>> +Limit: 131072
>>
>> Are we certain this is a security supportable limit? There is at least
>> one loop (in get_free_port()) which can potentially have this number
>> of iterations.
> 
> I have no idea.  Do you have another limit you’d like to propose instead?

Since I can't prove the given limit might be a problem, it's also
hard to suggest an alternative. Probably the limit is fine as is,
despite the number looking pretty big: In x86 PV page table
handling we're fine processing a single L2 in one go, which
involves twice as many iterations (otoh I'm struggling to find a
call tree where {alloc,free}_l2_table() would actually be called
with "preemptible" set to false).
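For reference, the 131072 figure is consistent with the usual FIFO ABI arithmetic (a sketch; the constant names below only approximate those in Xen's public event-channel headers):

```python
# Derivation of the FIFO event channel limit quoted above.
PAGE_SIZE = 4096                 # bytes per event-array page
EVENT_WORD_SIZE = 4              # one 32-bit event word per channel
MAX_EVENT_ARRAY_PAGES = 128      # pages a guest may register

words_per_page = PAGE_SIZE // EVENT_WORD_SIZE    # 1024 channels per page
limit = MAX_EVENT_ARRAY_PAGES * words_per_page
print(limit)                                     # 131072
```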

> Also, I realized that I somehow failed to send out the 17th patch (!),
> which primarily had XXX entries for qemu-upstream/qemu-traditional, and
> host serial console support.
> 
> Shall I try to make a list of supported serial cards from
> /build/hg/xen.git/xen/drivers/char/Kconfig?

Hmm, interesting question. For the moment I'm having a hard time
seeing how, for someone using an arbitrary serial card, problems with
it could be caused by guest behavior. Other functionality problems
(read: bugs or missing code for unknown cards/quirks) aren't
security support relevant afaict.

Jan


Re: [Xen-devel] [PATCH 13/16] SUPPORT.md: Add secondary memory management features

2017-11-23 Thread Jan Beulich
>>> On 22.11.17 at 18:15,  wrote:
> On 11/21/2017 07:55 PM, Andrew Cooper wrote:
>> On 13/11/17 15:41, George Dunlap wrote:
>>> Signed-off-by: George Dunlap 
>>> ---
>>> CC: Ian Jackson 
>>> CC: Wei Liu 
>>> CC: Andrew Cooper 
>>> CC: Jan Beulich 
>>> CC: Stefano Stabellini 
>>> CC: Konrad Wilk 
>>> CC: Tim Deegan 
>>> CC: Tamas K Lengyel 
>>> ---
>>>  SUPPORT.md | 31 +++
>>>  1 file changed, 31 insertions(+)
>>>
>>> diff --git a/SUPPORT.md b/SUPPORT.md
>>> index 0f7426593e..3e352198ce 100644
>>> --- a/SUPPORT.md
>>> +++ b/SUPPORT.md
>>> @@ -187,6 +187,37 @@ Export hypervisor coverage data suitable for analysis 
> by gcov or lcov.
>>>  
>>>  Status: Supported
>>>  
>>> +### Memory Sharing
>>> +
>>> +Status, x86 HVM: Tech Preview
>>> +Status, ARM: Tech Preview
>>> +
>>> +Allow sharing of identical pages between guests
>> 
>> "Tech Preview" should imply there is any kind of `xl dedup-these-domains
>> $X $Y` functionality.
>> 
>> The only thing we appear to have is an example wrapper around the libxc
>> interface, which requires the user to nominate individual frames, and
>> this doesn't qualify as "functionally complete" IMO.
> 
> Right, I was getting confused with paging, which does have at least some
> code in the tools/ directory.  (But perhaps should also be considered
> experimental?  When was the last time anyone tried to use it?)

Olaf, are you still playing with it every now and then?

Jan



Re: [Xen-devel] [PATCH v3 02/17] SUPPORT.md: Add core functionality

2017-11-23 Thread Jan Beulich
>>> On 22.11.17 at 20:20,  wrote:
> Core memory management and scheduling.
> 
> Signed-off-by: George Dunlap 

Acked-by: Jan Beulich 




Re: [Xen-devel] [PATCH 13/16] SUPPORT.md: Add secondary memory management features

2017-11-23 Thread Olaf Hering
On Thu, Nov 23, Jan Beulich wrote:

> Olaf, are you still playing with it every now and then?

No, I have not tried it since I last touched it.
The last thing I know was that integrating it into libxl was difficult
because it was not straightforward to describe "memory usage" properly.


Olaf



Re: [Xen-devel] [PATCH v3 03/17] SUPPORT.md: Add some x86 features

2017-11-23 Thread Jan Beulich
>>> On 22.11.17 at 20:20,  wrote:
> Including host architecture support and guest types.
> 
> Signed-off-by: George Dunlap 

Acked-by: Jan Beulich 




Re: [Xen-devel] [PATCH v3 06/17] SUPPORT.md: Add scalability features

2017-11-23 Thread Jan Beulich
>>> On 22.11.17 at 20:20,  wrote:
> Superpage support and PVHVM.
> 
> Signed-off-by: George Dunlap 

Acked-by: Jan Beulich 
with one remark:

> +## Scalability
> +
> +### Super page support
> +
> +Status, x86 HVM/PVH, HAP: Supported
> +Status, x86 HVM/PVH, Shadow, 2MiB: Supported
> +Status, ARM: Supported
> +
> +NB that this refers to the ability of guests
> +to have higher-level page table entries point directly to memory,
> +improving TLB performance.
> +On ARM, and on x86 in HAP mode,
> +the guest has whatever support is enabled by the hardware.
> +On x86 in shadow mode, only 2MiB (L2) superpages are available;
> +furthermore, they do not have the performance characteristics of hardware 
> superpages.
> +
> +Also note is feature independent of the ARM "page granularity" feature (see 
> below).

Earlier lines in this block suggest you've tried to honor a certain
line length limit, while the last two non-empty ones clearly go
beyond 80 columns.

Jan



Re: [Xen-devel] [PATCH v3 07/17] SUPPORT.md: Add virtual devices common to ARM and x86

2017-11-23 Thread Jan Beulich
>>> On 22.11.17 at 20:20,  wrote:
> Mostly PV protocols.
> 
> Signed-off-by: George Dunlap 

Acked-by: Jan Beulich 
with a couple of remarks.

> @@ -223,6 +227,152 @@ which add paravirtualized functionality to HVM guests
>  for improved performance and scalability.
>  This includes exposing event channels to HVM guests.
>  
> +## Virtual driver support, guest side

With "guest side" here, ...

> +### Blkfront
> +
> +Status, Linux: Supported
> +Status, FreeBSD: Supported, Security support external
> +Status, NetBSD: Supported, Security support external
> +Status, OpenBSD: Supported, Security support external
> +Status, Windows: Supported
> +
> +Guest-side driver capable of speaking the Xen PV block protocol
> +
> +### Netfront
> +
> +Status, Linux: Supported
> +States, Windows: Supported
> +Status, FreeBSD: Supported, Security support external
> +Status, NetBSD: Supported, Security support external
> +Status, OpenBSD: Supported, Security support external
> +
> +Guest-side driver capable of speaking the Xen PV networking protocol
> +
> +### PV Framebuffer (frontend)

... is "(frontend)" here (also on entries further down) really useful?
Same for "host side" and "(backend)" then further down.

Also would it perhaps make sense to sort multiple OS entries by
some criteria (name, support status, ...)? Just like we ask that
new source files have #include-s sorted, this helps reduce patch
conflicts when otherwise everyone adds to the end of such lists.

> +### PV SCSI protocol (frontend)
> +
> +Status, Linux: Supported, with caveats
> +
> +NB that while the PV SCSI backend is in Linux and tested regularly,
> +there is currently no xl support.

Perhaps a copy-and-paste mistake saying "backend" here?

> +### PV Framebuffer (backend)
> +
> +Status, QEMU: Supported
> +
> +Host-side implementaiton of the Xen PV framebuffer protocol

implementation

Jan



Re: [Xen-devel] [PATCH v3 08/17] SUPPORT.md: Add x86-specific virtual hardware

2017-11-23 Thread Jan Beulich
>>> On 22.11.17 at 20:20,  wrote:
> x86-specific virtual hardware provided by the hypervisor, toolstack,
> or QEMU.
> 
> Signed-off-by: George Dunlap 

Non-QEMU parts
Acked-by: Jan Beulich 
with one typo preferably corrected:

> +### x86/Nested HVM
> +
> +Status, x86 HVM: Experimental
> +
> +This means providing hardware virtulatization support to guest VMs

virtualization

Jan



Re: [Xen-devel] [PATCH v3 04/17] SUPPORT.md: Add core ARM features

2017-11-23 Thread Julien Grall

Hi George,

On 22/11/17 19:20, George Dunlap wrote:

Hardware support and guest type.

Signed-off-by: George Dunlap 
---
Changes since v2:
- Moved SMMUv* into generic IOMMU section

CC: Ian Jackson 
CC: Wei Liu 
CC: Andrew Cooper 
CC: Jan Beulich 
CC: Stefano Stabellini 
CC: Konrad Wilk 
CC: Tim Deegan 
CC: Julien Grall 
---
  SUPPORT.md | 25 -
  1 file changed, 24 insertions(+), 1 deletion(-)

diff --git a/SUPPORT.md b/SUPPORT.md
index a4cf2da50d..5945ab4926 100644
--- a/SUPPORT.md
+++ b/SUPPORT.md
@@ -22,6 +22,14 @@ for the definitions of the support status levels etc.
  
  Status: Supported
  
+### ARM v7 + Virtualization Extensions

+
+Status: Supported
+
+### ARM v8
+
+Status: Supported
+
  ## Host hardware support
  
  ### Physical CPU Hotplug

@@ -35,6 +43,7 @@ for the definitions of the support status levels etc.
  ### Host ACPI (via Domain 0)
  
  Status, x86 PV: Supported

+Status, ARM: Experimental
  
  ### x86/Intel Platform QoS Technologies
  
@@ -44,6 +53,14 @@ for the definitions of the support status levels etc.
  
  Status, AMD IOMMU: Supported

  Status, Intel VT-d: Supported
+Status, ARM SMMUv1: Supported
+Status, ARM SMMUv2: Supported
+
+### ARM/GICv3 ITS
+
+Status: Experimental
+
+Extension to the GICv3 interrupt controller to support MSI.
  
  ## Guest Type
  
@@ -67,12 +84,18 @@ Requires hardware virtualisation support (Intel VMX / AMD SVM)
  
  Status: Supported
  
-PVH is a next-generation paravirtualized mode

+PVH is a next-generation paravirtualized mode


I am not sure I see the difference between the two lines. Is it intended?

The rest looks good.

Cheers,


  designed to take advantage of hardware virtualization support when possible.
  During development this was sometimes called HVMLite or PVHv2.
  
  Requires hardware virtualisation support (Intel VMX / AMD SVM)
  
+### ARM guest

+
+Status: Supported
+
+ARM only has one guest type at the moment
+
  ## Memory Management
  
  ### Dynamic memory control




--
Julien Grall


Re: [Xen-devel] [PATCH v3 06/17] SUPPORT.md: Add scalability features

2017-11-23 Thread Julien Grall

Hi George,

On 22/11/17 19:20, George Dunlap wrote:

Superpage support and PVHVM.

Signed-off-by: George Dunlap 
---
Changes since v2:
- Reworked superpage section

CC: Ian Jackson 
CC: Wei Liu 
CC: Andrew Cooper 
CC: Jan Beulich 
CC: Stefano Stabellini 
CC: Konrad Wilk 
CC: Tim Deegan 
CC: Julien Grall 


For the ARM bits:

Acked-by: Julien Grall 

Cheers,


---
  SUPPORT.md | 27 +++
  1 file changed, 27 insertions(+)

diff --git a/SUPPORT.md b/SUPPORT.md
index df429cb3c4..dd3632b913 100644
--- a/SUPPORT.md
+++ b/SUPPORT.md
@@ -196,6 +196,33 @@ on embedded platforms.
  
  Enables NUMA aware scheduling in Xen
  
+## Scalability

+
+### Super page support
+
+Status, x86 HVM/PVH, HAP: Supported
+Status, x86 HVM/PVH, Shadow, 2MiB: Supported
+Status, ARM: Supported
+
+NB that this refers to the ability of guests
+to have higher-level page table entries point directly to memory,
+improving TLB performance.
+On ARM, and on x86 in HAP mode,
+the guest has whatever support is enabled by the hardware.
+On x86 in shadow mode, only 2MiB (L2) superpages are available;
+furthermore, they do not have the performance characteristics of hardware 
superpages.
+
+Also note is feature independent of the ARM "page granularity" feature (see 
below).
+
+### x86/PVHVM
+
+Status: Supported
+
+This is a useful label for a set of hypervisor features
+which add paravirtualized functionality to HVM guests
+for improved performance and scalability.
+This includes exposing event channels to HVM guests.
+
  # Format and definitions
  
  This file contains prose, and machine-readable fragments.




--
Julien Grall


Re: [Xen-devel] [PATCH v3 04/17] SUPPORT.md: Add core ARM features

2017-11-23 Thread George Dunlap
On 11/23/2017 11:11 AM, Julien Grall wrote:
> Hi George,
> 
> On 22/11/17 19:20, George Dunlap wrote:
>> Hardware support and guest type.
>>
>> Signed-off-by: George Dunlap 
>> ---
>> Changes since v2:
>> - Moved SMMUv* into generic IOMMU section
>>
>> CC: Ian Jackson 
>> CC: Wei Liu 
>> CC: Andrew Cooper 
>> CC: Jan Beulich 
>> CC: Stefano Stabellini 
>> CC: Konrad Wilk 
>> CC: Tim Deegan 
>> CC: Julien Grall 
>> ---
>>   SUPPORT.md | 25 -
>>   1 file changed, 24 insertions(+), 1 deletion(-)
>>
>> diff --git a/SUPPORT.md b/SUPPORT.md
>> index a4cf2da50d..5945ab4926 100644
>> --- a/SUPPORT.md
>> +++ b/SUPPORT.md
>> @@ -22,6 +22,14 @@ for the definitions of the support status levels etc.
>>     Status: Supported
>>   +### ARM v7 + Virtualization Extensions
>> +
>> +    Status: Supported
>> +
>> +### ARM v8
>> +
>> +    Status: Supported
>> +
>>   ## Host hardware support
>>     ### Physical CPU Hotplug
>> @@ -35,6 +43,7 @@ for the definitions of the support status levels etc.
>>   ### Host ACPI (via Domain 0)
>>     Status, x86 PV: Supported
>> +    Status, ARM: Experimental
>>     ### x86/Intel Platform QoS Technologies
>>   @@ -44,6 +53,14 @@ for the definitions of the support status levels
>> etc.
>>     Status, AMD IOMMU: Supported
>>   Status, Intel VT-d: Supported
>> +    Status, ARM SMMUv1: Supported
>> +    Status, ARM SMMUv2: Supported
>> +
>> +### ARM/GICv3 ITS
>> +
>> +    Status: Experimental
>> +
>> +Extension to the GICv3 interrupt controller to support MSI.
>>     ## Guest Type
>>   @@ -67,12 +84,18 @@ Requires hardware virtualisation support (Intel
>> VMX / AMD SVM)
>>     Status: Supported
>>   -PVH is a next-generation paravirtualized mode
>> +PVH is a next-generation paravirtualized mode
> 
> I am not sure to see the difference between the 2 lines. Is it intended?

The difference is the whitespace at the end -- this change should have
been made in the previous patch instead.

> The rest looks good.

Thanks. With that moved, can it have your Ack?

 -George



Re: [Xen-devel] [PATCH v3 09/17] SUPPORT.md: Add ARM-specific virtual hardware

2017-11-23 Thread Julien Grall

Hi George,

On 22/11/17 19:20, George Dunlap wrote:

Signed-off-by: George Dunlap 
---
Changes since v2:
- Update "non-pci passthrough" section
- Add DT / ACPI sections

CC: Ian Jackson 
CC: Wei Liu 
CC: Andrew Cooper 
CC: Jan Beulich 
CC: Stefano Stabellini 
CC: Konrad Wilk 
CC: Tim Deegan 
CC: Julien Grall 
---
  SUPPORT.md | 21 +
  1 file changed, 21 insertions(+)

diff --git a/SUPPORT.md b/SUPPORT.md
index 98ed18098a..f357291e4e 100644
--- a/SUPPORT.md
+++ b/SUPPORT.md
@@ -408,6 +408,27 @@ Virtual Performance Management Unit for HVM guests
  Disabled by default (enable with hypervisor command line option).
This feature is not security supported: see http://xenbits.xen.org/xsa/advisory-163.html
  
+### ARM/Non-PCI device passthrough
+
+Status: Supported, not security supported
+
+Note that this still requires an IOMMU
+that covers the DMA of the device to be passed through.
+
+### ARM: 16K and 64K page granularity in guests
+
+Status: Supported, with caveats
+
+No support for QEMU backends in a 16K or 64K domain.
+
+### ARM: Guest Devicetree support


NIT: s/Devicetree/Device Tree/

Acked-by: Julien Grall 

Cheers,


+
+Status: Supported
+
+### ARM: Guest ACPI support
+
+Status: Supported
+
  ## Virtual Hardware, QEMU
  
  These are devices available in HVM mode using a qemu devicemodel (the default).




--
Julien Grall


Re: [Xen-devel] [PATCH v3 10/17] SUPPORT.md: Add Debugging, analysis, crash post-mortem

2017-11-23 Thread Jan Beulich
>>> On 22.11.17 at 20:20,  wrote:
> +## Debugging, analysis, and crash post-mortem
> +
> +### Host serial console
> +
> +Status, NS16550: Supported
> + Status, EHCI: Supported

Inconsistent indentation.

> + Status, Cadence UART (ARM): Supported
> + Status, PL011 UART (ARM): Supported
> + Status, Exynos 4210 UART (ARM): Supported
> + Status, OMAP UART (ARM): Supported
> + Status, SCI(F) UART: Supported
> +
> +XXX Should NS16550 and EHCI be limited to x86?  Unlike the ARM
> +entries, they don't depend on x86 being configured

ns16550 ought to be usable everywhere. EHCI is x86-only
anyway (presumably first of all because it takes PCI as a prereq)
 - there's a "select" needed, which only x86 has. In the end I
view the ARM way of expressing things wrong there: I think all
"HAS_*" items would better require "select"s (unless, like for
ns16550, they're there sort of for documentation / consistency
purpose only).

With this XXX dropped (and with or without adding (x86) to
EHCI)
Acked-by: Jan Beulich 

Jan



[Xen-devel] [linux-next test] 116438: regressions - trouble: broken/fail/pass

2017-11-23 Thread osstest service owner
flight 116438 linux-next real [real]
http://logs.test-lab.xenproject.org/osstest/logs/116438/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-libvirt-qcow2 broken
 test-amd64-amd64-xl-qemuu-ws16-amd64 broken
 test-amd64-amd64-xl-credit2  broken
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm   broken
 test-amd64-amd64-xl-qemuu-ws16-amd64 4 host-install(4) broken REGR. vs. 116398
 test-amd64-amd64-xl-credit2   4 host-install(4)broken REGR. vs. 116398
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 4 host-install(4) broken 
REGR. vs. 116398
 test-amd64-i386-libvirt-qcow2  4 host-install(4)   broken REGR. vs. 116398
 test-amd64-amd64-xl-qemut-debianhvm-amd64  7 xen-bootfail REGR. vs. 116398
 test-amd64-amd64-xl-qemut-ws16-amd64  7 xen-boot fail REGR. vs. 116398
 test-amd64-amd64-pair10 xen-boot/src_hostfail REGR. vs. 116398
 test-amd64-amd64-pair11 xen-boot/dst_hostfail REGR. vs. 116398
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-boot  fail REGR. vs. 116398
 test-amd64-amd64-amd64-pvgrub  7 xen-bootfail REGR. vs. 116398
 test-amd64-amd64-xl-qemuu-ovmf-amd64  7 xen-boot fail REGR. vs. 116398
 test-amd64-amd64-xl   7 xen-boot fail REGR. vs. 116398
 test-amd64-i386-libvirt-pair 10 xen-boot/src_hostfail REGR. vs. 116398
 test-amd64-i386-libvirt-pair 11 xen-boot/dst_hostfail REGR. vs. 116398
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm  7 xen-boot fail REGR. vs. 116398
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-boot  fail REGR. vs. 116398
 test-amd64-amd64-libvirt-vhd  7 xen-boot fail REGR. vs. 116398
 test-amd64-amd64-rumprun-amd64  7 xen-boot   fail REGR. vs. 116398

Tests which did not succeed, but are not blocking:
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-boot  fail blocked in 116398
 test-amd64-amd64-xl-multivcpu  7 xen-boot   fail blocked in 116398
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-boot  fail blocked in 116398
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop  fail blocked in 116398
 test-amd64-i386-xl-xsm7 xen-boot fail  like 116398
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-boot  fail like 116398
 test-amd64-amd64-xl-pvhv2-amd  7 xen-boot fail like 116398
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm  7 xen-boot fail like 116398
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-boot  fail like 116398
 test-amd64-i386-xl-raw7 xen-boot fail  like 116398
 test-amd64-i386-rumprun-i386  7 xen-boot fail  like 116398
 test-amd64-i386-examine   8 reboot   fail  like 116398
 test-amd64-i386-xl-qemut-debianhvm-amd64-xsm  7 xen-boot  fail like 116398
 test-amd64-i386-freebsd10-i386  7 xen-bootfail like 116398
 test-amd64-i386-libvirt-xsm   7 xen-boot fail  like 116398
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsm  7 xen-boot fail like 116398
 test-amd64-i386-pair 10 xen-boot/src_hostfail  like 116398
 test-amd64-i386-pair 11 xen-boot/dst_hostfail  like 116398
 test-amd64-i386-freebsd10-amd64  7 xen-boot   fail like 116398
 test-armhf-armhf-libvirt 14 saverestore-support-checkfail  like 116398
 test-armhf-armhf-libvirt-xsm 14 saverestore-support-checkfail  like 116398
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stopfail like 116398
 test-armhf-armhf-libvirt-raw 13 saverestore-support-checkfail  like 116398
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop fail like 116398
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop fail like 116398
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt  13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-checkfail   

Re: [Xen-devel] [PATCH v3 04/17] SUPPORT.md: Add core ARM features

2017-11-23 Thread Julien Grall



On 23/11/17 11:13, George Dunlap wrote:

On 11/23/2017 11:11 AM, Julien Grall wrote:

The rest looks good.


Thanks. With that moved, can it have your Ack?


Sure

Acked-by: Julien Grall 

Cheers,

--
Julien Grall


Re: [Xen-devel] [PATCH v3 12/17] SUPPORT.md: Add Security-releated features

2017-11-23 Thread Jan Beulich
>>> On 22.11.17 at 20:20,  wrote:
> +### Live Patching
> +
> +Status, x86: Supported
> +Status, ARM: Experimental
> +
> +Compile time disabled for ARM

"... by default"?

> +### XSM & FLASK
> +
> +Status: Experimental
> +
> +Compile time disabled.

Same here.

Jan



Re: [Xen-devel] [PATCH v3 14/17] SUPPORT.md: Add statement on PCI passthrough

2017-11-23 Thread Jan Beulich
>>> On 22.11.17 at 20:20,  wrote:
> Signed-off-by: George Dunlap 

With the XXX suitably addressed
Acked-by: Jan Beulich 

Jan



Re: [Xen-devel] [PATCH v3 15/17] SUPPORT.md: Add statement on migration RFC

2017-11-23 Thread Jan Beulich
>>> On 22.11.17 at 20:20,  wrote:
> +XXX Need to check the following:
> +
> + * Guest serial console
> + * Crash kernels
> + * Transcendent Memory
> + * Alternative p2m
> + * vMCE

vMCE has provisions for migration (albeit there has been breakage
here more than once in the past, iirc).

Jan



Re: [Xen-devel] [PATCH v3 16/17] SUPPORT.md: Add limits RFC

2017-11-23 Thread Jan Beulich
>>> On 22.11.17 at 20:20,  wrote:
> +### Virtual RAM
> +
> +Limit-security, x86 PV 64-bit: 2047GiB
> +Limit-security, x86 PV 32-bit: 168GiB (see below)
> +Limit-security, x86 HVM: 1.5TiB
> +Limit, ARM32: 16GiB
> +Limit, ARM64: 1TiB
> +
> +Note that there are no theoretical limits to 64-bit PV or HVM guest sizes
> +other than those determined by the processor architecture.
> +
> +All 32-bit PV guest memory must be under 168GiB;
> +this means the total memory for all 32-bit PV guests cannot exceed 168GiB.

While certainly harder to grok for the reader, I think we need to be
precise here: The factor isn't the amount of memory, but the
addresses at which it surfaces. Host memory must not extend
beyond the 168GiB boundary for that to also be the limit for
32-bit PV guests.
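The distinction between the amount of memory and the addresses it surfaces at can be sketched as follows (region layout and helper name invented for illustration):

```python
# A 32-bit PV guest can only use frames below the 168GiB machine-address
# boundary, so what matters is where host RAM *ends*, not how much there is.
GiB = 1 << 30
LIMIT_32BIT_PV = 168 * GiB

def below_limit(regions):
    """regions: (start, end) host RAM ranges in bytes, end exclusive."""
    return all(end <= LIMIT_32BIT_PV for _, end in regions)

# Exactly 168GiB of RAM in total, but a hole pushes it up to 180GiB:
sparse = [(0, 4 * GiB), (16 * GiB, 180 * GiB)]
assert not below_limit(sparse)  # the amount is fine, the addresses are not
```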

Jan



Re: [Xen-devel] [PATCH 13/16] SUPPORT.md: Add secondary memory management features

2017-11-23 Thread Olaf Hering
On Thu, Nov 23, Olaf Hering wrote:

> On Thu, Nov 23, Jan Beulich wrote:
> > Olaf, are you still playing with it every now and then?
> No, I have not tried it since I last touched it.

I just tried it, and it failed:

root@stein-schneider:~ # /usr/lib/xen/bin/xenpaging -d 7 -f /dev/shm/p -v
xc: detail: xenpaging init
xc: detail: watching '/local/domain/7/memory/target-tot_pages'
xc: detail: Failed allocation for dom 7: 1 extents of order 0
xc: error: Failed to populate ring gfn
 (16 = Device or resource busy): Internal error


Olaf



Re: [Xen-devel] [PATCH 13/16] SUPPORT.md: Add secondary memory management features

2017-11-23 Thread George Dunlap
On 11/23/2017 11:55 AM, Olaf Hering wrote:
> On Thu, Nov 23, Olaf Hering wrote:
> 
>> On Thu, Nov 23, Jan Beulich wrote:
>>> Olaf, are you still playing with it every now and then?
>> No, I have not tried it since I last touched it.
> 
> I just tried it, and it failed:
> 
> root@stein-schneider:~ # /usr/lib/xen/bin/xenpaging -d 7 -f /dev/shm/p -v
> xc: detail: xenpaging init
> xc: detail: watching '/local/domain/7/memory/target-tot_pages'
> xc: detail: Failed allocation for dom 7: 1 extents of order 0
> xc: error: Failed to populate ring gfn
>  (16 = Device or resource busy): Internal error

That looks like just a memory allocation.  Do you use autoballooning
dom0?  Maybe try ballooning dom0 down first?

 -George


[Xen-devel] [xen-unstable test] 116445: tolerable FAIL - PUSHED

2017-11-23 Thread osstest service owner
flight 116445 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/116445/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-rumprun-amd64 17 rumprun-demo-xenstorels/xenstorels.repeat 
fail like 116199
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stopfail like 116199
 test-armhf-armhf-libvirt-xsm 14 saverestore-support-checkfail  like 116214
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stopfail like 116214
 test-armhf-armhf-libvirt-raw 13 saverestore-support-checkfail  like 116214
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop fail like 116214
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop fail like 116214
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop fail like 116214
 test-armhf-armhf-libvirt 14 saverestore-support-checkfail  like 116214
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stopfail like 116214
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stopfail like 116214
 test-amd64-amd64-xl-pvhv2-intel 12 guest-start fail never pass
 test-amd64-amd64-xl-pvhv2-amd 12 guest-start  fail  never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt  13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-checkfail   never pass
 test-amd64-i386-libvirt-qcow2 12 migrate-support-checkfail  never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-checkfail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-checkfail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-checkfail  never pass
 test-armhf-armhf-xl-rtds 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  13 saverestore-support-checkfail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop  fail never pass
 test-amd64-i386-xl-qemut-win10-i386 10 windows-install fail never pass
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install fail never pass
 test-amd64-amd64-xl-qemut-win10-i386 10 windows-installfail never pass
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-installfail never pass

version targeted for testing:
 xen  d2f86bf604698806d311cc251c1b66fbb752673c
baseline version:
 xen  b9ee1fd7b98064cf27d0f8f1adf1f5359b72c97f

Last test of basis   116214  2017-11-16 02:14:29 Z7 days
Failing since116224  2017-11-16 11:51:35 Z6 days8 attempts
Testing same since   116421  2017-11-21 20:55:49 Z1 days2 attempts


People who touched revisions under test:
  Adrian Pop 
  Andrew Cooper 
  Jan Beulich 
  Julien Grall 
  Stefano Stabellini 

jobs:
 build-amd64-xsm  pass
 build-armhf-xsm  pass
 build-i386-xsm   pass
 build-amd64-xtf  pass
 build-amd64  pass
 build-armhf  pass
 build-i386   pass
 build-amd64-libv

Re: [Xen-devel] [PATCH 13/16] SUPPORT.md: Add secondary memory management features

2017-11-23 Thread Andrew Cooper
On 23/11/17 12:00, George Dunlap wrote:
> On 11/23/2017 11:55 AM, Olaf Hering wrote:
>> On Thu, Nov 23, Olaf Hering wrote:
>>
>>> On Thu, Nov 23, Jan Beulich wrote:
 Olaf, are you still playing with it every now and then?
>>> No, I have not tried it since I last touched it.
>> I just tried it, and it failed:
>>
>> root@stein-schneider:~ # /usr/lib/xen/bin/xenpaging -d 7 -f /dev/shm/p -v
>> xc: detail: xenpaging init
>> xc: detail: watching '/local/domain/7/memory/target-tot_pages'
>> xc: detail: Failed allocation for dom 7: 1 extents of order 0
>> xc: error: Failed to populate ring gfn
>>  (16 = Device or resource busy): Internal error
> That looks like just a memory allocation.  Do you use autoballooning
> dom0?  Maybe try ballooning dom0 down first?

It's not that.  This failure comes from the ring living inside the p2m,
and has already been found with introspection.

When a domain has ballooned exactly to its allocation, it is not
possible to attach a vmevent/sharing/paging ring, because attaching the
ring requires an add_to_physmap.  In principle, the toolstack could bump
the allocation by one frame, but that's racy with the guest trying to
claim the frame itself.
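A minimal model of that failure mode (purely illustrative — not the real hypervisor interface):

```python
# Attaching a vm_event/paging ring needs one spare frame in the domain's
# allocation; a guest ballooned exactly to its allocation has none left.
class Domain:
    def __init__(self, max_pages):
        self.max_pages = max_pages  # allocation set by the toolstack
        self.tot_pages = 0          # frames currently in the p2m

    def add_to_physmap(self):
        if self.tot_pages >= self.max_pages:
            return -16              # -EBUSY: "Failed to populate ring gfn"
        self.tot_pages += 1
        return 0

d = Domain(max_pages=4)
for _ in range(4):
    assert d.add_to_physmap() == 0  # guest balloons up to its allocation
assert d.add_to_physmap() == -16    # ring attach now fails
```

Bumping max_pages by one frame would make the attach succeed, but as described above the guest can race the toolstack for that frame.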

Pauls work to allow access to pages not in the p2m is the precursor to
fixing this problem, after which the rings move out of the guest
(reduction in attack surface), and there is nothing the guest can do to
inhibit toolstack/privileged operations like this.

~Andrew


Re: [Xen-devel] [PATCH 13/16] SUPPORT.md: Add secondary memory management features

2017-11-23 Thread Olaf Hering
On Thu, Nov 23, Andrew Cooper wrote:

> Its not that.  This failure comes from the ring living inside the p2m,
> and has already been found with introspection.

In my case it was just a wrong domid. Now I use 'xl domid domU' and
xenpaging does something. It seems paging out and in works still to some
degree.  But it still/again needs lots of testing and fixing.

I get errors like this, and xl dmesg has also errors:

...
xc: detail: populate_page < gfn 10100 pageslot 127
xc: detail: Need to resume 200 pages to reach 131328 target_tot_pages
xc: detail: Got event from evtchn
xc: detail: populate_page < gfn 10101 pageslot 128
xenforeignmemory: error: mmap failedxc: : Invalid argument
detail: populate_page < gfn 10102 pageslot 129
xc: detail: populate_page < gfn 10103 pageslot 130
xc: detail: populate_page < gfn 10104 pageslot 131
...

...
(XEN) vm_event.c:289:d0v0 d2v0 was not paused.
(XEN) vm_event.c:289:d0v0 d2v0 was not paused.
(XEN) vm_event.c:289:d0v2 d2v2 was not paused.
...


Olaf



[Xen-devel] [linux-3.18 test] 116448: trouble: broken/fail/pass

2017-11-23 Thread osstest service owner
flight 116448 linux-3.18 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/116448/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-ws16-amd64 broken
 test-amd64-i386-qemuu-rhel6hvm-amd broken
 test-amd64-amd64-xl-qemuu-ws16-amd64 4 host-install(4) broken REGR. vs. 116308
 test-amd64-i386-qemuu-rhel6hvm-amd  4 host-install(4)  broken REGR. vs. 116308
 test-amd64-i386-freebsd10-i386broken in 116422
 test-amd64-i386-pair broken  in 116422
 test-amd64-amd64-xl-qemut-debianhvm-amd64-xsm broken in 116422

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemut-debianhvm-amd64-xsm 4 host-install(4) broken in 
116422 pass in 116448
 test-amd64-i386-pair 5 host-install/dst_host(5) broken in 116422 pass in 116448
 test-amd64-i386-freebsd10-i386 4 host-install(4) broken in 116422 pass in 
116448
 test-amd64-i386-xl-raw 19 guest-start/debian.repeat fail in 116422 pass in 
116448
 test-amd64-amd64-qemuu-nested-intel 17 debian-hvm-install/l1/l2 fail pass in 
116422
 test-armhf-armhf-xl-rtds 16 guest-start/debian.repeat  fail pass in 116422

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt 14 saverestore-support-checkfail  like 116308
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stopfail like 116308
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stopfail like 116308
 test-armhf-armhf-libvirt-raw 13 saverestore-support-checkfail  like 116308
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop fail like 116308
 test-armhf-armhf-libvirt-xsm 14 saverestore-support-checkfail  like 116308
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop fail like 116308
 test-amd64-amd64-xl-pvhv2-intel 12 guest-start fail never pass
 test-amd64-amd64-xl-pvhv2-amd 12 guest-start  fail  never pass
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt  13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-i386-libvirt-qcow2 12 migrate-support-checkfail  never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-checkfail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-checkfail never pass
 test-armhf-armhf-libvirt 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  14 saverestore-support-checkfail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop  fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-checkfail  never pass
 test-armhf-armhf-xl-vhd  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  13 saverestore-support-checkfail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop  fail never pass
 test-armhf-armhf-xl-rtds 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-checkfail   never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop fail never pass
 test-amd64-amd64-xl-qemut-win10-i386 10 windows-installfail never pass
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-installfail never pass
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install fail never pass
 test-amd64-i386-xl-qemut-win10-i386 10 windows-install fail never pass

version targeted for testing:
 linuxc35c375efa4e2c832946a04e83155f928135e8f6
baseline version:
 linux   

Re: [Xen-devel] [PATCH 13/16] SUPPORT.md: Add secondary memory management features

2017-11-23 Thread Andrew Cooper
On 23/11/17 12:45, Olaf Hering wrote:
> On Thu, Nov 23, Andrew Cooper wrote:
>
>> Its not that.  This failure comes from the ring living inside the p2m,
>> and has already been found with introspection.
> In my case it was just a wrong domid. Now I use 'xl domid domU' and
> xenpaging does something. It seems paging out and in works still to some
> degree.  But it still/again needs lots of testing and fixing.
>
> I get errors like this, and xl dmesg has also errors:
>
> ...
> xc: detail: populate_page < gfn 10100 pageslot 127
> xc: detail: Need to resume 200 pages to reach 131328 target_tot_pages
> xc: detail: Got event from evtchn
> xc: detail: populate_page < gfn 10101 pageslot 128
> xenforeignmemory: error: mmap failedxc: : Invalid argument
> detail: populate_page < gfn 10102 pageslot 129
> xc: detail: populate_page < gfn 10103 pageslot 130
> xc: detail: populate_page < gfn 10104 pageslot 131
> ...
>
> ...
> (XEN) vm_event.c:289:d0v0 d2v0 was not paused.
> (XEN) vm_event.c:289:d0v0 d2v0 was not paused.
> (XEN) vm_event.c:289:d0v2 d2v2 was not paused.
> ...

Hmm ok.  Either way, I think this demonstrates that the feature is not
of "Tech Preview" quality.

~Andrew


[Xen-devel] [xen-unstable-smoke test] 116472: tolerable all pass - PUSHED

2017-11-23 Thread osstest service owner
flight 116472 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/116472/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  14 saverestore-support-checkfail   never pass

version targeted for testing:
 xen  79136f2673b52db7b4bbd6cb5da194f2f4c39a9d
baseline version:
 xen  d2f86bf604698806d311cc251c1b66fbb752673c

Last test of basis   116406  2017-11-21 12:01:35 Z2 days
Testing same since   116472  2017-11-23 11:20:16 Z0 days1 attempts


People who touched revisions under test:
  Andrew Cooper 
  Jan Beulich 

jobs:
 build-amd64  pass
 build-armhf  pass
 build-amd64-libvirt  pass
 test-armhf-armhf-xl  pass
 test-amd64-amd64-xl-qemuu-debianhvm-i386 pass
 test-amd64-amd64-libvirt pass



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To osst...@xenbits.xen.org:/home/xen/git/xen.git
   d2f86bf..79136f2  79136f2673b52db7b4bbd6cb5da194f2f4c39a9d -> smoke


Re: [Xen-devel] [PATCH v9 4/5] x86/PCI: Enable a 64bit BAR on AMD Family 15h (Models 30h-3fh) Processors v5

2017-11-23 Thread Boris Ostrovsky



On 11/23/2017 03:11 AM, Christian König wrote:

Am 22.11.2017 um 18:27 schrieb Boris Ostrovsky:

On 11/22/2017 11:54 AM, Christian König wrote:

Am 22.11.2017 um 17:24 schrieb Boris Ostrovsky:

On 11/22/2017 05:09 AM, Christian König wrote:

Am 21.11.2017 um 23:26 schrieb Boris Ostrovsky:

On 11/21/2017 08:34 AM, Christian König wrote:

Hi Boris,

attached are two patches.

The first one is a trivial fix for the infinite loop issue, it now
correctly aborts the fixup when it can't find address space for the
root window.

The second is a workaround for your board. It simply checks if there
is exactly one Processor Function to apply this fix on.

Both are based on linus current master branch. Please test if they
fix
your issue.

Yes, they do fix it but that's because the feature is disabled.

Do you know what the actual problem was (on Xen)?

I still haven't understood what you actually did with Xen.

When you used PCI pass through with those devices then you have made a
major configuration error.

When the problem happened on dom0 then the explanation is most likely
that some PCI device ended up in the configured space, but the routing
was only setup correctly on one CPU socket.

The problem is that dom0 can be (and was in my case) booted with less
than full physical memory and so the "rest" of the host memory is not
necessarily reflected in iomem. Your patch then tried to configure that
memory for MMIO and the system hang.

And so my guess is that this patch will break dom0 on a single-socket
system as well.

Oh, thanks!

I've thought about that possibility before, but wasn't able to find a
system which actually does that.

May I ask why the rest of the memory isn't reported to the OS?
That memory doesn't belong to the OS (dom0), it is owned by the 
hypervisor.



Sounds like I can't trust Linux resource management and probably need
to read the DRAM config to figure things out after all.


My question is whether what you are trying to do should ever be done for
a guest at all (any guest, not necessarily Xen).


The issue is probably that I don't know enough about Xen: What exactly 
is dom0? My understanding was that dom0 is the hypervisor, but that 
seems to be incorrect.


The issue is that under no circumstances should a virtualized guest *EVER* 
have access to the PCI devices marked as "Processor Function" on 
AMD platforms. Otherwise it is trivial to break out of the virtualization.


When dom0 is something like the system domain with all hardware access 
then the approach seems legitimate, but then the hypervisor should 
report the stolen memory to the OS using the e820 table.


When the hypervisor doesn't do that and the Linux kernel isn't aware 
that there is memory at a given location, mapping PCI space there will 
obviously crash the hypervisor.


Possible solutions as far as I can see are either disabling this feature 
when we detect that we are a Xen dom0, scanning the DRAM settings to 
update Linux resource handling or fixing Xen to report stolen memory to 
the dom0 OS as reserved.
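The e820 idea could be sketched like this (a toy model with invented names, not Xen's actual e820 handling):

```python
# Marking hypervisor-owned host RAM as "reserved" in dom0's e820 map
# keeps Linux resource management from placing MMIO windows on top of it.
E820_RAM, E820_RESERVED = 1, 2
GiB = 1 << 30

def add_reserved(e820, start, end):
    """Append a reserved range (bytes, end exclusive)."""
    return sorted(e820 + [(start, end, E820_RESERVED)])

def usable_for_mmio(e820, start, end):
    """An MMIO window must not overlap any listed region."""
    return all(end <= s or start >= e for s, e, _ in e820)

e820 = [(0, 2 * GiB, E820_RAM)]              # what dom0 was booted with
e820 = add_reserved(e820, 2 * GiB, 8 * GiB)  # rest of host RAM, hypervisor-owned

assert not usable_for_mmio(e820, 3 * GiB, 4 * GiB)  # would hit hidden host RAM
assert usable_for_mmio(e820, 10 * GiB, 11 * GiB)    # above all host RAM: fine
```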


Opinions?


You are right, these functions are not exposed to a regular guest.

I think for dom0 (which is a special Xen guest, with additional 
privileges) we may be able to add a reserved e820 region for host memory 
that is not assigned to dom0. Let me try it on Monday (I am out until then).


-boris

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [PATCHv2] xen-netfront: remove warning when unloading module

2017-11-23 Thread Eduardo Otubo
v2:
 * Replace busy wait with wait_event()/wake_up_all()
 * Cannot guarantee that at the time xennet_remove is called, the
   xen_netback state will not be XenbusStateClosed, so added a
   condition for that
 * There's a small chance that the xen_netback state is
   XenbusStateUnknown by the time xen_netfront switches to Closed,
   so added a condition for that.

When unloading module xen_netfront from guest, dmesg would output
warning messages like below:

  [  105.236836] xen:grant_table: WARNING: g.e. 0x903 still in use!
  [  105.236839] deferring g.e. 0x903 (pfn 0x35805)

This problem occurs because netfront and netback get out of sync: by the
time netfront revokes the g.e.'s, netback hasn't had enough time to free
all of them, hence the warnings on dmesg.

The trick here is to make netfront wait until netback has freed all the
g.e.'s and only then continue the cleanup for module removal; this is
done by manipulating both device states.

Signed-off-by: Eduardo Otubo 
---
 drivers/net/xen-netfront.c | 18 ++
 1 file changed, 18 insertions(+)

diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index 8b8689c6d887..391432e2725d 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -87,6 +87,8 @@ struct netfront_cb {
 /* IRQ name is queue name with "-tx" or "-rx" appended */
 #define IRQ_NAME_SIZE (QUEUE_NAME_SIZE + 3)
 
+static DECLARE_WAIT_QUEUE_HEAD(module_unload_q);
+
 struct netfront_stats {
u64 packets;
u64 bytes;
@@ -2021,10 +2023,12 @@ static void netback_changed(struct xenbus_device *dev,
break;
 
case XenbusStateClosed:
+   wake_up_all(&module_unload_q);
if (dev->state == XenbusStateClosed)
break;
/* Missed the backend's CLOSING state -- fallthrough */
case XenbusStateClosing:
+   wake_up_all(&module_unload_q);
xenbus_frontend_closed(dev);
break;
}
@@ -2130,6 +2134,20 @@ static int xennet_remove(struct xenbus_device *dev)
 
dev_dbg(&dev->dev, "%s\n", dev->nodename);
 
+   if (xenbus_read_driver_state(dev->otherend) != XenbusStateClosed) {
+   xenbus_switch_state(dev, XenbusStateClosing);
+   wait_event(module_unload_q,
+  xenbus_read_driver_state(dev->otherend) ==
+  XenbusStateClosing);
+
+   xenbus_switch_state(dev, XenbusStateClosed);
+   wait_event(module_unload_q,
+  xenbus_read_driver_state(dev->otherend) ==
+  XenbusStateClosed ||
+  xenbus_read_driver_state(dev->otherend) ==
+  XenbusStateUnknown);
+   }
+
xennet_disconnect_backend(info);
 
unregister_netdev(info->netdev);
-- 
2.13.6


___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [PATCH] x86/HVM: fix hvmemul_rep_outs_set_context()

2017-11-23 Thread Jan Beulich
There were two issues with this function: Its use of
hvmemul_do_pio_buffer() was wrong (the function deals only with
individual port accesses, not repeated ones, i.e. passing it
"*reps * bytes_per_rep" does not have the intended effect). And it
could have processed a larger set of operations in one go than was
probably intended (limited just by the size that xmalloc() can hand
back).

By converting to proper use of hvmemul_do_pio_buffer(), no intermediate
buffer is needed at all. As a result a preemption check is being added.

Also drop unused parameters from the function.

Signed-off-by: Jan Beulich 

--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -1348,28 +1348,41 @@ static int hvmemul_rep_ins(
 }
 
 static int hvmemul_rep_outs_set_context(
-enum x86_segment src_seg,
-unsigned long src_offset,
 uint16_t dst_port,
 unsigned int bytes_per_rep,
-unsigned long *reps,
-struct x86_emulate_ctxt *ctxt)
+unsigned long *reps)
 {
-unsigned int bytes = *reps * bytes_per_rep;
-char *buf;
-int rc;
-
-buf = xmalloc_array(char, bytes);
+const struct arch_vm_event *ev = current->arch.vm_event;
+const uint8_t *ptr;
+unsigned int avail;
+unsigned long done;
+int rc = X86EMUL_OKAY;
 
-if ( buf == NULL )
+ASSERT(bytes_per_rep <= 4);
+if ( !ev )
 return X86EMUL_UNHANDLEABLE;
 
-rc = set_context_data(buf, bytes);
+ptr = ev->emul.read.data;
+avail = ev->emul.read.size;
 
-if ( rc == X86EMUL_OKAY )
-rc = hvmemul_do_pio_buffer(dst_port, bytes, IOREQ_WRITE, buf);
+for ( done = 0; done < *reps; ++done )
+{
+unsigned int size = min(bytes_per_rep, avail);
+uint32_t data = 0;
+
+if ( done && hypercall_preempt_check() )
+break;
+
+memcpy(&data, ptr, size);
+avail -= size;
+ptr += size;
+
+rc = hvmemul_do_pio_buffer(dst_port, bytes_per_rep, IOREQ_WRITE,
+&data);
+if ( rc != X86EMUL_OKAY )
+break;
+}
 
-xfree(buf);
+*reps = done;
 
 return rc;
 }
@@ -1391,8 +1404,7 @@ static int hvmemul_rep_outs(
 int rc;
 
 if ( unlikely(hvmemul_ctxt->set_context) )
-return hvmemul_rep_outs_set_context(src_seg, src_offset, dst_port,
-bytes_per_rep, reps, ctxt);
+return hvmemul_rep_outs_set_context(dst_port, bytes_per_rep, reps);
 
 rc = hvmemul_virtual_to_linear(
 src_seg, src_offset, bytes_per_rep, reps, hvm_access_read,




___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH] x86/HVM: fix hvmemul_rep_outs_set_context()

2017-11-23 Thread Razvan Cojocaru
On 11/23/2017 05:09 PM, Jan Beulich wrote:
> There were two issues with this function: Its use of
> hvmemul_do_pio_buffer() was wrong (the function deals only with
> individual port accesses, not repeated ones, i.e. passing it
> "*reps * bytes_per_rep" does not have the intended effect). And it
> could have processed a larger set of operations in one go than was
> probably intended (limited just by the size that xmalloc() can hand
> back).
> 
> By converting to proper use of hvmemul_do_pio_buffer(), no intermediate
> buffer is needed at all. As a result a preemption check is being added.
> 
> Also drop unused parameters from the function.
> 
> Signed-off-by: Jan Beulich 

Thank you for the patch!

FWIW, Reviewed-by: Razvan Cojocaru 


Thanks,
Razvan

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [linux-4.9 test] 116452: FAIL

2017-11-23 Thread osstest service owner
flight 116452 linux-4.9 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/116452/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemut-rhel6hvm-amd broken in 116426
 test-armhf-armhf-libvirt broken  in 116426

Tests which are failing intermittently (not blocking):
 test-amd64-i386-qemut-rhel6hvm-amd 4 host-install(4) broken in 116426 pass in 
116452
 test-armhf-armhf-libvirt 4 host-install(4) broken in 116426 pass in 116452
 test-amd64-amd64-xl-qemut-debianhvm-amd64-xsm 16 guest-localmigrate/x10 fail 
pass in 116426

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop  fail in 116426 like 116332
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-localmigrate/x10 fail like 116309
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop fail like 116332
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop fail like 116332
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stopfail like 116332
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stopfail like 116332
 test-amd64-amd64-xl-pvhv2-intel 12 guest-start fail never pass
 test-amd64-amd64-xl-pvhv2-amd 12 guest-start  fail  never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt  13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-i386-libvirt-qcow2 12 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-checkfail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 13 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt-xsm 14 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt 14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-checkfail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-checkfail never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-checkfail  never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  13 saverestore-support-checkfail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop  fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop  fail never pass
 test-amd64-i386-xl-qemut-win10-i386 10 windows-install fail never pass
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-installfail never pass
 test-amd64-amd64-xl-qemut-win10-i386 10 windows-installfail never pass
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install fail never pass

version targeted for testing:
 linux 563c24f65f4fb009047cf4702dd16c7c592fd2b2
baseline version:
 linux ea88d5c5f41140cd531dab9cf718282b10996235

Last test of basis   116332  2017-11-19 08:43:09 Z4 days
Testing same since   116395  2017-11-21 09:03:03 Z2 days3 attempts


People who touched revisions under test:
  Aaron Brown 
  Aaron Sierra 
  Alan Stern 
  Alexander Duyck 
  Alexandre Belloni 
  Alexey Khoroshilov 
  Andrew Bowers 
  Andrew Gabbasov 
  Andrey K

[Xen-devel] [seabios test] 116451: regressions - FAIL

2017-11-23 Thread osstest service owner
flight 116451 seabios real [real]
http://logs.test-lab.xenproject.org/osstest/logs/116451/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop   fail REGR. vs. 115539

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop fail like 115539
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stopfail like 115539
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop fail like 115539
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install fail never pass
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-installfail never pass

version targeted for testing:
 seabios  df46d10c8a7b88eb82f3ceb2aa31782dee15593d
baseline version:
 seabios  0ca6d6277dfafc671a5b3718cbeb5c78e2a888ea

Last test of basis   115539  2017-11-03 20:48:58 Z   19 days
Failing since115733  2017-11-10 17:19:59 Z   12 days   20 attempts
Testing same since   116211  2017-11-16 00:20:45 Z7 days   10 attempts


People who touched revisions under test:
  Kevin O'Connor 
  Stefan Berger 

jobs:
 build-amd64-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm   pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsmpass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsmpass
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm pass
 test-amd64-amd64-qemuu-nested-amdfail
 test-amd64-i386-qemuu-rhel6hvm-amd   pass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64pass
 test-amd64-i386-xl-qemuu-debianhvm-amd64 pass
 test-amd64-amd64-xl-qemuu-win7-amd64 fail
 test-amd64-i386-xl-qemuu-win7-amd64  fail
 test-amd64-amd64-xl-qemuu-ws16-amd64 fail
 test-amd64-i386-xl-qemuu-ws16-amd64  fail
 test-amd64-amd64-xl-qemuu-win10-i386 fail
 test-amd64-i386-xl-qemuu-win10-i386  fail
 test-amd64-amd64-qemuu-nested-intel  pass
 test-amd64-i386-qemuu-rhel6hvm-intel pass



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.


commit df46d10c8a7b88eb82f3ceb2aa31782dee15593d
Author: Stefan Berger 
Date:   Tue Nov 14 15:03:47 2017 -0500

tpm: Add support for TPM2 ACPI table

Add support for the TPM2 ACPI table. If we find it and its
of the appropriate size, we can get the log_area_start_address
and log_area_minimum_size from it.

The latest version of the spec can be found here:

https://trustedcomputinggroup.org/tcg-acpi-specification/

Signed-off-by: Stefan Berger 

commit 0541f2f0f246e77d7c726926976920e8072d1119
Author: Kevin O'Connor 
Date:   Fri Nov 10 12:20:35 2017 -0500

paravirt: Only enable sercon in NOGRAPHIC mode if no other console specified

Signed-off-by: Kevin O'Connor 

commit 9ce6778f08c632c52b25bc8f754291ef18710d53
Author: Kevin O'Connor 
Date:   Fri Nov 10 12:16:36 2017 -0500

docs: Add sercon-port to Runtime_config.md documentation

Signed-off-by: Kevin O'Connor 

commit 63451fca13c75870e1703eb3e20584d91179aebc
Author: Kevin O'Connor 
Date:   Fri Nov 10 11:49:19 2017 -0500

docs: Note v1.11.0 release

Signed-off-by: Kevin O'Connor

Re: [Xen-devel] [PATCH v13 05/11] x86/mm: add HYPERVISOR_memory_op to acquire guest resources

2017-11-23 Thread Jan Beulich
>>> On 30.10.17 at 18:48,  wrote:
> --- a/xen/common/memory.c
> +++ b/xen/common/memory.c
> @@ -965,6 +965,94 @@ static long xatp_permission_check(struct domain *d, 
> unsigned int space)
>  return xsm_add_to_physmap(XSM_TARGET, current->domain, d);
>  }
>  
> +static int acquire_resource(
> +XEN_GUEST_HANDLE_PARAM(xen_mem_acquire_resource_t) arg)
> +{
> +struct domain *d, *currd = current->domain;
> +xen_mem_acquire_resource_t xmar;
> +/*
> + * The mfn_list and gfn_list (below) arrays are ok on stack for the
> + * moment since they are small, but if they need to grow in future
> + * use-cases then per-CPU arrays or heap allocations may be required.
> + */
> +xen_pfn_t mfn_list[2];
> +int rc;
> +
> +if ( copy_from_guest(&xmar, arg, 1) )
> +return -EFAULT;
> +
> +if ( xmar.pad != 0 )
> +return -EINVAL;
> +
> +if ( guest_handle_is_null(xmar.frame_list) )
> +{
> +if ( xmar.nr_frames > 0 )

I generally consider "!= 0" (which then could be omitted altogether)
better than "> 0" when the quantity is unsigned, to avoid giving
the impression that negative values might also take the other path.

> +return -EINVAL;
> +
> +xmar.nr_frames = ARRAY_SIZE(mfn_list);
> +
> +if ( __copy_field_to_guest(arg, &xmar, nr_frames) )
> +return -EFAULT;
> +
> +return 0;
> +}
> +
> +if ( xmar.nr_frames > ARRAY_SIZE(mfn_list) )
> +return -E2BIG;
> +
> +rc = rcu_lock_remote_domain_by_id(xmar.domid, &d);
> +if ( rc )
> +return rc;
> +
> +rc = xsm_domain_resource_map(XSM_DM_PRIV, d);
> +if ( rc )
> +goto out;
> +
> +switch ( xmar.type )
> +{
> +default:
> +rc = -EOPNOTSUPP;
> +break;
> +}
> +
> +if ( rc )
> +goto out;
> +
> +if ( !paging_mode_translate(currd) )
> +{
> +if ( copy_to_guest(xmar.frame_list, mfn_list, xmar.nr_frames) )
> +rc = -EFAULT;
> +}
> +else
> +{
> +xen_pfn_t gfn_list[ARRAY_SIZE(mfn_list)];
> +unsigned int i;
> +
> +rc = -EFAULT;
> +if ( copy_from_guest(gfn_list, xmar.frame_list, xmar.nr_frames) )
> +goto out;

This will result in requests with nr_frames being zero to fail with
-EFAULT afaict. Let's please have such no-op requests succeed.

> +for ( i = 0; i < xmar.nr_frames; i++ )
> +{
> +rc = set_foreign_p2m_entry(currd, gfn_list[i],
> +   _mfn(mfn_list[i]));
> +if ( rc )
> +{
> +/*
> + * Make sure rc is -EIO for any iteration other than
> + * the first.
> + */
> +rc = (i != 0) ? -EIO : rc;

Along the lines of what I've said above, "!=0" could be dropped
here, too.

I won't insist on the cosmetic remarks to be taken care of, but
the return value aspect should be fixed for
Reviewed-by: Jan Beulich 
to apply.

Jan


___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH v3 06/17] SUPPORT.md: Add scalability features

2017-11-23 Thread George Dunlap
On 11/23/2017 10:50 AM, Jan Beulich wrote:
 On 22.11.17 at 20:20,  wrote:
>> Superpage support and PVHVM.
>>
>> Signed-off-by: George Dunlap 
> 
> Acked-by: Jan Beulich 
> with one remark:
> 
>> +## Scalability
>> +
>> +### Super page support
>> +
>> +Status, x86 HVM/PVH, HAP: Supported
>> +Status, x86 HVM/PVH, Shadow, 2MiB: Supported
>> +Status, ARM: Supported
>> +
>> +NB that this refers to the ability of guests
>> +to have higher-level page table entries point directly to memory,
>> +improving TLB performance.
>> +On ARM, and on x86 in HAP mode,
>> +the guest has whatever support is enabled by the hardware.
>> +On x86 in shadow mode, only 2MiB (L2) superpages are available;
>> +furthermore, they do not have the performance characteristics of hardware 
>> superpages.
>> +
>> +Also note this feature is independent of the ARM "page granularity" feature (see 
>> below).
> 
> Earlier lines in this block suggest you've tried to honor a certain
> line length limit, while the two last non-empty ones clearly go
> beyond 80 columns.

Yes, the "semantic newlines" convention is a bit ambiguous: it rather
implies that we expect people to read a processed version of this file
(in which case the line length isn't as important).

But I'll trim these down anyway.

 -George

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH v3 07/17] SUPPORT.md: Add virtual devices common to ARM and x86

2017-11-23 Thread George Dunlap
On 11/23/2017 10:59 AM, Jan Beulich wrote:
 On 22.11.17 at 20:20,  wrote:
>> Mostly PV protocols.
>>
>> Signed-off-by: George Dunlap 
> 
> Acked-by: Jan Beulich 
> with a couple of remarks.
> 
>> @@ -223,6 +227,152 @@ which add paravirtualized functionality to HVM guests
>>  for improved performance and scalability.
>>  This includes exposing event channels to HVM guests.
>>  
>> +## Virtual driver support, guest side
> 
> With "guest side" here, ...
> 
>> +### Blkfront
>> +
>> +Status, Linux: Supported
>> +Status, FreeBSD: Supported, Security support external
>> +Status, NetBSD: Supported, Security support external
>> +Status, OpenBSD: Supported, Security support external
>> +Status, Windows: Supported
>> +
>> +Guest-side driver capable of speaking the Xen PV block protocol
>> +
>> +### Netfront
>> +
>> +Status, Linux: Supported
>> +States, Windows: Supported
>> +Status, FreeBSD: Supported, Security support external
>> +Status, NetBSD: Supported, Security support external
>> +Status, OpenBSD: Supported, Security support external
>> +
>> +Guest-side driver capable of speaking the Xen PV networking protocol
>> +
>> +### PV Framebuffer (frontend)
> 
> ... is "(frontend)" here (also on entries further down) really useful?
> Same for "host side" and "(backend)" then further down.

These were specifically requested, because the frontend and backend
entries end up looking very similar, and it's difficult to tell which
section you're in.

> Also would it perhaps make sense to sort multiple OS entries by
> some criteria (name, support status, ...)? Just like we ask that
> new source files have #include-s sorted, this helps reduce patch
> conflicts when otherwise everyone adds to the end of such lists.

Probably, yes.  I generally tried to rank them in order of {Linux, qemu,
*BSD, Windows}, on the grounds that Linux and QEMU are generally
developed by the "core" team (and have the most testing and attention),
and we should favor fellow open-source project (like the BSDs) over
proprietary systems (i.e., Windows).  But I don't seem to have been very
consistent in that.

>> +### PV SCSI protocol (frontend)
>> +
>> +Status, Linux: Supported, with caveats
>> +
>> +NB that while the PV SCSI backend is in Linux and tested regularly,
>> +there is currently no xl support.
> 
> Perhaps a copy-and-paste mistake saying "backend" here?

Good catch, thanks.

>> +### PV Framebuffer (backend)
>> +
>> +Status, QEMU: Supported
>> +
>> +Host-side implementaiton of the Xen PV framebuffer protocol
> 
> implementation

Ack

> 
> Jan
> 


___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH v3 10/17] SUPPORT.md: Add Debugging, analysis, crash post-portem

2017-11-23 Thread George Dunlap
On 11/23/2017 11:15 AM, Jan Beulich wrote:
 On 22.11.17 at 20:20,  wrote:
>> +## Debugging, analysis, and crash post-mortem
>> +
>> +### Host serial console
>> +
>> +Status, NS16550: Supported
>> +Status, EHCI: Supported
> 
> Inconsistent indentation.

And I was so sure I'd checked all those. :-/

> 
>> +Status, Cadence UART (ARM): Supported
>> +Status, PL011 UART (ARM): Supported
>> +Status, Exynos 4210 UART (ARM): Supported
>> +Status, OMAP UART (ARM): Supported
>> +Status, SCI(F) UART: Supported
>> +
>> +XXX Should NS16550 and EHCI be limited to x86?  Unlike the ARM
>> +entries, they don't depend on x86 being configured
> 
> ns16550 ought to be usable everywhere. EHCI is x86-only
> anyway (presumably first of all because it takes PCI as a prereq)

But that's just an accident at the moment; I thought there were plans at
some point for ARM servers to have PCI, weren't there?

I'll probably just leave this as it is, unless someone thinks differently.

> With this XXX dropped (and with or without adding (x86) to
> EHCI)
> Acked-by: Jan Beulich 

Thanks,
 -George

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH v3 12/17] SUPPORT.md: Add Security-releated features

2017-11-23 Thread George Dunlap
On 11/23/2017 11:16 AM, Jan Beulich wrote:
 On 22.11.17 at 20:20,  wrote:
>> +### Live Patching
>> +
>> +Status, x86: Supported
>> +Status, ARM: Experimental
>> +
>> +Compile time disabled for ARM
> 
> "... by default"?
> 
>> +### XSM & FLASK
>> +
>> +Status: Experimental
>> +
>> +Compile time disabled.
> 
> Same here.

Ack.

 -George

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [PATCH] tools: fix description of Linux ioctl_evtchn_notify

2017-11-23 Thread Jonathan Davies
Signed-off-by: Jonathan Davies 
---
 tools/include/xen-sys/Linux/evtchn.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/include/xen-sys/Linux/evtchn.h 
b/tools/include/xen-sys/Linux/evtchn.h
index 08ee0b7..002be5b 100644
--- a/tools/include/xen-sys/Linux/evtchn.h
+++ b/tools/include/xen-sys/Linux/evtchn.h
@@ -73,7 +73,7 @@ struct ioctl_evtchn_unbind {
 };
 
 /*
- * Unbind previously allocated @port.
+ * Send event to previously allocated @port.
  */
 #define IOCTL_EVTCHN_NOTIFY\
_IOC(_IOC_NONE, 'E', 4, sizeof(struct ioctl_evtchn_notify))
-- 
2.7.4


___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH v3 16/17] SUPPORT.md: Add limits RFC

2017-11-23 Thread George Dunlap
On 11/23/2017 11:21 AM, Jan Beulich wrote:
 On 22.11.17 at 20:20,  wrote:
>> +### Virtual RAM
>> +
>> +Limit-security, x86 PV 64-bit: 2047GiB
>> +Limit-security, x86 PV 32-bit: 168GiB (see below)
>> +Limit-security, x86 HVM: 1.5TiB
>> +Limit, ARM32: 16GiB
>> +Limit, ARM64: 1TiB
>> +
>> +Note that there are no theoretical limits to 64-bit PV or HVM guest sizes
>> +other than those determined by the processor architecture.
>> +
>> +All 32-bit PV guest memory must be under 168GiB;
>> +this means the total memory for all 32-bit PV guests cannot exceed 168GiB.
> 
> While certainly harder to grok for the reader, I think we need to be
> precise here: The factor isn't the amount of memory, but the
> addresses at which it surfaces. Host memory must not extend
> beyond the 168MiB boundary for that to also be the limit for
> 32-bit PV guests.

Yes, I'd intended "under 168GiB" to more clearly imply physical
addresses; but I agree as written that's unlikely to be picked up by
anyone not already familiar with the concept.

What about something like this:

"32-bit PV guests can only access physical addresses below 168GiB;
this means that the total memory of all 32-bit PV guests cannot exceed
168GiB.  For hosts with more than 168GiB RAM, this limit becomes 128GiB."

 -George

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Re: [Xen-devel] [PATCH 13/16] SUPPORT.md: Add secondary memory management features

2017-11-23 Thread George Dunlap
On 11/23/2017 12:58 PM, Andrew Cooper wrote:
> On 23/11/17 12:45, Olaf Hering wrote:
>> On Thu, Nov 23, Andrew Cooper wrote:
>>
>>> It's not that.  This failure comes from the ring living inside the p2m,
>>> and has already been found with introspection.
>> In my case it was just a wrong domid. Now I use 'xl domid domU' and
>> xenpaging does something. It seems paging out and in works still to some
>> degree.  But it still/again needs lots of testing and fixing.
>>
>> I get errors like this, and xl dmesg has also errors:
>>
>> ...
>> xc: detail: populate_page < gfn 10100 pageslot 127
>> xc: detail: Need to resume 200 pages to reach 131328 target_tot_pages
>> xc: detail: Got event from evtchn
>> xc: detail: populate_page < gfn 10101 pageslot 128
>> xenforeignmemory: error: mmap failedxc: : Invalid argument
>> detail: populate_page < gfn 10102 pageslot 129
>> xc: detail: populate_page < gfn 10103 pageslot 130
>> xc: detail: populate_page < gfn 10104 pageslot 131
>> ...
>>
>> ...
>> (XEN) vm_event.c:289:d0v0 d2v0 was not paused.
>> (XEN) vm_event.c:289:d0v0 d2v0 was not paused.
>> (XEN) vm_event.c:289:d0v2 d2v2 was not paused.
>> ...
> 
> Hmm ok.  Either way, I think this demonstrates that the feature is not
> of "Tech Preview" quality.

Indeed; I've changed it back to "Experimental".

Thanks all,
 -George

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] Xen 4.10 RC6

2017-11-23 Thread Julien Grall

Hi all,

Xen 4.10 RC6 is tagged. You can check that out from xen.git:

  git://xenbits.xen.org/xen.git 4.10.0-rc6

For your convenience there is also a tarball at:
https://downloads.xenproject.org/release/xen/4.10.0-rc6/xen-4.10.0-rc6.tar.gz

And the signature is at:
https://downloads.xenproject.org/release/xen/4.10.0-rc6/xen-4.10.0-rc6.tar.gz.sig

Please send bug reports and test reports to
xen-devel@lists.xenproject.org. When sending bug reports, please CC
relevant maintainers and me (julien.gr...@linaro.org).

Thanks,

--
Julien Grall

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [xen-4.7-testing test] 116455: regressions - FAIL

2017-11-23 Thread osstest service owner
flight 116455 xen-4.7-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/116455/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64  broken  in 116432
 build-amd644 host-install(4) broken in 116432 REGR. vs. 116348
 test-xtf-amd64-amd64-3 49 xtf/test-hvm64-lbr-tsx-vmentry fail REGR. vs. 116348

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-rtds 17 guest-start.2  fail pass in 116432

Tests which did not succeed, but are not blocking:
 test-amd64-i386-libvirt-qcow2  1 build-check(1)  blocked in 116432 n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 1 build-check(1) blocked in 116432 
n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1) blocked in 116432 n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)blocked in 116432 n/a
 test-xtf-amd64-amd64-11 build-check(1)   blocked in 116432 n/a
 test-amd64-amd64-migrupgrade  1 build-check(1)   blocked in 116432 n/a
 test-amd64-i386-xl-qemut-win10-i386  1 build-check(1)blocked in 116432 n/a
 test-amd64-i386-xl-qemuu-win10-i386  1 build-check(1)blocked in 116432 n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)  blocked in 116432 n/a
 test-amd64-amd64-libvirt  1 build-check(1)   blocked in 116432 n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)blocked in 116432 n/a
 test-amd64-amd64-xl-qemut-win10-i386  1 build-check(1)   blocked in 116432 n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64 1 build-check(1) blocked in 116432 n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)   blocked in 116432 n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)blocked in 116432 n/a
 test-amd64-amd64-pair 1 build-check(1)   blocked in 116432 n/a
 test-amd64-amd64-xl-qemuu-win10-i386  1 build-check(1)   blocked in 116432 n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)   blocked in 116432 n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked in 
116432 n/a
 test-amd64-amd64-pygrub   1 build-check(1)   blocked in 116432 n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)blocked in 116432 n/a
 test-amd64-amd64-xl-qcow2 1 build-check(1)   blocked in 116432 n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)  blocked in 116432 n/a
 test-xtf-amd64-amd64-21 build-check(1)   blocked in 116432 n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 build-check(1)   blocked in 116432 n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64 1 build-check(1) blocked in 116432 n/a
 build-amd64-rumprun   1 build-check(1)   blocked in 116432 n/a
 test-amd64-i386-xl1 build-check(1)   blocked in 116432 n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)   blocked in 116432 n/a
 test-xtf-amd64-amd64-41 build-check(1)   blocked in 116432 n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)   blocked in 116432 n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)   blocked in 116432 n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)   blocked in 116432 n/a
 build-amd64-libvirt   1 build-check(1)   blocked in 116432 n/a
 test-amd64-amd64-rumprun-amd64  1 build-check(1) blocked in 116432 n/a
 test-xtf-amd64-amd64-31 build-check(1)   blocked in 116432 n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)blocked in 116432 n/a
 test-amd64-i386-xl-raw1 build-check(1)   blocked in 116432 n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)   blocked in 116432 n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1) blocked in 116432 n/a
 test-amd64-amd64-xl-qemut-ws16-amd64  1 build-check(1)   blocked in 116432 n/a
 test-xtf-amd64-amd64-51 build-check(1)   blocked in 116432 n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)   blocked in 116432 n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)  blocked in 116432 n/a
 test-amd64-i386-libvirt   1 build-check(1)   blocked in 116432 n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)   blocked in 116432 n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)   blocked in 116432 n/a
 test-amd64-i386-migrupgrade   1 build-check(1)   blocked in 116432 n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)blocked in 116432 n/a
 test-amd64-amd64-xl   1 build-check(1)   blocked in 116432 n/a
 test-amd64-i386-pair  1 build-check(1)   blocked in 116432 n/a
 test-amd64-i386-rumprun-i386  1 build-check(1)   blocked in 116432 n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)  blocked in 116432 n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)blocked in 116432 n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1) blocked in 116432 n/a

[Xen-devel] [linux-linus bisection] complete test-amd64-i386-freebsd10-i386

2017-11-23 Thread osstest service owner
branch xen-unstable
xenbranch xen-unstable
job test-amd64-i386-freebsd10-i386
testid xen-boot

Tree: linux git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  linux 
git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6.git
  Bug introduced:  0c86a6bd85ff0629cd2c5141027fc1c8bb6cde9c
  Bug not present: 15f859ae5c43c7f0a064ed92d33f7a5bc5de6de0
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/116482/


  (Revision log too long, omitted.)


For bisection revision-tuple graph see:
   
http://logs.test-lab.xenproject.org/osstest/results/bisect/linux-linus/test-amd64-i386-freebsd10-i386.xen-boot.html
Revision IDs in each graph node refer, respectively, to the Trees above.


Running cs-bisection-step 
--graph-out=/home/logs/results/bisect/linux-linus/test-amd64-i386-freebsd10-i386.xen-boot
 --summary-out=tmp/116482.bisection-summary --basis-template=115643 
--blessings=real,real-bisect linux-linus test-amd64-i386-freebsd10-i386 xen-boot
Searching for failure / basis pass:
 116433 fail [host=pinot1] / 116215 [host=baroque0] 116182 [host=rimava1] 
116164 [host=fiano0] 116152 [host=chardonnay0] 116136 [host=merlot1] 116119 
[host=italia0] 116103 [host=fiano1] 115718 [host=nobling1] 115690 
[host=elbling1] 115678 [host=italia1] 115643 [host=merlot0] 115628 
[host=elbling0] 115615 [host=chardonnay1] 115599 [host=nocera0] 115573 
[host=pinot0] 115543 [host=baroque1] 115487 [host=baroque0] 115475 
[host=italia0] 115469 [host=nocera1] 115459 [host=rimava1] 115438 [host=fiano0] 
115414 [host=chardonnay0] 115387 [host=merlot1] 115373 [host=nobling1] 115353 
[host=nobling0] 115338 [host=huxelrebe1] 115321 [host=fiano1] 115302 
[host=rimava0] 115279 ok.
Failure / basis pass flights: 116433 / 115279
(tree with no url: minios)
(tree with no url: ovmf)
(tree with no url: seabios)
Tree: linux git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git
Latest 0c86a6bd85ff0629cd2c5141027fc1c8bb6cde9c 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
c8ea0457495342c417c3dc033bba25148b279f60 
b79708a8ed1b3d18bee67baeaf33b3fa529493e2 
b9ee1fd7b98064cf27d0f8f1adf1f5359b72c97f
Basis pass 15f859ae5c43c7f0a064ed92d33f7a5bc5de6de0 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
c8ea0457495342c417c3dc033bba25148b279f60 
5cd7ce5dde3f228b3b669ed9ca432f588947bd40 
24fb44e971a62b345c7b6ca3c03b454a1e150abe
Generating revisions with ./adhoc-revtuple-generator  
git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6.git#15f859ae5c43c7f0a064ed92d33f7a5bc5de6de0-0c86a6bd85ff0629cd2c5141027fc1c8bb6cde9c
 
git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860
 
git://xenbits.xen.org/qemu-xen-traditional.git#c8ea0457495342c417c3dc033bba25148b279f60-c8ea0457495342c417c3dc033bba25148b279f60
 
git://xenbits.xen.org/qemu-xen.git#5cd7ce5dde3f228b3b669ed9ca432f588947bd40-b79708a8ed1b3d18bee67baeaf33b3fa529493e2
 
git://xenbits.xen.org/xen.git#24fb44e971a62b345c7b6ca3c03b454a1e150abe-b9ee1fd7b98064cf27d0f8f1adf1f5359b72c97f
adhoc-revtuple-generator: tree discontiguous: linux-2.6
Loaded 2006 nodes in revision graph
Searching for test results:
 114643 [host=nocera1]
 114658 [host=chardonnay0]
 114781 [host=rimava1]
 114682 [host=nobling1]
 114820 [host=baroque1]
 114883 [host=italia0]
 115009 [host=italia1]
 115121 [host=elbling0]
 115153 [host=chardonnay1]
 115182 [host=merlot0]
 115203 [host=huxelrebe0]
 115244 [host=elbling1]
 115279 pass 15f859ae5c43c7f0a064ed92d33f7a5bc5de6de0 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
c8ea0457495342c417c3dc033bba25148b279f60 
5cd7ce5dde3f228b3b669ed9ca432f588947bd40 
24fb44e971a62b345c7b6ca3c03b454a1e150abe
 115321 [host=fiano1]
 115302 [host=rimava0]
 115338 [host=huxelrebe1]
 115353 [host=nobling0]
 115387 [host=merlot1]
 115373 [host=nobling1]
 115469 [host=nocera1]
 115414 [host=chardonnay0]
 115459 [host=rimava1]
 115438 [host=fiano0]
 115475 [host=italia0]
 115487 [host=baroque0]
 115599 [host=nocera0]
 115543 [host=baroque1]
 115573 [host=pinot0]
 115615 [host=chardonnay1]
 115628 [host=elbling0]
 115643 [host=merlot0]
 115678 [host=italia1]
 115690 [host=elbling1]
 115718 [host=nobling1]
 116103 [host=fiano1]
 116152 [host=chardonnay0]
 116119 [host=italia0]
 116136 [host=merlot1]
 116164 [host=fiano0]
 116182 [host=rimava1]
 116215 [host=baroque0]
 116226 fail irrelevant
 116268 fail irrelevant
 116316 fail irrelevant
 116343 fail irrelevant
 116433 fail 0c86

[Xen-devel] [PATCH for-next 11/16] xen/arm: p2m: Rename p2m_flush_tlb and p2m_flush_tlb_sync

2017-11-23 Thread Julien Grall
Rename p2m_flush_tlb and p2m_flush_tlb_sync to respectively
p2m_tlb_flush and p2m_force_tlb_flush_sync.

At first glance, inverting 'flush' and 'tlb' might seem pointless, but it
will be helpful in the future, making it easier to port code from the x86
P2M or even to share code with it.

For p2m_flush_tlb_sync, 'force' was added because the TLBs are
flushed unconditionally. A follow-up patch will add a helper to flush
the TLBs only in certain cases.

Signed-off-by: Julien Grall 
---
 xen/arch/arm/p2m.c | 18 +-
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 417609ede2..d466a5bc43 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -52,7 +52,7 @@ static const paddr_t level_masks[] =
 static const uint8_t level_orders[] =
 { ZEROETH_ORDER, FIRST_ORDER, SECOND_ORDER, THIRD_ORDER };
 
-static void p2m_flush_tlb(struct p2m_domain *p2m);
+static void p2m_tlb_flush(struct p2m_domain *p2m);
 
 /* Unlock the flush and do a P2M TLB flush if necessary */
 void p2m_write_unlock(struct p2m_domain *p2m)
@@ -65,7 +65,7 @@ void p2m_write_unlock(struct p2m_domain *p2m)
  * to avoid someone else modify the P2M before the TLB
  * invalidation has completed.
  */
-p2m_flush_tlb(p2m);
+p2m_tlb_flush(p2m);
 }
 
 write_unlock(&p2m->lock);
@@ -138,7 +138,7 @@ void p2m_restore_state(struct vcpu *n)
 *last_vcpu_ran = n->vcpu_id;
 }
 
-static void p2m_flush_tlb(struct p2m_domain *p2m)
+static void p2m_tlb_flush(struct p2m_domain *p2m)
 {
 unsigned long flags = 0;
 uint64_t ovttbr;
@@ -170,11 +170,11 @@ static void p2m_flush_tlb(struct p2m_domain *p2m)
  *
  * Must be called with the p2m lock held.
  */
-static void p2m_flush_tlb_sync(struct p2m_domain *p2m)
+static void p2m_force_tlb_flush_sync(struct p2m_domain *p2m)
 {
 ASSERT(p2m_is_write_locked(p2m));
 
-p2m_flush_tlb(p2m);
+p2m_tlb_flush(p2m);
 p2m->need_flush = false;
 }
 
@@ -675,7 +675,7 @@ static void p2m_free_entry(struct p2m_domain *p2m,
  * flush?
  */
 if ( p2m->need_flush )
-p2m_flush_tlb_sync(p2m);
+p2m_force_tlb_flush_sync(p2m);
 
 mfn = _mfn(entry.p2m.base);
 ASSERT(mfn_valid(mfn));
@@ -864,7 +864,7 @@ static int __p2m_set_entry(struct p2m_domain *p2m,
  * For more details see (D4.7.1 in ARM DDI 0487A.j).
  */
 p2m_remove_pte(entry, p2m->clean_pte);
-p2m_flush_tlb_sync(p2m);
+p2m_force_tlb_flush_sync(p2m);
 
 p2m_write_pte(entry, split_pte, p2m->clean_pte);
 
@@ -940,7 +940,7 @@ static int __p2m_set_entry(struct p2m_domain *p2m,
 {
 if ( likely(!p2m->mem_access_enabled) ||
  P2M_CLEAR_PERM(pte) != P2M_CLEAR_PERM(orig_pte) )
-p2m_flush_tlb_sync(p2m);
+p2m_force_tlb_flush_sync(p2m);
 else
 p2m->need_flush = true;
 }
@@ -1144,7 +1144,7 @@ static int p2m_alloc_table(struct domain *d)
  * Make sure that all TLBs corresponding to the new VMID are flushed
  * before using it
  */
-p2m_flush_tlb(p2m);
+p2m_tlb_flush(p2m);
 
 return 0;
 }
-- 
2.11.0


___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [PATCH for-next 05/16] xen/arm: guest_copy: Extend the prototype to pass the vCPU

2017-11-23 Thread Julien Grall
Currently, guest_copy assumes the copy will only be done for the current
vCPU. A follow-up patch will require using a different vCPU.

So extend the prototype to pass the vCPU.

Signed-off-by: Julien Grall 
---
 xen/arch/arm/guestcopy.c | 13 +++--
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
index 3aaa80859e..487f5ab82d 100644
--- a/xen/arch/arm/guestcopy.c
+++ b/xen/arch/arm/guestcopy.c
@@ -10,7 +10,7 @@
 #define COPY_to_guest   (1U << 1)
 
 static unsigned long copy_guest(void *buf, paddr_t addr, unsigned int len,
-unsigned int flags)
+struct vcpu *v, unsigned int flags)
 {
 /* XXX needs to handle faults */
 unsigned offset = addr & ~PAGE_MASK;
@@ -21,7 +21,7 @@ static unsigned long copy_guest(void *buf, paddr_t addr, 
unsigned int len,
 unsigned size = min(len, (unsigned)PAGE_SIZE - offset);
 struct page_info *page;
 
-page = get_page_from_gva(current, addr,
+page = get_page_from_gva(v, addr,
  (flags & COPY_to_guest) ? GV2M_WRITE : 
GV2M_READ);
 if ( page == NULL )
 return len;
@@ -62,24 +62,25 @@ static unsigned long copy_guest(void *buf, paddr_t addr, 
unsigned int len,
 
 unsigned long raw_copy_to_guest(void *to, const void *from, unsigned len)
 {
-return copy_guest((void *)from, (unsigned long)to, len, COPY_to_guest);
+return copy_guest((void *)from, (unsigned long)to, len,
+  current, COPY_to_guest);
 }
 
 unsigned long raw_copy_to_guest_flush_dcache(void *to, const void *from,
  unsigned len)
 {
 return copy_guest((void *)from, (unsigned long)to, len,
-  COPY_to_guest | COPY_flush_dcache);
+  current, COPY_to_guest | COPY_flush_dcache);
 }
 
 unsigned long raw_clear_guest(void *to, unsigned len)
 {
-return copy_guest(NULL, (unsigned long)to, len, COPY_to_guest);
+return copy_guest(NULL, (unsigned long)to, len, current, COPY_to_guest);
 }
 
 unsigned long raw_copy_from_guest(void *to, const void __user *from, unsigned 
len)
 {
-return copy_guest(to, (unsigned long)from, len, COPY_from_guest);
+return copy_guest(to, (unsigned long)from, len, current, COPY_from_guest);
 }
 
 /*
-- 
2.11.0


___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [PATCH for-next 13/16] xen/arm: p2m: Fold p2m_tlb_flush into p2m_force_tlb_flush_sync

2017-11-23 Thread Julien Grall
p2m_tlb_flush is called in 2 places: p2m_alloc_table and
p2m_force_tlb_flush_sync.

p2m_alloc_table is called when the domain is initialized and could be
replaced by a call to p2m_force_tlb_flush_sync with the P2M write locked.

This may seem a bit pointless, but it allows having a single API for
flushing and avoids misuse in the P2M code.

So update p2m_alloc_table to use p2m_force_tlb_flush_sync and fold
p2m_tlb_flush in p2m_force_tlb_flush_sync.

Signed-off-by: Julien Grall 
---
 xen/arch/arm/p2m.c | 24 +++-
 1 file changed, 11 insertions(+), 13 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 37498d8ff1..5294113afe 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -132,11 +132,18 @@ void p2m_restore_state(struct vcpu *n)
 *last_vcpu_ran = n->vcpu_id;
 }
 
-static void p2m_tlb_flush(struct p2m_domain *p2m)
+/*
+ * Force a synchronous P2M TLB flush.
+ *
+ * Must be called with the p2m lock held.
+ */
+static void p2m_force_tlb_flush_sync(struct p2m_domain *p2m)
 {
 unsigned long flags = 0;
 uint64_t ovttbr;
 
+ASSERT(p2m_is_write_locked(p2m));
+
 /*
  * ARM only provides an instruction to flush TLBs for the current
  * VMID. So switch to the VTTBR of a given P2M if different.
@@ -157,18 +164,7 @@ static void p2m_tlb_flush(struct p2m_domain *p2m)
 isb();
 local_irq_restore(flags);
 }
-}
-
-/*
- * Force a synchronous P2M TLB flush.
- *
- * Must be called with the p2m lock held.
- */
-static void p2m_force_tlb_flush_sync(struct p2m_domain *p2m)
-{
-ASSERT(p2m_is_write_locked(p2m));
 
-p2m_tlb_flush(p2m);
 p2m->need_flush = false;
 }
 
@@ -1143,7 +1139,9 @@ static int p2m_alloc_table(struct domain *d)
  * Make sure that all TLBs corresponding to the new VMID are flushed
  * before using it
  */
-p2m_tlb_flush(p2m);
+p2m_write_lock(p2m);
+p2m_force_tlb_flush_sync(p2m);
+p2m_write_unlock(p2m);
 
 return 0;
 }
-- 
2.11.0


___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [PATCH for-next 06/16] xen/arm: Extend copy_to_guest to support copying from/to guest physical address

2017-11-23 Thread Julien Grall
The only differences between copy_to_guest and access_guest_memory_by_ipa are:
- The latter does not support copying data crossing a page boundary
- The former copies from/to a guest VA whilst the latter uses a
guest PA

copy_to_guest can easily be extended to support copying from/to a guest
physical address. For that, a new bit is used to tell whether a linear
address or an IPA is being used.

Lastly, access_guest_memory_by_ipa is reimplemented using copy_to_guest.
This also has the benefit of extending its use: it is now possible
to copy data crossing a page boundary.

Signed-off-by: Julien Grall 
---
 xen/arch/arm/guestcopy.c | 86 ++--
 1 file changed, 39 insertions(+), 47 deletions(-)

diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
index 487f5ab82d..be53bee559 100644
--- a/xen/arch/arm/guestcopy.c
+++ b/xen/arch/arm/guestcopy.c
@@ -8,6 +8,31 @@
 #define COPY_flush_dcache   (1U << 0)
 #define COPY_from_guest (0U << 1)
 #define COPY_to_guest   (1U << 1)
+#define COPY_ipa(0U << 2)
+#define COPY_linear (1U << 2)
+
+static struct page_info *translate_get_page(struct vcpu *v, paddr_t addr,
+bool linear, bool write)
+{
+p2m_type_t p2mt;
+struct page_info *page;
+
+if ( linear )
+return get_page_from_gva(v, addr, write ? GV2M_WRITE : GV2M_READ);
+
+page = get_page_from_gfn(v->domain, paddr_to_pfn(addr), &p2mt, P2M_ALLOC);
+
+if ( !page )
+return NULL;
+
+if ( !p2m_is_ram(p2mt) )
+{
+put_page(page);
+return NULL;
+}
+
+return page;
+}
 
 static unsigned long copy_guest(void *buf, paddr_t addr, unsigned int len,
 struct vcpu *v, unsigned int flags)
@@ -21,8 +46,8 @@ static unsigned long copy_guest(void *buf, paddr_t addr, 
unsigned int len,
 unsigned size = min(len, (unsigned)PAGE_SIZE - offset);
 struct page_info *page;
 
-page = get_page_from_gva(v, addr,
- (flags & COPY_to_guest) ? GV2M_WRITE : 
GV2M_READ);
+page = translate_get_page(v, addr, flags & COPY_linear,
+  flags & COPY_to_guest);
 if ( page == NULL )
 return len;
 
@@ -63,73 +88,40 @@ static unsigned long copy_guest(void *buf, paddr_t addr, 
unsigned int len,
 unsigned long raw_copy_to_guest(void *to, const void *from, unsigned len)
 {
 return copy_guest((void *)from, (unsigned long)to, len,
-  current, COPY_to_guest);
+  current, COPY_to_guest | COPY_linear);
 }
 
 unsigned long raw_copy_to_guest_flush_dcache(void *to, const void *from,
  unsigned len)
 {
 return copy_guest((void *)from, (unsigned long)to, len,
-  current, COPY_to_guest | COPY_flush_dcache);
+  current, COPY_to_guest | COPY_flush_dcache | 
COPY_linear);
 }
 
 unsigned long raw_clear_guest(void *to, unsigned len)
 {
-return copy_guest(NULL, (unsigned long)to, len, current, COPY_to_guest);
+return copy_guest(NULL, (unsigned long)to, len, current,
+  COPY_to_guest | COPY_linear);
 }
 
 unsigned long raw_copy_from_guest(void *to, const void __user *from, unsigned 
len)
 {
-return copy_guest(to, (unsigned long)from, len, current, COPY_from_guest);
+return copy_guest(to, (unsigned long)from, len, current,
+  COPY_from_guest | COPY_linear);
 }
 
-/*
- * Temporarily map one physical guest page and copy data to or from it.
- * The data to be copied cannot cross a page boundary.
- */
 int access_guest_memory_by_ipa(struct domain *d, paddr_t gpa, void *buf,
uint32_t size, bool is_write)
 {
-struct page_info *page;
-uint64_t offset = gpa & ~PAGE_MASK;  /* Offset within the mapped page */
-p2m_type_t p2mt;
-void *p;
-
-/* Do not cross a page boundary. */
-if ( size > (PAGE_SIZE - offset) )
-{
-printk(XENLOG_G_ERR "d%d: guestcopy: memory access crosses page 
boundary.\n",
-   d->domain_id);
-return -EINVAL;
-}
-
-page = get_page_from_gfn(d, paddr_to_pfn(gpa), &p2mt, P2M_ALLOC);
-if ( !page )
-{
-printk(XENLOG_G_ERR "d%d: guestcopy: failed to get table entry.\n",
-   d->domain_id);
-return -EINVAL;
-}
-
-if ( !p2m_is_ram(p2mt) )
-{
-put_page(page);
-printk(XENLOG_G_ERR "d%d: guestcopy: guest memory should be RAM.\n",
-   d->domain_id);
-return -EINVAL;
-}
+unsigned long left;
+int flags = COPY_ipa;
 
-p = __map_domain_page(page);
+flags |= is_write ? COPY_to_guest : COPY_from_guest;
 
-if ( is_write )
-memcpy(p + offset, buf, size);
-else
-memcpy(buf, p + offset, size);
+/* P2M is shared between all vCPUs, so the vcpu used does not matter. */

[Xen-devel] [PATCH for-next 03/16] xen/arm: Extend copy_to_guest to support copying from guest VA and use it

2017-11-23 Thread Julien Grall
The only differences between copy_to_guest (formerly called
raw_copy_to_guest_helper) and raw_copy_from_guest are:
- The direction of the memcpy
- The permission used for translating the address

Extend copy_to_guest to support copying from a guest VA by using a
bit in the flags to tell the direction of the copy.

Lastly, reimplement raw_copy_from_guest using copy_to_guest.

Signed-off-by: Julien Grall 
---
 xen/arch/arm/guestcopy.c | 46 +-
 1 file changed, 13 insertions(+), 33 deletions(-)

diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
index d1cfbe922c..1ffa717ca6 100644
--- a/xen/arch/arm/guestcopy.c
+++ b/xen/arch/arm/guestcopy.c
@@ -6,6 +6,8 @@
 #include 
 
 #define COPY_flush_dcache   (1U << 0)
+#define COPY_from_guest (0U << 1)
+#define COPY_to_guest   (1U << 1)
 
 static unsigned long copy_guest(void *buf, paddr_t addr, unsigned int len,
 unsigned int flags)
@@ -19,13 +21,18 @@ static unsigned long copy_guest(void *buf, paddr_t addr, 
unsigned int len,
 unsigned size = min(len, (unsigned)PAGE_SIZE - offset);
 struct page_info *page;
 
-page = get_page_from_gva(current, addr, GV2M_WRITE);
+page = get_page_from_gva(current, addr,
+ (flags & COPY_to_guest) ? GV2M_WRITE : 
GV2M_READ);
 if ( page == NULL )
 return len;
 
 p = __map_domain_page(page);
 p += offset;
-memcpy(p, buf, size);
+if ( flags & COPY_to_guest )
+memcpy(p, buf, size);
+else
+memcpy(buf, p, size);
+
 if ( flags & COPY_flush_dcache )
 clean_dcache_va_range(p, size);
 
@@ -46,13 +53,14 @@ static unsigned long copy_guest(void *buf, paddr_t addr, 
unsigned int len,
 
 unsigned long raw_copy_to_guest(void *to, const void *from, unsigned len)
 {
-return copy_guest((void *)from, (unsigned long)to, len, 0);
+return copy_guest((void *)from, (unsigned long)to, len, COPY_to_guest);
 }
 
 unsigned long raw_copy_to_guest_flush_dcache(void *to, const void *from,
  unsigned len)
 {
-return copy_guest((void *)from, (unsigned long)to, len, COPY_flush_dcache);
+return copy_guest((void *)from, (unsigned long)to, len,
+  COPY_to_guest | COPY_flush_dcache);
 }
 
 unsigned long raw_clear_guest(void *to, unsigned len)
@@ -90,35 +98,7 @@ unsigned long raw_clear_guest(void *to, unsigned len)
 
 unsigned long raw_copy_from_guest(void *to, const void __user *from, unsigned 
len)
 {
-unsigned offset = (vaddr_t)from & ~PAGE_MASK;
-
-while ( len )
-{
-void *p;
-unsigned size = min(len, (unsigned)(PAGE_SIZE - offset));
-struct page_info *page;
-
-page = get_page_from_gva(current, (vaddr_t) from, GV2M_READ);
-if ( page == NULL )
-return len;
-
-p = __map_domain_page(page);
-p += ((vaddr_t)from & (~PAGE_MASK));
-
-memcpy(to, p, size);
-
-unmap_domain_page(p);
-put_page(page);
-len -= size;
-from += size;
-to += size;
-/*
- * After the first iteration, guest virtual address is correctly
- * aligned to PAGE_SIZE.
- */
-offset = 0;
-}
-return 0;
+return copy_guest(to, (unsigned long)from, len, COPY_from_guest);
 }
 
 /*
-- 
2.11.0


___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [PATCH for-next 02/16] xen/arm: raw_copy_to_guest_helper: Rework the prototype and rename it

2017-11-23 Thread Julien Grall
All the helpers within arch/arm/guestcopy.c are doing the same thing:
copying data from/to the guest.

At the moment, the logic is duplicated in each helper, making it more
difficult to implement new variants.

The first step for the consolidation is to get a common prototype and a
base. For convenience (it is at the beginning of the file!),
raw_copy_to_guest_helper is chosen.

The function is now renamed copy_guest to show it will be a
generic function to copy data from/to the guest. Note that for now, only
copying to guest virtual address is supported. Follow-up patches will
extend the support.

Signed-off-by: Julien Grall 
---
 xen/arch/arm/guestcopy.c | 18 +-
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
index 2620e659b4..d1cfbe922c 100644
--- a/xen/arch/arm/guestcopy.c
+++ b/xen/arch/arm/guestcopy.c
@@ -7,11 +7,11 @@
 
 #define COPY_flush_dcache   (1U << 0)
 
-static unsigned long raw_copy_to_guest_helper(void *to, const void *from,
-  unsigned len, int flags)
+static unsigned long copy_guest(void *buf, paddr_t addr, unsigned int len,
+unsigned int flags)
 {
 /* XXX needs to handle faults */
-unsigned offset = (vaddr_t)to & ~PAGE_MASK;
+unsigned offset = addr & ~PAGE_MASK;
 
 while ( len )
 {
@@ -19,21 +19,21 @@ static unsigned long raw_copy_to_guest_helper(void *to, 
const void *from,
 unsigned size = min(len, (unsigned)PAGE_SIZE - offset);
 struct page_info *page;
 
-page = get_page_from_gva(current, (vaddr_t) to, GV2M_WRITE);
+page = get_page_from_gva(current, addr, GV2M_WRITE);
 if ( page == NULL )
 return len;
 
 p = __map_domain_page(page);
 p += offset;
-memcpy(p, from, size);
+memcpy(p, buf, size);
 if ( flags & COPY_flush_dcache )
 clean_dcache_va_range(p, size);
 
 unmap_domain_page(p - offset);
 put_page(page);
 len -= size;
-from += size;
-to += size;
+buf += size;
+addr += size;
 /*
  * After the first iteration, guest virtual address is correctly
  * aligned to PAGE_SIZE.
@@ -46,13 +46,13 @@ static unsigned long raw_copy_to_guest_helper(void *to, 
const void *from,
 
 unsigned long raw_copy_to_guest(void *to, const void *from, unsigned len)
 {
-return raw_copy_to_guest_helper(to, from, len, 0);
+return copy_guest((void *)from, (unsigned long)to, len, 0);
 }
 
 unsigned long raw_copy_to_guest_flush_dcache(void *to, const void *from,
  unsigned len)
 {
-return raw_copy_to_guest_helper(to, from, len, COPY_flush_dcache);
+return copy_guest((void *)from, (unsigned long)to, len, COPY_flush_dcache);
 }
 
 unsigned long raw_clear_guest(void *to, unsigned len)
-- 
2.11.0


___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [PATCH for-next 01/16] xen/arm: raw_copy_to_guest_helper: Rename flush_dcache to flags

2017-11-23 Thread Julien Grall
In a follow-up patch, it will be necessary to pass more flags to the
function.

Rename flush_dcache to flags and introduce a define to tell whether the
cache needs to be flushed after the copy.

Signed-off-by: Julien Grall 
---
 xen/arch/arm/guestcopy.c | 8 +---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
index 4ee07fcea3..2620e659b4 100644
--- a/xen/arch/arm/guestcopy.c
+++ b/xen/arch/arm/guestcopy.c
@@ -5,8 +5,10 @@
 #include 
 #include 
 
+#define COPY_flush_dcache   (1U << 0)
+
 static unsigned long raw_copy_to_guest_helper(void *to, const void *from,
-  unsigned len, int flush_dcache)
+  unsigned len, int flags)
 {
 /* XXX needs to handle faults */
 unsigned offset = (vaddr_t)to & ~PAGE_MASK;
@@ -24,7 +26,7 @@ static unsigned long raw_copy_to_guest_helper(void *to, const 
void *from,
 p = __map_domain_page(page);
 p += offset;
 memcpy(p, from, size);
-if ( flush_dcache )
+if ( flags & COPY_flush_dcache )
 clean_dcache_va_range(p, size);
 
 unmap_domain_page(p - offset);
@@ -50,7 +52,7 @@ unsigned long raw_copy_to_guest(void *to, const void *from, 
unsigned len)
 unsigned long raw_copy_to_guest_flush_dcache(void *to, const void *from,
  unsigned len)
 {
-return raw_copy_to_guest_helper(to, from, len, 1);
+return raw_copy_to_guest_helper(to, from, len, COPY_flush_dcache);
 }
 
 unsigned long raw_clear_guest(void *to, unsigned len)
-- 
2.11.0


___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [PATCH for-next 09/16] xen/arm: domain_build: Rework initrd_load to use the generic copy helper

2017-11-23 Thread Julien Grall
The function initrd_load is dealing with IPA but uses gvirt_to_maddr to
do the translation. This is currently working fine because the stage-1 MMU
is disabled.

Furthermore, the function implements its own copy to guest, resulting
in code duplication and making it more difficult to update the
page-table logic (such as support for Populate On Demand).

The new copy_to_guest_phys_flush_dcache could be used here by temporarily
mapping the full initrd in the virtual space.

Signed-off-by: Julien Grall 
---
 xen/arch/arm/domain_build.c | 31 ---
 1 file changed, 8 insertions(+), 23 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 3f87bf2051..42c2e16ef6 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -1966,11 +1966,11 @@ static void initrd_load(struct kernel_info *kinfo)
 const struct bootmodule *mod = kinfo->initrd_bootmodule;
 paddr_t load_addr = kinfo->initrd_paddr;
 paddr_t paddr, len;
-unsigned long offs;
 int node;
 int res;
 __be32 val[2];
 __be32 *cellp;
+void __iomem *initrd;
 
 if ( !mod || !mod->size )
 return;
@@ -2000,29 +2000,14 @@ static void initrd_load(struct kernel_info *kinfo)
 if ( res )
 panic("Cannot fix up \"linux,initrd-end\" property");
 
-for ( offs = 0; offs < len; )
-{
-uint64_t par;
-paddr_t s, l, ma = 0;
-void *dst;
-
-s = offs & ~PAGE_MASK;
-l = min(PAGE_SIZE - s, len);
-
-par = gvirt_to_maddr(load_addr + offs, &ma, GV2M_WRITE);
-if ( par )
-{
-panic("Unable to translate guest address");
-return;
-}
-
-dst = map_domain_page(maddr_to_mfn(ma));
+initrd = ioremap_wc(paddr, len);
+if ( !initrd )
+panic("Unable to map the hwdom initrd");
 
-copy_from_paddr(dst + s, paddr + offs, l);
-
-unmap_domain_page(dst);
-offs += l;
-}
+res = copy_to_guest_phys_flush_dcache(kinfo->d, load_addr,
+  initrd, len);
+if ( res != 0 )
+panic("Unable to copy the initrd in the hwdom memory");
 }
 
 static void evtchn_fixup(struct domain *d, struct kernel_info *kinfo)
-- 
2.11.0


___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [PATCH for-next 10/16] xen/arm: domain_build: Use copy_to_guest_phys_flush_dcache in dtb_load

2017-11-23 Thread Julien Grall
The function dtb_load is dealing with IPA but uses gvirt_to_maddr to do
the translation. This is currently working fine because the stage-1 MMU
is disabled.

Rather than relying on such an assumption, use the new
copy_to_guest_phys_flush_dcache. This also results in slightly more
comprehensible code.

Signed-off-by: Julien Grall 
---
 xen/arch/arm/domain_build.c | 7 ---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 42c2e16ef6..9245753a6b 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -1948,14 +1948,15 @@ static int prepare_acpi(struct domain *d, struct 
kernel_info *kinfo)
 #endif
 static void dtb_load(struct kernel_info *kinfo)
 {
-void * __user dtb_virt = (void * __user)(register_t)kinfo->dtb_paddr;
 unsigned long left;
 
 printk("Loading dom0 DTB to 0x%"PRIpaddr"-0x%"PRIpaddr"\n",
kinfo->dtb_paddr, kinfo->dtb_paddr + fdt_totalsize(kinfo->fdt));
 
-left = raw_copy_to_guest_flush_dcache(dtb_virt, kinfo->fdt,
-fdt_totalsize(kinfo->fdt));
+left = copy_to_guest_phys_flush_dcache(kinfo->d, kinfo->dtb_paddr,
+   kinfo->fdt,
+   fdt_totalsize(kinfo->fdt));
+
 if ( left != 0 )
 panic("Unable to copy the DTB to dom0 memory (left = %lu bytes)", 
left);
 xfree(kinfo->fdt);
-- 
2.11.0


___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [PATCH for-next 12/16] xen/arm: p2m: Introduce p2m_tlb_flush_sync, export it and use it

2017-11-23 Thread Julien Grall
Multiple places in the code require flushing the TLBs only when
p2m->need_flush is set.

Rather than open-coding it, introduce a new helper p2m_tlb_flush_sync to
do it.

Note that p2m_tlb_flush_sync is exported as it might be used by other
parts of Xen.

Signed-off-by: Julien Grall 
---
 xen/arch/arm/p2m.c| 27 +--
 xen/include/asm-arm/p2m.h |  2 ++
 2 files changed, 15 insertions(+), 14 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index d466a5bc43..37498d8ff1 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -52,21 +52,15 @@ static const paddr_t level_masks[] =
 static const uint8_t level_orders[] =
 { ZEROETH_ORDER, FIRST_ORDER, SECOND_ORDER, THIRD_ORDER };
 
-static void p2m_tlb_flush(struct p2m_domain *p2m);
-
 /* Unlock the flush and do a P2M TLB flush if necessary */
 void p2m_write_unlock(struct p2m_domain *p2m)
 {
-if ( p2m->need_flush )
-{
-p2m->need_flush = false;
-/*
- * The final flush is done with the P2M write lock taken to
- * to avoid someone else modify the P2M before the TLB
- * invalidation has completed.
- */
-p2m_tlb_flush(p2m);
-}
+/*
+ * The final flush is done with the P2M write lock taken to avoid
+ * someone else modifying the P2M before the TLB invalidation has
+ * completed.
+ */
+p2m_tlb_flush_sync(p2m);
 
 write_unlock(&p2m->lock);
 }
@@ -178,6 +172,12 @@ static void p2m_force_tlb_flush_sync(struct p2m_domain 
*p2m)
 p2m->need_flush = false;
 }
 
+void p2m_tlb_flush_sync(struct p2m_domain *p2m)
+{
+if ( p2m->need_flush )
+p2m_force_tlb_flush_sync(p2m);
+}
+
 /*
  * Find and map the root page table. The caller is responsible for
  * unmapping the table.
@@ -674,8 +674,7 @@ static void p2m_free_entry(struct p2m_domain *p2m,
  * XXX: Should we defer the free of the page table to avoid the
  * flush?
  */
-if ( p2m->need_flush )
-p2m_force_tlb_flush_sync(p2m);
+p2m_tlb_flush_sync(p2m);
 
 mfn = _mfn(entry.p2m.base);
 ASSERT(mfn_valid(mfn));
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index faadcfe8fe..a0abc84ed8 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -204,6 +204,8 @@ static inline int p2m_is_write_locked(struct p2m_domain 
*p2m)
 return rw_is_write_locked(&p2m->lock);
 }
 
+void p2m_tlb_flush_sync(struct p2m_domain *p2m);
+
 /* Look up the MFN corresponding to a domain's GFN. */
 mfn_t p2m_lookup(struct domain *d, gfn_t gfn, p2m_type_t *t);
 
-- 
2.11.0


___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [PATCH for-next 07/16] xen/arm: Introduce copy_to_guest_phys_flush_dcache

2017-11-23 Thread Julien Grall
This new function will be used in a follow-up patch to copy data to the guest
using the IPA (aka guest physical address) and then clean the cache.

Signed-off-by: Julien Grall 
---
 xen/arch/arm/guestcopy.c   | 10 ++
 xen/include/asm-arm/guest_access.h |  6 ++
 2 files changed, 16 insertions(+)

diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
index be53bee559..7958663970 100644
--- a/xen/arch/arm/guestcopy.c
+++ b/xen/arch/arm/guestcopy.c
@@ -110,6 +110,16 @@ unsigned long raw_copy_from_guest(void *to, const void __user *from, unsigned le
   COPY_from_guest | COPY_linear);
 }
 
+unsigned long copy_to_guest_phys_flush_dcache(struct domain *d,
+  paddr_t gpa,
+  void *buf,
+  unsigned int len)
+{
+/* P2M is shared between all vCPUs, so the vCPU used does not matter. */
+return copy_guest(buf, gpa, len, d->vcpu[0],
+  COPY_to_guest | COPY_ipa | COPY_flush_dcache);
+}
+
 int access_guest_memory_by_ipa(struct domain *d, paddr_t gpa, void *buf,
uint32_t size, bool is_write)
 {
diff --git a/xen/include/asm-arm/guest_access.h b/xen/include/asm-arm/guest_access.h
index 6796801cfe..224d2a033b 100644
--- a/xen/include/asm-arm/guest_access.h
+++ b/xen/include/asm-arm/guest_access.h
@@ -11,6 +11,12 @@ unsigned long raw_copy_to_guest_flush_dcache(void *to, const void *from,
 unsigned long raw_copy_from_guest(void *to, const void *from, unsigned len);
 unsigned long raw_clear_guest(void *to, unsigned len);
 
+/* Copy data to guest physical address, then clean the region. */
+unsigned long copy_to_guest_phys_flush_dcache(struct domain *d,
+  paddr_t phys,
+  void *buf,
+  unsigned int len);
+
 int access_guest_memory_by_ipa(struct domain *d, paddr_t ipa, void *buf,
uint32_t size, bool is_write);
 
-- 
2.11.0



[Xen-devel] [PATCH for-next 00/16] xen/arm: Stage-2 handling cleanup

2017-11-23 Thread Julien Grall
Hi all,

This patch series is a collection of cleanups around stage-2 handling,
consolidating different pieces of the hypervisor. This will make it easier
to maintain and update the stage-2 code in the future.

Cheers,

Julien Grall (16):
  xen/arm: raw_copy_to_guest_helper: Rename flush_dcache to flags
  xen/arm: raw_copy_to_guest_helper: Rework the prototype and rename it
  xen/arm: Extend copy_to_guest to support copying from guest VA and use
it
  xen/arm: Extend copy_to_guest to support zeroing guest VA and use it
  xen/arm: guest_copy: Extend the prototype to pass the vCPU
  xen/arm: Extend copy_to_guest to support copying from/to guest
physical address
  xen/arm: Introduce copy_to_guest_phys_flush_dcache
  xen/arm: kernel: Rework kernel_zimage_load to use the generic copy
helper
  xen/arm: domain_build: Rework initrd_load to use the generic copy
helper
  xen/arm: domain_build: Use copy_to_guest_phys_flush_dcache in dtb_load
  xen/arm: p2m: Rename p2m_flush_tlb and p2m_flush_tlb_sync
  xen/arm: p2m: Introduce p2m_tlb_flush_sync, export it and use it
  xen/arm: p2m: Fold p2m_tlb_flush into p2m_force_tlb_flush_sync
  xen/arm: traps: Remove the field gva from mmio_info_t
  xen/arm: traps: Move the definition of mmio_info_t in try_handle_mmio
  xen/arm: traps: Merge do_trap_instr_abort_guest and
do_trap_data_abort_guest

 xen/arch/arm/domain_build.c|  39 +++-
 xen/arch/arm/guestcopy.c   | 182 +++--
 xen/arch/arm/kernel.c  |  33 +++
 xen/arch/arm/kernel.h  |   2 +
 xen/arch/arm/p2m.c |  53 +--
 xen/arch/arm/traps.c   | 161 
 xen/include/asm-arm/guest_access.h |   6 ++
 xen/include/asm-arm/mmio.h |   1 -
 xen/include/asm-arm/p2m.h  |   2 +
 9 files changed, 191 insertions(+), 288 deletions(-)

-- 
2.11.0



[Xen-devel] [PATCH for-next 15/16] xen/arm: traps: Move the definition of mmio_info_t in try_handle_mmio

2017-11-23 Thread Julien Grall
mmio_info_t is currently filled by do_trap_data_abort_guest but is only
relevant when emulating an MMIO region.

A follow-up patch will merge stage-2 prefetch abort and stage-2 data abort
handling into a single helper. To prepare for that, mmio_info_t is now
filled by try_handle_mmio.

Signed-off-by: Julien Grall 
---
 xen/arch/arm/traps.c | 31 +--
 1 file changed, 17 insertions(+), 14 deletions(-)

diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index e30dd9b7e2..a68e01b457 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -1936,9 +1936,14 @@ static void do_trap_instr_abort_guest(struct cpu_user_regs *regs,
 }
 
 static bool try_handle_mmio(struct cpu_user_regs *regs,
-mmio_info_t *info)
+const union hsr hsr,
+paddr_t gpa)
 {
-const struct hsr_dabt dabt = info->dabt;
+const struct hsr_dabt dabt = hsr.dabt;
+mmio_info_t info = {
+.gpa = gpa,
+.dabt = dabt
+};
 int rc;
 
 /* stage-1 page table should never live in an emulated MMIO region */
@@ -1956,7 +1961,7 @@ static bool try_handle_mmio(struct cpu_user_regs *regs,
 if ( check_workaround_766422() && (regs->cpsr & PSR_THUMB) &&
  dabt.write )
 {
-rc = decode_instruction(regs, &info->dabt);
+rc = decode_instruction(regs, &info.dabt);
 if ( rc )
 {
 gprintk(XENLOG_DEBUG, "Unable to decode instruction\n");
@@ -1964,7 +1969,7 @@ static bool try_handle_mmio(struct cpu_user_regs *regs,
 }
 }
 
-return !!handle_mmio(info);
+return !!handle_mmio(&info);
 }
 
 /*
@@ -2002,7 +2007,7 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
 const struct hsr_dabt dabt = hsr.dabt;
 int rc;
 vaddr_t gva;
-mmio_info_t info;
+paddr_t gpa;
 uint8_t fsc = hsr.dabt.dfsc & ~FSC_LL_MASK;
 mfn_t mfn;
 
@@ -2013,15 +2018,13 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
 if ( dabt.eat )
 return __do_trap_serror(regs, true);
 
-info.dabt = dabt;
-
 gva = get_hfar(true /* is_data */);
 
 if ( hpfar_is_valid(dabt.s1ptw, fsc) )
-info.gpa = get_faulting_ipa(gva);
+gpa = get_faulting_ipa(gva);
 else
 {
-rc = gva_to_ipa(gva, &info.gpa, GV2M_READ);
+rc = gva_to_ipa(gva, &gpa, GV2M_READ);
 /*
  * We may not be able to translate because someone is
  * playing with the Stage-2 page table of the domain.
@@ -2042,7 +2045,7 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
 .kind = dabt.s1ptw ? npfec_kind_in_gpt : npfec_kind_with_gla
 };
 
-p2m_mem_access_check(info.gpa, gva, npfec);
+p2m_mem_access_check(gpa, gva, npfec);
 /*
  * The only way to get here right now is because of mem_access,
  * thus reinjecting the exception to the guest is never required.
@@ -2054,7 +2057,7 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
  * Attempt first to emulate the MMIO as the data abort will
  * likely happen in an emulated region.
  */
-if ( try_handle_mmio(regs, &info) )
+if ( try_handle_mmio(regs, hsr, gpa) )
 {
 advance_pc(regs, hsr);
 return;
@@ -2065,11 +2068,11 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
  * with the Stage-2 page table. Walk the Stage-2 PT to check
  * if the entry exists. If it's the case, return to the guest
  */
-mfn = gfn_to_mfn(current->domain, gaddr_to_gfn(info.gpa));
+mfn = gfn_to_mfn(current->domain, gaddr_to_gfn(gpa));
 if ( !mfn_eq(mfn, INVALID_MFN) )
 return;
 
-if ( try_map_mmio(gaddr_to_gfn(info.gpa)) )
+if ( try_map_mmio(gaddr_to_gfn(gpa)) )
 return;
 
 break;
@@ -2079,7 +2082,7 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
 }
 
 gdprintk(XENLOG_DEBUG, "HSR=0x%x pc=%#"PRIregister" gva=%#"PRIvaddr
- " gpa=%#"PRIpaddr"\n", hsr.bits, regs->pc, gva, info.gpa);
+ " gpa=%#"PRIpaddr"\n", hsr.bits, regs->pc, gva, gpa);
 inject_dabt_exception(regs, gva, hsr.len);
 }
 
-- 
2.11.0



[Xen-devel] [PATCH for-next 04/16] xen/arm: Extend copy_to_guest to support zeroing guest VA and use it

2017-11-23 Thread Julien Grall
The function copy_to_guest can easily be extended to support zeroing a
guest VA. To avoid using a new bit, a NULL buffer (i.e. buf == NULL) is
taken to mean that the guest memory should be zeroed.

Lastly, reimplement raw_clear_guest using copy_to_guest.

Signed-off-by: Julien Grall 
---
 xen/arch/arm/guestcopy.c | 41 +++--
 1 file changed, 11 insertions(+), 30 deletions(-)

diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
index 1ffa717ca6..3aaa80859e 100644
--- a/xen/arch/arm/guestcopy.c
+++ b/xen/arch/arm/guestcopy.c
@@ -29,7 +29,16 @@ static unsigned long copy_guest(void *buf, paddr_t addr, unsigned int len,
 p = __map_domain_page(page);
 p += offset;
 if ( flags & COPY_to_guest )
-memcpy(p, buf, size);
+{
+/*
+ * buf will be NULL when the caller request to zero the
+ * guest memory.
+ */
+if ( buf )
+memcpy(p, buf, size);
+else
+memset(p, 0, size);
+}
 else
 memcpy(buf, p, size);
 
@@ -65,35 +74,7 @@ unsigned long raw_copy_to_guest_flush_dcache(void *to, const void *from,
 
 unsigned long raw_clear_guest(void *to, unsigned len)
 {
-/* XXX needs to handle faults */
-unsigned offset = (vaddr_t)to & ~PAGE_MASK;
-
-while ( len )
-{
-void *p;
-unsigned size = min(len, (unsigned)PAGE_SIZE - offset);
-struct page_info *page;
-
-page = get_page_from_gva(current, (vaddr_t) to, GV2M_WRITE);
-if ( page == NULL )
-return len;
-
-p = __map_domain_page(page);
-p += offset;
-memset(p, 0x00, size);
-
-unmap_domain_page(p - offset);
-put_page(page);
-len -= size;
-to += size;
-/*
- * After the first iteration, guest virtual address is correctly
- * aligned to PAGE_SIZE.
- */
-offset = 0;
-}
-
-return 0;
+return copy_guest(NULL, (unsigned long)to, len, COPY_to_guest);
 }
 
unsigned long raw_copy_from_guest(void *to, const void __user *from, unsigned len)
-- 
2.11.0



[Xen-devel] [PATCH for-next 08/16] xen/arm: kernel: Rework kernel_zimage_load to use the generic copy helper

2017-11-23 Thread Julien Grall
The function kernel_zimage_load deals with IPAs but uses gvirt_to_maddr to
do the translation. This currently works fine because the stage-1 MMU is
disabled.

Furthermore, the function implements its own copy-to-guest, resulting in
code duplication and making it more difficult to update the page-table
logic (such as support for Populate On Demand).

The new copy_to_guest_phys_flush_dcache can be used here by temporarily
mapping the full kernel into the virtual address space.

Signed-off-by: Julien Grall 
---
 xen/arch/arm/domain_build.c |  1 +
 xen/arch/arm/kernel.c   | 33 -
 xen/arch/arm/kernel.h   |  2 ++
 3 files changed, 15 insertions(+), 21 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index c74f4dd69d..3f87bf2051 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -2133,6 +2133,7 @@ int construct_dom0(struct domain *d)
 d->max_pages = ~0U;
 
 kinfo.unassigned_mem = dom0_mem;
+kinfo.d = d;
 
 rc = kernel_probe(&kinfo);
 if ( rc < 0 )
diff --git a/xen/arch/arm/kernel.c b/xen/arch/arm/kernel.c
index a6c6413712..2fb0b9684d 100644
--- a/xen/arch/arm/kernel.c
+++ b/xen/arch/arm/kernel.c
@@ -15,6 +15,8 @@
 #include 
 #include 
 
+#include 
+
 #include "kernel.h"
 
 #define UIMAGE_MAGIC  0x27051956
@@ -157,7 +159,8 @@ static void kernel_zimage_load(struct kernel_info *info)
 paddr_t load_addr = kernel_zimage_place(info);
 paddr_t paddr = info->zimage.kernel_addr;
 paddr_t len = info->zimage.len;
-unsigned long offs;
+void *kernel;
+int rc;
 
 info->entry = load_addr;
 
@@ -165,29 +168,17 @@ static void kernel_zimage_load(struct kernel_info *info)
 
 printk("Loading zImage from %"PRIpaddr" to %"PRIpaddr"-%"PRIpaddr"\n",
paddr, load_addr, load_addr + len);
-for ( offs = 0; offs < len; )
-{
-uint64_t par;
-paddr_t s, l, ma = 0;
-void *dst;
-
-s = offs & ~PAGE_MASK;
-l = min(PAGE_SIZE - s, len);
-
-par = gvirt_to_maddr(load_addr + offs, &ma, GV2M_WRITE);
-if ( par )
-{
-panic("Unable to map translate guest address");
-return;
-}
 
-dst = map_domain_page(maddr_to_mfn(ma));
+kernel = ioremap_wc(paddr, len);
+if ( !kernel )
+panic("Unable to map the hwdom kernel");
 
-copy_from_paddr(dst + s, paddr + offs, l);
+rc = copy_to_guest_phys_flush_dcache(info->d, load_addr,
+ kernel, len);
+if ( rc != 0 )
+panic("Unable to copy the kernel in the hwdom memory");
 
-unmap_domain_page(dst);
-offs += l;
-}
+iounmap(kernel);
 }
 
 /*
diff --git a/xen/arch/arm/kernel.h b/xen/arch/arm/kernel.h
index c1b07d4f7b..6d695097b5 100644
--- a/xen/arch/arm/kernel.h
+++ b/xen/arch/arm/kernel.h
@@ -15,6 +15,8 @@ struct kernel_info {
 enum domain_type type;
 #endif
 
+struct domain *d;
+
 void *fdt; /* flat device tree */
 paddr_t unassigned_mem; /* RAM not (yet) assigned to a bank */
 struct meminfo mem;
-- 
2.11.0



[Xen-devel] [PATCH for-next 16/16] xen/arm: traps: Merge do_trap_instr_abort_guest and do_trap_data_abort_guest

2017-11-23 Thread Julien Grall
The two helpers do_trap_instr_abort_guest and do_trap_data_abort_guest
are used to handle stage-2 aborts. While the former only handles prefetch
aborts and the latter only data aborts, they are very similar and do not
warrant separate helpers.

Merging the two will also make stage-2 abort handling easier to maintain.
So consolidate the two helpers into a new helper, do_trap_stage2_abort.

Signed-off-by: Julien Grall 
---
 xen/arch/arm/traps.c | 133 ---
 1 file changed, 41 insertions(+), 92 deletions(-)

diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index a68e01b457..b83a2d9244 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -1862,79 +1862,6 @@ static inline bool hpfar_is_valid(bool s1ptw, uint8_t fsc)
 return s1ptw || (fsc == FSC_FLT_TRANS && !check_workaround_834220());
 }
 
-static void do_trap_instr_abort_guest(struct cpu_user_regs *regs,
-  const union hsr hsr)
-{
-int rc;
-register_t gva;
-uint8_t fsc = hsr.iabt.ifsc & ~FSC_LL_MASK;
-paddr_t gpa;
-mfn_t mfn;
-
-gva = get_hfar(false /* is_data */);
-
-/*
- * If this bit has been set, it means that this instruction abort is caused
- * by a guest external abort. We can handle this instruction abort as guest
- * SError.
- */
-if ( hsr.iabt.eat )
-return __do_trap_serror(regs, true);
-
-
-if ( hpfar_is_valid(hsr.iabt.s1ptw, fsc) )
-gpa = get_faulting_ipa(gva);
-else
-{
-/*
- * Flush the TLB to make sure the DTLB is clear before
- * doing GVA->IPA translation. If we got here because of
- * an entry only present in the ITLB, this translation may
- * still be inaccurate.
- */
-flush_tlb_local();
-
-/*
- * We may not be able to translate because someone is
- * playing with the Stage-2 page table of the domain.
- * Return to the guest.
- */
-rc = gva_to_ipa(gva, &gpa, GV2M_READ);
-if ( rc == -EFAULT )
-return; /* Try again */
-}
-
-switch ( fsc )
-{
-case FSC_FLT_PERM:
-{
-const struct npfec npfec = {
-.insn_fetch = 1,
-.gla_valid = 1,
-.kind = hsr.iabt.s1ptw ? npfec_kind_in_gpt : npfec_kind_with_gla
-};
-
-p2m_mem_access_check(gpa, gva, npfec);
-/*
- * The only way to get here right now is because of mem_access,
- * thus reinjecting the exception to the guest is never required.
- */
-return;
-}
-case FSC_FLT_TRANS:
-/*
- * The PT walk may have failed because someone was playing
- * with the Stage-2 page table. Walk the Stage-2 PT to check
- * if the entry exists. If it's the case, return to the guest
- */
-mfn = gfn_to_mfn(current->domain, _gfn(paddr_to_pfn(gpa)));
-if ( !mfn_eq(mfn, INVALID_MFN) )
-return;
-}
-
-inject_iabt_exception(regs, gva, hsr.len);
-}
-
 static bool try_handle_mmio(struct cpu_user_regs *regs,
 const union hsr hsr,
 paddr_t gpa)
@@ -1946,6 +1873,8 @@ static bool try_handle_mmio(struct cpu_user_regs *regs,
 };
 int rc;
 
+ASSERT(hsr.ec == HSR_EC_DATA_ABORT_LOWER_EL);
+
 /* stage-1 page table should never live in an emulated MMIO region */
 if ( dabt.s1ptw )
 return false;
@@ -2001,29 +1930,43 @@ static bool try_map_mmio(gfn_t gfn)
 return !map_regions_p2mt(d, gfn, 1, mfn, p2m_mmio_direct_c);
 }
 
-static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
- const union hsr hsr)
+static void do_trap_stage2_abort_guest(struct cpu_user_regs *regs,
+   const union hsr hsr)
 {
-const struct hsr_dabt dabt = hsr.dabt;
+/*
+ * The encoding of hsr_iabt is a subset of hsr_dabt. So use
+ * hsr_dabt to represent an abort fault.
+ */
+const struct hsr_xabt xabt = hsr.xabt;
 int rc;
 vaddr_t gva;
 paddr_t gpa;
-uint8_t fsc = hsr.dabt.dfsc & ~FSC_LL_MASK;
+uint8_t fsc = xabt.fsc & ~FSC_LL_MASK;
 mfn_t mfn;
+bool is_data = (hsr.ec == HSR_EC_DATA_ABORT_LOWER_EL);
 
 /*
- * If this bit has been set, it means that this data abort is caused
- * by a guest external abort. We treat this data abort as guest SError.
+ * If this bit has been set, it means that this stage-2 abort is caused
+ * by a guest external abort. We treat this stage-2 abort as guest SError.
  */
-if ( dabt.eat )
+if ( xabt.eat )
 return __do_trap_serror(regs, true);
 
-gva = get_hfar(true /* is_data */);
+gva = get_hfar(is_data);
 
-if ( hpfar_is_valid(dabt.s1ptw, fsc) )
+if ( hpfar_is_valid(xabt.s1ptw, fsc) )
 gpa = get_faulting_ipa(gva);
 else
 {
+/*
+ * Flush the TLB 

[Xen-devel] [PATCH for-next 14/16] xen/arm: traps: Remove the field gva from mmio_info_t

2017-11-23 Thread Julien Grall
mmio_info_t is used to gather the information needed to emulate a region.
The guest virtual address is unlikely to be useful and is not currently
used, so remove the field gva from mmio_info_t and replace it with a
local variable.

Signed-off-by: Julien Grall 
---
 xen/arch/arm/traps.c   | 13 +++--
 xen/include/asm-arm/mmio.h |  1 -
 2 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index f6f6de3691..e30dd9b7e2 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -2001,6 +2001,7 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
 {
 const struct hsr_dabt dabt = hsr.dabt;
 int rc;
+vaddr_t gva;
 mmio_info_t info;
 uint8_t fsc = hsr.dabt.dfsc & ~FSC_LL_MASK;
 mfn_t mfn;
@@ -2014,13 +2015,13 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
 
 info.dabt = dabt;
 
-info.gva = get_hfar(true /* is_data */);
+gva = get_hfar(true /* is_data */);
 
 if ( hpfar_is_valid(dabt.s1ptw, fsc) )
-info.gpa = get_faulting_ipa(info.gva);
+info.gpa = get_faulting_ipa(gva);
 else
 {
-rc = gva_to_ipa(info.gva, &info.gpa, GV2M_READ);
+rc = gva_to_ipa(gva, &info.gpa, GV2M_READ);
 /*
  * We may not be able to translate because someone is
  * playing with the Stage-2 page table of the domain.
@@ -2041,7 +2042,7 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
 .kind = dabt.s1ptw ? npfec_kind_in_gpt : npfec_kind_with_gla
 };
 
-p2m_mem_access_check(info.gpa, info.gva, npfec);
+p2m_mem_access_check(info.gpa, gva, npfec);
 /*
  * The only way to get here right now is because of mem_access,
  * thus reinjecting the exception to the guest is never required.
@@ -2078,8 +2079,8 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
 }
 
 gdprintk(XENLOG_DEBUG, "HSR=0x%x pc=%#"PRIregister" gva=%#"PRIvaddr
- " gpa=%#"PRIpaddr"\n", hsr.bits, regs->pc, info.gva, info.gpa);
-inject_dabt_exception(regs, info.gva, hsr.len);
+ " gpa=%#"PRIpaddr"\n", hsr.bits, regs->pc, gva, info.gpa);
+inject_dabt_exception(regs, gva, hsr.len);
 }
 
 static void enter_hypervisor_head(struct cpu_user_regs *regs)
diff --git a/xen/include/asm-arm/mmio.h b/xen/include/asm-arm/mmio.h
index c620eed4cd..37e2b7a707 100644
--- a/xen/include/asm-arm/mmio.h
+++ b/xen/include/asm-arm/mmio.h
@@ -29,7 +29,6 @@
 typedef struct
 {
 struct hsr_dabt dabt;
-vaddr_t gva;
 paddr_t gpa;
 } mmio_info_t;
 
-- 
2.11.0



Re: [Xen-devel] [PATCH] x86/HVM: fix hvmemul_rep_outs_set_context()

2017-11-23 Thread Andrew Cooper
On 23/11/17 15:09, Jan Beulich wrote:
> There were two issues with this function: Its use of
> hvmemul_do_pio_buffer() was wrong (the function deals only with
> individual port accesses, not repeated ones, i.e. passing it
> "*reps * bytes_per_rep" does not have the intended effect). And it
> could have processed a larger set of operations in one go than was
> probably intended (limited just by the size that xmalloc() can hand
> back).
>
> By converting to proper use of hvmemul_do_pio_buffer(), no intermediate
> buffer is needed at all. As a result a preemption check is being added.
>
> Also drop unused parameters from the function.
>
> Signed-off-by: Jan Beulich 

While this does look like a real bug, and a real bugfix, it isn't the issue
I'm hitting.  I've distilled the repro scenario down to a tiny XTF test,
which is just a `rep outsb` with a buffer which crosses a page boundary.

The results are reliably:

(d1) --- Xen Test Framework ---
(d1) Environment: HVM 32bit (No paging)
(d1) Test hvm-print
(d1) String crossing a page boundary
(XEN) MMIO emulation failed (1): d1v0 32bit @ 0010:001032b0 -> 5e c3 8d b4 26 00 00 00 00 8d bc 27 00 00 00 00
(d1) Test result: SUCCESS

The Port IO hits a retry because of hitting the page boundary, and the
retry logic succeeds, as evidenced by all data reaching hvm_print_line().
Somewhere, however, the PIO turns into MMIO, and a failure is reported
after the PIO completed successfully.  %rip in the failure message
points after the `rep outsb`, rather than at it.

If anyone has any ideas, I'm all ears.  If not, I will try to find some
time to look deeper into the issue.

~Andrew
From 9141a36374f52434a291e3be41bd259cfb9bda72 Mon Sep 17 00:00:00 2001
From: Andrew Cooper 
Date: Thu, 23 Nov 2017 18:31:40 +
Subject: [PATCH] MMIO failure trigger

---
 tests/hvm-print/Makefile |  9 +
 tests/hvm-print/main.c   | 40 
 2 files changed, 49 insertions(+)
 create mode 100644 tests/hvm-print/Makefile
 create mode 100644 tests/hvm-print/main.c

diff --git a/tests/hvm-print/Makefile b/tests/hvm-print/Makefile
new file mode 100644
index 000..c70bede
--- /dev/null
+++ b/tests/hvm-print/Makefile
@@ -0,0 +1,9 @@
+include $(ROOT)/build/common.mk
+
+NAME  := hvm-print
+CATEGORY  := utility
+TEST-ENVS := hvm32
+
+obj-perenv += main.o
+
+include $(ROOT)/build/gen.mk
diff --git a/tests/hvm-print/main.c b/tests/hvm-print/main.c
new file mode 100644
index 000..882b716
--- /dev/null
+++ b/tests/hvm-print/main.c
@@ -0,0 +1,40 @@
+/**
+ * @file tests/hvm-print/main.c
+ * @ref test-hvm-print
+ *
+ * @page test-hvm-print hvm-print
+ *
+ * @todo Docs for test-hvm-print
+ *
+ * @see tests/hvm-print/main.c
+ */
+#include 
+
+const char test_title[] = "Test hvm-print";
+
+static char buf[2 * PAGE_SIZE] __page_aligned_bss;
+
+void test_main(void)
+{
+char *ptr = &buf[4090];
+size_t len;
+
+strcpy(ptr, "String crossing a page boundary\n");
+len = strlen(ptr);
+
+asm volatile("rep; outsb"
+ : "+S" (ptr), "+c" (len)
+ : "d" (0xe9));
+
+xtf_success(NULL);
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
2.1.4


Re: [Xen-devel] [PATCH for-next 07/16] xen/arm: Introduce copy_to_guest_phys_flush_dcache

2017-11-23 Thread Andrew Cooper
On 23/11/17 18:32, Julien Grall wrote:
> This new function will be used in a follow-up patch to copy data to the guest
> using the IPA (aka guest physical address) and then clean the cache.
>
> Signed-off-by: Julien Grall 
> ---
>  xen/arch/arm/guestcopy.c   | 10 ++
>  xen/include/asm-arm/guest_access.h |  6 ++
>  2 files changed, 16 insertions(+)
>
> diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
> index be53bee559..7958663970 100644
> --- a/xen/arch/arm/guestcopy.c
> +++ b/xen/arch/arm/guestcopy.c
> @@ -110,6 +110,16 @@ unsigned long raw_copy_from_guest(void *to, const void __user *from, unsigned le
>COPY_from_guest | COPY_linear);
>  }
>  
> +unsigned long copy_to_guest_phys_flush_dcache(struct domain *d,
> +  paddr_t gpa,
> +  void *buf,
> +  unsigned int len)
> +{
> +/* P2M is shared between all vCPUs, so the vCPU used does not matter. */

Be very careful with this line of thinking.  It only works after
DOMCTL_max_vcpus has succeeded, and before that point, it is a latent
NULL pointer dereference.

Also, what about vcpus configured with alternative views?

~Andrew

> +return copy_guest(buf, gpa, len, d->vcpu[0],
> +  COPY_to_guest | COPY_ipa | COPY_flush_dcache);
> +}
> +
>  int access_guest_memory_by_ipa(struct domain *d, paddr_t gpa, void *buf,
> uint32_t size, bool is_write)
>  {
> diff --git a/xen/include/asm-arm/guest_access.h b/xen/include/asm-arm/guest_access.h
> index 6796801cfe..224d2a033b 100644
> --- a/xen/include/asm-arm/guest_access.h
> +++ b/xen/include/asm-arm/guest_access.h
> @@ -11,6 +11,12 @@ unsigned long raw_copy_to_guest_flush_dcache(void *to, const void *from,
>  unsigned long raw_copy_from_guest(void *to, const void *from, unsigned len);
>  unsigned long raw_clear_guest(void *to, unsigned len);
>  
> +/* Copy data to guest physical address, then clean the region. */
> +unsigned long copy_to_guest_phys_flush_dcache(struct domain *d,
> +  paddr_t phys,
> +  void *buf,
> +  unsigned int len);
> +
>  int access_guest_memory_by_ipa(struct domain *d, paddr_t ipa, void *buf,
> uint32_t size, bool is_write);
>  



Re: [Xen-devel] [PATCH for-next 07/16] xen/arm: Introduce copy_to_guest_phys_flush_dcache

2017-11-23 Thread Julien Grall

Hi Andrew,

On 23/11/17 18:49, Andrew Cooper wrote:

On 23/11/17 18:32, Julien Grall wrote:

This new function will be used in a follow-up patch to copy data to the guest
using the IPA (aka guest physical address) and then clean the cache.

Signed-off-by: Julien Grall 
---
  xen/arch/arm/guestcopy.c   | 10 ++
  xen/include/asm-arm/guest_access.h |  6 ++
  2 files changed, 16 insertions(+)

diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
index be53bee559..7958663970 100644
--- a/xen/arch/arm/guestcopy.c
+++ b/xen/arch/arm/guestcopy.c
@@ -110,6 +110,16 @@ unsigned long raw_copy_from_guest(void *to, const void __user *from, unsigned le
COPY_from_guest | COPY_linear);
  }
  
+unsigned long copy_to_guest_phys_flush_dcache(struct domain *d,

+  paddr_t gpa,
+  void *buf,
+  unsigned int len)
+{
+/* P2M is shared between all vCPUs, so the vCPU used does not matter. */


Be very careful with this line of thinking.  It only works after
DOMCTL_max_vcpus has succeeded, and before that point, it is a latent
NULL pointer dereference.


I really don't expect that function to be used before DOMCTL_max_vcpus is
set. It is only used for hardware emulation or for Xen loading an image
into the hardware domain memory. I could add a check on d->vcpus to be safe.




Also, what about vcpus configured with alternative views?


It is not important because the underlying call is get_page_from_gfn,
which does not care about the alternative view (that function takes a
domain as parameter). I can update the comment.


Cheers,

--
Julien Grall


[Xen-devel] live migration is not aborted in xen-4.10

2017-11-23 Thread Olaf Hering
If a custom precopy_policy, called by
tools/libxc/xc_sr_save.c:send_memory_live, returns XGS_POLICY_ABORT, then
the migration is not aborted as expected. Instead the domU is suspended
and transferred. How is the caller supposed to stop the migration and
clean up?

Olaf



[Xen-devel] [examine test] 116480: ALL FAIL

2017-11-23 Thread osstest service owner
flight 116480 examine real [real]
http://logs.test-lab.xenproject.org/osstest/logs/116480/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 examine-pinot0   2 hosts-allocate broken REGR. vs. 115400
 examine-arndale-bluewater 2 hosts-allocate broken REGR. vs. 115400
 examine-rimava0   2 hosts-allocate broken REGR. vs. 115400
 examine-huxelrebe0   2 hosts-allocate broken REGR. vs. 115400
 examine-nocera1   2 hosts-allocate broken REGR. vs. 115400
 examine-nocera0   2 hosts-allocate broken REGR. vs. 115400
 examine-godello1  2 hosts-allocate broken REGR. vs. 115400
 examine-godello0  2 hosts-allocate broken REGR. vs. 115400
 examine-pinot1   2 hosts-allocate broken REGR. vs. 115400
 examine-nobling0  2 hosts-allocate broken REGR. vs. 115400
 examine-baroque1  2 hosts-allocate broken REGR. vs. 115400
 examine-elbling1  2 hosts-allocate broken REGR. vs. 115400
 examine-merlot1   2 hosts-allocate broken REGR. vs. 115400
 examine-chardonnay0   2 hosts-allocate broken REGR. vs. 115400
 examine-fiano02 hosts-allocate broken REGR. vs. 115400
 examine-merlot0   2 hosts-allocate broken REGR. vs. 115400
 examine-fiano12 hosts-allocate broken REGR. vs. 115400
 examine-cubietruck-picasso   2 hosts-allocate broken REGR. vs. 115400
 examine-cubietruck-braque 2 hosts-allocate broken REGR. vs. 115400
 examine-rimava1   2 hosts-allocate broken REGR. vs. 115400
 examine-chardonnay1   2 hosts-allocate broken REGR. vs. 115400
 examine-arndale-lakeside  2 hosts-allocate broken REGR. vs. 115400
 examine-italia0   2 hosts-allocate broken REGR. vs. 115400
 examine-cubietruck-metzinger  2 hosts-allocate broken REGR. vs. 115400
 examine-baroque0  2 hosts-allocate broken REGR. vs. 115400
 examine-nobling1  2 hosts-allocate broken REGR. vs. 115400
 examine-cubietruck-gleizes2 hosts-allocate broken REGR. vs. 115400
 examine-arndale-metrocentre   2 hosts-allocate broken REGR. vs. 115400
 examine-arndale-westfield 2 hosts-allocate broken REGR. vs. 115400
 examine-huxelrebe12 hosts-allocate broken REGR. vs. 115400
 examine-italia1   2 hosts-allocate broken REGR. vs. 115400

Tests which did not succeed, but are not blocking:
 examine-elbling0  2 hosts-allocate  broken like 115400

baseline version:
 flight   115400

jobs:
 examine-baroque0 fail
 examine-baroque1 fail
 examine-arndale-bluewaterfail
 examine-cubietruck-braquefail
 examine-chardonnay0  fail
 examine-chardonnay1  fail
 examine-elbling0 fail
 examine-elbling1 fail
 examine-fiano0   fail
 examine-fiano1   fail
 examine-cubietruck-gleizes   fail
 examine-godello0 fail
 examine-godello1 fail
 examine-huxelrebe0   fail
 examine-huxelrebe1   fail
 examine-italia0  fail
 examine-italia1  fail
 examine-arndale-lakeside fail
 examine-merlot0  fail
 examine-merlot1  fail
 examine-arndale-metrocentre  fail
 examine-cubietruck-metzinger fail
 examine-nobling0 fail
 examine-nobling1 fail
 examine-nocera0  fail
 examine-nocera1  fail
 examine-cubietruck-picasso   fail
 examine-pinot0   fail
 examine-pinot1   fail
 examine-rimava0 

[Xen-devel] [xen-unstable-smoke test] 116483: tolerable all pass - PUSHED

2017-11-23 Thread osstest service owner
flight 116483 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/116483/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  14 saverestore-support-checkfail   never pass

version targeted for testing:
 xen  bf87b7f7d91a25404216e0a0f3e628ce9bf1f82e
baseline version:
 xen  79136f2673b52db7b4bbd6cb5da194f2f4c39a9d

Last test of basis   116472  2017-11-23 11:20:16 Z0 days
Testing same since   116483  2017-11-23 18:03:11 Z0 days1 attempts


People who touched revisions under test:
  George Dunlap 
  Ian Jackson 
  Jan Beulich 
  Julien Grall 

jobs:
 build-amd64  pass
 build-armhf  pass
 build-amd64-libvirt  pass
 test-armhf-armhf-xl  pass
 test-amd64-amd64-xl-qemuu-debianhvm-i386 pass
 test-amd64-amd64-libvirt pass



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To osst...@xenbits.xen.org:/home/xen/git/xen.git
   79136f2..bf87b7f  bf87b7f7d91a25404216e0a0f3e628ce9bf1f82e -> smoke

___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

[Xen-devel] [xen-unstable baseline-only test] 72485: regressions - FAIL

2017-11-23 Thread Platform Team regression test user
This run is configured for baseline tests only.

flight 72485 xen-unstable real [real]
http://osstest.xs.citrite.net/~osstest/testlogs/logs/72485/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-pvhv2-amd  7 xen-boot fail REGR. vs. 72455
 test-amd64-amd64-xl-qemuu-ws16-amd64  7 xen-boot  fail REGR. vs. 72455
 test-armhf-armhf-xl-xsm  12 guest-start   fail REGR. vs. 72455
 test-amd64-amd64-xl-qemut-ws16-amd64 16 guest-localmigrate/x10 fail REGR. vs. 
72455

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-intel 17 debian-hvm-install/l1/l2 fail blocked 
in 72455
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-installfail like 72455
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install fail like 72455
 test-armhf-armhf-libvirt-xsm 14 saverestore-support-checkfail   like 72455
 test-armhf-armhf-libvirt 14 saverestore-support-checkfail   like 72455
 test-armhf-armhf-libvirt-raw 13 saverestore-support-checkfail   like 72455
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop  fail like 72455
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop  fail like 72455
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop fail like 72455
 test-amd64-amd64-examine  4 memdisk-try-append   fail   never pass
 test-amd64-amd64-xl-pvhv2-intel 12 guest-start fail never pass
 test-armhf-armhf-xl-midway   13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-midway   14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-checkfail  never pass
 test-armhf-armhf-xl-rtds 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  14 saverestore-support-checkfail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 10 windows-install fail never pass
 test-amd64-i386-libvirt  13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-qcow2 12 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-vhd  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  13 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-checkfail   never pass
 test-amd64-i386-xl-qemut-win10-i386 17 guest-stop  fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop  fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop fail never pass
 test-amd64-amd64-xl-qemut-win10-i386 17 guest-stop fail never pass

version targeted for testing:
 xen  d2f86bf604698806d311cc251c1b66fbb752673c
baseline version:
 xen  b9ee1fd7b98064cf27d0f8f1adf1f5359b72c97f

Last test of basis    72455  2017-11-16 02:21:58 Z    7 days
Testing same since    72485  2017-11-23 12:16:24 Z    0 days    1 attempts


People who touched revisions under test:
  Adrian Pop 
  Andrew Cooper 
  Jan Beulich 
  Julien Grall 
  Stefano Stabellini 

jobs:
 build-amd64-xsm  pass
 build-armhf-xsm  pass
 build-i386-xsm   pass
 build-amd64-xtf  pass
 build-amd64  pass
 build-armhf  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-armhf-libvirt 

[Xen-devel] [xen-4.9-testing test] 116463: tolerable FAIL - PUSHED

2017-11-23 Thread osstest service owner
flight 116463 xen-4.9-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/116463/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-ws16-amd64 16 guest-localmigrate/x10 fail blocked in 
116234
 test-amd64-amd64-xl-qemuu-win7-amd64 18 guest-start/win.repeat fail blocked in 
116234
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stopfail like 116220
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop fail like 116220
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore.2fail like 116234
 test-armhf-armhf-xl-rtds 16 guest-start/debian.repeatfail  like 116234
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop fail like 116234
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stopfail like 116234
 test-amd64-amd64-xl-rtds 10 debian-install   fail  like 116234
 test-amd64-i386-xl-qemut-ws16-amd64 16 guest-localmigrate/x10 fail like 116234
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt  13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-i386-libvirt-qcow2 12 migrate-support-checkfail  never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-checkfail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-rtds 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl  13 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt 14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt-xsm 14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-checkfail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-checkfail never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-checkfail  never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  13 saverestore-support-checkfail   never pass
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install fail never pass
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-installfail never pass
 test-amd64-amd64-xl-qemut-win10-i386 10 windows-installfail never pass
 test-amd64-i386-xl-qemut-win10-i386 10 windows-install fail never pass

version targeted for testing:
 xen  ae34ab8c5d2e977f6d8081c2ce4494875232f563
baseline version:
 xen  d6ce860bbdf9dbdc88e4f2692e16776a622b2949

Last test of basis   116234  2017-11-16 19:03:58 Z    7 days
Testing same since   116378  2017-11-20 15:15:44 Z    3 days    4 attempts


People who touched revisions under test:
  Jan Beulich 

jobs:
 build-amd64-xsm  pass
 build-armhf-xsm  pass
 build-i386-xsm   pass
 build-amd64-xtf  pass
 build-amd64  pass
 build-armhf  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-armhf-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-prev   

Re: [Xen-devel] [PATCH] tools: fix description of Linux ioctl_evtchn_notify

2017-11-23 Thread Wei Liu
On Thu, Nov 23, 2017 at 05:16:51PM +, Jonathan Davies wrote:
> Signed-off-by: Jonathan Davies 

Acked-by: Wei Liu 


[Xen-devel] [libvirt test] 116465: tolerable all pass - PUSHED

2017-11-23 Thread osstest service owner
flight 116465 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/116465/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt 14 saverestore-support-checkfail  like 116430
 test-armhf-armhf-libvirt-xsm 14 saverestore-support-checkfail  like 116430
 test-armhf-armhf-libvirt-raw 13 saverestore-support-checkfail  like 116430
 test-amd64-amd64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt  13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-i386-libvirt-qcow2 12 migrate-support-checkfail  never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt 13 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-checkfail   never pass

version targeted for testing:
 libvirt  9baf50c414ba50f71103d416c16b8cc4e7b1409b
baseline version:
 libvirt  a785186446de785d1b8b5e1b59973d6e0d7ecd17

Last test of basis   116430  2017-11-22 04:22:15 Z    1 days
Testing same since   116465  2017-11-23 04:20:13 Z    0 days    1 attempts


People who touched revisions under test:
  Erik Skultety 
  Martin Kletzander 
  Peter Krempa 
  ZhiPeng Lu 

jobs:
 build-amd64-xsm  pass
 build-armhf-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-armhf  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-armhf-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-pvopspass
 build-armhf-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm   pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsmpass
 test-amd64-amd64-libvirt-xsm pass
 test-armhf-armhf-libvirt-xsm pass
 test-amd64-i386-libvirt-xsm  pass
 test-amd64-amd64-libvirt pass
 test-armhf-armhf-libvirt pass
 test-amd64-i386-libvirt  pass
 test-amd64-amd64-libvirt-pairpass
 test-amd64-i386-libvirt-pair pass
 test-amd64-i386-libvirt-qcow2pass
 test-armhf-armhf-libvirt-raw pass
 test-amd64-amd64-libvirt-vhd pass





Pushing revision :

To osst...@xenbits.xen.org:/home/xen/git/libvirt.git
   a785186..9baf50c  9baf50c414ba50f71103d416c16b8cc4e7b1409b -> 
xen-tested-master


[Xen-devel] [linux-linus test] 116461: regressions - FAIL

2017-11-23 Thread osstest service owner
flight 116461 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/116461/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-pygrub   7 xen-boot fail REGR. vs. 115643
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-boot fail REGR. vs. 
115643
 test-amd64-amd64-xl-qemuu-win10-i386  7 xen-boot fail REGR. vs. 115643
 test-amd64-i386-freebsd10-amd64  7 xen-boot  fail REGR. vs. 115643
 test-amd64-amd64-xl-pvhv2-amd  7 xen-bootfail REGR. vs. 115643
 test-amd64-i386-freebsd10-i386  7 xen-boot   fail REGR. vs. 115643
 test-amd64-i386-pair 10 xen-boot/src_hostfail REGR. vs. 115643
 test-amd64-i386-pair 11 xen-boot/dst_hostfail REGR. vs. 115643
 test-amd64-i386-xl-xsm7 xen-boot fail REGR. vs. 115643
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-boot fail REGR. vs. 115643
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-boot fail REGR. vs. 
115643
 test-amd64-i386-libvirt-xsm   7 xen-boot fail REGR. vs. 115643
 test-amd64-amd64-xl-qcow2 7 xen-boot fail REGR. vs. 115643
 test-amd64-i386-xl7 xen-boot fail REGR. vs. 115643
 test-amd64-amd64-pair10 xen-boot/src_hostfail REGR. vs. 115643
 test-amd64-amd64-pair11 xen-boot/dst_hostfail REGR. vs. 115643
 test-amd64-i386-rumprun-i386  7 xen-boot fail REGR. vs. 115643
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-boot fail REGR. vs. 
115643
 test-amd64-i386-xl-raw7 xen-boot fail REGR. vs. 115643
 test-amd64-i386-libvirt-qcow2  7 xen-bootfail REGR. vs. 115643
 test-amd64-amd64-xl-multivcpu  7 xen-bootfail REGR. vs. 115643
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-boot fail REGR. vs. 115643
 test-amd64-i386-examine   8 reboot   fail REGR. vs. 115643
 test-amd64-i386-xl-qemut-debianhvm-amd64-xsm  7 xen-boot fail REGR. vs. 115643
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-boot   fail REGR. vs. 115643
 test-amd64-amd64-xl-credit2   7 xen-boot fail REGR. vs. 115643
 test-amd64-amd64-i386-pvgrub  7 xen-boot fail REGR. vs. 115643
 test-amd64-amd64-rumprun-amd64  7 xen-boot   fail REGR. vs. 115643
 test-amd64-amd64-xl-xsm   7 xen-boot fail REGR. vs. 115643
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsm 7 xen-boot fail REGR. vs. 115643
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-boot  fail REGR. vs. 115643
 test-amd64-amd64-xl-qemut-debianhvm-amd64-xsm 7 xen-boot fail REGR. vs. 115643
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-boot   fail REGR. vs. 115643
 test-amd64-amd64-xl-qemut-win7-amd64  7 xen-boot fail REGR. vs. 115643

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-rtds  7 xen-boot   fail pass in 116433
 test-amd64-amd64-xl-qemuu-ovmf-amd64  7 xen-boot   fail pass in 116433

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-xsm 14 saverestore-support-checkfail  like 115643
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stopfail like 115643
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop fail like 115643
 test-armhf-armhf-libvirt 14 saverestore-support-checkfail  like 115643
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stopfail like 115643
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop fail like 115643
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stopfail like 115643
 test-armhf-armhf-libvirt-raw 13 saverestore-support-checkfail  like 115643
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-checkfail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-checkfail never pass
 test-armhf-armhf-xl-credit2  13 

[Xen-devel] [qemu-mainline test] 116471: tolerable FAIL - PUSHED

2017-11-23 Thread osstest service owner
flight 116471 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/116471/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-xsm   6 xen-install  fail in 116440 pass in 116471
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-saverestore fail in 116440 pass 
in 116471
 test-armhf-armhf-xl-arndale  19 leak-check/check fail in 116440 pass in 116471
 test-armhf-armhf-xl-rtds 12 guest-startfail pass in 116440

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-rtds13 migrate-support-check fail in 116440 never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-check fail in 116440 never pass
 test-armhf-armhf-libvirt-xsm 14 saverestore-support-checkfail  like 116190
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stopfail like 116190
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stopfail like 116190
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop fail like 116190
 test-armhf-armhf-libvirt 14 saverestore-support-checkfail  like 116190
 test-armhf-armhf-libvirt-raw 13 saverestore-support-checkfail  like 116190
 test-amd64-amd64-xl-pvhv2-intel 12 guest-start fail never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt  13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-amd64-amd64-xl-pvhv2-amd 12 guest-start  fail  never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-i386-libvirt-qcow2 12 migrate-support-checkfail  never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-checkfail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-checkfail never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-checkfail  never pass
 test-armhf-armhf-xl  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-checkfail  never pass
 test-armhf-armhf-xl  14 saverestore-support-checkfail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop  fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  13 saverestore-support-checkfail   never pass
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install fail never pass
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-installfail never pass

version targeted for testing:
 qemuu    a15d835f00dce270fd3194e83d9910f4b5b44ac0
baseline version:
 qemuu    1fa0f627d03cd0d0755924247cafeb42969016bf

Last test of basis   116190  2017-11-15 06:53:12 Z    8 days
Failing since        116227  2017-11-16 13:17:17 Z    7 days    9 attempts
Testing same since   116440  2017-11-22 09:32:36 Z    1 days    2 attempts


People who touched revisions under test:
  "Daniel P. Berrange" 
  Alberto Garcia 
  Alex Bennée 
  Alexey Kardashevskiy 
  Anton Nefedov 
  BALATON Zoltan 
  Christian Borntraeger 
  Cornelia Huck 
  Daniel Henrique Barboza 
  Daniel P. Berrange 
  Dariusz Stojaczyk 
  David Gibson 
  David Hildenbrand 
  Dou Liyang 
  Dr. David Alan Gilbert 
  Ed Swierk 
  Emilio G. Cota 
  Eric Blake 
  Gerd Hoffmann 
  Greg Kurz 
  Helge Deller 
  James Clarke 
  James Cowgill 
  Jason Wang 
  Jeff Cody 
  Jindrich Makovicka 
  Joel Stanley 
  John Paul Adrian Glaubitz 
  Kevin Wolf 
  linzhecheng 
  Mao Zhongyi 
  Marc-André Lureau 
  Marcel Apfelbaum 
  Maria Klimushenkova 
  Max Reitz 
  Michael Roth 
  Michael S. Tsirkin 
  Mike Nawrocki 
  Paolo Bonzini 
  Pavel Dovgalyuk 
  Peter Maydell 
  Philippe Mathieu-Daudé 
  Richard He

[Xen-devel] [xen-unstable test] 116474: tolerable FAIL

2017-11-23 Thread osstest service owner
flight 116474 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/116474/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-xsm 14 saverestore-support-checkfail  like 116445
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stopfail like 116445
 test-armhf-armhf-libvirt-raw 13 saverestore-support-checkfail  like 116445
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop fail like 116445
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stopfail like 116445
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop fail like 116445
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop fail like 116445
 test-armhf-armhf-libvirt 14 saverestore-support-checkfail  like 116445
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stopfail like 116445
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stopfail like 116445
 test-amd64-amd64-xl-pvhv2-intel 12 guest-start fail never pass
 test-amd64-amd64-xl-pvhv2-amd 12 guest-start  fail  never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt  13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-checkfail   never pass
 test-amd64-i386-libvirt-qcow2 12 migrate-support-checkfail  never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-checkfail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-checkfail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-checkfail  never pass
 test-armhf-armhf-xl-rtds 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  13 saverestore-support-checkfail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop  fail never pass
 test-amd64-i386-xl-qemut-win10-i386 10 windows-install fail never pass
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install fail never pass
 test-amd64-amd64-xl-qemut-win10-i386 10 windows-installfail never pass
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-installfail never pass

version targeted for testing:
 xen  d2f86bf604698806d311cc251c1b66fbb752673c
baseline version:
 xen  d2f86bf604698806d311cc251c1b66fbb752673c

Last test of basis   116474  2017-11-23 12:24:15 Z    0 days
Testing same since  (not found)    0 attempts

jobs:
 build-amd64-xsm  pass
 build-armhf-xsm  pass
 build-i386-xsm   pass
 build-amd64-xtf  pass
 build-amd64  pass
 build-armhf  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-armhf-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-prev pass
 build-i386-prev  pass
 build-amd64-pvops 

[Xen-devel] [xen-4.9-testing baseline-only test] 72487: regressions - FAIL

2017-11-23 Thread Platform Team regression test user
This run is configured for baseline tests only.

flight 72487 xen-4.9-testing real [real]
http://osstest.xs.citrite.net/~osstest/testlogs/logs/72487/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-xsm  7 xen-boot  fail REGR. vs. 72463
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 16 guest-localmigrate/x10 fail REGR. 
vs. 72463
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-localmigrate/x10 fail REGR. vs. 
72463
 test-amd64-i386-xl-qemuu-ovmf-amd64 21 leak-check/check   fail REGR. vs. 72463
 test-amd64-i386-xl-qemut-debianhvm-amd64-xsm 21 leak-check/check fail REGR. 
vs. 72463

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemut-win10-i386 10 windows-install fail like 72463
 test-amd64-amd64-qemuu-nested-intel 17 debian-hvm-install/l1/l2 fail like 72463
 test-amd64-amd64-xl-qemut-win7-amd64 16 guest-localmigrate/x10 fail like 72463
 test-amd64-i386-xl-qemut-win7-amd64 16 guest-localmigrate/x10  fail like 72463
 test-amd64-i386-xl-qemuu-win7-amd64 16 guest-localmigrate/x10  fail like 72463
 test-amd64-amd64-xl-qemuu-win10-i386 17 guest-stop fail like 72463
 test-amd64-amd64-xl-qemut-win10-i386 10 windows-installfail never pass
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 10 windows-install fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 10 windows-installfail never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-midway   13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-midway   14 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt 13 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt 14 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt-xsm 14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-checkfail  never pass
 test-amd64-i386-libvirt  13 migrate-support-checkfail   never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 10 windows-installfail never pass
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-checkfail   never pass
 test-amd64-i386-libvirt-qcow2 12 migrate-support-checkfail  never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-armhf-armhf-xl-vhd  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  13 saverestore-support-checkfail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-checkfail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 16 guest-localmigrate/x10  fail never pass

version targeted for testing:
 xen  ae34ab8c5d2e977f6d8081c2ce4494875232f563
baseline version:
 xen  d6ce860bbdf9dbdc88e4f2692e16776a622b2949

Last test of basis    72463  2017-11-17 17:46:53 Z    6 days
Testing same since    72487  2017-11-24 00:21:10 Z    0 days    1 attempts


People who touched revisions under test:
  Jan Beulich 

jobs:
 build-amd64-xsm  pass
 build-armhf-xsm  pass
 build-i386-xsm   pass
 build-amd64-xtf  pass
 build-amd64  pass
 build-armhf  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-armhf-libvirt   

Re: [Xen-devel] MMIO emulation failure on REP OUTS (was: [PATCH] x86/HVM: fix hvmemul_rep_outs_set_context())

2017-11-23 Thread Jan Beulich
(shrinking Cc list)

>>> On 23.11.17 at 19:37,  wrote:
> On 23/11/17 15:09, Jan Beulich wrote:
>> There were two issues with this function: Its use of
>> hvmemul_do_pio_buffer() was wrong (the function deals only with
>> individual port accesses, not repeated ones, i.e. passing it
>> "*reps * bytes_per_rep" does not have the intended effect). And it
>> could have processed a larger set of operations in one go than was
>> probably intended (limited just by the size that xmalloc() can hand
>> back).
>>
>> By converting to proper use of hvmemul_do_pio_buffer(), no intermediate
>> buffer is needed at all. As a result a preemption check is being added.
>>
>> Also drop unused parameters from the function.
>>
>> Signed-off-by: Jan Beulich 
> 
> While this does look like a real bug, and a bugfix, it isn't the issue I'm
> hitting.  I've distilled the repro scenario down to a tiny XTF test,
> which is just a `rep outsb` with a buffer that crosses a page boundary.
> 
> The results are reliably:
> 
> (d1) --- Xen Test Framework ---
> (d1) Environment: HVM 32bit (No paging)
> (d1) Test hvm-print
> (d1) String crossing a page boundary
> (XEN) MMIO emulation failed (1): d1v0 32bit @ 0010:001032b0 -> 5e c3 8d
> b4 26 00 00 00 00 8d bc 27 00 00 00 00
> (d1) Test result: SUCCESS
> 
> The Port IO hits a retry because of hitting the page boundary, and the
> retry logic succeeds, as evidenced by all data reaching hvm_print_line().
> Somewhere, however, the PIO turns into MMIO, and a failure is reported
> after the PIO completed successfully.  %rip in the failure message
> points after the `rep outsb`, rather than at it.
> 
> If anyone has any ideas, I'm all ears.  If not, I will try to find some
> time to look deeper into the issue.

The failure being UNHANDLEABLE, I have another possibility in
mind: What if there's a bogus extra retry attempt after the REP
OUTS was already handled? The POP which is the next insn
would result in x86_insn_is_mem_access() returning false (its
memory access is an implicit stack one, which the function
intentionally produces false for). I'll see if I can find time later
today to debug this a little - thanks for shrinking it down to an
XTF test.

Jan
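The page-boundary retry behaviour discussed above can be illustrated with a small toy model (not Xen code; all names here are hypothetical): an emulator handling `rep outsb` must re-translate the guest source address per page, so a buffer straddling a page boundary is processed in two chunks, with the instruction resumed ("retried") in between — yet every byte still reaches the port in order.

```python
PAGE_SIZE = 4096

def emulate_rep_outsb(mem, src, reps, port_sink):
    """Toy model of REP OUTSB emulation: process the source string in
    chunks that never cross a page boundary, resuming after each chunk
    the way an emulator continues a partially completed rep instruction.
    Returns the number of chunks (i.e. 1 + number of mid-insn retries)."""
    chunks = 0
    done = 0
    while done < reps:
        addr = src + done
        in_page = PAGE_SIZE - (addr % PAGE_SIZE)   # bytes left in this page
        chunk = min(reps - done, in_page)
        port_sink.extend(mem[addr:addr + chunk])   # "output" the chunk to the port
        done += chunk                              # a real emulator would return a retry status here
        chunks += 1
    return chunks
```

A 100-byte buffer starting 10 bytes before a page boundary completes in two chunks (10 + 90 bytes) with all data delivered in order — consistent with the observation that the data all arrives despite the mid-instruction retry, and that any spurious failure must come after the PIO itself completed.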

