[Xen-devel] [ovmf baseline-only test] 71729: all pass

2017-07-21 Thread Platform Team regression test user
This run is configured for baseline tests only. flight 71729 ovmf real [real] http://osstest.xs.citrite.net/~osstest/testlogs/logs/71729/ Perfect :-) All tests in this flight passed as required version targeted for testing: ovmf 1683ecec41a7c944783c51efa75375f1e0a71d08 baseline v

[Xen-devel] [linux-4.9 test] 112086: regressions - FAIL

2017-07-21 Thread osstest service owner
flight 112086 linux-4.9 real [real] http://logs.test-lab.xenproject.org/osstest/logs/112086/ Regressions :-( Tests which did not succeed and are blocking, including tests which could not be run: test-amd64-amd64-xl-qemut-win7-amd64 16 guest-localmigrate/x10 fail REGR. vs. 111883 Tests which di

[Xen-devel] [linux-linus test] 112083: regressions - FAIL

2017-07-21 Thread osstest service owner
flight 112083 linux-linus real [real] http://logs.test-lab.xenproject.org/osstest/logs/112083/ Regressions :-( Tests which did not succeed and are blocking, including tests which could not be run: test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-boot fail REGR. vs. 110515 test-amd64

Re: [Xen-devel] [PATCH 09/15] xen: vmx: handle SGX related MSRs

2017-07-21 Thread Huang, Kai
On 7/21/2017 9:42 PM, Huang, Kai wrote: On 7/20/2017 5:27 AM, Andrew Cooper wrote: On 09/07/17 09:09, Kai Huang wrote: This patch handles IA32_FEATURE_CONTROL and IA32_SGXLEPUBKEYHASHn MSRs. For IA32_FEATURE_CONTROL, if SGX is exposed to domain, then SGX_ENABLE bit is always set. If SGX l

Re: [Xen-devel] [PATCH 03/15] xen: x86: add early stage SGX feature detection

2017-07-21 Thread Huang, Kai
On 7/21/2017 9:17 PM, Huang, Kai wrote: On 7/20/2017 2:23 AM, Andrew Cooper wrote: On 09/07/17 09:09, Kai Huang wrote: This patch adds early stage SGX feature detection via SGX CPUID 0x12. Function detect_sgx is added to detect SGX info on each CPU (called from vmx_cpu_up). SDM says SGX in

[Xen-devel] [PULL for-2.10 1/2] xen: fix compilation on 32-bit hosts

2017-07-21 Thread Stefano Stabellini
From: Igor Druzhinin Signed-off-by: Igor Druzhinin Reviewed-by: Stefano Stabellini --- hw/i386/xen/xen-mapcache.c | 9 + 1 file changed, 5 insertions(+), 4 deletions(-) diff --git a/hw/i386/xen/xen-mapcache.c b/hw/i386/xen/xen-mapcache.c index 2a1fbd1..bb1078c 100644 --- a/hw/i386/xen

[Xen-devel] [PULL for-2.10 0/2] please pull xen-20170721-tag

2017-07-21 Thread Stefano Stabellini
The following changes since commit 91939262ffcd3c85ea6a4793d3029326eea1d649: configure: Drop ancient Solaris 9 and earlier support (2017-07-21 15:04:05 +0100) are available in the git repository at: git://xenbits.xen.org/people/sstabellini/qemu-dm.git tags/xen-20170721-tag for you to

[Xen-devel] [PULL for-2.10 2/2] xen-mapcache: Fix the bug when overlapping emulated DMA operations may cause inconsistency in guest memory mappings

2017-07-21 Thread Stefano Stabellini
From: Alexey G Under certain circumstances normal xen-mapcache functioning may be broken by guest's actions. This may lead to either QEMU performing exit() due to a caught bad pointer (and with QEMU process gone the guest domain simply appears hung afterwards) or actual use of the incorrect point

Re: [Xen-devel] [PATCH] xen-mapcache: Fix the bug when overlapping emulated DMA operations may cause inconsistency in guest memory mappings

2017-07-21 Thread Stefano Stabellini
On Thu, 20 Jul 2017, Alexey G wrote: > On Wed, 19 Jul 2017 11:00:26 -0700 (PDT) > Stefano Stabellini wrote: > > > My expectation is that unlocked mappings are much more frequent than > > locked mappings. Also, I expect that only very rarely we'll be able to > > reuse locked mappings. Over the cou

Re: [Xen-devel] [PULL for-2.10 6/7] xen/mapcache: introduce xen_replace_cache_entry()

2017-07-21 Thread Stefano Stabellini
On Fri, 21 Jul 2017, Igor Druzhinin wrote: > On 21/07/17 14:50, Anthony PERARD wrote: > > On Tue, Jul 18, 2017 at 03:22:41PM -0700, Stefano Stabellini wrote: > > > From: Igor Druzhinin > > > > ... > > > > > +static uint8_t *xen_replace_cache_entry_unlocked(hwaddr old_phys_addr, > > > +

[Xen-devel] [PATCH v1 04/13] xen/pvcalls: implement connect command

2017-07-21 Thread Stefano Stabellini
Send PVCALLS_CONNECT to the backend. Allocate a new ring and evtchn for the active socket. Introduce a data structure to keep track of sockets. Introduce a waitqueue to allow the frontend to wait on data coming from the backend on the active socket (recvmsg command). Two mutexes (one for reads and

[Xen-devel] [PATCH v1 09/13] xen/pvcalls: implement recvmsg

2017-07-21 Thread Stefano Stabellini
Implement recvmsg by copying data from the "in" ring. If not enough data is available and the recvmsg call is blocking, then wait on the inflight_conn_req waitqueue. Take the active socket in_mutex so that only one function can access the ring at any given time. If not enough data is available on

[Xen-devel] [PATCH v1 11/13] xen/pvcalls: implement release command

2017-07-21 Thread Stefano Stabellini
Send PVCALLS_RELEASE to the backend and wait for a reply. Take both in_mutex and out_mutex to avoid concurrent accesses. Then, free the socket. Signed-off-by: Stefano Stabellini CC: boris.ostrov...@oracle.com CC: jgr...@suse.com --- drivers/xen/pvcalls-front.c | 86 ++

[Xen-devel] [PATCH v1 06/13] xen/pvcalls: implement listen command

2017-07-21 Thread Stefano Stabellini
Send PVCALLS_LISTEN to the backend. Signed-off-by: Stefano Stabellini CC: boris.ostrov...@oracle.com CC: jgr...@suse.com --- drivers/xen/pvcalls-front.c | 49 + drivers/xen/pvcalls-front.h | 1 + 2 files changed, 50 insertions(+) diff --git a/drivers

[Xen-devel] [PATCH v1 08/13] xen/pvcalls: implement sendmsg

2017-07-21 Thread Stefano Stabellini
Send data to an active socket by copying data to the "out" ring. Take the active socket out_mutex so that only one function can access the ring at any given time. If not enough room is available on the ring, rather than returning immediately or sleep-waiting, spin for up to 5000 cycles. This small
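
A self-contained userspace sketch of the "spin briefly, then block" pattern described above, using a pthread mutex and condition variable in place of the real out_mutex and ring machinery. All names, the byte-counting ring model and the 5000-iteration budget are illustrative assumptions, not the actual pvcalls-front code.

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>

    #define RING_BYTES 4096u

    static pthread_mutex_t out_mutex = PTHREAD_MUTEX_INITIALIZER; /* serialises writers */
    static pthread_cond_t  out_cond  = PTHREAD_COND_INITIALIZER;
    static _Atomic size_t ring_used;        /* bytes queued; the consumer updates it directly */

    static bool ring_has_room(size_t len)
    {
        return atomic_load(&ring_used) + len <= RING_BYTES;
    }

    /* Consumer side (the backend, in the real protocol): free space, wake writers. */
    static void ring_consumed(size_t len)
    {
        atomic_fetch_sub(&ring_used, len);
        pthread_mutex_lock(&out_mutex);     /* signal under the mutex so wakeups are not lost */
        pthread_cond_broadcast(&out_cond);
        pthread_mutex_unlock(&out_mutex);
    }

    static void send_on_ring(size_t len)
    {
        pthread_mutex_lock(&out_mutex);     /* only one writer touches the ring at a time */

        /* Spin for a bounded number of iterations: short stalls are common and
         * sleeping immediately would pay for a wakeup on every send. */
        for (int spin = 0; spin < 5000 && !ring_has_room(len); spin++)
            ;

        while (!ring_has_room(len))         /* still no room: block until space frees up */
            pthread_cond_wait(&out_cond, &out_mutex);

        atomic_fetch_add(&ring_used, len);  /* stand-in for copying into the "out" ring */
        pthread_mutex_unlock(&out_mutex);
    }

    int main(void)
    {
        send_on_ring(512);
        printf("queued %zu bytes\n", atomic_load(&ring_used));
        ring_consumed(512);
        return 0;
    }

Build with cc -pthread; only the control flow matters here, the data path is elided.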

[Xen-devel] [PATCH v1 13/13] xen: introduce a Kconfig option to enable the pvcalls frontend

2017-07-21 Thread Stefano Stabellini
Also add pvcalls-front to the Makefile. Signed-off-by: Stefano Stabellini CC: boris.ostrov...@oracle.com CC: jgr...@suse.com --- drivers/xen/Kconfig | 9 + drivers/xen/Makefile | 1 + 2 files changed, 10 insertions(+) diff --git a/drivers/xen/Kconfig b/drivers/xen/Kconfig index 4545561

[Xen-devel] [PATCH v1 00/13] introduce the Xen PV Calls frontend

2017-07-21 Thread Stefano Stabellini
Hi all, this series introduces the frontend for the newly introduced PV Calls protocol. PV Calls is a paravirtualized protocol that allows the implementation of a set of POSIX functions in a different domain. The PV Calls frontend sends POSIX function calls to the backend, which implements them a

[Xen-devel] [PATCH v1 05/13] xen/pvcalls: implement bind command

2017-07-21 Thread Stefano Stabellini
Send PVCALLS_BIND to the backend. Introduce a new structure, part of struct sock_mapping, to store information specific to passive sockets. Introduce a status field to keep track of the status of the passive socket. Introduce a waitqueue for the "accept" command (see the accept command implementa

[Xen-devel] [PATCH v1 01/13] xen/pvcalls: introduce the pvcalls xenbus frontend

2017-07-21 Thread Stefano Stabellini
Introduce a xenbus frontend for the pvcalls protocol, as defined by https://xenbits.xen.org/docs/unstable/misc/pvcalls.html. This patch only adds the stubs, the code will be added by the following patches. Signed-off-by: Stefano Stabellini CC: boris.ostrov...@oracle.com CC: jgr...@suse.com ---

[Xen-devel] [PATCH v1 10/13] xen/pvcalls: implement poll command

2017-07-21 Thread Stefano Stabellini
For active sockets, check the indexes and use the inflight_conn_req waitqueue to wait. For passive sockets, send PVCALLS_POLL to the backend. Use the inflight_accept_req waitqueue if an accept is outstanding. Otherwise use the inflight_req waitqueue: inflight_req is woken up when a new response is r

[Xen-devel] [PATCH v1 03/13] xen/pvcalls: implement socket command and handle events

2017-07-21 Thread Stefano Stabellini
Send a PVCALLS_SOCKET command to the backend, use the masked req_prod_pvt as req_id. This way, req_id is guaranteed to be between 0 and PVCALLS_NR_REQ_PER_RING. We already have a slot in the rsp array ready for the response, and there cannot be two outstanding responses with the same req_id. Wait
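
A minimal sketch of the request-id scheme described above, under the assumption that the ring has a power-of-two number of request slots; RING_SIZE is an illustrative stand-in for PVCALLS_NR_REQ_PER_RING and this is not the actual pvcalls-front code.

    #include <stdint.h>
    #include <stdio.h>

    #define RING_SIZE 64u   /* stand-in for PVCALLS_NR_REQ_PER_RING; power of two */

    /* Masking the ever-growing private producer index yields an id in
     * [0, RING_SIZE), so each outstanding request maps to exactly one
     * slot of the response array and ids cannot collide while in flight. */
    static uint32_t req_id_from_prod(uint32_t req_prod_pvt)
    {
        return req_prod_pvt & (RING_SIZE - 1);
    }

    int main(void)
    {
        for (uint32_t prod = 62; prod < 68; prod++)
            printf("req_prod_pvt=%u -> req_id=%u\n", prod, req_id_from_prod(prod));
        return 0;
    }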

[Xen-devel] [PATCH v1 02/13] xen/pvcalls: connect to the backend

2017-07-21 Thread Stefano Stabellini
Implement the probe function for the pvcalls frontend. Read the supported versions, max-page-order and function-calls nodes from xenstore. Introduce a data structure named pvcalls_bedata. It contains pointers to the command ring, the event channel, a list of active sockets and a list of passive so

[Xen-devel] [PATCH v1 07/13] xen/pvcalls: implement accept command

2017-07-21 Thread Stefano Stabellini
Send PVCALLS_ACCEPT to the backend. Allocate a new active socket. Make sure that only one accept command is executed at any given time by setting PVCALLS_FLAG_ACCEPT_INFLIGHT and waiting on the inflight_accept_req waitqueue. sock->sk->sk_send_head is not used for IP sockets: reuse the field to sto

[Xen-devel] [PATCH v1 12/13] xen/pvcalls: implement frontend disconnect

2017-07-21 Thread Stefano Stabellini
Implement pvcalls frontend removal function. Go through the list of active and passive sockets and free them all, one at a time. Signed-off-by: Stefano Stabellini CC: boris.ostrov...@oracle.com CC: jgr...@suse.com --- drivers/xen/pvcalls-front.c | 28 1 file changed,

Re: [Xen-devel] Question about hvm_monitor_interrupt

2017-07-21 Thread Razvan Cojocaru
On 07/22/2017 12:33 AM, Tamas K Lengyel wrote: > Hey Razvan, Hello, > the vm_event that is being generated by doing > VM_EVENT_FLAG_GET_NEXT_INTERRUPT sends almost all required information > about the interrupt to the listener to allow it to get reinjected, > except the instruction length. If the

[Xen-devel] Question about hvm_monitor_interrupt

2017-07-21 Thread Tamas K Lengyel
Hey Razvan, the vm_event that is being generated by doing VM_EVENT_FLAG_GET_NEXT_INTERRUPT sends almost all required information about the interrupt to the listener to allow it to get reinjected, except the instruction length. If the listener wants to reinject the interrupt to the guest via xc_hvm_

[Xen-devel] [libvirt test] 112081: tolerable all pass - PUSHED

2017-07-21 Thread osstest service owner
flight 112081 libvirt real [real] http://logs.test-lab.xenproject.org/osstest/logs/112081/ Failures :-/ but no regressions. Tests which did not succeed, but are not blocking: test-armhf-armhf-libvirt 14 saverestore-support-checkfail like 112036 test-armhf-armhf-libvirt-xsm 14 saveresto

Re: [Xen-devel] [GIT PULL] xen: features and fixes for 4.13-rc2

2017-07-21 Thread Linus Torvalds
On Fri, Jul 21, 2017 at 3:17 AM, Juergen Gross wrote: > drivers/xen/pvcalls-back.c | 1236 > This really doesn't look like a fix. The merge window is over. So I'm not pulling this without way more explanations of why I should. Linu

Re: [Xen-devel] [PATCH] xen: selfballoon: remove unnecessary static in frontswap_selfshrink()

2017-07-21 Thread Gustavo A. R. Silva
Hi Juergen, On 07/21/2017 02:36 AM, Juergen Gross wrote: On 04/07/17 20:34, Gustavo A. R. Silva wrote: Remove unnecessary static on local variables last_frontswap_pages and tgt_frontswap_pages. Such variables are initialized before being used, on every execution path throughout the function. Th

Re: [Xen-devel] [PATCH 6/6] xen: sched: optimize exclusive pinning case (Credit1 & 2)

2017-07-21 Thread George Dunlap
On Fri, Jul 21, 2017 at 8:55 PM, Dario Faggioli wrote: > On Fri, 2017-07-21 at 18:19 +0100, George Dunlap wrote: >> On 06/23/2017 11:55 AM, Dario Faggioli wrote: >> > diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c >> > index 4f6330e..85e014d 100644 >> > --- a/xen/common/sched_c

[Xen-devel] [RFC PATCH v2 17/22] ARM: vGIC: introduce vgic_lock_vcpu_irq()

2017-07-21 Thread Andre Przywara
Since a VCPU can own multiple IRQs, the natural locking order is to take a VCPU lock first, then the individual per-IRQ locks. However there are situations where the target VCPU is not known without looking into the struct pending_irq first, which usually means we need to take the IRQ lock first. T
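
The paragraph above describes a classic lock-ordering inversion. One standard way out, shown here as a hedged userspace sketch (not necessarily what vgic_lock_vcpu_irq() actually does), is to peek at the target under the IRQ lock, drop it, retake both locks in the documented VCPU-then-IRQ order, and re-check that the target has not changed in the meantime.

    #include <pthread.h>
    #include <stdio.h>

    #define NR_VCPUS 4

    struct pending_irq {
        pthread_mutex_t lock;
        int target_vcpu;                       /* index into vcpu_lock[] */
    };

    static pthread_mutex_t vcpu_lock[NR_VCPUS];

    /* Returns with vcpu_lock[target] and irq->lock held, in that order. */
    static int lock_vcpu_then_irq(struct pending_irq *irq)
    {
        for (;;) {
            pthread_mutex_lock(&irq->lock);
            int target = irq->target_vcpu;     /* peek at the current target */
            pthread_mutex_unlock(&irq->lock);

            pthread_mutex_lock(&vcpu_lock[target]);  /* documented order: */
            pthread_mutex_lock(&irq->lock);          /* VCPU lock, then IRQ lock */

            if (irq->target_vcpu == target)    /* target may have moved while */
                return target;                 /* both locks were dropped */

            pthread_mutex_unlock(&irq->lock);  /* raced with a retarget: retry */
            pthread_mutex_unlock(&vcpu_lock[target]);
        }
    }

    int main(void)
    {
        for (int i = 0; i < NR_VCPUS; i++)
            pthread_mutex_init(&vcpu_lock[i], NULL);

        struct pending_irq irq = { .target_vcpu = 2 };
        pthread_mutex_init(&irq.lock, NULL);

        int v = lock_vcpu_then_irq(&irq);
        printf("holding the lock of vcpu %d and the IRQ lock\n", v);
        pthread_mutex_unlock(&irq.lock);
        pthread_mutex_unlock(&vcpu_lock[v]);
        return 0;
    }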

[Xen-devel] [RFC PATCH v2 18/22] ARM: vGIC: move virtual IRQ target VCPU from rank to pending_irq

2017-07-21 Thread Andre Przywara
The VCPU a shared virtual IRQ is targeting is currently stored in the irq_rank structure. For LPIs we already store the target VCPU in struct pending_irq, so move SPIs over as well. The ITS code, which was using this field already, was so far using the VCPU lock to protect the pending_irq, so move

[Xen-devel] [RFC PATCH v2 21/22] ARM: vITS: injecting LPIs: use pending_irq lock

2017-07-21 Thread Andre Przywara
Instead of using an atomic access and hoping for the best, let's use the new pending_irq lock now to make sure we read a sane version of the target VCPU. That still doesn't solve the problem mentioned in the comment, but paves the way for future improvements. Signed-off-by: Andre Przywara --- xe

[Xen-devel] [RFC PATCH v2 09/22] ARM: vITS: protect LPI priority update with pending_irq lock

2017-07-21 Thread Andre Przywara
As the priority value is now officially a member of struct pending_irq, we need to take its lock when manipulating it via ITS commands. Make sure we take the IRQ lock after the VCPU lock when we need both. Signed-off-by: Andre Przywara --- xen/arch/arm/vgic-v3-its.c | 26 +++-

[Xen-devel] [RFC PATCH v2 14/22] ARM: vGIC: move virtual IRQ configuration from rank to pending_irq

2017-07-21 Thread Andre Przywara
The IRQ configuration (level- or edge-triggered) for a group of IRQs is still stored in the irq_rank structure. Introduce a new bit called GIC_IRQ_GUEST_LEVEL in the "status" field, which holds that information. Remove the storage from the irq_rank and use the existing wrappers to store and retriev

[Xen-devel] [RFC PATCH v2 03/22] ARM: vGIC: move gic_raise_inflight_irq() into vgic_vcpu_inject_irq()

2017-07-21 Thread Andre Przywara
Currently there is a gic_raise_inflight_irq(), which serves the very special purpose of handling a newly injected interrupt while an older one is still handled. This has only one user, in vgic_vcpu_inject_irq(). Now with the introduction of the pending_irq lock this will later on result in a nasty

[Xen-devel] [RFC PATCH v2 04/22] ARM: vGIC: rename pending_irq->priority to cur_priority

2017-07-21 Thread Andre Przywara
In preparation for storing the virtual interrupt priority in the struct pending_irq, rename the existing "priority" member to "cur_priority". This is to signify that this is the current priority of an interrupt which has been injected to a VCPU. Once this happened, its priority must stay fixed at t

[Xen-devel] [RFC PATCH v2 12/22] ARM: vGIC: protect gic_update_one_lr() with pending_irq lock

2017-07-21 Thread Andre Przywara
When we return from a domain with the active bit set in an LR, we update our pending_irq accordingly. This touches multiple status bits, so requires the pending_irq lock. Signed-off-by: Andre Przywara --- xen/arch/arm/gic.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/xen/arch/arm/gic.c

[Xen-devel] [RFC PATCH v2 11/22] ARM: vGIC: protect gic_events_need_delivery() with pending_irq lock

2017-07-21 Thread Andre Przywara
gic_events_need_delivery() reads the cur_priority field twice, also relies on the consistency of status bits. So it should take pending_irq lock. Signed-off-by: Andre Przywara --- xen/arch/arm/gic.c | 24 +--- 1 file changed, 13 insertions(+), 11 deletions(-) diff --git a/xe

[Xen-devel] [RFC PATCH v2 20/22] ARM: vGIC: move virtual IRQ enable bit from rank to pending_irq

2017-07-21 Thread Andre Przywara
The enabled bits for a group of IRQs are still stored in the irq_rank structure, although we already have the same information in pending_irq, in the GIC_IRQ_GUEST_ENABLED bit of the "status" field. Remove the storage from the irq_rank and just utilize the existing wrappers to cover enabling/disabl

[Xen-devel] [RFC PATCH v2 19/22] ARM: vGIC: rework vgic_get_target_vcpu to take a domain instead of vcpu

2017-07-21 Thread Andre Przywara
For "historical" reasons we used to pass a vCPU pointer to vgic_get_target_vcpu(), which was only considered to distinguish private IRQs. Now since we have the unique pending_irq pointer already, we don't need the vCPU anymore, but just the domain. So change this function to avoid a rather hackish

[Xen-devel] [RFC PATCH v2 08/22] ARM: vGIC: move virtual IRQ priority from rank to pending_irq

2017-07-21 Thread Andre Przywara
So far a virtual interrupt's priority is stored in the irq_rank structure, which covers multiple IRQs and has a single lock for this group. Generalize the already existing priority variable in struct pending_irq to not only cover LPIs, but every IRQ. Access to this value is protected by the per-IRQ

[Xen-devel] [RFC PATCH v2 10/22] ARM: vGIC: protect gic_set_lr() with pending_irq lock

2017-07-21 Thread Andre Przywara
When putting a (pending) IRQ into an LR, we had better make sure that no-one changes it behind our back. So make sure we take the pending_irq lock. This bubbles up to all users of gic_add_to_lr_pending() and gic_raise_guest_irq(). Signed-off-by: Andre Przywara --- xen/arch/arm/gic.c | 14

[Xen-devel] [RFC PATCH v2 16/22] ARM: vITS: rename lpi_vcpu_id to vcpu_id

2017-07-21 Thread Andre Przywara
Since we will soon store a virtual IRQ's target VCPU in struct pending_irq, generalise the existing storage for an LPI's target to cover all IRQs. This just renames "lpi_vcpu_id" to "vcpu_id", but doesn't change anything else yet. Signed-off-by: Andre Przywara --- xen/arch/arm/gic-v3-lpi.c | 2

[Xen-devel] [RFC PATCH v2 05/22] ARM: vITS: rename pending_irq->lpi_priority to priority

2017-07-21 Thread Andre Przywara
Since we will soon store a virtual IRQ's priority in struct pending_irq, generalise the existing storage for an LPI's priority to cover all IRQs. This just renames "lpi_priority" to "priority", but doesn't change anything else yet. Signed-off-by: Andre Przywara --- xen/arch/arm/vgic-v3-its.c | 4

[Xen-devel] [RFC PATCH v2 01/22] ARM: vGIC: introduce and initialize pending_irq lock

2017-07-21 Thread Andre Przywara
Currently we protect the pending_irq structure with the corresponding VGIC VCPU lock. There are problems in certain corner cases (for instance if an IRQ is migrating), so let's introduce a per-IRQ lock, which will protect the consistency of this structure independent from any VCPU. For now this jus

[Xen-devel] [RFC PATCH v2 22/22] ARM: vGIC: remove remaining irq_rank code

2017-07-21 Thread Andre Przywara
Now that we no longer need the struct vgic_irq_rank, we can remove the definition and all the helper functions. Signed-off-by: Andre Przywara --- xen/arch/arm/vgic.c | 54 xen/include/asm-arm/domain.h | 6 + xen/include/asm-arm/vgic.h

[Xen-devel] [RFC PATCH v2 15/22] ARM: vGIC: rework vgic_get_target_vcpu to take a pending_irq

2017-07-21 Thread Andre Przywara
For now vgic_get_target_vcpu takes a VCPU and an IRQ number, because this is what we need for finding the proper rank and the VCPU in there. In the future the VCPU will be looked up in the struct pending_irq. To avoid locking issues, let's pass the pointer to the pending_irq instead. We can read th

[Xen-devel] [RFC PATCH v2 00/22] ARM: vGIC rework (attempt)

2017-07-21 Thread Andre Przywara
Hi, this is the first part of the attempt to rewrite the VGIC to solve the issues we discovered when adding the ITS emulation. The problems we identified resulted in the following list of things that need fixing: 1) introduce a per-IRQ lock 2) remove the IRQ rank scheme (of storing IRQ properties)

[Xen-devel] [RFC PATCH v2 13/22] ARM: vITS: remove no longer needed lpi_priority wrapper

2017-07-21 Thread Andre Przywara
For LPIs we stored the priority value in struct pending_irq, but all other types of IRQs were using the irq_rank structure for that. Now that every IRQ is using pending_irq, we can remove the special handling we had in place for LPIs and just use the now unified access wrappers. Signed-off-by: Andre P

[Xen-devel] [RFC PATCH v2 02/22] ARM: vGIC: route/remove_irq: replace rank lock with IRQ lock

2017-07-21 Thread Andre Przywara
So far the rank lock is protecting the physical IRQ routing for a particular virtual IRQ (though this doesn't seem to be documented anywhere). So although these functions don't really touch the rank structure, the lock prevents them from running concurrently. This seems a bit like a kludge, so as w

[Xen-devel] [RFC PATCH v2 06/22] ARM: vGIC: introduce locking routines for multiple IRQs

2017-07-21 Thread Andre Przywara
When replacing the rank lock with individual per-IRQ locks soon, we will still need the ability to lock multiple IRQs. Provide two helper routines which lock and unlock a number of consecutive IRQs in the right order. Looking forward, the locking function fills an array of pending_irq pointers, so t
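
A minimal userspace illustration of the ordering rule such helpers typically enforce (illustrative only; the real Xen helpers also fill an array of pending_irq pointers, which is omitted here): always take consecutive per-IRQ locks in ascending IRQ order, so that two CPUs locking overlapping ranges cannot deadlock.

    #include <pthread.h>
    #include <stdio.h>

    #define NR_IRQS 32u

    static pthread_mutex_t irq_lock[NR_IRQS];

    static void lock_irq_range(unsigned int first, unsigned int nr)
    {
        /* Ascending order: overlapping ranges always contend in the same order. */
        for (unsigned int i = first; i < first + nr; i++)
            pthread_mutex_lock(&irq_lock[i]);
    }

    static void unlock_irq_range(unsigned int first, unsigned int nr)
    {
        /* Reverse order is not required for correctness, just conventional. */
        for (unsigned int i = first + nr; i-- > first; )
            pthread_mutex_unlock(&irq_lock[i]);
    }

    int main(void)
    {
        for (unsigned int i = 0; i < NR_IRQS; i++)
            pthread_mutex_init(&irq_lock[i], NULL);

        lock_irq_range(8, 4);                  /* e.g. one 4-IRQ-wide MMIO access */
        unlock_irq_range(8, 4);
        printf("locked and unlocked IRQs 8-11 in order\n");
        return 0;
    }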

[Xen-devel] [RFC PATCH v2 07/22] ARM: vGIC: introduce priority setter/getter

2017-07-21 Thread Andre Przywara
Since the GICs MMIO access always covers a number of IRQs at once, introduce wrapper functions which loop over those IRQs, take their locks and read or update the priority values. This will be used in a later patch. Signed-off-by: Andre Przywara --- xen/arch/arm/vgic.c| 37 ++

Re: [Xen-devel] [PATCH 6/6] xen: sched: optimize exclusive pinning case (Credit1 & 2)

2017-07-21 Thread Dario Faggioli
On Fri, 2017-07-21 at 18:19 +0100, George Dunlap wrote: > On 06/23/2017 11:55 AM, Dario Faggioli wrote: > > diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c > > index 4f6330e..85e014d 100644 > > --- a/xen/common/sched_credit.c > > +++ b/xen/common/sched_credit.c > > @@ -429,6 +429

Re: [Xen-devel] [PATCH 4/6] xen: credit2: rearrange members of control structures

2017-07-21 Thread Dario Faggioli
On Fri, 2017-07-21 at 18:05 +0100, George Dunlap wrote: > On 06/23/2017 11:55 AM, Dario Faggioli wrote: > > > > While there, improve the wording, style and alignment > > of comments too. > > > > Signed-off-by: Dario Faggioli > > I haven't taken a careful look at these; the idea sounds good and

Re: [Xen-devel] [PATCH 5/6] xen: RTDS: rearrange members of control structures

2017-07-21 Thread Dario Faggioli
On Fri, 2017-07-21 at 13:51 -0400, Meng Xu wrote: > On Fri, Jun 23, 2017 at 6:55 AM, Dario Faggioli > wrote: > > > > Nothing changed in `pahole` output, in terms of holes > > and padding, but some fields have been moved, to put > > related members in same cache line. > > > > Signed-off-by: Dario

[Xen-devel] [xen-unstable-smoke test] 112104: tolerable trouble: broken/pass - PUSHED

2017-07-21 Thread osstest service owner
flight 112104 xen-unstable-smoke real [real] http://logs.test-lab.xenproject.org/osstest/logs/112104/ Failures :-/ but no regressions. Tests which did not succeed, but are not blocking: test-arm64-arm64-xl-xsm 1 build-check(1) blocked n/a test-amd64-amd64-libvirt 13 mig

Re: [Xen-devel] [PATCH 22/25 v6] xen/arm: vpl011: Add support for vuart console in xenconsole

2017-07-21 Thread Stefano Stabellini
On Fri, 21 Jul 2017, Julien Grall wrote: > Hi, > > On 18/07/17 21:07, Stefano Stabellini wrote: > > On Mon, 17 Jul 2017, Bhupinder Thakur wrote: > > > This patch finally adds the support for vuart console. It adds > > > two new fields in the console initialization: > > > > > > - optional > > > -

Re: [Xen-devel] [RFC v3]Proposal to allow setting up shared memory areas between VMs from xl config file

2017-07-21 Thread Stefano Stabellini
On Fri, 21 Jul 2017, Julien Grall wrote: > > > @x86_cacheattrcan be 'uc', 'wc', 'wt', 'wp', 'wb' or 'suc'. > > > Default > > > is 'wb'. > > > > Also here, I would write: > > > > @x86_cacheattr Only 'wb' (write-back) is supported today. > > > > Like you wrote la

Re: [Xen-devel] [PATCH] xen/pvcalls: use WARN_ON(1) instead of __WARN()

2017-07-21 Thread Stefano Stabellini
On Fri, 21 Jul 2017, Arnd Bergmann wrote: > __WARN() is an internal helper that is only available on > some architectures, but causes a build error e.g. on ARM64 > in some configurations: > > drivers/xen/pvcalls-back.c: In function 'set_backend_state': > drivers/xen/pvcalls-back.c:1097:5: error: i

Re: [Xen-devel] [PULL for-2.10 6/7] xen/mapcache: introduce xen_replace_cache_entry()

2017-07-21 Thread Igor Druzhinin
On 21/07/17 14:50, Anthony PERARD wrote: On Tue, Jul 18, 2017 at 03:22:41PM -0700, Stefano Stabellini wrote: From: Igor Druzhinin ... +static uint8_t *xen_replace_cache_entry_unlocked(hwaddr old_phys_addr, + hwaddr new_phys_addr, +

Re: [Xen-devel] [PATCH] xen/pvcalls: use WARN_ON(1) instead of __WARN()

2017-07-21 Thread Boris Ostrovsky
On 07/21/2017 12:17 PM, Arnd Bergmann wrote: > __WARN() is an internal helper that is only available on > some architectures, but causes a build error e.g. on ARM64 > in some configurations: > > drivers/xen/pvcalls-back.c: In function 'set_backend_state': > drivers/xen/pvcalls-back.c:1097:5: error:

Re: [Xen-devel] [PATCH 5/6] xen: RTDS: rearrange members of control structures

2017-07-21 Thread Meng Xu
On Fri, Jun 23, 2017 at 6:55 AM, Dario Faggioli wrote: > > Nothing changed in `pahole` output, in terms of holes > and padding, but some fields have been moved, to put > related members in same cache line. > > Signed-off-by: Dario Faggioli > --- > Cc: Meng Xu > Cc: George Dunlap > --- > xen/co

[Xen-devel] [linux-3.18 test] 112085: regressions - trouble: blocked/broken/fail/pass

2017-07-21 Thread osstest service owner
flight 112085 linux-3.18 real [real] http://logs.test-lab.xenproject.org/osstest/logs/112085/ Regressions :-( Tests which did not succeed and are blocking, including tests which could not be run: test-armhf-armhf-xl-arndale 4 host-install(4)broken REGR. vs. 111920 test-armhf-armhf-lib

Re: [Xen-devel] [PATCH 6/6] xen: sched: optimize exclusive pinning case (Credit1 & 2)

2017-07-21 Thread George Dunlap
On 06/23/2017 11:55 AM, Dario Faggioli wrote: > Exclusive pinning of vCPUs is used, sometimes, for > achieving the highest level of determinism, and the > least possible overhead, for the vCPUs in question. > > Although static 1:1 pinning is not recommended, for > general use cases, optimizing the

[Xen-devel] [PATCH] xen-blkfront: Fix handling of non-supported operations

2017-07-21 Thread Bart Van Assche
This patch fixes the following sparse warnings: drivers/block/xen-blkfront.c:916:45: warning: incorrect type in argument 2 (different base types) drivers/block/xen-blkfront.c:916:45:expected restricted blk_status_t [usertype] error drivers/block/xen-blkfront.c:916:45:got int [signed] err

Re: [Xen-devel] [PATCH 4/6] xen: credit2: rearrange members of control structures

2017-07-21 Thread George Dunlap
On 06/23/2017 11:55 AM, Dario Faggioli wrote: > With the aim of improving memory size and layout, and > at the same time trying to put related fields reside > in the same cacheline. > > Here's a summary of the output of `pahole`, with and > without this patch, for the affected data structures. >

Re: [Xen-devel] [PATCH 5/6] xen: RTDS: rearrange members of control structures

2017-07-21 Thread George Dunlap
On 06/23/2017 11:55 AM, Dario Faggioli wrote: > Nothing changed in `pahole` output, in terms of holes > and padding, but some fields have been moved, to put > related members in same cache line. > > Signed-off-by: Dario Faggioli Acked-by: George Dunlap > --- > Cc: Meng Xu > Cc: George Dunlap

Re: [Xen-devel] [PATCH] docs: fix superpage default value

2017-07-21 Thread Konrad Rzeszutek Wilk
On Fri, Jul 21, 2017 at 05:51:02PM +0100, Wei Liu wrote: > On Fri, Jul 21, 2017 at 12:44:18PM -0400, Konrad Rzeszutek Wilk wrote: > > On Thu, Jul 20, 2017 at 01:57:17PM +0100, Wei Liu wrote: > > > On Thu, Jul 20, 2017 at 12:49:37PM +0100, Andrew Cooper wrote: > > > > On 20/07/17 12:47, Wei Liu wrot

Re: [Xen-devel] [xen-devel][xen/Arm]xen fail to boot on omap5 board

2017-07-21 Thread Andrii Anisov
Hello Julien, On 21.07.17 15:52, Julien Grall wrote: This is very early boot in head.S so having the full log will not really help here... What is more interesting is where the different modules have been loaded in memory: - Device Tree - Kernel - Xen - Initramfs (if any) We

Re: [Xen-devel] [PATCH 3/6] xen: credit: rearrange members of control structures

2017-07-21 Thread George Dunlap
On 06/23/2017 11:55 AM, Dario Faggioli wrote: > With the aim of improving memory size and layout, and > at the same time trying to put related fields reside > in the same cacheline. > > Here's a summary of the output of `pahole`, with and > without this patch, for the affected data structures. >

Re: [Xen-devel] [PATCH 2/6] xen: credit2: make the cpu to runqueue map per-cpu

2017-07-21 Thread George Dunlap
On 06/23/2017 11:54 AM, Dario Faggioli wrote: > Instead of keeping an NR_CPUS big array of int-s, > directly inside csched2_private, use a per-cpu > variable. > > That's especially beneficial (in terms of saved > memory) when there are more instance of Credit2 (in > different cpupools), and also h

Re: [Xen-devel] [PATCH] docs: fix superpage default value

2017-07-21 Thread Wei Liu
On Fri, Jul 21, 2017 at 12:44:18PM -0400, Konrad Rzeszutek Wilk wrote: > On Thu, Jul 20, 2017 at 01:57:17PM +0100, Wei Liu wrote: > > On Thu, Jul 20, 2017 at 12:49:37PM +0100, Andrew Cooper wrote: > > > On 20/07/17 12:47, Wei Liu wrote: > > > > On Thu, Jul 20, 2017 at 12:45:38PM +0100, Roger Pau Mo

Re: [Xen-devel] [PATCH 1/6] xen: credit2: allocate runqueue data structure dynamically

2017-07-21 Thread George Dunlap
On 06/23/2017 11:54 AM, Dario Faggioli wrote: > Instead of keeping an NR_CPUS big array of csched2_runqueue_data > elements, directly inside the csched2_private structure, allocate > it dynamically. > > This has two positive effects: > - reduces the size of csched2_private sensibly, which is > e

Re: [Xen-devel] [PATCH] docs: fix superpage default value

2017-07-21 Thread Konrad Rzeszutek Wilk
On Thu, Jul 20, 2017 at 01:57:17PM +0100, Wei Liu wrote: > On Thu, Jul 20, 2017 at 12:49:37PM +0100, Andrew Cooper wrote: > > On 20/07/17 12:47, Wei Liu wrote: > > > On Thu, Jul 20, 2017 at 12:45:38PM +0100, Roger Pau Monné wrote: > > > > On Thu, Jul 20, 2017 at 12:35:56PM +0100, Wei Liu wrote: > >

Re: [Xen-devel] xen/link: Move .data.rel.ro sections into .rodata for final link

2017-07-21 Thread Andrew Cooper
On 21/07/17 11:43, Julien Grall wrote: On 20/07/17 17:54, Wei Liu wrote: On Thu, Jul 20, 2017 at 05:46:50PM +0100, Wei Liu wrote: CC relevant maintainers On Thu, Jul 20, 2017 at 05:20:43PM +0200, David Woodhouse wrote: From: David Woodhouse This includes stuff lke the hypercall tables whi

Re: [Xen-devel] Regarding hdmi sharing in xen

2017-07-21 Thread Andrii Anisov
Dear George, First I would state terms as follows: * Sharing HW - using the same hardware by different domains using PV drivers, so in practice one domain accesses the HW directly and serves the other domains. * Assigning HW - providing access to some particular HW for some particular domain. E.g.

Re: [Xen-devel] [PATCH XTF v3] Implement pv_read_some

2017-07-21 Thread Andrew Cooper
On 21/07/17 08:01, Felix Schmoll wrote: Much better. Just one final question. Do you intend this function to block until data becomes available? (because that appears to be how it behaves.) Yes. I could split it up into two functions if that bothers you. Or do you just want me

Re: [Xen-devel] [PATCH XTF] Functional: Add a UMIP test

2017-07-21 Thread Andrew Cooper
On 21/07/17 02:42, Boqun Feng wrote: On Thu, Jul 20, 2017 at 10:38:59AM +0100, Andrew Cooper wrote: On 20/07/17 06:29, Boqun Feng (Intel) wrote: Add a "umip" test for the User-Mode Instruction Prevention. The test simply tries to run sgdt/sidt/sldt/str/smsw in guest user-mode with CR4_UMIP = 1

[Xen-devel] [ovmf test] 112091: all pass - PUSHED

2017-07-21 Thread osstest service owner
flight 112091 ovmf real [real] http://logs.test-lab.xenproject.org/osstest/logs/112091/ Perfect :-) All tests in this flight passed as required version targeted for testing: ovmf 1683ecec41a7c944783c51efa75375f1e0a71d08 baseline version: ovmf 79aac4dd756bb2809cdcb

[Xen-devel] [qemu-mainline test] 112072: regressions - FAIL

2017-07-21 Thread osstest service owner
flight 112072 qemu-mainline real [real] http://logs.test-lab.xenproject.org/osstest/logs/112072/ Regressions :-( Tests which did not succeed and are blocking, including tests which could not be run: build-i386-xsm6 xen-buildfail REGR. vs. 111765 build-i386

Re: [Xen-devel] [PATCH] docs: fix superpage default value

2017-07-21 Thread Wei Liu
On Fri, Jul 21, 2017 at 05:21:26PM +0100, Andrew Cooper wrote: > On 20/07/17 13:57, Wei Liu wrote: > > On Thu, Jul 20, 2017 at 12:49:37PM +0100, Andrew Cooper wrote: > > > On 20/07/17 12:47, Wei Liu wrote: > > > > On Thu, Jul 20, 2017 at 12:45:38PM +0100, Roger Pau Monné wrote: > > > > > On Thu, Ju

Re: [Xen-devel] [PATCH v4 4/4] Xentrace: add support for HVM's PI blocking list operation

2017-07-21 Thread George Dunlap
On Fri, Jul 7, 2017 at 7:49 AM, Chao Gao wrote: > In order to analyze PI blocking list operation frequency and obtain > the list length, add some relevant events to xentrace and some > associated code in xenalyze. Event ASYNC_PI_LIST_DEL may happen in interrupt > context, which incurs current assu

Re: [Xen-devel] [PATCH] docs: fix superpage default value

2017-07-21 Thread Andrew Cooper
On 20/07/17 13:57, Wei Liu wrote: On Thu, Jul 20, 2017 at 12:49:37PM +0100, Andrew Cooper wrote: On 20/07/17 12:47, Wei Liu wrote: On Thu, Jul 20, 2017 at 12:45:38PM +0100, Roger Pau Monné wrote: On Thu, Jul 20, 2017 at 12:35:56PM +0100, Wei Liu wrote: The code says it defaults to false. Sig

[Xen-devel] [PATCH] xen/pvcalls: use WARN_ON(1) instead of __WARN()

2017-07-21 Thread Arnd Bergmann
__WARN() is an internal helper that is only available on some architectures, but causes a build error e.g. on ARM64 in some configurations: drivers/xen/pvcalls-back.c: In function 'set_backend_state': drivers/xen/pvcalls-back.c:1097:5: error: implicit declaration of function '__WARN' [-Werror=imp

Re: [Xen-devel] [PATCH v4 3/4] VT-d PI: restrict the vcpu number on a given pcpu

2017-07-21 Thread George Dunlap
On Fri, Jul 7, 2017 at 7:48 AM, Chao Gao wrote: > Currently, a blocked vCPU is put in its pCPU's pi blocking list. If > too many vCPUs are blocked on a given pCPU, the list > can grow too long. After a simple analysis, there are 32k domains and > 128 vCPUs per domain, thus about 4M

[Xen-devel] Notes from Design Session: Solving Community Problems: Patch Volume vs Review Bandwidth, Community Meetings ... and other problems

2017-07-21 Thread Lars Kurth
Hi all, please find attached my notes. Lars Session URL: http://sched.co/AjB3 ACTIONS on Lars, Andy and Juergen ACTIONS on Stefano and Julien Community Call == This was a discussion about whether we should do more community calls, in critical areas. The background was whether we sh

Re: [Xen-devel] [PATCH v4 1/4] VT-d PI: track the vcpu number on pi blocking list

2017-07-21 Thread George Dunlap
On Fri, Jul 7, 2017 at 7:48 AM, Chao Gao wrote: > This patch adds a field, counter, in struct vmx_pi_blocking_vcpu to track > how many entries are on the pi blocking list. > > Signed-off-by: Chao Gao Minor nit: The grammar in the title isn't quite right; "vcpu number" would be "the number ident

[Xen-devel] [xen-unstable test] 112065: regressions - FAIL

2017-07-21 Thread osstest service owner
flight 112065 xen-unstable real [real] http://logs.test-lab.xenproject.org/osstest/logs/112065/ Regressions :-( Tests which did not succeed and are blocking, including tests which could not be run: test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-localmigrate/x10 fail REGR. vs. 112004 Regressions

Re: [Xen-devel] [xen-unstable test] 112033: regressions - trouble: broken/fail/pass

2017-07-21 Thread Julien Grall
Hi, On 20/07/17 20:01, osstest service owner wrote: > flight 112033 xen-unstable real [real] > http://logs.test-lab.xenproject.org/osstest/logs/112033/ > > Regressions :-( > > Tests which did not succeed and are blocking, > including tests which could not be run: > test-amd64-i386-xl-qemuu-ovmf

Re: [Xen-devel] [Bug] Intel RMRR support with upstream Qemu

2017-07-21 Thread Alexey G
> On Fri, 21 Jul 2017 10:57:55 + > "Zhang, Xiong Y" wrote: > > > On an intel skylake machine with upstream qemu, if I add > > "rdm=strategy=host, policy=strict" to hvm.cfg, win 8.1 DomU couldn't > > boot up and continues reboot. > > > > Steps to reproduce this issue: > > > > 1) Boot x

Re: [Xen-devel] [PULL for-2.10 6/7] xen/mapcache: introduce xen_replace_cache_entry()

2017-07-21 Thread Anthony PERARD
On Tue, Jul 18, 2017 at 03:22:41PM -0700, Stefano Stabellini wrote: > From: Igor Druzhinin ... > +static uint8_t *xen_replace_cache_entry_unlocked(hwaddr old_phys_addr, > + hwaddr new_phys_addr, > + h

Re: [Xen-devel] [Bug] Intel RMRR support with upstream Qemu

2017-07-21 Thread Alexey G
Hi, On Fri, 21 Jul 2017 10:57:55 + "Zhang, Xiong Y" wrote: > On an intel skylake machine with upstream qemu, if I add > "rdm=strategy=host, policy=strict" to hvm.cfg, win 8.1 DomU couldn't boot > up and continues reboot. > > Steps to reproduce this issue: > > 1) Boot xen with iommu=1

[Xen-devel] Notes from Design Summit Hypervisor Fuzzing Session

2017-07-21 Thread Lars Kurth
Hi all, please find attached my notes. A lot of it went over my head, so I may have gotten things wrong and some are missing. Feel free to modify, chip in, clarify, as needed. Lars Session URL: http://sched.co/AjHN OPTION 1: Userspace Approach Dom0 Domu [AFL] [VM ne

Re: [Xen-devel] [PATCH] xen:Kconfig: Make SCIF built by default for ARM

2017-07-21 Thread Julien Grall
Hi Andrii, Please CC the relevant maintainers when sending a patch (or questions regarding a specific subsystems) on the ML. On 18/07/17 17:45, Andrii Anisov wrote: From: Andrii Anisov Both Renesas R-Car Gen2(ARM32) and Gen3(ARM64) are utilizing SCIF IP, so make its serial driver built by d

Re: [Xen-devel] [xen-devel][xen/Arm]xen fail to boot on omap5 board

2017-07-21 Thread Julien Grall
On 18/07/17 10:50, Andrii Anisov wrote: Dear Shishir, On 18.07.17 12:05, shishir tiwari wrote: Hi, I want to test and understand the Xen hypervisor implementation with dom0 and domU on an omap5 board. I followed https://wiki.xenproject.org/wiki/Xen_ARM_with_Virtualization_Extensions/OMAP5432_uEVM wi

Re: [Xen-devel] [RFC PATCH v3 13/24] ARM: NUMA: DT: Parse memory NUMA information

2017-07-21 Thread Julien Grall
On 21/07/17 12:10, Vijay Kilari wrote: Hi Julien, On Thu, Jul 20, 2017 at 4:56 PM, Julien Grall wrote: On 19/07/17 19:39, Julien Grall wrote: cell = (const __be32 *)prop->data; banks = fdt32_to_cpu(prop->len) / (reg_cells * sizeof (u32)); -for ( i = 0; i < banks && bootinf

Re: [Xen-devel] [RFC PATCH v3 13/24] ARM: NUMA: DT: Parse memory NUMA information

2017-07-21 Thread Vijay Kilari
Hi Julien, On Thu, Jul 20, 2017 at 4:56 PM, Julien Grall wrote: > > > On 19/07/17 19:39, Julien Grall wrote: >>> >>> cell = (const __be32 *)prop->data; >>> banks = fdt32_to_cpu(prop->len) / (reg_cells * sizeof (u32)); >>> >>> -for ( i = 0; i < banks && bootinfo.mem.nr_banks < NR_MEM

[Xen-devel] [Bug] Intel RMRR support with upstream Qemu

2017-07-21 Thread Zhang, Xiong Y
On an Intel Skylake machine with upstream QEMU, if I add "rdm=strategy=host, policy=strict" to hvm.cfg, a Windows 8.1 DomU couldn't boot up and keeps rebooting. Steps to reproduce this issue: 1) Boot xen with iommu=1 to enable iommu 2) hvm.cfg contains: builder="hvm" memory= disk=[

Re: [Xen-devel] xen/link: Move .data.rel.ro sections into .rodata for final link

2017-07-21 Thread Julien Grall
On 20/07/17 17:54, Wei Liu wrote: On Thu, Jul 20, 2017 at 05:46:50PM +0100, Wei Liu wrote: CC relevant maintainers On Thu, Jul 20, 2017 at 05:20:43PM +0200, David Woodhouse wrote: From: David Woodhouse This includes stuff lke the hypercall tables which we really want lke -> like to be
