This run is configured for baseline tests only.
flight 71729 ovmf real [real]
http://osstest.xs.citrite.net/~osstest/testlogs/logs/71729/
Perfect :-)
All tests in this flight passed as required
version targeted for testing:
ovmf 1683ecec41a7c944783c51efa75375f1e0a71d08
baseline v
flight 112086 linux-4.9 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/112086/
Regressions :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
test-amd64-amd64-xl-qemut-win7-amd64 16 guest-localmigrate/x10 fail REGR. vs.
111883
Tests which di
flight 112083 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/112083/
Regressions :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-boot fail REGR.
vs. 110515
test-amd64
On 7/21/2017 9:42 PM, Huang, Kai wrote:
On 7/20/2017 5:27 AM, Andrew Cooper wrote:
On 09/07/17 09:09, Kai Huang wrote:
This patch handles IA32_FEATURE_CONTROL and IA32_SGXLEPUBKEYHASHn MSRs.
For IA32_FEATURE_CONTROL, if SGX is exposed to the domain, then the
SGX_ENABLE bit
is always set. If SGX l
On 7/21/2017 9:17 PM, Huang, Kai wrote:
On 7/20/2017 2:23 AM, Andrew Cooper wrote:
On 09/07/17 09:09, Kai Huang wrote:
This patch adds early stage SGX feature detection via SGX CPUID 0x12.
Function
detect_sgx is added to detect SGX info on each CPU (called from
vmx_cpu_up).
SDM says SGX in
From: Igor Druzhinin
Signed-off-by: Igor Druzhinin
Reviewed-by: Stefano Stabellini
---
hw/i386/xen/xen-mapcache.c | 9 +
1 file changed, 5 insertions(+), 4 deletions(-)
diff --git a/hw/i386/xen/xen-mapcache.c b/hw/i386/xen/xen-mapcache.c
index 2a1fbd1..bb1078c 100644
--- a/hw/i386/xen
The following changes since commit 91939262ffcd3c85ea6a4793d3029326eea1d649:
configure: Drop ancient Solaris 9 and earlier support (2017-07-21 15:04:05
+0100)
are available in the git repository at:
git://xenbits.xen.org/people/sstabellini/qemu-dm.git tags/xen-20170721-tag
for you to
From: Alexey G
Under certain circumstances normal xen-mapcache functioning may be broken
by a guest's actions. This may lead to either QEMU performing exit() due to
a caught bad pointer (and with the QEMU process gone the guest domain simply
appears hung afterwards) or actual use of the incorrect point
On Thu, 20 Jul 2017, Alexey G wrote:
> On Wed, 19 Jul 2017 11:00:26 -0700 (PDT)
> Stefano Stabellini wrote:
>
> > My expectation is that unlocked mappings are much more frequent than
> > locked mappings. Also, I expect that only very rarely we'll be able to
> > reuse locked mappings. Over the cou
On Fri, 21 Jul 2017, Igor Druzhinin wrote:
> On 21/07/17 14:50, Anthony PERARD wrote:
> > On Tue, Jul 18, 2017 at 03:22:41PM -0700, Stefano Stabellini wrote:
> > > From: Igor Druzhinin
> >
> > ...
> >
> > > +static uint8_t *xen_replace_cache_entry_unlocked(hwaddr old_phys_addr,
> > > +
Send PVCALLS_CONNECT to the backend. Allocate a new ring and evtchn for
the active socket.
Introduce a data structure to keep track of sockets. Introduce a
waitqueue to allow the frontend to wait on data coming from the backend
on the active socket (recvmsg command).
Two mutexes (one for reads and
Implement recvmsg by copying data from the "in" ring. If not enough data
is available and the recvmsg call is blocking, then wait on the
inflight_conn_req waitqueue. Take the active socket in_mutex so that
only one function can access the ring at any given time.
If not enough data is available on
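(A minimal sketch of the recvmsg flow described above; the helpers
pvcalls_front_read_todo() and copy_from_in_ring() and the field names are
assumptions taken from this description, not the actual driver.)

    /* Sketch only: serialise readers with in_mutex, block until the "in"
     * ring has data, then copy it out. All names are assumptions. */
    mutex_lock(&map->active.in_mutex);
    if (wait_event_interruptible(map->active.inflight_conn_req,
                                 pvcalls_front_read_todo(map)))
        ret = -EINTR;
    else
        ret = copy_from_in_ring(map, &msg->msg_iter, len);
    mutex_unlock(&map->active.in_mutex);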
Send PVCALLS_RELEASE to the backend and wait for a reply. Take both
in_mutex and out_mutex to avoid concurrent accesses. Then, free the
socket.
Signed-off-by: Stefano Stabellini
CC: boris.ostrov...@oracle.com
CC: jgr...@suse.com
---
drivers/xen/pvcalls-front.c | 86 ++
Send PVCALLS_LISTEN to the backend.
Signed-off-by: Stefano Stabellini
CC: boris.ostrov...@oracle.com
CC: jgr...@suse.com
---
drivers/xen/pvcalls-front.c | 49 +
drivers/xen/pvcalls-front.h | 1 +
2 files changed, 50 insertions(+)
diff --git a/drivers
Send data to an active socket by copying data to the "out" ring. Take
the active socket out_mutex so that only one function can access the
ring at any given time.
If not enough room is available on the ring, rather than returning
immediately or sleep-waiting, spin for up to 5000 cycles. This small
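(A minimal sketch of that bounded spin, assuming a hypothetical
pvcalls_front_write_todo() helper that reports free room on the "out" ring.)

    /* Spin briefly instead of sleeping; 5000 is the cycle budget quoted
     * above. If the ring is still full afterwards, fall back to a short
     * or zero-byte write instead of blocking. */
    count = 0;
    while (!pvcalls_front_write_todo(map)) {
        if (count++ >= 5000)
            break;
        cpu_relax();
    }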
Also add pvcalls-front to the Makefile.
Signed-off-by: Stefano Stabellini
CC: boris.ostrov...@oracle.com
CC: jgr...@suse.com
---
drivers/xen/Kconfig | 9 +
drivers/xen/Makefile | 1 +
2 files changed, 10 insertions(+)
diff --git a/drivers/xen/Kconfig b/drivers/xen/Kconfig
index 4545561
Hi all,
this series introduces the frontend for the newly introduced PV Calls
protocol.
PV Calls is a paravirtualized protocol that allows the implementation of
a set of POSIX functions in a different domain. The PV Calls frontend
sends POSIX function calls to the backend, which implements them a
Send PVCALLS_BIND to the backend. Introduce a new structure, part of
struct sock_mapping, to store information specific to passive sockets.
Introduce a status field to keep track of the status of the passive
socket.
Introduce a waitqueue for the "accept" command (see the accept command
implementa
Introduce a xenbus frontend for the pvcalls protocol, as defined by
https://xenbits.xen.org/docs/unstable/misc/pvcalls.html.
This patch only adds the stubs, the code will be added by the following
patches.
Signed-off-by: Stefano Stabellini
CC: boris.ostrov...@oracle.com
CC: jgr...@suse.com
---
For active sockets, check the indexes and use the inflight_conn_req
waitqueue to wait.
For passive sockets, send PVCALLS_POLL to the backend. Use the
inflight_accept_req waitqueue if an accept is outstanding. Otherwise use
the inflight_req waitqueue: inflight_req is awakened when a new response
is r
Send a PVCALLS_SOCKET command to the backend, use the masked
req_prod_pvt as req_id. This way, req_id is guaranteed to be between 0
and PVCALLS_NR_REQ_PER_RING. We already have a slot in the rsp array
ready for the response, and there cannot be two outstanding responses
with the same req_id.
Wait
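(The req_id scheme described above amounts to a one-line mask; this sketch
assumes PVCALLS_NR_REQ_PER_RING is a power of two and that the command ring
uses the standard Xen ring macros.)

    /* req_prod_pvt only ever grows, so masking it yields an id in
     * [0, PVCALLS_NR_REQ_PER_RING) that cannot collide with any other
     * outstanding request on this ring. */
    req_id = bedata->ring.req_prod_pvt & (PVCALLS_NR_REQ_PER_RING - 1);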
Implement the probe function for the pvcalls frontend. Read the
supported versions, max-page-order and function-calls nodes from
xenstore.
Introduce a data structure named pvcalls_bedata. It contains pointers to
the command ring, the event channel, a list of active sockets and a list
of passive so
Send PVCALLS_ACCEPT to the backend. Allocate a new active socket. Make
sure that only one accept command is executed at any given time by
setting PVCALLS_FLAG_ACCEPT_INFLIGHT and waiting on the
inflight_accept_req waitqueue.
sock->sk->sk_send_head is not used for IP sockets: reuse the field to
sto
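(A minimal sketch of the single-accept rule above; the flag, waitqueue and
field names follow this description and may differ from the final driver.)

    /* Only one accept may be in flight at a time: either grab the flag
     * now, or wait until the current accept completes and releases it. */
    if (test_and_set_bit(PVCALLS_FLAG_ACCEPT_INFLIGHT,
                         (void *)&map->passive.flags)) {
        if (wait_event_interruptible(map->passive.inflight_accept_req,
                !test_and_set_bit(PVCALLS_FLAG_ACCEPT_INFLIGHT,
                                  (void *)&map->passive.flags)))
            return -EINTR;
    }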
Implement pvcalls frontend removal function. Go through the list of
active and passive sockets and free them all, one at a time.
Signed-off-by: Stefano Stabellini
CC: boris.ostrov...@oracle.com
CC: jgr...@suse.com
---
drivers/xen/pvcalls-front.c | 28
1 file changed,
On 07/22/2017 12:33 AM, Tamas K Lengyel wrote:
> Hey Razvan,
Hello,
> the vm_event that is being generated by doing
> VM_EVENT_FLAG_GET_NEXT_INTERRUPT sends almost all required information
> about the interrupt to the listener to allow it to get reinjected,
> except the instruction length. If the
Hey Razvan,
the vm_event that is being generated by doing
VM_EVENT_FLAG_GET_NEXT_INTERRUPT sends almost all required information
about the interrupt to the listener to allow it to get reinjected,
except the instruction length. If the listener wants to reinject the
interrupt to the guest via xc_hvm_
flight 112081 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/112081/
Failures :-/ but no regressions.
Tests which did not succeed, but are not blocking:
test-armhf-armhf-libvirt 14 saverestore-support-check fail like 112036
test-armhf-armhf-libvirt-xsm 14 saveresto
On Fri, Jul 21, 2017 at 3:17 AM, Juergen Gross wrote:
> drivers/xen/pvcalls-back.c | 1236
>
This really doesn't look like a fix.
The merge window is over.
So I'm not pulling this without way more explanations of why I should.
Linu
Hi Juergen,
On 07/21/2017 02:36 AM, Juergen Gross wrote:
On 04/07/17 20:34, Gustavo A. R. Silva wrote:
Remove unnecessary static on local variables last_frontswap_pages and
tgt_frontswap_pages. Such variables are initialized before being used,
on every execution path throughout the function. Th
On Fri, Jul 21, 2017 at 8:55 PM, Dario Faggioli
wrote:
> On Fri, 2017-07-21 at 18:19 +0100, George Dunlap wrote:
>> On 06/23/2017 11:55 AM, Dario Faggioli wrote:
>> > diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
>> > index 4f6330e..85e014d 100644
>> > --- a/xen/common/sched_c
Since a VCPU can own multiple IRQs, the natural locking order is to take
a VCPU lock first, then the individual per-IRQ locks.
However there are situations where the target VCPU is not known without
looking into the struct pending_irq first, which usually means we need to
take the IRQ lock first.
T
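(As an illustration of that ordering rule: a sketch with placeholder names,
where p->lock stands for the per-IRQ lock this series introduces.)

    /* Correct order: per-VCPU VGIC lock first, then the per-IRQ lock. */
    spin_lock_irqsave(&v->arch.vgic.lock, flags);
    spin_lock(&p->lock);        /* struct pending_irq *p */
    /* ... manipulate the pending_irq ... */
    spin_unlock(&p->lock);
    spin_unlock_irqrestore(&v->arch.vgic.lock, flags);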
The VCPU a shared virtual IRQ is targeting is currently stored in the
irq_rank structure.
For LPIs we already store the target VCPU in struct pending_irq, so
move SPIs over as well.
The ITS code, which was using this field already, was so far using the
VCPU lock to protect the pending_irq, so move
Instead of using an atomic access and hoping for the best, let's use
the new pending_irq lock now to make sure we read a sane version of
the target VCPU.
That still doesn't solve the problem mentioned in the comment, but
paves the way for future improvements.
Signed-off-by: Andre Przywara
---
xe
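(A sketch of what reading a consistent target VCPU under the new lock might
look like; the vcpu_id field name is an assumption based on the related
patches in this series.)

    /* Take the per-IRQ lock so the target cannot change under our feet. */
    spin_lock_irqsave(&p->lock, flags);
    v = d->vcpu[p->vcpu_id];
    spin_unlock_irqrestore(&p->lock, flags);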
As the priority value is now officially a member of struct pending_irq,
we need to take its lock when manipulating it via ITS commands.
Make sure we take the IRQ lock after the VCPU lock when we need both.
Signed-off-by: Andre Przywara
---
xen/arch/arm/vgic-v3-its.c | 26 +++-
The IRQ configuration (level or edge triggered) for a group of IRQs
is still stored in the irq_rank structure.
Introduce a new bit called GIC_IRQ_GUEST_LEVEL in the "status" field,
which holds that information.
Remove the storage from the irq_rank and use the existing wrappers to
store and retriev
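(Using the existing status-bit wrappers, storing and retrieving the
configuration might look like this sketch; the bit name comes from the text
above, and level_triggered is a placeholder for the value decoded from the
guest's ICFGR write.)

    /* Store the configuration in the per-IRQ status field. */
    if ( level_triggered )
        set_bit(GIC_IRQ_GUEST_LEVEL, &p->status);
    else
        clear_bit(GIC_IRQ_GUEST_LEVEL, &p->status);

    /* Retrieve it later without any irq_rank involvement. */
    level = test_bit(GIC_IRQ_GUEST_LEVEL, &p->status);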
Currently there is a gic_raise_inflight_irq(), which serves the very
special purpose of handling a newly injected interrupt while an older
one is still handled. This has only one user, in vgic_vcpu_inject_irq().
Now with the introduction of the pending_irq lock this will later on
result in a nasty
In preparation for storing the virtual interrupt priority in the struct
pending_irq, rename the existing "priority" member to "cur_priority".
This is to signify that this is the current priority of an interrupt
which has been injected to a VCPU. Once this happened, its priority must
stay fixed at t
When we return from a domain with the active bit set in an LR,
we update our pending_irq accordingly. This touches multiple status
bits, so requires the pending_irq lock.
Signed-off-by: Andre Przywara
---
xen/arch/arm/gic.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/xen/arch/arm/gic.c
gic_events_need_delivery() reads the cur_priority field twice and also
relies on the consistency of the status bits,
so it should take the pending_irq lock.
Signed-off-by: Andre Przywara
---
xen/arch/arm/gic.c | 24 +---
1 file changed, 13 insertions(+), 11 deletions(-)
diff --git a/xe
The enabled bits for a group of IRQs are still stored in the irq_rank
structure, although we already have the same information in pending_irq,
in the GIC_IRQ_GUEST_ENABLED bit of the "status" field.
Remove the storage from the irq_rank and just utilize the existing
wrappers to cover enabling/disabl
For "historical" reasons we used to pass a vCPU pointer to
vgic_get_target_vcpu(), which was only considered to distinguish private
IRQs. Now since we have the unique pending_irq pointer already, we don't
need the vCPU anymore, but just the domain.
So change this function to avoid a rather hackish
So far a virtual interrupt's priority is stored in the irq_rank
structure, which covers multiple IRQs and has a single lock for this
group.
Generalize the already existing priority variable in struct pending_irq
to not only cover LPIs, but every IRQ. Access to this value is protected
by the per-IRQ
When putting a (pending) IRQ into an LR, we had better make sure that
no-one changes it behind our back. So make sure we take the pending_irq
lock. This bubbles up to all users of gic_add_to_lr_pending() and
gic_raise_guest_irq().
Signed-off-by: Andre Przywara
---
xen/arch/arm/gic.c | 14
Since we will soon store a virtual IRQ's target VCPU in struct pending_irq,
generalise the existing storage for an LPI's target to cover all IRQs.
This just renames "lpi_vcpu_id" to "vcpu_id", but doesn't change anything
else yet.
Signed-off-by: Andre Przywara
---
xen/arch/arm/gic-v3-lpi.c | 2
Since we will soon store a virtual IRQ's priority in struct pending_irq,
generalise the existing storage for an LPI's priority to cover all IRQs.
This just renames "lpi_priority" to "priority", but doesn't change
anything else yet.
Signed-off-by: Andre Przywara
---
xen/arch/arm/vgic-v3-its.c | 4
Currently we protect the pending_irq structure with the corresponding
VGIC VCPU lock. There are problems in certain corner cases (for
instance if an IRQ is migrating), so let's introduce a per-IRQ lock,
which will protect the consistency of this structure independent from
any VCPU.
For now this jus
Now that we no longer need the struct vgic_irq_rank, we can remove the
definition and all the helper functions.
Signed-off-by: Andre Przywara
---
xen/arch/arm/vgic.c | 54
xen/include/asm-arm/domain.h | 6 +
xen/include/asm-arm/vgic.h
For now vgic_get_target_vcpu takes a VCPU and an IRQ number, because
this is what we need for finding the proper rank and the VCPU in there.
In the future the VCPU will be looked up in the struct pending_irq.
To avoid locking issues, let's pass the pointer to the pending_irq
instead. We can read th
Hi,
this is the first part of the attempt to rewrite the VGIC to solve the
issues we discovered when adding the ITS emulation.
The problems we identified resulted in the following list of things that
need fixing:
1) introduce a per-IRQ lock
2) remove the IRQ rank scheme (of storing IRQ properties)
For LPIs we stored the priority value in struct pending_irq, but all
other types of IRQs were using the irq_rank structure for that.
Now that every IRQ is using pending_irq, we can remove the special handling
we had in place for LPIs and just use the now unified access wrappers.
Signed-off-by: Andre P
So far the rank lock is protecting the physical IRQ routing for a
particular virtual IRQ (though this doesn't seem to be documented
anywhere). So although these functions don't really touch the rank
structure, the lock prevents them from running concurrently.
This seems a bit like a kludge, so as w
When replacing the rank lock with individual per-IRQs lock soon, we will
still need the ability to lock multiple IRQs.
Provide two helper routines which lock and unlock a number of consecutive
IRQs in the right order.
Forward-looking, the locking function fills an array of pending_irq
pointers, so t
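(A sketch of such a pair of helpers; the names and the pending_irq "lock"
field are assumptions based on this series.)

    /* Lock a consecutive IRQ range in ascending order (a fixed order
     * prevents ABBA deadlocks) and remember the pending_irq pointers. */
    static void vgic_lock_irqs(struct vcpu *v, unsigned int first,
                               unsigned int nr, struct pending_irq **out)
    {
        unsigned int i;

        for ( i = 0; i < nr; i++ )
        {
            out[i] = irq_to_pending(v, first + i);
            spin_lock(&out[i]->lock);
        }
    }

    static void vgic_unlock_irqs(struct pending_irq **irqs, unsigned int nr)
    {
        unsigned int i;

        for ( i = 0; i < nr; i++ )
            spin_unlock(&irqs[i]->lock);
    }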
Since the GIC's MMIO access always covers a number of IRQs at once,
introduce wrapper functions which loop over those IRQs, take their
locks and read or update the priority values.
This will be used in a later patch.
Signed-off-by: Andre Przywara
---
xen/arch/arm/vgic.c| 37 ++
On Fri, 2017-07-21 at 18:19 +0100, George Dunlap wrote:
> On 06/23/2017 11:55 AM, Dario Faggioli wrote:
> > diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
> > index 4f6330e..85e014d 100644
> > --- a/xen/common/sched_credit.c
> > +++ b/xen/common/sched_credit.c
> > @@ -429,6 +429
On Fri, 2017-07-21 at 18:05 +0100, George Dunlap wrote:
> On 06/23/2017 11:55 AM, Dario Faggioli wrote:
> >
> > While there, improve the wording, style and alignment
> > of comments too.
> >
> > Signed-off-by: Dario Faggioli
>
> I haven't taken a careful look at these; the idea sounds good and
On Fri, 2017-07-21 at 13:51 -0400, Meng Xu wrote:
> On Fri, Jun 23, 2017 at 6:55 AM, Dario Faggioli
> wrote:
> >
> > Nothing changed in `pahole` output, in terms of holes
> > and padding, but some fields have been moved, to put
> > related members in same cache line.
> >
> > Signed-off-by: Dario
flight 112104 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/112104/
Failures :-/ but no regressions.
Tests which did not succeed, but are not blocking:
test-arm64-arm64-xl-xsm 1 build-check(1) blocked n/a
test-amd64-amd64-libvirt 13 mig
On Fri, 21 Jul 2017, Julien Grall wrote:
> Hi,
>
> On 18/07/17 21:07, Stefano Stabellini wrote:
> > On Mon, 17 Jul 2017, Bhupinder Thakur wrote:
> > > This patch finally adds the support for vuart console. It adds
> > > two new fields in the console initialization:
> > >
> > > - optional
> > > -
On Fri, 21 Jul 2017, Julien Grall wrote:
> > > @x86_cacheattr can be 'uc', 'wc', 'wt', 'wp', 'wb' or 'suc'.
> > > Default
> > > is 'wb'.
> >
> > Also here, I would write:
> >
> > @x86_cacheattr Only 'wb' (write-back) is supported today.
> >
> > Like you wrote la
On Fri, 21 Jul 2017, Arnd Bergmann wrote:
> __WARN() is an internal helper that is only available on
> some architectures, but causes a build error e.g. on ARM64
> in some configurations:
>
> drivers/xen/pvcalls-back.c: In function 'set_backend_state':
> drivers/xen/pvcalls-back.c:1097:5: error: i
On 21/07/17 14:50, Anthony PERARD wrote:
On Tue, Jul 18, 2017 at 03:22:41PM -0700, Stefano Stabellini wrote:
From: Igor Druzhinin
...
+static uint8_t *xen_replace_cache_entry_unlocked(hwaddr old_phys_addr,
+ hwaddr new_phys_addr,
+
On 07/21/2017 12:17 PM, Arnd Bergmann wrote:
> __WARN() is an internal helper that is only available on
> some architectures, but causes a build error e.g. on ARM64
> in some configurations:
>
> drivers/xen/pvcalls-back.c: In function 'set_backend_state':
> drivers/xen/pvcalls-back.c:1097:5: error:
On Fri, Jun 23, 2017 at 6:55 AM, Dario Faggioli
wrote:
>
> Nothing changed in `pahole` output, in terms of holes
> and padding, but some fields have been moved, to put
> related members in same cache line.
>
> Signed-off-by: Dario Faggioli
> ---
> Cc: Meng Xu
> Cc: George Dunlap
> ---
> xen/co
flight 112085 linux-3.18 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/112085/
Regressions :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
test-armhf-armhf-xl-arndale 4 host-install(4) broken REGR. vs. 111920
test-armhf-armhf-lib
On 06/23/2017 11:55 AM, Dario Faggioli wrote:
> Exclusive pinning of vCPUs is used, sometimes, for
> achieving the highest level of determinism, and the
> least possible overhead, for the vCPUs in question.
>
> Although static 1:1 pinning is not recommended, for
> general use cases, optimizing the
This patch fixes the following sparse warnings:
drivers/block/xen-blkfront.c:916:45: warning: incorrect type in argument 2
(different base types)
drivers/block/xen-blkfront.c:916:45:    expected restricted blk_status_t
[usertype] error
drivers/block/xen-blkfront.c:916:45:    got int [signed] err
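(Hedged illustration only: the warning says a plain int is being passed
where the block layer now expects the restricted blk_status_t type, so the
fix is to hand over a genuine blk_status_t value, e.g.:)

    /* Convert the errno-style int into the type the block core expects. */
    blk_status_t status = err ? BLK_STS_IOERR : BLK_STS_OK;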
On 06/23/2017 11:55 AM, Dario Faggioli wrote:
> With the aim of improving memory size and layout, and
> at the same time trying to put related fields reside
> in the same cacheline.
>
> Here's a summary of the output of `pahole`, with and
> without this patch, for the affected data structures.
>
On 06/23/2017 11:55 AM, Dario Faggioli wrote:
> Nothing changed in `pahole` output, in terms of holes
> and padding, but some fields have been moved, to put
> related members in same cache line.
>
> Signed-off-by: Dario Faggioli
Acked-by: George Dunlap
> ---
> Cc: Meng Xu
> Cc: George Dunlap
On Fri, Jul 21, 2017 at 05:51:02PM +0100, Wei Liu wrote:
> On Fri, Jul 21, 2017 at 12:44:18PM -0400, Konrad Rzeszutek Wilk wrote:
> > On Thu, Jul 20, 2017 at 01:57:17PM +0100, Wei Liu wrote:
> > > On Thu, Jul 20, 2017 at 12:49:37PM +0100, Andrew Cooper wrote:
> > > > On 20/07/17 12:47, Wei Liu wrot
Hello Julien,
On 21.07.17 15:52, Julien Grall wrote:
This is very early boot in head.S so having the full log will not
really help here...
What is more interesting is where the different modules have been
loaded in memory:
- Device Tree
- Kernel
- Xen
- Initramfs (if any)
We
On 06/23/2017 11:55 AM, Dario Faggioli wrote:
> With the aim of improving memory size and layout, and
> at the same time trying to put related fields reside
> in the same cacheline.
>
> Here's a summary of the output of `pahole`, with and
> without this patch, for the affected data structures.
>
On 06/23/2017 11:54 AM, Dario Faggioli wrote:
> Instead of keeping an NR_CPUS big array of int-s,
> directly inside csched2_private, use a per-cpu
> variable.
>
> That's especially beneficial (in terms of saved
> memory) when there are more instance of Credit2 (in
> different cpupools), and also h
On Fri, Jul 21, 2017 at 12:44:18PM -0400, Konrad Rzeszutek Wilk wrote:
> On Thu, Jul 20, 2017 at 01:57:17PM +0100, Wei Liu wrote:
> > On Thu, Jul 20, 2017 at 12:49:37PM +0100, Andrew Cooper wrote:
> > > On 20/07/17 12:47, Wei Liu wrote:
> > > > On Thu, Jul 20, 2017 at 12:45:38PM +0100, Roger Pau Mo
On 06/23/2017 11:54 AM, Dario Faggioli wrote:
> Instead of keeping an NR_CPUS big array of csched2_runqueue_data
> elements, directly inside the csched2_private structure, allocate
> it dynamically.
>
> This has two positive effects:
> - reduces the size of csched2_private sensibly, which is
> e
On Thu, Jul 20, 2017 at 01:57:17PM +0100, Wei Liu wrote:
> On Thu, Jul 20, 2017 at 12:49:37PM +0100, Andrew Cooper wrote:
> > On 20/07/17 12:47, Wei Liu wrote:
> > > On Thu, Jul 20, 2017 at 12:45:38PM +0100, Roger Pau Monné wrote:
> > > > On Thu, Jul 20, 2017 at 12:35:56PM +0100, Wei Liu wrote:
> >
On 21/07/17 11:43, Julien Grall wrote:
On 20/07/17 17:54, Wei Liu wrote:
On Thu, Jul 20, 2017 at 05:46:50PM +0100, Wei Liu wrote:
CC relevant maintainers
On Thu, Jul 20, 2017 at 05:20:43PM +0200, David Woodhouse wrote:
From: David Woodhouse
This includes stuff like the hypercall tables whi
Dear George,
First I would state terms as following:
* Sharing HW - using the same hardware by different domains using PV
drivers, so actually one domain accessing the HW directly and serves
other domains.
* Assigning HW - providing access to some particular HW for some
particular domain. E.g.
On 21/07/17 08:01, Felix Schmoll wrote:
Much better. Just one final question. Do you intend this
function to block until data becomes available? (because that
appears to be how it behaves.)
Yes. I could split it up into two functions if that bothers you. Or do
you just want me
On 21/07/17 02:42, Boqun Feng wrote:
On Thu, Jul 20, 2017 at 10:38:59AM +0100, Andrew Cooper wrote:
On 20/07/17 06:29, Boqun Feng (Intel) wrote:
Add a "umip" test for the User-Model Instruction Prevention. The test
simply tries to run sgdt/sidt/sldt/str/smsw in guest user-mode with
CR4_UMIP = 1
flight 112091 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/112091/
Perfect :-)
All tests in this flight passed as required
version targeted for testing:
ovmf 1683ecec41a7c944783c51efa75375f1e0a71d08
baseline version:
ovmf 79aac4dd756bb2809cdcb
flight 112072 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/112072/
Regressions :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
build-i386-xsm 6 xen-build fail REGR. vs. 111765
build-i386
On Fri, Jul 21, 2017 at 05:21:26PM +0100, Andrew Cooper wrote:
> On 20/07/17 13:57, Wei Liu wrote:
> > On Thu, Jul 20, 2017 at 12:49:37PM +0100, Andrew Cooper wrote:
> > > On 20/07/17 12:47, Wei Liu wrote:
> > > > On Thu, Jul 20, 2017 at 12:45:38PM +0100, Roger Pau Monné wrote:
> > > > > On Thu, Ju
On Fri, Jul 7, 2017 at 7:49 AM, Chao Gao wrote:
> In order to analyze PI blocking list operation frequency and obtain
> the list length, add some relevant events to xentrace and some
> associated code in xenalyze. Event ASYNC_PI_LIST_DEL may happen in interrupt
> context, which incurs current assu
On 20/07/17 13:57, Wei Liu wrote:
On Thu, Jul 20, 2017 at 12:49:37PM +0100, Andrew Cooper wrote:
On 20/07/17 12:47, Wei Liu wrote:
On Thu, Jul 20, 2017 at 12:45:38PM +0100, Roger Pau Monné wrote:
On Thu, Jul 20, 2017 at 12:35:56PM +0100, Wei Liu wrote:
The code says it defaults to false.
Sig
__WARN() is an internal helper that is only available on
some architectures, but causes a build error e.g. on ARM64
in some configurations:
drivers/xen/pvcalls-back.c: In function 'set_backend_state':
drivers/xen/pvcalls-back.c:1097:5: error: implicit declaration of function
'__WARN' [-Werror=imp
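(A likely shape of the fix, assuming the intent is simply a portable
warning; WARN_ON() is available on every architecture:)

-	__WARN();
+	WARN_ON(1);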
On Fri, Jul 7, 2017 at 7:48 AM, Chao Gao wrote:
> Currently, a blocked vCPU is put in its pCPU's pi blocking list. If
> too many vCPUs are blocked on a given pCPU, the list
> grows too long. After a simple analysis, with 32k domains and
> 128 vCPUs per domain, there are about 4M
Hi all,
please find attached my notes.
Lars
Session URL: http://sched.co/AjB3
ACTIONS on Lars, Andy and Juergen
ACTIONS on Stefano and Julien
Community Call
==
This was a discussion about whether we should do more community calls,
in critical areas. The background was whether we sh
On Fri, Jul 7, 2017 at 7:48 AM, Chao Gao wrote:
> This patch adds a field, counter, in struct vmx_pi_blocking_vcpu to track
> how many entries are on the pi blocking list.
>
> Signed-off-by: Chao Gao
Minor nit: The grammar in the title isn't quite right; "vcpu number"
would be "the number ident
flight 112065 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/112065/
Regressions :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-localmigrate/x10 fail REGR. vs.
112004
Regressions
Hi,
On 20/07/17 20:01, osstest service owner wrote:
> flight 112033 xen-unstable real [real]
> http://logs.test-lab.xenproject.org/osstest/logs/112033/
>
> Regressions :-(
>
> Tests which did not succeed and are blocking,
> including tests which could not be run:
> test-amd64-i386-xl-qemuu-ovmf
> On Fri, 21 Jul 2017 10:57:55 +
> "Zhang, Xiong Y" wrote:
>
> > On an intel skylake machine with upstream qemu, if I add
> > "rdm=strategy=host, policy=strict" to hvm.cfg, win 8.1 DomU couldn't
> > boot up and continues reboot.
> >
> > Steps to reproduce this issue:
> >
> > 1) Boot x
On Tue, Jul 18, 2017 at 03:22:41PM -0700, Stefano Stabellini wrote:
> From: Igor Druzhinin
...
> +static uint8_t *xen_replace_cache_entry_unlocked(hwaddr old_phys_addr,
> + hwaddr new_phys_addr,
> + h
Hi,
On Fri, 21 Jul 2017 10:57:55 +
"Zhang, Xiong Y" wrote:
> On an intel skylake machine with upstream qemu, if I add
> "rdm=strategy=host, policy=strict" to hvm.cfg, win 8.1 DomU couldn't boot
> up and continues reboot.
>
> Steps to reproduce this issue:
>
> 1) Boot xen with iommu=1
Hi all,
please find attached my notes. A lot of it went over my head, so I may have
gotten things wrong and some are missing
Feel free to modify, chip in, clarify, as needed
Lars
Session URL: http://sched.co/AjHN
OPTION 1: Userspace Approach
Dom0 Domu
[AFL] [VM ne
Hi Andrii,
Please CC the relevant maintainers when sending a patch (or questions
regarding a specific subsystems) on the ML.
On 18/07/17 17:45, Andrii Anisov wrote:
From: Andrii Anisov
Both Renesas R-Car Gen2(ARM32) and Gen3(ARM64) are utilizing SCIF IP,
so make its serial driver built by d
On 18/07/17 10:50, Andrii Anisov wrote:
Dear Shishir,
On 18.07.17 12:05, shishir tiwari wrote:
Hi
I want to test and understand the xen hypervisor implementation with dom0 and
domU on an omap5 board.
I followed
https://wiki.xenproject.org/wiki/Xen_ARM_with_Virtualization_Extensions/OMAP5432_uEVM
wi
On 21/07/17 12:10, Vijay Kilari wrote:
Hi Julien,
On Thu, Jul 20, 2017 at 4:56 PM, Julien Grall wrote:
On 19/07/17 19:39, Julien Grall wrote:
cell = (const __be32 *)prop->data;
banks = fdt32_to_cpu(prop->len) / (reg_cells * sizeof (u32));
-for ( i = 0; i < banks && bootinf
Hi Julien,
On Thu, Jul 20, 2017 at 4:56 PM, Julien Grall wrote:
>
>
> On 19/07/17 19:39, Julien Grall wrote:
>>>
>>> cell = (const __be32 *)prop->data;
>>> banks = fdt32_to_cpu(prop->len) / (reg_cells * sizeof (u32));
>>>
>>> -for ( i = 0; i < banks && bootinfo.mem.nr_banks < NR_MEM
On an intel skylake machine with upstream qemu, if I add "rdm=strategy=host,
policy=strict" to hvm.cfg, win 8.1 DomU couldn't boot up and continues reboot.
Steps to reproduce this issue:
1) Boot xen with iommu=1 to enable iommu
2) hvm.cfg contain:
builder="hvm"
memory=
disk=[
On 20/07/17 17:54, Wei Liu wrote:
On Thu, Jul 20, 2017 at 05:46:50PM +0100, Wei Liu wrote:
CC relevant maintainers
On Thu, Jul 20, 2017 at 05:20:43PM +0200, David Woodhouse wrote:
From: David Woodhouse
This includes stuff lke the hypercall tables which we really want
lke -> like
to be