On 1/28/19 08:35, Jan Beulich wrote:
On 27.01.19 at 21:28, wrote:
>> On 1/25/19 14:09, Jan Beulich wrote:
>> On 25.01.19 at 11:50, wrote:
On 1/25/19 11:14, Jan Beulich wrote:
On 24.01.19 at 22:29, wrote:
>> Worse is the "evaluate condition, stash result, fence, use var
Ian,
back in October you've added quite a number of "xen" prefixes to
various pieces there. Now that I've finally had time to connect this
change of yours with PV domain creation failures that I've since
been observing (not a bug in any way, merely resulting from the
fact that I'm running everythi
>>> On 27.01.19 at 21:28, wrote:
> On 1/25/19 14:09, Jan Beulich wrote:
> On 25.01.19 at 11:50, wrote:
>>> On 1/25/19 11:14, Jan Beulich wrote:
>>> On 24.01.19 at 22:29, wrote:
> Worse is the "evaluate condition, stash result, fence, use variable"
> option, which is almost comple
flight 132494 linux-4.9 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/132494/
Regressions :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
test-amd64-amd64-examine 4 memdisk-try-append fail REGR. vs. 132421
test-amd64-amd64-xl-q
This patch ports microcode improvement patches from the Linux kernel.
Before you read any further: the early loading method is still the
preferred one and you should always use it. The following patch
improves the late loading mechanism for long-running jobs and cloud use
cases.
Gather all cores
to a more generic function. Then, this function can compare two given
microcodes' signature/revision as well. Comparing two microcodes is
used to update the global microcode cache (introduced by the later
patches in this series) when a new microcode is given.
Signed-off-by: Chao Gao
---
Changes i
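To make the intent concrete, here is a minimal sketch of such a generic matching helper, assuming a simplified header with signature, platform-flags and revision fields; the type and names below are illustrative assumptions modelled on the Intel microcode container, not the exact code in this patch.

/*
 * Sketch only: names are assumptions, not this patch's actual code.
 */
enum mc_match {
    MIS_UCODE,   /* signature/platform flags do not match the target */
    OLD_UCODE,   /* matches, but the revision is not newer */
    NEW_UCODE,   /* matches and the revision is newer */
};

struct mc_header {
    uint32_t rev;    /* microcode revision */
    uint32_t sig;    /* CPU signature this blob targets */
    uint32_t pf;     /* platform flags bitmap */
};

static enum mc_match mc_update_match(const struct mc_header *mc,
                                     uint32_t sig, uint32_t pf, uint32_t rev)
{
    /* Wrong CPU signature or platform: not a candidate at all. */
    if ( mc->sig != sig || !(mc->pf & pf) )
        return MIS_UCODE;

    /* Same target: decide purely on the revision number. */
    return mc->rev > rev ? NEW_UCODE : OLD_UCODE;
}

Called with the current CPU's (sig, pf, rev) it answers "should this CPU load the blob"; called with another blob's header fields it answers "which of two blobs is newer", which is what the cache update needs.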
Previously, the microcode pointer and size were passed to other CPUs to
parse the microcode locally. Now, parsing the microcode is done on one CPU.
Other CPUs needn't parse the microcode blob; the pointer and
size can be removed.
Signed-off-by: Chao Gao
---
xen/arch/x86/microcode.c | 33 +---
apply_microcode() now gets the ucode patch from the global cache rather
than using the microcode stored in the "mc" field of ucode_cpu_info.
Also remove 'microcode_resume_match' from microcode_ops because the
matching is done in find_patch(). The cpu status notifier is also
removed. It was used to free the
This check has already been done in microcode_sanity_check(), so there
is no need to do it again in get_matching_microcode().
Signed-off-by: Chao Gao
---
xen/arch/x86/microcode_intel.c | 3 ---
1 file changed, 3 deletions(-)
diff --git a/xen/arch/x86/microcode_intel.c b/xen/arch/x86/microcode_intel.c
index 9657575..4
to replace the current per-cpu cache 'uci->mc'.
Compared to the current per-cpu cache, the benefits of the global
microcode cache are:
1. It reduces the work that needs to be done on each CPU. Parsing the
ucode file can be done once, on one CPU; other CPUs needn't parse it.
Instead, they can fin
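As a rough illustration of the idea, a single system-wide slot protected by a spinlock could look like the sketch below; struct microcode_patch, compare_patch() and free_patch() are placeholder names, not necessarily this series' interface.

static DEFINE_SPINLOCK(microcode_cache_lock);
static struct microcode_patch *microcode_cache;

/* Keep whichever of the cached patch and a freshly parsed one is newer. */
static bool microcode_update_cache(struct microcode_patch *new_patch)
{
    bool replaced = false;

    spin_lock(&microcode_cache_lock);

    if ( !microcode_cache ||
         compare_patch(new_patch, microcode_cache) == NEW_UCODE )
    {
        free_patch(microcode_cache);   /* assumed to tolerate NULL */
        microcode_cache = new_patch;
        replaced = true;
    }

    spin_unlock(&microcode_cache_lock);

    return replaced;
}

Every CPU then applies the patch found in this one place instead of keeping its own parsed copy.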
During late microcode update, apply_microcode() is invoked in
cpu_request_microcode(). To make late microcode update more reliable,
we want to put apply_microcode() into stop_machine context, so
we split it out of cpu_request_microcode(). As a consequence,
apply_microcode() should be invoked
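A rough sketch of the intended flow, assuming Xen's usual headers; parse_blob_into_cache() is a stand-in name, and running the stop_machine callback on every CPU (rather than a single one) is an assumption about what this series adds:

static int do_microcode_update(void *unused)
{
    /* Runs with all CPUs rendezvoused and interrupts disabled. */
    return microcode_ops->apply_microcode(smp_processor_id());
}

static int late_load_microcode(const void *buf, size_t size)
{
    int ret = parse_blob_into_cache(buf, size);   /* stand-in helper */

    if ( ret )
        return ret;

    /* Rendezvous every CPU, then apply the cached patch on each of them. */
    return stop_machine_run(do_microcode_update, NULL, NR_CPUS);
}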
Changes in this version:
- Support parallel microcode updates for all cores (see patch 8)
- Address Roger's comments on the last version.
The intention of this series is to make the late microcode loading
more reliable by rendezvousing all CPUs in stop_machine context.
This idea comes from Ashok
Currently, microcode_update_lock and microcode_mutex prevent cores
from updating microcode in parallel. The following changes are made to
support parallel microcode update on cores.
microcode_update_lock is removed. The purpose of this lock is to
prevent logical threads of the same core from updating microcod
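One way to keep that per-core serialisation without a global lock is to let only the first sibling thread of each core perform the write while the other hyperthreads wait; the per-CPU flag and helper names below are illustrative, and a real version would also need memory barriers around the flag.

static DEFINE_PER_CPU(bool, core_update_done);

static int update_this_cpu(void)
{
    unsigned int cpu = smp_processor_id();
    unsigned int master = cpumask_first(per_cpu(cpu_sibling_mask, cpu));
    int ret = 0;

    if ( cpu == master )
    {
        /* Only the first thread of the core touches the microcode MSRs. */
        ret = microcode_ops->apply_microcode(cpu);
        per_cpu(core_update_done, cpu) = true;
    }
    else
    {
        /* Sibling hyperthreads just wait for their core's update. */
        while ( !per_cpu(core_update_done, master) )
            cpu_relax();
    }

    return ret;
}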
On 1/26/19 2:05 PM, YueHaibing wrote:
> There is no need to have the 'struct drm_framebuffer *fb' variable
> static since a new value is always assigned before it is used.
>
> Signed-off-by: YueHaibing
Good catch, thank you!
Reviewed-by: Oleksandr Andrushchenko
> ---
> drivers/gpu/drm/xen/xen_drm_fro
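For illustration, a stand-in handler showing the shape of the change; the real function and lookup in xen_drm_front differ, and lookup_fb() here is purely hypothetical.

void example_handler(struct drm_device *dev, u64 fb_cookie)
{
	struct drm_framebuffer *fb;	/* previously declared "static" */

	fb = lookup_fb(dev, fb_cookie);	/* hypothetical; fb is always assigned before use */
	if (!fb)
		return;

	drm_framebuffer_put(fb);	/* any ordinary use of the now-local pointer */
}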
On Mon, Jan 14, 2019 at 4:08 PM Oleksandr Andrushchenko
wrote:
>
> On 1/7/19 7:37 PM, Souptick Joarder wrote:
> > Remove duplicate header which is included twice.
> >
> > Signed-off-by: Souptick Joarder
> Reviewed-by: Oleksandr Andrushchenko
Can we get this patch in queue for 5.1 ?
> > ---
> >
> From: Paul Durrant [mailto:paul.durr...@citrix.com]
> Sent: Monday, January 7, 2019 8:03 PM
>
> Saving and restoring the value of this MSR is currently handled by
> implementation-specific code despite it being architectural. This patch
> moves handling of accesses to this MSR from hvm.c into th
> From: Paul Durrant [mailto:paul.durr...@citrix.com]
> Sent: Monday, January 7, 2019 8:03 PM
>
> Currently the value is saved directly in struct hvm_vcpu. This patch simply
> co-locates it with other saved MSR values. No functional change.
>
> Signed-off-by: Paul Durrant
Reviewed-by: Kevin Tia
> From: Paul Durrant [mailto:paul.durr...@citrix.com]
> Sent: Monday, January 7, 2019 8:03 PM
>
> ...to avoid the need for a VMCS reload when the value of
> MSR_IA32_BNDCFGS is
> read by the tool-stack.
the frequency of context switches is much higher than that of
reads by the tool-stack (at least
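To make the trade-off concrete, the two read paths look roughly like this; the cached field name is an assumption, while vmx_vmcs_enter()/vmx_vmcs_exit() and __vmread() are the existing VMCS accessors:

/* Software copy: cheap for the tool-stack, but context switch must sync it. */
static uint64_t bndcfgs_from_cache(const struct vcpu *v)
{
    return v->arch.hvm.msr_bndcfgs;      /* hypothetical saved-MSR field */
}

/* VMCS read: no shadow state, but the (rare) reader pays for a VMCS load. */
static uint64_t bndcfgs_from_vmcs(struct vcpu *v)
{
    unsigned long val;

    vmx_vmcs_enter(v);
    __vmread(GUEST_BNDCFGS, &val);
    vmx_vmcs_exit(v);

    return val;
}

The question raised here is which of the two paths should bear the extra cost, given that context switches are far more frequent than tool-stack reads.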
flight 132493 linux-4.14 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/132493/
Failures :-/ but no regressions.
Tests which did not succeed, but are not blocking:
test-amd64-amd64-examine 4 memdisk-try-append fail like 132420
test-amd64-i386-xl-pvshim 12 guest-
> From: Paul Durrant [mailto:paul.durr...@citrix.com]
> Sent: Monday, January 7, 2019 8:03 PM
>
> Saving and restoring the value of this MSR is currently handled by
> implementation-specific code despite it being architectural. This patch
> moves handling of accesses to this MSR from hvm.c into th
> From: Andrew Cooper [mailto:andrew.coop...@citrix.com]
> Sent: Friday, January 25, 2019 2:28 AM
>
> Code clearing the "Suppress VE" bit in an EPT entry isn't necessarily
> running
> in current context. In ALTP2M_external mode, it definitely is not, and in PV
> context, vcpu_altp2m(current) act
qemu build config:
http://paste.debian.net/plain/1062777/
domU startup trace:
http://paste.debian.net/plain/1062768/
This release uses qemu-3.0.0, which depends on libxentoolcore.
In xen-4.11.1 with qemu-2.11.2, vfb objects (VNC) always worked in a PV
domU. Only now with qemu-3.x is it failing
With some of the patches required to build (already discussed and
queued for v7) I gave this series a test run on my Intel 64-bit
laptop.
With a very rudimentary benchmark method using the libargo interposer,
I was able to transfer ~100MB/s in dom0 <-> dom0, and dom0 <-> PV
domU.
Tested-by: Chris
flight 132488 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/132488/
Regressions :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 12
guest-start/debianhvm.repeat fail REGR.
On 1/25/19 14:09, Jan Beulich wrote:
On 25.01.19 at 11:50, wrote:
>> On 1/25/19 11:14, Jan Beulich wrote:
>> On 24.01.19 at 22:29, wrote:
Worse is the "evaluate condition, stash result, fence, use variable"
option, which is almost completely useless. If you work out the
r
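For readers without the full thread, the construct being criticised has this shape (a standalone illustration, not Xen's actual code):

#include <stdbool.h>
#include <stdint.h>

#define ARR_SIZE 16
static uint8_t arr[ARR_SIZE];

uint8_t read_element(unsigned int idx)
{
    bool safe = idx < ARR_SIZE;              /* 1. evaluate the condition   */
                                             /* 2. result stashed in a bool */
    asm volatile ( "lfence" ::: "memory" );  /* 3. fence                    */

    if ( safe )                              /* 4. use the stashed variable */
        return arr[idx];

    return 0;
}

The objection is that the branch on the stashed value after the fence can itself still be executed speculatively, which is why the pattern is described here as almost completely useless.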
On 1/24/19 23:29, Andrew Cooper wrote:
> On 23/01/2019 11:57, Norbert Manthey wrote:
>> While the lfence instruction was added for all x86 platforms in the
>> beginning, it's useful to not block platforms that are not affected
>> by the L1TF vulnerability. Therefore, the lfence instruction should
>>
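One way that thought could be realised, sketched here on the assumption that the fence is emitted via Xen's alternatives framework; the synthetic feature name is an assumption, not necessarily what the series uses:

static always_inline void block_speculation(void)
{
    /*
     * Patched to a real LFENCE only when the (assumed) L1TF flag is set;
     * unaffected hardware is left with NOP padding instead.
     */
    alternative("", "lfence", X86_FEATURE_SC_L1TF_VULN);
}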
flight 132490 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/132490/
Regressions :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
test-armhf-armhf-libvirt-raw 15 guest-start/debian.repeat fail REGR. vs. 132469
Tests which did not su
flight 132485 xen-4.10-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/132485/
Failures :-/ but no regressions.
Tests which did not succeed, but are not blocking:
test-xtf-amd64-amd64-5 69 xtf/test-hvm64-xsa-278 fail like 131061
test-xtf-amd64-amd64-3 69
flight 132506 xen-unstable-coverity real [real]
http://logs.test-lab.xenproject.org/osstest/logs/132506/
Regressions :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
coverity-amd64 7 coverity-upload fail REGR. vs. 132424
version t
Hi,
On 1/25/19 9:36 PM, Stefano Stabellini wrote:
On Thu, 24 Jan 2019, Julien Grall wrote:
@James, please correct me if I am wrong below :).
On 24/01/2019 00:52, Stefano Stabellini wrote:
On Wed, 28 Nov 2018, Julien Grall wrote:
... in the context of the errata, you have to imagine what can