This series ports microcode improvement patches from the Linux kernel.
Before you read any further: the early loading method is still the
preferred one and you should always use it. The following patches
improve the late loading mechanism for long-running jobs and cloud use
cases.
Gather all cores
Some callbacks in microcode_ops and related functions take a cpu
id parameter, but at all current call sites that parameter is
always the current cpu's id; some of them even assert this.
Remove the redundant 'cpu' parameter.
Signed-off-by: Chao Gao
Reviewed-by: Jan Beulich
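The shape of this refactor can be sketched as follows; the fixed CPU id and the function bodies are illustrative stand-ins, not Xen's actual microcode_ops members:

```c
#include <assert.h>

/* Illustrative stand-in for smp_processor_id(); a fixed value models
 * "the CPU we are currently running on". */
static unsigned int smp_processor_id(void) { return 3; }

/* Before: every call site passed the current CPU's id and the callee
 * asserted as much, making the parameter redundant. */
static unsigned int collect_cpu_info_old(unsigned int cpu)
{
    assert(cpu == smp_processor_id());  /* always held at call sites */
    return cpu;
}

/* After: the callee derives the CPU id itself. */
static unsigned int collect_cpu_info_new(void)
{
    return smp_processor_id();
}
```

The old and new forms return the same result whenever the old form's assertion holds, which is why the parameter can be dropped without functional change.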
Sometimes, a ucode with a level lower than or equal to the current CPU's
patch level is useful. For example, to work around a broken BIOS which
only loads ucode on the BSP, it is better for the BSP, when parsing a
ucode blob during boot, to also save a ucode with a lower or equal level for the APs
No functional change intended.
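A minimal sketch of the relaxed caching rule, with an illustrative enum (the value names loosely follow Xen's enum microcode_match_result but are a simplification):

```c
#include <assert.h>

/* Illustrative revision comparison: MIS_UCODE means the blob does not
 * match the CPU signature at all; the other values compare revisions. */
enum match_result { MIS_UCODE, OLD_UCODE, SAME_UCODE, NEW_UCODE };

static enum match_result match_revision(unsigned int patch_rev,
                                        unsigned int cpu_rev)
{
    if (patch_rev > cpu_rev)
        return NEW_UCODE;
    return patch_rev == cpu_rev ? SAME_UCODE : OLD_UCODE;
}

/* During boot the BSP also keeps same/older revisions in the cache so
 * that APs left un-updated by a broken BIOS can still be loaded. */
static int worth_saving_for_aps(enum match_result r)
{
    return r != MIS_UCODE;
}
```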
to replace the current per-cpu cache 'uci->mc'.
With the assumption that all CPUs in the system have the same signature
(family, model, stepping and 'pf'), a microcode update that matches one
cpu should match the others as well. Having differing microcode revisions
on cpus would make the system unstable a
It ports the implementation of is_blacklisted() in the Linux kernel
to Xen.
Late loading may cause a system hang if CPUs are affected by BDF90.
Check for BDF90 before performing late loading.
Signed-off-by: Chao Gao
---
xen/arch/x86/microcode.c | 6 ++
xen/arch/x86/microcode_intel.c
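The Linux-side check being ported is roughly the following; the constants (Broadwell-EP, family 6 model 79 stepping 1, more than 2.5 MB of LLC per core, microcode revision below 0x0b000021) come from the erratum description, and the scalar-parameter form here is a simplification for illustration:

```c
#include <assert.h>
#include <stdbool.h>

/* Sketch of the BDF90 blacklist test: late loading may hang on
 * Broadwell-EP parts with a large LLC per core when the running
 * microcode revision is below the fixed one. */
static bool bdf90_blacklisted(unsigned int family, unsigned int model,
                              unsigned int stepping,
                              unsigned long llc_per_core, /* bytes */
                              unsigned int ucode_rev)
{
    return family == 6 && model == 79 && stepping == 1 &&
           llc_per_core > 2621440ul &&   /* 2.5 MB */
           ucode_rev < 0x0b000021u;
}
```

When the predicate is true, the late load request is refused instead of risking a hang.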
Introduce a vendor hook, .end_update_percpu, for svm_host_osvw_init().
The hook function is called on each cpu after loading an update.
It is a preparation for splitting out apply_microcode() from
cpu_request_microcode().
Note that svm_host_osvw_init() should be called regardless of the
result of l
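The shape of such an optional per-CPU post-update hook can be sketched like this; the ops layout and names are simplified stand-ins, with the counter standing in for svm_host_osvw_init():

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-in for a vendor ops table: the optional
 * end_update_percpu hook runs on each cpu after an update attempt,
 * whether or not the load succeeded. */
struct microcode_ops {
    int (*apply_microcode)(void);
    void (*end_update_percpu)(void);  /* NULL where not needed */
};

static int osvw_inits;
static void amd_end_update_percpu(void) { osvw_inits++; }
static int failing_apply(void) { return -1; }  /* a failing load */

static void microcode_update_cpu(const struct microcode_ops *ops)
{
    ops->apply_microcode();
    if (ops->end_update_percpu)       /* called regardless of result */
        ops->end_update_percpu();
}

/* Helper so a bare assertion can drive the sketch: even a failed
 * load still runs the hook once. */
static int demo_update(void)
{
    const struct microcode_ops amd = { failing_apply,
                                       amd_end_update_percpu };
    microcode_update_cpu(&amd);
    return osvw_inits;
}
```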
microcode_update_lock prevents logical threads of the same core from
updating microcode at the same time. But being a global lock, it also
prevents parallel microcode updates on different cores.
Remove this lock in order to update microcode in parallel. It is safe
because we have already
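The topology reasoning behind the removal can be sketched as follows; the cpu numbering and the choice of updater thread are illustrative, not Xen's actual topology code:

```c
#include <assert.h>
#include <stdbool.h>

/* Sketch: if the caller already ensures exactly one thread per core
 * performs the write (e.g. the first sibling), two updaters can never
 * share a core, so serializing them with a global lock is no longer
 * needed and distinct cores may update in parallel. */
static unsigned int cpu_to_core(unsigned int cpu,
                                unsigned int threads_per_core)
{
    return cpu / threads_per_core;       /* simplistic topology model */
}

static bool is_core_updater(unsigned int cpu,
                            unsigned int threads_per_core)
{
    return cpu % threads_per_core == 0;  /* first sibling updates */
}
```

With two threads per core, cpus 2 and 3 share a core and only cpu 2 is chosen as updater, so no lock between siblings is required.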
Major changes in version 10:
- add back the patch to call wbinvd() conditionally
- add a patch to disable late loading due to BDF90
- rendezvous CPUs in NMI handler and load ucode. But provide an option
to disable this behavior.
- avoid the call of self_nmi() on the control thread because it m
Because apply_microcode() always loads the cached ucode patch, a patch
must be stored before it can be loaded. Make apply_microcode()
accept a patch pointer to remove this limitation, so that a patch
can be stored after a successful load.
Signed-off-by: Chao Gao
Reviewed-by: Jan Beulich
---
xen/ar
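The interface change can be sketched as follows; the types and the revision check are illustrative stand-ins for Xen's real ones:

```c
#include <assert.h>
#include <stddef.h>

/* Sketch: apply_microcode() takes the patch to load instead of always
 * reading the cache, so a patch can be tried first and stored only
 * after it loads successfully. */
struct microcode_patch { unsigned int rev; };

static unsigned int cpu_rev;   /* models the CPU's current revision */

static int apply_microcode(const struct microcode_patch *patch)
{
    if (patch == NULL || patch->rev <= cpu_rev)
        return -1;             /* nothing newer to load */
    cpu_rev = patch->rev;      /* models the actual WRMSR load */
    return 0;
}

/* Helper so bare assertions can drive the sketch. */
static int try_rev(unsigned int rev)
{
    struct microcode_patch p = { rev };
    return apply_microcode(&p);
}
```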
During late microcode loading, apply_microcode() is invoked in
cpu_request_microcode(). To make late microcode update more reliable,
we want to put apply_microcode() into stop_machine context, so
we split it out of cpu_request_microcode(). In general, for both
early loading on BSP and late lo
to a more generic function, so that it can be used on its own to check
an update against the CPU signature and current update revision.
Note that, since enum microcode_match_result will be used in common code
(i.e. microcode.c), it has been placed in the common header. And
constifying the parameter of microcod
It is needed to mitigate some issues on this specific Broadwell CPU.
Signed-off-by: Chao Gao
---
xen/arch/x86/microcode_intel.c | 27 +++
1 file changed, 27 insertions(+)
diff --git a/xen/arch/x86/microcode_intel.c b/xen/arch/x86/microcode_intel.c
index bcef668..4e5e7f9
To create a microcode patch from a vendor-specific update,
allocate_microcode_patch() copied everything from the update,
which is inefficient. Essentially, we just need to go through
ucodes in the blob, find the one with the newest revision and
install it into the microcode_patch. In the process, bu
During system bootup and resume, CPUs just load the cached ucode.
So one unified function microcode_update_one() is introduced. It
takes a boolean to indicate whether ->start_update should be called.
Since early_microcode_update_cpu() is only called on BSP (APs call
the unified function), start_u
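The unified entry point can be sketched like this; all names are stand-ins, with the counter modeling the vendor's ->start_update hook:

```c
#include <assert.h>
#include <stdbool.h>

/* Sketch: boot and resume paths both load the cached patch, and the
 * flag says whether the vendor's ->start_update hook must run first
 * (only the BSP path needs it). */
static int start_update_calls;
static void vendor_start_update(void) { start_update_calls++; }

static int microcode_update_one(bool start_update)
{
    if (start_update)
        vendor_start_update();
    return 0;                  /* models loading the cached patch */
}
```

APs would call `microcode_update_one(false)`; the path that must run the vendor hook passes `true`.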
Previously, a per-cpu ucode cache was maintained: each CPU had its own
update cache and there might be multiple versions of microcode in the
system. Thus microcode_resume_cpu() tried its best to update microcode
by loading every cached update until one loaded successfully.
But now the cache struct is simplified a lo
Remove the per-cpu cache field in struct ucode_cpu_info since it has
been replaced by a global cache. This leaves only one field
remaining in ucode_cpu_info, so the struct is removed and the
remaining field (the cpu signature) is stored in the per-cpu area.
The cpu status notifier is also removed
When one core is loading ucode, handling NMI on sibling threads or
on other cores in the system might be problematic. Rendezvousing
all CPUs in the NMI handler prevents NMIs from being accepted during
ucode loading.
Basically, some work previously done in stop_machine context is
moved to NMI handler. Pri
On 11.09.2019 22:04, Andrew Cooper wrote:
> This helper will eventually be the core "can a guest configured like this run
> on the CPU?" logic. For now, it is just enough of a stub to allow us to
> replace the hypercall interface while retaining the previous behaviour.
>
> It will be expanded as v
flight 141229 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/141229/
Regressions :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
build-amd64 6 xen-build fail REGR. vs. 140282
Tests which did n
On 11.09.2019 22:04, Andrew Cooper wrote:
> update_domain_cpuid_info() currently serves two purposes. First to merge new
> CPUID data from the toolstack, and second, to perform any necessary updating
> of derived domain/vcpu settings.
>
> The first part of this is going to be superseded by a new
On 12/09/2019 08:43, Jan Beulich wrote:
> On 11.09.2019 22:04, Andrew Cooper wrote:
>> This helper will eventually be the core "can a guest configured like this run
>> on the CPU?" logic. For now, it is just enough of a stub to allow us to
>> replace the hypercall interface while retaining the prev
On 11.09.2019 22:04, Andrew Cooper wrote:
> --- a/tools/libxc/xc_cpuid_x86.c
> +++ b/tools/libxc/xc_cpuid_x86.c
> @@ -229,6 +229,55 @@ int xc_get_domain_cpu_policy(xc_interface *xch, uint32_t
> domid,
> return ret;
> }
>
> +int xc_set_domain_cpu_policy(xc_interface *xch, uint32_t domid,
>
On 12/09/2019 08:52, Jan Beulich wrote:
> On 11.09.2019 22:04, Andrew Cooper wrote:
>> update_domain_cpuid_info() currently serves two purposes. First to merge new
>> CPUID data from the toolstack, and second, to perform any necessary updating
>> of derived domain/vcpu settings.
>>
>> The first pa
On 11.09.2019 22:05, Andrew Cooper wrote:
> This patch is broken out just to simplify the following two.
>
> For xc_cpuid_set(), document how the 'k' works because it is quite subtle.
> Replace a memset() with weird calculation for a loop of 4 explicit NULL
> assignments. This mirrors the free()'s
On Fri, 2019-08-09 at 16:58 +0200, Juergen Gross wrote:
> In order to prepare for multiple vcpus per schedule unit move struct
> task_slice in schedule() from the local stack into struct sched_unit
> of the currently running unit. To make access easier for the single
> schedulers add the pointer of
flight 141224 linux-4.19 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/141224/
Regressions :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
test-amd64-amd64-xl-pvshim 18 guest-localmigrate/x10 fail REGR. vs. 129313
build-amd64-xsm
On 11.09.2019 22:05, Andrew Cooper wrote:
> @@ -935,6 +935,13 @@ int xc_cpuid_set(
> goto fail;
> }
>
> +/*
> + * Notes for following this algorithm:
> + *
> + * While it will accept any leaf data, it only makes sense to use on
> + * f
On 11.09.2019 22:05, Andrew Cooper wrote:
> The purpose of this change is to stop using xc_cpuid_do_domctl(), and to stop
> basing decisions on a local CPUID instruction. This is not an appropriate way
> to construct policy information for other domains.
>
> Obtain the host and domain-max policie
On 12.09.19 10:13, Dario Faggioli wrote:
On Fri, 2019-08-09 at 16:58 +0200, Juergen Gross wrote:
In order to prepare for multiple vcpus per schedule unit move struct
task_slice in schedule() from the local stack into struct sched_unit
of the currently running unit. To make access easier for the
On 12.09.2019 09:59, Andrew Cooper wrote:
> On 12/09/2019 08:43, Jan Beulich wrote:
>> On 11.09.2019 22:04, Andrew Cooper wrote:
>>> This helper will eventually be the core "can a guest configured like this run
>>> on the CPU?" logic. For now, it is just enough of a stub to allow us to
>>> replace
On Mon, 2019-09-09 at 11:33 +0200, Juergen Gross wrote:
> Instead of having a cpupool_dprintk() define just use debugtrace.
>
> Signed-off-by: Juergen Gross
>
Acked-by: Dario Faggioli
Regards
--
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUS
On 12/09/2019 09:19, Jan Beulich wrote:
> On 11.09.2019 22:05, Andrew Cooper wrote:
>> The purpose of this change is to stop using xc_cpuid_do_domctl(), and to stop
>> basing decisions on a local CPUID instruction. This is not an appropriate
>> way
>> to construct policy information for other dom
On 12/09/2019 09:17, Jan Beulich wrote:
> On 11.09.2019 22:05, Andrew Cooper wrote:
>> @@ -935,6 +935,13 @@ int xc_cpuid_set(
>> goto fail;
>> }
>>
>> +/*
>> + * Notes for following this algorithm:
>> + *
>> + * While it will accept any leaf d
On 11.09.2019 22:05, Andrew Cooper wrote:
> The purpose of this change is to stop using xc_cpuid_do_domctl(), and to stop
> basing decisions on a local CPUID instruction. This is not a correct or
> appropriate way to construct policy information for other domains.
>
> The overwhelming majority of
On 11.09.2019 22:05, Andrew Cooper wrote:
> With the final users moved over to using XEN_DOMCTL_set_cpumsr_policy, drop
> this domctl and associated infrastructure.
>
> Rename the preexisting set_cpuid XSM vector to set_cpu_policy, now that it is
> back to having a single user.
>
> Signed-off-by:
On 11.09.2019 22:05, Andrew Cooper wrote:
> The domain builder no longer uses CPUID instructions for policy decisions.
How certain are we that there are no other components left relying
on being able to see raw CPUID output in Dom0? Sadly customers are
often doing strange things, insisting that th
flight 141219 linux-next real [real]
http://logs.test-lab.xenproject.org/osstest/logs/141219/
Regressions :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
build-amd64-xsm 6 xen-build fail REGR. vs. 140999
Tests which are fail
On 12.09.2019 10:36, Andrew Cooper wrote:
> On 12/09/2019 09:19, Jan Beulich wrote:
>> On 11.09.2019 22:05, Andrew Cooper wrote:
>>> The purpose of this change is to stop using xc_cpuid_do_domctl(), and to
>>> stop
>>> basing decisions on a local CPUID instruction. This is not an appropriate
>>>
Roger Pau Monne writes ("[PATCH] freebsd-build: fix building efifat after
r351831"):
> FreeBSD revisions after r351831 no longer automatically build an
> efifat partition image, and makefs should be used instead if such file
> is required.
>
> Do this and add logic to build the efifat partition o
On Wed, Sep 11, 2019 at 05:21:55PM +0200, Jan Beulich wrote:
> There's no need for it to be 64 bits wide - only the low twelve bits
> of CR3 hold the PCID.
>
> Signed-off-by: Jan Beulich
Reviewed-by: Roger Pau Monné
Thanks, Roger.
___
Xen-devel mail
On 12/09/2019 10:07, Jan Beulich wrote:
> On 11.09.2019 22:05, Andrew Cooper wrote:
>> The domain builder no longer uses CPUID instructions for policy decisions.
> How certain are we that there are no other components left relying
> on being able to see raw CPUID output in Dom0?
Cstates and Turbo
On 09.09.19 16:17, Jan Beulich wrote:
On 09.08.2019 16:58, Juergen Gross wrote:
Especially in the do_schedule() functions of the different schedulers
using smp_processor_id() for the local cpu number is correct only if
the sched_unit is a single vcpu. As soon as larger sched_units are
used most
On 10/09/2019 16:25, Roger Pau Monne wrote:
> diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
> index 3ff67792a7..e8f5ebe929 100644
> --- a/tools/libxl/libxl.h
> +++ b/tools/libxl/libxl.h
> @@ -401,6 +401,12 @@
> */
> #define LIBXL_HAVE_PHYSINFO_CAP_HAP 1
>
> +/*
> + * LIBXL_HAVE_PHYSIN
On Wed, Sep 11, 2019 at 05:22:17PM +0200, Jan Beulich wrote:
> We really need to flush the TLB just once, if we do so with or after the
> CR3 write. The only case where two flushes are unavoidable is when we
> mean to turn off CR4.PGE (perhaps just temporarily; see the code
> comment).
>
> Signed-o
On 12.09.2019 11:34, Juergen Gross wrote:
> On 09.09.19 16:17, Jan Beulich wrote:
>> On 09.08.2019 16:58, Juergen Gross wrote:
>>> @@ -1825,8 +1825,9 @@ static struct task_slice
>>> csched_schedule(
>>> const struct scheduler *ops, s_time_t now, bool_t
>>> tasklet_work_scheduled)
>>> {
>
On 12.09.2019 11:54, Roger Pau Monné wrote:
> On Wed, Sep 11, 2019 at 05:22:17PM +0200, Jan Beulich wrote:
>> We really need to flush the TLB just once, if we do so with or after the
>> CR3 write. The only case where two flushes are unavoidable is when we
>> mean to turn off CR4.PGE (perhaps just
On Wed, Sep 11, 2019 at 03:36:16PM +0100, Paul Durrant wrote:
> Xenstore watch call-backs are already abstracted away from XenBus using
> the XenWatch data structure but the associated NotifierList manipulation
> and file handle registration is still open coded in various xen_bus_...()
On Wed, 2019-09-11 at 12:30 +0200, Jan Beulich wrote:
> On 09.08.2019 16:58, Juergen Gross wrote:
> >
> > --- a/xen/include/xen/sched-if.h
> > +++ b/xen/include/xen/sched-if.h
> > @@ -75,6 +75,20 @@ static inline bool unit_runnable(const struct
> > sched_unit *unit)
> > return vcpu_runnable(u
On 12.09.2019 09:22, Chao Gao wrote:
> --- a/xen/arch/x86/microcode_intel.c
> +++ b/xen/arch/x86/microcode_intel.c
> @@ -134,21 +134,11 @@ static int collect_cpu_info(unsigned int cpu_num,
> struct cpu_signature *csig)
> return 0;
> }
>
> -static inline int microcode_update_match(
> -u
On Fri, 2019-08-09 at 16:58 +0200, Juergen Gross wrote:
> Today the vcpu runstate of a new scheduled vcpu is always set to
> "running" even if at that time vcpu_runnable() is already returning
> false due to a race (e.g. with pausing the vcpu).
>
> With core scheduling this can no longer work as n
Hello Volodymyr,
On 11.09.19 21:01, Volodymyr Babchuk wrote:
Introduce per-pcpu time accounting which includes the following states:
TACC_HYP - the pcpu executes hypervisor code like softirq processing
(including scheduling), tasklets and context switches
TACC_GUEST - the pcpu execut
On 12.09.2019 09:22, Chao Gao wrote:
> --- a/xen/arch/x86/microcode_intel.c
> +++ b/xen/arch/x86/microcode_intel.c
> @@ -260,6 +260,36 @@ static enum microcode_match_result
> microcode_update_match(
> return MIS_UCODE;
> }
>
> +static bool match_cpu(const struct microcode_patch *patch)
> +
On Wed, Sep 11, 2019 at 05:22:51PM +0200, Jan Beulich wrote:
> I can't see any technical or performance reason why we should treat
> 32-bit PV different from 64-bit PV in this regard.
>
> Signed-off-by: Jan Beulich
The original commit mentions that PCID doesn't improve performance for
non-XPTI d
On Mon, 2019-09-09 at 14:44 +0200, Juergen Gross wrote:
> ... using Dario's correct mail address
>
Thanks! :-)
> On 06.09.19 13:09, George Dunlap wrote:
> > There was a discussion on the community call about the core
> > scheduling
> > series being developed by Juergen Gross [1]. The conclusion
On Thu, Sep 12, 2019 at 12:11:55PM +0200, Jan Beulich wrote:
> On 12.09.2019 11:54, Roger Pau Monné wrote:
> > On Wed, Sep 11, 2019 at 05:22:17PM +0200, Jan Beulich wrote:
> >> We really need to flush the TLB just once, if we do so with or after the
> >> CR3 write. The only case where two flushes
On 12.09.2019 12:34, Roger Pau Monné wrote:
> On Wed, Sep 11, 2019 at 05:22:51PM +0200, Jan Beulich wrote:
>> I can't see any technical or performance reason why we should treat
>> 32-bit PV different from 64-bit PV in this regard.
>>
>> Signed-off-by: Jan Beulich
>
> The original commit mention
Roger Pau Monne writes ("[PATCH v4 2/2] sysctl: report shadow paging
capability"):
> Report whether shadow paging is supported by the hypervisor, since it
> can be disabled at build time.
...
> NB: I'm not sure the added check in
> libxl__domain_create_info_setdefault is that useful, or if it coul
flight 141234 linux-4.14 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/141234/
Regressions :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
build-amd64 6 xen-build fail REGR. vs. 139910
Tests which are fail
On 12.09.19 12:04, Jan Beulich wrote:
On 12.09.2019 11:34, Juergen Gross wrote:
On 09.09.19 16:17, Jan Beulich wrote:
On 09.08.2019 16:58, Juergen Gross wrote:
@@ -1825,8 +1825,9 @@ static struct task_slice
csched_schedule(
const struct scheduler *ops, s_time_t now, bool_t tasklet_wo
These macros really ought to live in the common xen/iommu.h header rather
than being distributed amongst architecture specific iommu headers and
xen/sched.h. This patch moves them there.
NOTE: Disabling 'sharept' in the command line iommu options should really
be a hard error on ARM (as opposed
On 12.09.19 12:04, Jan Beulich wrote:
On 12.09.2019 11:34, Juergen Gross wrote:
On 09.09.19 16:17, Jan Beulich wrote:
On 09.08.2019 16:58, Juergen Gross wrote:
@@ -1825,8 +1825,9 @@ static struct task_slice
csched_schedule(
const struct scheduler *ops, s_time_t now, bool_t tasklet_wo
This patch defines a new bit reported in the hw_cap field of struct
xen_sysctl_physinfo to indicate whether the platform supports sharing of
HAP page tables (i.e. the P2M) with the IOMMU. This informs the toolstack
whether the domain needs extra memory to store discrete IOMMU page tables
or not.
S
...and hence the ability to disable IOMMU mappings, and control EPT
sharing.
This patch introduces a new 'libxl_passthrough' enumeration into
libxl_domain_create_info. The value will be set by xl either when it parses
a new 'passthrough' option in xl.cfg, or implicitly if there is passthrough
hard
This patch introduces a common domain creation flag to determine whether
the domain is permitted to make use of the IOMMU. Currently the flag is
always set for both dom0 and any domU created by libxl if the IOMMU is
globally enabled (i.e. iommu_enabled == 1). sanitise_domain_config() is
modified to
Now that there is a per-domain IOMMU-enable flag, which should be set if
any device is going to be passed through, stop deferring page table
construction until the assignment is done. Also don't tear down the tables
again when the last device is de-assigned; defer that task until domain
destruction
These are revisions of the remaining uncommitted patches from my
previous series:
https://lists.xenproject.org/archives/html/xen-devel/2019-08/msg01737.html
Paul Durrant (6):
domain: introduce XEN_DOMCTL_CDF_iommu flag
use is_iommu_enabled() where appropriate...
sysctl / libxl: report wheth
...rather than testing the global iommu_enabled flag and ops pointer.
Now that there is a per-domain flag indicating whether the domain is
permitted to use the IOMMU (which determines whether the ops pointer will
be set), many tests of the global iommu_enabled flag and ops pointer can
be translate
On Wed, Sep 11, 2019 at 03:36:17PM +0100, Paul Durrant wrote:
> This patch uses the XenWatchList abstraction to add a separate watch list
> for each device. This is more scalable than walking a single notifier
> list for all watches and is also necessary to implement a bug-fix in a
> subsequent pat
On Wed, Sep 11, 2019 at 05:23:20PM +0200, Jan Beulich wrote:
> The bit is meaningful only for MOV-to-CR3 insns, not anywhere else, in
> particular not when loading nested guest state.
Can't you use the current vcpu to check if the guest is in nested
mode, and avoid having to explicitly pass the no
> -Original Message-
> From: Anthony PERARD
> Sent: 12 September 2019 11:17
> To: Paul Durrant
> Cc: qemu-de...@nongnu.org; xen-devel@lists.xenproject.org; Stefano Stabellini
>
> Subject: Re: [PATCH 1/3] xen / notify: introduce a new XenWatchList
> abstraction
>
> On Wed, Sep 11, 2019
On 12/09/2019 09:22, Jan Beulich wrote:
> On 12.09.2019 09:59, Andrew Cooper wrote:
>> On 12/09/2019 08:43, Jan Beulich wrote:
>>> On 11.09.2019 22:04, Andrew Cooper wrote:
This helper will eventually be the core "can a guest configured like this
run
on the CPU?" logic. For now, it
On Wed, Sep 11, 2019 at 05:24:41PM +0200, Jan Beulich wrote:
> While bits 11 and below are, if not used for other purposes, reserved
> but ignored, bits beyond physical address width are supposed to raise
> exceptions (at least in the non-nested case; I'm not convinced the
> current nested SVM/VMX
On 12.09.2019 13:17, Juergen Gross wrote:
> On 12.09.19 12:04, Jan Beulich wrote:
>> On 12.09.2019 11:34, Juergen Gross wrote:
>>> On 09.09.19 16:17, Jan Beulich wrote:
On 09.08.2019 16:58, Juergen Gross wrote:
> @@ -1825,8 +1825,9 @@ static struct task_slice
>csched_schedule(
> On 12 Sep 2019, at 12:17, Paul Durrant wrote:
>
> tools/libxl/libxl_types.idl | 1 +
> tools/ocaml/libs/xc/xenctrl.ml | 1 +
> tools/ocaml/libs/xc/xenctrl.mli | 2 +-
Acked-by: Christian Lindig
On 12.09.2019 13:35, Roger Pau Monné wrote:
> On Wed, Sep 11, 2019 at 05:23:20PM +0200, Jan Beulich wrote:
>> The bit is meaningful only for MOV-to-CR3 insns, not anywhere else, in
>> particular not when loading nested guest state.
>
> Can't you use the current vcpu to check if the guest is in ne
On 12.09.19 13:46, Jan Beulich wrote:
On 12.09.2019 13:17, Juergen Gross wrote:
On 12.09.19 12:04, Jan Beulich wrote:
On 12.09.2019 11:34, Juergen Gross wrote:
On 09.09.19 16:17, Jan Beulich wrote:
On 09.08.2019 16:58, Juergen Gross wrote:
@@ -1825,8 +1825,9 @@ static struct task_slice
c
On 12.09.2019 13:45, Roger Pau Monné wrote:
> On Wed, Sep 11, 2019 at 05:24:41PM +0200, Jan Beulich wrote:
>> While bits 11 and below are, if not used for other purposes, reserved
>> but ignored, bits beyond physical address width are supposed to raise
>> exceptions (at least in the non-nested cas
On 12.09.2019 13:53, Juergen Gross wrote:
> On 12.09.19 13:46, Jan Beulich wrote:
>> On 12.09.2019 13:17, Juergen Gross wrote:
>>> On 12.09.19 12:04, Jan Beulich wrote:
On 12.09.2019 11:34, Juergen Gross wrote:
> Okay, I'll rename "cpu" to "my_cpu".
We've got a number of instanc
Hello Volodymyr,
On 11.09.19 20:48, Volodymyr Babchuk wrote:
Hi Andrii,
As we agreed, I'll wipe out debugging remains as well as clean up coding style
nits and resend the series.
--
Sincerely,
Andrii Anisov.
On 12.09.19 14:08, Jan Beulich wrote:
On 12.09.2019 13:53, Juergen Gross wrote:
On 12.09.19 13:46, Jan Beulich wrote:
On 12.09.2019 13:17, Juergen Gross wrote:
On 12.09.19 12:04, Jan Beulich wrote:
On 12.09.2019 11:34, Juergen Gross wrote:
Okay, I'll rename "cpu" to "my_cpu".
We've got a
Xenstore watch call-backs are already abstracted away from XenBus using
the XenWatch data structure but the associated NotifierList manipulation
and file handle registration is still open coded in various xen_bus_...()
functions.
This patch creates a new XenWatchList data structure to allow these
i
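The direction described can be sketched with a deliberately simplified structure; these types are stand-ins for illustration, not QEMU's actual NotifierList/XenWatch code:

```c
#include <assert.h>
#include <stddef.h>

/* Sketch: bundle a watch list's entries behind one abstraction so
 * several independent lists (e.g. one per XenDevice) become possible
 * instead of one shared notifier list for all watches. */
struct xen_watch {
    struct xen_watch *next;
};

struct xen_watch_list {
    struct xen_watch *head;
    unsigned int count;
};

static void watch_list_add(struct xen_watch_list *wl, struct xen_watch *w)
{
    w->next = wl->head;
    wl->head = w;
    wl->count++;
}

/* Helper so a bare assertion can drive the sketch: two separate
 * lists no longer share entries, so walking one cannot be disturbed
 * by changes to the other. */
static int demo_two_lists(void)
{
    static struct xen_watch w1, w2;
    struct xen_watch_list a = { NULL, 0 }, b = { NULL, 0 };
    watch_list_add(&a, &w1);
    watch_list_add(&b, &w2);
    return a.count == 1 && b.count == 1;
}
```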
This series fixes a potential segfault caused by NotifierList corruption
in xen-bus. The first two patches lay the groundwork and the third
actually fixes the problem.
Paul Durrant (3):
xen / notify: introduce a new XenWatchList abstraction
xen: introduce separate XenWatchList for XenDevice ob
This patch uses the XenWatchList abstraction to add a separate watch list
for each device. This is more scalable than walking a single notifier
list for all watches and is also necessary to implement a bug-fix in a
subsequent patch.
Signed-off-by: Paul Durrant
Reviewed-by: Anthony Perard
---
Cc:
Cleaning up offline XenDevice objects directly in
xen_device_backend_changed() is dangerous as xen_device_unrealize() will
modify the watch list that is being walked. Even the QLIST_FOREACH_SAFE()
used in notifier_list_notify() is insufficient as *two* notifiers (for
the frontend and backend watches
On Thu, 12 Sep 2019, 13:10 Andrii Anisov, wrote:
> Hello Volodymyr,
>
> On 11.09.19 20:48, Volodymyr Babchuk wrote:
> >
> > Hi Andrii,
> >
>
> As we agreed, I'll wipe out debugging remains as well as clean up coding
> style nits and resend the series.
This is an RFC and I am sure the current stat
On 12.09.19 15:17, Julien Grall wrote:
This is an RFC and I am sure the current state is enough to spark a discussion.
There is no need to waste time resending it and filling up inboxes.
Please wait for more time.
Gotcha!
--
Sincerely,
Andrii Anisov.
On 12.09.2019 09:22, Chao Gao wrote:
> Introduce a vendor hook, .end_update_percpu, for svm_host_osvw_init().
> The hook function is called on each cpu after loading an update.
> It is a preparation for splitting out apply_microcode() from
> cpu_request_microcode().
>
> Note that svm_host_osvw_init
flight 141249 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/141249/
Failures :-/ but no regressions.
Tests which did not succeed, but are not blocking:
test-amd64-amd64-libvirt 13 migrate-support-check fail never pass
test-arm64-arm64-xl-xsm 1
On 12.09.2019 13:17, Paul Durrant wrote:
> --- a/xen/arch/arm/sysctl.c
> +++ b/xen/arch/arm/sysctl.c
> @@ -15,6 +15,9 @@
> void arch_do_physinfo(struct xen_sysctl_physinfo *pi)
> {
> pi->capabilities |= XEN_SYSCTL_PHYSCAP_hvm | XEN_SYSCTL_PHYSCAP_hap;
> +
> +if ( iommu_enabled && iommu_h
On 12.09.2019 13:17, Paul Durrant wrote:
> v9:
> - Add new Kconfig option to cause 'iommu_hap_pt_share' to be defined to
>true, rather than using CONFIG_ARM, as requested by Julien
> - Assuming Jan's R-b stands since this is a mainly a cosmetic change
>directly requested by another mainta
On 12.09.2019 13:17, Paul Durrant wrote:
> v9:
> - Added the passthrough='enabled' option to xl
> - One cosmetic change in xen
> - Assume Jan's R-b stands since non-cosmetic changes are only in the
>toolstack
Same here (I'm afraid I haven't been able to spot the cosmetic
change).
Jan
On 12/09/2019 09:06, Jan Beulich wrote:
> On 11.09.2019 22:04, Andrew Cooper wrote:
>> --- a/tools/libxc/xc_cpuid_x86.c
>> +++ b/tools/libxc/xc_cpuid_x86.c
>> @@ -229,6 +229,55 @@ int xc_get_domain_cpu_policy(xc_interface *xch,
>> uint32_t domid,
>> return ret;
>> }
>>
>> +int xc_set_domai
> -Original Message-
> From: Jan Beulich
> Sent: 12 September 2019 14:04
> To: Paul Durrant
> Cc: xen-devel@lists.xenproject.org; Julien Grall ;
> Andrew Cooper
> ; Anthony Perard ;
> Christian Lindig
> ; George Dunlap ; Ian
> Jackson
> ; Stefano Stabellini ; Konrad
> Rzeszutek Wilk
>
On 12.09.2019 15:15, Andrew Cooper wrote:
> On 12/09/2019 09:06, Jan Beulich wrote:
>> On 11.09.2019 22:04, Andrew Cooper wrote:
>>> --- a/xen/arch/x86/domctl.c
>>> +++ b/xen/arch/x86/domctl.c
>>> @@ -294,6 +294,65 @@ static int update_domain_cpuid_info(struct domain *d,
>>> return 0;
>>> }
>
On 12/09/2019 10:11, Jan Beulich wrote:
> On 12.09.2019 10:36, Andrew Cooper wrote:
>> On 12/09/2019 09:19, Jan Beulich wrote:
>>> On 11.09.2019 22:05, Andrew Cooper wrote:
The purpose of this change is to stop using xc_cpuid_do_domctl(), and to
stop
basing decisions on a local CPUI
Instead of enabling debugging for debug builds only, add a dedicated
Kconfig option for that purpose, which defaults to DEBUG.
Signed-off-by: Juergen Gross
Reviewed-by: Jan Beulich
---
V2:
- rename to CONFIG_DEBUG_LOCKS (Jan Beulich)
---
xen/Kconfig.debug | 7 +++
xen/common/spinlock
Today adding locks located in a struct to lock profiling requires a
unique type index for each structure. This makes it hard to add any
new structure as the related sysctl interface needs to be changed, too.
Instead of using an index, the already existing struct name specified
in lock_profile_regis
On 12.09.2019 15:18, Paul Durrant wrote:
>> -Original Message-
>> From: Jan Beulich
>> Sent: 12 September 2019 14:04
>> To: Paul Durrant
>> Cc: xen-devel@lists.xenproject.org; Julien Grall ;
>> Andrew Cooper
>> ; Anthony Perard ;
>> Christian Lindig
>> ; George Dunlap ; Ian
>> Jackson
A spinlock defined via DEFINE_SPINLOCK() as a static variable local to
a function shows up in lock profiling just with its local variable
name. This results in multiple locks just named "lock".
In order to be able to distinguish those locks in the lock profiling
output, add the function name to str
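The naming idea can be sketched with `__func__`; the macro below is an illustration of the scheme, not Xen's actual DEFINE_SPINLOCK() change:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Sketch: compose the profiling name as "function:variable" so two
 * function-local locks both declared as "lock" stay distinguishable
 * in the profiling output. */
#define FMT_LOCK_NAME(buf, var) \
    snprintf((buf), sizeof(buf), "%s:%s", __func__, #var)

static const char *demo_lock_name(void)
{
    static char buf[64];
    FMT_LOCK_NAME(buf, lock);    /* a hypothetical local named "lock" */
    return buf;
}
```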
Add the cpu currently holding the lock to struct lock_debug. This makes
analysis of locking errors easier, and it can be checked whether the
correct cpu is releasing a lock again.
Signed-off-by: Juergen Gross
---
V2:
- adjust types (Jan Beulich)
V4:
- add define for bitfield size to store cpu numbe
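The holder-tracking idea can be sketched as follows; the sentinel value and plain field are illustrative (the real patch packs the cpu number into a bitfield):

```c
#include <assert.h>

/* Sketch: record the locking cpu in the debug data and detect a
 * release attempted from a different cpu. */
#define LOCK_DEBUG_NO_CPU 0xfffu

struct lock_debug {
    unsigned int cpu;          /* holder, or LOCK_DEBUG_NO_CPU */
};

static void debug_lock(struct lock_debug *d, unsigned int this_cpu)
{
    d->cpu = this_cpu;
}

static int debug_unlock(struct lock_debug *d, unsigned int this_cpu)
{
    if (d->cpu != this_cpu)
        return -1;             /* wrong cpu releasing the lock */
    d->cpu = LOCK_DEBUG_NO_CPU;
    return 0;
}

/* Helper so a bare assertion can drive the sketch: a release from
 * cpu 3 of a lock taken on cpu 2 is flagged; cpu 2's release works. */
static int demo_wrong_cpu_release(void)
{
    struct lock_debug d = { LOCK_DEBUG_NO_CPU };
    debug_lock(&d, 2);
    if (debug_unlock(&d, 3) != -1)
        return 0;
    return debug_unlock(&d, 2) == 0;
}
```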