Let the schedulers put a sched_unit pointer into struct task_slice
instead of a vcpu pointer.
Signed-off-by: Juergen Gross
Reviewed-by: Dario Faggioli
---
xen/common/sched_arinc653.c | 8
xen/common/sched_credit.c | 4 ++--
xen/common/sched_credit2.c | 4 ++--
xen/common/sched_nu
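A rough sketch of the result (only the vcpu -> sched_unit change is what
the patch describes; the remaining fields are assumptions for
illustration):

    /* Sketch: return value of a scheduler's do_schedule() hook. */
    struct task_slice {
        struct sched_unit *task;     /* was: struct vcpu *task */
        s_time_t           time;     /* length of the granted time slice */
        bool               migrated; /* unit was migrated to this cpu */
    };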
Instead of returning a physical cpu number, let pick_cpu() return a
scheduler resource. Rename pick_cpu() to pick_resource() to reflect
that change.
Signed-off-by: Juergen Gross
Reviewed-by: Dario Faggioli
---
V3:
- style fix (Jan Beulich)
---
xen/common/sched_arinc653.c | 13 +++---
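Illustratively, the per-scheduler hook changes along these lines (the
exact prototypes are assumptions):

    /* before: return a raw cpu number */
    int (*pick_cpu)(const struct scheduler *ops, struct sched_unit *unit);

    /* after: return a scheduler resource */
    struct sched_resource *(*pick_resource)(const struct scheduler *ops,
                                            struct sched_unit *unit);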
Add support for core- and socket-scheduling in the Xen hypervisor.
Via boot parameter sched-gran=core (or sched-gran=socket)
it is possible to change the scheduling granularity from cpu (the
default) to either whole cores or even sockets.
All logical cpus (threads) of the core or socket are alway
This prepares support of larger scheduling granularities, e.g. core
scheduling.
While at it move sched_has_urgent_vcpu() from include/asm-x86/cpuidle.h
into sched.h, removing the need for including sched-if.h in cpuidle.h.
For that purpose remove urgent_count from the scheduler private data
and mak
Rename vcpu_schedule_[un]lock[_irq]() to unit_schedule_[un]lock[_irq]()
and let it take a sched_unit pointer instead of a vcpu pointer as
parameter.
Signed-off-by: Juergen Gross
Reviewed-by: Dario Faggioli
---
xen/common/sched_credit.c | 17 +
xen/common/sched_credit2.c | 40 ++
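In effect (sketch; only the name and the parameter type change):

    /* before */
    spinlock_t *vcpu_schedule_lock_irq(struct vcpu *v);
    /* after */
    spinlock_t *unit_schedule_lock_irq(struct sched_unit *unit);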
Rename the scheduler related perf counters from vcpu* to unit* where
appropriate.
Signed-off-by: Juergen Gross
Reviewed-by: Dario Faggioli
---
xen/common/sched_credit.c | 32
xen/common/sched_credit2.c | 18 +-
xen/common/sched_null.c |
Switch credit scheduler completely from vcpu to sched_unit usage.
Signed-off-by: Juergen Gross
Reviewed-by: Dario Faggioli
---
xen/common/sched_credit.c | 503 +++---
1 file changed, 250 insertions(+), 253 deletions(-)
diff --git a/xen/common/sched_credi
In order to make it easy to iterate over sched_unit elements of a
domain, build a singly linked list and add an iterator for it. The new
list is guarded by the same mechanisms as the vcpu linked list as it
is modified only via vcpu_create() or vcpu_destroy().
For completeness add another iterator
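A minimal sketch of such an iterator, assuming a list head in struct
domain and a next pointer in struct sched_unit (names are illustrative):

    #define for_each_sched_unit(d, u) \
        for ( (u) = (d)->sched_unit_list; (u) != NULL; \
              (u) = (u)->next_in_list )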
Switch null scheduler completely from vcpu to sched_unit usage.
Signed-off-by: Juergen Gross
Reviewed-by: Dario Faggioli
---
V4:
- Item -> unit (Dario Faggioli)
---
xen/common/sched_null.c | 333
1 file changed, 165 insertions(+), 168 deletions(-
This prepares making the different schedulers vcpu agnostic.
Note that some scheduler specific accessor functions are misnamed after
this patch. This will be corrected in later patches.
Signed-off-by: Juergen Gross
Reviewed-by: Dario Faggioli
---
xen/common/sched_arinc653.c | 4 ++--
xen/commo
Add a scheduling abstraction layer between physical processors and the
schedulers by introducing a struct sched_resource. Each running
scheduler unit is active on such a scheduler resource. For the time being
there is one struct sched_resource per cpu, but in future there might
be one for each core
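A rough sketch of where the abstraction ends up, with fields pieced
together from the descriptions elsewhere in this series (names are
assumptions):

    struct sched_resource {
        struct scheduler  *scheduler;     /* per-resource, see below */
        struct cpupool    *cpupool;       /* per-resource, see below */
        spinlock_t        *schedule_lock;
        struct sched_unit *curr;          /* unit currently running here */
        unsigned int       master_cpu;    /* one cpu per resource, for now */
        cpumask_t         *cpus;          /* cpus covered by this resource */
    };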
sched_move_irqs() should work on a sched_unit as that is the unit
moved between cpus.
Rename the current function to vcpu_move_irqs() as it is still needed
in schedule().
Signed-off-by: Juergen Gross
Reviewed-by: Dario Faggioli
---
V4:
- make parameter const (Jan Beulich)
---
xen/common/schedu
Affinities are scheduler specific attributes, they should be per
scheduling unit. So move all affinity related fields in struct vcpu
to struct sched_unit. While at it switch affinity related functions in
sched-if.h to use a pointer to sched_unit instead of vcpu as parameter.
The affinity_broken fl
In preparation for core scheduling, let the percpu pointer
schedule_data.curr point to a struct sched_unit instead of the related
vcpu. At the same time rename the per-vcpu scheduler specific structs
to per-unit ones.
Signed-off-by: Juergen Gross
Reviewed-by: Dario Faggioli
---
V3:
- remove no long
Add an is_running indicator to struct sched_unit which will be set
whenever the unit is being scheduled. Switch scheduler code to use
unit->is_running instead of vcpu->is_running for scheduling decisions.
At the same time introduce a state_entry_time field in struct
sched_unit being updated whenev
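Sketch of the new members (other members omitted):

    struct sched_unit {
        /* ... existing members ... */
        bool     is_running;        /* unit is currently scheduled */
        s_time_t state_entry_time;  /* updated whenever is_running changes */
    };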
In several places support for multiple vcpus per sched unit is
missing. Add that missing support (with the exception of initial
allocation) and the missing helpers for it.
Signed-off-by: Juergen Gross
---
RFC V2:
- fix vcpu_runstate_helper()
V1:
- add special handling for idle unit in unit_ru
Add the following helpers using a sched_unit as input instead of a
vcpu:
- is_idle_unit() similar to is_idle_vcpu()
- is_unit_online() similar to is_vcpu_online() (returns true when any
of its vcpus is online)
- unit_runnable() like vcpu_runnable() (returns true if any of its
vcpus is runnable
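Minimal sketches of two of these helpers, assuming the
for_each_sched_unit_vcpu() iterator added elsewhere in the series:

    static inline bool is_idle_unit(const struct sched_unit *unit)
    {
        /* All vcpus of a unit belong to one domain; testing one is enough. */
        return is_idle_vcpu(unit->vcpu_list);
    }

    static inline bool unit_runnable(const struct sched_unit *unit)
    {
        struct vcpu *v;

        for_each_sched_unit_vcpu ( unit, v )
            if ( vcpu_runnable(v) )
                return true;

        return false;
    }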
Having a pointer to struct scheduler in struct sched_resource instead
of per cpu is enough.
Signed-off-by: Juergen Gross
Reviewed-by: Jan Beulich
Reviewed-by: Dario Faggioli
---
V1: new patch
V4:
- several renames sd -> sr (Jan Beulich)
- use ops instead of sr->scheduler (Jan Beulich)
---
xen/
Switch rt scheduler completely from vcpu to sched_unit usage.
Signed-off-by: Juergen Gross
Reviewed-by: Dario Faggioli
---
xen/common/sched_rt.c | 356 --
1 file changed, 174 insertions(+), 182 deletions(-)
diff --git a/xen/common/sched_rt.c b/xe
Switch arinc653 scheduler completely from vcpu to sched_unit usage.
Signed-off-by: Juergen Gross
Reviewed-by: Dario Faggioli
---
xen/common/sched_arinc653.c | 208 +---
1 file changed, 101 insertions(+), 107 deletions(-)
diff --git a/xen/common/sched_ari
cpupool_domain_cpumask() is used by scheduling to select cpus or to
iterate over cpus. In order to support scheduling units spanning
multiple cpus rename cpupool_domain_cpumask() to
cpupool_domain_master_cpumask() and let it return a cpumask with only
one bit set per scheduling resource.
Signed-of
Add counters to struct sched_unit summing up runstates of associated
vcpus. This allows doing quick checks whether a unit has any vcpu
running or whether only a single vcpu of a unit is running.
Signed-off-by: Juergen Gross
---
RFC V2: add counters for each possible runstate
---
xen/common/sched
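Sketch of what such counters could look like (the per-runstate array
follows the RFC V2 note; names are assumptions):

    struct sched_unit {
        /* ... existing members ... */
        unsigned int runstate_cnt[4];  /* one counter per vcpu runstate */
    };

    /* Quick checks then become, e.g.:                                 */
    /*   any vcpu running:  unit->runstate_cnt[RUNSTATE_running] > 0   */
    /*   one vcpu running:  unit->runstate_cnt[RUNSTATE_running] == 1  */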
Especially in the do_schedule() functions of the different schedulers
using smp_processor_id() for the local cpu number is correct only if
the sched_unit is a single vcpu. As soon as larger sched_units are
used, most uses should be replaced by the master_cpu number of the local
sched_resource instea
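For example, inside a do_schedule() handler (the accessor spelling is an
assumption):

    /* Instead of: unsigned int cpu = smp_processor_id(); */
    unsigned int cpu = get_sched_res(smp_processor_id())->master_cpu;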
Where appropriate switch from for_each_vcpu() to for_each_sched_unit()
in order to prepare core scheduling.
As it is beneficial here already, and will surely be in future, add a
unit_scheduler() helper and let vcpu_scheduler() use it.
Signed-off-by: Juergen Gross
---
V2:
- handle affinity_broken correctly
In order to prepare for multiple vcpus per schedule unit move struct
task_slice in schedule() from the local stack into struct sched_unit
of the currently running unit. To make access easier for the single
schedulers add the pointer of the currently running unit as a parameter
of do_schedule().
Wh
Today the vcpu runstate of a newly scheduled vcpu is always set to
"running" even if at that time vcpu_runnable() is already returning
false due to a race (e.g. with pausing the vcpu).
With core scheduling this can no longer work as not all vcpus of a
schedule unit have to be "running" when being sc
In order to prepare core- and socket-scheduling use a new struct
sched_unit instead of struct vcpu for interfaces of the different
schedulers.
Rename the per-scheduler functions insert_vcpu and remove_vcpu to
insert_unit and remove_unit to reflect the change of the parameter.
In the schedulers ren
Add a percpu variable holding the index of the cpu in the current
sched_resource structure. This index is used to get the correct vcpu
of a sched_unit on a specific cpu.
For now this index will be zero for all cpus, but with core scheduling
it will be possible to have higher values, too.
Signed-o
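Sketch, using Xen's per-cpu infrastructure (the accessor and the
assumption that a unit's vcpus are chained via next_in_list are
illustrative):

    static DEFINE_PER_CPU(unsigned int, sched_res_idx);  /* 0 for now */

    /* Get the vcpu of a unit belonging to a specific cpu (sketch). */
    static inline struct vcpu *sched_unit_vcpu_on(const struct sched_unit *unit,
                                                  unsigned int cpu)
    {
        struct vcpu *v = unit->vcpu_list;
        unsigned int idx = per_cpu(sched_res_idx, cpu);

        while ( idx-- && v->next_in_list )
            v = v->next_in_list;

        return v;
    }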
Today there are two distinct scenarios for vcpu_create(): either for
creation of idle-domain vcpus (vcpuid == processor) or for creation of
"normal" domain vcpus (including dom0), where the caller selects the
initial processor on a round-robin scheme of the allowed processors
(allowed being based o
The credit scheduler calls vcpu_pause_nosync() and vcpu_unpause()
today. Add sched_unit_pause_nosync() and sched_unit_unpause() to
perform the same operations on scheduler units instead.
Signed-off-by: Juergen Gross
---
V4:
- add vcpu loops to functions (Dario Faggioli)
- make unit parameter cons
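Sketch with the vcpu loops mentioned in the V4 notes above (assuming the
for_each_sched_unit_vcpu() iterator from earlier in the series):

    static inline void sched_unit_pause_nosync(const struct sched_unit *unit)
    {
        struct vcpu *v;

        for_each_sched_unit_vcpu ( unit, v )
            vcpu_pause_nosync(v);
    }

    static inline void sched_unit_unpause(const struct sched_unit *unit)
    {
        struct vcpu *v;

        for_each_sched_unit_vcpu ( unit, v )
            vcpu_unpause(v);
    }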
Use sched_units instead of vcpus in schedule(). This includes the
introduction of sched_unit_runstate_change() as a replacement of
vcpu_runstate_change() in schedule().
Signed-off-by: Juergen Gross
Reviewed-by: Dario Faggioli
---
Note that sched_unit_runstate_change() will be subsumed by another
When switching sched units synchronize all vcpus of the new unit to be
scheduled at the same time.
A variable sched_granularity is added which holds the number of vcpus
per schedule unit.
As tasklets require scheduling the idle unit, it is necessary to set the
tasklet_work_scheduled parameter of d
When scheduling a unit with multiple vcpus there is no guarantee all
vcpus are available (e.g. above maxvcpus or vcpu offline). Fall back to
the idle vcpu of the current cpu in that case. This requires storing the
correct sched_unit pointer in the idle vcpu as long as it is used as
fallback vcpu.
In
vcpu_migrate_start() and vcpu_migrate_finish() are used only to ensure
a vcpu is running on a suitable processor, so they can be switched to
operate on schedule units instead of vcpus.
While doing that rename them accordingly.
Call vcpu_sync_execstate() for each vcpu of the unit when changing
pro
With a scheduling granularity greater than 1 multiple vcpus share the
same struct sched_unit. Support that.
Setting the initial processor must be done carefully: we can't use
sched_set_res() as that relies on for_each_sched_unit_vcpu() which in
turn needs the vcpu already as a member of the domain
Add a scheduling granularity enum ("cpu", "core", "socket") for
specification of the scheduling granularity. Initially it is set to
"cpu", this can be modified by the new boot parameter (x86 only)
"sched-gran".
According to the selected granularity sched_granularity is set after
all cpus are onlin
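A sketch of the boot-time plumbing, assuming Xen's custom_param()
registration (identifier names are illustrative):

    enum sched_gran {
        SCHED_GRAN_cpu,
        SCHED_GRAN_core,
        SCHED_GRAN_socket,
    };

    static enum sched_gran __read_mostly opt_sched_granularity = SCHED_GRAN_cpu;

    static int __init sched_select_granularity(const char *str)
    {
        if ( !strcmp(str, "cpu") )
            opt_sched_granularity = SCHED_GRAN_cpu;
        else if ( !strcmp(str, "core") )
            opt_sched_granularity = SCHED_GRAN_core;
        else if ( !strcmp(str, "socket") )
            opt_sched_granularity = SCHED_GRAN_socket;
        else
            return -EINVAL;

        return 0;
    }
    custom_param("sched-gran", sched_select_granularity);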
On- and offlining cpus with core scheduling is rather complicated as
the cpus are taken on- or offline one by one, but scheduling wants them
rather to be handled per core.
As the future plan is to be able to select scheduling granularity per
cpupool prepare that by storing the granularity in struc
Prepare supporting multiple cpus per scheduling resource by allocating
the cpumask per resource dynamically.
Modify sched_res_mask to have only one bit per scheduling resource set.
Signed-off-by: Juergen Gross
---
V1: new patch (carved out from other patch)
V4:
- use cpumask_t for sched_res_mask
With core scheduling active schedule_cpu_[add/rm]() has to cope with
different scheduling granularity: a cpu not in any cpupool is subject
to granularity 1 (cpu scheduling), while a cpu in a cpupool might be
in a scheduling resource with more than one cpu.
Handle that by having arrays of old/new p
Switch credit2 scheduler completely from vcpu to sched_unit usage.
As we are touching lots of lines anyway, remove some trailing white
space, too.
Signed-off-by: Juergen Gross
Reviewed-by: Dario Faggioli
---
xen/common/sched_credit2.c | 822 ++---
Instead of letting schedule_cpu_switch() handle moving cpus from and
to cpupools, split it into schedule_cpu_add() and schedule_cpu_rm().
This will allow us to drop allocating/freeing scheduler data for free
cpus as the idle scheduler doesn't need such data.
Signed-off-by: Juergen Gross
---
V1:
When entering deep sleep states all domains are paused, resulting in
all cpus only running idle vcpus. This enables us to stop scheduling
completely in order to avoid synchronization problems with core
scheduling when individual cpus are offlined.
Disabling the scheduler is done by replacing the so
In order to be able to move cpus to cpupools with core scheduling
active it is mandatory to merge multiple cpus into one scheduling
resource or to split a scheduling resource with multiple cpus in it
into multiple scheduling resources. This in turn requires modifying
the cpu <-> scheduling resource
Having a pointer to struct cpupool in struct sched_resource instead
of per cpu is enough.
Signed-off-by: Juergen Gross
Reviewed-by: Jan Beulich
Reviewed-by: Dario Faggioli
---
V1: new patch
---
xen/common/cpupool.c | 4 +---
xen/common/sched_credit.c | 2 +-
xen/common/sched_rt.c |
When core or socket scheduling are active, enabling or disabling smt is
not possible, as that would require a major host reconfiguration.
Add a bool sched_disable_smt_switching which will be set for core or
socket scheduling.
Signed-off-by: Juergen Gross
Acked-by: Jan Beulich
Acked-by: Dario Fagg
With core scheduling active it is necessary to move multiple cpus at
the same time to or from a cpupool in order to avoid split scheduling
resources in between.
Signed-off-by: Juergen Gross
---
V1: new patch
---
xen/common/cpupool.c | 100 +
xen/
vcpu_wake() and vcpu_sleep() need to be made core scheduling aware:
they might need to switch a single vcpu of an already scheduled unit
between running and not running.
Especially when vcpu_sleep() for a vcpu is being called by a vcpu of
the same scheduling unit special care must be taken in orde
On 25.09.19 12:59, Dario Faggioli wrote:
On Wed, 2019-09-25 at 09:05 +0200, Juergen Gross wrote:
The arinc653 scheduler's free_vdata() function is missing proper
locking: as it is modifying the scheduler's private vcpu list it
needs
to take the scheduler lock during that operation.
Signed-off-b
vcpu_force_reschedule() is only used for modifying the periodic timer
of a vcpu. Forcing a vcpu to give up the physical cpu for that purpose
is kind of brutal.
So instead of doing the reschedule dance just operate on the timer
directly. By protecting periodic timer modifications against concurrent
On Fri, 2019-09-27 at 06:42 +0200, Jürgen Groß wrote:
> On 25.09.19 15:07, Jürgen Groß wrote:
> > On 24.09.19 13:55, Jan Beulich wrote:
> > > On 14.09.2019 10:52, Juergen Gross wrote:
> > > > @@ -765,16 +774,22 @@ void vcpu_wake(struct vcpu *v)
> > > > {
> > > > unsigned long flags;
> > > >
On 26.09.19 23:34, Marek Marczykowski-Górecki wrote:
> Hi,
>
> I've hit VM_BUG_ON_PAGE(!PageOffline(page), page) in
> alloc_xenballooned_pages, when trying to use gnttab from userspace
> application. It happens on Xen PV, but not on Xen PVH or HVM with the
> same kernel. This happens at least with
On 27.09.19 09:32, Dario Faggioli wrote:
On Fri, 2019-09-27 at 06:42 +0200, Jürgen Groß wrote:
On 25.09.19 15:07, Jürgen Groß wrote:
On 24.09.19 13:55, Jan Beulich wrote:
On 14.09.2019 10:52, Juergen Gross wrote:
@@ -765,16 +774,22 @@ void vcpu_wake(struct vcpu *v)
{
unsigned long f
On 27.09.2019 04:28, Roman Shaposhnik wrote:
> On Thu, Sep 26, 2019 at 12:44 AM Jan Beulich wrote:
>>
>> On 26.09.2019 00:31, Roman Shaposhnik wrote:
>>> Jan, Roger, thank you so much for the initial ideas. I tried a few of
>>> those and here's where I am.
>>>
>>> First of all, it is definitely re
> -----Original Message-----
> From: Roger Pau Monne
> Sent: 26 September 2019 16:59
> To: Jan Beulich
> Cc: Andrew Cooper; Paul Durrant; xen-devel@lists.xenproject.org; Wei Liu
> Subject: Re: [PATCH v2 06/11] ioreq: allow dispatching ioreqs to internal
> servers
>
> On Thu, Sep 26, 20
On 26.09.2019 22:33, Joe Jin wrote:
> On 9/24/19 8:42 AM, Roger Pau Monné wrote:
>> AFAICT you are draining any pending data from the posted interrupt
>> PIRR field into the IRR vlapic field, so that no stale interrupts are
>> left behind after the MSI fields have been updated by the guest. I
>> th
On 27.09.2019 09:23, Jürgen Groß wrote:
> On 25.09.19 12:59, Dario Faggioli wrote:
>> On Wed, 2019-09-25 at 09:05 +0200, Juergen Gross wrote:
>>> The arinc653 scheduler's free_vdata() function is missing proper
>>> locking: as it is modifying the scheduler's private vcpu list it
>>> needs
>>> to t
On 27.09.19 10:20, Jan Beulich wrote:
On 27.09.2019 09:23, Jürgen Groß wrote:
On 25.09.19 12:59, Dario Faggioli wrote:
On Wed, 2019-09-25 at 09:05 +0200, Juergen Gross wrote:
The arinc653 scheduler's free_vdata() function is missing proper
locking: as it is modifying the scheduler's private v
On 27.09.2019 08:04, Juergen Gross wrote:
> Commit 6338c9ead9ff9ef6 ("debugtrace: add per-cpu buffer option") had
> a rebase error when using per-cpu buffers: the global buffer address
> would always be set to the one of the last per-cpu buffer allocated.
>
> The result would be that when dumping
> -----Original Message-----
> From: Roger Pau Monne
> Sent: 26 September 2019 16:07
> To: Paul Durrant
> Cc: xen-devel@lists.xenproject.org; Ian Jackson; Wei Liu; Andrew
> Cooper; George Dunlap; Jan Beulich; Julien Grall; Konrad Rzeszutek
> Wilk; Stefano Stabellini; Tim (Xe
On Thu, Sep 26, 2019 at 01:33:42PM -0700, Joe Jin wrote:
> On 9/24/19 8:42 AM, Roger Pau Monné wrote:
> > On Fri, Sep 13, 2019 at 09:50:34AM -0700, Joe Jin wrote:
> >> On 9/13/19 3:33 AM, Roger Pau Monné wrote:
> >>> On Thu, Sep 12, 2019 at 11:03:14AM -0700, Joe Jin wrote:
> With below testcas
On Fri, Sep 27, 2019 at 10:29:21AM +0200, Paul Durrant wrote:
> > -----Original Message-----
> > From: Roger Pau Monne
> > Sent: 26 September 2019 16:07
> > To: Paul Durrant
> > Cc: xen-devel@lists.xenproject.org; Ian Jackson; Wei Liu; Andrew
> > Cooper; George Dunlap; Jan Beulich
>
On Fri, 2019-09-27 at 09:00 +0200, Juergen Gross wrote:
> Add the following helpers using a sched_unit as input instead of a
> vcpu:
>
> - is_idle_unit() similar to is_idle_vcpu()
> - is_unit_online() similar to is_vcpu_online() (returns true when any
> of its vcpus is online)
> - unit_runnable(
On Fri, 2019-09-27 at 09:00 +0200, Juergen Gross wrote:
> The credit scheduler calls vcpu_pause_nosync() and vcpu_unpause()
> today. Add sched_unit_pause_nosync() and sched_unit_unpause() to
> perform the same operations on scheduler units instead.
>
> Signed-off-by: Juergen Gross
>
Reviewed-by:
flight 141859 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/141859/
Failures :-/ but no regressions.
Tests which did not succeed, but are not blocking:
test-arm64-arm64-libvirt-qcow2 15 guest-start/debian.repeat fail like 141622
test-armhf-armhf-libvirt 14 saveresto
On 26.09.2019 21:39, Lars Kurth wrote:
> +### Express appreciation
> +As the nature of code review is to find bugs and possible issues, it is very
> easy for
> +reviewers to get into a mode of operation where the patch review ends up
> being a list
> +of issues, not mentioning what is right and well
Add the new library libxenhypfs for access to the hypervisor filesystem.
Signed-off-by: Juergen Gross
---
V1:
- rename to libxenhypfs
- add xenhypfs_write()
---
tools/Rules.mk | 6 +
tools/libs/Makefile | 1 +
tools/libs/hypfs/Makefile | 14 ++
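A hedged usage sketch of the new library (xenhypfs_write() is named in
the changelog above; the other calls and their signatures are
assumptions):

    #include <stdio.h>
    #include <stdlib.h>
    #include <xenhypfs.h>

    int main(void)
    {
        xenhypfs_handle *fshdl = xenhypfs_open(NULL, 0);
        char *val;

        if ( !fshdl )
            return 1;

        /* Read an entry, e.g. the hypervisor's build config (see below). */
        val = xenhypfs_read(fshdl, "/buildinfo/config");
        if ( val )
            printf("%s\n", val);

        free(val);
        xenhypfs_close(fshdl);

        return 0;
    }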
On the 2019 Xen developer summit there was agreement that the Xen
hypervisor should gain support for a hierarchical name-value store
similar to the Linux kernel's sysfs.
In the beginning there should only be basic support: entries can be
added from the hypervisor itself only, there is a simple hyp
Add the infrastructure for the hypervisor filesystem.
This includes the hypercall interface and the base functions for
entry creation, deletion and modification.
Initially we support string and unsigned integer entry types. The saved
entry size is an upper bound, so for unsigned integer entries w
On the 2019 Xen developer summit there was agreement that the Xen
hypervisor should gain support for a hierarchical name-value store
similar to the Linux kernel's sysfs.
This is a first implementation of that idea adding the basic
functionality to hypervisor and tools side. The interface to any
us
Add the xenfs tool for accessing the hypervisor filesystem.
Signed-off-by: Juergen Gross
---
V1:
- rename to xenhypfs
- don't use "--" for subcommands
- add write support
---
.gitignore | 1 +
tools/misc/Makefile | 6 +++
tools/misc/xenhypfs.c | 120 +
Add support to read values of hypervisor runtime parameters via the
hypervisor file system for all unsigned integer type runtime parameters.
Signed-off-by: Juergen Gross
---
docs/misc/hypfs-paths.pandoc | 6 ++
xen/common/kernel.c | 27 +++
2 files changed,
Add the /buildinfo/config entry to the hypervisor filesystem. This
entry contains the .config file used to build the hypervisor.
Signed-off-by: Juergen Gross
---
.gitignore | 2 ++
docs/misc/hypfs-paths.pandoc | 9 +
xen/common/Makefile | 9 +
xen/co
On 27.09.19 10:52, Dario Faggioli wrote:
On Fri, 2019-09-27 at 09:00 +0200, Juergen Gross wrote:
Add the following helpers using a sched_unit as input instead of a
vcpu:
- is_idle_unit() similar to is_idle_vcpu()
- is_unit_online() similar to is_vcpu_online() (returns true when any
of its vc
> -----Original Message-----
> From: Roger Pau Monne
> Sent: 27 September 2019 09:46
> To: Paul Durrant
> Cc: xen-devel@lists.xenproject.org; Ian Jackson; Wei Liu; Andrew
> Cooper; George Dunlap; Jan Beulich; Julien Grall; Konrad Rzeszutek
> Wilk; Stefano Stabellini; Tim (Xe
On Fri, Sep 27, 2019 at 10:42:02AM +0200, Roger Pau Monné wrote:
> Also, I think I'm still confused by this, I've just realized that the
> PI descriptor seems to be moved from one vCPU to another without
> clearing PIRR, and hence I'm not sure why you are losing interrupts in
> that case. I need to
On 26.09.2019 21:39, Lars Kurth wrote:
> +### Verbose vs. terse
> +Due to the time it takes to review and compose code reviews, reviewers
> often adopt a
> +terse style. It is not unusual to see review comments such as
> +> typo
> +> s/resions/regions/
> +> coding style
> +> coding style: bracket
On Fri, 2019-09-27 at 09:00 +0200, Juergen Gross wrote:
> Today there are two distinct scenarios for vcpu_create(): either for
> creation of idle-domain vcpus (vcpuid == processor) or for creation
> of
> "normal" domain vcpus (including dom0), where the caller selects the
> initial processor on a r
flight 141849 linux-4.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/141849/
Regressions :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
test-amd64-amd64-xl-pvshim 20 guest-start/debian.repeat fail in 141599 REGR.
vs. 139698
Tests which
On Fri, 2019-09-27 at 09:00 +0200, Juergen Gross wrote:
> Add counters to struct sched_unit summing up runstates of associated
> vcpus. This allows doing quick checks whether a unit has any vcpu
> running or whether only a single vcpu of a unit is running.
>
> Signed-off-by: Juergen Gross
>
Revie
On Fri, 2019-09-27 at 09:00 +0200, Juergen Gross wrote:
> Where appropriate switch from for_each_vcpu() to
> for_each_sched_unit()
> in order to prepare core scheduling.
>
> As it is beneficial once here and for sure in future add a
> unit_scheduler() helper and let vcpu_scheduler() use it.
>
> S
On Fri, 2019-09-27 at 09:00 +0200, Juergen Gross wrote:
> vcpu_migrate_start() and vcpu_migrate_finish() are used only to
> ensure
> a vcpu is running on a suitable processor, so they can be switched to
> operate on schedule units instead of vcpus.
>
> While doing that rename them accordingly.
>
On 26.09.2019 15:53, Chao Gao wrote:
> @@ -249,49 +249,82 @@ bool microcode_update_cache(struct microcode_patch
> *patch)
> return true;
> }
>
> -static int microcode_update_cpu(const void *buf, size_t size)
> +/*
> + * Load a microcode update to current CPU.
> + *
> + * If no patch is pro
On Fri, 2019-09-27 at 09:00 +0200, Juergen Gross wrote:
> cpupool_domain_cpumask() is used by scheduling to select cpus or to
> iterate over cpus. In order to support scheduling units spanning
> multiple cpus rename cpupool_domain_cpumask() to
> cpupool_domain_master_cpumask() and let it return a c
On 27.09.19 11:32, Dario Faggioli wrote:
On Fri, 2019-09-27 at 09:00 +0200, Juergen Gross wrote:
Where appropriate switch from for_each_vcpu() to
for_each_sched_unit()
in order to prepare core scheduling.
As it is beneficial once here and for sure in future add a
unit_scheduler() helper and let
On 26.09.2019 15:53, Chao Gao wrote:
> @@ -264,40 +336,150 @@ static int microcode_update_cpu(const struct
> microcode_patch *patch)
> return err;
> }
>
> -static long do_microcode_update(void *patch)
> +static bool wait_for_state(typeof(loading_state) state)
> {
> -unsigned int cpu;
On 27/09/2019, 09:59, "Jan Beulich" wrote:
On 26.09.2019 21:39, Lars Kurth wrote:
> +### Express appreciation
> +As the nature of code review is to find bugs and possible issues, it is
very easy for
> +reviewers to get into a mode of operation where the patch review ends up
bein
On 27/09/2019, 10:14, "Jan Beulich" wrote:
On 26.09.2019 21:39, Lars Kurth wrote:
> +### Verbose vs. terse
> +Due to the time it takes to review and compose code reviews, reviewers
often adopt a
> +terse style. It is not unusual to see review comments such as
> +> typo
On 27/09/2019, 11:17, "Lars Kurth" wrote:
On 27/09/2019, 10:14, "Jan Beulich" wrote:
On 26.09.2019 21:39, Lars Kurth wrote:
> +### Verbose vs. terse
> +Due to the time it takes to review and compose code reviews,
reviewers often adopt a
> +
Hi Oleksandr,
Thank you for the respin. The code in p2m.c looks good to me now. One comment
regarding the SMMU code below.
On 24/09/2019 17:01, Oleksandr Tyshchenko wrote:
> diff --git a/xen/drivers/passthrough/arm/smmu.c
> b/xen/drivers/passthrough/arm/smmu.c
> index 8ae986a..701fe9c 100644
>
On 26.09.2019 15:53, Chao Gao wrote:
> @@ -105,23 +110,42 @@ void __init microcode_set_module(unsigned int idx)
> }
>
> /*
> - * The format is '[<integer>|scan]'. Both options are optional.
> + * The format is '[<integer>|scan, nmi=<bool>]'. Both options are optional.
> * If the EFI has forced which of the multiboot
On 27.09.19 13:20, Julien Grall wrote:
Hi Oleksandr,
Hi Julien
Thank you for the respin. The code in p2m.c looks good to me now. One comment
regarding the SMMU code below.
On 24/09/2019 17:01, Oleksandr Tyshchenko wrote:
diff --git a/xen/drivers/passthrough/arm/smmu.c
b/xen/drivers/pa
Juergen Gross writes ("[PATCH v1 1/6] docs: add feature document for Xen
hypervisor sysfs-like support"):
> On the 2019 Xen developer summit there was agreement that the Xen
> hypervisor should gain support for a hierarchical name-value store
> similar to the Linux kernel's sysfs.
>
> In the begi
Juergen Gross writes ("[PATCH v1 3/6] libs: add libxenhypfs"):
> Add the new library libxenhypfs for access to the hypervisor filesystem.
This code looks as expected to me.
Acked-by: Ian Jackson
It does make me think you have had to write rather a lot of rather
boring (and in some cases, fiddly
Juergen Gross writes ("[PATCH v1 4/6] tools: add xenfs tool"):
> Add the xenfs tool for accessing the hypervisor filesystem.
Thanks for taking care about exit status. Can you document the exit
statuses somewhere ?
Ian.
On Fri, Sep 27, 2019 at 11:01:39AM +0200, Paul Durrant wrote:
> > -----Original Message-----
> > From: Roger Pau Monne
> > Sent: 27 September 2019 09:46
> > To: Paul Durrant
> > Cc: xen-devel@lists.xenproject.org; Ian Jackson; Wei Liu; Andrew
> > Cooper; George Dunlap; Jan Beulich
>
Switch to using pd and also print the pfn that failed.
No functional change intended.
Signed-off-by: Roger Pau Monné
---
xen/drivers/passthrough/x86/iommu.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/xen/drivers/passthrough/x86/iommu.c
b/xen/drivers/passthrough/x86
On Wed, Sep 25, 2019 at 12:48:42PM +0200, Roger Pau Monné wrote:
> On Mon, Sep 23, 2019 at 11:09:30AM +0100, Wei Liu wrote:
> > We use the same code structure as we did for Xen code.
> >
> > As starters, detect Hyper-V in probe_hyperv. More complex
> > functionality will be added later.
> >
> > S
On Wed, Sep 25, 2019 at 12:39:11PM +0200, Roger Pau Monné wrote:
> On Mon, Sep 23, 2019 at 11:09:28AM +0100, Wei Liu wrote:
> > The only implementation there is Xen.
> >
> > No functional change.
> >
> > Signed-off-by: Wei Liu
> > ---
> > xen/arch/x86/guest/Makefile| 2 +
> > xen/
On 26.09.2019 15:53, Chao Gao wrote:
> If a core has all of its threads parked, late ucode loading
> which currently only loads ucode on online threads would lead to
> differing ucode revisions in the system. In general, keeping ucode
> revision consistent would be less error-prone. To this e
On Wed, Sep 25, 2019 at 12:44:27PM +0200, Roger Pau Monné wrote:
> On Mon, Sep 23, 2019 at 11:09:29AM +0100, Wei Liu wrote:
> > We need indication whether it has succeeded or not.
> >
> > Signed-off-by: Wei Liu
>
> The code LGTM, I have just a suggestion on the approach.
>
> > ---
> > xen/arch