flight 141325 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/141325/
Regressions :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
test-armhf-armhf-xl 7 xen-boot fail REGR. vs. 141253
Tests which
flight 141319 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/141319/
Regressions :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
test-armhf-armhf-xl 7 xen-boot fail REGR. vs. 141253
Tests which
flight 141292 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/141292/
Regressions :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-boot fail REGR. vs. 133580
test-amd64-
flight 141285 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/141285/
Regressions :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
test-amd64-amd64-xl-pvshim 20 guest-start/debian.repeat fail REGR. vs. 140282
Tests which did n
flight 141313 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/141313/
Regressions :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
test-armhf-armhf-xl 7 xen-boot fail REGR. vs. 141253
Tests which
flight 141283 linux-4.19 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/141283/
Regressions :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
test-amd64-amd64-xl-pvshim 20 guest-start/debian.repeat fail REGR. vs. 129313
build-armhf-pvops
flight 141310 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/141310/
Regressions :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
test-armhf-armhf-xl 7 xen-boot fail REGR. vs. 141253
Tests which
flight 141277 linux-4.9 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/141277/
Failures :-/ but no regressions.
Tests which are failing intermittently (not blocking):
test-arm64-arm64-libvirt-xsm 7 xen-boot fail in 141254 pass in 141277
test-amd64-amd64-xl-pvshim 18 gue
flight 141306 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/141306/
Regressions :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
test-armhf-armhf-xl 7 xen-boot fail REGR. vs. 141253
Tests which
flight 141276 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/141276/
Failures :-/ but no regressions.
Tests which did not succeed, but are not blocking:
test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop fail like 139876
test-amd64-i386-xl-qemuu-win7-amd64
flight 141271 linux-next real [real]
http://logs.test-lab.xenproject.org/osstest/logs/141271/
Failures :-/ but no regressions.
Tests which are failing intermittently (not blocking):
test-amd64-amd64-libvirt-vhd 11 guest-start fail in 141085 pass in 141271
test-amd64-amd64-xl-pvshim 12 gu
HVM domains use the IOMMU and device model assistance for communicating with
PCI devices; xen-pcifront/pciback isn't directly needed by an HVM domain.
But pciback also serves a second function - it resets the device when it is
deassigned from the guest, and for this reason pciback needs to be used
with HVM dom
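For illustration only (the device address below is just an example), a device
is typically handed to pciback and then assigned to the guest with xl:

  xl pci-assignable-add 0000:03:00.0
  xl pci-attach <domid> 0000:03:00.0

On deassign, pciback then takes care of resetting the device.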
Stubdomains need to be given sufficient privilege over the guest they
provide emulation for in order for PCI passthrough to work correctly.
When an HVM domain tries to enable MSI, QEMU in the stubdomain calls
PHYSDEVOP_map_pirq, but later it needs to call XEN_DOMCTL_bind_pt_irq as
part of xc_domain_u
When QEMU is running in a stubdomain, handling the "pci-ins" command will fail
if pcifront is not already initialized. Fix this by sending such a command
only after confirming that pciback/front is running.
Signed-off-by: Marek Marczykowski-Górecki
Acked-by: Wei Liu
---
Changes in v2:
- Fixed code style
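A minimal C sketch of the idea, assuming QEMU polls the pciback state node in
xenstore before issuing "pci-ins"; the xenstore path and helper name are
illustrative, not the actual patch:

  #include <stdlib.h>
  #include <string.h>
  #include <unistd.h>
  #include <xenstore.h>

  /* XenbusStateConnected is 4 (xen/include/public/io/xenbus.h). */
  #define XENBUS_STATE_CONNECTED "4"

  /* e.g. be_state_path = "/local/domain/0/backend/pci/<domid>/0/state" */
  static int wait_for_pci_backend(struct xs_handle *xs,
                                  const char *be_state_path, int retries)
  {
      while (retries-- > 0) {
          unsigned int len;
          char *state = xs_read(xs, XBT_NULL, be_state_path, &len);

          if (state && !strcmp(state, XENBUS_STATE_CONNECTED)) {
              free(state);
              return 0;           /* connected: safe to issue "pci-ins" now */
          }
          free(state);
          usleep(100000);         /* back off and poll again */
      }
      return -1;                  /* still not connected: caller reports error */
  }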
In this version, I add PHYSDEVOP_interrupt_control to allow a stubdomain
to enable MSI after mapping it, and also to disable INTx beforehand. The actual
hypercall refuses to enable both of them at the same time.
Related article:
https://www.qubes-os.org/news/2017/10/18/msi-support/
Changes in v2:
- new "xen/x86: Allow st
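A purely illustrative sketch of the refusal rule described above; the real
interface was later reworked into PHYSDEVOP_msi_control, and the struct and
function names here are assumptions, not the committed Xen ABI:

  #include <errno.h>
  #include <stdbool.h>

  struct pci_irq_state {
      bool intx_enabled;
      bool msi_enabled;
      bool msix_enabled;
  };

  /* Refuse any request that would leave INTx and MSI(-X) enabled at once. */
  static int interrupt_control_check(const struct pci_irq_state *st,
                                     bool want_msi, bool enable)
  {
      if (!enable)
          return 0;               /* disabling is always allowed */
      if (want_msi && st->intx_enabled)
          return -EBUSY;          /* disable INTx before enabling MSI(-X) */
      if (!want_msi && (st->msi_enabled || st->msix_enabled))
          return -EBUSY;          /* disable MSI(-X) before enabling INTx */
      return 0;
  }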
Add libxc wrapper for PHYSDEVOP_interrupt_control introduced in previous
commit.
Signed-off-by: Marek Marczykowski-Górecki
---
Changes in v3:
- new patch
Changes in v4:
- adjust for updated previous patch
Changes in v5:
- rename to PHYSDEVOP_msi_control, adjust arguments
Change in v6:
- initi
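A hedged sketch of what such a libxc wrapper usually looks like, modelled on
the existing xc_physdev_* helpers in tools/libxc/xc_physdev.c; the layout of
struct physdev_msi_control is an assumption, not the committed interface:

  int xc_physdev_msi_control(xc_interface *xch, uint16_t seg, uint8_t bus,
                             uint8_t devfn, uint8_t flags)
  {
      struct physdev_msi_control op = {
          .seg   = seg,
          .bus   = bus,
          .devfn = devfn,
          .flags = flags,
      };

      /* do_physdev_op() is the internal helper already used by the other
       * PHYSDEVOP wrappers in libxc. */
      return do_physdev_op(xch, PHYSDEVOP_msi_control, &op, sizeof(op));
  }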
Allow a device model running in a stubdomain to enable/disable INTx/MSI(-X),
bypassing pciback. While pciback is still used to access config space
from within the stubdomain, it refuses to write to
PCI_MSI_FLAGS_ENABLE/PCI_MSIX_FLAGS_ENABLE/PCI_COMMAND_INTX_DISABLE
in non-permissive mode. Which is the right
A stubdomain does not have its own config file - its configuration is
derived from the target domain's. Do not try to manipulate it when attaching
a PCI device.
This bug prevented starting an HVM domain with a stubdomain and a PCI
passthrough device attached.
Signed-off-by: Marek Marczykowski-Górecki
Acked-by: Wei Liu
flight 141304 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/141304/
Regressions :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
test-armhf-armhf-xl 7 xen-boot fail REGR. vs. 141253
Tests which
flight 141299 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/141299/
Regressions :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
test-armhf-armhf-xl 7 xen-boot fail REGR. vs. 141253
Tests which
flight 141267 linux-4.14 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/141267/
Regressions :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
test-amd64-amd64-xl-pvshim 17 guest-saverestore.2 fail REGR. vs. 139910
Tests which did not
With core scheduling active, schedule_cpu_[add/rm]() has to cope with
different scheduling granularities: a cpu not in any cpupool is subject
to granularity 1 (cpu scheduling), while a cpu in a cpupool might be
in a scheduling resource with more than one cpu.
Handle that by having arrays of old/new p
Switch credit2 scheduler completely from vcpu to sched_unit usage.
As we are touching lots of lines remove some white space at the end of
the line, too.
Signed-off-by: Juergen Gross
---
xen/common/sched_credit2.c | 820 ++---
1 file changed, 403 insertion
With core scheduling active it is necessary to move multiple cpus at
the same time to or from a cpupool in order to avoid split scheduling
resources in between.
Signed-off-by: Juergen Gross
---
V1: new patch
---
xen/common/cpupool.c | 100 +
xen/
On- and offlining cpus with core scheduling is rather complicated, as
the cpus are taken on- or offline one by one, but scheduling would rather
handle them per core.
As the future plan is to be able to select scheduling granularity per
cpupool, prepare for that by storing the granularity in struc
Rename the scheduler related perf counters from vcpu* to unit* where
appropriate.
Signed-off-by: Juergen Gross
---
xen/common/sched_credit.c| 32
xen/common/sched_credit2.c | 18 +-
xen/common/sched_null.c | 18 +-
xen/c
Instead of letting schedule_cpu_switch() handle moving cpus from and
to cpupools, split it into schedule_cpu_add() and schedule_cpu_rm().
This will allow us to drop allocating/freeing scheduler data for free
cpus as the idle scheduler doesn't need such data.
Signed-off-by: Juergen Gross
---
V1:
When scheduling a unit with multiple vcpus there is no guarantee that all
vcpus are available (e.g. above maxvcpus or vcpu offline). Fall back to the
idle vcpu of the current cpu in that case. This requires storing the
correct sched_unit pointer in the idle vcpu for as long as it is used as
the fallback vcpu.
In
In order to be able to move cpus to cpupools with core scheduling
active, it is mandatory to merge multiple cpus into one scheduling
resource or to split a scheduling resource with multiple cpus in it
into multiple scheduling resources. This in turn requires modifying
the cpu <-> scheduling resource
Add a scheduling granularity enum ("cpu", "core", "socket") for
specification of the scheduling granularity. Initially it is set to
"cpu"; this can be modified by the new boot parameter (x86 only)
"sched-gran".
According to the selected granularity sched_granularity is set after
all cpus are onlin
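A minimal sketch of the granularity selection in C; the enum values and the
"sched-gran" parameter name follow the description above, while the parsing
helper is illustrative and omits the actual Xen boot-parameter plumbing:

  #include <string.h>

  enum sched_gran {
      SCHED_GRAN_CPU,     /* default: schedule per logical cpu */
      SCHED_GRAN_CORE,    /* all threads of a core are scheduled together */
      SCHED_GRAN_SOCKET,  /* all threads of a socket are scheduled together */
  };

  static enum sched_gran opt_sched_granularity = SCHED_GRAN_CPU;

  /* Would be wired up to the (x86 only) "sched-gran=" boot parameter. */
  static int parse_sched_gran(const char *s)
  {
      if (!strcmp(s, "cpu"))
          opt_sched_granularity = SCHED_GRAN_CPU;
      else if (!strcmp(s, "core"))
          opt_sched_granularity = SCHED_GRAN_CORE;
      else if (!strcmp(s, "socket"))
          opt_sched_granularity = SCHED_GRAN_SOCKET;
      else
          return -1;      /* unrecognised value */
      return 0;
  }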
With a scheduling granularity greater than 1 multiple vcpus share the
same struct sched_unit. Support that.
Setting the initial processor must be done carefully: we can't use
sched_set_res() as that relies on for_each_sched_unit_vcpu() which in
turn needs the vcpu already as a member of the domain
Add a percpu variable holding the index of the cpu in the current
sched_resource structure. This index is used to get the correct vcpu
of a sched_unit on a specific cpu.
For now this index will be zero for all cpus, but with core scheduling
it will be possible to have higher values, too.
Signed-o
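A simplified sketch of how such a per-cpu index can be used to pick the right
vcpu of a unit; the struct layout and the array standing in for Xen's per-cpu
data are assumptions for illustration only:

  #include <stddef.h>

  #define NR_CPUS_EXAMPLE 8

  struct vcpu;

  struct sched_unit {
      struct vcpu **vcpus;        /* vcpus grouped into this unit (simplified) */
      unsigned int  nr_vcpus;
  };

  /* Stand-in for the per-cpu variable: index of each cpu within its
   * scheduling resource (0 everywhere until core scheduling is enabled). */
  static unsigned int sched_res_idx[NR_CPUS_EXAMPLE];

  static struct vcpu *unit_vcpu_on_cpu(const struct sched_unit *unit,
                                       unsigned int cpu)
  {
      unsigned int idx = sched_res_idx[cpu];

      return idx < unit->nr_vcpus ? unit->vcpus[idx] : NULL;
  }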
Switch credit scheduler completely from vcpu to sched_unit usage.
Signed-off-by: Juergen Gross
---
xen/common/sched_credit.c | 503 +++---
1 file changed, 250 insertions(+), 253 deletions(-)
diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit
Having a pointer to struct scheduler in struct sched_resource instead
of per cpu is enough.
Signed-off-by: Juergen Gross
---
V1: new patch
---
xen/common/sched_credit.c | 18 +++---
xen/common/sched_credit2.c | 3 ++-
xen/common/schedule.c | 15 +++
xen/include/xen
Use sched_units instead of vcpus in schedule(). This includes the
introduction of sched_unit_runstate_change() as a replacement of
vcpu_runstate_change() in schedule().
Signed-off-by: Juergen Gross
---
Note that sched_unit_runstate_change() will be subsumed by another
rework in a later patch.
---
When entering deep sleep states all domains are paused, resulting in
all cpus only running idle vcpus. This enables us to stop scheduling
completely in order to avoid synchronization problems with core
scheduling when individual cpus are offlined.
Disabling the scheduler is done by replacing the so
We'll need a way to free a sched_unit structure without side effects
in a later patch.
Signed-off-by: Juergen Gross
---
RFC V2: new patch, carved out from RFC V1 patch 49
---
xen/common/schedule.c | 38 +-
1 file changed, 21 insertions(+), 17 deletions(-)
dif
Switch null scheduler completely from vcpu to sched_unit usage.
Signed-off-by: Juergen Gross
---
xen/common/sched_null.c | 333
1 file changed, 165 insertions(+), 168 deletions(-)
diff --git a/xen/common/sched_null.c b/xen/common/sched_null.c
ind
sched_move_irqs() should work on a sched_unit as that is the unit
moved between cpus.
Rename the current function to vcpu_move_irqs() as it is still needed
in schedule().
Signed-off-by: Juergen Gross
---
xen/common/schedule.c | 18 +-
1 file changed, 13 insertions(+), 5 deletion
When core or socket scheduling is active, enabling or disabling smt is
not possible, as that would require a major host reconfiguration.
Add a bool sched_disable_smt_switching which will be set for core or
socket scheduling.
Signed-off-by: Juergen Gross
Acked-by: Jan Beulich
---
V1:
- new patch
vcpu_wake() and vcpu_sleep() need to be made core scheduling aware:
they might need to switch a single vcpu of an already scheduled unit
between running and not running.
Especially when vcpu_sleep() for a vcpu is being called by a vcpu of
the same scheduling unit, special care must be taken in orde
When switching sched units synchronize all vcpus of the new unit to be
scheduled at the same time.
A variable sched_granularity is added which holds the number of vcpus
per schedule unit.
As tasklets require scheduling the idle unit, it is required to set the
tasklet_work_scheduled parameter of d
Prepare for supporting multiple cpus per scheduling resource by allocating
the cpumask per resource dynamically.
Modify sched_res_mask to have only one bit per scheduling resource set.
Signed-off-by: Juergen Gross
---
V1: new patch (carved out from other patch)
---
xen/common/schedule.c | 16 +
Today the vcpu runstate of a newly scheduled vcpu is always set to
"running" even if at that time vcpu_runnable() is already returning
false due to a race (e.g. with pausing the vcpu).
With core scheduling this can no longer work as not all vcpus of a
schedule unit have to be "running" when being sc
Add counters to struct sched_unit summing up runstates of associated
vcpus. This allows doing quick checks whether a unit has any vcpu
running or whether only a single vcpu of a unit is running.
Signed-off-by: Juergen Gross
---
RFC V2: add counters for each possible runstate
---
xen/common/sched
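A small sketch of the counter idea; the field and constant names are
assumptions, but they show how per-unit sums allow O(1) checks instead of
iterating over all vcpus of the unit:

  #include <stdbool.h>

  #define RUNSTATE_RUNNING 0          /* index of the "running" runstate */

  struct sched_unit_counts {
      unsigned int runstate_cnt[4];   /* one counter per vcpu runstate */
  };

  static bool unit_has_running_vcpu(const struct sched_unit_counts *u)
  {
      return u->runstate_cnt[RUNSTATE_RUNNING] != 0;
  }

  static bool unit_single_vcpu_running(const struct sched_unit_counts *u)
  {
      return u->runstate_cnt[RUNSTATE_RUNNING] == 1;
  }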
Add an is_running indicator to struct sched_unit which will be set
whenever the unit is being scheduled. Switch scheduler code to use
unit->is_running instead of vcpu->is_running for scheduling decisions.
At the same time introduce a state_entry_time field in struct
sched_unit being updated whenev
Switch arinc653 scheduler completely from vcpu to sched_unit usage.
Signed-off-by: Juergen Gross
---
xen/common/sched_arinc653.c | 208 +---
1 file changed, 101 insertions(+), 107 deletions(-)
diff --git a/xen/common/sched_arinc653.c b/xen/common/sched_ar
Switch rt scheduler completely from vcpu to sched_unit usage.
Signed-off-by: Juergen Gross
---
xen/common/sched_rt.c | 356 --
1 file changed, 174 insertions(+), 182 deletions(-)
diff --git a/xen/common/sched_rt.c b/xen/common/sched_rt.c
index a47
cpupool_domain_cpumask() is used by scheduling to select cpus or to
iterate over cpus. In order to support scheduling units spanning
multiple cpus let cpupool_domain_cpumask() return a cpumask with only
one bit set per scheduling resource.
Signed-off-by: Juergen Gross
---
xen/common/cpupool.c
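A toy sketch of the "one bit per scheduling resource" idea, with a boolean
array standing in for cpumask_t and an assumed mapping from each cpu to the
first cpu of its scheduling resource:

  #include <stdbool.h>

  #define NR_CPUS_EXAMPLE 8

  typedef bool cpumask_example_t[NR_CPUS_EXAMPLE];

  /* Assumed: each cpu maps to the first ("master") cpu of its resource,
   * e.g. pairs of hyperthreads with core scheduling. */
  static const unsigned int sched_res_master[NR_CPUS_EXAMPLE] =
      { 0, 0, 2, 2, 4, 4, 6, 6 };

  static void reduce_to_sched_resources(const cpumask_example_t pool_cpus,
                                        cpumask_example_t out)
  {
      for (unsigned int cpu = 0; cpu < NR_CPUS_EXAMPLE; cpu++)
          /* Keep a cpu only if it is the master of its scheduling resource. */
          out[cpu] = pool_cpus[cpu] && sched_res_master[cpu] == cpu;
  }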
In preparation for core scheduling, let the percpu pointer
schedule_data.curr point to a struct sched_unit instead of the related
vcpu. At the same time rename the per-vcpu scheduler-specific structs
to per-unit ones.
Signed-off-by: Juergen Gross
Reviewed-by: Dario Faggioli
---
V3:
- remove no long
In several places, support for multiple vcpus per sched unit is
missing. Add that missing support (with the exception of initial
allocation) and missing helpers for that.
Signed-off-by: Juergen Gross
---
RFC V2:
- fix vcpu_runstate_helper()
V1:
- add special handling for idle unit in unit_ru
Having a pointer to struct cpupool in struct sched_resource instead
of per cpu is enough.
Signed-off-by: Juergen Gross
---
V1: new patch
---
xen/common/cpupool.c | 4 +---
xen/common/sched_credit.c | 2 +-
xen/common/sched_rt.c | 2 +-
xen/common/schedule.c | 8
xen/inc
This prepares for making the different schedulers vcpu-agnostic.
Note that some scheduler-specific accessor functions are misnamed after
this patch. This will be corrected in later patches.
Signed-off-by: Juergen Gross
Reviewed-by: Dario Faggioli
---
xen/common/sched_arinc653.c | 4 ++--
xen/commo
Especially in the do_schedule() functions of the different schedulers
using smp_processor_id() for the local cpu number is correct only if
the sched_unit is a single vcpu. As soon as larger sched_units are
used most uses should be replaced by the master_cpu number of the local
sched_resource instea
Let the schedulers put a sched_unit pointer into struct task_slice
instead of a vcpu pointer.
Signed-off-by: Juergen Gross
---
xen/common/sched_arinc653.c | 8
xen/common/sched_credit.c | 4 ++--
xen/common/sched_credit2.c | 4 ++--
xen/common/sched_null.c | 12 ++--
x
This prepares support of larger scheduling granularities, e.g. core
scheduling.
While at it, move sched_has_urgent_vcpu() from include/asm-x86/cpuidle.h
into sched.h, removing the need for including sched-if.h in cpuidle.h.
For that purpose remove urgent_count from the scheduler private data
and mak
In order to make it easy to iterate over sched_unit elements of a
domain, build a singly linked list and add an iterator for it. The new
list is guarded by the same mechanisms as the vcpu linked list as it
is modified only via vcpu_create() or vcpu_destroy().
For completeness add another iterator
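A minimal sketch of such a list and iterator; the structures are simplified
and the list maintenance in vcpu_create()/vcpu_destroy() is omitted:

  #include <stddef.h>

  struct sched_unit {
      struct sched_unit *next_in_list;    /* singly linked, per domain */
      /* ... other fields ... */
  };

  struct domain_units {
      struct sched_unit *sched_unit_list; /* head of the per-domain unit list */
  };

  #define for_each_sched_unit(d, u) \
      for ( (u) = (d)->sched_unit_list; (u) != NULL; (u) = (u)->next_in_list )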
In order to prepare for multiple vcpus per schedule unit move struct
task_slice in schedule() from the local stack into struct sched_unit
of the currently running unit. To make access easier for the single
schedulers add the pointer of the currently running unit as a parameter
of do_schedule().
Wh
Instead of returning a physical cpu number, let pick_cpu() return a
scheduler resource. Rename pick_cpu() to pick_resource() to
reflect that change.
Signed-off-by: Juergen Gross
Reviewed-by: Dario Faggioli
---
V3:
- style fix (Jan Beulich)
---
xen/common/sched_arinc653.c | 13 +++---
Rename vcpu_schedule_[un]lock[_irq]() to unit_schedule_[un]lock[_irq]()
and let it take a sched_unit pointer instead of a vcpu pointer as
parameter.
Signed-off-by: Juergen Gross
---
xen/common/sched_credit.c | 17 +
xen/common/sched_credit2.c | 40 ---
Now that vcpu_migrate_start() and vcpu_migrate_finish() are used only
to ensure a vcpu is running on a suitable processor they can be
switched to operate on schedule units instead of vcpus.
While doing that rename them accordingly and make the _start() variant
static. As it is needed anyway call v
The credit scheduler calls vcpu_pause_nosync() and vcpu_unpause()
today. Add sched_unit_pause_nosync() and sched_unit_unpause() to
perform the same operations on scheduler units instead.
Signed-off-by: Juergen Gross
---
xen/common/sched_credit.c | 6 +++---
xen/include/xen/sched-if.h | 10
Affinities are scheduler-specific attributes; they should be per
scheduling unit. So move all affinity-related fields in struct vcpu
to struct sched_unit. While at it, switch affinity-related functions in
sched-if.h to use a pointer to sched_unit instead of to vcpu as a parameter.
The affinity_broken fl
Add support for core- and socket-scheduling in the Xen hypervisor.
Via boot parameter sched-gran=core (or sched-gran=socket)
it is possible to change the scheduling granularity from cpu (the
default) to either whole cores or even sockets.
All logical cpus (threads) of the core or socket are alway
In order to prepare core- and socket-scheduling use a new struct
sched_unit instead of struct vcpu for interfaces of the different
schedulers.
Rename the per-scheduler functions insert_vcpu and remove_vcpu to
insert_unit and remove_unit to reflect the change of the parameter.
In the schedulers ren
Where appropriate switch from for_each_vcpu() to for_each_sched_unit()
in order to prepare core scheduling.
As it is beneficial once here and will certainly be in future, add a
unit_scheduler() helper and let vcpu_scheduler() use it.
Signed-off-by: Juergen Gross
---
V2:
- handle affinity_broken correctly
Add a scheduling abstraction layer between physical processors and the
schedulers by introducing a struct sched_resource. Each running scheduler
unit is active on such a scheduler resource. For the time being
there is one struct sched_resource per cpu, but in future there might
be one for each core
Today there are two distinct scenarios for vcpu_create(): either for
creation of idle-domain vcpus (vcpuid == processor) or for creation of
"normal" domain vcpus (including dom0), where the caller selects the
initial processor on a round-robin scheme of the allowed processors
(allowed being based o
Add the following helpers using a sched_unit as input instead of a
vcpu:
- is_idle_unit() similar to is_idle_vcpu()
- is_unit_online() similar to is_vcpu_online() (returns true when any
of its vcpus is online)
- unit_runnable() like vcpu_runnable() (returns true if any of its
vcpus is runnable
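A minimal sketch of the helpers listed above, under the simplifying
assumption that a unit holds an array of its vcpus (the real patch walks the
actual vcpu list):

  #include <stdbool.h>

  struct vcpu_ex {
      bool is_idle;
      bool online;
      bool runnable;
  };

  struct sched_unit_ex {
      struct vcpu_ex *vcpus;
      unsigned int nr_vcpus;
  };

  static bool is_idle_unit(const struct sched_unit_ex *u)
  {
      /* Idle units contain only idle vcpus, so checking the first is enough. */
      return u->nr_vcpus && u->vcpus[0].is_idle;
  }

  static bool is_unit_online(const struct sched_unit_ex *u)
  {
      for (unsigned int i = 0; i < u->nr_vcpus; i++)
          if (u->vcpus[i].online)
              return true;        /* any online vcpu makes the unit online */
      return false;
  }

  static bool unit_runnable(const struct sched_unit_ex *u)
  {
      for (unsigned int i = 0; i < u->nr_vcpus; i++)
          if (u->vcpus[i].runnable)
              return true;        /* any runnable vcpu makes the unit runnable */
      return false;
  }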
flight 141272 freebsd-master real [real]
http://logs.test-lab.xenproject.org/osstest/logs/141272/
Regressions :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
build-amd64-freebsd 7 freebsd-build fail REGR. vs. 141004
Tests which did
flight 141270 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/141270/
Perfect :-)
All tests in this flight passed as required
version targeted for testing:
ovmf 86ad762fa7a51cbf94e34e732961aae3de3339c3
baseline version:
ovmf 5a9db858806912ebd4e83