When scheduling an item with multiple vcpus there is no guarantee that all
vcpus are available (e.g. above maxvcpus or vcpu offline). Fall back to
the idle vcpu of the current cpu in that case. This requires storing the
correct schedule_item pointer in the idle vcpu as long as it is used as
the fallback vcpu.
In
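A minimal sketch of the fallback idea in Xen-style C; sched_item_vcpu_on() and vcpu_available() are hypothetical helpers standing in for the series' actual lookup and availability checks:

    /* Illustrative only: pick the item's vcpu for @cpu, or fall back to idle. */
    static struct vcpu *sched_item_vcpu_or_idle(struct sched_item *item,
                                                unsigned int cpu)
    {
        struct vcpu *v = sched_item_vcpu_on(item, cpu); /* hypothetical lookup */

        if ( v != NULL && vcpu_available(v) )           /* hypothetical check */
            return v;

        /*
         * No usable vcpu: use the cpu's idle vcpu as a stand-in and let it
         * carry the real item's pointer while acting as fallback.
         */
        v = idle_vcpu[cpu];
        v->sched_item = item;
        return v;
    }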
Switch credit scheduler completely from vcpu to sched_item usage.
Signed-off-by: Juergen Gross
---
xen/common/sched_credit.c | 504 +++---
1 file changed, 251 insertions(+), 253 deletions(-)
diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit
Switch credit2 scheduler completely from vcpu to sched_item usage.
As we are touching lots of lines anyway, remove some trailing white
space, too.
Signed-off-by: Juergen Gross
---
xen/common/sched_credit2.c | 820 ++---
1 file changed, 403 insertion
Add an identifier to sched_item. For now it will be the same as the
related vcpu_id.
Signed-off-by: Juergen Gross
---
xen/common/schedule.c | 3 ++-
xen/include/xen/sched.h | 1 +
2 files changed, 3 insertions(+), 1 deletion(-)
diff --git a/xen/common/schedule.c b/xen/common/schedule.c
index
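A minimal sketch of the new field, with illustrative names (the actual struct layout in the series differs):

    struct sched_item {
        struct vcpu  *vcpu;
        unsigned int  item_id;    /* for now identical to the vcpu id */
        /* ... */
    };

    /* Set at item setup time, e.g. when the item is wired to its vcpu: */
    static void sched_item_set_id(struct sched_item *item, const struct vcpu *v)
    {
        item->item_id = v->vcpu_id;   /* one item per vcpu at this point */
    }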
Especially in the do_schedule() functions of the different schedulers,
using smp_processor_id() for the local cpu number is correct only as
long as a sched_item consists of a single vcpu. As soon as larger
sched_items are used, most of those uses should be replaced by the cpu
number of the local sched_resource instead.
Add
Now that vcpu_migrate_start() and vcpu_migrate_finish() are used only
to ensure a vcpu is running on a suitable processor they can be
switched to operate on schedule items instead of vcpus.
While doing that rename them accordingly and make the _start() variant
static.
vcpu_move_locked() is switch
Add a scheduling abstraction layer between physical processors and the
schedulers by introducing a struct sched_resource. Each scheduler item,
while running, is active on such a scheduler resource. For the time being
there is one struct sched_resource per cpu, but in future there might
be one for each core
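A minimal sketch of the abstraction, with assumed field names; the real structure in the series carries more state:

    struct sched_resource {
        struct sched_item *curr;        /* item currently active on this resource */
        unsigned int       master_cpu;  /* cpu doing the scheduling work */
        /* later: mask of all cpus (threads) belonging to this resource */
    };

    DECLARE_PER_CPU(struct sched_resource *, sched_res);

    /* Schedulers then reference resources rather than raw cpu numbers: */
    #define curr_item_on_cpu(cpu)  (per_cpu(sched_res, cpu)->curr)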
In preparation for core scheduling let the percpu pointer
schedule_data.curr point to a struct sched_item instead of the related
vcpu. At the same time rename the per-vcpu scheduler specific structs
to per-item ones.
Signed-off-by: Juergen Gross
---
xen/common/sched_arinc653.c | 2 +-
xen/common
Add an is_running indicator to struct sched_item which will be set
whenever the item is being scheduled. Switch scheduler code to use
item->is_running instead of vcpu->is_running for scheduling decisions.
At the same time introduce a state_entry_time field in struct
sched_item being updated whenev
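A minimal sketch of the two new fields and their update; names and placement are simplified, NOW() is Xen's current-time helper:

    struct sched_item {
        /* ... */
        bool     is_running;        /* item is currently scheduled on a resource */
        s_time_t state_entry_time;  /* when is_running last changed */
    };

    /* Illustrative: called when an item starts or stops running. */
    static void sched_item_set_running(struct sched_item *item, bool run)
    {
        item->is_running = run;
        item->state_entry_time = NOW();
    }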
We'll need a way to free a sched_item structure without side effects
in a later patch.
Signed-off-by: Juergen Gross
---
RFC V2: new patch, carved out from RFC V1 patch 49
---
xen/common/schedule.c | 36
1 file changed, 20 insertions(+), 16 deletions(-)
diff
Instead of returning a physical cpu number, let pick_cpu() return a
scheduler resource. Rename pick_cpu() to pick_resource() to reflect
that change.
Signed-off-by: Juergen Gross
---
xen/common/sched_arinc653.c | 12 ++--
xen/common/sched_credit.c| 16
xen/com
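A minimal sketch of the changed hook, with master_cpu as an assumed field of struct sched_resource and the wrapper name illustrative:

    struct scheduler {
        /* ... */
        struct sched_resource *(*pick_resource)(const struct scheduler *ops,
                                                struct sched_item *item);
    };

    /* Callers that still need a plain cpu number derive it from the resource: */
    static inline unsigned int sched_pick_cpu(const struct scheduler *ops,
                                              struct sched_item *item)
    {
        return ops->pick_resource(ops, item)->master_cpu;
    }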
In preparation for core scheduling carve out the GDT related
functionality (writing GDT related PTEs, loading the default or full
GDT) into sub-functions.
Signed-off-by: Juergen Gross
---
RFC V2: split off non-refactoring part
---
xen/arch/x86/domain.c | 57 +++---
Rename the scheduler related perf counters from vcpu* to item* where
appropriate.
Signed-off-by: Juergen Gross
---
xen/common/sched_credit.c| 32
xen/common/sched_credit2.c | 18 +-
xen/common/sched_null.c | 18 +-
xen/c
In order to make it easy to iterate over the sched_item elements of a
domain, build a singly linked list and add an iterator for it. The new
list is guarded by the same mechanisms as the vcpu linked list, as it
is modified only via vcpu_create() or vcpu_destroy().
For completeness add another iterator f
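A minimal sketch of the list and its iterator; names follow the series but are simplified here:

    struct sched_item {
        struct domain     *domain;
        struct sched_item *next_in_list;   /* singly linked, like the vcpu list */
        /* ... */
    };

    #define for_each_sched_item(d, item)                          \
        for ( (item) = (d)->sched_item_list; (item) != NULL;      \
              (item) = (item)->next_in_list )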
Add the following helpers using a sched_item as input instead of a
vcpu (a simplified sketch of some of them follows below):
- is_idle_item() similar to is_idle_vcpu()
- item_runnable() like vcpu_runnable()
- sched_set_res() to set the current processor of an item
- sched_item_cpu() to get the current processor of an item
- sched_{set|clear}_pause_
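A simplified sketch of some of these helpers, assuming one vcpu per item and the res/master_cpu layout used in the sched_resource sketch above:

    static inline bool is_idle_item(const struct sched_item *item)
    {
        return is_idle_vcpu(item->vcpu);
    }

    static inline bool item_runnable(const struct sched_item *item)
    {
        return vcpu_runnable(item->vcpu);
    }

    static inline void sched_set_res(struct sched_item *item,
                                     struct sched_resource *res)
    {
        item->vcpu->processor = res->master_cpu;   /* assumed field name */
        item->res = res;
    }

    static inline unsigned int sched_item_cpu(const struct sched_item *item)
    {
        return item->res->master_cpu;
    }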
Switch arinc653 scheduler completely from vcpu to sched_item usage.
Signed-off-by: Juergen Gross
---
xen/common/sched_arinc653.c | 208 +---
1 file changed, 101 insertions(+), 107 deletions(-)
diff --git a/xen/common/sched_arinc653.c b/xen/common/sched_ar
This prepares making the different schedulers vcpu agnostic.
Signed-off-by: Juergen Gross
---
xen/common/sched_arinc653.c | 4 ++--
xen/common/sched_credit.c | 6 +++---
xen/common/sched_credit2.c | 10 +-
xen/common/sched_null.c | 4 ++--
xen/common/sched_rt.c | 4 ++--
Add counters to struct sched_item summing up runstates of associated
vcpus.
Signed-off-by: Juergen Gross
---
RFC V2: add counters for each possible runstate
---
xen/common/schedule.c | 6 ++
xen/include/xen/sched.h | 2 ++
2 files changed, 8 insertions(+)
diff --git a/xen/common/schedule.
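A minimal sketch of the counters and their update, using Xen's four vcpu runstates; the helper name is illustrative:

    struct sched_item {
        /* ... */
        unsigned int runstate_cnt[4];   /* running/runnable/blocked/offline */
    };

    /* Called whenever an associated vcpu changes its runstate. */
    static inline void sched_item_runstate_update(struct sched_item *item,
                                                  int old_state, int new_state)
    {
        item->runstate_cnt[old_state]--;
        item->runstate_cnt[new_state]++;
    }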
Add support for core- and socket-scheduling in the Xen hypervisor.
Via boot parameter sched-gran=core (or sched-gran=socket)
it is possible to change the scheduling granularity from cpu (the
default) to either whole cores or even sockets.
All logical cpus (threads) of the core or socket are alway
Add a scheduling granularity enum ("thread", "core", "socket") for
specification of the scheduling granularity. Initially it is set to
"thread"; this can be modified by the new boot parameter (x86 only)
"sched_granularity".
According to the selected granularity sched_granularity is set after
all c
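A minimal sketch of the enum and the boot parameter hook; custom_param() is Xen's usual mechanism, while the enum and parser names here are illustrative:

    enum sched_gran {
        SCHED_GRAN_cpu,       /* "thread": one cpu per scheduling resource */
        SCHED_GRAN_core,
        SCHED_GRAN_socket,
    };

    static enum sched_gran __initdata opt_sched_granularity = SCHED_GRAN_cpu;

    static int __init parse_sched_granularity(const char *str)
    {
        if ( !strcmp(str, "thread") )
            opt_sched_granularity = SCHED_GRAN_cpu;
        else if ( !strcmp(str, "core") )
            opt_sched_granularity = SCHED_GRAN_core;
        else if ( !strcmp(str, "socket") )
            opt_sched_granularity = SCHED_GRAN_socket;
        else
            return -EINVAL;

        return 0;
    }
    custom_param("sched_granularity", parse_sched_granularity);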
In several places support for multiple vcpus per sched_item is still
missing. Add that missing support (with the exception of the initial
allocation) and the missing helpers for it.
Signed-off-by: Juergen Gross
---
RFC V2: fix vcpu_runstate_helper()
---
xen/common/schedule.c | 26
vcpu_wake() and vcpu_sleep() need to be made core scheduling aware:
they might need to switch a single vcpu of an already scheduled item
between running and not running.
Especially when vcpu_sleep() for a vcpu is being called by a vcpu of
the same scheduling item special care must be taken in orde
Use sched_items instead of vcpus in schedule(). This includes the
introduction of sched_item_runstate_change() as a replacement of
vcpu_runstate_change() in schedule().
Signed-off-by: Juergen Gross
---
xen/common/schedule.c | 70 +--
1 file changed
For support of core scheduling the scheduler cpu callback for
CPU_STARTING has to be moved into a dedicated function called by
start_secondary(), as it needs to run before spin_debug_enable() due
to potentially calling xfree().
Signed-off-by: Juergen Gross
---
RFC V2: fix ARM build
---
xen/a
When switching sched items synchronize all vcpus of the new item to be
scheduled at the same time.
A variable sched_granularity is added which holds the number of vcpus
per schedule item.
As tasklets require scheduling the idle item, it is required to set the
tasklet_work_scheduled parameter of d
Affinities are scheduler specific attributes; they should be per
scheduling item. So move all affinity related fields in struct vcpu
to struct sched_item. While at it, switch the affinity related functions
in sched-if.h to take a pointer to a sched_item instead of a vcpu as
parameter.
vcpu->last_run_time is
In order to prepare for multiple vcpus per schedule item move struct
task_slice in schedule() from the local stack into struct sched_item
of the currently running item. To make access easier for the individual
schedulers, add the pointer of the currently running item as a parameter
of do_schedule().
Wh
Today there are two distinct scenarios for vcpu_create(): either for
creation of idle-domain vcpus (vcpuid == processor) or for creation of
"normal" domain vcpus (including dom0), where the caller selects the
initial processor on a round-robin scheme of the allowed processors
(allowed being based o
Instead of using the SCHED_OP() macro to call the different scheduler
specific functions add inline wrappers for that purpose.
Signed-off-by: Juergen Gross
---
RFC V2: new patch (Andrew Cooper)
---
xen/common/schedule.c | 104 --
xen/include/xen/sched-if.h | 178
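A minimal sketch of the pattern: one typed inline wrapper per hook instead of the generic SCHED_OP() macro (the wrapper name mirrors the hook and is illustrative):

    static inline void sched_insert_item(const struct scheduler *ops,
                                         struct sched_item *item)
    {
        if ( ops->insert_item )
            ops->insert_item(ops, item);
    }

Callers then write sched_insert_item(ops, item) instead of SCHED_OP(ops, insert_item, item), which lets the compiler type-check the arguments.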
Let the schedulers put a sched_item pointer into struct task_slice
instead of a vcpu pointer.
Signed-off-by: Juergen Gross
---
xen/common/sched_arinc653.c | 8
xen/common/sched_credit.c | 4 ++--
xen/common/sched_credit2.c | 4 ++--
xen/common/sched_null.c | 12 ++--
x
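A minimal sketch of the changed structure (field set simplified):

    struct task_slice {
        struct sched_item *task;      /* was: struct vcpu *task */
        s_time_t           time;
        bool               migrated;
    };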
Where appropriate switch from for_each_vcpu() to for_each_sched_item()
in order to prepare core scheduling.
Signed-off-by: Juergen Gross
---
xen/common/domain.c | 9 ++---
xen/common/schedule.c | 107 ++
2 files changed, 59 insertions(+), 57 de
Today the vcpu runstate of a newly scheduled vcpu is always set to
"running" even if at that time vcpu_runnable() is already returning
false due to a race (e.g. with pausing the vcpu).
With core scheduling this can no longer work as not all vcpus of a
schedule item have to be "running" when being sc
Allocate a struct sched_item for each vcpu. This removes the need to
have it locally on the stack in schedule.c.
Signed-off-by: Juergen Gross
---
xen/common/schedule.c | 67 +++--
xen/include/xen/sched.h | 2 ++
2 files changed, 33 insertions(+), 36
Add a percpu variable holding the index of the cpu in the current
sched_resource structure. This index is used to get the correct vcpu
of a sched_item on a specific cpu.
For now this index will be zero for all cpus, but with core scheduling
it will be possible to have higher values, too.
Signed-o
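A minimal sketch of the per-cpu index and how it might be used; sched_item_vcpu_at() is a hypothetical accessor for the item's vcpus:

    DECLARE_PER_CPU(unsigned int, sched_res_idx);   /* 0 for all cpus for now */

    static inline struct vcpu *sched_item_vcpu_here(const struct sched_item *item,
                                                    unsigned int cpu)
    {
        /*
         * With core scheduling the index selects the item's vcpu that
         * belongs to this particular cpu of the scheduling resource.
         */
        return sched_item_vcpu_at(item, per_cpu(sched_res_idx, cpu));
    }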
In order to prepare core- and socket-scheduling use a new struct
sched_item instead of struct vcpu for interfaces of the different
schedulers.
Rename the per-scheduler functions insert_vcpu and remove_vcpu to
insert_item and remove_item to reflect the change of the parameter.
In the schedulers ren
Rename vcpu_schedule_[un]lock[_irq]() to item_schedule_[un]lock[_irq]()
and let it take a sched_item pointer instead of a vcpu pointer as
parameter.
Signed-off-by: Juergen Gross
---
xen/common/sched_credit.c | 17 +
xen/common/sched_credit2.c | 40 +++
Instead of dynamically deciding whether the previous vcpu was using the
full or the default GDT, just add a percpu variable for that purpose.
This at once removes the need to test twice whether the vcpu_ids differ.
Cache the need_full_gdt(nd) value in a local variable.
Signed-off-by: Juergen Gross
---
RFC V2: n
vcpu_force_reschedule() is only used for modifying the periodic timer
of a vcpu. Forcing a vcpu to give up the physical cpu for that purpose
is kind of brutal.
So instead of doing the reschedule dance just operate on the timer
directly.
In case we are modifying the timer of the currently running
Add a pointer to the domain to struct sched_item in order to avoid
having to dereference the vcpu pointer of struct sched_item to find
the related domain.
Signed-off-by: Juergen Gross
---
xen/common/schedule.c | 3 ++-
xen/include/xen/sched.h | 1 +
2 files changed, 3 insertions(+), 1 deletion
Switch rt scheduler completely from vcpu to sched_item usage.
Signed-off-by: Juergen Gross
---
xen/common/sched_rt.c | 356 --
1 file changed, 174 insertions(+), 182 deletions(-)
diff --git a/xen/common/sched_rt.c b/xen/common/sched_rt.c
index 186
Switch null scheduler completely from vcpu to sched_item usage.
Signed-off-by: Juergen Gross
---
xen/common/sched_null.c | 304
1 file changed, 149 insertions(+), 155 deletions(-)
diff --git a/xen/common/sched_null.c b/xen/common/sched_null.c
ind
cpupool_domain_cpumask() is used by scheduling to select cpus or to
iterate over cpus. In order to support scheduling items spanning
multiple cpus let cpupool_domain_cpumask() return a cpumask with only
one bit set per scheduling resource.
Signed-off-by: Juergen Gross
---
xen/common/cpupool.c
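A minimal sketch of the intended masking, assuming the sched_res/master_cpu layout sketched earlier; the real patch keeps such a mask per cpupool instead of recomputing it:

    static void cpumask_one_bit_per_resource(cpumask_t *mask)
    {
        unsigned int cpu;

        for_each_cpu ( cpu, mask )
            if ( cpu != per_cpu(sched_res, cpu)->master_cpu )
                __cpumask_clear_cpu(cpu, mask);   /* keep one representative cpu */
    }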
sched_move_irqs() should work on a sched_item as that is the item
moved between cpus.
Rename the current function to vcpu_move_irqs() as it is still needed
in schedule().
Signed-off-by: Juergen Gross
---
xen/common/schedule.c | 18 +-
1 file changed, 13 insertions(+), 5 deletion
The credit scheduler calls vcpu_pause_nosync() and vcpu_unpause()
today. Add sched_item_pause_nosync() and sched_item_unpause() to
perform the same operations on scheduler items instead.
Signed-off-by: Juergen Gross
---
xen/common/sched_credit.c | 6 +++---
xen/include/xen/sched-if.h | 10
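A minimal sketch of the two helpers, built on the per-item vcpu iterator introduced earlier in the series:

    static inline void sched_item_pause_nosync(struct sched_item *item)
    {
        struct vcpu *v;

        for_each_sched_item_vcpu ( item, v )
            vcpu_pause_nosync(v);
    }

    static inline void sched_item_unpause(struct sched_item *item)
    {
        struct vcpu *v;

        for_each_sched_item_vcpu ( item, v )
            vcpu_unpause(v);
    }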
With a scheduling granularity greater than 1 multiple vcpus share the
same struct sched_item. Support that.
Setting the initial processor must be done carefully: we can't use
sched_set_res() as that relies on for_each_sched_item_vcpu() which in
turn needs the vcpu already as a member of the domain
flight 135672 linux-4.9 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/135672/
Regressions :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
test-amd64-amd64-xl-qcow2 17 guest-localmigrate/x10 fail REGR. vs. 134015
test-amd64-amd64-exam
On Thu, May 02, 2019 at 10:20:09AM +0200, Roger Pau Monné wrote:
>On Wed, May 01, 2019 at 12:41:13AM +0800, Chao Gao wrote:
>> On Tue, Apr 30, 2019 at 11:30:33AM +0200, Roger Pau Monné wrote:
>> >On Tue, Apr 30, 2019 at 05:01:21PM +0800, Chao Gao wrote:
>> >> On Tue, Apr 30, 2019 at 01:56:31AM -060
flight 135668 linux-4.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/135668/
Regressions :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
test-amd64-amd64-xl-qcow2 17 guest-localmigrate/x10 fail REGR. vs. 133468
Tests which did not s
flight 135669 linux-4.14 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/135669/
Regressions :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
test-arm64-arm64-examine 11 examine-serial/bootloader fail REGR. vs. 133923
test-amd64-amd64-xl-
branch xen-4.11-testing
xenbranch xen-4.11-testing
job test-arm64-arm64-xl-xsm
testid xen-boot
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.g
On Fri, May 03, 2019 at 05:04:01PM +0200, Roger Pau Monne wrote:
There's no reason to request physically contiguous memory for those
allocations.
Reported-by: Ian Jackson
Signed-off-by: Roger Pau Monné
---
You really don't want this scissor line here, git will trim all your
message content b
flight 135639 linux-next real [real]
http://logs.test-lab.xenproject.org/osstest/logs/135639/
Regressions :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 135443
t
flight 135663 qemu-upstream-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/135663/
Regressions :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
test-amd64-i386-qemuu-rhel6hvm-intel 12 guest-start/redhat.repeat fail REGR. vs. 126937
branch xen-unstable
xenbranch xen-unstable
job build-i386-xsm
testid xen-build
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: xen git://xenbits.xen.org/xen.git
*** Found and reproduced probl
flight 135653 xen-4.7-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/135653/
Regressions :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
build-amd64-xsm 6 xen-build fail REGR. vs. 133596
build-i386-xsm
Hi,
I have a machine that allocates the VESA LFB above 4GB, as reported by the UEFI
GOP. At 0x40 to be specific.
vga_console_info.u.vesa_lfb.lfb_base is a 32bit field, so it gets
truncated, leading to all kinds of memory corruption when something
writes there.
If that would be only about Xen, that
flight 135624 xen-4.9-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/135624/
Regressions :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
build-amd64-prev 6 xen-build fail REGR. vs. 132889
test-amd64-amd6
flight 135640 freebsd-master real [real]
http://logs.test-lab.xenproject.org/osstest/logs/135640/
Regressions :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
build-amd64-freebsd-again 5 host-install(5) fail REGR. vs. 135233
build-amd64-xen-
branch xen-unstable
xenbranch xen-unstable
job test-amd64-i386-xl-qemut-ws16-amd64
testid windows-install
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git
flight 135749 xen-unstable-coverity real [real]
http://logs.test-lab.xenproject.org/osstest/logs/135749/
Regressions :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
coverity-amd64 7 coverity-upload fail REGR. vs. 133615
version t
flight 135539 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/135539/
Regressions :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
test-amd64-amd64-xl-qcow2 17 guest-localmigrate/x10 fail REGR. vs. 133580
build-armhf-pvops
flight 135603 qemu-upstream-4.11-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/135603/
Regressions :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
build-arm64-xsm broken in 134594
build-arm64
flight 135630 xen-4.6-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/135630/
Regressions :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
build-i386-prev 6 xen-build fail REGR. vs. 127792
build-amd64-pre