On 01.10.2019 21:44, Andrew Cooper wrote:
> In this example, hardware and the emulator can disagree by using a
> different access width.
>
> I've been experimenting with my Rome system, and an emulator hardcoded
> to use 2-byte accesses. After some investigation, the livelock only
> occurs for ac
1: MAINTAINERS: add tools/misc/xen-cpuid to "X86 ARCHITECTURE"
2: tools/xen-cpuid: avoid producing bogus output
They're not overly important to have for 4.13, but they're also rather
low risk, so I think they're worthwhile considering at this point in
time.
Jan
Today the vcpu runstate of a new scheduled vcpu is always set to
"running" even if at that time vcpu_runnable() is already returning
false due to a race (e.g. with pausing the vcpu).
With core scheduling this can no longer work as not all vcpus of a
schedule unit have to be "running" when being sc
cpupool_domain_cpumask() is used by scheduling to select cpus or to
iterate over cpus. In order to support scheduling units spanning
multiple cpus rename cpupool_domain_cpumask() to
cpupool_domain_master_cpumask() and let it return a cpumask with only
one bit set per scheduling resource.
Signed-of
When switching sched units synchronize all vcpus of the new unit to be
scheduled at the same time.
A variable sched_granularity is added which holds the number of vcpus
per schedule unit.
As tasklets require the idle unit to be scheduled, it is required to set the
tasklet_work_scheduled parameter of d
When core or socket scheduling are active enabling or disabling smt is
not possible as that would require a major host reconfiguration.
Add a bool sched_disable_smt_switching which will be set for core or
socket scheduling.
Signed-off-by: Juergen Gross
Acked-by: Jan Beulich
Acked-by: Dario Fagg
vcpu_wake() and vcpu_sleep() need to be made core scheduling aware:
they might need to switch a single vcpu of an already scheduled unit
between running and not running.
Especially when vcpu_sleep() for a vcpu is being called by a vcpu of
the same scheduling unit special care must be taken in orde
Instead of letting schedule_cpu_switch() handle moving cpus from and
to cpupools, split it into schedule_cpu_add() and schedule_cpu_rm().
This will allow us to drop allocating/freeing scheduler data for free
cpus as the idle scheduler doesn't need such data.
Signed-off-by: Juergen Gross
Reviewed
In several places there is support for multiple vcpus per sched unit
missing. Add that missing support (with the exception of initial
allocation) and missing helpers for that.
Signed-off-by: Juergen Gross
Reviewed-by: Dario Faggioli
Acked-by: Jan Beulich
---
RFC V2:
- fix vcpu_runstate_helper()
Add a percpu variable holding the index of the cpu in the current
sched_resource structure. This index is used to get the correct vcpu
of a sched_unit on a specific cpu.
For now this index will be zero for all cpus, but with core scheduling
it will be possible to have higher values, too.
Signed-o
Add a scheduling granularity enum ("cpu", "core", "socket") for
specification of the scheduling granularity. Initially it is set to
"cpu", this can be modified by the new boot parameter (x86 only)
"sched-gran".
According to the selected granularity sched_granularity is set after
all cpus are onlin
Having a pointer to struct cpupool in struct sched_resource instead
of per cpu is enough.
Signed-off-by: Juergen Gross
Reviewed-by: Jan Beulich
Reviewed-by: Dario Faggioli
---
V1: new patch
---
xen/common/cpupool.c | 4 +---
xen/common/sched_credit.c | 2 +-
xen/common/sched_rt.c |
With core scheduling active it is necessary to move multiple cpus at
the same time to or from a cpupool in order to avoid split scheduling
resources in between.
Signed-off-by: Juergen Gross
Reviewed-by: Dario Faggioli
---
V1: new patch
---
xen/common/cpupool.c | 100 ++
On- and offlining cpus with core scheduling is rather complicated as
the cpus are taken on- or offline one by one, but scheduling wants to
handle them per core.
As the future plan is to be able to select scheduling granularity per
cpupool prepare that by storing the granularity in struc
Add documentation for the new "sched-gran" hypervisor boot parameter.
Signed-off-by: Juergen Gross
---
V6:
- add a note regarding different AMD/Intel terminology (Jan Beulich)
---
docs/misc/xen-command-line.pandoc | 28
1 file changed, 28 insertions(+)
diff --git a/
Add support for core- and socket-scheduling in the Xen hypervisor.
Via boot parameter sched-gran=core (or sched-gran=socket)
it is possible to change the scheduling granularity from cpu (the
default) to either whole cores or even sockets.
All logical cpus (threads) of the core or socket are alway
When scheduling an unit with multiple vcpus there is no guarantee all
vcpus are available (e.g. above maxvcpus or vcpu offline). Fall back to the
idle vcpu of the current cpu in that case. This requires storing the
correct schedule_unit pointer in the idle vcpu as long as it is used as
fallback vcpu.
In
Prepare supporting multiple cpus per scheduling resource by allocating
the cpumask per resource dynamically.
Modify sched_res_mask to have only one bit per scheduling resource set.
Signed-off-by: Juergen Gross
Reviewed-by: Dario Faggioli
---
V1: new patch (carved out from other patch)
V4:
- use
In order to be able to move cpus to cpupools with core scheduling
active it is mandatory to merge multiple cpus into one scheduling
resource or to split a scheduling resource with multiple cpus in it
into multiple scheduling resources. This in turn requires modifying
the cpu <-> scheduling resource
When entering deep sleep states all domains are paused resulting in
all cpus only running idle vcpus. This enables us to stop scheduling
completely in order to avoid synchronization problems with core
scheduling when individual cpus are offlined.
Disabling the scheduler is done by replacing the so
With core scheduling active schedule_cpu_[add/rm]() has to cope with
different scheduling granularity: a cpu not in any cpupool is subject
to granularity 1 (cpu scheduling), while a cpu in a cpupool might be
in a scheduling resource with more than one cpu.
Handle that by having arrays of old/new p
Having a pointer to struct scheduler in struct sched_resource instead
of per cpu is enough.
Signed-off-by: Juergen Gross
Reviewed-by: Jan Beulich
Reviewed-by: Dario Faggioli
---
V1: new patch
V4:
- several renames sd -> sr (Jan Beulich)
- use ops instead of sr->scheduler (Jan Beulich)
---
xen/
With a scheduling granularity greater than 1 multiple vcpus share the
same struct sched_unit. Support that.
Setting the initial processor must be done carefully: we can't use
sched_set_res() as that relies on for_each_sched_unit_vcpu() which in
turn needs the vcpu already as a member of the domain
On 01.10.2019 21:51, Igor Druzhinin wrote:
> On 01/10/2019 20:48, Andrew Cooper wrote:
>> On 01/10/2019 20:15, Igor Druzhinin wrote:
>>> There is a small window where shootdown NMI might come to a CPU
>>> (e.g. in serial interrupt handler) where console lock is taken. In order
>>> not to leave foll
Along the lines of other x86-specific pieces under tools/.
Signed-off-by: Jan Beulich
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -472,6 +472,7 @@
F: tools/firmware/vgabios/
F: tools/fuzz/cpu-policy/
F: tools/fuzz/x86_instruction_emulator/
+F: tools/misc/xen-cpuid.c
F: tools/t
I was (mistakenly, as - looking at the code - it's clearly not intended
to work) passing the tool "Raw" and "Host" as command line arguments.
Avoid printing just "Raw " with not even a newline at the end in
such a case. Instead report what wasn't understood by the parsing logic.
Signed-off-b
Hi Adam,
On Tue, Oct 01, 2019 at 07:14:13PM -0500, Adam Ford wrote:
> On Sun, Sep 29, 2019 at 8:33 AM Adam Ford wrote:
> >
> > I am attaching two logs. I know the mailing lists will be unhappy, but
> > don't want to try and spam a bunch of logs through the mailing list.
> > The two logs show the
On 01.10.2019 17:16, Boris Ostrovsky wrote:
> Currently execution of panic() continues until Xen's panic notifier
> (xen_panic_event()) is called at which point we make a hypercall that
> never returns.
>
> This means that any notifier that is supposed to be called later as
> well as significant p
On 01.10.19 19:45, Boris Ostrovsky wrote:
> On 10/1/19 5:01 AM, David Hildenbrand wrote:
>> Let's simply use balloon_append() directly.
>>
>> Cc: Boris Ostrovsky
>> Cc: Juergen Gross
>> Cc: Stefano Stabellini
>> Signed-off-by: David Hildenbrand
>
> For the series (and your earlier patch)
>
>
On 02.10.19 09:47, David Hildenbrand wrote:
On 01.10.19 19:45, Boris Ostrovsky wrote:
On 10/1/19 5:01 AM, David Hildenbrand wrote:
Let's simply use balloon_append() directly.
Cc: Boris Ostrovsky
Cc: Juergen Gross
Cc: Stefano Stabellini
Signed-off-by: David Hildenbrand
For the series (and
On 01.10.2019 20:00, Andrew Cooper wrote:
> On 01/10/2019 10:07, Jan Beulich wrote:
>> The write-discard property of the type can't be represented in IOMMU
>> page table entries. Make sure the respective checks / tracking can't
>> race, by utilizing the domain lock. The other sides of the sharing/
On 01.10.2019 22:59, Andrew Cooper wrote:
> On 01/10/2019 09:38, Jan Beulich wrote:
>> On 30.09.2019 21:16, Andrew Cooper wrote:
>>> Clang in particular has a habit of out-of-lining these and creating multiple
>>> local copies of _mfn() and mfn_x(), etc. Override this behaviour.
>> Is special casi
On 01.10.2019 18:16, Andrew Cooper wrote:
> On 01/10/2019 16:58, Jan Beulich wrote:
>> On 01.10.2019 17:52, Andrew Cooper wrote:
>>> On 01/10/2019 15:48, Jan Beulich wrote:
On 01.10.2019 16:32, Andrew Cooper wrote:
> There are legitimate circumstance where array hardening is not wanted or
Hi Stefano,
On 10/2/19 2:25 AM, Stefano Stabellini wrote:
On Mon, 5 Aug 2019, Julien Grall wrote:
After upgrading Debian to Buster, I have begun to notice console
mangling when using zsh in Dom0. This is happening because output sent by
zsh to the console may contain NULs in the middle of the
flight 142108 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/142108/
Regressions :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
test-amd64-i386-qemuu-rhel6hvm-intel 7 xen-boot fail REGR. vs. 140282
test-amd64-i386-f
On 01.10.2019 17:37, Andrew Cooper wrote:
> On 01/10/2019 15:32, Jan Beulich wrote:
>> On 01.10.2019 14:51, Andrew Cooper wrote:
>>> On 01/10/2019 13:21, Jan Beulich wrote:
On 30.09.2019 20:24, Andrew Cooper wrote:
> The code generation for barrier_nospec_true() is not correct. We are
>>
flight 142117 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/142117/
Perfect :-)
All tests in this flight passed as required
version targeted for testing:
ovmf 5be5439a5a4e45382abdba2a4339db4bb8e4bbcb
baseline version:
ovmf ed9db1b91ceba7d3a2474
On 01.10.2019 17:11, Paul Durrant wrote:
> Now that xl.cfg has an option to explicitly enable IOMMU mappings for a
> domain, migration may be needlessly vetoed due to the check of
> is_iommu_enabled() in paging_log_dirty_enable().
> There is actually no need to prevent logdirty from being enabled u
On 01.10.2019 18:32, Andrew Cooper wrote:
> This is a minor UI change, but users which have elected to enable
> XEN_GUEST (which still defaults to no) will definitely need one of these
> options, and will typically want both.
>
> Signed-off-by: Andrew Cooper
Acked-by: Jan Beulich
On 02.10.19 09:27, Jan Beulich wrote:
1: MAINTAINERS: add tools/misc/xen-cpuid to "X86 ARCHITECTURE"
2: tools/xen-cpuid: avoid producing bogus output
They're not overly important to have for 4.13, but they're also rather
low risk, so I think they're worthwhile considering at this point in
time.
On 01.10.19 18:32, Andrew Cooper wrote:
This is a minor UI change, but users which have elected to enable
XEN_GUEST (which still defaults to no) will definitely need one of these
options, and will typically want both.
Signed-off-by: Andrew Cooper
---
CC: Jan Beulich
CC: Wei Liu
CC: Roger Pau
Hi Stefano,
On 10/2/19 2:05 AM, Stefano Stabellini wrote:
On Tue, 24 Sep 2019, Julien Grall wrote:
The documentation is using a mix of ARM (old) and Arm (new). To stay
consistent, use only the new name.
Thank you for the patch, it must have been "not fun" to write this
patch.
However, let me
On 02/10/2019 08:07, Jan Beulich wrote:
> On 01.10.2019 21:44, Andrew Cooper wrote:
>> In this example, hardware and the emulator can disagree by using a
>> different access width.
>>
>> I've been experimenting with my Rome system, and an emulator hardcoded
>> to use 2-byte accesses. After some in
On Wed, 2 Oct 2019 at 09:42, Jan Beulich wrote:
>
> On 01.10.2019 17:11, Paul Durrant wrote:
> > Now that xl.cfg has an option to explicitly enable IOMMU mappings for a
> > domain, migration may be needlessly vetoed due to the check of
> > is_iommu_enabled() in paging_log_dirty_enable().
> > There
Hi,
On 10/1/19 7:08 PM, Stefano Stabellini wrote:
On Thu, 26 Sep 2019, Oleksandr Tyshchenko wrote:
From: Oleksandr Tyshchenko
Renesas IPMMU-VMSA support (Arm) can be considered
as Technological Preview feature.
Signed-off-by: Oleksandr Tyshchenko
Acked-by: Stefano Stabellini
I have com
On 01.10.2019 16:32, Andrew Cooper wrote:
> The code generation for barrier_nospec_true() is not correct; the lfence
> instructions are generally too early in the instruction stream, resulting in a
> performance hit but no additional speculative safety.
>
> This is caused by inline assembly trying
On 02/10/2019 09:40, Jan Beulich wrote:
> On 01.10.2019 17:11, Paul Durrant wrote:
>> Now that xl.cfg has an option to explicitly enable IOMMU mappings for a
>> domain, migration may be needlessly vetoed due to the check of
>> is_iommu_enabled() in paging_log_dirty_enable().
>> There is actually no
Hi Stefano,
On 10/2/19 1:16 AM, Stefano Stabellini wrote:
On Tue, 1 Oct 2019, Julien Grall wrote:
On 01/10/2019 21:12, Stefano Stabellini wrote:
On Thu, 26 Sep 2019, Julien Grall wrote:
I am OK with the general approach but one thing to note is that the fiq
handler doesn't use the guest_vector
On 02.10.2019 10:51, Andrew Cooper wrote:
> On 02/10/2019 08:07, Jan Beulich wrote:
>> On 01.10.2019 21:44, Andrew Cooper wrote:
>>> In this example, hardware and the emulator can disagree by using a
>>> different access width.
>>>
>>> I've been experimenting with my Rome system, and an emulator ha
On 02.10.2019 10:59, Paul Durrant wrote:
> On Wed, 2 Oct 2019 at 09:42, Jan Beulich wrote:
>>
>> On 01.10.2019 17:11, Paul Durrant wrote:
>>> Now that xl.cfg has an option to explicitly enable IOMMU mappings for a
>>> domain, migration may be needlessly vetoed due to the check of
>>> is_iommu_enab
On Wed, 2 Oct 2019 at 10:26, Jan Beulich wrote:
>
> On 02.10.2019 10:59, Paul Durrant wrote:
> > On Wed, 2 Oct 2019 at 09:42, Jan Beulich wrote:
> >>
> >> On 01.10.2019 17:11, Paul Durrant wrote:
> >>> Now that xl.cfg has an option to explicitly enable IOMMU mappings for a
> >>> domain, migration
On Wed, 2 Oct 2019 at 10:12, Andrew Cooper wrote:
>
> On 02/10/2019 09:40, Jan Beulich wrote:
> > On 01.10.2019 17:11, Paul Durrant wrote:
> >> Now that xl.cfg has an option to explicitly enable IOMMU mappings for a
> >> domain, migration may be needlessly vetoed due to the check of
> >> is_iommu_
On 02.10.2019 11:10, Andrew Cooper wrote:
> On 02/10/2019 09:40, Jan Beulich wrote:
>> On 01.10.2019 17:11, Paul Durrant wrote:
>>> Now that xl.cfg has an option to explicitly enable IOMMU mappings for a
>>> domain, migration may be needlessly vetoed due to the check of
>>> is_iommu_enabled() in pa
> On 1. Oct 2019, at 17:37, Andrew Cooper wrote:
>
> On 01/10/2019 15:32, Jan Beulich wrote:
>> On 01.10.2019 14:51, Andrew Cooper wrote:
>>> On 01/10/2019 13:21, Jan Beulich wrote:
On 30.09.2019 20:24, Andrew Cooper wrote:
> The code generation for barrier_nospec_true() is not correct
On Wed, 2 Oct 2019 at 10:42, Jan Beulich wrote:
>
> On 02.10.2019 11:10, Andrew Cooper wrote:
> > On 02/10/2019 09:40, Jan Beulich wrote:
> >> On 01.10.2019 17:11, Paul Durrant wrote:
> >>> Now that xl.cfg has an option to explicitly enable IOMMU mappings for a
> >>> domain, migration may be needl
On Wed, Oct 02, 2019 at 09:27:07AM +0200, Jan Beulich wrote:
> 1: MAINTAINERS: add tools/misc/xen-cpuid to "X86 ARCHITECTURE"
> 2: tools/xen-cpuid: avoid producing bogus output
>
> They're not overly important to have for 4.13, but they're also rather
> low risk, so I think they're worthwhile cons
On 20.08.2019 22:38, Andrew Cooper wrote:
On 20/08/2019 21:36, Andreas Kinzler wrote:
Is it a known problem? Did someone test the new EPYCs?
This looks familiar, and is still somewhere on my TODO list.
Do you already know the reason or is that still to investigate?
Does booting with a single
Adding Juergen for a release-ack.
On Tue, Oct 01, 2019 at 05:44:07PM +0100, Anthony PERARD wrote:
> On Tue, Oct 01, 2019 at 05:22:33PM +0200, Roger Pau Monne wrote:
> > Currently only suspend power control requests wait for an ack from the
> > domain, while power off or reboot requests simply writ
Fix an unguarded d->arch.hvm access in assign_device().
Signed-off-by: Jan Beulich
---
Split from now withdrawn "x86/HVM: p2m_ram_ro is incompatible with
device pass-through".
--- a/xen/drivers/passthrough/pci.c
+++ b/xen/drivers/passthrough/pci.c
@@ -1488,7 +1488,8 @@ static int assign_device(s
On Wed, Oct 02, 2019 at 12:10:06PM +0200, Jan Beulich wrote:
> Fix an unguarded d->arch.hvm access in assign_device().
>
> Signed-off-by: Jan Beulich
Reviewed-by: Roger Pau Monné
I'm also adding Juergen as I think this is suitable for 4.13.
Thanks, Roger.
On 02/10/2019 08:27, Jan Beulich wrote:
> 1: MAINTAINERS: add tools/misc/xen-cpuid to "X86 ARCHITECTURE"
> 2: tools/xen-cpuid: avoid producing bogus output
>
> They're not overly important to have for 4.13, but they're also rather
> low risk, so I think they're worthwhile considering at this point
On 02.10.2019 12:14, Roger Pau Monné wrote:
> On Wed, Oct 02, 2019 at 12:10:06PM +0200, Jan Beulich wrote:
>> Fix an unguarded d->arch.hvm access in assign_device().
>>
>> Signed-off-by: Jan Beulich
>
> Reviewed-by: Roger Pau Monné
Thanks.
> I'm also adding Juergen as I think this is suitable
On 01.10.2019 21:15, Igor Druzhinin wrote:
> There is a small window where shootdown NMI might come to a CPU
> (e.g. in serial interrupt handler) where console lock is taken. In order
> not to leave following console prints waiting infinitely for shot down
> CPUs to free the lock - force unlock the
On 02.10.19 12:25, Jan Beulich wrote:
On 01.10.2019 21:15, Igor Druzhinin wrote:
There is a small window where shootdown NMI might come to a CPU
(e.g. in serial interrupt handler) where console lock is taken. In order
not to leave following console prints waiting infinitely for shot down
CPUs to
On 02.10.19 12:19, Jan Beulich wrote:
On 02.10.2019 12:14, Roger Pau Monné wrote:
On Wed, Oct 02, 2019 at 12:10:06PM +0200, Jan Beulich wrote:
Fix an unguarded d->arch.hvm access in assign_device().
Signed-off-by: Jan Beulich
Reviewed-by: Roger Pau Monné
Thanks.
I'm also adding Juerge
Hi Kateryna
Thanks for your interest in this project.
On Wed, Oct 02, 2019 at 12:37:30AM +0200, Kateryna Razumova wrote:
> Hello,
> I want to make the first contribution for xen. I want to participate with:
> Introduce CONFIG_PDX and use it in Xen hypervisor
>
> Where can I start?
Please read a
On 02/10/2019 11:14, Roger Pau Monné wrote:
> On Wed, Oct 02, 2019 at 12:10:06PM +0200, Jan Beulich wrote:
>> Fix an unguarded d->arch.hvm access in assign_device().
>>
>> Signed-off-by: Jan Beulich
> Reviewed-by: Roger Pau Monné
Acked-by: Andrew Cooper
The current implementation of host_maskall makes it sticky across
assign and deassign calls, which means that once a guest forces Xen to
set host_maskall the maskall bit is not going to be cleared until a
call to PHYSDEVOP_prepare_msix is performed. Such call however
shouldn't be part of the normal
On Wed, Oct 2, 2019 at 2:36 AM Mike Rapoport wrote:
>
> Hi Adam,
>
> On Tue, Oct 01, 2019 at 07:14:13PM -0500, Adam Ford wrote:
> > On Sun, Sep 29, 2019 at 8:33 AM Adam Ford wrote:
> > >
> > > I am attaching two logs. I know the mailing lists will be unhappy, but
> > > don't want to try and spam
On the 2019 Xen developer summit there was agreement that the Xen
hypervisor should gain support for a hierarchical name-value store
similar to the Linux kernel's sysfs.
In the beginning there should only be basic support: entries can be
added from the hypervisor itself only, there is a simple hyp
On the 2019 Xen developer summit there was agreement that the Xen
hypervisor should gain support for a hierarchical name-value store
similar to the Linux kernel's sysfs.
This is a first implementation of that idea adding the basic
functionality to hypervisor and tools side. The interface to any
us
Add the new library libxenhypfs for access to the hypervisor filesystem.
Signed-off-by: Juergen Gross
Acked-by: Ian Jackson
---
V1:
- rename to libxenhypfs
- add xenhypfs_write()
---
tools/Rules.mk | 6 +
tools/libs/Makefile | 1 +
tools/libs/hypfs/Makef
Add support to read values of hypervisor runtime parameters via the
hypervisor file system for all unsigned integer type runtime parameters.
Signed-off-by: Juergen Gross
---
docs/misc/hypfs-paths.pandoc | 9 +
xen/common/kernel.c | 39 +++
2
Add the infrastructure for the hypervisor filesystem.
This includes the hypercall interface and the base functions for
entry creation, deletion and modification.
Initially we support string and unsigned integer entry types. The saved
entry size is an upper bound, so for unsigned integer entries w
Add the xenfs tool for accessing the hypervisor filesystem.
Signed-off-by: Juergen Gross
---
V1:
- rename to xenhypfs
- don't use "--" for subcommands
- add write support
V2:
- escape non-printable characters by default with the cat subcommand
(Ian Jackson)
- add -b option to cat subcommand (Ian
Add the /buildinfo/config entry to the hypervisor filesystem. This
entry contains the .config file used to build the hypervisor.
Signed-off-by: Juergen Gross
---
.gitignore | 2 ++
docs/misc/hypfs-paths.pandoc | 9 +
xen/common/Makefile | 9 +
xen/co
On 02.10.19 12:08, Roger Pau Monné wrote:
Adding Juergen for a release-ack.
On Tue, Oct 01, 2019 at 05:44:07PM +0100, Anthony PERARD wrote:
On Tue, Oct 01, 2019 at 05:22:33PM +0200, Roger Pau Monne wrote:
Currently only suspend power control requests wait for an ack from the
domain, while pow
On Tue, 1 Oct 2019, Stefano Stabellini wrote:
> On Tue, 1 Oct 2019, Julien Grall wrote:
> > Hi,
> >
> > On 01/10/2019 21:12, Stefano Stabellini wrote:
> > > On Thu, 26 Sep 2019, Julien Grall wrote:
> > >> At the moment, enter_hypervisor_head() and leave_hypervisor_tail() are
> > >> used to deal wi
Hi,
On 10/2/19 1:41 PM, Stefano Stabellini wrote:
On Tue, 1 Oct 2019, Stefano Stabellini wrote:
On Tue, 1 Oct 2019, Julien Grall wrote:
Hi,
On 01/10/2019 21:12, Stefano Stabellini wrote:
On Thu, 26 Sep 2019, Julien Grall wrote:
At the moment, enter_hypervisor_head() and leave_hypervisor_tai
> On 1 Oct 2019, at 16:22, Roger Pau Monne wrote:
>
> tools/ocaml/libs/xl/xenlight.ml.in | 4 +-
> tools/ocaml/libs/xl/xenlight.mli.in | 4 +-
> tools/ocaml/libs/xl/xenlight_stubs.c | 18 --
Acked-by: Christian Lindig
On 10/2/19 3:40 AM, Jan Beulich wrote:
> On 01.10.2019 17:16, Boris Ostrovsky wrote:
>> Currently execution of panic() continues until Xen's panic notifier
>> (xen_panic_event()) is called at which point we make a hypercall that
>> never returns.
>>
>> This means that any notifier that is supposed
On 02.10.2019 12:49, Roger Pau Monne wrote:
> The current implementation of host_maskall makes it sticky across
> assign and deassign calls, which means that once a guest forces Xen to
> set host_maskall the maskall bit is not going to be cleared until a
> call to PHYSDEVOP_prepare_msix is performe
flight 142110 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/142110/
Regressions :-(
Tests which did not succeed and are blocking,
including tests which could not be run:
test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-boot fail REGR. vs.
133580
test-amd64-i38
On 02.10.2019 15:24, Boris Ostrovsky wrote:
> On 10/2/19 3:40 AM, Jan Beulich wrote:
>> On 01.10.2019 17:16, Boris Ostrovsky wrote:
>>> Currently execution of panic() continues until Xen's panic notifier
>>> (xen_panic_event()) is called at which point we make a hypercall that
>>> never returns.
>>
The "TO BE DOCUMENTED" section of the xl man page still references
tmem. So does the xl.conf man page. Remove the references.
Signed-off-by: Juergen Gross
---
docs/man/xl.1.pod.in | 12
docs/man/xl.conf.5.pod | 2 +-
2 files changed, 1 insertion(+), 13 deletions(-)
diff --git a/
On Wed, Oct 02, 2019 at 03:41:56PM +0200, Juergen Gross wrote:
> The "TO BE DOCUMENTED" section of the xl man page still references
> tmem. So does the xl.conf man page. Remove the references.
>
> Signed-off-by: Juergen Gross
Nice catch. Thanks.
Acked-by: Wei Liu
On Tue, Oct 01, 2019 at 05:27:53PM +0200, Roger Pau Monné wrote:
> On Tue, Oct 01, 2019 at 05:22:33PM +0200, Roger Pau Monne wrote:
> > +int libxl_domain_reboot(libxl_ctx *ctx, uint32_t domid,
> > +const libxl_asyncop_how *ao_how)
> > {
> > -GC_INIT(ctx);
> > +AO_CR
On 02.10.19 09:27, Juergen Gross wrote:
When switching sched units synchronize all vcpus of the new unit to be
scheduled at the same time.
A variable sched_granularity is added which holds the number of vcpus
per schedule unit.
As tasklets require the idle unit to be scheduled, it is required to set
On 02.10.19 15:44, Wei Liu wrote:
On Wed, Oct 02, 2019 at 03:41:56PM +0200, Juergen Gross wrote:
The "TO BE DOCUMENTED" section of the xl man page still references
tmem. So does the xl.conf man page. Remove the references.
Signed-off-by: Juergen Gross
Nice catch. Thanks.
Acked-by: Wei Liu
On 10/2/19 9:42 AM, Jan Beulich wrote:
>
> I can only guess that the thinking probably was that e.g. external
> dumping (by the tool stack) would be more reliable (including but
> not limited to this meaning less change of state from when the
> original crash reason was detected) than having the do
On 30.09.2019 15:32, Roger Pau Monne wrote:
> --- a/xen/arch/x86/hvm/ioreq.c
> +++ b/xen/arch/x86/hvm/ioreq.c
> @@ -1518,11 +1518,15 @@ static int hvm_access_cf8(
> {
> struct domain *d = current->domain;
>
> -if ( dir == IOREQ_WRITE && bytes == 4 )
> +if ( bytes != 4 )
> +r
On 30.09.2019 15:32, Roger Pau Monne wrote:
> Internal ioreq servers are plain function handlers implemented inside
> of the hypervisor. Note that most fields used by current (external)
> ioreq servers are not needed for internal ones, and hence have been
> placed inside of a struct and packed in a
Hi Juergen,
On 10/2/19 2:56 PM, Jürgen Groß wrote:
On 02.10.19 09:27, Juergen Gross wrote:
When switching sched units synchronize all vcpus of the new unit to be
scheduled at the same time.
A variable sched_granularity is added which holds the number of vcpus
per schedule unit.
As tasklets re
vcpu_wake() and vcpu_sleep() need to be made core scheduling aware:
they might need to switch a single vcpu of an already scheduled unit
between running and not running.
Especially when vcpu_sleep() for a vcpu is being called by a vcpu of
the same scheduling unit special care must be taken in orde
On 30.09.2019 15:32, Roger Pau Monne wrote:
> @@ -855,6 +884,8 @@ int hvm_destroy_ioreq_server(struct domain *d, ioservid_t
> id)
> struct hvm_ioreq_server *s;
> int rc;
>
> +ASSERT(!hvm_ioreq_is_internal(id));
With this, ...
> @@ -871,13 +903,13 @@ int hvm_destroy_ioreq_server(s
Hi,
On 10/2/19 3:43 PM, Juergen Gross wrote:
vcpu_wake() and vcpu_sleep() need to be made core scheduling aware:
they might need to switch a single vcpu of an already scheduled unit
between running and not running.
Especially when vcpu_sleep() for a vcpu is being called by a vcpu of
the same sc
On 02.10.2019 16:14, Boris Ostrovsky wrote:
> On 10/2/19 9:42 AM, Jan Beulich wrote:
>>
>> I can only guess that the thinking probably was that e.g. external
>> dumping (by the tool stack) would be more reliable (including but
>> not limited to this meaning less change of state from when the
>> ori
On 30.09.2019 15:32, Roger Pau Monne wrote:
> --- a/xen/arch/x86/hvm/ioreq.c
> +++ b/xen/arch/x86/hvm/ioreq.c
> @@ -1482,7 +1482,16 @@ int hvm_send_ioreq(ioservid_t id, ioreq_t *proto_p,
> bool buffered)
> ASSERT(s);
>
> if ( buffered )
> -return hvm_send_buffered_ioreq(s, prot
On 30.09.2019 15:32, Roger Pau Monne wrote:
> --- a/xen/arch/x86/hvm/ioreq.c
> +++ b/xen/arch/x86/hvm/ioreq.c
> @@ -485,6 +485,38 @@ static int hvm_add_ioreq_gfn(struct hvm_ioreq_server *s,
> bool buf)
> return rc;
> }
>
> +int hvm_set_ioreq_handler(struct domain *d, ioservid_t id,
> +
Hi Juergen. This series
https://lists.xenproject.org/archives/html/xen-devel/2019-09/msg03072.html
needs your release review.
Here's the first patch. I can bounce you a digest if you like.
Thanks,
Ian.
Marek Marczykowski-Górecki writes ("[PATCH v8 1/4] libxl: fix cold plugged PCI
device wit