On Fri, 2019-08-30 at 17:10 +0200, Jan Beulich wrote:
> On 21.08.2019 18:35, David Woodhouse wrote:
> > --- a/xen/arch/x86/boot/head.S
> > +++ b/xen/arch/x86/boot/head.S
> > @@ -699,14 +699,30 @@ trampoline_setup:
> > cmp $sym_offs(__trampoline_rel_stop),%
On Fri, 2019-08-30 at 17:43 +0200, Jan Beulich wrote:
> On 21.08.2019 18:35, David Woodhouse wrote:
> > From: David Woodhouse
> >
> > Ditch the bootsym() access from C code for the variables populated by
> > 16-bit boot code. As well as being cleaner this also paves
On Mon, 2019-09-02 at 09:44 +0200, Jan Beulich wrote:
> Right, just one pair should survive. And seeing how things work before
> this series I think it indeed should be linker script symbols only.
> And then the ALIGN() ahead of the "start" ones should stay, but there's
> no need for one on the "en
On Mon, 2019-09-02 at 15:47 +0200, Jan Beulich wrote:
> On 02.09.2019 14:51, David Woodhouse wrote:
> > On Mon, 2019-09-02 at 09:44 +0200, Jan Beulich wrote:
> > > Right, just one pair should survive. And seeing how things work before
> > > this series I think it ind
On Wed, 2019-05-01 at 17:09 +0100, Andrew Cooper wrote:
> I'm afraid testing says no. S3 works fine without this change, and
> resets with it.
Thanks for testing. That's obvious in retrospect — although the wakeup
code is relocated alongside the trampoline code, it runs in real mode
with its own
From: David Woodhouse
The wakeup code is now relocated alongside the trampoline code, so %ds
is just fine here.
Signed-off-by: David Woodhouse
---
xen/arch/x86/boot/wakeup.S | 11 ++-
1 file changed, 2 insertions(+), 9 deletions(-)
diff --git a/xen/arch/x86/boot/wakeup.S b/xen/arch
From: David Woodhouse
In preparation for splitting the boot and permanent trampolines from
each other. Some of these will change back, but most are boot-only, so do the
plain search/replace that way first, then a subsequent patch will extract
the permanent trampoline code.
Signed-off-by: David
From: David Woodhouse
Where booted from EFI or with no-real-mode, there is no need to stomp
on low memory with the 16-bit boot code. Instead, just go straight to
trampoline_protmode_entry() at its physical location within the Xen
image.
For now, the boot code (including the EFI loader path) still
From: David Woodhouse
We appear to have implemented a memcpy() in the low-memory trampoline
which we then call into from __start_xen(), for no adequately defined
reason.
Kill it with fire.
Signed-off-by: David Woodhouse
---
xen/arch/x86/boot/mem.S| 27 +--
xen
From: David Woodhouse
If the no-real-mode flag is set, don't go there at all. This is a prelude
to not even putting it there in the first place.
Signed-off-by: David Woodhouse
---
xen/arch/x86/boot/head.S | 10 ++
xen/arch/x86/boot/trampoline.S | 4
2 files change
From: David Woodhouse
As a first step toward using the low-memory trampoline only when necessary
for a legacy boot without no-real-mode, clean up the relocations into
three separate groups.
• bootsym() is now used only at boot time when no-real-mode isn't set.
• bootdatasym() i
From: David Woodhouse
Ditch the bootsym() access from C code for the variables populated by
16-bit boot code. As well as being cleaner this also paves the way for
not having the 16-bit boot code in low memory for no-real-mode or EFI
loader boots at all.
Signed-off-by: David Woodhouse
---
xen
Which leaves me with no valid reason for reloc() to be running this
early, so I may well kill it with fire too. I just need to find a
safe location for the 16-bit boot code.
v2: Fix wake code. Thanks Andy for testing.
David Woodhouse (7):
x86/wakeup: Stop using %fs for lidt/lgdt
x86/boot: Re
Argh, that's the first version again. Sorry. The fixed version is in
http://git.infradead.org/users/dwmw2/xen.git/shortlog/refs/heads/bootcleanup
but I won't post the whole series again right now.
> On 02/05/2019 09:14, Jan Beulich wrote:
> On 01.05.19 at 13:17, wrote:
>>> We appear to have implemented a memcpy() in the low-memory trampoline
>>> which we then call into from __start_xen(), for no adequately defined
>>> reason.
>> May I suggest that in cases like this you look at the com
On Thu, 2019-05-02 at 10:23 +0100, Andrew Cooper wrote:
> ...this reasoning is bogus. We're either accessing the data itself, or
> the memcpy function, but there is no possible way to programmatically
> avoid "wrong" access into the trampoline, because we're still accessing it.
Just to be clear, n
On Thu, 2019-05-02 at 15:08 +0100, Andrew Cooper wrote:
> Sadly it can't.
>
> We have a number of alignment issues (well - confusions at least), most
> obviously that trampoline_{start,end} in the linked Xen image has no
> particular alignment, but the relocated trampoline_start has a 4k
> requirem
On Wed, 2019-06-12 at 06:00 -0600, Jan Beulich wrote:
> > > > Reported-by: David Woodhouse
> > >
> > > Does this mean there was an actual problem resulting from this code
> > > being there, or simply the observation that this code is all dead?
> >
On Wed, 2019-06-12 at 13:20 +0100, Andrew Cooper wrote:
> You definitely complained about the BDA on IRC, which is why I started
> looking, but I'm happy to leave you out of the patch if you'd prefer.
I did indeed complain about the BDA, but mostly when we were reading
the amount of memory availab
From: David Woodhouse
When recursing, a node sometimes disappears. Deal with it and move on
instead of aborting and failing to print the rest of what was
requested.
Either EACCES or ENOENT may occur as the result of race conditions with
updates; any other error should abort as before.
Signed
On Wed, 2019-08-07 at 16:37 +0100, Ian Jackson wrote:
> I think it is not safe to continue if we get a permissions error. I
> think that would mean "we were not able to determine whether this node
> exists", not "this node does not exist".
Either way, I interpreted it more as "haha screw you, now
On Fri, 2019-08-09 at 12:27 +0100, Ian Jackson wrote:
> Can you explain to me what your needs are for all this ? I want
> xenstore-ls to be able to do what you want, but I don't want to impose
> a lot of work on you. I also want the correctness property I mention
> above.
Ultimately, I don't rea
On Fri, 2019-08-09 at 12:48 +0100, David Woodhouse wrote:
>
> I'll send a v3 of the original patch which *only* ignores ENOENT. That
> one really shouldn't need to cause a non-zero exit status because it's
> equivalent to the case where the child node didn't exi
From: David Woodhouse
If the no-real-mode flag is set, don't go there at all. This is a prelude
to not even putting it there in the first place.
Signed-off-by: David Woodhouse
---
xen/arch/x86/boot/head.S | 10 ++
xen/arch/x86/boot/trampoline.S | 4
2 files change
nbits.xen.org/gitweb/?p=people/dwmw2/xen.git;a=shortlog;h=refs/heads/bootcleanup
v2: Patch #1 of the first series is already merged.
Fold in minor fixup from Andrew to what is now patch #1.
Verbally agreed to overcome Jan's objections to patch #1, in Chicago.
David Woodhouse (6):
x8
From: David Woodhouse
As a first step toward using the low-memory trampoline only when necessary
for a legacy boot without no-real-mode, clean up the relocations into
three separate groups.
• bootsym() is now used only at boot time when no-real-mode isn't set.
• bootdatasym() i
From: David Woodhouse
In preparation for splitting the boot and permanent trampolines from
each other. Some of these will change back, but most are boot-only, so do the
plain search/replace that way first, then a subsequent patch will extract
the permanent trampoline code.
Signed-off-by: David
From: David Woodhouse
We appear to have implemented a memcpy() in the low-memory trampoline
which we then call into from __start_xen(), for no adequately defined
reason.
Kill it with fire.
Signed-off-by: David Woodhouse
Acked-by: Andrew Cooper
---
v2: Minor fixups from Andrew.
xen/arch/x86
From: David Woodhouse
Where booted from EFI or with no-real-mode, there is no need to stomp
on low memory with the 16-bit boot code. Instead, just go straight to
trampoline_protmode_entry() at its physical location within the Xen
image.
For now, the boot code (including the EFI loader path) still
From: David Woodhouse
Ditch the bootsym() access from C code for the variables populated by
16-bit boot code. As well as being cleaner this also paves the way for
not having the 16-bit boot code in low memory for no-real-mode or EFI
loader boots at all.
Signed-off-by: David Woodhouse
---
xen
On Mon, 2019-08-12 at 11:55 +0200, Jan Beulich wrote:
> On 09.08.2019 17:02, David Woodhouse wrote:
> > From: David Woodhouse
> >
> > In preparation for splitting the boot and permanent trampolines from
> > each other. Some of these will change back, but most are boot
Apologies for delayed response; I was away last week and was frowned at
every time I so much as looked towards the laptop.
On Mon, 2019-08-12 at 11:41 +0200, Jan Beulich wrote:
> On 09.08.2019 17:01, David Woodhouse wrote:
> > --- a/xen/arch/x86/boot/trampoline.S
> > +++ b/xe
On Mon, 2019-08-12 at 12:24 +0200, Jan Beulich wrote:
> On 09.08.2019 17:02, David Woodhouse wrote:
> > --- a/xen/arch/x86/boot/head.S
> > +++ b/xen/arch/x86/boot/head.S
> > @@ -733,6 +733,17 @@ trampoline_setup:
> > cmp $sym_offs(__bootsym_seg_stop),%
On Mon, 2019-08-12 at 12:55 +0200, Jan Beulich wrote:
> On 09.08.2019 17:02, David Woodhouse wrote:
> > From: David Woodhouse
> >
> > Where booted from EFI or with no-real-mode, there is no need to stomp
> > on low memory with the 16-bit boot code. I
On Mon, 2019-08-19 at 14:42 +0100, Andrew Cooper wrote:
> ... to separate code from data. In particular,
> trampoline_realmode_entry's
> write to trampoline_cpu_started clobbers the I-cache line containing
> trampoline_protmode_entry, which won't be great for AP startup.
>
> Reformat the comments
On Mon, 2019-08-12 at 11:10 +0200, Jan Beulich wrote:
> On 09.08.2019 17:01, David Woodhouse wrote:
> > --- a/xen/arch/x86/boot/head.S
> > +++ b/xen/arch/x86/boot/head.S
> > @@ -735,7 +735,17 @@ trampoline_setup:
> > /* Switch to low-memory stack which liv
From: David Woodhouse
As a first step toward using the low-memory trampoline only when necessary
for a legacy boot without no-real-mode, clean up the relocations into
three separate groups.
• bootsym() is now used only at boot time when no-real-mode isn't set.
• bootdatasym() i
e for
not making the GDT sizing automatic as discussed.
David Woodhouse (5):
x86/boot: Only jump into low trampoline code for real-mode boot
x86/boot: Split bootsym() into four types of relocations
x86/boot: Rename trampoline_{start,end} to boot_trampoline_{start,end}
x86/boo
From: David Woodhouse
Where booted from EFI or with no-real-mode, there is no need to stomp
on low memory with the 16-bit boot code. Instead, just go straight to
trampoline_protmode_entry() at its physical location within the Xen
image, having applied suitable relocations.
This means that the GDT
From: David Woodhouse
Ditch the bootsym() access from C code for the variables populated by
16-bit boot code. As well as being cleaner this also paves the way for
not having the 16-bit boot code in low memory for no-real-mode or EFI
loader boots at all.
These variables are put into a separate
From: David Woodhouse
If the no-real-mode flag is set, don't go there at all. This is a prelude
to not even putting it there in the first place.
Signed-off-by: David Woodhouse
---
xen/arch/x86/boot/head.S | 10 ++
xen/arch/x86/boot/trampoline.S | 4
2 files change
From: David Woodhouse
In preparation for splitting the boot and permanent trampolines from
each other. Some of these will change back, but most are boot-only, so do the
plain search/replace that way first, then a subsequent patch will extract
the permanent trampoline code.
Signed-off-by: David
> On 19.08.2019 17:25, David Woodhouse wrote:
>> On Mon, 2019-08-12 at 12:55 +0200, Jan Beulich wrote:
>>> On 09.08.2019 17:02, David Woodhouse wrote:
>>>> @@ -97,7 +100,7 @@ GLOBAL(trampoline_realmode_entry)
>>>> cld
>>>>
> On 19.08.2019 17:25, David Woodhouse wrote:
>> On Mon, 2019-08-12 at 12:24 +0200, Jan Beulich wrote:
>>> On 09.08.2019 17:02, David Woodhouse wrote:
>>>> @@ -227,7 +231,7 @@ start64:
>>>> .word 0
>>>> idt_48: .word
> On 19.08.2019 17:24, David Woodhouse wrote:
>> On Mon, 2019-08-12 at 11:55 +0200, Jan Beulich wrote:
>>> On 09.08.2019 17:02, David Woodhouse wrote:
>>>> From: David Woodhouse
>>>>
>>>> In preparation for splitting the boot and permanen
On Fri, 2019-08-30 at 16:25 +0200, Jan Beulich wrote:
> On 21.08.2019 18:35, David Woodhouse wrote:
> > --- a/xen/arch/x86/boot/head.S
> > +++ b/xen/arch/x86/boot/head.S
> > @@ -727,7 +727,17 @@ trampoline_setup:
> > /* Switch to low-memory stack which lives a
From: David Woodhouse
When recursing, a node sometimes disappears. Deal with it and move on
instead of aborting and failing to print the rest of what was
requested.
Signed-off-by: David Woodhouse
---
And thus did an extremely sporadic "not going to delete that device
because it al
On Mon, 2019-03-04 at 14:18 +0000, Wei Liu wrote:
> CC Ian as well.
>
> It would be better if you run ./scripts/get_maintainers.pl on your
> patches in the future to CC the correct people.
Will do; thanks.
> On Fri, Mar 01, 2019 at 12:16:56PM +0000, David Woodhouse wrote:
On Mon, 2019-03-04 at 15:51 +0100, Juergen Gross wrote:
> On 04/03/2019 15:31, David Woodhouse wrote:
> > On Mon, 2019-03-04 at 14:18 +0000, Wei Liu wrote:
> > > CC Ian as well.
> > >
> > > It would be better if you run ./scripts/get_maintainers.pl on
> >
On Mon, 2019-03-04 at 15:46 +0000, Wei Liu wrote:
> To me it is just a bit weird to guard with cur_depth -- if you really
> want to continue at all cost, why don't you make it really continue at
> all cost?
There isn't another early exit from the loop. It really does continue
at all costs.
The on
On Mon, 2016-08-08 at 18:54 +0000, Trammell Hudson wrote:
> Keir Fraser replied to Ward's follow up question:
>
> > > Is there a significant difference between booting 3.1.4 and
> > > 3.2.1 with kexec in terms of BIOS requirements?
> >
> > If you specify no-real-mode in both cases then there
> >
I've been looking at kexec into Xen, and from Xen.
Kexec-tools doesn't support Multiboot v2, and doesn't treat the Xen
image as relocatable. So it loads it at address zero, which causes lots
of amusement:
Firstly, head.S trusts the low memory limit found in the BDA, which has
been scribbled on. H
On Sat, 2019-04-27 at 08:15 +0200, David Woodhouse wrote:
> I've been looking at kexec into Xen, and from Xen.
>
> Kexec-tools doesn't support Multiboot v2, and doesn't treat the Xen
> image as relocatable. So it loads it at address zero, which causes lots
> of a
> On 27/04/2019 07:15, David Woodhouse wrote:
>> I've been looking at kexec into Xen, and from Xen.
>>
>> Kexec-tools doesn't support Multiboot v2, and doesn't treat the Xen
>> image as relocatable. So it loads it at address zero, which causes lots
>&g
From: David Woodhouse
We appear to have implemented a memcpy() in the low-memory trampoline
which we then call into from __start_xen(), for no adequately defined
reason.
Kill it with fire.
Signed-off-by: David Woodhouse
---
xen/arch/x86/boot/mem.S| 27 +--
xen
From: David Woodhouse
As a first step toward using the low-memory trampoline only when necessary
for a legacy boot without no-real-mode, clean up the relocations into
three separate groups.
• bootsym() is now used only at boot time when no-real-mode isn't set.
• bootdatasym() i
From: David Woodhouse
If the no-real-mode flag is set, don't go there at all. This is a prelude
to not even putting it there in the first place.
Signed-off-by: David Woodhouse
---
xen/arch/x86/boot/head.S | 10 ++
xen/arch/x86/boot/trampoline.S | 4
2 files change
From: David Woodhouse
The wakeup code is now relocated alongside the trampoline code, so %ds
is just fine here.
Signed-off-by: David Woodhouse
---
xen/arch/x86/boot/wakeup.S | 11 ++-
1 file changed, 2 insertions(+), 9 deletions(-)
diff --git a/xen/arch/x86/boot/wakeup.S b/xen/arch
From: David Woodhouse
Ditch the bootsym() access from C code for the variables populated by
16-bit boot code. As well as being cleaner this also paves the way for
not having the 16-bit boot code in low memory for no-real-mode or EFI
loader boots at all.
Signed-off-by: David Woodhouse
---
xen
From: David Woodhouse
Where booted from EFI or with no-real-mode, there is no need to stomp
on low memory with the 16-bit boot code. Instead, just go straight to
trampoline_protmode_entry() at its physical location within the Xen
image.
For now, the boot code (including the EFI loader path) still
From: David Woodhouse
In preparation for splitting the boot and permanent trampolines from
each other. Some of these will change back, but most are boot-only, so do the
plain search/replace that way first, then a subsequent patch will extract
the permanent trampoline code.
Signed-off-by: David
put the 16-bit boot
trampoline though, which doesn't already contain anything that the
bootloader created for us.
In fact, isn't there already a chance that head.S will choose a location
for the trampoline which is already part of a module or contains one of
the Multiboot breadcrumbs?
David Woo
On Wed, 2018-08-22 at 09:19 +0200, gre...@linuxfoundation.org wrote:
> This is a note to let you know that I've just added the patch titled
>
> x86/entry/64: Remove %ebx handling from error_entry/exit
>
> to the 4.9-stable tree which can be found at:
>
> http://www.kernel.org/git/?p=linu
On Wed, 2018-11-28 at 08:44 -0800, Andy Lutomirski wrote:
> > Can we assume it's always from kernel? The Xen code definitely seems to
> > handle invoking this from both kernel and userspace contexts.
>
> I learned that my comment here was wrong shortly after the patch landed :(
Turns out the only
On Thu, 2018-12-06 at 10:49 -0800, Andy Lutomirski wrote:
> > On Dec 6, 2018, at 9:36 AM, Andrew Cooper wrote:
> > Basically - what is happening is that xen_load_tls() is invalidating the
> > %gs selector while %gs is still non-NUL.
> >
> > If this happens to intersect with a vcpu reschedule, %gs
On Thu, 2018-12-06 at 20:27 +0000, David Woodhouse wrote:
> On Thu, 2018-12-06 at 10:49 -0800, Andy Lutomirski wrote:
> > > On Dec 6, 2018, at 9:36 AM, Andrew Cooper <
> > > andrew.coop...@citrix.com> wrote:
> > > Basically - what is happening is that xen_
On Fri, 2018-12-07 at 12:18 +0000, David Woodhouse wrote:
>
> > #else
> > + struct multicall_space mc = __xen_mc_entry(0);
> > + MULTI_set_segment_base(mc.mc, SEGBASE_GS_USER_SEL, 0);
> > +
> > loadsegment(fs, 0);
> >
ude a request to zero the user %gs in the multicall too.
Signed-off-by: David Woodhouse
---
arch/x86/include/asm/xen/hypercall.h | 11
arch/x86/xen/enlighten_pv.c | 42 +---
2 files changed, 43 insertions(+), 10 deletions(-)
diff --git a/arch/x86/inclu
ude a request to zero the user %gs in the multicall too.
Signed-off-by: David Woodhouse
---
v2: Don't accidentally remove the call to xen_mc_batch().
arch/x86/include/asm/xen/hypercall.h | 11
arch/x86/xen/enlighten_pv.c | 40 ++--
2 files changed,
On Tue, 2016-01-26 at 09:34 +0800, Jianzhong,Chang wrote:
> There are some problems when msi guest_masked is set to 1 by default.
> When guest os is windows 2008 r2 server,
> the device(eg X540-AT2 vf) is not initialized correctly.
> Host will always receive message like this :"VF Reset msg receive
On Mon, 2018-05-21 at 14:10 +0200, Roger Pau Monné wrote:
>
> Hm, I think I might have fixed this issue, see:
>
> https://git.qemu.org/?p=qemu.git;a=commit;h=a8036336609d2e184fc3543a4c439c0ba7d7f3a2
Hm, thanks. We'll look at porting that change to qemu-traditional which
still doesn't do it.
On Wed, 2018-08-08 at 10:35 -0700, Sarah Newman wrote:
> commit b3681dd548d06deb2e1573890829dff4b15abf46 upstream.
>
> This version applies to v4.9.
I think you can kill the 'xorl %ebx,%ebx' from error_entry too but yes,
this does want to go to 4.9 and earlier because the 'Fixes:' tag is a
bit of
On Thu, 2018-01-18 at 10:10 -0500, Paul Durrant wrote:
> Lastly the previous code did not properly emulate an EOI if a missed EOI
> was discovered in vlapic_has_pending_irq(); it merely cleared the bit in
> the ISR. The new code instead calls vlapic_EOI_set().
Hm, this *halves* my observed perform
On Mon, 2018-09-03 at 10:12 +0000, Paul Durrant wrote:
>
> I believe APIC assist is intended for fully synthetic interrupts.
Hm, if by 'fully synthetic interrupts' you mean
vlapic_virtual_intr_delivery_enabled(), then no I think APIC assist
doesn't get used in that case at all.
> Is it definitel
On Wed, 2018-09-05 at 09:36 +0000, Paul Durrant wrote:
>
> I see. Given that Windows has used APIC assist to circumvent its EOI
> then I wonder whether we can get away with essentially doing the
> same. I.e. for a completed APIC assist found in
> vlapic_has_pending_irq() we simply clear the APIC a
On Wed, 2018-09-05 at 10:40 +0000, Paul Durrant wrote:
>
> Actually the neatest approach would be to get information into the
> vlapic code as to whether APIC assist is suitable for the given
> vector so that the code there can selectively enable it, and then Xen
> would know it was safe to avoid
On Mon, 2018-10-01 at 09:16 +0200, Juergen Gross wrote:
> The Xen specific queue spinlock wait function has two issues which
> could result in a hanging system.
>
> They have a similar root cause of clearing a pending wakeup of a
> waiting vcpu and later going to sleep waiting for the just cleared
On Mon, 2018-10-01 at 09:16 +0200, Juergen Gross wrote:
> - /* If irq pending already clear it and return. */
> + /* Guard against reentry. */
> + local_irq_save(flags);
> +
> + /* If irq pending already clear it. */
> if (xen_test_irq_pending(irq)) {
>
On Wed, 2018-10-10 at 14:30 +0200, Thomas Gleixner wrote:
> On Wed, 10 Oct 2018, David Woodhouse wrote:
>
> > On Mon, 2018-10-01 at 09:16 +0200, Juergen Gross wrote:
> > > - /* If irq pending already clear it and return. */
> > > +
> The Xen HV is doing it right. It is blocking the vcpu in do_poll() and
> any interrupt will unblock it.
Great. Thanks for the confirmation.
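The root cause Juergen describes, clearing a pending wakeup and then sleeping anyway, reduces to a tiny decision function. This is a toy model with made-up names (the pending flag stands in for xen_test_irq_pending(); "block" stands in for the hypervisor poll), not the kernel code:

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the fixed wait logic: if an event is already pending,
 * consume it and do NOT sleep; only block when no wakeup has arrived.
 * The bug was clearing the pending state and then sleeping regardless,
 * which discarded the wakeup and could hang the vcpu. */
static bool wait_would_block(bool *pending)
{
    if (*pending) {
        *pending = false;   /* consume the wakeup instead of losing it */
        return false;       /* don't sleep: we were already woken */
    }
    return true;            /* nothing pending: safe to block in poll */
}
```

Because the hypervisor's do_poll() unblocks on any interrupt (as confirmed above), consuming a pending event instead of sleeping is sufficient; no wakeup can be lost between the check and the block once the sequence runs with interrupts disabled.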
--
dwmw2
___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/lis
On Fri, 2018-01-12 at 18:00 +0000, Andrew Cooper wrote:
> +#ifdef CONFIG_INDIRECT_THUNK
> + /* callq __x86_indirect_thunk_rcx */
> + ctxt->io_emul_stub[10] = 0xe8;
> + *(int32_t *)&ctxt->io_emul_stub[11] =
> + (unsigned long)__x86_indirect_thunk_rcx - (stub_va + 11 + 4);
> +
> +#els
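The displacement in the quoted hunk is the standard x86 rel32 encoding for an E8 near call: the operand is the target minus the address of the next instruction, hence the stub_va + 11 + 4 term (the 4-byte operand starts at byte 11, right after the opcode at byte 10). A standalone sketch of just that arithmetic, with illustrative names:

```c
#include <assert.h>
#include <stdint.h>

/* Compute the rel32 operand for an E8 (near call) opcode placed at byte
 * 'off' of a stub that will execute at virtual address 'stub_va'.  x86
 * relative calls are encoded against the address of the *next*
 * instruction: one opcode byte plus a 4-byte displacement. */
static int32_t call_rel32(uint64_t stub_va, unsigned int off, uint64_t target)
{
    return (int32_t)(target - (stub_va + off + 1 + 4));
}

/* Invert the encoding, so the round trip can be checked. */
static uint64_t call_target(uint64_t stub_va, unsigned int off, int32_t rel)
{
    return stub_va + off + 1 + 4 + (int64_t)rel;
}
```

With off = 10 this reproduces the patch's stub_va + 11 + 4 expression: byte 10 holds the opcode and bytes 11..14 the displacement.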
On Thu, 2018-11-08 at 11:18 +0100, Juergen Gross wrote:
> Oh, sorry. Of course it does. Dereferencing a percpu variable
> directly can't work. How silly of me.
>
> The attached variant should repair that. Tested to not break booting.
Strictly speaking, shouldn't you have an atomic_init() in there
On Mon, 2018-11-19 at 08:05 +0100, Juergen Gross wrote:
> On 15/11/2018 00:22, David Woodhouse wrote:
> > On Thu, 2018-11-08 at 11:18 +0100, Juergen Gross wrote:
> > > Oh, sorry. Of course it does. Dereferencing a percpu variable
> > > directly can't work. H
On Thu, 2018-01-04 at 00:15 +0000, Andrew Cooper wrote:
> + * We've got no usable stack so can't use a RETPOLINE thunk, and are
> + * further than +- 2G from the high mappings so couldn't use
> JUMP_THUNK
> + * even if it was a non-RETPOLINE thunk. Furthermore, an LFENCE isn't
On Thu, 2018-01-11 at 13:41 +0000, Andrew Cooper wrote:
> On 11/01/18 13:03, David Woodhouse wrote:
> >
> > On Thu, 2018-01-04 at 00:15 +0000, Andrew Cooper wrote:
> > >
> > > + * We've got no usable stack so can't use a RETPOLINE thunk, an
On Fri, 2018-01-12 at 18:00 +0000, Andrew Cooper wrote:
>
> +.macro IND_THUNK_RETPOLINE reg:req
> + call 2f
> +1:
Linux and GCC have now settled on 'pause;lfence;jmp' here.
> + lfence
> + jmp 1b
> +2:
> + mov %\reg, (%rsp)
> + ret
> +.endm
> +
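For reference, the capture-loop shape that Linux and GCC converged on puts a pause before the lfence, so a CPU speculating into the trap spins cheaply instead of hammering the fence. A hedged sketch of that variant of the quoted macro (GNU as syntax; an illustration of the pattern, not Xen's final code):

```asm
.macro IND_THUNK_RETPOLINE reg:req
    call 2f            /* pushes 1f; the RSB predicts the ret returns there */
1:
    pause              /* speculation trap: spin politely...              */
    lfence             /* ...and serialize, keeping speculation penned in */
    jmp 1b
2:
    mov %\reg, (%rsp)  /* replace the return address with the real target */
    ret                /* architecturally: an indirect jump to \reg       */
.endm
```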
On Fri, 2018-01-12 at 18:01 +0000, Andrew Cooper wrote:
>
> @@ -1736,6 +1736,9 @@ void context_switch(struct vcpu *prev, struct
> vcpu *next)
> }
>
> ctxt_switch_levelling(next);
> +
> + if ( opt_ibpb )
> + wrmsrl(MSR_PRED_CMD, PRED_CMD_IBPB);
> }
>
If
On Mon, 2018-01-15 at 13:02 +0000, Andrew Cooper wrote:
> On 15/01/18 12:54, David Woodhouse wrote:
> >
> > On Fri, 2018-01-12 at 18:01 +0000, Andrew Cooper wrote:
> > >
> > > @@ -1736,6 +1736,9 @@ void context_switch(struct vc
On Mon, 2018-01-15 at 14:23 +0100, David Woodhouse wrote:
>
> > >
> > > Also... if you're doing that in context_switch() does it do the right
> > > thing with idle? If a CPU switches to the idle domain and then back
> > > again to the same vCPU,
On Fri, 2018-01-12 at 18:00 +0000, Andrew Cooper wrote:
>
> @@ -152,14 +163,38 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr,
> uint64_t val)
> {
> const struct vcpu *curr = current;
> struct domain *d = v->domain;
> + const struct cpuid_policy *cp = d->arch.cpuid;
> struct ms
On Mon, 2018-01-15 at 22:39 +0100, David Woodhouse wrote:
> On Mon, 2018-01-15 at 14:23 +0100, David Woodhouse wrote:
> >
> >
> > >
> > > >
> > > >
> > > > Also... if you're doing that in context_switch() does it do the right
On Wed, 2018-01-17 at 18:26 +0100, David Woodhouse wrote:
>
> > In both switching to idle, and back to the vCPU, we should hit this
> > case and not the 'else' case that does the IBPB:
> >
> > 1710 if ( (per_cpu(curr_vcpu, cpu) == next) ||
&g
On Fri, 2018-01-19 at 08:07 -0700, Jan Beulich wrote:
> > > > On 19.01.18 at 15:48, wrote:
> > vcpu pointers are rather more susceptible to false aliasing in the case
> > that the 4k memory allocation behind struct vcpu gets reused.
> >
> > The risks are admittedly minute, but this is a much safe
On Thu, 2018-01-04 at 00:15 +0000, Andrew Cooper wrote:
>
> --- a/xen/include/asm-x86/asm_defns.h
> +++ b/xen/include/asm-x86/asm_defns.h
> @@ -217,22 +217,34 @@ static always_inline void stac(void)
> addq $-(UREGS_error_code-UREGS_r15), %rsp
> cld
> movq %rdi,UREGS_rd
On Mon, 2018-01-22 at 10:18 +0000, Andrew Cooper wrote:
> On 22/01/2018 10:04, David Woodhouse wrote:
> >
> > On Thu, 2018-01-04 at 00:15 +0000, Andrew Cooper wrote:
> > >
> > > --- a/xen/include/asm-x86/asm_defns.h
> > > +++ b/xen/include/asm-x86/asm_
On Wed, 2018-01-24 at 13:49 +0000, Andrew Cooper wrote:
> On 24/01/18 13:34, Woodhouse, David wrote:
> > I am loath to suggest *more* tweakables, but given the IBPB cost is
> > there any merit in having a mode which does it only if the *domain* is
> > different, regardless of vcpu_id?
>
> This woul
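The tweakable being floated here reduces to a small predicate: skip the barrier when the incoming vCPU belongs to the same domain as the outgoing one, since same-domain vCPUs already share a trust boundary. A toy model with hypothetical names (Xen's real logic sits in context_switch() and must also handle the curr_vcpu lazy-switch cases discussed above):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of a domain-granular IBPB policy: issue the barrier on
 * context switch only when it is enabled and the previous and next
 * vCPUs belong to different domains.  Switching into the idle domain
 * issues nothing by itself.  Illustrative only; not Xen's actual code. */
static bool needs_ibpb(bool opt_ibpb, int prev_domid, int next_domid,
                       bool next_is_idle)
{
    if (!opt_ibpb || next_is_idle)
        return false;                /* disabled, or no guest code to protect */
    return prev_domid != next_domid; /* same domain: shared trust boundary */
}
```

The cost argument in the thread is exactly this: with a per-domain predicate, bouncing between vCPUs of one guest never pays the IBPB, while any cross-domain switch still does.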
On Fri, 2020-02-07 at 20:27 +0000, Julien Grall wrote:
> Could you please send the series in a separate thread? So we don't mix
> the two discussions (i.e merge and free on boot allocated page) together.
Well, they're all part of the same mess, really.
There are cases where pages end up in free_
On Fri, 2020-03-06 at 12:37 +0100, Jan Beulich wrote:
> I've started looking at the latest version of Paul's series, but I'm
> still struggling to see the picture: There's no true distinction
> between Xen heap and domain heap on x86-64 (except on very large
> systems). Therefore it is unclear to m
On Fri, 2020-03-06 at 12:37 +0100, Jan Beulich wrote:
> > For live update we need to give a region of memory to the new Xen which
> > it can use for its boot allocator, before it's handled any of the live
> > update records and before it knows which *other* memory is still
> > available for use.
>
On Fri, 2020-03-06 at 13:25 +0100, Jan Beulich wrote:
> And likely interrupt remapping tables, device tables, etc. I don't
> have a clear picture on how you want to delineate ones in use in any
> such way from ones indeed free to re-use.
Right. The solution there is two-fold:
For pages in the gen