On 1/10/2018 6:14 PM, Tom Lendacky wrote:
> On 1/10/2018 5:47 PM, David Woodhouse wrote:
>> On Wed, 2018-01-10 at 22:51 +0000, David Woodhouse wrote:
>>> In accordance with the Intel and AMD documentation, we need to overwrite
>>> all entries in the RSB on exiting a guest, to prevent malicious branch
>>> target predictions from affecting the host kernel.
On Thu, 2018-01-11 at 10:47 +0100, Borislav Petkov wrote:
> On Thu, Jan 11, 2018 at 10:32:31AM +0100, Peter Zijlstra wrote:
> >
> > can't you do lovely things like:
> >
> > volatile asm ("call __fill_rsb_thunk_%1" : : "r" (dummy))
> >
> > which would still let gcc select the register ?
I've ...
On Thu, Jan 11, 2018 at 10:47:59AM +0100, Borislav Petkov wrote:
> On Thu, Jan 11, 2018 at 10:32:31AM +0100, Peter Zijlstra wrote:
> > can't you do lovely things like:
> >
> > volatile asm ("call __fill_rsb_thunk_%1" : : "r" (dummy))
> >
> > which would still let gcc select the register ?
>
On Thu, Jan 11, 2018 at 10:32:31AM +0100, Peter Zijlstra wrote:
> can't you do lovely things like:
>
> volatile asm ("call __fill_rsb_thunk_%1" : : "r" (dummy))
>
> which would still let gcc select the register ?
Calling a function from asm is nasty because you need to pay attention
to clobbered registers.
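A minimal sketch of what Peter is proposing, for those following along, assuming the x86 'V' operand modifier (which prints the bare register name) and hypothetical per-register __fill_rsb_thunk_* functions; Boris's clobber point is exactly why the constraints below matter:

	/*
	 * Sketch only, not the patch under discussion.  "%V0" prints the
	 * bare name of whatever register GCC assigns to "dummy", so this
	 * emits e.g. "call __fill_rsb_thunk_rax".  The hypothetical thunk
	 * must preserve everything except that one register; "memory" and
	 * "cc" cover the stack traffic and flags of the stuffing loop.
	 */
	static inline void fill_rsb_via_thunk(void)
	{
		unsigned long dummy;

		asm volatile ("call __fill_rsb_thunk_%V0"
			      : "=r" (dummy)
			      : /* no inputs */
			      : "memory", "cc");
	}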
On Thu, Jan 11, 2018 at 09:07:09AM +0000, Woodhouse, David wrote:
> On Thu, 2018-01-11 at 09:49 +0100, Boris Petkov wrote:
> > On January 11, 2018 9:42:38 AM GMT+01:00, Peter Zijlstra wrote:
> > >Or we teach the alternative thing to patch in a jmp to end instead of
> > >NOP padding the entire thing as soon as the jmp (3 bytes) fits ?
...on and a jmp over the whole lot.
Looks like this now...
From 302622182f56825b7cf2c39ce88ea8c462d587fe Mon Sep 17 00:00:00 2001
From: David Woodhouse
Date: Wed, 10 Jan 2018 22:32:24 +0000
Subject: [PATCH] x86/retpoline: Fill return stack buffer on vmexit
In accordance with the Intel and AMD documentation, we need to overwrite
all entries in the RSB on exiting a guest, to prevent malicious branch
target predictions from affecting the host kernel.
On January 11, 2018 9:42:38 AM GMT+01:00, Peter Zijlstra wrote:
>Or we teach the alternative thing to patch in a jmp to end instead of
>NOP padding the entire thing as soon as the jmp (3 bytes) fits ?
Or, even better: use alternative_call() to call functions instead of patching
gazillion bytes.
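Sketched against the kernel's existing alternative_call() helper, that suggestion patches one 5-byte call target instead of a long NOP slide; the two out-of-line functions are hypothetical names for illustration:

	/* Hypothetical helpers: __fill_rsb_func stuffs the RSB and returns;
	 * __fill_rsb_nop just returns.  Both must honour the C ABI. */
	extern void __fill_rsb_nop(void);
	extern void __fill_rsb_func(void);

	static inline void stuff_rsb(void)
	{
		unsigned long dummy;

		/* At boot, apply_alternatives() rewrites the call target
		 * when X86_FEATURE_RETPOLINE is set. */
		alternative_call(__fill_rsb_nop, __fill_rsb_func,
				 X86_FEATURE_RETPOLINE,
				 "=r" (dummy),
				 ASM_NO_INPUT_CLOBBER("memory", "cc"));
	}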
On Wed, Jan 10, 2018 at 10:51:22PM +0000, David Woodhouse wrote:
> This implements the most pressing of the RSB stuffing documented
> by dhansen (based on our discussions) in https://goo.gl/pXbvBE
Only took me 3 readings to find interrupts/traps were in fact
enumerated. Could we pretty please separate ...
On Thu, Jan 11, 2018 at 12:04:35AM +0000, Woodhouse, David wrote:
> On Wed, 2018-01-10 at 15:47 -0800, Tim Chen wrote:
> >
> > > +
> > > +	asm volatile (ALTERNATIVE("",
> > > +		      __stringify(__FILL_RETURN_BUFFER(%0, %1, _%=)),
> > > +		      X86_FEATURE_RETPOLINE)
On Thu, 2018-01-11 at 01:04 +0000, David Woodhouse wrote:
> On Wed, 2018-01-10 at 18:14 -0600, Tom Lendacky wrote:
> > On 1/10/2018 5:47 PM, David Woodhouse wrote:
> > > Now smoke tested with Intel VT-x, but not yet on AMD. Tom, would you be
> > > able to do that?
> > Yes, I'll try to get to it as ...
On Wed, 2018-01-10 at 18:14 -0600, Tom Lendacky wrote:
> On 1/10/2018 5:47 PM, David Woodhouse wrote:
> > On Wed, 2018-01-10 at 22:51 +0000, David Woodhouse wrote:
> >> In accordance with the Intel and AMD documentation, we need to overwrite
> >> all entries in the RSB on exiting a guest, to prevent malicious branch
> >> target predictions from affecting the host kernel.
On 1/10/2018 5:47 PM, David Woodhouse wrote:
> On Wed, 2018-01-10 at 22:51 +0000, David Woodhouse wrote:
>> In accordance with the Intel and AMD documentation, we need to overwrite
>> all entries in the RSB on exiting a guest, to prevent malicious branch
>> target predictions from affecting the host kernel.
On Wed, 2018-01-10 at 15:47 -0800, Tim Chen wrote:
>
> > +
> > +	asm volatile (ALTERNATIVE("",
> > +		      __stringify(__FILL_RETURN_BUFFER(%0, %1, _%=)),
> > +		      X86_FEATURE_RETPOLINE)
>
> We'll be patching in a fairly long set of instructions.
On Wed, 2018-01-10 at 15:22 -0800, David Lang wrote:
> I somewhat hate to ask this, but for those of us following at home, what does
> this add to the overhead?
>
> I am remembering an estimate from mid last week that put retpoline at
> replacing a 3 clock 'ret' with 30 clocks of eye-bleed code.
On Wed, 2018-01-10 at 22:51 +0000, David Woodhouse wrote:
> In accordance with the Intel and AMD documentation, we need to overwrite
> all entries in the RSB on exiting a guest, to prevent malicious branch
> target predictions from affecting the host kernel. This is needed both
> for retpoline and for IBRS.
On 01/10/2018 02:51 PM, David Woodhouse wrote:
> + */
> +#define __FILL_RETURN_BUFFER(reg, sp, uniq)	\
> +	mov	$(NUM_BRANCHES_TO_FILL/2), reg;	\
> +	.align 16;				\
> +.Ldo_call1_ ## uniq:			\
> +	call	.Ldo_call2_ ## uniq;
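For readers following at home: the macro continues beyond the preview above. Its full shape, sketched from the fragments quoted in this thread (details may differ from the posted patch), pairs each call with a speculation trap. No call ever returns, so every call plants one entry in the RSB and one return address on the stack; after NUM_BRANCHES_TO_FILL entries, a single add repairs the stack pointer:

	#define __FILL_RETURN_BUFFER(reg, sp, uniq)		\
		mov	$(NUM_BRANCHES_TO_FILL/2), reg;		\
		.align 16;					\
	.Ldo_call1_ ## uniq:					\
		call	.Ldo_call2_ ## uniq;			\
	.Ltrap1_ ## uniq:					\
		pause;		/* speculation trap */		\
		jmp	.Ltrap1_ ## uniq;			\
	.Ldo_call2_ ## uniq:					\
		call	.Ldo_loop_ ## uniq;			\
	.Ltrap2_ ## uniq:					\
		pause;		/* speculation trap */		\
		jmp	.Ltrap2_ ## uniq;			\
	.Ldo_loop_ ## uniq:					\
		dec	reg;					\
		jnz	.Ldo_call1_ ## uniq;			\
		add	$(BITS_PER_LONG/8) * NUM_BRANCHES_TO_FILL, sp;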
I somewhat hate to ask this, but for those of us following at home, what does
this add to the overhead?
I am remembering an estimate from mid last week that put retpoline at replacing
a 3 clock 'ret' with 30 clocks of eye-bleed code.
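For context, the "eye-bleed code" being priced here is the retpoline thunk that replaces each indirect branch; a sketch of its generic shape, hard-coding %rax for illustration. The return-stack prediction for the final ret points at the pause loop, so misdirected speculation spins harmlessly, while the architectural path returns to the real branch target:

	__x86_indirect_thunk_rax:
		call	.Ldo_rop	/* pushes &.Lspec_trap, priming the RSB */
	.Lspec_trap:
		pause			/* speculation parks here */
		jmp	.Lspec_trap
	.Ldo_rop:
		mov	%rax, (%rsp)	/* overwrite the return address with
					   the real branch target */
		ret			/* predicted into the trap, but
					   architecturally jumps to *%rax */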
In accordance with the Intel and AMD documentation, we need to overwrite
all entries in the RSB on exiting a guest, to prevent malicious branch
target predictions from affecting the host kernel. This is needed both
for retpoline and for IBRS.
Signed-off-by: David Woodhouse
---
Untested in this form ...
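Putting the quoted fragments together, the helper being added looks roughly like this (a reconstructed sketch; the exact constraints, and whether the empty alternative becomes a jmp as discussed above, may differ in the final patch):

	static inline void vmexit_fill_RSB(void)
	{
		unsigned long loops;

		/* %0 is a scratch loop counter GCC picks; %1 is the stack
		 * pointer, which the macro moves back after its calls.
		 * Without X86_FEATURE_RETPOLINE nothing is patched in. */
		asm volatile (ALTERNATIVE("",
					  __stringify(__FILL_RETURN_BUFFER(%0, %1, _%=)),
					  X86_FEATURE_RETPOLINE)
			      : "=r" (loops), "+r" (current_stack_pointer)
			      : : "memory");
	}

The KVM exit paths would then invoke it immediately after the VMLAUNCH/VMRESUME asm returns, before any host ret can consume a guest-planted RSB entry.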