Re: [PATCH v2 1/2] x86/asm: Pin sensitive CR4 bits

2019-06-17, Kees Cook
On Mon, Jun 17, 2019 at 12:26:01AM +0200, Thomas Gleixner wrote:
> Nah. The straightforward approach is to do it right when the secondary CPU
> comes to life, before it does any cr4 write (except for the code in
> entry_64.S), something like the below.

Okay, will do; thanks! :) I'll respin things…
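For context, a minimal sketch of the approach Thomas is describing: re-apply the pinned bits once, early in secondary CPU bringup, before anything else can write CR4. The names cr_pinning and cr4_pinned_bits follow the thread's discussion; the exact placement inside start_secondary() is an assumption here, not the final patch.

	/* Sketch only: force the pinned CR4 bits back on as the very
	 * first thing a secondary CPU does after coming to life. */
	static void notrace start_secondary(void *unused)
	{
		if (static_branch_likely(&cr_pinning))
			native_write_cr4(this_cpu_read(cpu_tlbstate.cr4) |
					 cr4_pinned_bits);
		/* ... remainder of normal secondary bringup ... */
	}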

Re: [PATCH v2 1/2] x86/asm: Pin sensitive CR4 bits

2019-06-16, Thomas Gleixner
On Fri, 14 Jun 2019, Kees Cook wrote:
> On Fri, Jun 14, 2019 at 04:57:36PM +0200, Thomas Gleixner wrote:
> > Wouldn't it make sense to catch situations where a regular caller provides
> > @val with pinned bits unset? I.e. move the OR into this code path after
> > storing bits_missing.
>
> I mentioned this in the commit log…

Re: [PATCH v2 1/2] x86/asm: Pin sensitive CR4 bits

2019-06-14, Kees Cook
On Fri, Jun 14, 2019 at 04:57:36PM +0200, Thomas Gleixner wrote:
> Wouldn't it make sense to catch situations where a regular caller provides
> @val with pinned bits unset? I.e. move the OR into this code path after
> storing bits_missing.

I mentioned this in the commit log, but maybe I wasn't very…
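The shape being discussed looks roughly like the sketch below (modeled on the thread, not necessarily the final upstream code): perform the cheap CR4 write first, and only on the failure path record which pinned bits went missing, OR them back in, and retry, warning once the register is safe again.

	void native_write_cr4(unsigned long val)
	{
		unsigned long bits_missing = 0;

	set_register:
		asm volatile("mov %0,%%cr4" : "+r" (val) : : "memory");

		if (static_branch_likely(&cr_pinning)) {
			if (unlikely((val & cr4_pinned_bits) != cr4_pinned_bits)) {
				bits_missing = ~val & cr4_pinned_bits;
				val |= bits_missing;
				goto set_register;
			}
			/* Warn only after the pinned bits are restored. */
			WARN_ONCE(bits_missing, "CR4 bits went missing: %lx!?\n",
				  bits_missing);
		}
	}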

Re: [PATCH v2 1/2] x86/asm: Pin sensitive CR4 bits

2019-06-14, Thomas Gleixner
On Tue, 4 Jun 2019, Kees Cook wrote:
> ---
> v2:
> - move setup until after CPU feature detection and selection.
> - refactor to use static branches to have atomic enabling.
> - only perform the "or" after a failed check.

Maybe I'm missing something, but

> static inline unsigned long native_read…
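For readers unfamiliar with the second changelog item: a static branch compiles down to a NOP in the hot path and is patched in place when its key is enabled, so pinning can be switched on atomically once setup is complete. A generic sketch of the pattern (not the patch itself; check_pinning() is a hypothetical helper):

	#include <linux/jump_label.h>

	static DEFINE_STATIC_KEY_FALSE(cr_pinning);

	/* Hypothetical helper standing in for the real enforcement. */
	static void check_pinning(void) { }

	static void write_path(void)
	{
		/* Compiles to a NOP until the key is flipped. */
		if (static_branch_likely(&cr_pinning))
			check_pinning();
	}

	static void __init finish_setup(void)
	{
		/* One-time, atomic enable after feature detection. */
		static_key_enable(&cr_pinning.key);
	}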

Re: [PATCH v2 1/2] x86/asm: Pin sensitive CR4 bits

2019-06-05, Kees Cook
On Tue, Jun 04, 2019 at 04:44:21PM -0700, Kees Cook wrote:
> diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
> index 2c57fffebf9b..6b210be12734 100644
> --- a/arch/x86/kernel/cpu/common.c
> +++ b/arch/x86/kernel/cpu/common.c
> @@ -366,6 +366,23 @@ static __always_inline void…
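The truncated hunk lands in common.c; consistent with the v2 changelog, the setup side would run after CPU feature detection and record which of the sensitive bits are actually enabled before flipping the static key. A sketch under those assumptions (the function name and exact mask handling are illustrative, and cr_pinning is the static key from the discussion above):

	static unsigned long cr4_pinned_bits __ro_after_init;

	static void __init setup_cr_pinning(void)
	{
		unsigned long mask;

		/* Pin only the bits that feature detection turned on. */
		mask = X86_CR4_SMEP | X86_CR4_SMAP | X86_CR4_UMIP;
		cr4_pinned_bits = this_cpu_read(cpu_tlbstate.cr4) & mask;
		static_key_enable(&cr_pinning.key);
	}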

[PATCH v2 1/2] x86/asm: Pin sensitive CR4 bits

2019-06-04, Kees Cook
Several recent exploits have used direct calls to the native_write_cr4() function to disable SMEP and SMAP before continuing their exploits using userspace memory access. This pins bits of CR4 so that they cannot be changed through a common function. This is not intended to be general ROP protection…
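To make the effect concrete, here is the attack pattern the changelog describes and why it stops working once pinning is active (illustration only, not code from the patch):

	/*
	 * Illustration: the direct-call pattern exploits have used to
	 * turn off SMEP/SMAP before touching userspace memory.
	 */
	native_write_cr4(native_read_cr4() & ~(X86_CR4_SMEP | X86_CR4_SMAP));
	/*
	 * With pinning active, native_write_cr4() sees the pinned bits
	 * cleared, ORs them back in, rewrites CR4, and warns, so SMEP
	 * and SMAP remain enabled despite the direct call.
	 */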