On Mon, Jun 17, 2019 at 12:26:01AM +0200, Thomas Gleixner wrote:
> Nah. The straightforward approach is to do it right when the secondary CPU
> comes to life, before it does any cr4 write (except for the code in
> entry_64.S), something like the below.
Okay, will do; thanks! :) I'll respin things
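Thomas's "something like the below" snippet is not part of this excerpt. As a
rough sketch only of the idea being discussed, re-applying the pinned bits when
a secondary CPU comes up might look like the following; the names cr_pinning,
cr4_pinned_bits and cr4_init() are assumptions based on the rest of the thread,
not the posted code, and the usual x86 kernel headers are assumed:

    /* Called early in secondary CPU bring-up, before any other cr4 write. */
    void cr4_init(void)
    {
            unsigned long cr4 = __read_cr4();

            /* Re-apply the pinned bits captured on the boot CPU. */
            if (static_branch_likely(&cr_pinning))
                    cr4 |= cr4_pinned_bits;

            __write_cr4(cr4);

            /* Keep the per-CPU shadow of CR4 in sync. */
            this_cpu_write(cpu_tlbstate.cr4, cr4);
    }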
On Fri, 14 Jun 2019, Kees Cook wrote:
> On Fri, Jun 14, 2019 at 04:57:36PM +0200, Thomas Gleixner wrote:
> > Wouldn't it make sense to catch situations where a regular caller provides
> > @val with pinned bits unset? I.e. move the OR into this code path after
> > storing bits_missing.
>
> I mentioned this in the commit log, but maybe I wasn't […]
On Fri, Jun 14, 2019 at 04:57:36PM +0200, Thomas Gleixner wrote:
> Wouldn't it make sense to catch situations where a regular caller provides
> @val with pinned bits unset? I.e. move the OR into this code path after
> storing bits_missing.
I mentioned this in the commit log, but maybe I wasn't […]
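As a sketch of the behaviour being discussed here (a reading of the thread, not
the patch as posted, with names assumed from context): the pinned bits are OR'd
back in only after a check fails, bits_missing records what was cleared, and
the warning fires after the register has already been corrected:

    void native_write_cr4(unsigned long val)
    {
            unsigned long bits_missing = 0;

    set_register:
            asm volatile("mov %0,%%cr4" : "+r" (val) : : "memory");

            if (static_branch_likely(&cr_pinning)) {
                    if (unlikely((val & cr4_pinned_bits) != cr4_pinned_bits)) {
                            /* Record and restore the pinned bits that went missing. */
                            bits_missing = ~val & cr4_pinned_bits;
                            val |= bits_missing;
                            goto set_register;
                    }
                    /* Warn only after the pinned bits have been put back. */
                    WARN_ONCE(bits_missing, "CR4 bits went missing: %lx!?\n",
                              bits_missing);
            }
    }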
On Tue, 4 Jun 2019, Kees Cook wrote:
> ---
> v2:
> - move setup until after CPU feature detection and selection.
> - refactor to use static branches to have atomic enabling.
> - only perform the "or" after a failed check.
Maybe I'm missing something, but […]
> static inline unsigned long native_read[…]
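For readers following the v2 changelog items quoted above, a minimal sketch of
what "static branches to have atomic enabling" could look like; the names, the
__ro_after_init annotation and the exact bit mask (SMEP/SMAP/UMIP) are
assumptions here, not necessarily what v2 contains:

    static DEFINE_STATIC_KEY_FALSE(cr_pinning);
    static unsigned long cr4_pinned_bits __ro_after_init;

    /* Run once CPU feature detection and selection have finished. */
    static void __init setup_cr_pinning(void)
    {
            unsigned long mask = X86_CR4_SMEP | X86_CR4_SMAP | X86_CR4_UMIP;

            /* Capture the bits currently set, then start enforcing them. */
            cr4_pinned_bits = this_cpu_read(cpu_tlbstate.cr4) & mask;
            static_key_enable(&cr_pinning.key);
    }

Until the static key is flipped, the check in native_write_cr4() is skipped, so
enabling the key is what switches enforcement on atomically.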
On Tue, Jun 04, 2019 at 04:44:21PM -0700, Kees Cook wrote:
> diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
> index 2c57fffebf9b..6b210be12734 100644
> --- a/arch/x86/kernel/cpu/common.c
> +++ b/arch/x86/kernel/cpu/common.c
> @@ -366,6 +366,23 @@ static __always_inline void […]
Several recent exploits have used direct calls to the native_write_cr4()
function to disable SMEP and SMAP before then continuing their exploits
using userspace memory access. This pins bits of CR4 so that they cannot
be changed through a common function. This is not intended to be general
ROP protection […]
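To illustrate the effect described in the commit text (a hypothetical call
sequence, not code from the patch): once pinning is enabled, even a direct call
that tries to clear SMEP/SMAP through the common helper is undone immediately
and reported:

    unsigned long cr4 = __read_cr4();

    /* Attacker-style attempt to drop SMEP/SMAP via the common helper... */
    native_write_cr4(cr4 & ~(X86_CR4_SMEP | X86_CR4_SMAP));

    /*
     * ...the pinned bits are set again before the write takes effect for
     * good, and a WARN_ONCE reports which bits went missing.
     */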