* Borislav Petkov <b...@alien8.de> wrote:

> On Fri, Nov 24, 2017 at 06:23:41PM +0100, Ingo Molnar wrote:
> > From: Andy Lutomirski <l...@kernel.org>
> >
> > Historically, IDT entries from usermode have always gone directly
> > to the running task's kernel stack. Rearrange it so that we enter on
> > a percpu trampoline stack and then manually switch to the task's stack.
> > This touches a couple of extra cachelines, but it gives us a chance
> > to run some code before we touch the kernel stack.
> >
> > The asm isn't exactly beautiful, but I think that fully refactoring
> > it can wait.
> >
> > Signed-off-by: Andy Lutomirski <l...@kernel.org>
> > Signed-off-by: Thomas Gleixner <t...@linutronix.de>
> 
> I think you mean Reviewed-by: here. The following patches have it too.
> 
> > Cc: Borislav Petkov <bpet...@suse.de>
> > Cc: Brian Gerst <brge...@gmail.com>
> > Cc: Dave Hansen <dave.han...@intel.com>
> > Cc: Josh Poimboeuf <jpoim...@redhat.com>
> > Cc: Linus Torvalds <torva...@linux-foundation.org>
> > Cc: Peter Zijlstra <pet...@infradead.org>
> > Link: https://lkml.kernel.org/r/fa3958723a1a85baeaf309c735b775841205800e.1511497875.git.l...@kernel.org
> > Signed-off-by: Ingo Molnar <mi...@kernel.org>
> > ---
> >  arch/x86/entry/entry_64.S        | 67 ++++++++++++++++++++++++++++++----------
> >  arch/x86/entry/entry_64_compat.S |  5 ++-
> >  arch/x86/include/asm/switch_to.h |  2 +-
> >  arch/x86/include/asm/traps.h     |  1 -
> >  arch/x86/kernel/cpu/common.c     |  6 ++--
> >  arch/x86/kernel/traps.c          | 18 +++++------
> >  6 files changed, 68 insertions(+), 31 deletions(-)
> 
> ...
> 
> > diff --git a/arch/x86/include/asm/switch_to.h b/arch/x86/include/asm/switch_to.h
> > index 8c6bd6863db9..a6796ac8d311 100644
> > --- a/arch/x86/include/asm/switch_to.h
> > +++ b/arch/x86/include/asm/switch_to.h
> > @@ -93,7 +93,7 @@ static inline void update_sp0(struct task_struct *task)
> >  #ifdef CONFIG_X86_32
> >  	load_sp0(task->thread.sp0);
> >  #else
> > -	load_sp0(task_top_of_stack(task));
> > +	/* On x86_64, sp0 always points to the entry trampoline stack. */
> >  #endif
> 
> You can put this comment into the one on top and remove the #else.
> ifdeffery is always ugly and the less, the better.
Ok, I made it:

	/* On x86_64, sp0 always points to the entry trampoline stack, which is constant: */
#ifdef CONFIG_X86_32
	load_sp0(task->thread.sp0);
#endif

> >  	/*
> >  	 * This is called from entry_64.S early in handling a fault
> >  	 * caused by a bad iret to user mode. To handle the fault
> > -	 * correctly, we want move our stack frame to task_pt_regs
> > -	 * and we want to pretend that the exception came from the
> > -	 * iret target.
> > +	 * correctly, we want move our stack frame to where it would
> 
> " ... we want to move... "
> 
> > +	 * be had we entered directly on the entry stack (rather than
> > +	 * just below the IRET frame) and we want to pretend that the
> > +	 * exception came from the iret target.
> 
> s/iret/IRET/
> 
> >  	 */
> >  	struct bad_iret_stack *new_stack =
> > -		container_of(task_pt_regs(current),
> > -			     struct bad_iret_stack, regs);
> > +		(struct bad_iret_stack *)this_cpu_read(cpu_tss.x86_tss.sp0) - 1;
> > 
> >  	/* Copy the IRET target to the new stack. */
> >  	memmove(&new_stack->regs.ip, (void *)s->regs.sp, 5*8);
> > --
> 
> with that:
> 
> Reviewed-by: Borislav Petkov <b...@suse.de>

Fixed those details and added your tag, thanks Boris!

	Ingo
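
For reference, with that change folded in, the helper in arch/x86/include/asm/switch_to.h
would read roughly as below (a sketch based on the hunk quoted above, not a verbatim copy
of the final tree; on 64-bit the body compiles away since sp0 no longer needs updating):

	static inline void update_sp0(struct task_struct *task)
	{
		/* On x86_64, sp0 always points to the entry trampoline stack, which is constant: */
	#ifdef CONFIG_X86_32
		load_sp0(task->thread.sp0);
	#endif
	}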
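
On the fixup_bad_iret() hunk, the key step is the pointer arithmetic that reserves one
frame just below the top of the entry stack: casting sp0 to a struct pointer and
subtracting 1 backs off by exactly sizeof() of that struct. A standalone illustration
(not kernel code; all names here are made up for the example):

	#include <stdio.h>

	/* Stand-in for a register frame; the real one ends with the 5*8-byte IRET frame. */
	struct fake_frame {
		unsigned long ip, cs, flags, sp, ss;
	};

	int main(void)
	{
		static unsigned char stack[4096];
		void *top = stack + sizeof(stack);	/* plays the role of sp0 */

		/* "(struct fake_frame *)top - 1" reserves sizeof(struct fake_frame) bytes below top: */
		struct fake_frame *frame = (struct fake_frame *)top - 1;

		printf("top   = %p\n", top);
		printf("frame = %p (%zu bytes below top)\n",
		       (void *)frame, sizeof(*frame));
		return 0;
	}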