On Mon, Mar 23, 2015 at 12:27 PM, Andy Lutomirski <l...@amacapital.net> wrote:
> On Mon, Mar 23, 2015 at 12:21 PM, Denys Vlasenko <dvlas...@redhat.com> wrote:
>> On 03/23/2015 08:10 PM, Andy Lutomirski wrote:
>>> We currently have a race: if we're preempted during syscall exit, we
>>> can fail to process syscall return work that is queued up while
>>> we're preempted in ret_from_sys_call after checking ti.flags.
>>>
>>> Fix it by disabling interrupts before checking ti.flags.
>>>
>>> Fixes: 96b6352c1271 x86_64, entry: Remove the syscall exit audit and
>>> schedule optimizations
>>> Reported-by: Stefan Seyfried <stefan.seyfr...@googlemail.com>
>>> Reported-by: Takashi Iwai <ti...@suse.de>
>>> Signed-off-by: Andy Lutomirski <l...@kernel.org>
>>> ---
>>>
>>> Ingo, I don't understand the LOCKDEP_SYS_EXIT stuff. Can you take a quick
>>> look to confirm that it's okay to call it more than once?
>>>
>>>  arch/x86/kernel/entry_64.S | 18 ++++++++++++++----
>>>  1 file changed, 14 insertions(+), 4 deletions(-)
>>>
>>> diff --git a/arch/x86/kernel/entry_64.S b/arch/x86/kernel/entry_64.S
>>> index 1d74d161687c..2babb393915e 100644
>>> --- a/arch/x86/kernel/entry_64.S
>>> +++ b/arch/x86/kernel/entry_64.S
>>> @@ -364,12 +364,21 @@ system_call_fastpath:
>>>       * Has incomplete stack frame and undefined top of stack.
>>>       */
>>>  ret_from_sys_call:
>>> -     testl $_TIF_ALLWORK_MASK,TI_flags+THREAD_INFO(%rsp,RIP-ARGOFFSET)
>>> -     jnz int_ret_from_sys_call_fixup /* Go the the slow path */
>>> -
>>>       LOCKDEP_SYS_EXIT
>>>       DISABLE_INTERRUPTS(CLBR_NONE)
>>>       TRACE_IRQS_OFF
>>> +
>>> +     /*
>>> +      * We must check ti flags with interrupts (or at least preemption)
>>> +      * off because we must *never* return to userspace without
>>> +      * processing exit work that is enqueued if we're preempted here.
>>> +      * In particular, returning to userspace with any of the one-shot
>>> +      * flags (TIF_NOTIFY_RESUME, TIF_USER_RETURN_NOTIFY, etc) set is
>>> +      * very bad.
>>> +      */
>>> +     testl $_TIF_ALLWORK_MASK,TI_flags+THREAD_INFO(%rsp,RIP-ARGOFFSET)
>>> +     jnz int_ret_from_sys_call_fixup /* Go the the slow path */
>>                                            ^^^^^^^^^^^^^^^^^^^^
>>
>> typo here; s/the the/to the/
>
> Whoops.
>
>>
>>
>>> +
>>>       CFI_REMEMBER_STATE
>>>       /*
>>>        * sysretq will re-enable interrupts:
>>> @@ -386,7 +395,7 @@ ret_from_sys_call:
>>>
>>>  int_ret_from_sys_call_fixup:
>>>       FIXUP_TOP_OF_STACK %r11, -ARGOFFSET
>>> -     jmp int_ret_from_sys_call
>>> +     jmp int_ret_from_sys_call_irqs_off
>>>
>>>  /* Do syscall tracing */
>>>  tracesys:
>>> @@ -432,6 +441,7 @@ tracesys_phase2:
>>>  GLOBAL(int_ret_from_sys_call)
>>>       DISABLE_INTERRUPTS(CLBR_NONE)
>>>       TRACE_IRQS_OFF
>>> +int_ret_from_sys_call_irqs_off:
>>>       movl $_TIF_ALLWORK_MASK,%edi
>>>       /* edi: mask to check */
>>>  GLOBAL(int_with_check)
>>
>>
>> You can avoid having to know LOCKDEP_SYS_EXIT :)
>> Just set %edi = $_TIF_ALLWORK_MASK, and jump a bit farther:
>>
>>
>>         movl $_TIF_ALLWORK_MASK,%edi
>>         testl %edi,TI_flags+THREAD_INFO(%rsp,RIP-ARGOFFSET)
>>         jnz int_ret_from_sys_call_fixup /* Go to the slow path */
>>         ...
>>         ...
>> GLOBAL(int_ret_from_sys_call)
>>         DISABLE_INTERRUPTS(CLBR_NONE)
>>         TRACE_IRQS_OFF
>>         movl $_TIF_ALLWORK_MASK,%edi
>>         /* edi: mask to check */
>> GLOBAL(int_with_check)
>>         LOCKDEP_SYS_EXIT_IRQ
>> int_ret_from_sys_call_irqs_off:    <========== HERE
>>
>
> I didn't want to do that, because I really want to rewrite
> int_ret_from_sys_call in C.
>
To say that better: I don't want to further spread the %edi garbage around
entry_64.S. Saving a single load on the slow path isn't worth any of this
complexity, and, if we're going to rewrite it in C anyway, then maybe we
could consider microoptimizations like that later on.

--Andy