Really swap arguments #4 and #5 in stub32_clone instead of "optimizing"
it into a move.

Yes, tls_val is currently unused. Yes, on some CPUs XCHG is slightly
more expensive than MOV. But a cycle or two on an expensive syscall like
clone() is well below the noise floor, and the obfuscation of logic
introduced by this optimization is simply not worth it.
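
For reference, a sketch of the two argument orders being reconciled
(clone32/clone64 are hypothetical names used for illustration only, not
the kernel's actual declarations):

    /* 32-bit ABI: tls_val is arg #4, child_tidptr is arg #5 */
    long clone32(unsigned long flags, void *child_stack,
                 int *parent_tidptr, unsigned long tls_val,
                 int *child_tidptr);

    /* 64-bit ABI: child_tidptr is arg #4, tls_val is arg #5 */
    long clone64(unsigned long flags, void *child_stack,
                 int *parent_tidptr, int *child_tidptr,
                 unsigned long tls_val);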

Signed-off-by: Denys Vlasenko <dvlas...@redhat.com>
CC: Linus Torvalds <torva...@linux-foundation.org>
CC: Steven Rostedt <rost...@goodmis.org>
CC: Ingo Molnar <mi...@kernel.org>
CC: Borislav Petkov <b...@alien8.de>
CC: "H. Peter Anvin" <h...@zytor.com>
CC: Andy Lutomirski <l...@amacapital.net>
CC: Oleg Nesterov <o...@redhat.com>
CC: Frederic Weisbecker <fweis...@gmail.com>
CC: Alexei Starovoitov <a...@plumgrid.com>
CC: Will Drewry <w...@chromium.org>
CC: Kees Cook <keesc...@chromium.org>
CC: x...@kernel.org
CC: linux-kernel@vger.kernel.org
---
 arch/x86/ia32/ia32entry.S | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/arch/x86/ia32/ia32entry.S b/arch/x86/ia32/ia32entry.S
index 8e72256..0c302d0 100644
--- a/arch/x86/ia32/ia32entry.S
+++ b/arch/x86/ia32/ia32entry.S
@@ -567,11 +567,9 @@ GLOBAL(stub32_clone)
         * 32-bit clone API is clone(..., int tls_val, int *child_tidptr).
         * 64-bit clone API is clone(..., int *child_tidptr, int tls_val).
         * Native 64-bit kernel's sys_clone() implements the latter.
-        * We need to swap args here. But since tls_val is in fact ignored
-        * by sys_clone(), we can get away with an assignment
-        * (arg4 = arg5) instead of a full swap:
+        * We need to swap args here:
         */
-       mov     %r8, %rcx
+       xchg    %r8, %rcx
        jmp     ia32_ptregs_common
 
        ALIGN
-- 
1.8.1.4
