"movw %ds,%cx" insn needs a 0x66 prefix, while "movw %ds,%ecx" does not. The difference is that it overwrite entire %ecx, not only its low half, but subsequent code doesn't depend on the value of top half of %ecx, we can safely use the shorter insn.
The new code is also faster than the old one, since it no longer depends
on the old value of %ecx, but this code fragment is not
performance-critical.

Signed-off-by: Denys Vlasenko <dvlas...@redhat.com>
CC: Linus Torvalds <torva...@linux-foundation.org>
CC: Steven Rostedt <rost...@goodmis.org>
CC: Ingo Molnar <mi...@kernel.org>
CC: Borislav Petkov <b...@alien8.de>
CC: "H. Peter Anvin" <h...@zytor.com>
CC: Andy Lutomirski <l...@amacapital.net>
CC: Oleg Nesterov <o...@redhat.com>
CC: Frederic Weisbecker <fweis...@gmail.com>
CC: Alexei Starovoitov <a...@plumgrid.com>
CC: Will Drewry <w...@chromium.org>
CC: Kees Cook <keesc...@chromium.org>
CC: x...@kernel.org
CC: linux-kernel@vger.kernel.org
---
 arch/x86/kernel/entry_64.S | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kernel/entry_64.S b/arch/x86/kernel/entry_64.S
index 62b4c5f..ef2651d 100644
--- a/arch/x86/kernel/entry_64.S
+++ b/arch/x86/kernel/entry_64.S
@@ -1195,19 +1195,19 @@ ENTRY(xen_failsafe_callback)
 	/*CFI_REL_OFFSET ds,DS*/
 	CFI_REL_OFFSET r11,8
 	CFI_REL_OFFSET rcx,0
-	movw %ds,%cx
+	movl %ds,%ecx
 	cmpw %cx,0x10(%rsp)
 	CFI_REMEMBER_STATE
 	jne 1f
-	movw %es,%cx
+	movl %es,%ecx
 	cmpw %cx,0x18(%rsp)
 	jne 1f
-	movw %fs,%cx
+	movl %fs,%ecx
 	cmpw %cx,0x20(%rsp)
 	jne 1f
-	movw %gs,%cx
+	movl %gs,%ecx
 	cmpw %cx,0x28(%rsp)
 	jne 1f
 	/* All segments match their saved values => Category 2 (Bad IRET). */
 	movq (%rsp),%rcx
 	CFI_RESTORE rcx
 	movq 8(%rsp),%r11
--
1.8.1.4