On Mon, Sep 16, 2019 at 2:30 PM Linus Torvalds <torva...@linux-foundation.org> wrote:
>
> On Mon, Sep 16, 2019 at 10:41 AM Andy Lutomirski <l...@kernel.org> wrote:
> >
> > After some experimentation, I think y'all are just doing it wrong.
> > GCC is very clever about this as long as it's given the chance. This
> > test, for example, generates excellent code:
> >
> > #include <string.h>
> >
> > __THROW __nonnull ((1)) __attribute__((always_inline)) void
> > *memset(void *s, int c, size_t n)
> > {
> >         asm volatile ("nop");
> >         return s;
> > }
> >
> > /* generates 'nop' */
> > void zero(void *dest, size_t size)
> > {
> >         __builtin_memset(dest, 0, size);
> > }
>
> I think the point was that we'd like to get the default memset (for
> when __builtin_memset() doesn't generate inline code) also inlined
> into just "rep stosb", instead of that tail-call "jmp memset".
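As a concrete sketch of that (my function name, assuming gcc on
x86-64; this is not Boris' actual patch): a memset whose entire body
is "rep stosb", marked always_inline, so the fallback call that
__builtin_memset() would otherwise emit as "jmp memset" gets expanded
in place as well:

#include <stddef.h>

static inline __attribute__((always_inline)) void *
rep_stosb_memset(void *dest, int c, size_t n)
{
        void *d = dest;

        /* rep stosb: store AL to [RDI], RCX times, advancing RDI */
        asm volatile("rep stosb"
                     : "+D" (d), "+c" (n)
                     : "a" (c)
                     : "memory");
        return dest;
}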
Well, when I wrote this email, I *thought* it was inlining the
'memset' function, but maybe I just can't read gcc's output today.

It seems like gcc is maybe smart enough to occasionally optimize
memset just because it's called 'memset'. This generates good code:

#include <stddef.h>

inline void *memset(void *dest, int c, size_t n)
{
        /* Boris' implementation */
        void *ret, *dummy;

        asm volatile("push %%rdi\n\t"
                     "mov %%rax, %%rsi\n\t"
                     "mov %%rcx, %%rdx\n\t"
                     "andl $7,%%edx\n\t"
                     "shrq $3,%%rcx\n\t"
                     "movzbl %%sil,%%esi\n\t"
                     "movabs $0x0101010101010101,%%rax\n\t"
                     "imulq %%rsi,%%rax\n\t"
                     "rep stosq\n\t"
                     "movl %%edx,%%ecx\n\t"
                     "rep stosb\n\t"
                     "pop %%rax\n\t"
                     : "=&D" (ret), "=c" (dummy)
                     : "0" (dest), "a" (c), "c" (n)
                     : "rsi", "rdx", "memory");

        return ret;
}

int one_word(void)
{
        int x;
        memset(&x, 0, sizeof(x));
        return x;
}

So maybe Boris' patch is good after all.
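One cheap way to check the name-recognition theory, assuming gcc on
x86-64: build with and without -fno-builtin-memset and compare the
assembly, since that flag stops gcc from treating a function named
"memset" as its builtin. The interesting case is a variable-size
caller (my function name, for illustration), where gcc can't fold the
call into direct stores and has to fall back to the definition above:

/* gcc can't fold a variable-size memset into inline stores, so it
 * emits a real call; with the inline asm definition above, that call
 * can itself be inlined, leaving no "jmp memset". */
void zero_buf(void *dest, size_t n)
{
        memset(dest, 0, n);
}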