On 22/10/2015 18:02, Eric Blake wrote:
> On 10/22/2015 09:31 AM, Paolo Bonzini wrote:
>
>> Only if your machine cannot do unaligned loads. If it can, you can
>> align the length instead of the buffer. memcmp will take care of
>> aligning the buffer (with some luck it won't have to, e.g. if buf is
>> 0x12340002 and length = 4094). On x86 unaligned "unsigned long" loads
>> are basically free as long as they don't cross a cache line.
>>
>>> BTW Rusty has a benchmark framework for this as referenced from:
>>> http://rusty.ozlabs.org/?p=560
>>
>> I missed his benchmark framework so I wrote another one, here it is:
>> https://gist.githubusercontent.com/bonzini/9a95b0e02d1ceb60af9e/raw/7bc42ddccdb6c42fea3db58e0539d0443d0e6dc6/memeqzero.c
>
> I see a bug in there:
Of course. You shouldn't have told me what the bug was, I deserved to
look for it myself. :)

Fixed version, same performance (7/24/91/5002 for memeqzero4_rusty,
7/10/59/4963 for mine, so 30-40 clock cycles saved if length >= 16):

bool memeqzero4_paolo(const void *data, size_t length)
{
    const unsigned char *p = data;
    unsigned long word;

    while (__builtin_expect(length & (sizeof(word) - 1), 0)) {
        if (*p)
            return false;
        p++;
        length--;
        if (!length)
            return true;
    }

    /* We must always read one byte or word, even if everything is aligned!
     * Otherwise, memcmp(data, data, length) is trivially true.
     */
    for (;;) {
        memcpy(&word, p, sizeof(word));
        if (word)
            return false;
        if (__builtin_expect(length & (16 - sizeof(word)), 0) == 0)
            break;
        p += sizeof(word);
        length -= sizeof(word);
        if (!length)
            return true;
    }

    /* Now we know that's zero, memcmp with self. */
    return memcmp(data, p, length) == 0;
}