From: David Laight <[email protected]>

Unrolling the loop once significantly improves performance on some CPUs.
Userspace testing on a Zen-5 shows it runs at two bytes/clock rather than
one byte/clock, with only marginal additional overhead.

Using 'byte masking' is faster for longer strings - the break-even point
is around 56 bytes on the same Zen-5 (the setup overhead is much larger,
but it then runs at 16 bytes every 3 clocks).
However, the majority of kernel calls won't be near that length.
There would also be extra overhead for big-endian systems and those
without a fast ffs().

Signed-off-by: David Laight <[email protected]>
---

For reference 'rep scasb' comes in at 150 + 3 per byte on Zen-5.

I've not tested any Intel CPU; I don't think they can run a
'1 clock loop', but the change might improve performance from
2 clocks/byte to 1 clock/byte.
I can test Intel up to an i7-7xxx but don't have any older AMD CPUs
or any other architectures (apart from a Pi-5).

Other architectures may well see an improvement, if only because
of the reduced number of taken branches.

I did notice that arm64 uses a very large asm block that is clearly
optimised for very long strings - I suspect the C version will be
faster in the kernel.
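
For anyone curious, a minimal userspace sketch of the 'byte masking'
(SWAR) approach referred to above might look like the code below. The
function name strlen_swar is mine, alignment handling is omitted, and a
real in-kernel version would have to align the first load so it never
reads across into an unmapped page:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/*
 * Sketch only: read 8 bytes at a time and use the classic SWAR
 * "has-zero-byte" test.  Little-endian; the caller's buffer must be
 * readable up to the next 8-byte boundary past the NUL.
 */
static size_t strlen_swar(const char *s)
{
	const uint64_t ones  = 0x0101010101010101ull;
	const uint64_t highs = 0x8080808080808080ull;
	size_t len;
	uint64_t v, zeros;

	for (len = 0;; len += 8) {
		memcpy(&v, s + len, 8);	/* unaligned 8-byte load */
		/* high bit set in each byte of 'zeros' iff that byte was 0;
		 * borrows only propagate upwards from a zero byte, so the
		 * lowest set bit always marks the first NUL. */
		zeros = (v - ones) & ~v & highs;
		if (zeros)
			return len + (size_t)(__builtin_ctzll(zeros) >> 3);
	}
}
```

On big-endian the __builtin_ctzll() would have to become a
leading-zero scan of a differently built mask, which is part of the
extra overhead mentioned in the commit message above.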

 lib/string.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/lib/string.c b/lib/string.c
index b632c71df1a5..31de9aa86409 100644
--- a/lib/string.c
+++ b/lib/string.c
@@ -415,11 +415,13 @@ EXPORT_SYMBOL(strnchr);
 #ifndef __HAVE_ARCH_STRLEN
 size_t strlen(const char *s)
 {
-       const char *sc;
+       size_t len;
 
-       for (sc = s; *sc != '\0'; ++sc)
-               /* nothing */;
-       return sc - s;
+       for (len = 0; likely(s[len]); len += 2) {
+               if (!s[len + 1])
+                       return len + 1;
+       }
+       return len;
 }
 EXPORT_SYMBOL(strlen);
 #endif
-- 
2.39.5

