we're at it, only fill as much of the buffer as we plan to use.
Signed-off-by: George Spelvin
Cc: Daniel Axtens
Cc: Herbert Xu
Cc: linuxppc-dev@lists.ozlabs.org
Cc: Benjamin Herrenschmidt
Cc: Paul Mackerras
Cc: Michael Ellerman
---
arch/powerpc/crypto/crc-vpmsum_test.c | 10 +++---
1 file changed
(Cross-posted in case there are generic issues; please trim if
discussion wanders into single-architecture details.)
I was working on some scaling code that can benefit from 64x64->128-bit
multiplies. GCC supports an __int128 type on processors with hardware
support (including z/Arch and MIPS64), but s390 doesn't enable
CONFIG_ARCH_SUPPORTS_INT128 yet!
Here's a draft patch, but obviously it should be tested!
From 6f3cc608c49dba33a38e81232a2fd46e8fa8dbcd Mon Sep 17 00:00:00 2001
From: George Spelvin
Date: Sat, 30 Mar 2019 10:27:14 +
Subject: [PATCH] s390: Enable CONFIG_ARCH_SUPPORTS_INT128 on 64-bit builds
If a platform supports a
2019 at 01:07:07PM +, George Spelvin wrote:
>> I don't have easy access to an Alpha cross-compiler to test, but
>> as it has UMULH, I suspect it would work, too.
>
> https://mirrors.edge.kernel.org/pub/tools/crosstool/
Thanks for the pointer; I have a working cross-compiler.
Here's a revised s390 patch.
From e8ea8c0a5d618385049248649b8c13717b598a42 Mon Sep 17 00:00:00 2001
From: George Spelvin
Date: Sat, 30 Mar 2019 10:27:14 +
Subject: [PATCH v2] s390: Enable CONFIG_ARCH_SUPPORTS_INT128 on 64-bit builds
If a platform supports a 64x64->128-bit multiply
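For context, the change presumably amounts to a one-line Kconfig select; a rough sketch, assuming the pattern other 64-bit architectures use (the exact guard conditions in the real patch may differ):

```
config S390
	...
	select ARCH_SUPPORTS_INT128 if CC_HAS_INT128
```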
Great work; that is indeed a logical follow-on.
Reviewed-by: George Spelvin
If you feel even more ambitious, you could try implementing Rasmus
Villemoes' idea of having generic *compare* functions. (It's on
my to-do list, but I haven't made meaningful progress yet, and I'm
On Mon, 1 Apr 2019 at 12:35:55 +0300, Andy Shevchenko wrote:
> Hmm... If (*swap)() is called recursively it means the change might increase
> stack usage on 64-bit platforms.
>
> Am I missing something?
Under what conceivable circumstance would someone write a recursive
(*swap)() function?
You'r
r, and mask manually.
FUNCTIONAL CHANGE: mips and xtensa were changed from 64-bit
get_random_long() to 56-bit get_random_canary() to match the
others, in accordance with the logic in CANARY_MASK.
(We could do 1 bit better and zero *one* of the two high bytes.)
Signed-off-by: George Spelvin
Cc: Rus
Madhavan Srinivasan wrote:
> +#if (__BYTE_ORDER == __BIG_ENDIAN) && (BITS_PER_LONG != 64)
> + tmp = addr[(((nbits - 1)/BITS_PER_LONG) - (start / BITS_PER_LONG))]
> + ^ invert;
> +#else
> tmp = addr[start / BITS_PER_LONG] ^ invert;