From: Eric Biggers
Since kernel-mode NEON sections are now preemptible on arm64, there is
no longer any need to limit the length of them.
Signed-off-by: Eric Biggers
Reviewed-by: Ard Biesheuvel
Signed-off-by: Herbert Xu
---
arch/arm64/crypto/sha256-glue.c | 19 ++-
1 file changed
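Concretely, the removed limit amounts to a chunking loop around the
NEON section. A rough before/after sketch, with sha256_block_neon()
and the MAX_CHUNK cap assumed for illustration rather than taken from
the truncated diff above:

	/* Before: bound the length of each kernel-mode NEON section. */
	while (blocks) {
		unsigned int chunk = min(blocks, MAX_CHUNK);	/* hypothetical cap */

		kernel_neon_begin();
		sha256_block_neon(state, data, chunk);
		kernel_neon_end();
		data += chunk * SHA256_BLOCK_SIZE;
		blocks -= chunk;
	}

	/* After: one NEON section covers the whole input, since the
	 * section can now be preempted. */
	kernel_neon_begin();
	sha256_block_neon(state, data, blocks);
	kernel_neon_end();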
From: Eric Biggers
Follow best practices by changing the length parameters to size_t and
explicitly specifying the length of the output digest arrays.
Signed-off-by: Eric Biggers
Signed-off-by: Herbert Xu
---
include/crypto/internal/sha2.h | 2 +-
include/crypto/sha2.h | 8
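The hunk is truncated above, but the prototype change it describes has
this shape (a sketch; SHA256_DIGEST_SIZE is the existing macro from
<crypto/sha2.h>):

	/* Before: unsigned int length, unbounded output pointer. */
	void sha256(const u8 *data, unsigned int len, u8 *out);

	/* After: size_t length, explicitly sized digest array. */
	void sha256(const u8 *data, size_t len, u8 out[SHA256_DIGEST_SIZE]);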
From: Eric Biggers
Instead of providing crypto_shash algorithms for the arch-optimized
SHA-256 code, instead implement the SHA-256 library. This is much
simpler, it makes the SHA-256 library functions be arch-optimized, and
it fixes the longstanding issue where the arch-optimized SHA-256 was
disabled by default.
From: Eric Biggers
sha256_base.h is no longer used, so remove it.
Signed-off-by: Eric Biggers
Signed-off-by: Herbert Xu
---
include/crypto/sha256_base.h | 151 ---
1 file changed, 151 deletions(-)
delete mode 100644 include/crypto/sha256_base.h
diff --git a/i
From: Eric Biggers
Since arch/sparc/crypto/opcodes.h is now needed outside the
arch/sparc/crypto/ directory, move it into arch/sparc/include/asm/ so
that it can be included as <asm/opcodes.h>.
Signed-off-by: Eric Biggers
Signed-off-by: Herbert Xu
---
arch/sparc/crypto/aes_asm.S | 3 +--
arc
From: Eric Biggers
As has been done for various other algorithms, rework the design of the
SHA-256 library to support arch-optimized implementations, and make
crypto/sha256.c expose both generic and arch-optimized shash algorithms
that wrap the library functions.
This allows users of the SHA-256
Changes in v2:
- Rebase on top of lib partial block helper series.
- Restore the block-only shash implementation of sha256.
- Move the SIMD hardirq test out of the block functions so that
it is only done for the lib/crypto interface.
- Split the lib/crypto sha256 module to break cycle in allmod builds.
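The shash wrappers over the library described above are thin; the
update path (the same hunk is quoted in full later in this thread) is
just:

	static int crypto_sha256_update_arch(struct shash_desc *desc,
					     const u8 *data, unsigned int len)
	{
		sha256_update(shash_desc_ctx(desc), data, len);
		return 0;
	}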
On Fri, Apr 25, 2025 at 11:50:37PM -0700, Eric Biggers wrote:
>
> +void sha256_blocks_arch(u32 state[SHA256_STATE_WORDS],
> + const u8 *data, size_t nblocks)
> +{
> + if (static_branch_likely(&have_sha256_x86) && crypto_simd_usable()) {
> + kernel_fpu_begin();
>
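The quoted hunk is cut off; pieced together from the fragments visible
elsewhere in this thread, the function reads approximately as below
(the static_call target and the generic fallback name are assumptions,
not taken from the truncated quote):

	void sha256_blocks_arch(u32 state[SHA256_STATE_WORDS],
				const u8 *data, size_t nblocks)
	{
		if (static_branch_likely(&have_sha256_x86) && crypto_simd_usable()) {
			kernel_fpu_begin();
			/* assumed static_call target name */
			static_call(sha256_blocks_x86)(state, data, nblocks);
			kernel_fpu_end();
		} else {
			/* assumed generic fallback name */
			sha256_blocks_generic(state, data, nblocks);
		}
	}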
On Sun, Apr 27, 2025 at 09:52:42AM +0800, Herbert Xu wrote:
> On Sat, Apr 26, 2025 at 06:50:41PM -0700, Eric Biggers wrote:
> >
> > But this one does have a lib/crypto/ interface now. There's no reason not
> > to
> > use it here.
>
> I need to maintain a consistent export format between shash and ahash
On Sat, Apr 26, 2025 at 07:05:50PM -0700, Eric Biggers wrote:
>
> Why? And how is that relevant here, when the export format should just be
Because I want to be able to fall back from an ahash to shash in
the middle of a hash, i.e., I need to be able to export from the
ahash, import it into an shash
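The fallback being described, in miniature (hypothetical glue code;
areq and desc stand for the ahash request and shash descriptor, and
the real requirement is only that both transforms share one export
format):

	u8 state[HASH_MAX_STATESIZE];
	int err;

	err = crypto_ahash_export(areq, state);	/* save the partial ahash state */
	if (err)
		return err;
	err = crypto_shash_import(desc, state);	/* resume it on the shash */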
On Sun, Apr 27, 2025 at 09:17:28AM +0800, Herbert Xu wrote:
> On Sat, Apr 26, 2025 at 06:12:28PM -0700, Eric Biggers wrote:
> >
> > No, that would be silly. I'm not doing that. The full update including the
> > partial block handling is already needed in the library. There is no need to
> > implement it again at the shash level.
On Sat, Apr 26, 2025 at 06:50:41PM -0700, Eric Biggers wrote:
>
> But this one does have a lib/crypto/ interface now. There's no reason not to
> use it here.
I need to maintain a consistent export format between shash and
ahash, and the easiest way to do that is to use the shash partial
block handling.
On Sat, Apr 26, 2025 at 06:12:28PM -0700, Eric Biggers wrote:
>
> No, that would be silly. I'm not doing that. The full update including the
> partial block handling is already needed in the library. There is no need to
> implement it again at the shash level.
shash implements a lot more algorithms
On Sun, Apr 27, 2025 at 09:06:51AM +0800, Herbert Xu wrote:
> Eric Biggers wrote:
> >
> > +static int crypto_sha256_update_arch(struct shash_desc *desc, const u8 *data,
> > +				     unsigned int len)
> > +{
> > + sha256_update(shash_desc_ctx(desc), data, len);
Eric Biggers wrote:
>
> +static int crypto_sha256_update_arch(struct shash_desc *desc, const u8 *data,
> +				     unsigned int len)
> +{
> + sha256_update(shash_desc_ctx(desc), data, len);
> + return 0;
> +}
Please use the block functions directly in the shash
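What using the block functions directly would look like, sketched
under the assumption that the shash core buffers the partial block
(per the lib partial block helper series mentioned in the v2 notes);
the return convention here is an assumption:

	static int crypto_sha256_update_arch(struct shash_desc *desc,
					     const u8 *data, unsigned int len)
	{
		struct sha256_state *sctx = shash_desc_ctx(desc);

		/* Process only whole blocks; the core keeps the remainder. */
		sha256_blocks_arch(sctx->state, data, len / SHA256_BLOCK_SIZE);
		return len % SHA256_BLOCK_SIZE;	/* assumed: leftover byte count */
	}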
On Sun, Apr 27, 2025 at 08:18:56AM +0800, Herbert Xu wrote:
> On Sat, Apr 26, 2025 at 11:03:26AM -0700, Eric Biggers wrote:
> >
> > The SHA-256 library functions currently work in any context, and this patch
> > series preserves that behavior. Changing that would be a separate change.
>
> I've already removed the SIMD fallback path and your patch is
> adding it back.
On Sat, Apr 26, 2025 at 11:03:26AM -0700, Eric Biggers wrote:
>
> The SHA-256 library functions currently work in any context, and this patch
> series preserves that behavior. Changing that would be a separate change.
I've already removed the SIMD fallback path and your patch is
adding it back.
On Sat, Apr 26, 2025 at 06:50:43PM +0800, Herbert Xu wrote:
> Eric Biggers wrote:
> >
> > +void sha256_blocks_arch(u32 state[SHA256_STATE_WORDS],
> > + const u8 *data, size_t nblocks)
> > +{
> > + if (static_branch_likely(&have_sha256_x86) && crypto_simd_usable())
> >
On Fri, 25 Apr 2025 at 23:51, Eric Biggers wrote:
>
> Following the example of several other algorithms (e.g. CRC32, ChaCha,
> Poly1305, BLAKE2s), this series refactors the kernel's existing
> architecture-optimized SHA-256 code to be available via the library API,
> instead of just via the crypto
kernel test robot writes:
> All errors (new ones prefixed by >>):
>
> arch/powerpc/net/bpf_jit_comp64.c: In function 'bpf_jit_build_body':
> >> arch/powerpc/net/bpf_jit_comp64.c:814:4: error: a label can only be part of
> >> a statement and a declaration is not a statement
>    814 |        bool
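The robot's error, reduced to a minimal case: in C17 and earlier, a
label must be followed by a statement, and a declaration is not a
statement. Illustrative code only, not the bpf_jit_comp64.c source
(bool needs <stdbool.h> outside the kernel):

	static int demo(int op)
	{
		switch (op) {
		case 1:
			bool wide = true;	/* error: declaration right after a label */
			return wide;
		default:
			return 0;
		}
	}

	/* Conventional fix: open a block so the label is followed by a
	 * statement. */
	static int demo_fixed(int op)
	{
		switch (op) {
		case 1: {
			bool wide = true;
			return wide;
		}
		default:
			return 0;
		}
	}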
Eric Biggers wrote:
>
> +void sha256_blocks_arch(u32 state[SHA256_STATE_WORDS],
> + const u8 *data, size_t nblocks)
> +{
> + if (static_branch_likely(&have_sha256_x86) && crypto_simd_usable()) {
> + kernel_fpu_begin();
> + static_call(sha256_
On Sat, 26 Apr 2025 at 08:51, Eric Biggers wrote:
>
> From: Eric Biggers
>
> Instead of providing crypto_shash algorithms for the arch-optimized
> SHA-256 code, instead implement the SHA-256 library. This is much
> simpler, it makes the SHA-256 library functions be arch-optimized, and
> it fixes
On Sat, 26 Apr 2025 at 08:51, Eric Biggers wrote:
>
> From: Eric Biggers
>
> Since kernel-mode NEON sections are now preemptible on arm64, there is
> no longer any need to limit the length of them.
>
> Signed-off-by: Eric Biggers
> ---
> arch/arm64/crypto/sha256-glue.c | 19 ++-