On Tue, Oct 15, 2024 at 04:25:51PM +0530, Vishal Chourasia wrote:
> Rename devdata_mutex to devdata_spinlock to accurately reflect its
> implementation as a spinlock.
>
> [1] v1 https://lore.kernel.org/all/zwyqd-w5hehrn...@linux.ibm.com
>
> Signed-off-by: Vishal Chourasia
> ---
> drivers/crypto
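A minimal sketch of what the rename amounts to, assuming the usual kernel spinlock API; the surrounding function and identifiers are illustrative, not taken from the patch:

#include <linux/spinlock.h>

static DEFINE_SPINLOCK(devdata_spinlock);	/* was: devdata_mutex */

static void devdata_update_example(void)
{
	unsigned long flags;

	spin_lock_irqsave(&devdata_spinlock, flags);
	/* ... read or update the shared device data ... */
	spin_unlock_irqrestore(&devdata_spinlock, flags);
}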
On Fri, Oct 25, 2024 at 10:02:39PM +0000, Eric Biggers wrote:
> On Fri, Oct 25, 2024 at 10:47:15PM +0200, Ard Biesheuvel wrote:
> > On Fri, 25 Oct 2024 at 21:15, Eric Biggers wrote:
> > >
> > > From: Eric Biggers
> > >
> > > Instead of registering the crc32-$arch and crc32c-$arch algorithms if
>
On Fri, 25 Oct 2024 at 21:20, Eric Biggers wrote:
>
> From: Eric Biggers
>
> Now that the crc32c() library function directly takes advantage of
> architecture-specific optimizations, it is unnecessary to go through the
> crypto API. Just use crc32c(). This is much simpler, and it improves
> performance due to eliminating the crypto API overhead.
On Fri, Oct 25, 2024 at 11:37:45PM +0200, Ard Biesheuvel wrote:
> On Fri, 25 Oct 2024 at 23:32, Eric Biggers wrote:
> >
> > On Fri, Oct 25, 2024 at 10:32:14PM +0200, Ard Biesheuvel wrote:
> > > On Fri, 25 Oct 2024 at 21:15, Eric Biggers wrote:
> > > >
> > > > From: Eric Biggers
> > > >
> > > > M
On Fri, 25 Oct 2024 at 21:15, Eric Biggers wrote:
>
> From: Eric Biggers
>
> Make the CRC32 library export some flags that indicate which CRC32
> functions are actually executing optimized code at runtime. Set these
> correctly from the architectures that implement the CRC32 functions.
>
> This
From: Eric Biggers
Now that the crc32c() library function directly takes advantage of
architecture-specific optimizations, it is unnecessary to go through the
crypto API. Just use crc32c(). This is much simpler, and it improves
performance due to eliminating the crypto API overhead.
Signed-off-by: Eric Biggers
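For context, a hedged sketch of what callers gain (the caller function is hypothetical; seed handling varies by user):

#include <linux/crc32c.h>

/*
 * Before (sketch of the crypto API detour):
 *	tfm = crypto_alloc_shash("crc32c", 0, 0);
 *	desc->tfm = tfm;
 *	crypto_shash_digest(desc, buf, len, (u8 *)&crc);
 *	crypto_free_shash(tfm);
 */

/* After: one direct library call, no allocation and no error paths. */
static u32 checksum_buf(const void *buf, unsigned int len)
{
	return crc32c(~0, buf, len);
}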
On Wed, Oct 23, 2024 at 12:14 PM Usama Arif wrote:
>
> __pa() is only intended to be used for linear map addresses and using
> it for initial_boot_params which is in fixmap for arm64 will give an
> incorrect value. Hence save the physical address when it is known at
> boot time when calling early
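A hedged sketch of the approach (the extra parameter and the variable are inferred from the description above, not quoted from the patch):

/*
 * Record the DT blob's physical address while it is still known, rather
 * than deriving it later with __pa() on a fixmap virtual address.
 */
static phys_addr_t initial_boot_params_pa;

void __init early_init_dt_verify(void *dt_virt, phys_addr_t dt_phys)
{
	/* ... FDT header validation elided ... */
	initial_boot_params = dt_virt;
	initial_boot_params_pa = dt_phys;
}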
On Fri, Oct 25, 2024 at 10:47:15PM +0200, Ard Biesheuvel wrote:
> On Fri, 25 Oct 2024 at 21:15, Eric Biggers wrote:
> >
> > From: Eric Biggers
> >
> > Instead of registering the crc32-$arch and crc32c-$arch algorithms if
> > the arch-specific code was built, only register them when that code was
On Fri, Oct 25, 2024 at 10:32:14PM +0200, Ard Biesheuvel wrote:
> On Fri, 25 Oct 2024 at 21:15, Eric Biggers wrote:
> >
> > From: Eric Biggers
> >
> > Make the CRC32 library export some flags that indicate which CRC32
> > functions are actually executing optimized code at runtime. Set these
> >
On Fri, 25 Oct 2024 at 23:32, Eric Biggers wrote:
>
> On Fri, Oct 25, 2024 at 10:32:14PM +0200, Ard Biesheuvel wrote:
> > On Fri, 25 Oct 2024 at 21:15, Eric Biggers wrote:
> > >
> > > From: Eric Biggers
> > >
> > > Make the CRC32 library export some flags that indicate which CRC32
> > > function
On Fri, 25 Oct 2024 at 21:15, Eric Biggers wrote:
>
> From: Eric Biggers
>
> Instead of registering the crc32-$arch and crc32c-$arch algorithms if
> the arch-specific code was built, only register them when that code was
> built *and* is not falling back to the base implementation at runtime.
>
>
From: Eric Biggers
Now that the lower level __crc32c_le() library function is optimized for
each architecture, make crc32c() just call that instead of taking an
inefficient and error-prone detour through the shash API.
Note: a future cleanup should make crc32c_le() be the actual library
function
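Concretely, this makes crc32c() a thin inline wrapper; a sketch consistent with the description above (exact header contents assumed):

/* include/linux/crc32c.h, roughly: */
static inline u32 crc32c(u32 crc, const void *address, unsigned int length)
{
	return __crc32c_le(crc, address, length);
}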
On Fri, Oct 25, 2024 at 5:57 AM Simon Horman wrote:
>
> On Thu, Oct 24, 2024 at 01:52:57PM -0700, Rosen Penev wrote:
> > The latter is the preferred way to copy ethtool strings.
> >
> > Avoids manually incrementing the pointer. Cleans up the code quite well.
> >
> > Signed-off-by: Rosen Penev
>
>
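For context, a hedged sketch of the conversion (the driver function and stat names are made up): ethtool_puts() copies one string into an ETH_GSTRING_LEN-sized slot and advances the cursor, replacing the manual memcpy-and-increment pattern:

#include <linux/ethtool.h>

static void example_get_strings(u8 *data)
{
	/*
	 * Before:
	 *	memcpy(data, "rx_packets", ETH_GSTRING_LEN);
	 *	data += ETH_GSTRING_LEN;
	 */
	ethtool_puts(&data, "rx_packets");
	ethtool_puts(&data, "tx_packets");
}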
From: Eric Biggers
Move the arm CRC32 assembly code into the lib directory and wire it up
to the library interface. This allows it to be used without going
through the crypto API. It remains usable via the crypto API too via
the shash algorithms that use the library interface. Thus all the
arc
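The per-arch wiring in this series follows a common shape; a hedged sketch (the static key and asm entry-point names are assumed):

#include <linux/crc32.h>
#include <linux/jump_label.h>

static DEFINE_STATIC_KEY_FALSE(have_crc32);	/* set if the CPU has CRC32 insns */

u32 crc32_le_arch(u32 crc, const u8 *p, size_t len)
{
	if (!static_branch_likely(&have_crc32))
		return crc32_le_base(crc, p, len);	/* generic fallback */
	return crc32_le_asm(crc, p, len);		/* assumed asm entry point */
}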
From: Eric Biggers
- Change the len parameter from unsigned int to size_t, so that the
library function which takes a size_t can safely use this code.
- Rename to crc32c_x86_3way() which is much clearer.
- Move the crc parameter to the front, as this is the usual convention.
Reviewed-by: Ard Biesheuvel
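The resulting prototype, per the three bullets above (a sketch; the old entry point is not quoted here, so it is omitted):

#include <linux/linkage.h>
#include <linux/types.h>

/* crc first, size_t length, and a name that says what it does: */
asmlinkage u32 crc32c_x86_3way(u32 crc, const u8 *buffer, size_t len);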
From: Eric Biggers
Now that the crc32c() library function directly takes advantage of
architecture-specific optimizations, it is unnecessary to go through the
crypto API. Just use crc32c(). This is much simpler, and it improves
performance due to eliminating the crypto API overhead.
Reviewed-b
From: Eric Biggers
Move the x86 CRC32 assembly code into the lib directory and wire it up
to the library interface. This allows it to be used without going
through the crypto API. It remains usable via the crypto API too via
the shash algorithms that use the library interface. Thus all the
arc
From: Eric Biggers
Now that the crc32c() library function directly takes advantage of
architecture-specific optimizations, it is unnecessary to go through the
crypto API. Just use crc32c(). This is much simpler, and it improves
performance due to eliminating the crypto API overhead.
Reviewed-b
From: Eric Biggers
Now that the crc32() library function takes advantage of
architecture-specific optimizations, it is unnecessary to go through the
crypto API. Just use crc32(). This is much simpler, and it improves
performance due to eliminating the crypto API overhead.
Reviewed-by: Ard Biesheuvel
From: Eric Biggers
- Change the len parameter from unsigned int to size_t, so that the
library function which takes a size_t can safely use this code.
- Move the crc parameter to the front, as this is the usual convention.
Reviewed-by: Ard Biesheuvel
Signed-off-by: Eric Biggers
---
arch/x8
From: Eric Biggers
Move the sparc CRC32C assembly code into the lib directory and wire it
up to the library interface. This allows it to be used without going
through the crypto API. It remains usable via the crypto API too via
the shash algorithms that use the library interface. Thus all the
From: Eric Biggers
Currently the CRC32 library functions are defined as weak symbols, and
the arm64 and riscv architectures override them.
This method of arch-specific overrides has the limitation that it only
works when both the base and arch code are built in. Also, it makes the
arch-specific
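The mechanism being replaced, sketched:

/* lib/crc32.c (old scheme): the generic implementation is weak. */
u32 __weak crc32_le(u32 crc, const u8 *p, size_t len)
{
	return crc32_le_base(crc, p, len);
}

/*
 * Arch code (old scheme): a strong crc32_le() definition silently
 * replaces the weak one at link time -- which is exactly why this only
 * works when both sides are built in.
 */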
From: Eric Biggers
Move the mips CRC32 assembly code into the lib directory and wire it up
to the library interface. This allows it to be used without going
through the crypto API. It remains usable via the crypto API too via
the shash algorithms that use the library interface. Thus all the
ar
From: Eric Biggers
Move the powerpc CRC32C assembly code into the lib directory and wire it
up to the library interface. This allows it to be used without going
through the crypto API. It remains usable via the crypto API too via
the shash algorithms that use the library interface. Thus all th
From: Eric Biggers
Move the s390 CRC32 assembly code into the lib directory and wire it up
to the library interface. This allows it to be used without going
through the crypto API. It remains usable via the crypto API too via
the shash algorithms that use the library interface. Thus all the
ar
From: Eric Biggers
Move the loongarch CRC32 assembly code into the lib directory and wire
it up to the library interface. This allows it to be used without going
through the crypto API. It remains usable via the crypto API too via
the shash algorithms that use the library interface. Thus all t
From: Eric Biggers
Instead of registering the crc32-$arch and crc32c-$arch algorithms if
the arch-specific code was built, only register them when that code was
built *and* is not falling back to the base implementation at runtime.
This avoids confusing users like btrfs which checks the shash dr
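A hedged sketch of the policy (the flag query comes from the companion patch in this series, shown in the next entry; shash boilerplate is elided):

static int __init crc32c_mod_init(void)
{
	/* Only expose crc32c-$arch when it won't just run the base code. */
	if (!(crc32_optimizations() & CRC32C_OPTIMIZATION))
		return 0;
	return crypto_register_shash(&crc32c_arch_alg);	/* alg def elided */
}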
From: Eric Biggers
Make the CRC32 library export some flags that indicate which CRC32
functions are actually executing optimized code at runtime. Set these
correctly from the architectures that implement the CRC32 functions.
This will be used to determine whether the crc32[c]-$arch shash
algori
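The exported flags presumably look something like this (treat the exact names as an assumption):

/* include/linux/crc32.h, sketch: */
#include <linux/bits.h>

#define CRC32_LE_OPTIMIZATION	BIT(0)	/* crc32_le() is optimized */
#define CRC32_BE_OPTIMIZATION	BIT(1)	/* crc32_be() is optimized */
#define CRC32C_OPTIMIZATION	BIT(2)	/* __crc32c_le() is optimized */

u32 crc32_optimizations(void);	/* which of the above apply at runtime */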
This patchset is also available in git via:
git fetch
https://git.kernel.org/pub/scm/linux/kernel/git/ebiggers/linux.git crc32-lib-v2
CRC32 is a family of common non-cryptographic integrity check algorithms
that are fairly fast with a portable C implementation and become far
faster still wit
From: Eric Biggers
Remove the leading underscores from __crc32c_le_base().
This is in preparation for adding crc32c_le_arch() and eventually
renaming __crc32c_le() to crc32c_le().
Reviewed-by: Ard Biesheuvel
Signed-off-by: Eric Biggers
---
arch/arm64/lib/crc32-glue.c | 2 +-
arch/riscv/lib/c
On Fri, Oct 25, 2024 at 11:29:38AM +1100, Michael Ellerman wrote:
> [To += Mathieu]
>
> "Nysal Jan K.A." writes:
> > From: "Nysal Jan K.A"
> >
> > On architectures where ARCH_HAS_SYNC_CORE_BEFORE_USERMODE
> > is not selected, sync_core_before_usermode() is a no-op.
> > In membarrier_mm_sync_core
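A hedged sketch of the shape such a fix takes (everything beyond the quoted config symbol is assumed):

static void membarrier_mm_sync_core_before_usermode(struct mm_struct *mm)
{
	if (!IS_ENABLED(CONFIG_ARCH_HAS_SYNC_CORE_BEFORE_USERMODE))
		return;		/* no-op arch: skip the work entirely */
	/* ... existing membarrier_state checks and sync ... */
}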
On Tue, Oct 22, 2024 at 2:25 AM Sean Christopherson wrote:
> > Looks good to me, thanks and congratulations!! Should we merge it in
> > kvm/next asap?
>
> That has my vote, though I'm obviously extremely biased :-)
Your wish is my command... Merged.
Paolo
On Fri, 25 Oct 2024 at 01:56, David Laight wrote:
>
> > > Especially if there is always a (PAGE sized) gap between the highest
> > > user address and the lowest kernel address so the 'size' argument
> > > to access_ok() can be ignored on the assumption that the accesses
> > > are (reasonably) linear.
On Thu, Oct 24, 2024 at 01:52:57PM -0700, Rosen Penev wrote:
> The latter is the preferred way to copy ethtool strings.
>
> Avoids manually incrementing the pointer. Cleans up the code quite well.
>
> Signed-off-by: Rosen Penev
...
> diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_ethtoo
> Sorry I realise it's version 7, but although the above looks correct it's
> kind of dense.
>
> I think the below would also work and is (I think) easier to follow, and
> is more obviously similar to the existing code. I'm sure your version is
> faster, but I don't think it's that performance critical.
On Fri, Oct 25, 2024 at 11:37:52AM +0530, Ritesh Harjani (IBM) wrote:
> Gautam Menghani writes:
>
> > Mask off the LPCR_MER bit before running a vCPU to ensure that it is not
> > set if there are no pending interrupts. Running a vCPU with LPCR_MER bit
> > set and no pending interrupts results in
Hi Rosen,
kernel test robot noticed the following build warnings:
[auto build test WARNING on net-next/main]
url:
https://github.com/intel-lab-lkp/linux/commits/Rosen-Penev/net-freescale-use-ethtool-string-helpers/20241025-045447
base: net-next/main
patch link:
https://lore.kernel.org
On Fri, Oct 25, 2024 at 02:56:05PM +1100, Michael Ellerman wrote:
> Hi Gautam,
>
> A few comments below ...
>
> Gautam Menghani writes:
> > Mask off the LPCR_MER bit before running a vCPU to ensure that it is not
> > set if there are no pending interrupts.
>
> I would typically leave this until
On Wed, Oct 23, 2024, at 05:36, Christoph Hellwig wrote:
> page_to_phys is duplicated by all architectures, and for some strange
> reason placed in <asm/io.h> where it doesn't fit at all.
>
> phys_to_page is only provided by a few architectures despite having a lot
> of open coded users.
>
> Provide gene
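The generic versions are presumably thin pfn conversions along these lines (a sketch, not the patch itself):

#include <linux/pfn.h>

#define page_to_phys(page)	PFN_PHYS(page_to_pfn(page))
#define phys_to_page(phys)	pfn_to_page(PHYS_PFN(phys))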
On 2024-10-24 20:29, Michael Ellerman wrote:
[To += Mathieu]
"Nysal Jan K.A." writes:
From: "Nysal Jan K.A"
On architectures where ARCH_HAS_SYNC_CORE_BEFORE_USERMODE
is not selected, sync_core_before_usermode() is a no-op.
In membarrier_mm_sync_core_before_usermode() the compiler does not
el
Matthew Maurer writes:
> Adds a new format for MODVERSIONS which stores each field in a separate
> ELF section. This initially adds support for variable length names, but
> could later be used to add additional fields to MODVERSIONS in a
> backwards compatible way if needed. Any new fields will be
...
> access_ok() itself is so rarely used these days that we could out-line
> it. But the code cost of a function call is likely higher than
> inlining the 8-byte constant and a couple of instructions: not because of
> the call instruction itself, but because of the code generation pain
> around it
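A hedged sketch of the idea in this thread (assuming a guard gap of at least a page above the highest user address, so the size argument can be ignored for reasonably linear accesses):

#define access_ok(ptr, size)	\
	((unsigned long)(ptr) <= TASK_SIZE_MAX)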
Hi Nathan,
On Mon, Oct 21, 2024 at 03:15:19PM -0700, Nathan Chancellor wrote:
> Hi Mike,
>
> On Wed, Oct 16, 2024 at 03:24:22PM +0300, Mike Rapoport wrote:
> > From: "Mike Rapoport (Microsoft)"
> >
> > When module text memory is to be allocated with ROX permissions, the
> > memory at the actual
On Mon, 21 Oct 2024 at 02:29, Eric Biggers wrote:
>
> This patchset is also available in git via:
>
> git fetch
> https://git.kernel.org/pub/scm/linux/kernel/git/ebiggers/linux.git
> crc32-lib-v1
>
> CRC32 is a family of common non-cryptographic integrity check algorithms
> that are fairly f
On 2024/10/23 23:43, Pierre Gondois wrote:
> Hello Yicong,
>
> On 10/15/24 04:18, Yicong Yang wrote:
>> From: Yicong Yang
>>
>> When building the topology from the devicetree, we've already
>> gotten the SMT thread number of each core. Update the largest
>> SMT thread number and enable the SMT cont
Currently xmon cannot look up symbols longer than 64 characters in some
cases, such as the "ls", "lp" and "t" commands.
Fix this by using KSYM_NAME_LEN instead of a fixed 64 characters.
Signed-off-by: Mukesh Kumar Chaurasiya
---
arch/powerpc/xmon/xmon.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
ChangeLog
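A hedged sketch of the change (the helper and buffer names are assumed; the array size is the point):

#include <linux/kallsyms.h>
#include <linux/string.h>

static unsigned long xmon_symbol_addr(const char *name)
{
	char tmp[KSYM_NAME_LEN];	/* was: char tmp[64] */

	strscpy(tmp, name, sizeof(tmp));
	return kallsyms_lookup_name(tmp);
}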
Mask off the LPCR_MER bit before running a vCPU to ensure that it is not
set if there are no pending interrupts. Running a vCPU with LPCR_MER bit
set and no pending interrupts results in the L2 vCPU getting an infinite flood
of spurious interrupts. The 'if check' in kvmhv_run_single_vcpu() sets
the LPC
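A hedged sketch of the fix's shape (the pending-interrupt predicate is a stand-in for whatever check the patch actually uses):

	lpcr &= ~LPCR_MER;			/* never enter with a stale MER */
	if (vcpu_has_pending_irq(vcpu))		/* hypothetical helper */
		lpcr |= LPCR_MER;		/* re-arm only when justified */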
Alistair Popple wrote:
[..]
> >
> > Was there a discussion I missed about why the conversion to typical
> > folios allows the page->share accounting to be dropped?
>
> The problem with keeping it is we now treat DAX pages as "normal"
> pages according to vm_normal_page(). As such we use the normal
On 2024/10/24 16:44, Pierre Gondois wrote:
> Hello Yicong,
>
> On 10/15/24 04:18, Yicong Yang wrote:
>> From: Yicong Yang
>>
>> For ACPI we'll build the topology from PPTT and we cannot directly
>> get the SMT number of each core. Instead, use a temporary xarray
>> to record the heterogeneous in