On Fri, 13 Jun 2025 17:37:09 +0200,
Christophe Leroy wrote:
>
> With user access protection (Called SMAP on x86 or KUAP on powerpc)
> each and every call to get_user() or put_user() performs heavy
> operations to unlock and lock kernel access to userspace.
>
> SNDRV_PCM_IOCTL_SYNC_PTR is a hot path which is called really often
> and needs to run as fast as possible.
On 6/13/25 10:02 PM, Justin Forbes wrote:
> On Tue, May 20, 2025 at 08:36:18PM +1000, Michael Ellerman wrote:
>> Madhavan Srinivasan writes:
>>> Since termio interface is now obsolete, include/uapi/asm/ioctls.h
>>> has some constant macros referring to "struct termio", this caused
>>> build failure at userspace.
On Tue, May 20, 2025 at 08:36:18PM +1000, Michael Ellerman wrote:
> Madhavan Srinivasan writes:
> > Since termio interface is now obsolete, include/uapi/asm/ioctls.h
> > has some constant macros referring to "struct termio", this caused
> > build failure at userspace.
> >
> > In file included from
In an effort to optimise the SNDRV_PCM_IOCTL_SYNC_PTR ioctl, which
is a hot path, let's first refactor the copy from and to user
with macros.
This is done with macros rather than static inline functions because
types differ between the different versions of the snd_pcm_sync_ptr()
like functions.
First step
On Sat, Jun 07, 2025 at 01:04:51PM -0700, Eric Biggers wrote:
> From: Eric Biggers
>
> Move the s390-optimized CRC code from arch/s390/lib/crc* into its new
> location in lib/crc/s390/, and wire it up in the new way. This new way
> of organizing the CRC code eliminates the need to artificially s
On Fri, Jun 13, 2025 at 06:01:41PM +0200, Alexander Gordeev wrote:
> On Sat, Jun 07, 2025 at 01:04:51PM -0700, Eric Biggers wrote:
> > From: Eric Biggers
> >
> > Move the s390-optimized CRC code from arch/s390/lib/crc* into its new
> > location in lib/crc/s390/, and wire it up in the new way. Th
With user access protection (Called SMAP on x86 or KUAP on powerpc)
each and every call to get_user() or put_user() performs heavy
operations to unlock and lock kernel access to userspace.
SNDRV_PCM_IOCTL_SYNC_PTR is a hot path which is called really often
and needs to run as fast as possible.
To
On 13/06/2025 at 14:37, Takashi Iwai wrote:
On Fri, 13 Jun 2025 13:03:04 +0200,
Christophe Leroy wrote:
On 13/06/2025 at 11:29, Takashi Iwai wrote:
On Thu, 12 Jun 2025 12:51:05 +0200,
Christophe Leroy wrote:
Now that snd_pcm_sync_ptr_get_user() and snd_pcm_sync_ptr_put_user()
are converted to user_access_begin/user_access_end(),
On Fri, 13 Jun 2025 14:46:46 +0200,
Christophe Leroy wrote:
>
>
>
> On 13/06/2025 at 14:37, Takashi Iwai wrote:
> > On Fri, 13 Jun 2025 13:03:04 +0200,
> > Christophe Leroy wrote:
> >>
> >>
> >>
> >> On 13/06/2025 at 11:29, Takashi Iwai wrote:
> >>> On Thu, 12 Jun 2025 12:51:05 +0200,
> >
On Fri, Jun 13, 2025 at 1:59 AM Alexis Lothoré wrote:
>
> On Fri Jun 13, 2025 at 10:32 AM CEST, Peter Zijlstra wrote:
> > On Fri, Jun 13, 2025 at 10:26:37AM +0200, Alexis Lothoré wrote:
> >> Hi Peter,
> >>
> >> On Fri Jun 13, 2025 at 10:11 AM CEST, Peter Zijlstra wrote:
> >> > On Fri, Jun 13, 2025
This series converts all variants of SNDRV_PCM_IOCTL_SYNC_PTR to
user_access_begin/user_access_end() in order to reduce the CPU load
measured in function snd_pcm_ioctl.
With the current implementation, "perf top" reports a high load in
snd_pcm_ioctl(). Most calls to that function are SNDRV_PCM_IOCTL_SYNC_PTR
Now that snd_pcm_sync_ptr_get_user() and snd_pcm_sync_ptr_put_user()
are converted to user_access_begin/user_access_end(),
snd_pcm_sync_ptr_get_user() is more efficient than a raw get_user()
followed by a copy_from_user(). And because copy_{to/from}_user() are
generic functions focussed on transfer
On Thu, Jun 12, 2025 at 06:56:53AM +0200, Paolo Bonzini wrote:
> On 6/11/25 02:10, Sean Christopherson wrote:
> > Kill off include/kvm (through file moves/renames), and standardize the set
> > of
> > KVM includes across all architectures.
> >
> > This conflicts with Colton's partitioned PMU series[
To match struct __snd_pcm_mmap_status and enable reuse of
snd_pcm_sync_ptr_get_user() and snd_pcm_sync_ptr_put_user() by
snd_pcm_sync_ptr(), replace the tstamp_sec and tstamp_nsec fields
with a struct __snd_timespec in struct snd_pcm_mmap_status32.
Do the same with audio_tstamp_sec and audio_tstamp_nsec.
On 12/06/2025 18:36, Alexander Gordeev wrote:
> Commit a1d416bf9faf ("sparc/mm: disable preemption in lazy mmu mode")
> is not necessary anymore, since the lazy MMU mode is entered with a
> spinlock held and sparc does not support Real-Time. Thus, upon entering
> the lazy mode the preemption is alr
On Sat, May 17, 2025 at 12:52:23AM +0800, Hans Zhang wrote:
> From: Hans Zhang
>
> Change pcie_aer_disable variable to bool and update pci_no_aer()
> to set it to true. Improves code readability and aligns with modern
> kernel practices.
>
> Signed-off-by: Hans Zhang
Applied to pci/misc!
- Ma
On 13/06/2025 at 11:29, Takashi Iwai wrote:
On Thu, 12 Jun 2025 12:51:05 +0200,
Christophe Leroy wrote:
Now that snd_pcm_sync_ptr_get_user() and snd_pcm_sync_ptr_put_user()
are converted to user_access_begin/user_access_end(),
snd_pcm_sync_ptr_get_user() is more efficient than a raw get_user()
When the target function receives more arguments than available
registers, the additional arguments are passed on stack, and so the
generated trampoline needs to read those to prepare the bpf context,
but also to prepare the target function stack when it is in charge of
calling it. This works well
x86 allows using up to 6 registers to pass arguments between function
calls. This value is hardcoded in multiple places, use a define for this
value.
Signed-off-by: Alexis Lothoré (eBPF Foundation)
---
arch/x86/net/bpf_jit_comp.c | 14 ++++++++------
1 file changed, 8 insertions(+), 6 deletions(-)
powerpc allows using up to 8 registers to pass arguments between function
calls. This value is hardcoded in multiple places, use a define for this
value.
Signed-off-by: Alexis Lothoré (eBPF Foundation)
---
arch/powerpc/net/bpf_jit_comp.c | 10 +++++++---
1 file changed, 7 insertions(+), 3 deletions(-)
When the target function receives more arguments than available
registers, the additional arguments are passed on stack, and so the
generated trampoline needs to read those to prepare the bpf context, but
also to prepare the target function stack when it is in charge of
calling it. This works well
When attaching ebpf programs to functions through fentry/fexit, the
generated trampolines cannot reliably determine the exact location of
the arguments on the stack when those are structures: the structures
can be altered with attributes such as packed or aligned(x), but this
information is not encoded
When the target function receives more arguments than available
registers, the additional arguments are passed on stack, and so the
generated trampoline needs to read those to prepare the bpf context, but
also to prepare the target function stack when it is in charge of
calling it. This works well
Hi Peter,
On Fri Jun 13, 2025 at 10:11 AM CEST, Peter Zijlstra wrote:
> On Fri, Jun 13, 2025 at 09:37:11AM +0200, Alexis Lothoré (eBPF Foundation) wrote:
>> When the target function receives more arguments than available
>> registers, the additional arguments are passed on stack, and so the
>>
On Fri Jun 13, 2025 at 10:32 AM CEST, Peter Zijlstra wrote:
> On Fri, Jun 13, 2025 at 10:26:37AM +0200, Alexis Lothoré wrote:
>> Hi Peter,
>>
>> On Fri Jun 13, 2025 at 10:11 AM CEST, Peter Zijlstra wrote:
>> > On Fri, Jun 13, 2025 at 09:37:11AM +0200, Alexis Lothoré (eBPF Foundation) wrote:
When the target function receives more arguments than available
registers, the additional arguments are passed on stack, and so the
generated trampoline needs to read those to prepare the bpf context, but
also to prepare the target function stack when it is in charge of
calling it. This works well
Hello,
this series follows some discussions started in [1] around bpf
trampoline limitations in specific cases. When a trampoline is
generated for a target function involving many arguments, it has to
properly find and save the arguments that have been passed through the
stack. While this is doable wit
On Fri, 13 Jun 2025 13:03:04 +0200,
Christophe Leroy wrote:
>
>
>
> On 13/06/2025 at 11:29, Takashi Iwai wrote:
> > On Thu, 12 Jun 2025 12:51:05 +0200,
> > Christophe Leroy wrote:
> >>
> >> Now that snd_pcm_sync_ptr_get_user() and snd_pcm_sync_ptr_put_user()
> >> are converted to user_access_begin/user_access_end(),
On 03/06/25 2:22 pm, Christophe Leroy wrote:
On 28/05/2025 at 15:48, Aditya Bodkhe wrote:
From: Aditya Bodkhe
commit a1be9ccc57f
On Fri, 13 Jun 2025 07:24:46 +0200,
Christophe Leroy wrote:
>
>
>
> On 12/06/2025 at 13:02, Takashi Iwai wrote:
> > On Thu, 12 Jun 2025 12:39:39 +0200,
> > Christophe Leroy wrote:
> >>
> >> With user access protection (Called SMAP on x86 or KUAP on powerpc)
> >> each and every call to get_user() or put_user() performs heavy
On 12/06/2025 18:36, Alexander Gordeev wrote:
> As a follow-up to commit 691ee97e1a9d ("mm: fix lazy mmu docs and
> usage") take a step forward and protect with a lock not only user,
> but also kernel mappings before entering the lazy MMU mode. With
> that the semantics of arch_enter|leave_lazy_mmu
On Fri, Jun 13, 2025 at 09:37:11AM +0200, Alexis Lothoré (eBPF Foundation) wrote:
> When the target function receives more arguments than available
> registers, the additional arguments are passed on stack, and so the
> generated trampoline needs to read those to prepare the bpf context,
> but also
On Fri, Jun 13, 2025 at 10:26:37AM +0200, Alexis Lothoré wrote:
> Hi Peter,
>
> On Fri Jun 13, 2025 at 10:11 AM CEST, Peter Zijlstra wrote:
> > On Fri, Jun 13, 2025 at 09:37:11AM +0200, Alexis Lothoré (eBPF Foundation) wrote:
> >> When the target function receives more arguments than available
Greetings!
IBM CI has reported a kernel BUG at mm/slub.c:546 while running a
fuzzer test on a linux-next-20250613 kernel on an IBM Power server.
Traces:
[ 4017.318542] [ cut here ]
[ 4017.318577] kernel BUG at mm/slub.c:546!
[ 4017.318586] Oops: Exception in kernel
On Thu, 12 Jun 2025 12:51:05 +0200,
Christophe Leroy wrote:
>
> Now that snd_pcm_sync_ptr_get_user() and snd_pcm_sync_ptr_put_user()
> are converted to user_access_begin/user_access_end(),
> snd_pcm_sync_ptr_get_user() is more efficient than a raw get_user()
> followed by a copy_from_user(). And b
On Thu, Jun 12, 2025 at 07:36:09PM +0200, Alexander Gordeev wrote:
> As a follow-up to commit 691ee97e1a9d ("mm: fix lazy mmu docs and
> usage") take a step forward and protect with a lock not only user,
> but also kernel mappings before entering the lazy MMU mode. With
> that the semantics of arch