just test long symbols by self-generated symbols as another test case. In
case it's useful to you I've put this in a rebased 20241016-modules-symtab
branch. Feel free to use as you see fit.
By reading this, I discovered that it was initially added to powerpc by
commit 271ca788774a (
On Wed, Oct 16, 2024 at 01:40:55PM +0300, Mike Rapoport wrote:
> On Tue, Oct 15, 2024 at 01:11:54PM -0700, Luis Chamberlain wrote:
> > On Tue, Oct 15, 2024 at 08:54:29AM +0300, Mike Rapoport wrote:
> > > On Mon, Oct 14, 2024 at 09:09:49PM -0700, Luis Chamberlain wrote:
> > > > Mike, please run this
also just test long symbols by self-generated symbols as another test case. In
case it's useful to you I've put this in a rebased 20241016-modules-symtab
branch. Feel free to use as you see fit.
I forget what we concluded on Helge Deller's alignment patches; I think
there was a
On Wed, 16 Oct 2024 at 15:13, Kirill A. Shutemov wrote:
>
> It is worse than that. If we get LAM_SUP enabled (there's KASAN patchset
> in works) this check will allow arbitrary kernel addresses.
Ugh. I haven't seen the LAM_SUP patches.
But yeah, I assume any LAM_SUP model would basically then ma
On Wed, 16 Oct 2024 at 15:03, Andrew Cooper wrote:
>
> That doesn't have the same semantics, does it?
Correct. It just basically makes all positive addresses be force-canonicalized.
> If AMD think it's appropriate, then what you probably want is the real
> branch as per before (to maintain archi
On Wed, Oct 16, 2024 at 11:02:56PM +0100, Andrew Cooper wrote:
> On 16/10/2024 5:10 pm, Linus Torvalds wrote:
> > --- a/arch/x86/lib/getuser.S
> > +++ b/arch/x86/lib/getuser.S
> > @@ -37,11 +37,14 @@
> >
> > #define ASM_BARRIER_NOSPEC ALTERNATIVE "", "lfence", X86_FEATURE_LFENCE_RDTSC
On 16/10/2024 5:10 pm, Linus Torvalds wrote:
> --- a/arch/x86/lib/getuser.S
> +++ b/arch/x86/lib/getuser.S
> @@ -37,11 +37,14 @@
>
> #define ASM_BARRIER_NOSPEC ALTERNATIVE "", "lfence", X86_FEATURE_LFENCE_RDTSC
>
> +#define X86_CANONICAL_MASK ALTERNATIVE \
> + "movq $0x80007
On Wed, Oct 16, 2024 at 03:24:18PM +0300, Mike Rapoport wrote:
> From: "Mike Rapoport (Microsoft)"
>
> vmalloc allocations with VM_ALLOW_HUGE_VMAP that do not explicitly
> specify node ID will use huge pages only if size_per_node is larger than
> a huge page.
> Still the actual allocated memory i
On Wed, Oct 16, 2024 at 03:24:17PM +0300, Mike Rapoport wrote:
> From: "Mike Rapoport (Microsoft)"
>
> There are a couple of declarations that depend on CONFIG_MMU in
> include/linux/vmalloc.h spread all over the file.
>
> Group them all together to improve code readability.
>
> No functional c
On Wed, 16 Oct 2024 15:24:22 +0300
Mike Rapoport wrote:
> diff --git a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftrace.c
> index 8da0e66ca22d..b498897b213c 100644
> --- a/arch/x86/kernel/ftrace.c
> +++ b/arch/x86/kernel/ftrace.c
> @@ -118,10 +118,13 @@ ftrace_modify_code_direct(unsigned long ip
On Wed, Oct 16, 2024 at 11:23 AM Athira Rajeev
wrote:
>
>
>
> > On 16 Oct 2024, at 8:36 PM, Ian Rogers wrote:
> >
> > On Wed, Oct 16, 2024 at 5:30 AM Athira Rajeev
> > wrote:
> >>
> >>
> >>
> >>> On 14 Oct 2024, at 10:56 PM, Namhyung Kim wrote:
> >>>
> >>> Hello Athira,
> >>>
> >>> On Sun, Oct
> On 16 Oct 2024, at 8:36 PM, Ian Rogers wrote:
>
> On Wed, Oct 16, 2024 at 5:30 AM Athira Rajeev
> wrote:
>>
>>
>>
>>> On 14 Oct 2024, at 10:56 PM, Namhyung Kim wrote:
>>>
>>> Hello Athira,
>>>
>>> On Sun, Oct 13, 2024 at 11:07:42PM +0530, Athira Rajeev wrote:
perf fails to compil
> On 16 Oct 2024, at 11:04 PM, Namhyung Kim wrote:
>
> Hello Athira,
>
> On Thu, Oct 10, 2024 at 08:21:06PM +0530, Athira Rajeev wrote:
> >> perf list picks the events supported for a specific platform
>> from pmu-events/arch/powerpc/. Example power10 events
>> are in pmu-events/arch/powerpc/powe
Hello Athira,
On Thu, Oct 10, 2024 at 08:21:06PM +0530, Athira Rajeev wrote:
> perf list picks the events supported for a specific platform
> from pmu-events/arch/powerpc/. Example power10 events
> are in pmu-events/arch/powerpc/power10, power9 events are part
> of pmu-events/arch/powerpc/power9. Th
On Mon, 14 Oct 2024 at 09:55, Linus Torvalds
wrote:
>
> On Mon, 14 Oct 2024 at 05:30, Kirill A. Shutemov wrote:
> >
> > Given that LAM enforces bit 47/56 to be equal to bit 63 I think we can do
> > this unconditionally instead of masking:
> >
> > diff --git a/arch/x86/lib/getuser.S b/arch/x86/lib
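A hedged sketch of the "do this unconditionally" idea from the quote above
(illustration only, not the quoted getuser.S diff; the helper name is made
up): since LAM guarantees bit 47/56 equals bit 63, sign-extending bit 63
across the whole value turns anything in the kernel half into all-ones, which
never dereferences kernel memory, while genuine user pointers pass through
unchanged.

#include <stdint.h>

static inline uint64_t force_user_or_all_ones(uint64_t addr)
{
	/* An arithmetic shift replicates bit 63 into every bit position. */
	uint64_t sign = (uint64_t)((int64_t)addr >> 63);

	/* Bit 63 clear: addr is returned unchanged; bit 63 set: ~0ULL. */
	return addr | sign;
}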
On Wed, Oct 16, 2024 at 5:30 AM Athira Rajeev
wrote:
>
>
>
> > On 14 Oct 2024, at 10:56 PM, Namhyung Kim wrote:
> >
> > Hello Athira,
> >
> > On Sun, Oct 13, 2024 at 11:07:42PM +0530, Athira Rajeev wrote:
> >> perf fails to compile on systems with GCC version 11
> >> as below:
> >>
> >> In file in
+ Albert Ou, Alexander Gordeev, Brian Cain, Guo Ren, Heiko Carstens, Michael
Ellerman, Michal Simek, Palmer Dabbelt, Paul Walmsley, Vasily Gorbik, Vineet
Gupta.
This was a rather tricky series to get the recipients correct for, and my script
did not realize that "supporter" was a pseudonym for "ma
> On 14 Oct 2024, at 11:13 PM, Namhyung Kim wrote:
>
> On Sun, 13 Oct 2024 22:37:32 +0530, Athira Rajeev wrote:
>
>> The testcase for tool_pmu failed on powerpc as below:
>>
>> ./perf test -v "Parsing without PMU name"
>> 8: Tool PMU:
> On 14 Oct 2024, at 10:56 PM, Namhyung Kim wrote:
>
> Hello Athira,
>
> On Sun, Oct 13, 2024 at 11:07:42PM +0530, Athira Rajeev wrote:
>> perf fails to compile on systems with GCC version 11
>> as below:
>>
>> In file included from /usr/include/string.h:519,
>> from
>> /home
From: "Mike Rapoport (Microsoft)"
Enable execmem's cache of PMD_SIZE'ed pages mapped as ROX for module
text allocations on 64 bit.
Signed-off-by: Mike Rapoport (Microsoft)
---
arch/x86/Kconfig | 1 +
arch/x86/mm/init.c | 37 -
2 files changed, 37 insertio
From: "Mike Rapoport (Microsoft)"
Using large pages to map text areas reduces iTLB pressure and improves
performance.
Extend execmem_alloc() with an ability to use huge pages with ROX
permissions as a cache for smaller allocations.
To populate the cache, a writable large page is allocated from
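A hedged, heavily simplified model of the caching idea described above (the
type, sizes and names are made up; the real execmem code tracks free and
unused ranges far more carefully): small text allocations are carved out of
one large page that is already mapped ROX, and the caller falls back to a
regular allocation when the page is exhausted.

#include <stddef.h>
#include <stdint.h>

#define LARGE_PAGE_SIZE (2UL * 1024 * 1024)	/* assumed PMD-sized page */

struct rox_cache {
	void *base;	/* large page already mapped read-only + execute */
	size_t used;	/* bytes handed out so far */
};

/* Hand out an aligned chunk of the cached large page, or NULL if full. */
static void *rox_cache_alloc(struct rox_cache *cache, size_t size, size_t align)
{
	size_t off = (cache->used + align - 1) & ~(align - 1);

	if (off + size > LARGE_PAGE_SIZE)
		return NULL;	/* caller falls back to a regular allocation */

	cache->used = off + size;
	return (uint8_t *)cache->base + off;
}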
From: "Mike Rapoport (Microsoft)"
When module text memory is allocated with ROX permissions, the memory
at the actual address where the module will live will contain
invalid instructions, and there will be a writable copy that contains the
actual module code.
Update relocations and alternati
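A hedged sketch of the mechanism described above, with made-up names: while
the final module text mapping is ROX, patching (relocations, alternatives)
goes through a writable alias that sits at a known offset from the final
address, so the ROX mapping itself is never written.

#include <stddef.h>
#include <stdint.h>

/* Hypothetical: translate a final (ROX) text address to its writable alias. */
static inline void *writable_alias(void *final_addr, ptrdiff_t wr_offset)
{
	return (uint8_t *)final_addr + wr_offset;
}

/* Example: store a 32-bit relocation value through the writable copy. */
static void write_rel32(void *final_addr, ptrdiff_t wr_offset, int32_t value)
{
	int32_t *wr = writable_alias(final_addr, wr_offset);

	*wr = value;	/* the ROX mapping at final_addr stays untouched */
}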
From: "Mike Rapoport (Microsoft)"
Add an API that will allow updates of the direct/linear map for a set of
physically contiguous pages.
It will be used in the following patches.
Signed-off-by: Mike Rapoport (Microsoft)
Reviewed-by: Christoph Hellwig
---
arch/arm64/include/asm/set_memory.h
From: "Mike Rapoport (Microsoft)"
In order to support ROX allocations for module text, it is necessary to
handle modifications to the code, such as relocations and alternatives
patching, without write access to that memory.
One option is to use text patching, but this would make module loading
e
From: "Mike Rapoport (Microsoft)"
Several architectures support text patching, but they name the header
files that declare patching functions differently.
Make all such headers consistently named text-patching.h and add an empty
header in asm-generic for architectures that do not support text pa
From: "Mike Rapoport (Microsoft)"
vmalloc allocations with VM_ALLOW_HUGE_VMAP that do not explicitly
specify node ID will use huge pages only if size_per_node is larger than
a huge page.
Still, the actual allocated memory is not distributed between nodes and
there is no advantage in such an approach.
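A hedged model of the heuristic being changed (the constant and function names
are illustrative): under the old check a VM_ALLOW_HUGE_VMAP allocation with no
explicit node only got huge pages once size divided by the number of nodes
reached a huge page, even though the memory is not actually split between
nodes, so checking the total size is the more meaningful test.

#include <stdbool.h>
#include <stddef.h>

#define HUGE_PAGE_SIZE (2UL * 1024 * 1024)	/* assumed 2M huge page */

/* Old heuristic: the per-node share must reach a huge page. */
static bool old_would_use_huge(size_t size, unsigned int nr_nodes)
{
	return size / nr_nodes >= HUGE_PAGE_SIZE;
}

/* Described change: the total allocation size is what matters. */
static bool new_would_use_huge(size_t size)
{
	return size >= HUGE_PAGE_SIZE;
}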
From: "Mike Rapoport (Microsoft)"
There are a couple of declarations that depend on CONFIG_MMU in
include/linux/vmalloc.h spread all over the file.
Group them all together to improve code readability.
No functional changes.
Signed-off-by: Mike Rapoport (Microsoft)
Reviewed-by: Christoph Hellw
From: "Mike Rapoport (Microsoft)"
Hi,
This is an updated version of execmem ROX caches.
Andrew, Luis, there is a conflict with Suren's "page allocation tag
compression" patches:
https://lore.kernel.org/all/20241014203646.1952505-1-sur...@google.com
Probably taking this via mmotm would be more
On Sun, Oct 13, 2024 at 10:17:00PM +0200, Julia Lawall wrote:
> Since SLOB was removed and since
> commit 6c6c47b063b5 ("mm, slab: call kvfree_rcu_barrier() from
> kmem_cache_destroy()"),
> it is not necessary to use call_rcu when the callback only performs
> kmem_cache_free. Use kfree_rcu() direc
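A hedged before/after sketch of the conversion described above, using a
made-up struct foo and cache: once the RCU callback does nothing but return
the object to its kmem_cache, both the callback and the call_rcu() invocation
can be replaced by kfree_rcu(), which handles slab-allocated objects as well.

#include <linux/slab.h>
#include <linux/rcupdate.h>

struct foo {
	int data;
	struct rcu_head rcu;
};

static struct kmem_cache *foo_cache;	/* assumed to exist */

/* Before: a callback whose only job is kmem_cache_free(). */
static void foo_free_rcu(struct rcu_head *head)
{
	struct foo *f = container_of(head, struct foo, rcu);

	kmem_cache_free(foo_cache, f);
}

static void foo_release_old(struct foo *f)
{
	call_rcu(&f->rcu, foo_free_rcu);
}

/* After: no callback needed, kfree_rcu() frees slab objects too. */
static void foo_release_new(struct foo *f)
{
	kfree_rcu(f, rcu);
}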
On Tue, Oct 15, 2024 at 01:11:54PM -0700, Luis Chamberlain wrote:
> On Tue, Oct 15, 2024 at 08:54:29AM +0300, Mike Rapoport wrote:
> > On Mon, Oct 14, 2024 at 09:09:49PM -0700, Luis Chamberlain wrote:
> > > Mike, please run this with kmemleak enabled and running, and also try to
> > > get to