Hi Brendan,
On Fri, Jan 10, 2025 at 06:40:28PM +, Brendan Jackman wrote:
> Currently a nop config. Keeping as a separate commit for easy review of
> the boring bits. Later commits will use and enable this new config.
>
> This config is only added for non-UML x86_64 as other architectures do
>
From: "Mike Rapoport (Microsoft)"
max_mapnr is essentially the size of the memory map for systems that use
FLATMEM. There is no reason to calculate it in each and every architecture
when it is calculated anyway in alloc_node_mem_map().
Drop setting of max_mapnr from architecture
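As a rough illustration of the direction (a minimal sketch, assuming the computation moves next to the FLATMEM memory map allocation in generic mm code; the exact upstream formulation may differ):

	/* sketch: derive the memory map size once, where the map is
	 * allocated, instead of in every architecture's mem_init() */
	static void __init set_max_mapnr_sketch(struct pglist_data *pgdat)
	{
		/* with FLATMEM the map covers one contiguous range,
		 * indexed from the node's first PFN */
		max_mapnr = pgdat_end_pfn(pgdat) - pgdat->node_start_pfn;
	}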
From: "Mike Rapoport (Microsoft)"
All architectures that support HIGHMEM have their own code that frees high
memory pages to the buddy allocator, while __free_memory_core() is limited
to freeing only low memory.
There is no actual reason for that. The memory map is completely ready
b
From: "Mike Rapoport (Microsoft)"
This will help to pull out memblock_free_all() to generic code.
Signed-off-by: Mike Rapoport (Microsoft)
---
arch/arm/mm/init.c | 8
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/arch/arm/mm/init.c b/arch/arm/mm/in
From: "Mike Rapoport (Microsoft)"
Hi,
Every architecture has an implementation of the mem_init() function and some
even have more than one. All these release free memory to the buddy
allocator, most of them set high_memory to the end of directly
addressable memory and many of them set max_mapnr f
From: "Mike Rapoport (Microsoft)"
This will help with pulling out memblock_free_all() to the generic
code and reducing code duplication in arch::mem_init().
Signed-off-by: Mike Rapoport (Microsoft)
---
arch/hexagon/mm/init.c | 14 ++
1 file changed, 6 insertions(+), 8
From: "Mike Rapoport (Microsoft)"
This will help with pulling out memblock_free_all() to the generic
code and reducing code duplication in arch::mem_init().
Signed-off-by: Mike Rapoport (Microsoft)
---
arch/xtensa/mm/init.c | 97 ++-
1 file c
From: "Mike Rapoport (Microsoft)"
Allocating the zero pages from memblock is simpler because the memory is
already reserved.
This will also help with pulling out memblock_free_all() to the generic
code and reducing code duplication in arch::mem_init().
Signed-off-by: Mike Rapoport
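As a hedged illustration only (generic names, not the exact arch code), the zero page can simply come from memblock during early init:

	#include <linux/memblock.h>

	static void __init alloc_zero_page_sketch(void)
	{
		void *zp = memblock_alloc(PAGE_SIZE, PAGE_SIZE);

		if (!zp)
			panic("Failed to allocate the zero page\n");
		/* the arch-specific empty_zero_page symbol would refer to zp */
	}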
From: "Mike Rapoport (Microsoft)"
max_mapnr is essentially the size of the memory map for systems that use
FLATMEM. There is no reason to calculate it in each and every architecture
when it is calculated anyway in alloc_node_mem_map().
Drop setting of max_mapnr from architecture
From: "Mike Rapoport (Microsoft)"
Allocating the zero pages from memblock is simpler because the memory is
already reserved.
This will also help with pulling out memblock_free_all() to the generic
code and reducing code duplication in arch::mem_init().
Acked-by: Heiko Carstens
Sig
From: "Mike Rapoport (Microsoft)"
This will help with pulling out memblock_free_all() to the generic
code and reducing code duplication in arch::mem_init().
Signed-off-by: Mike Rapoport (Microsoft)
---
arch/nios2/kernel/setup.c | 2 ++
arch/nios2/mm/init.c | 2 --
2 files
From: "Mike Rapoport (Microsoft)"
Currently, the implementation of mem_init() in every architecture consists of
one or more of the following:
* initializations that must run before the page allocator is active, for
instance swiotlb_init()
* a call to memblock_free_all() to release all the
From: "Mike Rapoport (Microsoft)"
The point where the memory is released from memblock to the buddy allocator
is hidden inside arch-specific mem_init()s and the call to
memblock_free_all() is needlessly duplicated in every architecture, and
after introduction of arch_mm_preinit() hook
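A minimal sketch of the end state described above (the hook name arch_mm_preinit() is taken from the text; its exact call site in generic code is assumed):

	/* generic mm init (sketch): release memory to the buddy allocator
	 * in one place, arches keep only their pre-page-allocator setup */
	void __weak arch_mm_preinit(void)
	{
	}

	void __init mm_core_init_sketch(void)
	{
		arch_mm_preinit();	/* e.g. swiotlb_init() and friends */
		memblock_free_all();	/* no longer called by every mem_init() */
	}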
From: "Mike Rapoport (Microsoft)"
All architectures that support HIGHMEM have their own code that frees high
memory pages to the buddy allocator, while __free_memory_core() is limited
to freeing only low memory.
There is no actual reason for that. The memory map is completely ready
b
From: "Mike Rapoport (Microsoft)"
high_memory defines the upper bound of the directly mapped memory.
This bound is defined by the beginning of ZONE_HIGHMEM when a system has
high memory and by the end of memory otherwise.
All this is known to generic memory management initialization cod
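A hedged sketch of how generic init code could derive it, assuming it lives in mm/mm_init.c where the zone boundary array arch_zone_lowest_possible_pfn[] is visible; the real series may use a different helper:

	static void __init set_high_memory_sketch(void)
	{
		phys_addr_t highmem = memblock_end_of_DRAM();

	#ifdef CONFIG_HIGHMEM
		/* directly mapped memory ends where ZONE_HIGHMEM begins */
		highmem = min_t(phys_addr_t, highmem,
				PFN_PHYS(arch_zone_lowest_possible_pfn[ZONE_HIGHMEM]));
	#endif
		high_memory = phys_to_virt(highmem - 1) + 1;
	}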
From: "Mike Rapoport (Microsoft)"
Both MIPS systems that support NUMA (loongson3 and sgi-ip27) have
an identical mem_init() for the NUMA case.
Move that into arch/mips/mm/init.c and drop duplicate per-machine
definitions.
Signed-off-by: Mike Rapoport (Microsoft)
---
arch/mips/loongso
From: "Mike Rapoport (Microsoft)"
Memory used by the initrd should be reserved as soon as possible, before
there are any memblock allocations that might overwrite that memory.
This will also help with pulling out memblock_free_all() to the generic
code and reducing code duplication in arch
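A minimal sketch of the intended ordering (phys_initrd_start/phys_initrd_size come from <linux/initrd.h>; the exact call site in arch setup code is an assumption):

	/* early in setup_arch(), right after memblock knows about memory */
	if (IS_ENABLED(CONFIG_BLK_DEV_INITRD) && phys_initrd_size)
		memblock_reserve(phys_initrd_start, phys_initrd_size);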
From: "Mike Rapoport (Microsoft)"
This will help to pull out memblock_free_all() to generic code.
Signed-off-by: Mike Rapoport (Microsoft)
---
arch/arm/mm/init.c | 8
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/arch/arm/mm/init.c b/arch/arm/mm/in
On Wed, Mar 12, 2025 at 04:56:59PM +0100, Ard Biesheuvel wrote:
> On Tue, 11 Mar 2025 at 06:56, Mike Rapoport wrote:
> >
> > On Fri, Mar 07, 2025 at 04:28:15PM +0100, Heiko Carstens wrote:
> > > On Thu, Mar 06, 2025 at 08:51:17PM +0200, Mike Rapoport wrote:
> > >
On Tue, Mar 11, 2025 at 09:59:32PM +, Russell King (Oracle) wrote:
> On Tue, Mar 11, 2025 at 05:51:06PM +, Mark Brown wrote:
> > On Thu, Mar 06, 2025 at 08:51:20PM +0200, Mike Rapoport wrote:
> > > From: "Mike Rapoport (Microsoft)"
> > >
> &g
Hi Mark,
On Tue, Mar 11, 2025 at 05:51:06PM +, Mark Brown wrote:
> On Thu, Mar 06, 2025 at 08:51:20PM +0200, Mike Rapoport wrote:
> > From: "Mike Rapoport (Microsoft)"
> >
> > high_memory defines upper bound on the directly mapped memory.
> > This
On Thu, Mar 06, 2025 at 08:51:15PM +0200, Mike Rapoport wrote:
> From: "Mike Rapoport (Microsoft)"
>
> Allocating the zero pages from memblock is simpler because the memory is
> already reserved.
>
> This will also help with pulling out memblock_free_all() to the
On Fri, Mar 07, 2025 at 04:28:15PM +0100, Heiko Carstens wrote:
> On Thu, Mar 06, 2025 at 08:51:17PM +0200, Mike Rapoport wrote:
> > From: "Mike Rapoport (Microsoft)"
> >
> > Allocating the zero pages from memblock is simpler because the memory is
> > alre
From: "Mike Rapoport (Microsoft)"
This will help with pulling out memblock_free_all() to the generic
code and reducing code duplication in arch::mem_init().
Signed-off-by: Mike Rapoport (Microsoft)
---
arch/hexagon/mm/init.c | 14 ++
1 file changed, 6 insertions(+), 8
From: "Mike Rapoport (Microsoft)"
Both MIPS systems that support NUMA (loongson3 and sgi-ip27) have
an identical mem_init() for the NUMA case.
Move that into arch/mips/mm/init.c and drop duplicate per-machine
definitions.
Signed-off-by: Mike Rapoport (Microsoft)
---
arch/mips/loongso
From: "Mike Rapoport (Microsoft)"
Memory used by the initrd should be reserved as soon as possible, before
there are any memblock allocations that might overwrite that memory.
This will also help with pulling out memblock_free_all() to the generic
code and reducing code duplication in arch
From: "Mike Rapoport (Microsoft)"
Currently, the implementation of mem_init() in every architecture consists of
one or more of the following:
* initializations that must run before the page allocator is active, for
instance swiotlb_init()
* a call to memblock_free_all() to release all the
From: "Mike Rapoport (Microsoft)"
This will help with pulling out memblock_free_all() to the generic
code and reducing code duplication in arch::mem_init().
Signed-off-by: Mike Rapoport (Microsoft)
---
arch/nios2/kernel/setup.c | 2 ++
arch/nios2/mm/init.c | 2 --
2 files
From: "Mike Rapoport (Microsoft)"
The point where the memory is released from memblock to the buddy allocator
is hidden inside arch-specific mem_init()s and the call to
memblock_free_all() is needlessly duplicated in every architecture, and
after introduction of arch_mm_preinit() hook
From: "Mike Rapoport (Microsoft)"
Allocating the zero pages from memblock is simpler because the memory is
already reserved.
This will also help with pulling out memblock_free_all() to the generic
code and reducing code duplication in arch::mem_init().
Signed-off-by: Mike Rapoport
From: "Mike Rapoport (Microsoft)"
high_memory defines the upper bound of the directly mapped memory.
This bound is defined by the beginning of ZONE_HIGHMEM when a system has
high memory and by the end of memory otherwise.
All this is known to generic memory management initialization cod
From: "Mike Rapoport (Microsoft)"
This will help with pulling out memblock_free_all() to the generic
code and reducing code duplication in arch::mem_init().
Signed-off-by: Mike Rapoport (Microsoft)
---
arch/xtensa/mm/init.c | 97 ++-
1 file c
From: "Mike Rapoport (Microsoft)"
Allocating the zero pages from memblock is simpler because the memory is
already reserved.
This will also help with pulling out memblock_free_all() to the generic
code and reducing code duplication in arch::mem_init().
Signed-off-by: Mike Rapoport
From: "Mike Rapoport (Microsoft)"
Hi,
Every architecture has an implementation of the mem_init() function and some
even have more than one. All these release free memory to the buddy
allocator, most of them set high_memory to the end of directly
addressable memory and many of them set max_mapnr f
Hi Ryan,
On Thu, Feb 27, 2025 at 11:13:29AM +, Ryan Roberts wrote:
> Hi Mike,
>
> Drive by review comments below...
>
>
> On 23/10/2024 17:27, Mike Rapoport wrote:
> > From: "Mike Rapoport (Microsoft)"
> >
> > Using large pages to map
On Mon, Jan 27, 2025 at 01:50:31PM +0100, Petr Pavlu wrote:
> On 1/26/25 08:47, Mike Rapoport wrote:
> > From: "Mike Rapoport (Microsoft)"
> >
> > Instead of using writable copy for module text sections, temporarily remap
> > the memory allocated from execme
From: "Mike Rapoport (Microsoft)"
The memory allocated for the ROX cache was removed from the direct map to
reduce the amount of direct map updates; however, this cannot be tolerated by
/proc/kcore, which accesses module memory using vread_iter(), and the latter
does vmalloc_to_
From: "Mike Rapoport (Microsoft)"
after rework of execmem ROX caches
Signed-off-by: Mike Rapoport (Microsoft)
---
arch/x86/Kconfig | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index ef6cfea9df73..9d7bd0ae48c4 100644
--- a/arch/x86/Kconfig
From: "Mike Rapoport (Microsoft)"
module_writable_address() is unused and can be removed.
Signed-off-by: Mike Rapoport (Microsoft)
---
include/linux/module.h | 10 --
1 file changed, 10 deletions(-)
diff --git a/include/linux/module.h b/include/linux/module.h
index 6a
From: "Mike Rapoport (Microsoft)"
The module code does not create a writable copy of the executable memory
anymore so there is no need to handle it in module relocation and
alternatives patching.
This reverts commit 9bfc4824fd4836c16bb44f922bfaffba5da3e4f3.
Signed-off-by: Mik
From: "Mike Rapoport (Microsoft)"
Instead of using a writable copy for module text sections, temporarily remap
the memory allocated from execmem's ROX cache as writable and restore its
ROX permissions after the module is formed.
This will allow removing nasty games with w
From: "Mike Rapoport (Microsoft)"
Using a writable copy for ROX memory is cumbersome and error prone.
Add an API that allows temporarily remapping ranges in the ROX cache as
writable and then restoring their read-only-execute permissions.
This API will be later used in modules cod
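A hedged usage sketch; the function names execmem_make_temp_rw() and execmem_restore_rox() are assumed from the series description and may differ in detail:

	#include <linux/execmem.h>
	#include <linux/string.h>

	static int patch_rox_text_sketch(void *dst, const void *src, size_t len)
	{
		int err;

		err = execmem_make_temp_rw(dst, len);	/* ROX range -> RW */
		if (err)
			return err;

		memcpy(dst, src, len);			/* plain writes work now */

		return execmem_restore_rox(dst, len);	/* back to ROX */
	}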
ges done in the original CPA call and after
collapsing of large pages
* update commit message
]
Link:
https://lore.kernel.org/all/20200416213229.19174-1-kirill.shute...@linux.intel.com
Signed-off-by: Kirill A. Shutemov
Co-developed-by: Mike Rapoport (Microsoft)
Signed-off-by: Mike Rapoport
From: "Mike Rapoport (Microsoft)"
There is a 'struct cpa_data *data' parameter in cpa_flush() that is
assigned to a local 'struct cpa_data *cpa' variable.
Rename the parameter from 'data' to 'cpa' and drop the declaration of the
local '
From: "Mike Rapoport (Microsoft)"
The CPA_ARRAY test always uses len[1] as the numpages argument to
change_page_attr_set(), although the addresses array is different on each
iteration of the test loop.
Replace len[1] with len[i] to have numpages matching the addresses array.
Fixes: ecc729f1
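In effect this is a one-character change; a hedged, abbreviated reconstruction of the test loop (surrounding code in the CPA self-test is elided):

	for (i = 0; i < NTEST; i++) {
		...
		/* was: numpages taken from len[1] regardless of i */
		err = change_page_attr_set(addrs, len[i],
					   PAGE_CPA_TEST, 1);
		...
	}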
From: "Mike Rapoport (Microsoft)"
Hi,
Following Peter's comments [1] these patches rework handling of ROX caches
for module text allocations.
Instead of using a writable copy that really complicates alternatives
patching, temporarily remap parts of a large ROX page as RW
On Thu, Jan 23, 2025 at 03:16:28PM +0100, Petr Pavlu wrote:
> On 1/21/25 10:57, Mike Rapoport wrote:
> > In order to use execmem's API for temporal remapping of the memory
> > allocated from ROX cache as writable, there is a need to distinguish
> > between the state when
From: "Mike Rapoport (Microsoft)"
after rework of execmem ROX caches
Signed-off-by: Mike Rapoport (Microsoft)
---
arch/x86/Kconfig | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index ef6cfea9df73..9d7bd0ae48c4 100644
--- a/arch/x86/Kconfig
From: "Mike Rapoport (Microsoft)"
module_writable_address() is unused and can be removed.
Signed-off-by: Mike Rapoport (Microsoft)
---
include/linux/module.h | 10 --
1 file changed, 10 deletions(-)
diff --git a/include/linux/module.h b/include/linux/module.h
index e9
From: "Mike Rapoport (Microsoft)"
The module code does not create a writable copy of the executable memory
anymore so there is no need to handle it in module relocation and
alternatives patching.
This reverts commit 9bfc4824fd4836c16bb44f922bfaffba5da3e4f3.
Signed-off-by: Mik
From: "Mike Rapoport (Microsoft)"
Instead of using a writable copy for module text sections, temporarily remap
the memory allocated from execmem's ROX cache as writable and restore its
ROX permissions after the module is formed.
This will allow removing nasty games with w
From: "Mike Rapoport (Microsoft)"
In order to use execmem's API for temporary remapping of the memory
allocated from the ROX cache as writable, there is a need to distinguish
between the state when the module is being formed and the state when it is
deconstructed and fre
From: "Mike Rapoport (Microsoft)"
Hi,
Following Peter's comments [1] these patches rework handling of ROX caches
for module text allocations.
Instead of using a writable copy that really complicates alternatives
patching, temporarily remap parts of a large ROX page as RW
From: "Mike Rapoport (Microsoft)"
The memory allocated for the ROX cache was removed from the direct map to
reduce the amount of direct map updates; however, this cannot be tolerated by
/proc/kcore, which accesses module memory using vread_iter(), and the latter
does vmalloc_to_
From: "Mike Rapoport (Microsoft)"
There is a 'struct cpa_data *data' parameter in cpa_flush() that is
assigned to a local 'struct cpa_data *cpa' variable.
Rename the parameter from 'data' to 'cpa' and drop the declaration of the
local '
From: "Mike Rapoport (Microsoft)"
Using a writable copy for ROX memory is cumbersome and error prone.
Add an API that allows temporarily remapping ranges in the ROX cache as
writable and then restoring their read-only-execute permissions.
This API will be later used in modules cod
From: "Mike Rapoport (Microsoft)"
The CPA_ARRAY test always uses len[1] as the numpages argument to
change_page_attr_set(), although the addresses array is different on each
iteration of the test loop.
Replace len[1] with len[i] to have numpages matching the addresses array.
Fixes: ecc729f1
ges done in the original CPA call and after
collapsing of large pages
* update commit message
]
Link:
https://lore.kernel.org/all/20200416213229.19174-1-kirill.shute...@linux.intel.com
Signed-off-by: Kirill A. Shutemov
Co-developed-by: Mike Rapoport (Microsoft)
Signed-off-by: Mike Rapoport
On Mon, Jan 13, 2025 at 10:01:13AM +0200, Kirill A. Shutemov wrote:
> On Sun, Jan 12, 2025 at 10:54:46AM +0200, Mike Rapoport wrote:
> > Hi Kirill,
> >
> > On Fri, Jan 10, 2025 at 12:36:59PM +0200, Kirill A. Shutemov wrote:
> > > On Fri, Dec 27, 2024 at 09:28:2
Hi Kirill,
On Fri, Jan 10, 2025 at 12:36:59PM +0200, Kirill A. Shutemov wrote:
> On Fri, Dec 27, 2024 at 09:28:20AM +0200, Mike Rapoport wrote:
> > From: "Kirill A. Shutemov"
> >
> > Change of attributes of the pages may lead to fragmentation of direct
> &
On Mon, Dec 23, 2024 at 05:41:01PM +0800, Qi Zheng wrote:
> Here we are explicitly dealing with struct page, and the following logic
> seems strange:
>
> tlb_remove_page_ptdesc((tlb), (page_ptdesc(pte)));
>
> tlb_remove_page_ptdesc
> --> tlb_remove_page(tlb, ptdesc_page(pt));
>
> So remove tlb_r
From: "Mike Rapoport (Microsoft)"
module_writable_address() is unused and can be removed.
Signed-off-by: Mike Rapoport (Microsoft)
---
include/linux/module.h | 10 --
1 file changed, 10 deletions(-)
diff --git a/include/linux/module.h b/include/linux/module.h
index e9
From: "Mike Rapoport (Microsoft)"
Using a writable copy for ROX memory is cumbersome and error prone.
Add an API that allows temporarily remapping ranges in the ROX cache as
writable and then restoring their read-only-execute permissions.
This API will be later used in modules cod
From: "Mike Rapoport (Microsoft)"
The module code does not create a writable copy of the executable memory
anymore so there is no need to handle it in module relocation and
alternatives patching.
This reverts commit 9bfc4824fd4836c16bb44f922bfaffba5da3e4f3.
Signed-off-by: Mik
From: "Mike Rapoport (Microsoft)"
Instead of using a writable copy for module text sections, temporarily remap
the memory allocated from execmem's ROX cache as writable and restore its
ROX permissions after the module is formed.
This will allow removing nasty games with w
From: "Mike Rapoport (Microsoft)"
In order to use execmem's API for temporary remapping of the memory
allocated from the ROX cache as writable, there is a need to distinguish
between the state when the module is being formed and the state when it is
deconstructed and fre
to PUD as peterz
suggested
* flush TLB twice: for changes done in the original CPA call and after
collapsing of large pages
]
Link:
https://lore.kernel.org/all/20200416213229.19174-1-kirill.shute...@linux.intel.com
Signed-off-by: Kirill A. Shutemov
Co-developed-by: Mike Rapoport (Microsoft
From: "Mike Rapoport (Microsoft)"
The CPA_ARRAY test always uses len[1] as the numpages argument to
change_page_attr_set(), although the addresses array is different on each
iteration of the test loop.
Replace len[1] with len[i] to have numpages matching the addresses array.
Fixes: ecc729f1
From: "Mike Rapoport (Microsoft)"
There is a 'struct cpa_data *data' parameter in cpa_flush() that is
assigned to a local 'struct cpa_data *cpa' variable.
Rename the parameter from 'data' to 'cpa' and drop the declaration of the
local '
From: "Mike Rapoport (Microsoft)"
Hi,
Following Peter's comments [1] these patches rework handling of ROX caches
for module text allocations.
Instead of using a writable copy that really complicates alternatives
patching, temporarily remap parts of a large ROX page as RW
-...@intel.com/
> v6: Fix CI compile warnings
> Links to CI:
> https://lore.kernel.org/oe-kbuild-all/202412221259.jugnaucq-...@intel.com/
> v7: add changelog and adjust function declaration alignment format
> --
>
> Signed-off-by: Guo Weikang
> Reviewed-by: A
MEMBLOCK_ALLOC_ACCESSIBLE, NUMA_NO_NODE);
> }
>
> +void *__memblock_alloc_or_panic(phys_addr_t size, phys_addr_t align,
> +const char *func);
Please align this line with the first parameter to the function.
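That is, something like:

	void *__memblock_alloc_or_panic(phys_addr_t size, phys_addr_t align,
					const char *func);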
Other tha
On Mon, Nov 18, 2024 at 01:25:01PM -0500, Steven Rostedt wrote:
> On Wed, 23 Oct 2024 19:27:03 +0300
> Mike Rapoport wrote:
>
> > From: "Mike Rapoport (Microsoft)"
> >
> > Hi,
> >
> > This is an updated version of execmem ROX caches.
> >
Hi Nathan,
On Mon, Nov 04, 2024 at 04:27:41PM -0700, Nathan Chancellor wrote:
> Hi Mike,
>
> On Wed, Oct 23, 2024 at 07:27:09PM +0300, Mike Rapoport wrote:
> > From: "Mike Rapoport (Microsoft)"
> >
> > When module text memory will be allocated with ROX perm
Hi Nathan,
On Mon, Oct 21, 2024 at 03:15:19PM -0700, Nathan Chancellor wrote:
> Hi Mike,
>
> On Wed, Oct 16, 2024 at 03:24:22PM +0300, Mike Rapoport wrote:
> > From: "Mike Rapoport (Microsoft)"
> >
> > When module text memory will be allocated with ROX perm
From: "Mike Rapoport (Microsoft)"
Enable execmem's cache of PMD_SIZE'ed pages mapped as ROX for module
text allocations on 64 bit.
Signed-off-by: Mike Rapoport (Microsoft)
Reviewed-by: Luis Chamberlain
Tested-by: kdevops
---
arch/x86/Kconfig | 1 +
arc
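The arch/x86/Kconfig side of this is presumably a single select; a sketch (the exact symbol name is an assumption based on the series):

	config X86
		...
		select ARCH_HAS_EXECMEM_ROX		if X86_64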
From: "Mike Rapoport (Microsoft)"
In order to support ROX allocations for module text, it is necessary to
handle modifications to the code, such as relocations and alternatives
patching, without write access to that memory.
One option is to use text patching, but this would make modu
From: "Mike Rapoport (Microsoft)"
Once module text memory is allocated with ROX permissions, the
memory at the actual address where the module will live will contain
invalid instructions, and there will be a writable copy that contains the
actual module code.
Update reloc
From: "Mike Rapoport (Microsoft)"
Using large pages to map text areas reduces iTLB pressure and improves
performance.
Extend execmem_alloc() with the ability to use huge pages with ROX
permissions as a cache for smaller allocations.
To populate the cache, a writable large page is allo
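A simplified sketch of the idea; apart from vmalloc_huge() and set_memory_rox(), the helpers below are invented purely for illustration:

	static int execmem_cache_populate_sketch(void)
	{
		void *p;

		/* allocate a writable large page from vmalloc */
		p = vmalloc_huge(PMD_SIZE, GFP_KERNEL | __GFP_ZERO);
		if (!p)
			return -ENOMEM;

		/* fill it with trapping instructions while still writable,
		 * then flip the whole page to ROX; smaller allocations are
		 * later carved out of this cached page */
		fill_trapping_insns_sketch(p, PMD_SIZE);	/* hypothetical */
		set_memory_rox((unsigned long)p, PMD_SIZE >> PAGE_SHIFT);

		return add_to_rox_cache_sketch(p, PMD_SIZE);	/* hypothetical */
	}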
From: "Mike Rapoport (Microsoft)"
Add an API that will allow updates of the direct/linear map for a set of
physically contiguous pages.
It will be used in the following patches.
Signed-off-by: Mike Rapoport (Microsoft)
Reviewed-by: Christoph Hellwig
Reviewed-by: Luis Chamberlain
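A hedged sketch of the kind of prototype this implies; the name and signature are assumptions derived from the description above:

	/* update the direct map entries of @nr physically contiguous pages */
	int set_direct_map_valid_noflush(struct page *page, unsigned nr,
					 bool valid);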
From: "Mike Rapoport (Microsoft)"
Several architectures support text patching, but they name the header
files that declare patching functions differently.
Make all such headers consistently named text-patching.h and add an empty
header in asm-generic for architectures that do not su
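For architectures that do not patch text at runtime the asm-generic header can be essentially empty; a minimal sketch:

	/* include/asm-generic/text-patching.h (sketch) */
	#ifndef _ASM_GENERIC_TEXT_PATCHING_H
	#define _ASM_GENERIC_TEXT_PATCHING_H

	/* intentionally empty */

	#endif /* _ASM_GENERIC_TEXT_PATCHING_H */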
From: "Mike Rapoport (Microsoft)"
vmalloc allocations with VM_ALLOW_HUGE_VMAP that do not explicitly
specify node ID will use huge pages only if size_per_node is larger than
a huge page.
Still, the actual allocated memory is not distributed between nodes and
there is no advantage in suc
From: "Mike Rapoport (Microsoft)"
Hi,
This is an updated version of execmem ROX caches.
v6: https://lore.kernel.org/all/20241016122424.1655560-1-r...@kernel.org
* Fixed handling of alternatives for fineibt (kbuild bot)
* Restored usage of text_poke_early for ftrace boot time init
From: "Mike Rapoport (Microsoft)"
There are a couple of declarations that depend on CONFIG_MMU in
include/linux/vmalloc.h spread all over the file.
Group them all together to improve code readability.
No functional changes.
Signed-off-by: Mike Rapoport (Microsoft)
Reviewed-by:
On Thu, Oct 17, 2024 at 10:17:12AM -0400, Steven Rostedt wrote:
> On Wed, 16 Oct 2024 17:01:28 -0400
> Steven Rostedt wrote:
>
> > If this is only needed for module load, can we at least still use the
> > text_poke_early() at boot up?
> >
> > if (ftrace_poke_late) {
> > text_poke
On Thu, Oct 17, 2024 at 11:35:15AM +0200, Peter Zijlstra wrote:
> On Wed, Oct 16, 2024 at 05:01:28PM -0400, Steven Rostedt wrote:
> > On Wed, 16 Oct 2024 15:24:22 +0300
> > Mike Rapoport wrote:
> >
> > > diff --git a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftr
From: "Mike Rapoport (Microsoft)"
In order to support ROX allocations for module text, it is necessary to
handle modifications to the code, such as relocations and alternatives
patching, without write access to that memory.
One option is to use text patching, but this would make modu
From: "Mike Rapoport (Microsoft)"
Add an API that will allow updates of the direct/linear map for a set of
physically contiguous pages.
It will be used in the following patches.
Signed-off-by: Mike Rapoport (Microsoft)
Reviewed-by: Christoph Hellwig
---
arch/arm64/include/asm/se
From: "Mike Rapoport (Microsoft)"
Several architectures support text patching, but they name the header
files that declare patching functions differently.
Make all such headers consistently named text-patching.h and add an empty
header in asm-generic for architectures that do not su
From: "Mike Rapoport (Microsoft)"
vmalloc allocations with VM_ALLOW_HUGE_VMAP that do not explicitly
specify node ID will use huge pages only if size_per_node is larger than
a huge page.
Still, the actual allocated memory is not distributed between nodes and
there is no advantage in suc
From: "Mike Rapoport (Microsoft)"
Enable execmem's cache of PMD_SIZE'ed pages mapped as ROX for module
text allocations on 64 bit.
Signed-off-by: Mike Rapoport (Microsoft)
---
arch/x86/Kconfig | 1 +
arch/x86/mm/init.c | 37 -
From: "Mike Rapoport (Microsoft)"
Using large pages to map text areas reduces iTLB pressure and improves
performance.
Extend execmem_alloc() with the ability to use huge pages with ROX
permissions as a cache for smaller allocations.
To populate the cache, a writable large page is allo
From: "Mike Rapoport (Microsoft)"
Once module text memory is allocated with ROX permissions, the
memory at the actual address where the module will live will contain
invalid instructions, and there will be a writable copy that contains the
actual module code.
Update reloc
From: "Mike Rapoport (Microsoft)"
There are a couple of declarations that depend on CONFIG_MMU in
include/linux/vmalloc.h spread all over the file.
Group them all together to improve code readability.
No functional changes.
Signed-off-by: Mike Rapoport (Microsoft)
Reviewed-by:
From: "Mike Rapoport (Microsoft)"
Hi,
This is an updated version of execmem ROX caches.
Andrew, Luis, there is a conflict with Suren's "page allocation tag
compression" patches:
https://lore.kernel.org/all/20241014203646.1952505-1-sur...@google.com
Probably takin
On Tue, Oct 15, 2024 at 01:11:54PM -0700, Luis Chamberlain wrote:
> On Tue, Oct 15, 2024 at 08:54:29AM +0300, Mike Rapoport wrote:
> > On Mon, Oct 14, 2024 at 09:09:49PM -0700, Luis Chamberlain wrote:
> > > Mike, please run this with kmemleak enabled and running, and also
On Mon, Oct 14, 2024 at 09:09:49PM -0700, Luis Chamberlain wrote:
> Mike, please run this with kmemleak enabled and running, and also try to get
> tools/testing/selftests/kmod/kmod.sh to pass.
There was an issue with kmemleak; I fixed it here:
https://lore.kernel.org/linux-mm/20241009180816.83591
On Sun, Oct 13, 2024 at 10:55:25PM -0700, Christoph Hellwig wrote:
> On Sun, Oct 13, 2024 at 11:43:41AM +0300, Mike Rapoport wrote:
> > > But why? That's pretty different from our normal style of arch hooks,
> > > and introduces an indirect call in a security sensitive
On Fri, Oct 11, 2024 at 12:46:23AM -0700, Christoph Hellwig wrote:
> On Thu, Oct 10, 2024 at 03:57:33PM +0300, Mike Rapoport wrote:
> > On Wed, Oct 09, 2024 at 11:58:33PM -0700, Christoph Hellwig wrote:
> > > On Wed, Oct 09, 2024 at 09:08:15PM +0300, M
On Thu, Oct 10, 2024 at 03:54:11PM -0700, Nathan Chancellor wrote:
> Hi Mike,
>
> On Wed, Oct 09, 2024 at 09:08:14PM +0300, Mike Rapoport wrote:
> > From: "Mike Rapoport (Microsoft)"
> >
> > When module text memory will be allocated with ROX permissions,