x86: Create virtual memory region for SLUB
mm/slub: allocate slabs from virtual memory
mm/slub: introduce the deallocated_pages sysfs attribute
mm/slub: sanity-check freepointers
security: add documentation for SLAB_VIRTUAL
Matteo Rizzo (1):
mm/slub: don't try to dereference invalid freepointers
checking the integrity of the freelist in get_freepointer.
Signed-off-by: Matteo Rizzo
---
mm/slub.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/mm/slub.c b/mm/slub.c
index f7940048138c..a7dae207c2d2 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1820,7 +1820,9 @@ static i
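The hunk above is cut off in this snippet. As a hedged reconstruction of the idea only (not the literal patch), the walk over the objects being freed can simply stop reading freepointers at the tail object, since a just-freed single object does not contain a valid freelist pointer:

/*
 * Sketch of the idea, not the actual hunk: the last object in the list
 * being freed holds no freelist pointer, so don't decode one from it.
 */
static inline void *next_object_to_free(struct kmem_cache *s,
					void *object, void *tail)
{
	if (object == tail)
		return NULL;			/* tail object: nothing to read */
	return get_freepointer(s, object);	/* existing SLUB helper */
}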
(virt_to_head_page(res)) should be equivalent to
is_slab_addr(res).
Signed-off-by: Jann Horn
Co-developed-by: Matteo Rizzo
Signed-off-by: Matteo Rizzo
---
include/linux/slab.h | 1 +
kernel/resource.c | 2 +-
mm/slab.h | 9 +
mm/slab_common.c | 5 ++---
mm/slub.c | 6
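For illustration, a minimal sketch of what an is_slab_addr()-style test can reduce to once all slab memory comes from one reserved virtual region (the macro names are assumptions, not the series' actual ones):

/*
 * Illustrative only: with a dedicated virtual region, "is this a slab
 * address?" becomes a range check instead of a page-flag lookup.
 * SLUB_VADDR_START/SLUB_VADDR_END are assumed names for the bounds.
 */
static inline bool is_slab_addr_sketch(const void *addr)
{
	unsigned long a = (unsigned long)addr;

	return a >= SLUB_VADDR_START && a < SLUB_VADDR_END;
}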
struct slab.
Signed-off-by: Jann Horn
Co-developed-by: Matteo Rizzo
Signed-off-by: Matteo Rizzo
---
include/linux/slub_def.h | 9 -
mm/slab.h | 22 ++
mm/slub.c | 12
3 files changed, 22 insertions(+), 21 deletions(-)
diff
Co-developed-by: Matteo Rizzo
Signed-off-by: Matteo Rizzo
---
mm/memcontrol.c | 2 +-
mm/slab_common.c | 12 +++-
mm/slub.c | 14 ++
3 files changed, 14 insertions(+), 14 deletions(-)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index e8ca4bdcb03c..0ab9f5323db7
From: Jann Horn
This is refactoring in preparation for SLAB_VIRTUAL. Extract this code
to separate functions so that it's not duplicated in the code that
allocates and frees pages with SLAB_VIRTUAL enabled.
Signed-off-by: Jann Horn
Co-developed-by: Matteo Rizzo
Signed-off-by: Matteo Rizzo
meta_gfp_flags is used for the memory backing the metadata region and
page tables, and gfp_flags for the data memory.
Signed-off-by: Jann Horn
Co-developed-by: Matteo Rizzo
Signed-off-by: Matteo Rizzo
---
mm/slub.c | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/mm
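The diff itself is truncated above. For illustration, a hedged sketch of the parameter split this patch describes; the function name and the step breakdown are assumptions, not the patch's code:

/*
 * Illustrative only: the metadata backing (struct slab, page tables)
 * and the data pages are allocated with separate gfp masks so they can
 * follow different policies.
 */
static struct slab *alloc_slab_page_sketch(struct kmem_cache *s,
					   gfp_t meta_gfp_flags,
					   gfp_t gfp_flags,
					   int node, unsigned int order)
{
	/* 1. back the metadata region / page tables with meta_gfp_flags */
	/* 2. allocate the data pages for the objects with gfp_flags     */
	/* 3. map the data pages into the slab's virtual range           */
	return NULL;	/* placeholder: the steps above are out of scope here */
}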
From: Jann Horn
This is refactoring in preparation for checking freeptrs for corruption
inside freelist_ptr_decode().
Signed-off-by: Jann Horn
Co-developed-by: Matteo Rizzo
Signed-off-by: Matteo Rizzo
---
mm/slub.c | 43 +++
1 file changed, 23 insertions(+), 20 deletions(-)
future work.
Signed-off-by: Jann Horn
Co-developed-by: Matteo Rizzo
Signed-off-by: Matteo Rizzo
---
security/Kconfig.hardening | 14 ++++++++++++++
1 file changed, 14 insertions(+)
diff --git a/security/Kconfig.hardening b/security/Kconfig.hardening
index 0f295961e773..9f4e6e38aa76 1
From: Jann Horn
With SLAB_VIRTUAL enabled, unused slabs which still have virtual memory
allocated to them but no physical memory are kept in a per-cache list so
that they can be reused later if the cache needs to grow again.
Signed-off-by: Jann Horn
Co-developed-by: Matteo Rizzo
Signed-off-by: Matteo Rizzo
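A hedged sketch of the bookkeeping described above (field and function names are assumptions): the cache keeps a list of slabs whose physical pages were released but whose virtual range stays reserved, so growing the cache can first try to recycle one of those.

/*
 * Illustrative only: a per-cache list of "virtually allocated,
 * physically freed" slabs.
 */
struct kmem_cache_virtual_sketch {
	spinlock_t	 freed_slabs_lock;
	struct list_head freed_slabs;	/* slabs with a vaddr but no backing pages */
};

static struct slab *reuse_freed_slab(struct kmem_cache_virtual_sketch *v)
{
	struct slab *slab = NULL;

	spin_lock(&v->freed_slabs_lock);
	if (!list_empty(&v->freed_slabs)) {
		slab = list_first_entry(&v->freed_slabs, struct slab, slab_list);
		list_del(&slab->slab_list);	/* caller re-populates it with pages */
	}
	spin_unlock(&v->freed_slabs_lock);
	return slab;
}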
From: Jann Horn
SLAB_VIRTUAL reserves 512 GiB of virtual memory and uses it for both
struct slab and the actual slab memory. The pointers returned by
kmem_cache_alloc will point to this range of memory.
Signed-off-by: Jann Horn
Co-developed-by: Matteo Rizzo
Signed-off-by: Matteo Rizzo
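As a small piece of illustrative arithmetic (the actual base address is chosen by the series and not shown here): 512 GiB is 2^39 bytes, which is exactly the address space covered by a single top-level (PGD) entry with 4-level paging on x86-64.

#include <linux/build_bug.h>

#define SLUB_VREGION_SIZE	(512UL << 30)		/* 512 GiB */

static_assert(SLUB_VREGION_SIZE == (1UL << 39));	/* = 2^39 bytes */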
on machines with many CPUs.
Signed-off-by: Jann Horn
Co-developed-by: Matteo Rizzo
Signed-off-by: Matteo Rizzo
---
arch/x86/include/asm/page_64.h | 10 +
arch/x86/include/asm/pgtable_64_types.h | 5 +
arch/x86/mm/physaddr.c | 10 +
include/linux/slab.h
From: Jann Horn
When SLAB_VIRTUAL is enabled this new sysfs attribute tracks the number
of slab pages whose physical memory has been reclaimed but whose virtual
memory is still allocated to a kmem_cache.
Signed-off-by: Jann Horn
Co-developed-by: Matteo Rizzo
Signed-off-by: Matteo Rizzo
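A hedged sketch of what such a read-only SLUB sysfs attribute typically looks like; the counter name and where it is stored are assumptions, not the patch's definitions:

/*
 * Illustrative only: SLUB sysfs attributes are usually a _show() helper
 * plus SLAB_ATTR_RO(). nr_free_vm_slabs is an assumed name for "pages
 * reclaimed physically but still mapped virtually".
 */
static ssize_t deallocated_pages_show(struct kmem_cache *s, char *buf)
{
	return sysfs_emit(buf, "%lu\n", READ_ONCE(s->nr_free_vm_slabs));
}
SLAB_ATTR_RO(deallocated_pages);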
From: Jann Horn
Sanity-check that:
- non-NULL freepointers point into the slab
- freepointers look plausibly aligned
Signed-off-by: Jann Horn
Co-developed-by: Matteo Rizzo
Signed-off-by: Matteo Rizzo
---
lib/slub_kunit.c | 4
mm/slab.h | 8 +++
mm/slub.c | 57
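A hedged sketch of the two checks listed in the sanity-check message above; the helper names, object-boundary test, and report path are assumptions rather than the patch's exact code:

/*
 * Illustrative only: reject decoded freepointers that do not point into
 * the slab they were read from, or that do not land on an object
 * boundary. slab_address()/slab_size()-style helpers are assumed.
 */
static bool freepointer_looks_sane(struct kmem_cache *s, struct slab *slab,
				   void *fp)
{
	unsigned long start = (unsigned long)slab_address(slab);
	unsigned long end = start + slab_size(slab);
	unsigned long p = (unsigned long)fp;

	if (!fp)
		return true;		/* NULL terminates the freelist */
	if (p < start || p >= end)
		return false;		/* points outside this slab */
	if ((p - start) % s->size)
		return false;		/* not plausibly aligned to an object */
	return true;
}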
From: Jann Horn
Document what SLAB_VIRTUAL is trying to do, how it's implemented, and
why.
Signed-off-by: Jann Horn
Co-developed-by: Matteo Rizzo
Signed-off-by: Matteo Rizzo
---
Documentation/security/self-protection.rst | 102 +
1 file changed, 102 insertions(+)
On Fri, 15 Sept 2023 at 23:50, Dave Hansen wrote:
>
> I have the feeling folks just grabbed the first big-ish chunk they saw
> free in the memory map and stole that one. Not a horrible approach,
> mind you, but I have the feeling it didn't go through the most rigorous
> sizing procedure. :)
>
> M
On Fri, 15 Sept 2023 at 18:30, Lameter, Christopher wrote:
>
> On Fri, 15 Sep 2023, Dave Hansen wrote:
>
> > What's the cost?
>
> The only thing that I see is 1-2% on kernel compilations (and "more on
> machines with lots of cores")?
I used kernel compilation time (wall clock time) as a benchmark
On Mon, 18 Sept 2023 at 19:39, Ingo Molnar wrote:
>
> What's the split of the increase in overhead due to SLAB_VIRTUAL=y, between
> user-space execution and kernel-space execution?
>
Same benchmark as before (compiling a kernel on a system running the patched
kernel):
Intel Skylake:
On Mon, 18 Sept 2023 at 20:05, Linus Torvalds wrote:
>
> ... and equally importantly, what about DMA?
I'm not exactly sure what you mean by this; I don't think this should
affect the performance of DMA.
> Or what about the fixed-size slabs (aka kmalloc?) What's the point of
> "never re-use the s
On Fri, 15 Sept 2023 at 23:57, Dave Hansen wrote:
>
> I assume that the TLB flushes in the queue are going to be pretty sparse
> on average.
>
> At least on x86, flush_tlb_kernel_range() falls back pretty quickly from
> individual address invalidation to just doing a full flush. It might
> not ev
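For reference, a hedged sketch of the fallback behaviour Dave describes: x86's flush_tlb_kernel_range() only does per-page invalidations for small ranges and otherwise flushes everything, so sparse-but-wide batches tend to turn into full flushes. The threshold name matches the mainline tunable, but treat the logic below as a simplification of the real function:

/*
 * Simplified sketch of the behaviour discussed above: small ranges get
 * per-page invalidations on every CPU, larger ranges become a full TLB
 * flush. Mainline's flush_tlb_kernel_range() is more involved.
 */
void flush_tlb_kernel_range_sketch(unsigned long start, unsigned long end)
{
	unsigned long npages = (end - start) >> PAGE_SHIFT;

	if (npages > tlb_single_page_flush_ceiling) {
		on_each_cpu(do_flush_tlb_all, NULL, 1);	/* full flush everywhere */
	} else {
		/* flush each page in the range individually on every CPU */
		struct flush_tlb_info info = { .start = start, .end = end };

		on_each_cpu(do_kernel_range_flush, &info, 1);
	}
}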