On Thu, Sep 14, 2023 at 03:47:59PM +0200, Andrew Jones wrote:
> On Sun, Sep 10, 2023 at 04:28:57AM -0400, guo...@kernel.org wrote:
> > From: Guo Ren
> >
> > Cache-block prefetch instructions are HINTs to the hardware to
> > indicate that software intends to perform a particular type of
> > memory
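For reference, a minimal sketch of how such a prefetch HINT can be emitted even where the toolchain lacks Zicbop support (the macro name is illustrative, not the one from the patch):

/*
 * Illustrative sketch only: PREFETCH.W from Zicbop is a HINT in the
 * ORI (OP-IMM, funct3=6) encoding space with rd=x0, so it can be
 * emitted via .insn on assemblers without Zicbop mnemonics.  The low
 * five immediate bits select the prefetch type (3 = prefetch for
 * write); the cache-block offset is simply 0 here.
 */
#define prefetch_w_hint(base)						\
	asm volatile(".insn i 0x13, 6, x0, %0, 3"			\
		     : : "r" (base) : "memory")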
On Fri, Sep 15, 2023 at 10:10:25AM +0800, Guo Ren wrote:
> On Thu, Sep 14, 2023 at 5:43 PM Leonardo Bras wrote:
> >
> > On Thu, Sep 14, 2023 at 12:46:56PM +0800, Guo Ren wrote:
> > > On Thu, Sep 14, 2023 at 4:29 AM Leonardo Bras wrote:
> > > >
> > > > On Sun, Sep 10, 2023 at 04:28:59AM -0400, guo
The goal of this patch series is to deterministically prevent cross-cache
attacks in the SLUB allocator.
Use-after-free bugs are normally exploited by making the memory allocator
reuse the victim object's memory for an object with a different type. This
creates a type confusion which is a very pow
slab_free_freelist_hook tries to read a freelist pointer from the
current object even when freeing a single object. This is invalid
because single objects don't actually contain a freelist pointer when
they're freed and the memory contains other data. This causes problems
for checking the integrity
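A minimal sketch of the idea behind the fix (simplified, not the actual mm/slub.c change; the function name is made up):

/*
 * Only bulk frees walk a chained freelist; a single freed object does
 * not yet contain a freelist pointer, so do not read one from it.
 */
static void *free_hook_next(struct kmem_cache *s, void *object, int cnt)
{
	void *next = NULL;

	if (cnt > 1)
		next = get_freepointer(s, object);

	/* debug checks / poisoning of @object would happen here */
	return next;
}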
From: Jann Horn
This is refactoring in preparation for adding two different
implementations (for SLAB_VIRTUAL enabled and disabled).
virt_to_folio(x) expands to _compound_head(virt_to_page(x)) and
virt_to_head_page(x) also expands to _compound_head(virt_to_page(x))
so PageSlab(virt_to_head_page
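The kind of substitution being prepared for could look roughly like this (illustrative wrapper, not the patch itself):

/*
 * Since virt_to_folio(x) and virt_to_head_page(x) both expand to
 * _compound_head(virt_to_page(x)), a PageSlab(virt_to_head_page(x))
 * check can be expressed through the folio API instead.
 */
static inline bool addr_is_slab_memory(const void *x)
{
	return folio_test_slab(virt_to_folio(x));
}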
From: Jann Horn
This is refactoring for SLAB_VIRTUAL. The implementation needs to know
the order of the virtual memory region allocated to each slab to know
how much physical memory to allocate when the slab is reused. We reuse
kmem_cache_order_objects for this, so we have to move it before struc
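For reference, the structure in question packs the slab's page order and object count into one word, roughly as below (simplified from mm/slub.c; details may differ):

struct kmem_cache_order_objects {
	unsigned int x;
};

#define OO_SHIFT	16
#define OO_MASK		((1 << OO_SHIFT) - 1)

static inline unsigned int oo_order(struct kmem_cache_order_objects x)
{
	return x.x >> OO_SHIFT;		/* page order of the slab */
}

static inline unsigned int oo_objects(struct kmem_cache_order_objects x)
{
	return x.x & OO_MASK;		/* objects per slab */
}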
From: Jann Horn
This is refactoring in preparation for the introduction of SLAB_VIRTUAL
which does not implement folio_slab.
With SLAB_VIRTUAL there is no longer a 1:1 correspondence between slabs
and pages of physical memory used by the slab allocator. There is no way
to look up the slab which
From: Jann Horn
This is refactoring in preparation for SLAB_VIRTUAL. Extract this code
to separate functions so that it's not duplicated in the code that
allocates and frees pages with SLAB_VIRTUAL enabled.
Signed-off-by: Jann Horn
Co-developed-by: Matteo Rizzo
Signed-off-by: Matteo Rizzo
---
From: Jann Horn
This is refactoring in preparation for SLAB_VIRTUAL.
The implementation of SLAB_VIRTUAL needs access to struct kmem_cache in
alloc_slab_page in order to take unused slabs from the slab freelist,
which is per-cache.
In addition to that it passes two different sets of GFP flags.
m
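The shape of the change is roughly the following (hypothetical prototype, not the actual diff):

/*
 * alloc_slab_page() gains the cache pointer so that a SLAB_VIRTUAL
 * implementation can first try to reuse a slab from the cache's own
 * list of unused slabs before allocating fresh pages.
 */
static struct slab *alloc_slab_page(struct kmem_cache *s, gfp_t flags,
				    int node, struct kmem_cache_order_objects oo);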
From: Jann Horn
This is refactoring in preparation for checking freeptrs for corruption
inside freelist_ptr_decode().
Signed-off-by: Jann Horn
Co-developed-by: Matteo Rizzo
Signed-off-by: Matteo Rizzo
---
mm/slub.c | 43 +++
1 file changed, 23 insertio
From: Jann Horn
SLAB_VIRTUAL is a mitigation for the SLUB allocator which prevents reuse
of virtual addresses across different slab caches and therefore makes
some types of use-after-free bugs unexploitable.
SLAB_VIRTUAL is incompatible with KASAN and we believe it's not worth
adding support for
From: Jann Horn
With SLAB_VIRTUAL enabled, unused slabs which still have virtual memory
allocated to them but no physical memory are kept in a per-cache list so
that they can be reused later if the cache needs to grow again.
Signed-off-by: Jann Horn
Co-developed-by: Matteo Rizzo
Signed-off-by:
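The per-cache bookkeeping described above could look roughly like this (struct and field names are guesses, not taken from the patch):

/* Slabs that still own a virtual address range but whose backing
 * pages have been freed, kept so the range can be reused later. */
struct kmem_cache_unused_slabs {
	spinlock_t lock;
	struct list_head slabs;
};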
From: Jann Horn
SLAB_VIRTUAL reserves 512 GiB of virtual memory and uses them for both
struct slab and the actual slab memory. The pointers returned by
kmem_cache_alloc will point to this range of memory.
Signed-off-by: Jann Horn
Co-developed-by: Matteo Rizzo
Signed-off-by: Matteo Rizzo
---
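For scale, 512 GiB is 2^39 bytes, the amount of address space covered by a single top-level (PGD) entry with 4-level paging on x86-64; the constant name below is illustrative, not from the patch:

#define SLAB_VIRTUAL_AREA_SIZE	(512UL << 30)	/* 2^39 = 0x8000000000 */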
From: Jann Horn
This is the main implementation of SLAB_VIRTUAL. With SLAB_VIRTUAL
enabled, slab memory is not allocated from the linear map but from a
dedicated region of virtual memory. The code ensures that once a range
of virtual addresses is assigned to a slab cache, that virtual memory is
n
From: Jann Horn
When SLAB_VIRTUAL is enabled this new sysfs attribute tracks the number
of slab pages whose physical memory has been reclaimed but whose virtual
memory is still allocated to a kmem_cache.
Signed-off-by: Jann Horn
Co-developed-by: Matteo Rizzo
Signed-off-by: Matteo Rizzo
---
i
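A read-only SLUB sysfs attribute typically has the following shape; the attribute name and the counting helper below are hypothetical, not necessarily what the patch adds:

static ssize_t slabs_no_phys_show(struct kmem_cache *s, char *buf)
{
	/* count_unbacked_virtual_slabs() is a made-up helper */
	return sysfs_emit(buf, "%lu\n", count_unbacked_virtual_slabs(s));
}
SLAB_ATTR_RO(slabs_no_phys);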
From: Jann Horn
Sanity-check that:
- non-NULL freepointers point into the slab
- freepointers look plausibly aligned
Signed-off-by: Jann Horn
Co-developed-by: Matteo Rizzo
Signed-off-by: Matteo Rizzo
---
lib/slub_kunit.c |  4
mm/slab.h        |  8 +++
mm/slub.c        | 57 +
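A simplified sketch of the checks described above (not the patch itself; red-zone/padding details are ignored):

static bool freeptr_looks_valid(struct kmem_cache *s, struct slab *slab,
				void *freeptr)
{
	unsigned long offset;

	if (!freeptr)
		return true;	/* NULL terminates the freelist */

	offset = (unsigned long)freeptr - (unsigned long)slab_address(slab);
	if (offset >= slab->objects * (unsigned long)s->size)
		return false;	/* points outside this slab */

	return (offset % s->size) == 0;	/* plausibly object-aligned */
}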
From: Jann Horn
Document what SLAB_VIRTUAL is trying to do, how it's implemented, and
why.
Signed-off-by: Jann Horn
Co-developed-by: Matteo Rizzo
Signed-off-by: Matteo Rizzo
---
Documentation/security/self-protection.rst | 102 +
1 file changed, 102 insertions(+)
diff --
On Fri, Sep 15, 2023 at 05:22:26AM -0300, Leonardo Bras wrote:
> On Thu, Sep 14, 2023 at 03:47:59PM +0200, Andrew Jones wrote:
> > On Sun, Sep 10, 2023 at 04:28:57AM -0400, guo...@kernel.org wrote:
> > > From: Guo Ren
...
> > > diff --git a/arch/riscv/include/asm/insn-def.h
> > > b/arch/riscv/inc
On Fri, Sep 15, 2023 at 01:07:40PM +0200, Andrew Jones wrote:
> On Fri, Sep 15, 2023 at 05:22:26AM -0300, Leonardo Bras wrote:
> > On Thu, Sep 14, 2023 at 03:47:59PM +0200, Andrew Jones wrote:
> > > On Sun, Sep 10, 2023 at 04:28:57AM -0400, guo...@kernel.org wrote:
> > > > From: Guo Ren
> ...
> >
Yo,
On Thu, Sep 14, 2023 at 04:47:18PM +0200, Andrew Jones wrote:
> On Thu, Sep 14, 2023 at 04:25:53PM +0200, Andrew Jones wrote:
> > On Sun, Sep 10, 2023 at 04:28:57AM -0400, guo...@kernel.org wrote:
> > > From: Guo Ren
> > >
> > > Cache-block prefetch instructions are HINTs to the hardware to
On Fri, Sep 15, 2023 at 12:37:50PM +0100, Conor Dooley wrote:
> Yo,
>
> On Thu, Sep 14, 2023 at 04:47:18PM +0200, Andrew Jones wrote:
> > On Thu, Sep 14, 2023 at 04:25:53PM +0200, Andrew Jones wrote:
> > > On Sun, Sep 10, 2023 at 04:28:57AM -0400, guo...@kernel.org wrote:
> > > > From: Guo Ren
>
On Fri, Sep 15, 2023 at 12:26:20PM +0100, Conor Dooley wrote:
> On Fri, Sep 15, 2023 at 01:07:40PM +0200, Andrew Jones wrote:
> > On Fri, Sep 15, 2023 at 05:22:26AM -0300, Leonardo Bras wrote:
> > > On Thu, Sep 14, 2023 at 03:47:59PM +0200, Andrew Jones wrote:
> > > > On Sun, Sep 10, 2023 at 04:28:
On Wed, Sep 13, 2023 at 4:50 PM Leonardo Bras wrote:
>
> On Sun, Sep 10, 2023 at 04:28:57AM -0400, guo...@kernel.org wrote:
> > From: Guo Ren
> >
> > Cache-block prefetch instructions are HINTs to the hardware to
> > indicate that software intends to perform a particular type of
> > memory access
> > If this isn't being used in a similar manner, then the w has no reason
> > to be in the odd lowercase form.
>
> Other than to be consistent... However, the CBO_* instructions are not
> consistent with the rest of the macros. If we don't need lowercase for any
> reason, then my preference would be
On Fri, Sep 15, 2023 at 02:14:40PM +0200, Andrew Jones wrote:
> On Fri, Sep 15, 2023 at 12:37:50PM +0100, Conor Dooley wrote:
> > On Thu, Sep 14, 2023 at 04:47:18PM +0200, Andrew Jones wrote:
> > > On Thu, Sep 14, 2023 at 04:25:53PM +0200, Andrew Jones wrote:
> > > > On Sun, Sep 10, 2023 at 04:28:5
On 9/15/23 03:59, Matteo Rizzo wrote:
> The goal of this patch series is to deterministically prevent cross-cache
> attacks in the SLUB allocator.
What's the cost?
On 9/14/23 04:27, Alessandro Carminati (Red Hat) wrote:
> Update kernel-parameters.txt to reflect the new deferred signature
> verification.
> Enhances boot speed by allowing unsigned modules in the initrd after
> the bootloader check.
>
> Signed-off-by: Alessandro Carminati (Red Hat)
> ---
> Documentati
On Fri, 15 Sep 2023, Dave Hansen wrote:
> On 9/15/23 03:59, Matteo Rizzo wrote:
> > The goal of this patch series is to deterministically prevent cross-cache
> > attacks in the SLUB allocator.
> What's the cost?
The only thing that I see is 1-2% on kernel compilations (and "more on
machines with lots
On Fri, Sep 15, 2023 at 01:07:40PM +0200, Andrew Jones wrote:
> On Fri, Sep 15, 2023 at 05:22:26AM -0300, Leonardo Bras wrote:
> > On Thu, Sep 14, 2023 at 03:47:59PM +0200, Andrew Jones wrote:
> > > On Sun, Sep 10, 2023 at 04:28:57AM -0400, guo...@kernel.org wrote:
> > > > From: Guo Ren
> ...
> >
On Fri, Sep 15, 2023 at 10:59:20AM +0000, Matteo Rizzo wrote:
> slab_free_freelist_hook tries to read a freelist pointer from the
> current object even when freeing a single object. This is invalid
> because single objects don't actually contain a freelist pointer when
> they're freed and the memor
On Fri, Sep 15, 2023 at 10:59:21AM +0000, Matteo Rizzo wrote:
> From: Jann Horn
>
> This is refactoring in preparation for adding two different
> implementations (for SLAB_VIRTUAL enabled and disabled).
>
> virt_to_folio(x) expands to _compound_head(virt_to_page(x)) and
> virt_to_head_page(x) al
On Fri, Sep 15, 2023 at 10:59:22AM +0000, Matteo Rizzo wrote:
> From: Jann Horn
>
> This is refactoring for SLAB_VIRTUAL. The implementation needs to know
> the order of the virtual memory region allocated to each slab to know
> how much physical memory to allocate when the slab is reused. We reu
On Fri, Sep 15, 2023 at 10:59:23AM +0000, Matteo Rizzo wrote:
> From: Jann Horn
>
> This is refactoring in preparation for the introduction of SLAB_VIRTUAL
> which does not implement folio_slab.
>
> With SLAB_VIRTUAL there is no longer a 1:1 correspondence between slabs
> and pages of physical m
On Fri, Sep 15, 2023 at 10:59:24AM +0000, Matteo Rizzo wrote:
> From: Jann Horn
>
> This is refactoring in preparation for SLAB_VIRTUAL. Extract this code
> to separate functions so that it's not duplicated in the code that
> allocates and frees pages with SLAB_VIRTUAL enabled.
>
> Signed-off-by:
On Fri, Sep 15, 2023 at 10:59:25AM +0000, Matteo Rizzo wrote:
> From: Jann Horn
>
> This is refactoring in preparation for SLAB_VIRTUAL.
>
> The implementation of SLAB_VIRTUAL needs access to struct kmem_cache in
> alloc_slab_page in order to take unused slabs from the slab freelist,
> which is
On Fri, Sep 15, 2023 at 10:59:27AM +0000, Matteo Rizzo wrote:
> From: Jann Horn
>
> SLAB_VIRTUAL is a mitigation for the SLUB allocator which prevents reuse
> of virtual addresses across different slab caches and therefore makes
> some types of use-after-free bugs unexploitable.
>
> SLAB_VIRTUAL
On Fri, Sep 15, 2023 at 10:59:26AM +0000, Matteo Rizzo wrote:
> From: Jann Horn
>
> This is refactoring in preparation for checking freeptrs for corruption
> inside freelist_ptr_decode().
>
> Signed-off-by: Jann Horn
> Co-developed-by: Matteo Rizzo
> Signed-off-by: Matteo Rizzo
> ---
> mm/sl
On Fri, Sep 15, 2023 at 10:59:28AM +0000, Matteo Rizzo wrote:
> From: Jann Horn
>
> With SLAB_VIRTUAL enabled, unused slabs which still have virtual memory
> allocated to them but no physical memory are kept in a per-cache list so
> that they can be reused later if the cache needs to grow again.
On Fri, Sep 15, 2023 at 10:59:29AM +0000, Matteo Rizzo wrote:
> From: Jann Horn
>
> SLAB_VIRTUAL reserves 512 GiB of virtual memory and uses them for both
> struct slab and the actual slab memory. The pointers returned by
> kmem_cache_alloc will point to this range of memory.
I think the 512 GiB
On Fri, Sep 15, 2023 at 10:59:30AM +0000, Matteo Rizzo wrote:
> From: Jann Horn
>
> This is the main implementation of SLAB_VIRTUAL. With SLAB_VIRTUAL
> enabled, slab memory is not allocated from the linear map but from a
> dedicated region of virtual memory. The code ensures that once a range
>
On Fri, Sep 15, 2023 at 10:59:31AM +0000, Matteo Rizzo wrote:
> From: Jann Horn
>
> When SLAB_VIRTUAL is enabled this new sysfs attribute tracks the number
> of slab pages whose physical memory has been reclaimed but whose virtual
> memory is still allocated to a kmem_cache.
>
> Signed-off-by: J
On Fri, Sep 15, 2023 at 10:59:32AM +0000, Matteo Rizzo wrote:
> From: Jann Horn
>
> Sanity-check that:
> - non-NULL freepointers point into the slab
> - freepointers look plausibly aligned
>
> Signed-off-by: Jann Horn
> Co-developed-by: Matteo Rizzo
> Signed-off-by: Matteo Rizzo
> ---
> li
On Fri, Sep 15, 2023 at 10:59:33AM +0000, Matteo Rizzo wrote:
> From: Jann Horn
>
> Document what SLAB_VIRTUAL is trying to do, how it's implemented, and
> why.
>
> Signed-off-by: Jann Horn
> Co-developed-by: Matteo Rizzo
> Signed-off-by: Matteo Rizzo
> ---
> Documentation/security/self-prot
On 9/15/23 14:13, Kees Cook wrote:
> On Fri, Sep 15, 2023 at 10:59:29AM +0000, Matteo Rizzo wrote:
>> From: Jann Horn
>>
>> SLAB_VIRTUAL reserves 512 GiB of virtual memory and uses them for both
>> struct slab and the actual slab memory. The pointers returned by
>> kmem_cache_alloc will point to t
On 9/15/23 03:59, Matteo Rizzo wrote:
> + spin_lock_irqsave(&slub_kworker_lock, irq_flags);
> + list_splice_init(&slub_tlbflush_queue, &local_queue);
> + list_for_each_entry(slab, &local_queue, flush_list_elem) {
> + unsigned long start = (unsigned long)slab_to_virt(slab);
>
> ould've done it to begin with. Maybe there was some
> other consideration at the time.
FWIW, I sent a patch for this earlier today. I figure you saw it, Drew,
but nonetheless:
https://lore.kernel.org/all/20230915-aloe-dollar-99493746@spud/
On Fri, Sep 15, 2023 at 08:36:31PM +0800, Guo Ren wrote:
> On Wed, Sep 13, 2023 at 4:50 PM Leonardo Bras wrote:
> >
> > On Sun, Sep 10, 2023 at 04:28:57AM -0400, guo...@kernel.org wrote:
> > > From: Guo Ren
> > >
> > > Cache-block prefetch instructions are HINTs to the hardware to
> > > indicate