On Tue, 2025-04-08 at 19:49 +0100, Mark Brown wrote:
> +int arch_shstk_validate_clone(struct task_struct *t,
> + struct vm_area_struct *vma,
> + struct page *page,
> + struct kernel_clone_args *args)
> +{
> + /*
> +
On Fri, 2024-11-01 at 12:30 +0000, Mark Brown wrote:
> > Where can I find this base commit?
>
> Ah, that's still my branch from when I posted what's now applied in the
> arm64 tree, this is the same code:
>
> https://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-next/gcs
>
>
On Thu, 2024-10-31 at 19:25 +0000, Mark Brown wrote:
> ---
> base-commit: d17cd7b7cc92d37ee8b2df8f975fc859a261f4dc
Where can I find this base commit?
> change-id: 20231019-clone3-shadow-stack-15d40d2bf536
On Wed, 2024-10-02 at 22:01 +0100, Mark Brown wrote:
> BTW it's probably also worth noting that at least on arm64 (perhaps x86
> is different here?) the shadow stack of a thread that exited won't have
> a token placed on it so it won't be possible to use it with clone3() at
> all unless another tok
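For background, the way userspace normally obtains a shadow stack that does carry a consumable token is the map_shadow_stack() syscall. A minimal sketch, assuming the x86 uapi flag name and syscall number (arm64/GCS exposes an equivalent flag); illustration only:

#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>

#ifndef __NR_map_shadow_stack
#define __NR_map_shadow_stack 453
#endif
#ifndef SHADOW_STACK_SET_TOKEN
#define SHADOW_STACK_SET_TOKEN (1ULL << 0)   /* write a restore token at the top */
#endif

int main(void)
{
        long ssp;

        /* Let the kernel pick the address; ask for a token so the stack can
         * later be consumed, e.g. by a clone3() variant that takes one. */
        ssp = syscall(__NR_map_shadow_stack, 0, 0x20000, SHADOW_STACK_SET_TOKEN);
        if (ssp == -1) {
                perror("map_shadow_stack");
                return 1;
        }
        printf("shadow stack with token at %#lx\n", ssp);
        return 0;
}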
On Tue, 2024-10-01 at 18:33 +0100, Mark Brown wrote:
> > > A shadow stack size is more symmetric on the surface, but I'm not sure it will
> > > be easier for userspace to handle. So I think we should just have a pointer to
> > > the token. But it will be a usable implementation either w
On Fri, 2024-09-27 at 10:50 +0200, Christian Brauner wrote:
> The legacy clone system call had required userspace to know in which
> direction the stack was growing and then pass down the stack pointer
> appropriately (e.g., parisc grows upwards).
>
> And in fact, the old clone() system call did t
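For reference, this is the property clone3() fixes: the caller passes the stack base and size, and the kernel derives the initial stack pointer for the architecture's growth direction. A minimal userspace sketch using the uapi names from linux/sched.h:

#include <linux/sched.h>     /* struct clone_args */
#include <signal.h>
#include <stdint.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef __NR_clone3
#define __NR_clone3 435
#endif

static pid_t clone3_with_stack(void *stack_base, size_t stack_size)
{
        struct clone_args args;

        memset(&args, 0, sizeof(args));
        args.exit_signal = SIGCHLD;
        /* Base and size only; no need to know whether the stack grows up
         * (parisc) or down (everything else). */
        args.stack = (uint64_t)(uintptr_t)stack_base;
        args.stack_size = stack_size;

        return syscall(__NR_clone3, &args, sizeof(args));
}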
>
> Adding new functions to update values on shadow stack and using
> them in uprobe code to keep shadow stack in sync with uretprobe
> changes to user stack.
>
> Fixes: 8b1c23543436 ("x86/shstk: Add return uprobe support")
> Signed-off-by: Jiri Olsa
> ---
Acked-by: Rick Edgecombe
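The idea, roughly: when uprobe code rewrites a return address on the normal stack, the same value has to be written into the matching shadow stack slot, otherwise the eventual RET trips a control-protection fault. A sketch of the shape of such a helper, with names borrowed loosely from arch/x86/kernel/shstk.c internals; illustrative only, not the applied patch:

/*
 * Hypothetical sketch: make the top shadow stack entry match the return
 * address that uretprobe code just planted on the normal stack.
 * shstk_is_enabled(), get_user_shstk_addr() and write_user_shstk_64()
 * stand in for whatever primitives the architecture provides for
 * touching user shadow stacks.
 */
static int shstk_sync_top_entry(unsigned long new_ret_addr)
{
        unsigned long ssp;

        if (!shstk_is_enabled(current))
                return 0;

        ssp = get_user_shstk_addr();             /* current task's SSP */
        if (!ssp)
                return -EFAULT;

        /* Overwrite the slot holding the original return address. */
        return write_user_shstk_64((u64 __user *)ssp, new_ret_addr);
}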
On Mon, 2024-05-20 at 00:18 +0200, Jiri Olsa wrote:
> anyway I think we can fix that in another way by using the optimized trampoline,
> but returning to the user space through iret when shadow stack is detected
> (as I did in the first version, before you adjusted it to the sysret path).
>
> we
On Wed, 2024-05-15 at 17:26 +0200, Oleg Nesterov wrote:
> > I think it will crash, there's explanation in the comment in
> > tools/testing/selftests/x86/test_shadow_stack.c test
>
> OK, thanks...
>
> But test_shadow_stack.c doesn't do ARCH_PRCTL(ARCH_SHSTK_DISABLE) if
> all the tests succeed ? Co
On Wed, 2024-05-15 at 08:36 -0600, Jiri Olsa wrote:
> >
> > Let me ask a couple of really stupid questions. What if the shadow stack
> > is "shorter" than the normal stack? I mean,
The shadow stack could overflow if it is not big enough. However, since the
normal stack has return addresses and dat
On Wed, 2024-05-15 at 13:35 +0200, Oleg Nesterov wrote:
> Let me repeat I know nothing about shadow stacks, only tried to
> read Documentation/arch/x86/shstk.rst few minutes ago ;)
>
> On 05/13, Jiri Olsa wrote:
> >
> > 1) current uretprobe which are not working at the moment and we change
> >
On Mon, 2024-05-13 at 15:23 -0600, Jiri Olsa wrote:
> so at the moment the patch 6 changes shadow stack for
>
> 1) current uretprobe which are not working at the moment and we change
> the top value of shadow stack with shstk_push_frame
> 2) optimized uretprobe which needs to push new frame on
On Mon, 2024-05-13 at 18:50 +0900, Masami Hiramatsu wrote:
> > I guess it's doable, we'd need to keep both trampolines around, because
> > shadow stack is enabled by app dynamically and use one based on the
> > state of shadow stack when uretprobe is installed
> >
> > so you're worried the optimiz
On Thu, 2024-05-09 at 10:30 +0200, Jiri Olsa wrote:
> > Per the earlier discussion, this cannot be reached unless uretprobes are in use,
> > which cannot happen without something with privileges taking an action. But are
> > uretprobes ever used for monitoring applications where security is
On Tue, 2024-05-07 at 12:53 +0200, Jiri Olsa wrote:
> diff --git a/arch/x86/kernel/uprobes.c b/arch/x86/kernel/uprobes.c
> index 81e6ee95784d..ae6c3458a675 100644
> --- a/arch/x86/kernel/uprobes.c
> +++ b/arch/x86/kernel/uprobes.c
> @@ -406,6 +406,11 @@ SYSCALL_DEFINE0(uretprobe)
> * tramp
On Mon, 2024-05-06 at 09:18 -0700, Christoph Hellwig wrote:
> > On Mon, May 06, 2024 at 09:07:47AM -0700, Rick Edgecombe wrote:
> > > > if (flags & MAP_FIXED) {
> > > > /* Ok, don't mess with it. */
> > > > -
On Mon, 2024-05-06 at 12:32 -0400, Liam R. Howlett wrote:
>
> I like this patch.
Thanks for taking a look.
>
> I think the context of current->mm is implied. IOW, could we call it
> get_unmapped_area() instead? There are other functions today that use
> current->mm that don't start with curren
Fixes: 529ce23a764f ("mm: switch mm->get_unmapped_area() to a flag")
Suggested-by: Dan Williams
Signed-off-by: Rick Edgecombe
Link: https://lore.kernel.org/lkml/6603bed6662a_4a98a29...@dwillia2-mobl3.amr.corp.intel.com.notmuch/
---
Based on linux-next.
---
arch/sparc/kernel/sys_sparc_64.c | 9
On Fri, 2024-05-03 at 22:17 +0200, Jiri Olsa wrote:
> when uretprobe is created, kernel overwrites the return address on user
> stack to point to user space trampoline, so the setup is in kernel hands
I mean for uprobes in general. I didn't have any specific ideas in mind, but
in general when we
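To make the mechanism concrete, a toy userspace model (plain C, not kernel code) of what "overwrite the return address on the user stack to point to the trampoline" means:

#include <stdint.h>
#include <stdio.h>

/* Toy model: ret_slot stands in for the stack slot holding the saved
 * return address. Installing a uretprobe records the original value and
 * redirects the return into a trampoline; the real kernel keeps the
 * original per task and restores it once the probe handler has run. */
static uint64_t hijack_return(uint64_t *ret_slot, uint64_t trampoline)
{
        uint64_t orig = *ret_slot;
        *ret_slot = trampoline;
        return orig;
}

int main(void)
{
        uint64_t ret_slot = 0x401234;                 /* pretend caller address */
        uint64_t orig = hijack_return(&ret_slot, 0x7ffff7ff9000);

        printf("slot now %#llx, original %#llx saved for later\n",
               (unsigned long long)ret_slot, (unsigned long long)orig);
        return 0;
}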
+Some more shadow stack folks from other archs. We are discussing how uretprobes
work with shadow stack.
Context:
https://lore.kernel.org/lkml/ZjU4ganRF1Cbiug6@krava/
On Fri, 2024-05-03 at 21:18 +0200, Jiri Olsa wrote:
>
> hack below seems to fix it for the current uprobe setup,
> we need simila
On Fri, 2024-05-03 at 15:04 +0200, Jiri Olsa wrote:
> On Fri, May 03, 2024 at 01:34:53PM +0200, Peter Zijlstra wrote:
> > On Thu, May 02, 2024 at 02:23:08PM +0200, Jiri Olsa wrote:
> > > Adding uretprobe syscall instead of trap to speed up return probe.
> > >
> > > At the moment the uretprobe setu
On Wed, 2024-03-27 at 15:15 +0200, Jarkko Sakkinen wrote:
> I mean I believe the change itself makes sense, it is just not
> fully documented in the commit message.
Ah, I see. Yes, there could be more background on arch_pick_mmap_layout().
On Tue, 2024-03-26 at 23:38 -0700, Dan Williams wrote:
> > +unsigned long
> > +mm_get_unmapped_area(struct mm_struct *mm, struct file *file,
> > + unsigned long addr, unsigned long len,
> > + unsigned long pgoff, unsigned long flags)
> > +{
>
> Seems like a sm
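For call sites, the conversion looks roughly like this (kernel context, sketch only): the mm becomes an explicit argument instead of being implied via current.

/* Before: the mm is implicit via current. */
addr = current->mm->get_unmapped_area(file, addr, len, pgoff, flags);

/* After: the mm is explicit, matching the signature quoted above, so
 * callers are no longer tied to current->mm. */
addr = mm_get_unmapped_area(current->mm, file, addr, len, pgoff, flags);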
On Tue, 2024-03-26 at 13:57 +0200, Jarkko Sakkinen wrote:
> In which conditions which path is used during the initialization of mm
> and why is this the case? It is an open claim in the current form.
There is an arch_pick_mmap_layout() that arches can have their own rules for.
There is also a
gen
y actual size reductions in the compiled layout of mm_struct.
But depending on compiler or arch alignment requirements, the change could
shrink the size of mm_struct.
Signed-off-by: Rick Edgecombe
Acked-by: Dave Hansen
Acked-by: Liam R. Howlett
Reviewed-by: Kirill A. Shutemov
Cc: linux-s...@v
On Mon, 2023-09-18 at 10:29 +0300, Mike Rapoport wrote:
> +/**
> + * struct execmem_range - definition of a memory range suitable for code and
> + * related data allocations
> + * @start: address space start
> + * @end: address space end (inclusive)
> + * @pgprot:
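Pieced together from the kerneldoc above, the descriptor is roughly the following (sketch only; the real series carries additional fields such as alignment and fallback ranges):

/* Sketch reconstructed from the quoted kerneldoc, not the exact upstream layout. */
struct execmem_range {
        unsigned long   start;          /* address space start */
        unsigned long   end;            /* address space end (inclusive) */
        pgprot_t        pgprot;         /* protections for code and related data */
};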
On Thu, 2023-10-05 at 08:26 +0300, Mike Rapoport wrote:
> On Wed, Oct 04, 2023 at 03:39:26PM +0000, Edgecombe, Rick P wrote:
> > On Tue, 2023-10-03 at 17:29 -0700, Rick Edgecombe wrote:
> > > It seems a bit weird to copy all of this. Is it trying to be
> > > fast
On Tue, 2023-10-03 at 17:29 -0700, Rick Edgecombe wrote:
> It seems a bit weird to copy all of this. Is it trying to be faster or
> something?
>
> Couldn't it just check r->start in execmem_text/data_alloc() path and
> switch to EXECMEM_DEFAULT if needed then? The exec
On Mon, 2023-09-18 at 10:29 +0300, Mike Rapoport wrote:
> +
> +static void execmem_init_missing(struct execmem_params *p)
> +{
> + struct execmem_range *default_range = &p->ranges[EXECMEM_DEFAULT];
> +
> + for (int i = EXECMEM_DEFAULT + 1; i < EXECMEM_TYPE_MAX; i++) {
> +
On Mon, 2023-09-18 at 10:29 +0300, Mike Rapoport wrote:
> diff --git a/arch/x86/kernel/module.c b/arch/x86/kernel/module.c
> index 5f71a0cf4399..9d37375e2f05 100644
> --- a/arch/x86/kernel/module.c
> +++ b/arch/x86/kernel/module.c
> @@ -19,6 +19,7 @@
> #include
> #include
> #include
> +#inclu
On Mon, 2021-04-05 at 22:32 +0100, Matthew Wilcox wrote:
> On Mon, Apr 05, 2021 at 02:01:58PM -0700, Dave Hansen wrote:
> > On 4/5/21 1:37 PM, Rick Edgecombe wrote:
> > > +static void __dispose_pages(struct list_head *head)
> > > +{
> > >
On Mon, 2021-04-05 at 14:01 -0700, Dave Hansen wrote:
> On 4/5/21 1:37 PM, Rick Edgecombe wrote:
> > +static void __dispose_pages(struct list_head *head)
> > +{
> > + struct list_head *cur, *next;
> > +
> > + list_for_each_safe(cur, next, head) {
aware list_lru's since it is not needed by the
intended caller.
Signed-off-by: Rick Edgecombe
---
include/linux/list_lru.h | 13 +
mm/list_lru.c | 28
2 files changed, 41 insertions(+)
diff --git a/include/linux/list_lru.h b/include/
Callers of module_alloc() will set permissions on the allocation. Use
the VM_GROUP_PAGES flag to reduce direct map breakage.
Signed-off-by: Rick Edgecombe
---
arch/x86/kernel/module.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/kernel/module.c b/arch/x86/kernel
!CONFIG_BPF_JIT_ALWAYS_ON.
The new APIs and invasive changes in the callers can happen after vmalloc huge
pages bring more benefits. Although, I can post shootdown reduction changes with
previous comments integrated if anyone disagrees.
Based on v5.11.
Thanks,
Rick
Rick Edgecombe (3):
list: Support
c GFP flag that matches the
intended user of this vm_flag (module_alloc()). In the case of the vm
and GFP flags mismatching, fail the page allocation. In the case of a
huge page size page not being available, fallback to the normal page
allocator logic and use non-grouped pages.
Signed-off-by:
On Thu, 2021-02-04 at 11:34 +0100, Paolo Bonzini wrote:
> On 04/02/21 03:19, Sean Christopherson wrote:
> > Ah, took me a few minutes, but I see what you're saying. LAM will introduce
> > bits that are repurposed for CR3, but not generic GPAs. And, the behavior is
> > based on CPU support
On Wed, 2021-02-03 at 16:01 -0800, Sean Christopherson wrote:
>
> - unsigned long cr3_lm_rsvd_bits;
> + u64 reserved_gpa_bits;
LAM defines bits above the GFN in CR3:
https://software.intel.com/content/www/us/en/develop/download/intel-architecture-instruction-set-extensions-programmin
hings
so flags are less likely to be missed in the future.
Fixes: b944afc9d64d ("mm: add a VM_MAP_PUT_PAGES flag for vmap")
Suggested-by: Matthew Wilcox
Signed-off-by: Rick Edgecombe
---
[v2]
Changed comment format as suggested by Matthew and listed him as Suggested-by.
Dropped Revie
On Thu, 2021-01-21 at 13:19 +0000, Matthew Wilcox wrote:
> On Wed, Jan 20, 2021 at 05:41:18PM -0800, Rick Edgecombe wrote:
> > When VM_MAP_PUT_PAGES was added, it was defined with the same value as
> > VM_FLUSH_RESET_PERMS. This doesn't seem like it will cause any b
mment
and remove whitespace for VM_KASAN such that the flags lower down are less
likely to be missed in the future.
Fixes: b944afc9d64d ("mm: add a VM_MAP_PUT_PAGES flag for vmap")
Signed-off-by: Rick Edgecombe
---
include/linux/vmalloc.h | 6 ++
1 file changed, 2 insertions(+), 4 del
On Fri, 2020-12-04 at 15:24 -0800, Sean Christopherson wrote:
> On Fri, Nov 20, 2020, Rick Edgecombe wrote:
> > +struct perm_allocation {
> > + struct page **pages;
> > + virtual_perm cur_perm;
> > + virtual_perm orig_perm;
> > + struct vm_struct *a
On Fri, 2020-12-04 at 18:12 +1000, Nicholas Piggin wrote:
> Excerpts from Edgecombe, Rick P's message of December 1, 2020 6:21
> am:
> > On Sun, 2020-11-29 at 01:25 +1000, Nicholas Piggin wrote:
> > > Support huge page vmalloc mappings. Config option
> > > H
essed via TDP. So zap based on a maximum gfn calculated with MAXPHYADDR
retrieved from CPUID. This is already stored in shadow_phys_bits, so use
it instead of x86_phys_bits.
Fixes: faaf05b00aec ("kvm: x86/mmu: Support zapping SPTEs in the TDP MMU")
Signed-off-by: Rick Edgecombe
---
a
On Mon, 2020-11-30 at 12:21 -0800, Rick Edgecombe wrote:
> another option could be to use the changes here:
> https://lore.kernel.org/lkml/20201125092208.12544-4-r...@kernel.org/
> to reset the direct map for a large page range at a time for large
> vmalloc pages.
Oops, sorry. This
On Sun, 2020-11-29 at 01:25 +1000, Nicholas Piggin wrote:
> Support huge page vmalloc mappings. Config option HAVE_ARCH_HUGE_VMALLOC
> enables support on architectures that define HAVE_ARCH_HUGE_VMAP and
> supports PMD sized vmap mappings.
>
> vmalloc will attempt to allocate PMD-sized pages if
On Tue, 2020-11-24 at 10:16 +0000, Christoph Hellwig wrote:
> On Mon, Nov 23, 2020 at 12:01:35AM +0000, Edgecombe, Rick P wrote:
> > Another option could be putting the new metadata in vm_struct and just
> > return that, like get_vm_area(). Then we don't need to in
On Tue, 2020-11-24 at 10:19 +, h...@infradead.org wrote:
> But I thought that using those pgprot flags was still sort overloading
> > the meaning of pgprot. My understanding was that it is supposed to hold
> > the actual bits set in the PTE. For example large pages or TLB hints
> > (l
On Mon, 2020-11-23 at 09:00 +, Christoph Hellwig wrote:
> First thanks for doing this, having a vmalloc variant that starts out
> with proper permissions has been on my todo list for a while.
>
> > +#define PERM_R 1
> > +#define PERM_W 2
> > +#define PERM_X 4
> > +#define PERM_RWX
On Sat, 2020-11-21 at 20:10 -0800, Andy Lutomirski wrote:
> On Fri, Nov 20, 2020 at 12:30 PM Rick Edgecombe wrote:
> > In order to allow for future arch specific optimizations for vmalloc
> > permissions, first add an implementation of a new interface that
> &g
's chosen place for executable code.
Signed-off-by: Rick Edgecombe
---
arch/Kconfig | 3 +
include/linux/vmalloc.h | 82
mm/nommu.c | 66
mm/vmalloc.c | 135
4 files changed,
Since modules can have a separate writable address during loading,
do the orc unwind at the writable address.
Signed-off-by: Rick Edgecombe
---
arch/x86/kernel/unwind_orc.c | 8
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/arch/x86/kernel/unwind_orc.c b/arch/x86/kernel
ff-by: Rick Edgecombe
---
include/linux/module.h | 22 ++
kernel/module.c | 14 +-
2 files changed, 31 insertions(+), 5 deletions(-)
diff --git a/include/linux/module.h b/include/linux/module.h
index 9964f909d879..32dd22b2a38a 100644
--- a/include/linux/module.h
relocations at the writable address of the perm_allocation to
support a future implementation that has the writable address in a
different allocation.
Signed-off-by: Rick Edgecombe
---
arch/x86/kernel/module.c | 84 +---
1 file changed, 71 insertions(+), 13
Since modules can have a separate writable address during loading,
do the nop application at the writable address.
As long as info is on hand about if the operations is happening during
a module load, don't do a full text_poke() when writing data to a
writable address.
Signed-off-by:
Modules being loaded using perm_allocs may have a separate writable
address. Handle this case in alternatives for operations called during
module loading.
Signed-off-by: Rick Edgecombe
---
arch/x86/kernel/alternative.c | 25 -
1 file changed, 16 insertions(+), 9
Use the module writable address to accommodate arch's that have a
separate writable address for perm_alloc.
Signed-off-by: Rick Edgecombe
---
kernel/trace/ftrace.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
directed to a separate writable
staging area.
Signed-off-by: Rick Edgecombe
---
arch/arm/net/bpf_jit_32.c | 3 +-
arch/arm64/net/bpf_jit_comp.c | 5 ++--
arch/mips/net/bpf_jit.c | 2 +-
arch/mips/net/ebpf_jit.c | 3 +-
arch/powerpc/net/bpf_jit_comp.c | 2 +-
arch
other caches.
Signed-off-by: Rick Edgecombe
---
arch/x86/Kconfig | 1 +
arch/x86/include/asm/set_memory.h | 2 +
arch/x86/mm/Makefile | 1 +
arch/x86/mm/pat/set_memory.c | 13 +
arch/x86/mm/vmalloc.c | 438 ++
5
orted.
So this should not have any functional change yet. It is just a change
to how the different regions of the module allocations are tracked in
module.c such that future patches can actually make the regions separate
allocations.
Signed-off-by: Rick Edgecombe
---
include/linux/module.h
ail.gmail.com/
[1] https://lore.kernel.org/lkml/20201009201410.3209180-1-ira.we...@intel.com/
[2] https://lore.kernel.org/lkml/20200924132904.1391-1-r...@kernel.org/
This RFC has been acked by Dave Hansen.
Rick Edgecombe (10):
vmalloc: Add basic perm alloc implementation
bpf: Use perm_alloc() for BPF
On Thu, 2020-10-29 at 10:12 +0200, Mike Rapoport wrote:
> This series' goal was primarily to separate dependencies and make it
> clearer what DEBUG_PAGEALLOC and what SET_DIRECT_MAP are. As it turned
> out, there is also some lack of consistency between architectures that
> implement either of t
On Thu, 2020-10-29 at 09:54 +0200, Mike Rapoport wrote:
> __kernel_map_pages() on arm64 will also bail out if rodata_full is
> false:
> void __kernel_map_pages(struct page *page, int numpages, int enable)
> {
> if (!debug_pagealloc_enabled() && !rodata_full)
> return;
>
>
On Wed, 2020-10-28 at 13:09 +0200, Mike Rapoport wrote:
> On Tue, Oct 27, 2020 at 09:46:35AM +0100, David Hildenbrand wrote:
> > On 27.10.20 09:38, Mike Rapoport wrote:
> > > On Mon, Oct 26, 2020 at 06:05:30PM +0000, Edgecombe, Rick P
> > > wrote:
> > >
> &g
On Sun, 2020-10-25 at 12:15 +0200, Mike Rapoport wrote:
> + if (IS_ENABLED(CONFIG_ARCH_HAS_SET_DIRECT_MAP)) {
> + unsigned long addr = (unsigned long)page_address(page);
> + int ret;
> +
> + if (enable)
> + ret = set_direct_map
On Wed, 2020-10-28 at 13:30 +0200, Mike Rapoport wrote:
> On Wed, Oct 28, 2020 at 11:20:12AM +0000, Will Deacon wrote:
> > On Tue, Oct 27, 2020 at 10:38:16AM +0200, Mike Rapoport wrote:
> > > On Mon, Oct 26, 2020 at 06:05:30PM +0000, Edgecombe, Rick P
> > > wrote:
>
On Tue, 2020-10-27 at 10:49 +0200, Mike Rapoport wrote:
> On Mon, Oct 26, 2020 at 06:57:32PM +0000, Edgecombe, Rick P wrote:
> > On Mon, 2020-10-26 at 11:15 +0200, Mike Rapoport wrote:
> > > On Mon, Oct 26, 2020 at 12:38:32AM +0000, Edgecombe, Rick P
> > > wrote:
>
On Mon, 2020-10-26 at 11:15 +0200, Mike Rapoport wrote:
> The intention of this series is to disallow usage of
> __kernel_map_pages() when DEBUG_PAGEALLOC=n. I'll update this patch to
> better handle possible errors, but I still want to keep WARN in the
> caller.
Sorry, I missed this snippet at
On Mon, 2020-10-26 at 10:37 +0200, Mike Rapoport wrote:
> +++ b/arch/x86/mm/pat/set_memory.c
> @@ -2184,14 +2184,14 @@ static int __set_pages_np(struct page *page, int numpages)
> return __change_page_attr_set_clr(&cpa, 0);
> }
>
> -int set_direct_map_invalid_noflush(struct page *page)
On Mon, 2020-10-26 at 11:15 +0200, Mike Rapoport wrote:
> On Mon, Oct 26, 2020 at 12:38:32AM +0000, Edgecombe, Rick P wrote:
> > On Sun, 2020-10-25 at 12:15 +0200, Mike Rapoport wrote:
> > > From: Mike Rapoport
> > >
> > > When DEBUG_PAGEALLOC or ARCH_
On Mon, 2020-10-26 at 11:05 +0200, Mike Rapoport wrote:
> On Mon, Oct 26, 2020 at 01:13:52AM +0000, Edgecombe, Rick P wrote:
> > On Sun, 2020-10-25 at 12:15 +0200, Mike Rapoport wrote:
> > > Indeed, for architectures that define
> > > CONFIG_ARCH_HAS_SET_DIRECT_MAP
>
On Sun, 2020-10-25 at 12:15 +0200, Mike Rapoport wrote:
> Indeed, for architectures that define CONFIG_ARCH_HAS_SET_DIRECT_MAP it is
> possible that __kernel_map_pages() would fail, but since this function is
> void, the failure will go unnoticed.
Could you elaborate on how this could happen?
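The point being made: set_direct_map_*() can fail (for instance when splitting a huge direct-map page needs an allocation), yet __kernel_map_pages() is void, so a caller cannot see it. A sketch of surfacing the error (kernel context, illustration only):

/* Sketch: warn about a set_direct_map failure instead of dropping it
 * inside a void helper. */
static void example_remove_from_direct_map(struct page *page)
{
        if (IS_ENABLED(CONFIG_ARCH_HAS_SET_DIRECT_MAP)) {
                int ret = set_direct_map_invalid_noflush(page);

                /* A void __kernel_map_pages() would silently lose this. */
                WARN_ONCE(ret, "failed to unmap page from the direct map\n");
        }
}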
On Sun, 2020-10-25 at 12:15 +0200, Mike Rapoport wrote:
> index 7f248fc45317..16f878c26667 100644
> --- a/arch/x86/mm/pat/set_memory.c
> +++ b/arch/x86/mm/pat/set_memory.c
> @@ -2228,7 +2228,6 @@ void __kernel_map_pages(struct page *page, int numpages, int enable)
> }
> #endif /* CONFIG_DEBUG_P
On Sun, 2020-10-25 at 12:15 +0200, Mike Rapoport wrote:
> From: Mike Rapoport
>
> When DEBUG_PAGEALLOC or ARCH_HAS_SET_DIRECT_MAP is enabled a page may be
> not present in the direct map and has to be explicitly mapped before it
> could be copied.
>
> On arm64 it is possible that a page woul
On Thu, 2020-10-22 at 15:06 +0300, Kirill A. Shutemov wrote:
> > I think the page could have got unmapped since the gup via the
> > hypercall on another CPU. It could be an avenue for the guest to crash
> > the host.
>
> Hm.. I'm not sure I follow. Could you elaborate on what scenario you
> ha
On Tue, 2020-10-20 at 09:18 +0300, Kirill A. Shutemov wrote:
> We cannot access protected pages directly. Use ioremap() to
> create a temporary mapping of the page. The mapping is destroyed
> on __kvm_unmap_gfn().
>
> The new interface gfn_to_pfn_memslot_protected() is used to detect if
> the page
On Tue, 2020-10-20 at 09:18 +0300, Kirill A. Shutemov wrote:
> If the protected memory feature enabled, unmap guest memory from
> kernel's direct mappings.
>
> Migration and KSM is disabled for protected memory as it would require a
> special treatment.
>
So do we care about this scenario where
On Tue, 2020-10-20 at 09:18 +0300, Kirill A. Shutemov wrote:
> include/linux/mm.h | 8
> mm/gup.c | 20
> mm/huge_memory.c | 20
> mm/memory.c | 3 +++
> mm/mmap.c | 3 +++
> virt/kvm/async_pf.c | 2 +-
> virt/
On Tue, 2020-10-20 at 15:20 +0200, David Hildenbrand wrote:
> On 20.10.20 14:18, David Hildenbrand wrote:
> > On 20.10.20 08:18, Kirill A. Shutemov wrote:
> > > If the protected memory feature enabled, unmap guest memory from
> > > kernel's direct mappings.
> >
> > Gah, ugly. I guess this also def
lush in the nested case, but this operation
already flushed for each memslot in order to facilitate the spin break.
If slot_handle_level_range() took some extra parameters it could maybe
be avoided. Not sure if it's worth it.
Rick
On Wed, 2020-09-30 at 13:35 +0300, Mike Rapoport wrote:
> On Tue, Sep 29, 2020 at 08:06:03PM +0000, Edgecombe, Rick P wrote:
> > On Tue, 2020-09-29 at 16:06 +0300, Mike Rapoport wrote:
> > > On Tue, Sep 29, 2020 at 04:58:44AM +0000, Edgecombe, Rick P
> > > wrote:
>
On Tue, 2020-09-29 at 16:06 +0300, Mike Rapoport wrote:
> On Tue, Sep 29, 2020 at 04:58:44AM +0000, Edgecombe, Rick P wrote:
> > On Thu, 2020-09-24 at 16:29 +0300, Mike Rapoport wrote:
> > > Introduce "memfd_secret" system call with the ability to create
> > &g
On Thu, 2020-09-24 at 16:29 +0300, Mike Rapoport wrote:
> Introduce "memfd_secret" system call with the ability to create
> memory
> areas visible only in the context of the owning process and not
> mapped not
> only to other processes but in the kernel page tables as well.
>
> The user will creat
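A minimal userspace sketch of the proposed flow (syscall number as eventually merged; at the time of this thread the interface was still being discussed, and the feature may be disabled by default):

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef __NR_memfd_secret
#define __NR_memfd_secret 447
#endif

int main(void)
{
        int fd = syscall(__NR_memfd_secret, 0);
        if (fd < 0) {
                perror("memfd_secret");
                return 1;
        }
        if (ftruncate(fd, 4096) < 0)
                return 1;

        /* Pages backing this mapping are visible to this process only and
         * are removed from the kernel's direct map. */
        char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED)
                return 1;
        strcpy(p, "do not share or swap this");
        munmap(p, 4096);
        close(fd);
        return 0;
}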
On Tue, 2020-08-11 at 23:54 +0200, John Paul Adrian Glaubitz wrote:
> Hi Rick!
>
> I have been bisecting some regressions on ia64 and one problem I ran
> into is that udev is causing the kernel to crash after the following
> change from 2019:
>
> commit 868b104d7379e280
are slow so the
inefficiencies of kernfs don't show. That doesn't bother you?
Rick
er on "kernfs was never designed for
that." If so, we're in agreement. We're suggesting a way it can be
extended to be more robust, with no (apparent) side effects. I'd like
to discuss the merits of the patch itself.
Rick
eventually other devices will be subject to it too. Why not address this
now?
'Doctor, it hurts when I do this'
'Then don't do that'
Funny as a joke. Less funny as a review comment.
Rick
f this hardware, what
do you (all, not just you Greg) consider to be "properly"?
Rick
nd
recognized by the kernel for that to work.
Rick
tructures decrease markedly too. The contention for the
lockref taken in dput dropped 66% and, likely due to reduced thrash, the time
used waiting for that structure dropped 99%.
Rick
plugging and partitioning memory. The size of the
segments (and thus the number of them) is dictated by the underlying hardware.
Rick
Hi Yong,
On Thu, 2020-06-18 at 17:32 +0800, Yong Wu wrote:
> + Rick
>
> On Sat, 2020-05-30 at 16:10 +0800, Yong Wu wrote:
> >
> > MediaTek IOMMU has already added device_link between the consumer
> > and smi-larb device. If the jpg device call the
> > pm_run
ponse! I was examining the 25 tests in the 'cpu-cache'
class and had nothing but head scratching so far on what could be having that
effect.
Rick
e switchover. Much faster! So, why is the second one necessary? Are
there some architectures that need that? I've not found anyone who can answer
that, so going that route presents us with a different big risk.
Rick
The following commit has been merged into the x86/urgent branch of tip:
Commit-ID: ab5130186d7476dcee0d4e787d19a521ca552ce9
Gitweb: https://git.kernel.org/tip/ab5130186d7476dcee0d4e787d19a521ca552ce9
Author: Rick Edgecombe
AuthorDate: Wed, 22 Apr 2020 20:13:55 -07:00
On Mon, 2019-10-14 at 14:47 +0800, Yu Zhang wrote:
> On Thu, Oct 03, 2019 at 02:23:48PM -0700, Rick Edgecombe wrote:
> > Mask gfn by maxphyaddr in kvm_mtrr_get_guest_memory_type so that the
> > guests view of gfn is used when high bits of the physical memory are
> > used as e
On Fri, 2019-10-04 at 18:33 -0700, Andy Lutomirski wrote:
> On Fri, Oct 4, 2019 at 1:10 PM Edgecombe, Rick P wrote:
> >
> > On Fri, 2019-10-04 at 07:56 -0700, Andy Lutomirski wrote:
> > > On Thu, Oct 3, 2019 at 2:38 PM Rick Edgecombe wrote:
> &
On Fri, 2019-10-04 at 07:56 -0700, Andy Lutomirski wrote:
> On Thu, Oct 3, 2019 at 2:38 PM Rick Edgecombe wrote:
> >
> > This patchset enables the ability for KVM guests to create execute-only (XO)
> > memory by utilizing EPT based XO permissions. XO memory is curren
On Fri, 2019-10-04 at 09:34 +0200, Paolo Bonzini wrote:
> On 03/10/19 23:23, Rick Edgecombe wrote:
> > +
> > + protection_map[4] = PAGE_EXECONLY;
> > + protection_map[12] = PAGE_EXECONLY;
>
> Can you add #defines for the bits in protection_map? Also perhaps you
On Fri, 2019-10-04 at 09:42 +0200, Paolo Bonzini wrote:
> On 03/10/19 23:23, Rick Edgecombe wrote:
> > + if (!vcpu->arch.gva_available)
> > + return 0;
>
> Please return RET_PF_* constants, RET_PF_EMULATE here.
Ok.
> > + if