On Mon Oct 24, 2022 at 12:17 AM CDT, Benjamin Gray wrote:
> On Mon, 2022-10-24 at 14:45 +1100, Russell Currey wrote:
> > On Fri, 2022-10-21 at 16:22 +1100, Benjamin Gray wrote:
> > > From: "Christopher M. Riedl"
> > >
-%<--
> > >
> On Fri, 22 Oct 2021, Christophe Leroy wrote:
>
> > ...
> > >
> > > -------- Forwarded Message --------
> > > Subject: Fwd: X stopped working with 5.14 on iBook
> > > Date: Fri, 22 Oct 2021 11:35:21 -0600
> > > From: Stan Johnson
On Tue Sep 14, 2021 at 11:24 PM CDT, Jordan Niethe wrote:
> On Sat, Sep 11, 2021 at 12:39 PM Christopher M. Riedl
> wrote:
> > ...
> > +/*
> > + * This can be called for kernel text or a module.
> > + */
> > +static int map_patch_mm(const void *addr, stru
On Sat Sep 11, 2021 at 4:14 AM CDT, Jordan Niethe wrote:
> On Sat, Sep 11, 2021 at 12:39 PM Christopher M. Riedl
> wrote:
> >
> > When code patching a STRICT_KERNEL_RWX kernel the page containing the
> > address to be patched is temporarily mapped as writeable. Currentl
On Sat Sep 11, 2021 at 3:26 AM CDT, Jordan Niethe wrote:
> On Sat, Sep 11, 2021 at 12:35 PM Christopher M. Riedl
> wrote:
> >
> > x86 supports the notion of a temporary mm which restricts access to
> > temporary PTEs to a single CPU. A temporary mm is useful for situatio
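For reference, the helpers that x86 commit introduced look roughly like this (a sketch following cefa929c034e; per the v6 changelog the powerpc series renames these to {start,stop}_using_temporary_mm()):

typedef struct {
        struct mm_struct *prev;
} temp_mm_state_t;

/* Switch this CPU to the temporary mm; IRQs must stay disabled so no
 * other context can observe or use the temporary mapping. */
static inline temp_mm_state_t use_temporary_mm(struct mm_struct *mm)
{
        temp_mm_state_t temp_state;

        lockdep_assert_irqs_disabled();
        temp_state.prev = this_cpu_read(cpu_tlbstate.loaded_mm);
        switch_mm_irqs_off(NULL, mm, current);
        return temp_state;
}

static inline void unuse_temporary_mm(temp_mm_state_t prev_state)
{
        lockdep_assert_irqs_disabled();
        switch_mm_irqs_off(NULL, prev_state.prev, current);
}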
Signed-off-by: Christopher M. Riedl
---
v6: * Remove the pr_warn() message from unmap_patch_area().
v5: * New to series.
---
arch/powerpc/lib/code-patching.c | 35
1 file changed, 17 insertions(+), 18 deletions(-)
diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c
opportunity to fix the failure check in the WARN_ON():
cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, ...) returns a positive integer
on success and a negative integer on failure.
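Sketched with illustrative callback names, the corrected check is:

int ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "powerpc/text_poke:online",
                            text_area_cpu_up, text_area_cpu_down);

/* CPUHP_AP_ONLINE_DYN returns the dynamically allocated state number
 * (positive) on success and a negative errno on failure, so only
 * ret < 0 may be flagged as an error. */
WARN_ON(ret < 0);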
Signed-off-by: Christopher M. Riedl
---
v6: * New to series - based on Christophe's relentless feedback in the
maths
* Add LKDTM test
[0]: https://github.com/linuxppc/issues/issues/224
[1]:
https://lore.kernel.org/kernel-hardening/20190426232303.28381-1-nadav.a...@gmail.com/
Christopher M. Riedl (4):
powerpc/64s: Introduce temporary mm for Radix MMU
powerpc: Rework and improve STRICT_KERNEL_RWX p
breakpoints set by perf.
Based on x86 implementation:
commit cefa929c034e
("x86/mm: Introduce temporary mm structs")
Signed-off-by: Christopher M. Riedl
---
v6: * Use {start,stop}_using_temporary_mm() instead of
{use,unuse}_temporary_mm() as suggested by Christophe.
v5: * Drop s
On Thu Aug 5, 2021 at 4:48 AM CDT, Christophe Leroy wrote:
>
>
> Le 13/07/2021 à 07:31, Christopher M. Riedl a écrit :
> > When code patching a STRICT_KERNEL_RWX kernel the page containing the
> > address to be patched is temporarily mapped as writeable. Currently, a
> >
On Thu Aug 5, 2021 at 4:34 AM CDT, Christophe Leroy wrote:
>
>
> Le 13/07/2021 à 07:31, Christopher M. Riedl a écrit :
> > Rework code-patching with STRICT_KERNEL_RWX to prepare for the next
> > patch which uses a temporary mm for patching under the Book3s64 Radix
> >
On Thu Aug 5, 2021 at 4:27 AM CDT, Christophe Leroy wrote:
>
>
> Le 13/07/2021 à 07:31, Christopher M. Riedl a écrit :
> > x86 supports the notion of a temporary mm which restricts access to
> > temporary PTEs to a single CPU. A temporary mm is useful for situations
>
On Thu Aug 5, 2021 at 4:18 AM CDT, Christophe Leroy wrote:
>
>
> Le 13/07/2021 à 07:31, Christopher M. Riedl a écrit :
> > Code patching on powerpc with a STRICT_KERNEL_RWX uses a userspace
> > address in a temporary mm on Radix now. Use __put_user() to avoid write
> >
On Thu Aug 5, 2021 at 4:13 AM CDT, Christophe Leroy wrote:
>
>
> Le 13/07/2021 à 07:31, Christopher M. Riedl a écrit :
> > When live patching with STRICT_KERNEL_RWX the CPU doing the patching
> > must temporarily remap the page(s) containing the patch site with +W
On Thu Aug 5, 2021 at 4:09 AM CDT, Christophe Leroy wrote:
>
>
> Le 13/07/2021 à 07:31, Christopher M. Riedl a écrit :
> > A previous commit implemented an LKDTM test on powerpc to exploit the
> > temporary mapping established when patching code with STRICT_KERNEL_RWX
On Thu Aug 5, 2021 at 4:03 AM CDT, Christophe Leroy wrote:
>
>
> Le 13/07/2021 à 07:31, Christopher M. Riedl a écrit :
> > When compiled with CONFIG_STRICT_KERNEL_RWX, the kernel must create
> > temporary mappings when patching itself. These mappings temporarily
> > o
Signed-off-by: Christopher M. Riedl
---
v5: * New to series.
---
arch/powerpc/lib/code-patching.c | 51 +---
1 file changed, 27 insertions(+), 24 deletions(-)
diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c
index 3122d8e4cc013..9f2eba9b
implementation:
commit cefa929c034e
("x86/mm: Introduce temporary mm structs")
Signed-off-by: Christopher M. Riedl
---
v5: * Drop support for using a temporary mm on Book3s64 Hash MMU.
v4: * Pass the prev mm instead of NULL to switch_mm_irqs_off() when
using/unusing the temp mm as su
ess on non-Radix MMUs.
Signed-off-by: Christopher M. Riedl
---
drivers/misc/lkdtm/perms.c | 9 -
1 file changed, 9 deletions(-)
diff --git a/drivers/misc/lkdtm/perms.c b/drivers/misc/lkdtm/perms.c
index 41e87e5f9cc86..da6a34a0a49fb 100644
--- a/drivers/misc/lkdtm/perms.c
+++ b/drivers/misc/lkdtm/perms.c
another CPU.
Signed-off-by: Christopher M. Riedl
---
arch/powerpc/include/asm/code-patching.h | 4
arch/powerpc/lib/code-patching.c | 7 +++
2 files changed, 11 insertions(+)
diff --git a/arch/powerpc/include/asm/code-patching.h
b/arch/powerpc/include/asm/code-patching.h
e temporary mm for text poking")
Signed-off-by: Christopher M. Riedl
---
v5: * Only support Book3s64 Radix MMU for now.
* Use a per-cpu datastructure to hold the patching_addr and
patching_mm to avoid the need for a synchronization lock/mutex.
v4: * In the previous series this wa
A previous commit implemented an LKDTM test on powerpc to exploit the
temporary mapping established when patching code with STRICT_KERNEL_RWX
enabled. Extend the test to work on x86_64 as well.
Signed-off-by: Christopher M. Riedl
---
drivers/misc/lkdtm/perms.c | 26 ++
1
'memcmp' where a simple comparison is appropriate
* Simplify expression for patch address by removing pointer maths
* Add LKDTM test
[0]: https://github.com/linuxppc/issues/issues/224
[1]:
https://lore.kernel.org/kernel-hardening/20190426232303.28381-1-nadav.a...@gmail.com/
(echo HIJACK_PATCH > /sys/kernel/debug/provoke-crash/DIRECT)
A passing test indicates that it is not possible to overwrite kernel
text from another CPU by using the temporary mapping established by
a CPU for patching.
Signed-off-by: Christopher M. Riedl
---
v5: * Use `u32*` instead o
another CPU.
Signed-off-by: Christopher M. Riedl
---
arch/x86/include/asm/text-patching.h | 4
arch/x86/kernel/alternative.c | 7 +++
2 files changed, 11 insertions(+)
diff --git a/arch/x86/include/asm/text-patching.h
b/arch/x86/include/asm/text-patching.h
index b7421780e4e92..f0caf9
On Thu Jul 1, 2021 at 2:51 AM CDT, Nicholas Piggin wrote:
> Excerpts from Christopher M. Riedl's message of July 1, 2021 5:02 pm:
> > On Thu Jul 1, 2021 at 1:12 AM CDT, Nicholas Piggin wrote:
> >> Excerpts from Christopher M. Riedl's message of May 6, 2021 2:34 pm:
> >> > When code patching a STRIC
> >> > On Wed Jun 30, 2021 at 11:15 PM CDT, Nicholas Piggin wrote:
> >> >> Excerpts from Christopher M. Riedl's message of July 1, 2021 1:48 pm:
> >> >> > On Sun Jun 20, 2021 at 10:13 PM CDT, Daniel Axtens wrote:
> >> >> >> "
On Thu Jul 1, 2021 at 1:12 AM CDT, Nicholas Piggin wrote:
> Excerpts from Christopher M. Riedl's message of May 6, 2021 2:34 pm:
> > When code patching a STRICT_KERNEL_RWX kernel the page containing the
> > address to be patched is temporarily mapped as writeable. Currently, a
> > per-cpu vmalloc p
> >> > On Sun Jun 20, 2021 at 10:13 PM CDT, Daniel Axtens wrote:
> >> >> "Christopher M. Riedl" writes:
> >> >>
> >> >> > Switching to a different mm with Hash translation causes SLB entries
> >> >> > to
On Wed Jun 30, 2021 at 11:15 PM CDT, Nicholas Piggin wrote:
> Excerpts from Christopher M. Riedl's message of July 1, 2021 1:48 pm:
> > On Sun Jun 20, 2021 at 10:13 PM CDT, Daniel Axtens wrote:
> >> "Christopher M. Riedl" writes:
> >>
> >>
On Sun Jun 20, 2021 at 10:19 PM CDT, Daniel Axtens wrote:
> Hi Chris,
>
> > + /*
> > +* Choose a randomized, page-aligned address from the range:
> > +* [PAGE_SIZE, DEFAULT_MAP_WINDOW - PAGE_SIZE]
> > +* The lower address bound is PAGE_SIZE to avoid the zero-page.
> > +* The upper
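One way to realize that comment, as a sketch (ignoring modulo bias; the series' exact expression may differ):

/* Pick a random page-aligned address in
 * [PAGE_SIZE, DEFAULT_MAP_WINDOW - PAGE_SIZE]: skip the zero page and
 * stay below DEFAULT_MAP_WINDOW. */
unsigned long nr_pages = (DEFAULT_MAP_WINDOW / PAGE_SIZE) - 1;
unsigned long patching_addr = PAGE_SIZE +
                (get_random_long() % nr_pages) * PAGE_SIZE;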
On Sun Jun 20, 2021 at 10:13 PM CDT, Daniel Axtens wrote:
> "Christopher M. Riedl" writes:
>
> > Switching to a different mm with Hash translation causes SLB entries to
> > be preloaded from the current thread_info. This reduces SLB faults, for
> > example
On Thu May 6, 2021 at 5:51 AM CDT, Peter Zijlstra wrote:
> On Wed, May 05, 2021 at 11:34:51PM -0500, Christopher M. Riedl wrote:
> > Powerpc allows for multiple CPUs to patch concurrently. When patching
> > with STRICT_KERNEL_RWX a single patching_mm is allocated for use by all
* Simplify expression for patch address by removing pointer maths
* Add LKDTM test
[0]: https://github.com/linuxppc/issues/issues/224
[1]:
https://lore.kernel.org/kernel-hardening/20190426232303.28381-1-nadav.a...@gmail.com/
Christopher M. Riedl (11):
powerpc: Add LKDTM accessor f
for (i = 0; i < n; ++i)
patch_instruction_unlocked(...);
unlock_patching(flags); <-- unlock once
Signed-off-by: Christopher M. Riedl
---
v4: * New to series.
---
arch/powerpc/kernel/epapr_paravirt.c | 9 ++-
arch/powerpc/kernel/optprobes.c | 22 --
arch/powerpc/lib/f
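Spelled out, the batching pattern the snippet above describes looks roughly like this (lock_patching(), unlock_patching() and patch_instruction_unlocked() being the helpers this patch introduces; addr and insns are illustrative):

unsigned long flags;
int i;

flags = lock_patching();                /* take the patching lock once */
for (i = 0; i < n; ++i)
        patch_instruction_unlocked(addr + i, insns[i]);
unlock_patching(flags);                 /* drop it once, after the batch */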
avoid this.
Based on x86 implementation:
commit cefa929c034e
("x86/mm: Introduce temporary mm structs")
Signed-off-by: Christopher M. Riedl
---
v4: * Pass the prev mm instead of NULL to switch_mm_irqs_off() when
using/unusing the temp mm as suggested by Jann Horn to keep
available via hash_page_mm() already.
Make slb_allocate_user() non-static and add a prototype so the next
patch can use it during code-patching.
Signed-off-by: Christopher M. Riedl
---
v4: * New to series.
---
arch/powerpc/include/asm/book3s/64/mmu-hash.h | 1 +
arch/powerpc/mm/book
patch_instruction_unlocked() instead.
Signed-off-by: Christopher M. Riedl
---
v4: * New to series.
---
arch/powerpc/include/asm/code-patching.h | 4 ++
arch/powerpc/lib/code-patching.c | 85 +---
2 files changed, 79 insertions(+), 10 deletions(-)
diff --git a/arch/powerpc/include
for text poking")
Signed-off-by: Christopher M. Riedl
---
v4: * In the previous series this was two separate patches: one to init
the temporary mm in poking_init() (unused in powerpc at the time)
and the other to use it for patching (which removed all the
per-cpu vmalloc co
A previous commit implemented an LKDTM test on powerpc to exploit the
temporary mapping established when patching code with STRICT_KERNEL_RWX
enabled. Extend the test to work on x86_64 as well.
Signed-off-by: Christopher M. Riedl
---
drivers/misc/lkdtm/perms.c | 29
) remains unchanged.
Signed-off-by: Christopher M. Riedl
---
v4: * New to series.
---
arch/powerpc/include/asm/book3s/64/mmu.h | 3 ++
arch/powerpc/include/asm/mmu_context.h | 13 ++
arch/powerpc/mm/book3s64/mmu_context.c | 2 +
arch/powerpc/mm/book3s64/slb.c | 56
Code patching on powerpc with a STRICT_KERNEL_RWX uses a userspace
address in a temporary mm now. Use __put_user() to avoid write failures
due to KUAP when attempting a "hijack" on the patching address.
Signed-off-by: Christopher M. Riedl
---
drivers/misc/lkdtm/perms.c | 9 ---
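In sketch form, the LKDTM side of this becomes (patching_addr discovery elided; the payload constant is illustrative):

u32 hijack_insn = 0x60000000;   /* illustrative payload (ppc nop) */
u32 __user *where = (u32 __user *)patching_addr;

/* __put_user() performs no access_ok() check, so with KUAP a blocked
 * write to the temporary mapping reports failure rather than warning. */
if (__put_user(hijack_insn, where))
        pr_info("hijack write failed as expected\n");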
(echo HIJACK_PATCH > /sys/kernel/debug/provoke-crash/DIRECT)
A passing test indicates that it is not possible to overwrite kernel
text from another CPU by using the temporary mapping established by
a CPU for patching.
Signed-off-by: Christopher M. Riedl
---
v4: * Separate the powe
previously called
copy_from_user() which, unlike __copy_from_user(), calls access_ok().
Replacing this w/ __get_user() (no access_ok()) is fine here since both
callsites in signal_32.c are preceded by an earlier access_ok().
Signed-off-by: Christopher M. Riedl
---
arch/powerpc/kernel/signal.h | 7 +++
ar
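The resulting helper can be sketched as follows (relying on ppc64's sigset_t being a single u64, which the BUILD_BUG_ON enforces):

static inline int __get_user_sigset(sigset_t *dst, const sigset_t __user *src)
{
        BUILD_BUG_ON(sizeof(sigset_t) != sizeof(u64));

        return __get_user(dst->sig[0], (u64 __user *)&src->sig[0]);
}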
with their 'unsafe' versions.
Modify the callers to first open, call unsafe_setup_sigcontext() and
then close the uaccess window.
Signed-off-by: Christopher M. Riedl
---
v7: * Don't use unsafe_op_wrap() since Christophe indicates this
macro may go away in the future.
--
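The caller shape after this change, sketched (unsafe_setup_sigcontext() is the helper the patch adds; the argument list here is abbreviated):

if (!user_write_access_begin(sc, sizeof(*sc)))
        return -EFAULT;
unsafe_setup_sigcontext(sc, tsk, signr, set, handler, efault_out);
user_write_access_end();
return 0;

efault_out:
user_write_access_end();
return -EFAULT;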
with their 'unsafe'
versions. Modify the callers to first open, call
unsafe_restore_sigcontext(), and then close the uaccess window.
Signed-off-by: Christopher M. Riedl
---
v7: * Don't use unsafe_op_wrap() since Christophe indicates this
macro may go away in the fu
"longer" uaccess block.
Signed-off-by: Daniel Axtens
Co-developed-by: Christopher M. Riedl
Signed-off-by: Christopher M. Riedl
---
arch/powerpc/kernel/signal_64.c | 56 -
1 file changed, 35 insertions(+), 21 deletions(-)
diff --git a/arch/powerpc/kernel/signal_64.c b/arch/powerpc/kernel/signal_64.c
signal series
* Simplify/remove TM ifdefery similar to PPC32 series and clean
up the uaccess begin/end calls
* Isolate non-inline functions so they are not called when
uaccess window is open
Christopher M. Riedl (8):
powerpc/uaccess: Add unsafe_copy_
("powerpc/signal32: Remove ifdefery in middle of if/else").
Unlike in the commit for ppc32, the ifdef can't be removed entirely
since uc_transact in sigframe depends on CONFIG_PPC_TRANSACTIONAL_MEM.
Signed-off-by: Christopher M. Riedl
---
arch/powerpc/kernel/process.c | 3 +-
arch
nning with uaccess must be audited manually which
means: less code -> less work -> fewer problems (in theory).
A follow-up commit converts setup_sigcontext() to be "unsafe".
Signed-off-by: Christopher M. Riedl
---
arch/powerpc/kernel/signal_64.c | 32 +-
g with the argument in the macro to avoid a
potential compile warning.
Signed-off-by: Christopher M. Riedl
Reviewed-by: Daniel Axtens
---
arch/powerpc/include/asm/reg.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/powerpc/include/asm/reg.h b/arch/powerpc/include/asm/reg.h
Reuse the "safe" implementation from signal.c but call unsafe_get_user()
directly in a loop to avoid the intermediate copy into a local buffer.
Signed-off-by: Christopher M. Riedl
Reviewed-by: Daniel Axtens
---
arch/powerpc/kernel/signal.h | 26 ++
1 file c
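As a sketch, the FPR flavour of such a helper loops unsafe_get_user() straight into the thread struct (details illustrative):

#define unsafe_copy_fpr_from_user(task, from, label) do {               \
        struct task_struct *__t = task;                                 \
        u64 __user *__f = (u64 __user *)from;                           \
        int i;                                                          \
                                                                        \
        for (i = 0; i < ELF_NFPREG - 1; i++)                            \
                unsafe_get_user(__t->thread.TS_FPR(i), &__f[i], label); \
        unsafe_get_user(__t->thread.fp_state.fpscr, &__f[i], label);    \
} while (0)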
From: Daniel Axtens
Add uaccess blocks and use the 'unsafe' versions of functions doing user
access where possible to reduce the number of times uaccess has to be
opened/closed.
Signed-off-by: Daniel Axtens
Co-developed-by: Christopher M. Riedl
Signed-off-by: Christopher M. Riedl
Use the same approach as unsafe_copy_to_user() but instead call
unsafe_get_user() in a loop.
Signed-off-by: Christopher M. Riedl
---
v7: * Change implementation to call unsafe_get_user() and remove
dja's 'Reviewed-by' tag
---
arch/powerpc/include/asm
On Tue Feb 23, 2021 at 11:36 AM CST, Christophe Leroy wrote:
>
>
> Le 21/02/2021 à 02:23, Christopher M. Riedl a écrit :
> > Previously restore_sigcontext() performed a costly KUAP switch on every
> > uaccess operation. These repeated uaccess switches cause a significa
On Tue Feb 23, 2021 at 11:12 AM CST, Christophe Leroy wrote:
>
>
> Le 21/02/2021 à 02:23, Christopher M. Riedl a écrit :
> > Previously setup_sigcontext() performed a costly KUAP switch on every
> > uaccess operation. These repeated uaccess switches cause a significant
> &
On Tue Feb 23, 2021 at 11:15 AM CST, Christophe Leroy wrote:
>
>
> Le 21/02/2021 à 02:23, Christopher M. Riedl a écrit :
> > Just wrap __copy_tofrom_user() for the usual 'unsafe' pattern which
> > accepts a label to goto on error.
> >
> > Signed-of
with their 'unsafe' versions.
Modify the callers to first open, call unsafe_setup_sigcontext() and
then close the uaccess window.
Signed-off-by: Christopher M. Riedl
---
arch/powerpc/kernel/signal_64.c | 71 -
1 file changed, 44 insertions(+), 27 deletions(-)
("powerpc/signal32: Remove ifdefery in middle of if/else").
Unlike in the commit for ppc32, the ifdef can't be removed entirely
since uc_transact in sigframe depends on CONFIG_PPC_TRANSACTIONAL_MEM.
Signed-off-by: Christopher M. Riedl
---
arch/powerpc/kernel/process.c | 3 +-
arch
with their 'unsafe'
versions. Modify the callers to first open, call
unsafe_restore_sigcontext(), and then close the uaccess window.
Signed-off-by: Christopher M. Riedl
---
arch/powerpc/kernel/signal_64.c | 68 -
1 file changed, 41 insertions(+), 27 deletions(-)
Just wrap __copy_tofrom_user() for the usual 'unsafe' pattern which
accepts a label to goto on error.
Signed-off-by: Christopher M. Riedl
Reviewed-by: Daniel Axtens
---
arch/powerpc/include/asm/uaccess.h | 3 +++
1 file changed, 3 insertions(+)
diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
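A sketch of that wrapper, mirroring powerpc's existing unsafe_copy_to_user() (goto the caller-supplied label on error):

#define unsafe_copy_from_user(d, s, l, e)                               \
do {                                                                    \
        if (unlikely(__copy_tofrom_user((__force void __user *)(d),    \
                                        (s), (l))))                     \
                goto e;                                                 \
} while (0)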
On Fri Feb 12, 2021 at 3:21 PM CST, Daniel Axtens wrote:
> "Christopher M. Riedl" writes:
>
> > Usually sigset_t is exactly 8B which is a "trivial" size and does not
> > warrant using __copy_from_user(). Use __get_user() directly in
> > anticipati
removed entirely since
> > uc_transact in sigframe depends on CONFIG_PPC_TRANSACTIONAL_MEM.
> >
> > Signed-off-by: Christopher M. Riedl
> > ---
> > arch/powerpc/kernel/signal_64.c | 16 +++-
> > 1 file changed, 7 insertions(+), 9 deletions(-)
> >
> > diff
opening too wide a window in user_read_access_begin,
> it seems to me that it could be reduced to just the
> uc_mcontext. (Again, not that it makes a difference with the current
> HW.)
Ok, I'll fix these in the next version as well.
>
> Kind regards,
> Daniel
>
> > Sign
changing the
> callers of the old safe versions to first open the window, then call the
> unsafe variants, then close the window again.
Noted!
>
> > Signed-off-by: Christopher M. Riedl
> > ---
> > arch/powerpc/kernel/signal_64.c | 70 -
> >
On Wed Feb 10, 2021 at 3:06 PM CST, Daniel Axtens wrote:
> "Christopher M. Riedl" writes:
>
> > On Sun Feb 7, 2021 at 10:44 PM CST, Daniel Axtens wrote:
> >> Hi Chris,
> >>
> >> These two paragraphs are a little confusing and they seem slightly
gcontext()
> > function which must be called first and before opening up a uaccess
> > window. A follow-up commit converts setup_sigcontext() to be "unsafe".
>
> This was a bit confusing until we realise that you're moving the _
On Tue Feb 9, 2021 at 3:45 PM CST, Christophe Leroy wrote:
> "Christopher M. Riedl" a écrit :
>
> > Usually sigset_t is exactly 8B which is a "trivial" size and does not
> > warrant using __copy_from_user(). Use __get_user() directly in
> > anticipati
On Sun Feb 7, 2021 at 4:12 AM CST, Christophe Leroy wrote:
>
>
> Le 06/02/2021 à 18:39, Christopher M. Riedl a écrit :
> > On Sat Feb 6, 2021 at 10:32 AM CST, Christophe Leroy wrote:
> >>
> >>
> >> Le 20/10/2020 à 04:01, Christopher M. Riedl a écrit :
On Sat Feb 6, 2021 at 10:32 AM CST, Christophe Leroy wrote:
>
>
> Le 20/10/2020 à 04:01, Christopher M. Riedl a écrit :
> > On Fri Oct 16, 2020 at 10:48 AM CDT, Christophe Leroy wrote:
> >>
> >>
> >> Le 15/10/2020 à 17:01, Christopher M. Riedl a écrit :
probably not an issue, but it's still incorrect so fix it.
Also expand the comments to explain why using the stack "red zone"
instead of creating a new stackframe is appropriate here.
Signed-off-by: Christopher M. Riedl
---
arch/powerpc/kernel/idle_book3s.S | 138 +
On Wed Feb 3, 2021 at 12:43 PM CST, Christopher M. Riedl wrote:
> Usually sigset_t is exactly 8B which is a "trivial" size and does not
> warrant using __copy_from_user(). Use __get_user() directly in
> anticipation of future work to remove the trivial size optimizations
>
On Sat Jan 30, 2021 at 7:44 AM CST, Nicholas Piggin wrote:
> Excerpts from Michael Ellerman's message of January 30, 2021 9:32 pm:
> > "Christopher M. Riedl" writes:
> >> The idle entry/exit code saves/restores GPRs in the stack "red zone"
* Rebase on latest linuxppc/next + Christophe Leroy's PPC32
signal series
* Simplify/remove TM ifdefery similar to PPC32 series and clean
up the uaccess begin/end calls
* Isolate non-inline functions so they are not called when
uaccess
with their 'unsafe'
versions which avoid the repeated uaccess switches.
Signed-off-by: Christopher M. Riedl
---
arch/powerpc/kernel/signal_64.c | 68 -
1 file changed, 41 insertions(+), 27 deletions(-)
diff --git a/arch/powerpc/kernel/signal_64.c b/arch/powerpc/kernel/signal_64.c
g with the argument in the macro to avoid a
potential compile warning.
Signed-off-by: Christopher M. Riedl
---
arch/powerpc/include/asm/reg.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/powerpc/include/asm/reg.h b/arch/powerpc/include/asm/reg.h
index e40a921d78f9..c5a3e8561
signal handling throughput here.
Signed-off-by: Christopher M. Riedl
---
arch/powerpc/kernel/signal_64.c | 14 --
1 file changed, 12 insertions(+), 2 deletions(-)
diff --git a/arch/powerpc/kernel/signal_64.c b/arch/powerpc/kernel/signal_64.c
index 817b64e1e409..42fdc4a7ff72 100644
--- a/ar
function which must be called first and before opening up a uaccess
window. A follow-up commit converts setup_sigcontext() to be "unsafe".
Signed-off-by: Christopher M. Riedl
---
arch/powerpc/kernel/signal_64.c | 32 +---
1 file changed, 21 insertions(+), 11 deletions(-)