> On March 29, 2019 at 3:41 AM Christophe Leroy wrote:
>
>
>
>
> On 29/03/2019 at 05:21, cmr wrote:
> > Operations which write to memory should be restricted on secure systems
> > and, optionally, to avoid self-destructive behaviors.
> >
> > Add a config option, XMON_RO, to control default xm
> On March 29, 2019 at 12:49 AM Andrew Donnellan
> wrote:
>
>
> On 29/3/19 3:21 pm, cmr wrote:
> > Operations which write to memory should be restricted on secure systems
> > and, optionally, to avoid self-destructive behaviors.
>
> For reference:
> - https://github.com/linuxppc/issues/issue
> On April 3, 2019 at 12:15 AM Christophe Leroy wrote:
>
>
>
>
> On 03/04/2019 at 05:38, Christopher M Riedl wrote:
> >> On March 29, 2019 at 3:41 AM Christophe Leroy
> >> wrote:
> >>
> >>
> >>
> >>
> >> On 2
disable memset
disable memzcan
memex:
        no-op'd mwrite
super_regs:
        no-op'd write_spr
bpt_cmds:
        disable
proc_call:
        disable
Signed-off-by: Christopher M. Riedl
---
v1->v2:
Use bool type for xmon_is_ro flag
Replace XMON_RO with X
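A minimal sketch of the guard pattern this series describes, assuming xmon's internal printf() and the XMON_DEFAULT_RO_MODE option named later in this thread; the message text and fallthrough are illustrative:

static bool xmon_is_ro = IS_ENABLED(CONFIG_XMON_DEFAULT_RO_MODE);

static int mwrite(unsigned long adrs, void *buf, int size)
{
        if (xmon_is_ro) {
                printf("xmon: write disabled in read-only mode\n");
                return 0;       /* no-op: nothing is written */
        }
        /* ... existing write logic ... */
        return size;
}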
> On April 8, 2019 at 1:34 AM Oliver wrote:
>
>
> On Mon, Apr 8, 2019 at 1:06 PM Christopher M. Riedl
> wrote:
> >
> > Operations which write to memory and special purpose registers should be
> > restricted on systems with integrity guarantees (such as Secure Boot)
> On April 8, 2019 at 2:37 AM Andrew Donnellan
> wrote:
>
>
> On 8/4/19 1:08 pm, Christopher M. Riedl wrote:
> > Operations which write to memory and special purpose registers should be
> > restricted on systems with integrity guarantees (such as Secure Boot)
>
> On April 11, 2019 at 8:37 AM Michael Ellerman wrote:
>
>
> Christopher M Riedl writes:
> >> On April 8, 2019 at 1:34 AM Oliver wrote:
> >> On Mon, Apr 8, 2019 at 1:06 PM Christopher M. Riedl
> >> wrote:
> ...
> >> >
> >
Signed-off-by: Christopher M. Riedl
---
v2->v3:
Use XMON_DEFAULT_RO_MODE to set xmon read-only mode
Untangle read-only mode from STRICT_KERNEL_RWX and PAGE_KERNEL_ROX
Update printed msg string for write ops in read-only mode
arch/powerpc/Kconfig.debug | 8
Signed-off-by: Christopher M. Riedl
Reviewed-by: Oliver O'Halloran
---
v3->v4:
Address Andrew's nitpick.
arch/powerpc/Kconfig.debug | 8
arch/powerpc/xmon/xmon.c | 42 ++
2 files changed, 50 insertions(+)
diff --git a/arch/
. Choose a
randomized patching address inside the temporary mm userspace address
portion. The next patch uses the temporary mm and patching address for
code patching.
Based on x86 implementation:
commit 4fc19708b165
("x86/alternatives: Initialize temporary mm for patching")
Signed-off-by: Christopher M. Riedl
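A sketch of how the randomized, page-aligned patching address might be chosen, assuming DEFAULT_MAP_WINDOW bounds the userspace portion (the exact bound used by the series may differ):

#include <linux/random.h>

static unsigned long pick_patching_addr(void)
{
        /* page-aligned and below the userspace address ceiling */
        return (get_random_long() & PAGE_MASK) %
               (DEFAULT_MAP_WINDOW - PAGE_SIZE);
}

Both operands are multiples of PAGE_SIZE, so the result stays page-aligned.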
https://lore.kernel.org/kernel-hardening/20190426232303.28381-1-nadav.a...@gmail.com/
Christopher M. Riedl (3):
powerpc/mm: Introduce temporary mm
powerpc/lib: Initialize a temporary mm for code patching
powerpc/lib: Use a temporary mm for code patching
arch/powerpc/include/asm/debug.h | 1 +
arch/powerpc/include/asm/mmu_context.h | 56 +-
arch/powe
.
Based on x86 implementation:
commit b3fd8e83ada0
("x86/alternatives: Use temporary mm for text poking")
Signed-off-by: Christopher M. Riedl
---
arch/powerpc/lib/code-patching.c | 128 ++-
1 file changed, 57 insertions(+), 71 deletions(-)
diff --git a/arch/p
implementation:
commit cefa929c034e
("x86/mm: Introduce temporary mm structs")
Signed-off-by: Christopher M. Riedl
---
arch/powerpc/include/asm/debug.h | 1 +
arch/powerpc/include/asm/mmu_context.h | 56 +-
arch/powerpc/kernel/process.c | 5 +++
3 files c
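A rough sketch of the helpers such a struct enables, assuming switch_mm_irqs_off() from asm/mmu_context.h and eliding the breakpoint save/restore the series also handles:

struct temp_mm {
        struct mm_struct *temp;
        struct mm_struct *prev;
};

static inline void use_temporary_mm(struct temp_mm *temp_mm)
{
        lockdep_assert_irqs_disabled();
        temp_mm->prev = current->active_mm;
        switch_mm_irqs_off(NULL, temp_mm->temp, current);
}

static inline void unuse_temporary_mm(struct temp_mm *temp_mm)
{
        lockdep_assert_irqs_disabled();
        switch_mm_irqs_off(NULL, temp_mm->prev, current);
}

Interrupts stay off for the whole use/unuse window, so no other context can observe the temporary mappings on this CPU.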
> On March 24, 2020 11:07 AM Christophe Leroy wrote:
>
>
> On 23/03/2020 at 05:52, Christopher M. Riedl wrote:
> > x86 supports the notion of a temporary mm which restricts access to
> > temporary PTEs to a single CPU. A temporary mm is useful for situations
> >
> On March 24, 2020 11:10 AM Christophe Leroy wrote:
>
>
> On 23/03/2020 at 05:52, Christopher M. Riedl wrote:
> > When code patching a STRICT_KERNEL_RWX kernel the page containing the
> > address to be patched is temporarily mapped with permissive memory
> >
> On April 8, 2020 6:01 AM Christophe Leroy wrote:
>
>
> On 31/03/2020 at 05:19, Christopher M Riedl wrote:
> >> On March 24, 2020 11:10 AM Christophe Leroy
> >> wrote:
> >>
> >>
> >> On 23/03/2020 at 05:52, Christopher M. Riedl wrote:
> On March 24, 2020 11:25 AM Christophe Leroy wrote:
>
>
> On 23/03/2020 at 05:52, Christopher M. Riedl wrote:
> > Currently, code patching a STRICT_KERNEL_RWX kernel exposes the temporary
> > mappings to other CPUs. These mappings should be kept local to the CPU
> > d
> On March 26, 2020 9:42 AM Christophe Leroy wrote:
>
>
> This patch fixes the RFC series identified below.
> It fixes three points:
> - Failure with CONFIG_PPC_KUAP
> - Failure to write due to lack of DIRTY bit set on the 8xx
> - Inadequately complex WARN post verification
>
> However, it has an
> On April 15, 2020 4:12 AM Christophe Leroy wrote:
>
>
> On 15/04/2020 at 07:16, Christopher M Riedl wrote:
> >> On March 26, 2020 9:42 AM Christophe Leroy wrote:
> >>
> >>
> >> This patch fixes the RFC series identified below.
> On April 15, 2020 3:45 AM Christophe Leroy wrote:
>
>
> On 15/04/2020 at 07:11, Christopher M Riedl wrote:
> >> On March 24, 2020 11:25 AM Christophe Leroy
> >> wrote:
> >>
> >>
> >> On 23/03/2020 at 05:52, Christ
On Sat Apr 18, 2020 at 12:27 PM, Christophe Leroy wrote:
>
>
>
>
> On 15/04/2020 at 18:22, Christopher M Riedl wrote:
> >> On April 15, 2020 4:12 AM Christophe Leroy wrote:
> >>
> >>
> >> On 15/04/2020 at 07:16, Christopher M Riedl wrote
On Fri Apr 24, 2020 at 9:15 AM, Steven Rostedt wrote:
> On Thu, 23 Apr 2020 18:21:14 +0200
> Christophe Leroy wrote:
>
>
> > On 23/04/2020 at 17:09, Naveen N. Rao wrote:
> > > With STRICT_KERNEL_RWX, we are currently ignoring return value from
> > > __patch_instruction() in do_patch_instruction
https://github.com/linuxppc/issues/issues/224
[1]:
https://lore.kernel.org/kernel-hardening/20190426232303.28381-1-nadav.a...@gmail.com/
Christopher M. Riedl (5):
powerpc/mm: Introduce temporary mm
powerpc/lib: Initialize a temporary mm for code patching
powerpc/lib: Use a temporary mm for co
ignored (see PowerISA v3.0b, Fig. 35).
Based on x86 implementation:
commit b3fd8e83ada0
("x86/alternatives: Use temporary mm for text poking")
Signed-off-by: Christopher M. Riedl
---
arch/powerpc/lib/code-patching.c | 149 ---
1 file changed, 55 inserti
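The overall flow, sketched with illustrative names (patching_mm, patching_addr, and the temp-mm helpers follow the series; PTE setup and teardown are elided):

static int do_patch_instruction(u32 *addr, u32 instr)
{
        u32 __user *dst;
        unsigned long flags;
        int err;

        local_irq_save(flags);
        /* map the page containing addr at patching_addr, then switch
         * this CPU onto the temporary mm */
        use_temporary_mm(&patching_temp_mm);

        /* write through the temporary userspace alias of addr */
        dst = (u32 __user *)(patching_addr + offset_in_page(addr));
        err = put_user(instr, dst);
        flush_icache_range((unsigned long)addr, (unsigned long)addr + 4);

        /* switch back; PTE clear and local TLB flush also happen here */
        unuse_temporary_mm(&patching_temp_mm);
        local_irq_restore(flags);
        return err;
}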
kernel text is now overwritten.
How to run the test:
mount -t debugfs none /sys/kernel/debug
(echo HIJACK_PATCH > /sys/kernel/debug/provoke-crash/DIRECT)
Signed-off-by: Christopher M. Riedl
---
drivers/misc/lkdtm/core.c | 1 +
drivers/misc/lkdtm/lkdt
. Choose a
randomized patching address inside the temporary mm userspace address
portion. The next patch uses the temporary mm and patching address for
code patching.
Based on x86 implementation:
commit 4fc19708b165
("x86/alternatives: Initialize temporary mm for patching")
Signed-off-by: Christopher M. Riedl
CPU.
Signed-off-by: Christopher M. Riedl
---
arch/powerpc/lib/code-patching.c | 7 +++
1 file changed, 7 insertions(+)
diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c
index 26f06cdb5d7e..cfbdef90384e 100644
--- a/arch/powerpc/lib/code-patching.c
+++ b/a
implementation:
commit cefa929c034e
("x86/mm: Introduce temporary mm structs")
Signed-off-by: Christopher M. Riedl
---
arch/powerpc/include/asm/debug.h | 1 +
arch/powerpc/include/asm/mmu_context.h | 54 ++
arch/powerpc/kernel/process.c | 5 +++
3 files c
On Wed Apr 29, 2020 at 7:52 AM, Christophe Leroy wrote:
>
>
>
>
> On 29/04/2020 at 04:05, Christopher M. Riedl wrote:
> > Currently, code patching a STRICT_KERNEL_RWX kernel exposes the temporary
> > mappings to other CPUs. These mappings should be kept local to the CPU
On Wed Apr 29, 2020 at 7:39 AM, Christophe Leroy wrote:
>
>
>
>
> On 29/04/2020 at 04:05, Christopher M. Riedl wrote:
> > x86 supports the notion of a temporary mm which restricts access to
> > temporary PTEs to a single CPU. A temporary mm is useful for situatio
On Wed Apr 29, 2020 at 7:48 AM, Christophe Leroy wrote:
>
>
>
>
> On 29/04/2020 at 04:05, Christopher M. Riedl wrote:
> > x86 supports the notion of a temporary mm which restricts access to
> > temporary PTEs to a single CPU. A temporary mm is useful for situatio
The __rw_yield and __spin_yield functions only pertain to SPLPAR mode.
Rename them to make this relationship obvious.
Signed-off-by: Christopher M. Riedl
---
arch/powerpc/include/asm/spinlock.h | 6 --
arch/powerpc/lib/locks.c | 6 +++---
2 files changed, 7 insertions(+), 5 deletions
Fixes an oops when calling the shared-processor spinlock implementation
from a non-SP LPAR. Also take this opportunity to refactor
SHARED_PROCESSOR a bit.
Reference: https://github.com/linuxppc/issues/issues/229
Christopher M. Riedl (3):
powerpc/spinlocks: Refactor SHARED_PROCESSOR
powerpc
Determining if a processor is in shared processor mode is not a constant
so don't hide it behind a #define.
Signed-off-by: Christopher M. Riedl
---
arch/powerpc/include/asm/spinlock.h | 21 +++--
1 file changed, 15 insertions(+), 6 deletions(-)
diff --git a/arch/powerpc/in
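A sketch of the runtime check that replaces the #define (the lppaca accessor is approximate; the #ifdef keeps non-SPLPAR builds compiling):

static inline bool is_shared_processor(void)
{
#ifdef CONFIG_PPC_SPLPAR
        return !!get_lppaca()->shared_proc;
#else
        return false;
#endif
}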
e8e7 38e70100 <7ca03c2c> 70a70001 78a50020
4d820020
[0.452808] ---[ end trace 474d6b2b8fc5cb7e ]---
Signed-off-by: Christopher M. Riedl
---
arch/powerpc/include/asm/spinlock.h | 36 -
1 file changed, 25 insertions(+), 11 deletions(-)
diff --git a/arch/p
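The shape of the fix, sketched: only take the SPLPAR yield path when the runtime check says the LPAR really is in shared processor mode, otherwise fall back to a plain barrier():

static inline void spin_yield(arch_spinlock_t *lock)
{
        if (is_shared_processor())
                splpar_spin_yield(lock);
        else
                barrier();
}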
> On July 30, 2019 at 4:31 PM Thiago Jung Bauermann
> wrote:
>
>
>
> Christopher M. Riedl writes:
>
> > Determining if a processor is in shared processor mode is not a constant
> > so don't hide it behind a #define.
> >
> > Signed-off-b
> On July 30, 2019 at 7:11 PM Thiago Jung Bauermann
> wrote:
>
>
>
> Christopher M Riedl writes:
>
> >> On July 30, 2019 at 4:31 PM Thiago Jung Bauermann
> >> wrote:
> >>
> >>
> >>
> >> Christopher M. Riedl wr
is
required in is_shared_processor() in spinlock.h
- Replace empty #define of splpar_*_yield() with actual functions with
empty bodies.
Christopher M. Riedl (3):
powerpc/spinlocks: Refactor SHARED_PROCESSOR
powerpc/spinlocks: Rename SPLPAR-only spinlocks
powerpc/spinlocks: Fix oops in
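The replacement for the empty #defines, sketched: real declarations under CONFIG_PPC_SPLPAR, empty inline bodies otherwise, so callers need no #ifdef of their own:

#ifdef CONFIG_PPC_SPLPAR
void splpar_spin_yield(arch_spinlock_t *lock);
void splpar_rw_yield(arch_rwlock_t *lock);
#else
static inline void splpar_spin_yield(arch_spinlock_t *lock) {}
static inline void splpar_rw_yield(arch_rwlock_t *lock) {}
#endif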
The __rw_yield and __spin_yield functions only pertain to SPLPAR mode.
Rename them to make this relationship obvious.
Signed-off-by: Christopher M. Riedl
Reviewed-by: Andrew Donnellan
---
arch/powerpc/include/asm/spinlock.h | 6 --
arch/powerpc/lib/locks.c | 6 +++---
2 files
Determining if a processor is in shared processor mode is not a constant
so don't hide it behind a #define.
Signed-off-by: Christopher M. Riedl
Reviewed-by: Andrew Donnellan
---
arch/powerpc/include/asm/spinlock.h | 24 ++--
1 file changed, 18 insertions(+), 6 dele
e8e7 38e70100 <7ca03c2c> 70a70001 78a50020
4d820020
[0.452808] ---[ end trace 474d6b2b8fc5cb7e ]---
Signed-off-by: Christopher M. Riedl
---
arch/powerpc/include/asm/spinlock.h | 36 -
1 file changed, 25 insertions(+), 11 deletions(-)
diff --git a/arch/p
> On August 2, 2019 at 6:38 AM Michael Ellerman wrote:
>
>
> "Christopher M. Riedl" writes:
> > diff --git a/arch/powerpc/include/asm/spinlock.h
> > b/arch/powerpc/include/asm/spinlock.h
> > index 0a8270183770..6aed8a83b180 100644
> > --- a/arch
> lockdown=confidentiality
clear all breakpoints, set xmon read-only mode,
prevent re-entry into xmon
(3) lockdown=integrity -> lockdown=confidentiality
clear all breakpoints, set xmon read-only mode,
prevent re-entry into xmon
Suggested-by: Andrew Donnellan
Signed-off-by: Christopher M. Riedl
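A sketch of how xmon can react to those transitions on entry; the LOCKDOWN_XMON_* reasons follow the series' additions to include/linux/security.h, but the hook shape here is illustrative:

static bool xmon_check_lockdown(void)
{
        /* confidentiality: clear breakpoints and deny re-entry */
        if (security_locked_down(LOCKDOWN_XMON_RW)) {
                clear_all_bpt();
                return true;
        }
        /* integrity: force read-only mode */
        if (security_locked_down(LOCKDOWN_XMON_WR))
                xmon_is_ro = true;
        return false;
}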
> On July 29, 2019 at 2:00 AM Daniel Axtens wrote:
>
> Would you be able to send a v2 with these changes? (that is, not purging
> breakpoints when entering integrity mode)
>
Just sent out a v3 with that change among a few others and a rebase.
Thanks,
Chris R.
Determining if a processor is in shared processor mode is not a constant
so don't hide it behind a #define.
Signed-off-by: Christopher M. Riedl
Reviewed-by: Andrew Donnellan
---
arch/powerpc/include/asm/spinlock.h | 24 ++--
1 file changed, 18 insertions(+), 6 dele
e8e7 38e70100 <7ca03c2c> 70a70001 78a50020
4d820020
[0.452808] ---[ end trace 474d6b2b8fc5cb7e ]---
Signed-off-by: Christopher M. Riedl
---
Changes since v2:
- Directly call splpar_*_yield() to avoid duplicate call to
is_shared_processor() in some cases
arch/powerpc/inclu
The __rw_yield and __spin_yield functions only pertain to SPLPAR mode.
Rename them to make this relationship obvious.
Signed-off-by: Christopher M. Riedl
Reviewed-by: Andrew Donnellan
---
arch/powerpc/include/asm/spinlock.h | 6 --
arch/powerpc/lib/locks.c | 6 +++---
2 files
is_shared_processor() in some cases
Changes since v1:
- Improve comment wording to make it clear why the BOOK3S #ifdef is
required in is_shared_processor() in spinlock.h
- Replace empty #define of splpar_*_yield() with actual functions with
empty bodies
Christopher M. Riedl (3):
powerpc
> On August 6, 2019 at 7:14 AM Michael Ellerman wrote:
>
>
> Christopher M Riedl writes:
> >> On August 2, 2019 at 6:38 AM Michael Ellerman wrote:
> >> "Christopher M. Riedl" writes:
> >>
> >> This leaves us with a double test of
is_shared_processor() in spinlock.h
- Replace empty #define of splpar_*_yield() with actual functions with
empty bodies
Christopher M. Riedl (3):
powerpc/spinlocks: Refactor SHARED_PROCESSOR
powerpc/spinlocks: Rename SPLPAR-only spinlocks
powerpc/spinlocks: Fix oops in shared-processor spinlocks
arch
Determining if a processor is in shared processor mode is not a constant
so don't hide it behind a #define.
Signed-off-by: Christopher M. Riedl
Reviewed-by: Andrew Donnellan
---
arch/powerpc/include/asm/spinlock.h | 24 ++--
1 file changed, 18 insertions(+), 6 dele
e8e7 38e70100 <7ca03c2c> 70a70001 78a50020
4d820020
[0.452808] ---[ end trace 474d6b2b8fc5cb7e ]---
Signed-off-by: Christopher M. Riedl
---
arch/powerpc/include/asm/spinlock.h | 36 -
1 file changed, 25 insertions(+), 11 deletions(-)
diff --git a/arch/p
The __rw_yield and __spin_yield functions only pertain to SPLPAR mode.
Rename them to make this relationship obvious.
Signed-off-by: Christopher M. Riedl
Reviewed-by: Andrew Donnellan
---
arch/powerpc/include/asm/spinlock.h | 6 --
arch/powerpc/lib/locks.c | 6 +++---
2 files
> lockdown=confidentiality
clear all breakpoints, set xmon read-only mode,
prevent re-entry into xmon
(3) lockdown=integrity -> lockdown=confidentiality
clear all breakpoints, set xmon read-only mode,
prevent re-entry into xmon
Suggested-by: Andrew Donnellan
Signed-off-by: Christ
/11049461/
(based on: f632a8170a6b667ee4e3f552087588f0fe13c4bb)
- Do not clear existing breakpoints when transitioning from
lockdown=none to lockdown=integrity
- Remove line continuation and dangling quote (confuses checkpatch.pl)
from the xmon command help/usage string
Christopher M
Xmon can enter read-only mode dynamically due to changes in kernel
lockdown state. This transition does not clear active breakpoints and
any such breakpoints should remain visible to the xmon'er.
Signed-off-by: Christopher M. Riedl
---
arch/powerpc/xmon/xmon.c | 19 ++-
1
Read-only mode should not prevent listing and clearing any active
breakpoints.
Signed-off-by: Christopher M. Riedl
---
arch/powerpc/xmon/xmon.c | 15 ++-
1 file changed, 10 insertions(+), 5 deletions(-)
diff --git a/arch/powerpc/xmon/xmon.c b/arch/powerpc/xmon/xmon.c
index
clear all breakpoints, set xmon read-only mode,
prevent user re-entry into xmon
(3) lockdown=integrity -> lockdown=confidentiality
clear all breakpoints, set xmon read-only mode,
prevent user re-entry into xmon
Suggested-by: Andrew Donnellan
Signed-off-by: Christopher M. Riedl
checkpatch.pl)
from the xmon command help/usage string
Christopher M. Riedl (2):
powerpc/xmon: Allow listing active breakpoints in read-only mode
powerpc/xmon: Restrict when kernel is locked down
arch/powerpc/xmon/xmon.c | 104 +++
include/linux/security.h
> On August 29, 2019 at 2:43 AM Daniel Axtens wrote:
>
>
> Hi,
>
> > Xmon should be either fully or partially disabled depending on the
> > kernel lockdown state.
>
> I've been kicking the tyres of this, and it seems to work well:
>
> Tested-by: Daniel Axtens
>
Thank you for taking the t
> On August 29, 2019 at 1:40 AM Daniel Axtens wrote:
>
>
> Hi Chris,
>
> > Read-only mode should not prevent listing and clearing any active
> > breakpoints.
>
> I tested this and it works for me:
>
> Tested-by: Daniel Axtens
>
> > + if (xmon_is_ro || !scanhex(&a)) {
>
> It took
Read-only mode should not prevent listing and clearing any active
breakpoints.
Tested-by: Daniel Axtens
Reviewed-by: Daniel Axtens
Signed-off-by: Christopher M. Riedl
---
arch/powerpc/xmon/xmon.c | 16 +++-
1 file changed, 11 insertions(+), 5 deletions(-)
diff --git a/arch
lockdown=none to lockdown=integrity
- Remove line continuation and dangling quote (confuses checkpatch.pl)
from the xmon command help/usage string
Christopher M. Riedl (2):
powerpc/xmon: Allow listing and clearing breakpoints in read-only mode
powerpc/xmon: Restrict when kernel is locked down
Reviewed-by: Daniel Axtens
Signed-off-by: Christopher M. Riedl
---
arch/powerpc/xmon/xmon.c | 92
include/linux/security.h | 2 +
security/lockdown/lockdown.c | 2 +
3 files changed, 76 insertions(+), 20 deletions(-)
diff --git a/arch/powerpc/xmon/xmon.c
Read-only mode should not prevent listing and clearing any active
breakpoints.
Tested-by: Daniel Axtens
Reviewed-by: Daniel Axtens
Signed-off-by: Christopher M. Riedl
---
arch/powerpc/xmon/xmon.c | 16 +++-
1 file changed, 11 insertions(+), 5 deletions(-)
diff --git a/arch
clear all breakpoints, set xmon read-only mode,
prevent user re-entry into xmon
(3) lockdown=integrity -> lockdown=confidentiality
clear all breakpoints, set xmon read-only mode,
prevent user re-entry into xmon
Suggested-by: Andrew Donnellan
Signed-off-by: Christopher M. Riedl
(confuses checkpatch.pl)
from the xmon command help/usage string
Christopher M. Riedl (2):
powerpc/xmon: Allow listing and clearing breakpoints in read-only mode
powerpc/xmon: Restrict when kernel is locked down
arch/powerpc/xmon/xmon.c | 119
On Mon Oct 24, 2022 at 12:17 AM CDT, Benjamin Gray wrote:
> On Mon, 2022-10-24 at 14:45 +1100, Russell Currey wrote:
> > On Fri, 2022-10-21 at 16:22 +1100, Benjamin Gray wrote:
> > > From: "Christopher M. Riedl"
> > >
-%<--
> > >
> >
On Thu Aug 27, 2020 at 11:15 AM CDT, Jann Horn wrote:
> On Thu, Aug 27, 2020 at 7:24 AM Christopher M. Riedl
> wrote:
> > x86 supports the notion of a temporary mm which restricts access to
> > temporary PTEs to a single CPU. A temporary mm is useful for situations
> >
On Tue Aug 18, 2020 at 12:19 PM CDT, Christophe Leroy wrote:
> Change those two functions to be used within a user access block.
>
> For that, change save_general_regs() to and unsafe_save_general_regs(),
> then replace all user accesses by unsafe_ versions.
>
> This series leads to a reduction fro
On Tue Aug 18, 2020 at 12:19 PM CDT, Christophe Leroy wrote:
> For the non VSX version, that's trivial. Just use unsafe_copy_to_user()
> instead of __copy_to_user().
>
> For the VSX version, remove the intermediate step through a buffer and
> use unsafe_put_user() directly. This generates a far sma
|                 | hash   | radix  |
|-----------------|--------|--------|
| linuxppc/next   | 289014 | 158408 |
| unsafe-signal64 | 298506 | 253053 |
[0]: https://github.com/linuxppc/issues/issues/277
[1]: https://patchwork.ozlabs.org/project/linuxppc-dev/list/?series=196278
uaccess functions with their 'unsafe'
versions to avoid the repeated uaccess switches.
Signed-off-by: Daniel Axtens
Signed-off-by: Christopher M. Riedl
---
arch/powerpc/kernel/signal_64.c | 32 +++-
1 file changed, 19 insertions(+), 13 deletions(-)
diff --
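The pattern, sketched on two illustrative stores (struct and index names approximate): one user_write_access_begin()/user_write_access_end() pair brackets many unsafe_put_user() calls, so KUAP is toggled once rather than per access:

static long setup_sigcontext_sketch(struct sigcontext __user *sc,
                                    struct pt_regs *regs)
{
        if (!user_write_access_begin(sc, sizeof(*sc)))
                return -EFAULT;
        unsafe_put_user(regs->gpr[1], &sc->gp_regs[PT_R1], efault);
        unsafe_put_user(regs->nip, &sc->gp_regs[PT_NIP], efault);
        user_write_access_end();
        return 0;

efault:
        user_write_access_end();
        return -EFAULT;
}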
with their 'unsafe'
versions which avoid the repeated uaccess switches.
Signed-off-by: Christopher M. Riedl
---
arch/powerpc/kernel/signal_64.c | 68 -
1 file changed, 41 insertions(+), 27 deletions(-)
diff --git a/arch/powerpc/kernel/signal_64.c b/ar
their 'unsafe' versions
which avoid the repeated uaccess switches.
Signed-off-by: Christopher M. Riedl
---
arch/powerpc/kernel/signal_64.c | 71 -
1 file changed, 44 insertions(+), 27 deletions(-)
diff --git a/arch/powerpc/kernel/signal_64.c b/arch/powe
significantly reduces signal handling performance.
Signed-off-by: Christopher M. Riedl
---
arch/powerpc/kernel/signal.h | 33 +
1 file changed, 33 insertions(+)
diff --git a/arch/powerpc/kernel/signal.h b/arch/powerpc/kernel/signal.h
index 2559a681536e..e9aaeac0da
From: Daniel Axtens
Add uaccess blocks and use the 'unsafe' versions of functions doing user
access where possible to reduce the number of times uaccess has to be
opened/closed.
Signed-off-by: Daniel Axtens
Signed-off-by: Christopher M. Riedl
---
arch/powerpc/kernel/signal
"longer" uaccess block.
Signed-off-by: Daniel Axtens
Signed-off-by: Christopher M. Riedl
---
arch/powerpc/kernel/signal_64.c | 54 -
1 file changed, 27 insertions(+), 27 deletions(-)
diff --git a/arch/powerpc/kernel/signal_64.c b/arch/powerpc/kernel/
raw_copy_from_user_allowed() calls
__copy_tofrom_user() internally, but this is still safe to call in user
access blocks formed with user_*_access_begin()/user_*_access_end()
since asm functions are not instrumented for tracing.
Signed-off-by: Christopher M. Riedl
---
arch/powerpc/include/asm/uaccess.h | 28 +++--
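A sketch of the macro under those assumptions, reusing powerpc's unsafe_op_wrap() and the raw_copy_from_user_allowed() helper introduced earlier in this work:

#define unsafe_copy_from_user(d, s, l, e) \
        unsafe_op_wrap(raw_copy_from_user_allowed(d, s, l), e)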
Signed-off-by: Christopher M. Riedl
---
arch/powerpc/kernel/process.c | 20 ++--
arch/powerpc/mm/mem.c | 4 ++--
2 files changed, 12 insertions(+), 12 deletions(-)
diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
index ba2c987b8403..bf5d9654bd2c 100644
On Fri Oct 16, 2020 at 4:02 AM CDT, Christophe Leroy wrote:
>
>
> On 15/10/2020 at 17:01, Christopher M. Riedl wrote:
> > Functions called between user_*_access_begin() and user_*_access_end()
> > should be either inlined or marked 'notrace' to prevent leaving
>
On Fri Oct 16, 2020 at 10:48 AM CDT, Christophe Leroy wrote:
>
>
> On 15/10/2020 at 17:01, Christopher M. Riedl wrote:
> > Reuse the "safe" implementation from signal.c except for calling
> > unsafe_copy_from_user() to copy into a local buffer. Unlike the
&g
On Fri Oct 16, 2020 at 10:56 AM CDT, Christophe Leroy wrote:
>
>
> On 15/10/2020 at 17:01, Christopher M. Riedl wrote:
> > From: Daniel Axtens
> >
> > Previously setup_trampoline() performed a costly KUAP switch on every
> > uaccess operation. These
On Fri Oct 16, 2020 at 11:00 AM CDT, Christophe Leroy wrote:
>
>
> On 15/10/2020 at 17:01, Christopher M. Riedl wrote:
> > From: Daniel Axtens
> >
> > Add uaccess blocks and use the 'unsafe' versions of functions doing user
> > access where possible t
On Fri Oct 16, 2020 at 11:07 AM CDT, Christophe Leroy wrote:
>
>
> On 15/10/2020 at 17:01, Christopher M. Riedl wrote:
> > From: Daniel Axtens
> >
> > Add uaccess blocks and use the 'unsafe' versions of functions doing user
> > access where possible t
On Fri Oct 16, 2020 at 10:17 AM CDT, Christophe Leroy wrote:
>
>
> On 15/10/2020 at 17:01, Christopher M. Riedl wrote:
> > Implement raw_copy_from_user_allowed() which assumes that userspace read
> > access is open. Use this new function to implement raw_copy_from_user().
significantly reduces signal handling performance.
Signed-off-by: Christopher M. Riedl
---
arch/powerpc/kernel/signal.h | 33 +
1 file changed, 33 insertions(+)
diff --git a/arch/powerpc/kernel/signal.h b/arch/powerpc/kernel/signal.h
index 2559a681536e..e9aaeac0da
open uaccess window. Non-inline functions should be
avoided when uaccess is open.
Signed-off-by: Christopher M. Riedl
---
arch/powerpc/kernel/signal_64.c | 32 +---
1 file changed, 21 insertions(+), 11 deletions(-)
diff --git a/arch/powerpc/kernel/signal_64.c b/arch/powe
Similar to commit 1c32940f5220 ("powerpc/signal32: Remove ifdefery in
middle of if/else") for PPC32, remove the messy ifdef. Unlike PPC32, the
ifdef cannot be removed entirely since the uc_transact member of the
sigframe depends on CONFIG_PPC_TRANSACTIONAL_MEM=y.
Signed-off-by: Christopher M. Riedl
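The usual shape of such a cleanup, sketched; the MSR_TM_ACTIVE() condition is illustrative, and the uc_transact member itself must stay behind the #ifdef because the sigframe layout depends on the config:

if (IS_ENABLED(CONFIG_PPC_TRANSACTIONAL_MEM) && MSR_TM_ACTIVE(msr)) {
        /* transactional setup: touches frame->uc_transact, which only
         * exists when CONFIG_PPC_TRANSACTIONAL_MEM=y */
} else {
        /* regular, non-transactional setup */
}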
* Isolate non-inline functions so they are not called when
uaccess window is open
Christopher M. Riedl (6):
powerpc/uaccess: Add unsafe_copy_from_user
powerpc/signal: Add unsafe_copy_{vsx,fpr}_from_user()
powerpc/signal64: Move non-inline functions out of setup_sigcont
From: Daniel Axtens
Add uaccess blocks and use the 'unsafe' versions of functions doing user
access where possible to reduce the number of times uaccess has to be
opened/closed.
Signed-off-by: Daniel Axtens
Co-developed-by: Christopher M. Riedl
Signed-off-by: Christopher M. Riedl
with their 'unsafe'
versions which avoid the repeated uaccess switches.
Signed-off-by: Christopher M. Riedl
---
arch/powerpc/kernel/signal_64.c | 68 -
1 file changed, 41 insertions(+), 27 deletions(-)
diff --git a/arch/powerpc/kernel/signal_64.c b/ar
raw_copy_from_user_allowed() calls
__copy_tofrom_user() internally, but this is still safe to call in user
access blocks formed with user_*_access_begin()/user_*_access_end()
since asm functions are not instrumented for tracing.
Signed-off-by: Christopher M. Riedl
---
arch/powerpc/include/asm/uaccess.h | 28 +++--
their 'unsafe' versions
which avoid the repeated uaccess switches.
Signed-off-by: Christopher M. Riedl
---
arch/powerpc/kernel/signal_64.c | 70 -
1 file changed, 43 insertions(+), 27 deletions(-)
diff --git a/arch/powerpc/kernel/signal_64.c b/arch/powe
"longer" uaccess block.
Signed-off-by: Daniel Axtens
Co-developed-by: Christopher M. Riedl
Signed-off-by: Christopher M. Riedl
---
arch/powerpc/kernel/signal_64.c | 54 +
1 file changed, 34 insertions(+), 20 deletions(-)
diff --git a/arch/powerpc/kernel/sig
On Wed Feb 3, 2021 at 12:43 PM CST, Christopher M. Riedl wrote:
> Usually sigset_t is exactly 8B which is a "trivial" size and does not
> warrant using __copy_from_user(). Use __get_user() directly in
> anticipation of future work to remove the trivial size optimizations
>
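For reference, a sketch of the direct __get_user() form for the 8-byte case (field names approximate):

sigset_t set;

/* uc is the user-space struct ucontext pointer from the sigframe */
if (__get_user(set.sig[0], &uc->uc_sigmask.sig[0]))
        return -EFAULT;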
Probably not an issue, but it's still incorrect so fix it.
Also expand the comments to explain why using the stack "red zone"
instead of creating a new stackframe is appropriate here.
Signed-off-by: Christopher M. Riedl
---
arch/powerpc/kernel/idle_book3s.S | 138 +
On Sat Feb 6, 2021 at 10:32 AM CST, Christophe Leroy wrote:
>
>
> > On 20/10/2020 at 04:01, Christopher M. Riedl wrote:
> > On Fri Oct 16, 2020 at 10:48 AM CDT, Christophe Leroy wrote:
> >>
> >>
> >> On 15/10/2020 at 17:01, Christopher M. Riedl wrote:
On Sun Feb 7, 2021 at 4:12 AM CST, Christophe Leroy wrote:
>
>
> > On 06/02/2021 at 18:39, Christopher M. Riedl wrote:
> > On Sat Feb 6, 2021 at 10:32 AM CST, Christophe Leroy wrote:
> >>
> >>
> >> On 20/10/2020 at 04:01, Christopher M. Riedl wrote:
On Tue Feb 9, 2021 at 3:45 PM CST, Christophe Leroy wrote:
> "Christopher M. Riedl" a écrit :
>
> > Usually sigset_t is exactly 8B which is a "trivial" size and does not
> > warrant using __copy_from_user(). Use __get_user() directly in
> > anticipati