passes on these large machines.
Signed-off-by: Michael Neuling
---
tools/testing/selftests/powerpc/math/vmx_signal.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/tools/testing/selftests/powerpc/math/vmx_signal.c b/tools/testing/selftests/powerpc/math/vmx_signal.c
index b340a5c4e79d
On Thu, 2022-08-04 at 11:27 +0930, Joel Stanley wrote:
> Enable the LiteX MMC device and its dependency, the common clock
> framework.
>
> Signed-off-by: Joel Stanley
Acked-by: Michael Neuling
> ---
> arch/powerpc/configs/microwatt_defconfig | 5 +
> 1
On Fri, 2022-05-20 at 10:06 +1000, Nicholas Piggin wrote:
> Excerpts from Joel Stanley's message of May 19, 2022 10:57 pm:
> > In commit 5402e239d09f ("powerpc/64s: Get LPID bit width from device
> > tree") the kernel tried to determine the pid and lpid bits from the
> > device tree. If they are no
cal section that does not fault.
s/chane/change/
>
> [ The SCRATCH0 change is not strictly part of the fix, it's only used in
> the RI=0 section so it does not have the same problem as the previous
> SCRATCH0 bug. ]
>
> Signed-off-by: Nicholas Piggin
This needs to CC Rashmica Gupta
On Wed, 2020-11-11 at 16:55 +1100, Jordan Niethe wrote:
> The hardware trace macros which use the memory provided by memtrace are
> able to use trace sizes as small as 16MB. Only memblock aligned values
> can be removed from each NUMA node by writing that value to
> memtrace/ena
Nimbus <= DD2.1 bare metal.
The fix is to align vbuf to a 16 byte boundary.
Fixes: 5080332c2c89 ("powerpc/64s: Add workaround for P9 vector CI load issue")
Signed-off-by: Michael Neuling
Cc: # v4.15+
---
arch/powerpc/kernel/traps.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
t;).
This changes the loop to start from offset 0 rather than 1 so that we
test the kernel emulation in p9_hmi_special_emu().
Signed-off-by: Michael Neuling
---
.../selftests/powerpc/alignment/alignment_handler.c | 8 ++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/too
On Mon, 2020-08-03 at 22:41 +1000, Michael Ellerman wrote:
> Michael Neuling writes:
> > On POWER10 bit 12 in the PVR indicates if the core is SMT4 or
> > SMT8. Bit 12 is set for SMT4.
> >
> > Without this patch, /proc/cpuinfo on a SMT4 DD1 POWER10 looks
On POWER10 bit 12 in the PVR indicates if the core is SMT4 or
SMT8. Bit 12 is set for SMT4.
Without this patch, /proc/cpuinfo on a SMT4 DD1 POWER10 looks like
this:
cpu : POWER10, altivec supported
revision: 17.0 (pvr 0080 1100)
Signed-off-by: Michael Neuling
On Fri, 2020-07-10 at 10:52 +0530, Pratik Rajesh Sampat wrote:
> Additional registers DAWR0, DAWRX0 may be lost on Power 10 for
> stop levels < 4.
> Therefore save the values of these SPRs before entering a "stop"
> state and restore their values on wakeup.
>
> Signed-off-by: Pratik Rajesh Sampat
On Wed, 2020-07-01 at 05:20 -0400, Athira Rajeev wrote:
> PowerISA v3.1 has few updates for the Branch History Rolling Buffer(BHRB).
> First is the addition of BHRB disable bit and second new filtering
> modes for BHRB.
>
> BHRB disable is controlled via Monitor Mode Control Register A (MMCRA)
> b
> @@ -480,6 +520,7 @@ int isa207_compute_mmcr(u64 event[], int n_ev,
> mmcr[1] = mmcr1;
> mmcr[2] = mmcra;
> mmcr[3] = mmcr2;
> + mmcr[4] = mmcr3;
This is fragile like the kvm vcpu case I commented on before but it gets passed
in via a function parameter?! Can you create a
On Wed, 2020-07-01 at 05:20 -0400, Athira Rajeev wrote:
> From: Madhavan Srinivasan
>
> Add power10 feature function to dt_cpu_ftrs.c along
> with a power10 specific init() to initialize pmu sprs.
Can you say why you're doing this?
Can you add some text about what you're doing to the BHRB in th
@@ -637,12 +637,12 @@ struct kvm_vcpu_arch {
> u32 ccr1;
> u32 dbsr;
>
> - u64 mmcr[5];
> + u64 mmcr[6];
> u32 pmc[8];
> u32 spmc[2];
> u64 siar;
> + mfspr r5, SPRN_MMCR3
> + mfspr r6, SPRN_SIER2
> + mfspr r7, SPRN_SIER3
> + std r5
Mikey is going to test out pseries.
FWIW this worked for me in the P10 + powervm sim testing.
Tested-by: Michael Neuling
>
> - Alistair
>
> On Thursday, 28 May 2020 12:58:40 AM AEST Michael Ellerman wrote:
> > __init_FSCR() was added originally in commit 2468dcf641e4 ("p
Currently when we boot on a big core system, we get this print:
[0.040500] Using small cores at SMT level
This is misleading as we've actually detected big cores.
This patch clears up the print to say we've detected big cores but are
using small cores for scheduling.
Signed-off-b
-by: Michael Neuling
---
arch/powerpc/configs/ppc64_defconfig | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/powerpc/configs/ppc64_defconfig b/arch/powerpc/configs/ppc64_defconfig
index bae8170d74..0a92549924 100644
--- a/arch/powerpc/configs/ppc64_defconfig
+++ b/arch/powerpc/configs
3.1. Hence:
Tested-by: Michael Neuling
> ---
> arch/powerpc/include/uapi/asm/cputable.h | 2 ++
> 1 file changed, 2 insertions(+)
>
> diff --git a/arch/powerpc/include/uapi/asm/cputable.h b/arch/powerpc/include/uapi/asm/cputable.h
> index 540592034740..2692a56bf20b
On Tue, 2020-03-31 at 12:12 -0300, Tulio Magno Quites Machado Filho wrote:
> Alistair Popple writes:
>
> > diff --git a/arch/powerpc/include/uapi/asm/cputable.h b/arch/powerpc/include/uapi/asm/cputable.h
> > index 540592034740..c6fe10b2 100644
> > --- a/arch/powerpc/include/uapi/asm/cputa
The ISA has a quirk that's useful for the Linux implementation.
Document it here so others are less likely to trip over it.
Signed-off-by: Michael Neuling
Suggested-by: Michael Ellerman
---
.../powerpc/transactional_memory.rst | 27 +++
1 file changed, 27 inser
Christophe,
> Le 28/06/2019 à 17:47, Christophe Leroy a écrit :
> > The purpose of this series is to reduce the amount of #ifdefs
> > in ptrace.c
> >
>
> Any feedback on this series which aims at fixing the issue you opened at
> https://github.com/linuxppc/issues/issues/128 ?
Yeah, sorry my ba
On Mon, 2020-02-17 at 07:40 +0100, Christophe Leroy wrote:
>
> Le 16/02/2020 à 23:40, Michael Neuling a écrit :
> > On Fri, 2020-02-14 at 08:33 +, Christophe Leroy wrote:
> > > With CONFIG_VMAP_STACK, data MMU has to be enabled
> > > to read data on the stack.
On Sun, 2020-02-16 at 23:57 -0600, Segher Boessenkool wrote:
> On Mon, Feb 17, 2020 at 12:07:31PM +1100, Michael Neuling wrote:
> > On Thu, 2020-02-13 at 10:15 -0500, Gustavo Romero wrote:
> > > On P9 DD2.2 due to a CPU defect some TM instructions need to be emulated
> >
On Thu, 2020-02-13 at 10:15 -0500, Gustavo Romero wrote:
> On P9 DD2.2 due to a CPU defect some TM instructions need to be emulated by
> KVM. This is handled at first by the hardware raising a softpatch interrupt
> when certain TM instructions that need KVM assistance are executed in the
> guest. S
Daniel.
Can you start this commit message with a simple description of what you are
actually doing? This reads like you've been on a long journey to Mordor and
back, which, as a reader of this patch in the distant future, I don't care
about. I just want to know what you're implementing.
Also
Paulus,
Something below for you I think
> We have an IBM POWER server (8247-42L) running Linux kernel 5.4.13 on Debian
> unstable
> hosting a big-endian ppc64 virtual machine running the same kernel in
> big-endian
> mode.
>
> When building OpenJDK-11 on the big-endian VM, the testsuite crash
On Fri, 2020-02-14 at 08:33 +, Christophe Leroy wrote:
> With CONFIG_VMAP_STACK, data MMU has to be enabled
> to read data on the stack.
Can you describe what goes wrong without this? Some oops message? rtas blows up?
Get corrupt data?
Also can you say what you're actually doing (ie turning o
by: Leonardo Bras
LGTM
Reviewed-by: Michael Neuling
> ---
> arch/powerpc/kernel/cputable.c | 1 -
> 1 file changed, 1 deletion(-)
>
> diff --git a/arch/powerpc/kernel/cputable.c b/arch/powerpc/kernel/cputable.c
> index e745abc5457a..5a87ec96582f 100644
> --- a/arch/powerpc/
.kernel.org # v3.9
> Signed-off-by: Gustavo Luiz Duarte
Acked-By: Michael Neuling
On Sun, 2020-02-09 at 21:17 -0800, Haren Myneni wrote:
> On Fri, 2020-02-07 at 16:57 +1100, Michael Neuling wrote:
> > > /*
> > > + * Process CRBs that we receive on the fault window.
> > > + */
> > > +irqreturn_t vas_fault_handler(int irq, void *data)
>
> > > +
> > > + csb.cc = CSB_CC_TRANSLATION;
> > > + csb.ce = CSB_CE_TERMINATION;
> > > + csb.cs = 0;
> > > + csb.count = 0;
> > > +
> > > + /*
> > > + * Returns the fault address in CPU format since it is passed with
> > > + * signal. But if the user space expects BE format, need changes.
> >
> /*
> + * Process CRBs that we receive on the fault window.
> + */
> +irqreturn_t vas_fault_handler(int irq, void *data)
> +{
> + struct vas_instance *vinst = data;
> + struct coprocessor_request_block buf, *crb;
> + struct vas_window *window;
> + void *fifo;
> +
> + /*
> +
On Wed, 2020-01-22 at 00:17 -0800, Haren Myneni wrote:
> For each fault CRB, update fault address in CRB (fault_storage_addr)
> and translation error status in CSB so that user space can touch the
> fault address and resend the request. If the user space passed invalid
> CSB address send signal to
On Thu, 2020-02-06 at 19:13 -0300, Gustavo Luiz Duarte wrote:
>
> On 2/5/20 1:58 AM, Michael Neuling wrote:
> > Other than the minor things below that I think you need, the patch is good
> > with me.
> >
> > Acked-by: Michael Neuling
> >
> > > Subj
Other than the minor things below that I think you need, the patch is good with me.
Acked-by: Michael Neuling
> Subject: Re: [PATCH v2 1/3] powerpc/tm: Clear the current thread's MSR[TS]
> after treclaim
The subject should mention "signals".
On Mon, 2020-02-03 at 13:0
The Linux kernel for powerpc since v4.15 has a bug in its TM handling during
interrupts where any user can read the FP/VMX registers of a different user's
process. Users of TM + FP/VMX can also experience corruption of their FP/VMX
state.
To trigger the bug, a process starts a transaction with F
The Linux kernel for powerpc since v4.12 has a bug in its TM handling where any
user can read the FP/VMX registers of a different user's process. Users of TM +
FP/VMX can also experience corruption of their FP/VMX state.
To trigger the bug, a process starts a transaction and reads a FP/VMX regis
A simple testcase to replicate this will be posted to
tools/testing/selftests/powerpc/tm/tm-poison.c
This fixes CVE-2019-15031.
Fixes: a7771176b439 ("powerpc: Don't enable FP/Altivec if not checkpointed")
Cc: sta...@vger.kernel.org # 4.15+
Signed-off-by: Gustavo Romero
Signed-of
030.
Fixes: f48e91e87e67 ("powerpc/tm: Fix FP and VMX register corruption")
Cc: sta...@vger.kernel.org # 4.12+
Signed-off-by: Gustavo Romero
Signed-off-by: Michael Neuling
---
arch/powerpc/kernel/process.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/arch/powerp
From: Gustavo Romero
Add TM selftest to check if FP or VEC register values from one process
can leak into another process when both run on the same CPU.
This tests for CVE-2019-15030 and CVE-2019-15031.
Signed-off-by: Gustavo Romero
Signed-off-by: Michael Neuling
---
tools/testing/selftests
The Linux kernel for powerpc since v3.9 has a bug in the TM handling where any
unprivileged local user may crash the operating system.
This bug affects machines using 64-bit CPUs where Transactional Memory (TM) is
not present or has been disabled (see below for more details on affected CPUs).
To
on P9.
This fixes CVE-2019-13648.
Fixes: 2b0a576d15 ("powerpc: Add new transactional memory state to the signal context")
Cc: sta...@vger.kernel.org # v3.9
Reported-by: Praveen Pandey
Signed-off-by: Michael Neuling
---
arch/powerpc/kernel/signal_32.c | 3 +++
arch/powerpc/kernel
On Mon, 2019-06-24 at 21:48 +1000, Michael Ellerman wrote:
> Michael Neuling writes:
> > When emulating tsr, treclaim and trechkpt, we incorrectly set CR0. The
> > code currently sets:
> > CR0 <- 00 || MSR[TS]
> > but according to the ISA it should be:
>
Signed-off-by: Michael Neuling
---
arch/powerpc/kvm/book3s_hv_tm.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/arch/powerpc/kvm/book3s_hv_tm.c b/arch/powerpc/kvm/book3s_hv_tm.c
index 888e2609e3..31cd0f327c 100644
--- a/arch/powerpc/kvm/book3s_hv_tm.c
+++ b/arch/power
On Tue, 2019-06-18 at 18:28 +0200, Christophe Leroy wrote:
>
> Le 04/06/2019 à 05:00, Michael Neuling a écrit :
> > If you compile with KVM but without CONFIG_HAVE_HW_BREAKPOINT you fail
> > at linking with:
> >arch/powerpc/kvm/book3s_hv_rmhandlers.o:(.text+0x708): un
On Tue, 2019-06-18 at 09:57 +0530, Ravi Bangoria wrote:
> Watchpoint match range is always doubleword (8 bytes) aligned on
> powerpc. If the given range crosses a doubleword boundary, we
> need to increase the length so that the next doubleword is also
> covered. Ex,
>
> address len =
On Tue, 2019-06-18 at 08:01 +0200, Christophe Leroy wrote:
>
> Le 18/06/2019 à 06:27, Ravi Bangoria a écrit :
> > patch 1-3: Code refactor
> > patch 4: Speedup disabling breakpoint
> > patch 5: Fix length calculation for unaligned targets
>
> While you are playing with hw breakpoints, did you hav
On Tue, 2019-06-18 at 09:57 +0530, Ravi Bangoria wrote:
> Directly setting dawr and dawrx with 0 should be enough to
> disable watchpoint. No need to reset individual bits in
> variable and then set in hw.
This seems like a pointless optimisation to me.
I'm all for adding more code/complexity if
This is going to collide with this patch
https://patchwork.ozlabs.org/patch/1109594/
Mikey
On Tue, 2019-06-18 at 09:57 +0530, Ravi Bangoria wrote:
> Remove unnecessary comments. The code itself is self-explanatory.
> And the ISA already talks about the MRD field; I don't think we need
> to re-describe it.
> Subject: Powerpc/hw-breakpoint: Replace stale do_dabr() with do_break()
Can you add the word "comment" to this subject. Currently it implies there are
code changes here.
Mikey
On Tue, 2019-06-18 at 09:57 +0530, Ravi Bangoria wrote:
> do_dabr() was renamed to do_break() long ago. But I still
> > > 3:
> > > /* Emulate H_SET_DABR/X on P8 for the sake of compat mode
> > > guests */
> > > rlwimi r5, r4, 5, DAWRX_DR | DAWRX_DW
> > > c010b03c: 74 2e 85 50 rlwimi r5,r4,5,25,26
> > > rlwimi r5, r4, 2, DAWRX_WT
> > > c010b040: f6 16 8
ly.
Fixes: c1fe190c0672 ("powerpc: Add force enable of DAWR on P9 option")
Signed-off-by: Michael Neuling
Reported-by: Cédric Le Goater
--
mpe: This is for 5.2 fixes
v2: Review from Christophe Leroy
- De-Mikey/Cedric-ify commit message
- Add "Fixes:"
- Other trivial
On Wed, 2019-06-12 at 09:43 +0200, Cédric Le Goater wrote:
> On 12/06/2019 09:22, Michael Neuling wrote:
> > In commit c1fe190c0672 ("powerpc: Add force enable of DAWR on P9
> > option") I screwed up some assembler and corrupted a pointer in
> > r3. This resulted
hen we are returning
immediately.
Signed-off-by: Michael Neuling
Reported-by: Cédric Le Goater
--
mpe: This is for 5.2 fixes
---
arch/powerpc/kvm/book3s_hv_rmhandlers.S | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S
b/arch/p
On Tue, 2019-06-11 at 09:51 +0200, Christophe Leroy wrote:
>
> Le 11/06/2019 à 09:24, Michael Neuling a écrit :
> > On Tue, 2019-06-11 at 08:48 +0200, Cédric Le Goater wrote:
> > > On 11/06/2019 08:44, Michael Neuling wrote:
> > > > > &
On Tue, 2019-06-11 at 08:48 +0200, Cédric Le Goater wrote:
> On 11/06/2019 08:44, Michael Neuling wrote:
> > > > 2:
> > > > -BEGIN_FTR_SECTION
> > > > - /* POWER9 with disabled DAWR */
> > > > + LOAD_REG_ADDR(r11, da
> > 2:
> > -BEGIN_FTR_SECTION
> > - /* POWER9 with disabled DAWR */
> > + LOAD_REG_ADDR(r11, dawr_force_enable)
> > + lbz r11, 0(r11)
> > + cmpdi r11, 0
> > li r3, H_HARDWARE
> > - blr
> > -END_FTR_SECTION_IFCLR(CPU_FTR_DAWR)
> > + beqlr
>
> Why is this a 'beqlr' ? Sh
On Thu, 2019-06-06 at 12:59 +0530, Ravi Bangoria wrote:
> Powerpc hw triggers watchpoint before executing the instruction.
> To make trigger-after-execute behavior, kernel emulates the
> instruction. If the instruction is 'load something into non-
> volatile register', exception handler should rest
, provide an intermediate callback function to avoid the warning.
Fixes: c1fe190c0672 ("powerpc: Add force enable of DAWR on P9 option")
Suggested-by: Christoph Hellwig
Signed-off-by: Mathieu Malaterre
Signed-off-by: Michael Neuling
---
arch/powerpc/kernel/hw_breakpoint.c | 7 ++-
.
This moves a bunch of code around to fix this. It moves a lot of the
DAWR code into a new file and creates a new CONFIG_PPC_DAWR to enable
compiling it.
Fixes: c1fe190c0672 ("powerpc: Add force enable of DAWR on P9 option")
Signed-off-by: Michael Neuling
--
v5:
- Changes based on com
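For readers unfamiliar with how such a symbol is wired up, a rough Kconfig sketch of what the new CONFIG_PPC_DAWR option might look like; the actual dependencies and default in the merged patch may differ:

```
# Hypothetical sketch, not the patch's exact text.
config PPC_DAWR
	bool
	default y if PPC64 && PPC_BOOK3S_64
```

A bare `bool` with a `default` lets arch code select the DAWR infrastructure unconditionally on the CPUs that can use it, without exposing a user-visible prompt.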
++ b/arch/powerpc/kernel/dawr.c
> > @@ -0,0 +1,100 @@
> > +// SPDX-License-Identifier: GPL-2.0+
> > +//
> > +// DAWR infrastructure
> > +//
> > +// Copyright 2019, Michael Neuling, IBM Corporation.
>
> Normal top of file header should be /* */, //-style comment
, provide an intermediate callback function to avoid the warning.
Fixes: c1fe190c0672 ("powerpc: Add force enable of DAWR on P9 option")
Suggested-by: Christoph Hellwig
Signed-off-by: Mathieu Malaterre
Signed-off-by: Michael Neuling
---
arch/powerpc/kernel/hw_breakpoint.c | 7 ++-
.
This moves a bunch of code around to fix this. It moves a lot of the
DAWR code into a new file and creates a new CONFIG_PPC_DAWR to enable
compiling it.
Fixes: c1fe190c0672 ("powerpc: Add force enable of DAWR on P9 option")
Signed-off-by: Michael Neuling
--
v4:
- Fix merge conflic
> > > > --
> > > > v2:
> > > > Fixes based on Christophe Leroy's comments:
> > > > - Fix commit message formatting
> > > > - Move more DAWR code into dawr.c
> > > > ---
> > > >arch/powerpc/Kconfig | 5 ++
> > > >arch/powerpc/include/asm/hw_breakpoint.h | 20
On Mon, 2019-05-13 at 11:08 +0200, Christophe Leroy wrote:
>
> Le 13/05/2019 à 09:17, Michael Neuling a écrit :
> > If you compile with KVM but without CONFIG_HAVE_HW_BREAKPOINT you fail
> > at linking with:
> >arch/powerpc/kvm/book3s_hv_rmhandlers.o:(.text+0x708): un
This puts more of the dawr infrastructure in a new file.
Signed-off-by: Michael Neuling
--
v2:
Fixes based on Christophe Leroy's comments:
- Fix commit message formatting
- Move more DAWR code into dawr.c
---
arch/powerpc/Kconfig | 5 ++
arch/powerpc/include/asm/
commit 243e25112d06 ("powerpc/xive: Native exploitation of the XIVE
interrupt controller") added an option to turn off Linux native XIVE
usage via the xive=off kernel command line option.
This documents this option.
Signed-off-by: Michael Neuling
---
Documentation/admin-gu
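The admin-guide addition presumably follows the kernel-parameters.txt entry format; a rough sketch with assumed wording, not the actual patch text:

```
xive=off	[PPC]
		Do not use the XIVE interrupt controller in its native
		("exploitation") mode; fall back to the legacy XICS
		compatibility mode instead.
```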
chael Neuling
powerpc: Add force enable of DAWR on P9 option
This builds dawr_force_enable in always via a new file.
Signed-off-by: Michael Neuling
---
arch/powerpc/kernel/Makefile| 2 +-
arch/powerpc/kernel/dawr.c | 11 +++
arch/powerpc/kernel/hw_breakpoint.c | 3 -
400c
> SP (7fffeca90f40) is in userspace
>
> The solution for this problem is running the sigreturn code with
> regs->msr[TS] disabled, thus, avoiding hitting the side effect above. This
> does not seem to be a problem since regs->msr will be replaced by the
> uconte
On Mon, 2019-04-01 at 16:41 +1030, Joel Stanley wrote:
> Those of us not drowning in POWER might not know what this means.
Hehe... thanks!
> Signed-off-by: Joel Stanley
Acked-by: Michael Neuling
> ---
> Documentation/powerpc/DAWR-POWER9.txt | 8
> 1 file change
will fail if the hypervisor doesn't support
writing the DAWR.
To double check the DAWR is working, run this kernel selftest:
tools/testing/selftests/powerpc/ptrace/ptrace-hwbreak.c
Any errors/failures/skips mean something is wrong.
Signed-off-by: Michael Neuling
---
v2:
Fix compile
will fail if the hypervisor doesn't support
writing the DAWR.
To double check the DAWR is working, run this kernel selftest:
tools/testing/selftests/powerpc/ptrace/ptrace-hwbreak.c
Any errors/failures/skips mean something is wrong.
Signed-off-by: Michael Neuling
---
Documentatio
accelerated)
>
> Fixes: ee13cb249fab ("powerpc/64s: Add support for software count cache
> flush")
> Cc: sta...@vger.kernel.org # v4.19+
> Signed-off-by: Michael Ellerman
LGTM
Reviewed-by: Michael Neuling
> ---
> arch/powerpc/kernel/security.c | 23 --
On Wed, 2018-11-28 at 11:23 -0200, Breno Leitao wrote:
> A new self test that forces MSR[TS] to be set without calling any TM
> instruction. This test also tries to cause a page fault at a signal
> handler, exactly between MSR[TS] set and tm_recheckpoint(), forcing
> thread->texasr to be rewritten
> Do you mean in this part of code?
>
> SYSCALL_DEFINE0(rt_sigreturn)
> {
>
> if (__copy_from_user(&set, &uc->uc_sigmask, sizeof(set)))
> goto badframe;
>
> ...
> if (MSR_TM_SUSPENDED(mfmsr()))
> tm_reclaim_current(0);
I'm actu
On Mon, 2018-11-19 at 10:44 -0200, Breno Leitao wrote:
> On a signal handler return, the user could set a context with MSR[TS] bits
> set, and these bits would be copied to task regs->msr.
>
> At restore_tm_sigcontexts(), after current task regs->msr[TS] bits are set,
> several __get_user() are ca
On Tue, 2018-11-06 at 10:40 -0200, Breno Leitao wrote:
> Currently the signal context restore code enables the bit on the MSR
> register without restoring the TM SPRs, which can cause undesired side
> effects.
>
> This is not correct because if TM is enabled in MSR, it means the TM SPR
> registers
On Tue, 2018-11-06 at 10:40 -0200, Breno Leitao wrote:
> Since every kernel entrance is calling TM_KERNEL_ENTRY, it is not
> expected to arrive at this point with a suspended transaction.
>
> If that is the case, cause a warning and reclaim the current thread in
> order to avoid a TM Bad Thing.
On Tue, 2018-11-06 at 10:40 -0200, Breno Leitao wrote:
> In the past, TIF_RESTORE_TM was being handled with the rest of the TIF
> workers,
> but, that was too early, and can cause some IRQ to be replayed in suspended
> state (after recheckpoint).
>
> This patch moves TIF_RESTORE_TM handler to as l
On Tue, 2018-11-06 at 10:40 -0200, Breno Leitao wrote:
> __switch_to_tm is the function that switches between two tasks which might
> have TM enabled. This function is clearly split in two parts, the task that
> is leaving the CPU, known as 'prev' and the task that is being scheduled,
> known as 'n
On Tue, 2018-11-06 at 10:40 -0200, Breno Leitao wrote:
> This is the only place we are going to recheckpoint now. Now the task
> needs to have TIF_RESTORE_TM flag set, which will get into
> restore_tm_state() at exception exit path, and execute the recheckpoint
> depending on the MSR.
>
> Every ti
On Tue, 2018-11-06 at 10:40 -0200, Breno Leitao wrote:
> This patch creates a macro that will be invoked on all entrance to the
> kernel, so, in kernel space the transaction will be completely reclaimed
> and not suspended anymore.
>
> This patchset checks if we are coming from PR, if not, skip.
On Tue, 2018-11-06 at 10:40 -0200, Breno Leitao wrote:
> If there is a FP/VEC/Altivec touch inside a transaction and the facility is
> disabled, then a facility unavailable exception is raised and ends up
> calling {fp,vec,vsx}_unavailable_tm, which was reclaiming and
> recheckpointing.
>
> This i
> In fact, I was the one that identified this performance degradation issue,
> and reported to Adhemerval who kindly fixed it with
> f0458cf4f9ff3d870c43b624e6dccaaf657d5e83.
>
> Anyway, I think we are safe here.
FWIW Agreed. PPC_FEATURE2_HTM_NOSC should be preserved by this series.
Mikey
On Tue, 2018-10-02 at 23:35 +0200, Andreas Schwab wrote:
> On Sep 14 2018, Michael Neuling wrote:
>
> > This stops us from doing code patching in init sections after they've
> > been freed.
>
> This breaks booting on PowerBook6,7, crashing very earl
overwriting the latest SPRs (which were
> valid).
>
> This patch checks if TM is enabled for current task before
> saving the SPRs, otherwise, the TM is lazily disabled and the thread
> value is already up-to-date and could be used directly, and saving is
> not required.
Acke
c62b58b3 ("powerpc: Avoid code patching freed init sections")
> Signed-off-by: Christophe Leroy
Thanks
Acked-by: Michael Neuling
The original patch was also marked for stable so we should do the same here.
Cc: sta...@vger.kernel.org # 4.13+
> ---
> arch/powerpc/lib/code-patchin
On Mon, 2018-10-01 at 13:25 +0200, Christophe LEROY wrote:
>
> Le 21/09/2018 à 13:59, Michael Ellerman a écrit :
> > On Fri, 2018-09-14 at 01:14:11 UTC, Michael Neuling wrote:
> > > This stops us from doing code patching in init sections after they've
> > > bee
On Sun, 2018-09-30 at 20:51 -0300, Breno Leitao wrote:
> Hi Mikey,
>
> On 09/28/2018 02:36 AM, Michael Neuling wrote:
> > > > > +	WARN_ON(MSR_TM_SUSPENDED(mfmsr()));
> > > > > +
> > > > > +	tm_enable();
> > > > > +	tm_save_sprs(&(tsk->thread));
> > > >
On Thu, 2018-09-27 at 18:03 -0300, Breno Leitao wrote:
> Hi Mikey,
>
> On 09/18/2018 02:36 AM, Michael Neuling wrote:
> > On Wed, 2018-09-12 at 16:40 -0300, Breno Leitao wrote:
> > > Make sure that we are not suspended on ptrace and that the registers were
On Thu, 2018-09-27 at 17:58 -0300, Breno Leitao wrote:
> Hi Mikey,
>
> On 09/17/2018 10:29 PM, Michael Neuling wrote:
> > On Wed, 2018-09-12 at 16:40 -0300, Breno Leitao wrote:
> > > Now the transaction reclaims happens very earlier in the trap handler, and
> &
On Thu, 2018-09-27 at 17:57 -0300, Breno Leitao wrote:
> Hi Mikey,
>
> On 09/18/2018 03:36 AM, Michael Neuling wrote:
> > On Wed, 2018-09-12 at 16:40 -0300, Breno Leitao wrote:
> > > The Documentation/powerpc/transactional_memory.txt says:
> > >
> > &g
On Thu, 2018-09-27 at 17:52 -0300, Breno Leitao wrote:
> Hi Mikey,
>
> On 09/18/2018 02:50 AM, Michael Neuling wrote:
> > On Wed, 2018-09-12 at 16:40 -0300, Breno Leitao wrote:
> > > Since the transaction will be doomed with treckpt., the TEXASR[FS]
> > > s
On Thu, 2018-09-27 at 17:51 -0300, Breno Leitao wrote:
> Hi Mikey,
>
> On 09/18/2018 02:41 AM, Michael Neuling wrote:
> > On Wed, 2018-09-12 at 16:40 -0300, Breno Leitao wrote:
> > > In the previous TM code, trecheckpoint was being executed in the middle of
> >
The comments in this file don't conform to the coding style so take
them to "Comment Formatting Re-Education Camp"
Suggested-by: Michael "Camp Drill Sergeant" Ellerman
Signed-off-by: Michael Neuling
---
arch/powerpc/kernel/tm.S | 49 +--
On Tue, 2018-09-25 at 22:00 +1000, Michael Ellerman wrote:
> Michael Neuling writes:
> > Current we store the userspace r1 to PACATMSCRATCH before finally
> > saving it to the thread struct.
> >
> > In theory an exception could be taken here (like a machine check
Then copy r1 from the kernel stack to the thread
struct once we have MSR[RI] back on.
Suggested-by: Breno Leitao
Signed-off-by: Michael Neuling
---
arch/powerpc/kernel/tm.S | 8 +++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/kernel/tm.S b/arch/powerpc/kernel/t
On Mon, 2018-09-24 at 11:32 -0300, Breno Leitao wrote:
> Hi Mikey,
>
> On 09/24/2018 04:27 AM, Michael Neuling wrote:
> > When we treclaim we store the userspace checkpointed r13 to a scratch
> > SPR and then later save the scratch SPR to the user thread struct.
> &g
t now contains the userspace r13.
To fix this, we store r13 to the kernel stack (which can't fault)
before we access the user thread struct.
Found by running P8 guest + powervm + disable_1tb_segments + TM. Seen
as a random userspace segfault with r13 looking like a kernel address.
Signed-off-b
On Fri, 2018-09-21 at 22:47 +0530, Gautham R Shenoy wrote:
> Hello Michael,
>
> On Fri, Sep 21, 2018 at 01:02:45PM +1000, Michael Neuling wrote:
> > This doesn't compile for me with:
> >
> > arch/powerpc/kernel/smp.c: In function ‘smp_prepare_cpus’:
> > a