On Wed, 2 May 2018 23:07:26 +1000
Michael Ellerman wrote:
> A CPU that gets stuck with interrupts hard disabled can be difficult to
> debug, as on some platforms we have no way to interrupt the CPU to
> find out what it's doing.
>
> A stop-gap is to have the CPU save its stack pointer (r1) in i
On 05/04/2018 06:12 PM, Ram Pai wrote:
>> That new line boils down to:
>>
>> [ilog2(0)] = "",
>>
>> on x86. It wasn't *obvious* to me that it is OK to do that. The other
>> possibly undefined bits (VM_SOFTDIRTY for instance) #ifdef themselves
>> out of this array.
>>
>> I would
From: Fabio Estevam
Structure platform_driver does not need to set the owner field, as this
will be populated by the driver core.
Generated by scripts/coccinelle/api/platform_no_drv_owner.cocci.
Signed-off-by: Fabio Estevam
---
arch/powerpc/sysdev/cpm_gpio.c | 1 -
1 file changed, 1 deletion(
On Fri, May 04, 2018 at 03:57:33PM -0700, Dave Hansen wrote:
> > diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
> > index 0c9e392..3c7 100644
> > --- a/fs/proc/task_mmu.c
> > +++ b/fs/proc/task_mmu.c
> > @@ -679,6 +679,7 @@ static void show_smap_vma_flags(struct seq_file *m,
> > struct v
> diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
> index 0c9e392..3c7 100644
> --- a/fs/proc/task_mmu.c
> +++ b/fs/proc/task_mmu.c
> @@ -679,6 +679,7 @@ static void show_smap_vma_flags(struct seq_file *m,
> struct vm_area_struct *vma)
> [ilog2(VM_PKEY_BIT1)] = "",
>
On Fri, 2018-05-04 at 20:42 +1000, Michael Ellerman wrote:
> Cédric Le Goater writes:
>
> > This is not the case for the moment, but future releases of pHyp might
> > need to introduce some synchronisation routines under the hood which
> > would make the XIVE hcalls longer to complete.
> >
> > A
Currently the architecture-specific code is expected to
display the protection keys in smap for a given vma.
This can lead to redundant code and possibly to divergent
formats in which the key gets displayed.
This patch changes the implementation. It displays the
pkey only if the archite
Only 4 bits are allocated in the vma flags to hold 16 keys. This is
sufficient on x86. PowerPC supports 32 keys, which needs 5 bits.
Allocate an additional bit.
cc: Dave Hansen
cc: Michael Ellerman
cc: Benjamin Herrenschmidt
cc: Andrew Morton
Reviewed-by: Ingo Molnar
Acked-by: Balbir Singh
Si
VM_PKEY_BITx are defined only if CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS
is enabled. Powerpc also needs these bits. Hence let's define the
VM_PKEY_BITx bits for any architecture that enables
CONFIG_ARCH_HAS_PKEYS.
cc: Michael Ellerman
cc: Benjamin Herrenschmidt
cc: Andrew Morton
Reviewed-by: Dav
This patch series provides arch-neutral enhancements to
enable memory-keys on new architectures, and the corresponding
changes in x86 and powerpc specific code to support that.
a) Provides ability to support up to 32 keys. PowerPC
can handle 32 keys and hence needs this.
b) Arch-neutral co
On Fri, May 04, 2018 at 02:31:10PM -0700, Dave Hansen wrote:
> On 05/04/2018 02:26 PM, Michal Suchánek wrote:
> > If it is not ok to change permissions of pkey 0, is it ok to free it?
>
> It's pretty much never OK to free it on x86 or ppc. But, we're not
> going to put code in to keep userspace fr
On 05/04/2018 02:26 PM, Michal Suchánek wrote:
> If it is not ok to change permissions of pkey 0, is it ok to free it?
It's pretty much never OK to free it on x86 or ppc. But, we're not
going to put code in to keep userspace from shooting itself in the foot,
at least on x86.
On Fri, 4 May 2018 12:22:58 -0700
"Ram Pai" wrote:
> Applications need the ability to associate an address-range with some
> key and later revert to its initial default key. Pkey-0 comes close
> to providing this function but falls short, because the current
> implementation disallows applicati
On Fri, May 04, 2018 at 12:59:27PM -0700, Dave Hansen wrote:
> On 05/04/2018 12:22 PM, Ram Pai wrote:
> > @@ -407,9 +414,6 @@ static bool pkey_access_permitted(int pkey, bool write,
> > bool execute)
> > int pkey_shift;
> > u64 amr;
> >
> > - if (!pkey)
> > - return true;
> >
Disassociate the exec_key from a VMA if the VMA permission is not
PROT_EXEC anymore. Otherwise the exec_only key continues to be
associated with the vma, causing unexpected behavior.
The problem was reported on x86 by Shakeel Butt;
it is also applicable on powerpc.
cc: Shakeel Butt
Reported-
On 05/04/2018 12:22 PM, Ram Pai wrote:
> @@ -407,9 +414,6 @@ static bool pkey_access_permitted(int pkey, bool write,
> bool execute)
> int pkey_shift;
> u64 amr;
>
> - if (!pkey)
> - return true;
> -
> if (!is_pkey_enabled(pkey))
> return true;
Lo
When mprotect(,PROT_EXEC) is called, the kernel allocates an
execute-only pkey and associates the pkey with the given address space.
The permission of this key should not be modifiable from userspace.
However a bug in the current implementation leaves the permissions on the
key modifiable from use
Applications need the ability to associate an address-range with some
key and later revert to its initial default key. Pkey-0 comes close to
providing this function but falls short, because the current
implementation disallows applications to explicitly associate pkey-0 to
the address range.
Lets
This masks the new gcc8 warning
regset.h:270:4: error: 'memcpy' offset [-527, -529] is out
of the bounds [0, 16] of object 'vrsave' with type 'union '
Signed-off-by: Khem Raj
Cc: Benjamin Herrenschmidt
Cc: linuxppc-dev@lists.ozlabs.org
---
arch/powerpc/kernel/Makefile | 1 +
1 file changed, 1
Fixes 'alias between functions of incompatible types' warnings,
which are new with gcc8.
Signed-off-by: Khem Raj
Cc: Benjamin Herrenschmidt
Cc: linuxppc-dev@lists.ozlabs.org
---
arch/powerpc/kernel/Makefile | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/kernel/Ma
These are not local timer interrupts but IPIs. It's good to be able
to see how timer offloading is behaving, so split these out into
their own category.
Signed-off-by: Nicholas Piggin
---
arch/powerpc/include/asm/hardirq.h | 1 +
arch/powerpc/kernel/irq.c | 6 ++
arch/powerpc/kernel
Signed-off-by: Nicholas Piggin
---
arch/powerpc/kernel/smp.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
index 914708eeb43f..28ec1638a540 100644
--- a/arch/powerpc/kernel/smp.c
+++ b/arch/powerpc/kernel/smp.c
@@ -193,7 +193,9 @@ cons
Signed-off-by: Nicholas Piggin
---
arch/powerpc/kernel/smp.c | 8
arch/powerpc/kernel/time.c | 2 ++
2 files changed, 10 insertions(+)
diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
index 5441a47701b1..914708eeb43f 100644
--- a/arch/powerpc/kernel/smp.c
+++ b/arch/p
Large decrementers (e.g., POWER9) can take a very long time to wrap,
so when the timer interrupt handler sets the decrementer to max so as
to avoid taking another decrementer interrupt when hard enabling
interrupts before running timers, it effectively disables the soft
NMI coverage for timer interr
The broadcast tick recipient can call tick_receive_broadcast rather
than re-running the full timer interrupt.
It does not have to check for the next event time, because the sender
already determined the timer has expired. It does not have to test
irq_work_pending, because that's a direct decrement
For SPLPAR, lparcfg provides a sum of PURR registers for all CPUs.
Currently this is done by reading PURR in context switch and timer
interrupt, and storing that into a per-CPU variable. These are summed
to provide the value.
This does not work with all timer schemes (e.g., NO_HZ_FULL), and it
is
These fields are only written to.
Signed-off-by: Nicholas Piggin
---
arch/powerpc/include/asm/processor.h | 4
arch/powerpc/kernel/process.c | 6 +-
2 files changed, 1 insertion(+), 9 deletions(-)
diff --git a/arch/powerpc/include/asm/processor.h
b/arch/powerpc/include/asm/proc
Book3S minimum supported ISA version now requires mtmsrd L=1. This
instruction does not require bits other than RI and EE to be supplied,
so __hard_irq_enable() and __hard_irq_disable() do not have to read
the kernel_msr from paca.
Interrupt entry code already relies on L=1 support.
Signed-off-
When the masked interrupt handler clears MSR[EE] for an interrupt in
the PACA_IRQ_MUST_HARD_MASK set, it does not set PACA_IRQ_HARD_DIS.
This makes them get out of sync.
With that taken into account, it's only low level irq manipulation
(and interrupt entry before reconcile) where they can be out
This check does not catch IRQ soft mask bugs, but this option is
slightly more suitable than TRACE_IRQFLAGS.
Signed-off-by: Nicholas Piggin
---
arch/powerpc/include/asm/plpar_wrappers.h | 8
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/arch/powerpc/include/asm/plpar_wra
irq_work_raise should not cause a decrementer exception unless it is
called from NMI context. Doing so often just results in an immediate
masked decrementer interrupt:
<...>-5509  0d...  4us : update_curr_rt <-dequeue_task_rt
<...>-5509  0d...  5us : dbs_update_util_handler <-update_
These are a bunch of small things I've built up from looking
through code trying to track down some rare irq latency issues.
None of them actually fix any long irq latencies, but they
hopefully make the code a bit neater, get rid of some small
glitches, increase watchdog coverage etc.
Ben spotted
Laurent Dufour writes:
> On 30/04/2018 20:43, Punit Agrawal wrote:
>> Hi Laurent,
>>
>> I am looking to add support for speculative page fault handling to
>> arm64 (effectively porting this patch) and had a few questions.
>> Apologies if I've missed an obvious explanation for my queries. I'm
>>
Michael Ellerman wrote:
From: Al Viro
Signed-off-by: Al Viro
---
arch/powerpc/kernel/pci_32.c | 6 +++---
arch/powerpc/kernel/pci_64.c | 4 ++--
arch/powerpc/mm/subpage-prot.c | 4 +++-
arch/powerpc/platforms/cell/spu_syscalls.c | 3 ++-
4 files changed
On Fri, 2018-05-04 at 14:34 +0200, Christophe Leroy wrote:
>
> commit 1bc54c03117b9 ("powerpc: rework 4xx PTE access and
Some syscall entry functions on powerpc are prefixed with
ppc_/ppc32_/ppc64_ rather than the usual sys_/__se_sys prefix. fork(),
clone(), swapcontext() are some examples of syscalls with such entry
points. We need to match against these names when initializing ftrace
syscall tracing.
Signed-off-by
On powerpc64 ABIv1, we are enabling syscall tracing for only ~20
syscalls. This is due to commit e145242ea0df6 ("syscalls/core,
syscalls/x86: Clean up syscall stub naming convention") which has
changed the syscall entry wrapper prefix from "SyS" to "__se_sys".
Update the logic for ABIv1 to not jus
The "Power Architecture 64-Bit ELF V2 ABI" says in section 2.3.2.3:
[...] There are several rules that must be adhered to in order to ensure
reliable and consistent call chain backtracing:
* Before a function calls any other function, it shall establish its
own stack frame, whose size shall be
Le 04/05/2018 à 13:17, Michael Ellerman a écrit :
kbuild test robot writes:
Hi Christophe,
Thank you for the patch! Yet something to improve:
[auto build test ERROR on powerpc/next]
[also build test ERROR on v4.17-rc2 next-20180426]
[if your patch is applied to the wrong git tree, please dro
DO NOT APPLY THIS ONE, IT IS BUGGY. But comments are welcome.
In 16k pages mode, the 8xx still need only 4k for the page table.
This patch makes use of the pte_fragment functions in order
to avoid wasting memory space
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/mmu-8xx.h
In order to allow the 8xx to handle pte_fragments, this patch
makes them common to PPC32 and PPC64
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/mmu_context.h | 28 ++
arch/powerpc/mm/mmu_context_book3s64.c | 28 --
arch/powerpc/mm/pgtable.c | 67
We can now use SPRN_M_TW in the DAR Fixup code, freeing
SPRN_SPRG_SCRATCH2
Then SPRN_SPRG_SCRATCH2 may be used for something else in the future.
Signed-off-by: Christophe Leroy
---
arch/powerpc/kernel/head_8xx.S | 8
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/arch/po
Each handler must not exceed 64 instructions to fit into the main
exception area.
Following the significant size reduction of TLB handler routines,
the side handlers can be brought back close to the main part.
In the worst case:
Main part of ITLB handler is 45 insn, side part is 9 insn ==> total 5
Today, on the 8xx the TLB handlers do SW tablewalk by doing all
the calculation in ASM, in order to match with the Linux page
table structure.
The 8xx offers hardware assistance which allows significant size
reduction of the TLB handlers, hence also reduces the time spent
in the handlers.
However
commit 1bc54c03117b9 ("powerpc: rework 4xx PTE access and TLB miss")
introduced non atomic PTE updates and started the work of removing
PTE updates in TLB miss handlers, but kept PTE_ATOMIC_UPDATES for the
8xx with the following comment:
/* Until my rework is finished, 8xx still needs atomic PTE up
On the 8xx, the GUARDED attribute of the pages is managed in the
L1 entry, therefore to avoid having to copy it into L1 entry
at each TLB miss, we set it in the PMD.
For this, we split the VM alloc space in two parts, one
for VM alloc and non Guarded IO, and one for Guarded IO.
Signed-off-by: Chr
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/nohash/32/pgtable.h | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/powerpc/include/asm/nohash/32/pgtable.h
b/arch/powerpc/include/asm/nohash/32/pgtable.h
index b413abcd5a09..93dc22dbe964 100644
--- a/arch/
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/book3s/64/pgtable.h | 1 +
arch/powerpc/mm/ioremap.c | 126 +++
2 files changed, 34 insertions(+), 93 deletions(-)
diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h
b/arch/powerpc/i
Today, early ioremap maps upward from IOREMAP_BASE on PPC64
and downward from IOREMAP_TOP on PPC32.
This patch modifies the PPC32 behaviour to match PPC64.
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/book3s/32/pgtable.h | 16 +---
arch/powerpc/inclu
This patch makes __iounmap() common to PPC32 and PPC64.
Signed-off-by: Christophe Leroy
---
arch/powerpc/mm/ioremap.c | 26 ++
1 file changed, 10 insertions(+), 16 deletions(-)
diff --git a/arch/powerpc/mm/ioremap.c b/arch/powerpc/mm/ioremap.c
index 153657db084e..65d611d
__ioremap(), ioremap(), ioremap_wc() and ioremap_prot() are
very similar between PPC32 and PPC64, they can easily be
made common.
_PAGE_WRITE equals to _PAGE_RW on PPC32
_PAGE_RO and _PAGE_HWWRITE are 0 on PPC64
iounmap() can also be made common by renaming the PPC32
iounmap() as __iounmap()
Signe
This patch is the first of a series that intends to make
io mappings common to PPC32 and PPC64.
It moves ioremap/unmap functions into a new file called ioremap.c with
no other modification to the functions.
For the time being, the PPC32 and PPC64 parts get enclosed into #ifdef.
Following patches wi
This reverts commit 4f94b2c7462d9720b2afa7e8e8d4c19446bb31ce.
That commit was buggy, as it used rlwinm instead of rlwimi.
Instead of fixing that bug, we revert the previous commit in order to
reduce the dependency between L1 entries and L2 entries
Signed-off-by: Christophe Leroy
---
arch/powerp
By using IS_ENABLED() we can simplify __set_pte_at() by removing
redundant *ptep = pte
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/nohash/pgtable.h | 23 ---
1 file changed, 8 insertions(+), 15 deletions(-)
diff --git a/arch/powerpc/include/asm/nohash/pgtabl
_PAGE_BUSY is always 0, remove it
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/nohash/64/pgtable.h | 10 +++---
arch/powerpc/include/asm/nohash/pte-book3e.h | 5 -
2 files changed, 3 insertions(+), 12 deletions(-)
diff --git a/arch/powerpc/include/asm/nohash/64/pgtable.
The purpose of this series is to implement hardware assistance for TLB table walk
on the 8xx.
First part is to make L1 entries and L2 entries independent.
For that, we need to alter ioremap functions in order to handle GUARD attribute
at the PGD/PMD level.
Last part is to try and reuse PTE fragmen
When nohash and book3s header were split, some hash related stuff
remained in the nohash header. This patch removes them.
Signed-off-by: Christophe Leroy
---
Removed the call to pte_young() as it fails, back to using PAGE_ACCESSED
directly.
arch/powerpc/include/asm/nohash/32/pgtable.h | 29 ++
On 05/04/2018 01:13 PM, Cédric Le Goater wrote:
> On 05/04/2018 12:41 PM, Michael Ellerman wrote:
>> Cédric Le Goater writes:
>>
>>> The kexec_state KEXEC_STATE_IRQS_OFF barrier is reached by all
>>> secondary CPUs before the kexec_cpu_down() operation is called on
>>> secondaries. This can raise
On 05/04/2018 12:41 PM, Michael Ellerman wrote:
> Cédric Le Goater writes:
>
>> diff --git a/arch/powerpc/sysdev/xive/spapr.c
>> b/arch/powerpc/sysdev/xive/spapr.c
>> index 091f1d0d0af1..7113f5d87952 100644
>> --- a/arch/powerpc/sysdev/xive/spapr.c
>> +++ b/arch/powerpc/sysdev/xive/spapr.c
>> @@
On 05/04/2018 12:42 PM, Michael Ellerman wrote:
> Cédric Le Goater writes:
>
>> The hcall H_INT_RESET should be called to make sure XIVE is fully
>> reset.
>>
>> Signed-off-by: Cédric Le Goater
>> ---
>> arch/powerpc/platforms/pseries/kexec.c | 7 +--
>> 1 file changed, 5 insertions(+), 2
On 05/04/2018 12:42 PM, Michael Ellerman wrote:
> Cédric Le Goater writes:
>
>> This is not the case for the moment, but future releases of pHyp might
>> need to introduce some synchronisation routines under the hood which
>> would make the XIVE hcalls longer to complete.
>>
>> As this was done f
kbuild test robot writes:
> Hi Christophe,
>
> Thank you for the patch! Yet something to improve:
>
> [auto build test ERROR on powerpc/next]
> [also build test ERROR on v4.17-rc2 next-20180426]
> [if your patch is applied to the wrong git tree, please drop us a note to
> help improve the system
On 05/04/2018 12:41 PM, Michael Ellerman wrote:
> Cédric Le Goater writes:
>
>> The kexec_state KEXEC_STATE_IRQS_OFF barrier is reached by all
>> secondary CPUs before the kexec_cpu_down() operation is called on
>> secondaries. This can raise conflicts and provoke errors in the XIVE
>> hcalls wh
Cédric Le Goater writes:
> The hcall H_INT_RESET should be called to make sure XIVE is fully
> reset.
>
> Signed-off-by: Cédric Le Goater
> ---
> arch/powerpc/platforms/pseries/kexec.c | 7 +--
> 1 file changed, 5 insertions(+), 2 deletions(-)
>
> diff --git a/arch/powerpc/platforms/pseri
Cédric Le Goater writes:
> This is not the case for the moment, but future releases of pHyp might
> need to introduce some synchronisation routines under the hood which
> would make the XIVE hcalls longer to complete.
>
> As this was done for H_INT_RESET, let's wrap the other hcalls in a
> loop c
Cédric Le Goater writes:
> diff --git a/arch/powerpc/sysdev/xive/spapr.c
> b/arch/powerpc/sysdev/xive/spapr.c
> index 091f1d0d0af1..7113f5d87952 100644
> --- a/arch/powerpc/sysdev/xive/spapr.c
> +++ b/arch/powerpc/sysdev/xive/spapr.c
> @@ -108,6 +109,52 @@ static void xive_irq_bitmap_free(int ir
Cédric Le Goater writes:
> The kexec_state KEXEC_STATE_IRQS_OFF barrier is reached by all
> secondary CPUs before the kexec_cpu_down() operation is called on
> secondaries. This can raise conflicts and provoke errors in the XIVE
> hcalls when XIVE is shut down with H_INT_RESET on the primary CP
Hi there,
Quick question (I have not investigated the root cause): is support for
seccomp complete on ppc32?
$ make KBUILD_OUTPUT=/tmp/kselftest TARGETS=seccomp kselftest
...
seccomp_bpf.c:1804:TRACE_syscall.ptrace_syscall_dropped:Expected 1 (1)
== syscall(286) (4294967295)
TRACE_syscall.ptrace_sysca
On 03/05/2018 17:42, Minchan Kim wrote:
> On Thu, May 03, 2018 at 02:25:18PM +0200, Laurent Dufour wrote:
>> On 23/04/2018 09:42, Minchan Kim wrote:
>>> On Tue, Apr 17, 2018 at 04:33:18PM +0200, Laurent Dufour wrote:
When handling speculative page fault, the vma->vm_flags and
vma->vm_page
Christophe LEROY writes:
> Le 16/05/2017 à 11:23, Aneesh Kumar K.V a écrit :
>> Signed-off-by: Aneesh Kumar K.V
>> ---
>> arch/powerpc/platforms/Kconfig.cputype | 5 +
>> 1 file changed, 5 insertions(+)
>>
>> diff --git a/arch/powerpc/platforms/Kconfig.cputype
>> b/arch/powerpc/platform
On Wed, 2018-05-02 at 16:36 +1000, Sam Bobroff wrote:
> Add a for_each-style macro for iterating through PEs without the
> boilerplate required by a traversal function. eeh_pe_next() is now
> exported, as it is now used directly in place.
>
> Signed-off-by: Sam Bobroff
Reviewed-by: Russell Curre