On 2021/1/26 12:45, Nicholas Piggin wrote:
> Support huge page vmalloc mappings. Config option HAVE_ARCH_HUGE_VMALLOC
> enables support on architectures that define HAVE_ARCH_HUGE_VMAP and
> support PMD-sized vmap mappings.
>
> vmalloc will attempt to allocate PMD-sized pages if allocating PMD size
> or larger, and fall back to small pages if that was unsuccessful.
Looks good,
Reviewed-by: Christoph Hellwig
On Tue, Jan 26, 2021 at 02:54:02PM +1000, Nicholas Piggin wrote:
> iounmap will remove ptes.
Looks good,
Reviewed-by: Christoph Hellwig
Reviewed-by: Ding Tianhong
On 2021/1/26 12:45, Nicholas Piggin wrote:
> This changes the awkward approach where architectures provide init
> functions to determine which levels they can provide large mappings for,
> to one where the arch is queried for each call.
>
> This removes code and indirection, and allows constant-folding of dead
> code for unsupported levels.
iounmap will remove ptes.
Cc: "Cédric Le Goater"
Cc: linuxppc-dev@lists.ozlabs.org
Signed-off-by: Nicholas Piggin
---
arch/powerpc/sysdev/xive/common.c | 4 ----
1 file changed, 4 deletions(-)
diff --git a/arch/powerpc/sysdev/xive/common.c b/arch/powerpc/sysdev/xive/common.c
index 595310e056f
Cc: linuxppc-dev@lists.ozlabs.org
Signed-off-by: Nicholas Piggin
---
.../admin-guide/kernel-parameters.txt | 2 ++
arch/powerpc/Kconfig | 1 +
arch/powerpc/kernel/module.c | 21 +++
3 files changed, 20 insertions(+), 4 deletions(-)
Support huge page vmalloc mappings. Config option HAVE_ARCH_HUGE_VMALLOC
enables support on architectures that define HAVE_ARCH_HUGE_VMAP and
support PMD-sized vmap mappings.
vmalloc will attempt to allocate PMD-sized pages if allocating PMD size
or larger, and fall back to small pages if that was unsuccessful.
As a side-effect, the order of flush_cache_vmap() and
arch_sync_kernel_mappings() calls is switched, but that now matches
the other callers in this file.
Reviewed-by: Christoph Hellwig
Signed-off-by: Nicholas Piggin
---
mm/vmalloc.c | 16 +++++++++++++---
1 file changed, 13 insertions(+), 3 deletions(-)
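A minimal sketch of that attempt-then-fallback allocation; the helper name is hypothetical, while alloc_pages_node() and the PMD order computation are real kernel interfaces:

#include <linux/gfp.h>
#include <linux/pgtable.h>

/* Hypothetical helper: try one PMD-sized page, fall back to order-0. */
static struct page *vmalloc_try_huge_page(int node, gfp_t gfp)
{
	struct page *page;

	/* Attempt a physically contiguous PMD-sized page first. */
	page = alloc_pages_node(node, gfp | __GFP_NOWARN,
				PMD_SHIFT - PAGE_SHIFT);
	if (page)
		return page;

	/* Fall back to a small page if the huge allocation failed. */
	return alloc_pages_node(node, gfp, 0);
}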
This is a generic kernel virtual memory mapper, not specific to ioremap.
Code is unchanged other than making vmap_range non-static.
Reviewed-by: Christoph Hellwig
Signed-off-by: Nicholas Piggin
---
include/linux/vmalloc.h | 3 +
mm/ioremap.c | 203 -
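For reference, the declaration this exposes reads roughly as follows (my reading of the series; treat the exact signature as illustrative):

/* include/linux/vmalloc.h */
int vmap_range(unsigned long addr, unsigned long end,
	       phys_addr_t phys_addr, pgprot_t prot,
	       unsigned int max_page_shift);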
If an architecture doesn't support a particular page table level as
a huge vmap page size then allow it to skip defining the support
query function.
Suggested-by: Christoph Hellwig
Signed-off-by: Nicholas Piggin
---
arch/arm64/include/asm/vmalloc.h | 7 +++
arch/powerpc/include/asm/vmall
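The usual way to make the query optional is a generic fallback stub that an arch header can override, roughly (a sketch, not the exact hunk):

#ifndef arch_vmap_p4d_supported
static inline bool arch_vmap_p4d_supported(pgprot_t prot) { return false; }
#endif
#ifndef arch_vmap_pud_supported
static inline bool arch_vmap_pud_supported(pgprot_t prot) { return false; }
#endif
#ifndef arch_vmap_pmd_supported
static inline bool arch_vmap_pmd_supported(pgprot_t prot) { return false; }
#endif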
This allows unsupported levels to be constant folded away, and so
p4d_free_pud_page can be removed because it's no longer linked to.
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Borislav Petkov
Cc: x...@kernel.org
Cc: "H. Peter Anvin"
Signed-off-by: Nicholas Piggin
---
arch/x86/include/asm/vmallo
This allows unsupported levels to be constant folded away, and so
p4d_free_pud_page can be removed because it's no longer linked to.
Cc: Catalin Marinas
Cc: Will Deacon
Cc: linux-arm-ker...@lists.infradead.org
Acked-by: Catalin Marinas
Signed-off-by: Nicholas Piggin
---
arch/arm64/include/asm
This allows unsupported levels to be constant folded away, and so
p4d_free_pud_page can be removed because it's no longer linked to.
Cc: linuxppc-dev@lists.ozlabs.org
Acked-by: Michael Ellerman
Signed-off-by: Nicholas Piggin
---
arch/powerpc/include/asm/vmalloc.h | 19 ---
This changes the awkward approach where architectures provide init
functions to determine which levels they can provide large mappings for,
to one where the arch is queried for each call.
This removes code and indirection, and allows constant-folding of dead
code for unsupported levels.
This also adds a prot argument to the arch query. This is unused currently
but could help with some architectures (e.g. some powerpc processors can't
map uncacheable memory with large pages).
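On powerpc the per-call query can then be as simple as this (a sketch; radix_enabled() is the real predicate for the radix MMU):

static inline bool arch_vmap_pmd_supported(pgprot_t prot)
{
	/* Only the radix MMU supports huge vmap mappings. */
	return radix_enabled();
}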
This will be used as a generic kernel virtual mapping function, so
re-name it in preparation.
Signed-off-by: Nicholas Piggin
---
mm/ioremap.c | 64 +++-
1 file changed, 33 insertions(+), 31 deletions(-)
diff --git a/mm/ioremap.c b/mm/ioremap.c
ind
The vmalloc mapper operates on a struct page * array rather than a
linear physical address; re-name it to make this distinction clear.
Reviewed-by: Christoph Hellwig
Signed-off-by: Nicholas Piggin
---
mm/vmalloc.c | 16 ++++++++--------
1 file changed, 8 insertions(+), 8 deletions(-)
diff --gi
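The renamed entry point then reads roughly (signature per my reading of the series):

/* Maps a struct page * array, as opposed to a linear phys_addr_t range. */
int vmap_pages_range(unsigned long addr, unsigned long end,
		     pgprot_t prot, struct page **pages,
		     unsigned int page_shift);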
apply_to_pte_range might mistake a large pte for bad, or treat it as a
page table, resulting in a crash or corruption. Add a test to warn and
return error if large entries are found.
Reviewed-by: Christoph Hellwig
Signed-off-by: Nicholas Piggin
---
mm/memory.c | 66 +
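The added test presumably amounts to this in apply_to_pmd_range() (sketch; pmd_leaf() is the generic large-entry predicate):

/* A large entry where a page table was expected: warn and bail out. */
if (WARN_ON_ONCE(pmd_leaf(*pmd)))
	return -EINVAL;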
vmalloc_to_page returns NULL for addresses mapped by larger pages[*].
Whether or not a vmap is huge depends on the architecture details,
alignments, boot options, etc., which the caller can not be expected
to know. Therefore HUGE_VMAP is a regression for vmalloc_to_page.
This change teaches vmalloc_to_page about larger pages, and returns
the struct page that corresponds to the offset within the large page.
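The PMD-level case sketches out as below; the offset arithmetic is the interesting part (illustrative only, the real change also covers the P4D and PUD levels):

if (pmd_leaf(*pmd))
	/* Return the small page at the right offset inside the huge page. */
	return pmd_page(*pmd) + ((addr & ~PMD_MASK) >> PAGE_SHIFT);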
I think I ended up implementing all Christoph's comments because
they turned out better in the end. Cleanups coming in another
series though.
Thanks,
Nick
Since v10:
- Fixed code style, mostly > 80 columns, tweaked patch titles, etc [thanks Christoph]
- Made huge vmalloc code and data structure compil
From: Stefan Berger
Return error code -ETIMEDOUT rather than '0' when waiting for the
rtce_buf to be set has timed out.
Fixes: d8d74ea3c002 ("tpm: ibmvtpm: Wait for buffer to be set before proceeding")
Reported-by: Hulk Robot
Signed-off-by: Wang Hai
Signed-off-by: Stefan Berger
---
drivers/
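The shape of the fix, sketched from the description (the field names and the cleanup label are assumptions, not the exact driver code):

/* wait_event_timeout() returns 0 on timeout; report that as an error. */
if (!wait_event_timeout(ibmvtpm->crq_queue.wq,
			ibmvtpm->rtce_buf != NULL, HZ)) {
	rc = -ETIMEDOUT;	/* previously 0 was propagated as success */
	goto init_irq_cleanup;
}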
Konrad Rzeszutek Wilk writes:
> On Fri, Jan 08, 2021 at 09:27:01PM -0300, Thiago Jung Bauermann wrote:
>>
>> Ram Pai writes:
>>
>> > On Wed, Dec 23, 2020 at 09:06:01PM -0300, Thiago Jung Bauermann wrote:
>> >>
>> >> Hi Ram,
>> >>
>> >> Thanks for reviewing this patch.
>> >>
>> >> Ram Pai
Mike Rapoport writes:
> On Sat, Jan 23, 2021 at 06:09:11PM -0800, Andrew Morton wrote:
>> On Fri, 22 Jan 2021 01:37:14 -0300 Thiago Jung Bauermann
>> wrote:
>>
>> > Mike Rapoport writes:
>> >
>> > > > Signed-off-by: Roman Gushchin
>> > >
>> > > Reviewed-by: Mike Rapoport
>> >
>> > I've
On 2021-01-21 12:01, Geert Uytterhoeven wrote:
Hi Saravana,
On Thu, Jan 21, 2021 at 1:05 AM Saravana Kannan
wrote:
On Wed, Jan 20, 2021 at 3:53 PM Michael Walle
wrote:
> On 2021-01-20 20:47, Saravana Kannan wrote:
> > On Wed, Jan 20, 2021 at 11:28 AM Michael Walle
> > wrote:
> >>
> >>
On 2021-01-25 19:58, Saravana Kannan wrote:
On Mon, Jan 25, 2021 at 8:50 AM Lorenzo Pieralisi
wrote:
On Wed, Jan 20, 2021 at 08:28:36PM +0100, Michael Walle wrote:
> [RESEND, fat-fingered the buttons of my mail client and converted
> all CCs to BCCs :(]
>
> On 2021-01-20 20:02, Saravana Kannan wrote:
On Wed, Jan 20, 2021 at 08:28:36PM +0100, Michael Walle wrote:
> [RESEND, fat-fingered the buttons of my mail client and converted
> all CCs to BCCs :(]
>
> On 2021-01-20 20:02, Saravana Kannan wrote:
> > On Wed, Jan 20, 2021 at 6:24 AM Rob Herring wrote:
> > >
> > > On Wed, Jan 20, 2021 at 4:
By saving a pointer to thread_info.flags, gcc copies r2
into a non-volatile register.
We know 'current' doesn't change, so avoid that intermediate pointer.
Reduces null_syscall benchmark by 2 cycles (322 => 320 cycles)
On PPC64, gcc seems to know that 'current' is not changing, and it
Combine all tests of regs->msr into a single logical one.
Before the patch:
   0:	81 6a 00 84	lwz     r11,132(r10)
   4:	90 6a 00 88	stw     r3,136(r10)
   8:	69 60 00 02	xori    r0,r11,2
   c:	54 00 ff fe	rlwinm  r0,r0,31,31,31
  10:	0f 00 00 00	twnei   r0,0
  14:
For book3s/64, FULL_REGS() is 'true' at all times, so the test is a no-op.
For the others, non-volatile registers are saved unconditionally,
so the verification is pointless.
Should one fail to do it, it would anyway be caught by the
CHECK_FULL_REGS() in copy_thread() as we have removed the
special version
Only PPC64 has scv. No need to check the 0x7ff0 trap on PPC32.
And ignore the scv parameter in syscall_exit_prepare (saves 14 cycles,
346 => 332 cycles).
Signed-off-by: Christophe Leroy
---
arch/powerpc/kernel/entry_32.S | 1 -
arch/powerpc/kernel/syscall.c | 7 +--
2 files changed, 5 inserti
When r3 is not modified, reload it from regs->orig_r3 to free
volatile registers. This avoids a stack frame for the likely part
of system_call_exception()
Before the patch:
c000b4d4 <system_call_exception>:
c000b4d4:	7c 08 02 a6	mflr    r0
c000b4d8:	94 21 ff e0	stwu    r1,-32(r1)
c000b4dc:	93
system_call_exception() checks MSR_PR and BUGs if a syscall
is issued from kernel mode.
No need to handle it anymore from the ASM entry code.
null_syscall is reduced by 2 cycles (348 => 346 cycles).
Signed-off-by: Christophe Leroy
---
arch/powerpc/kernel/entry_32.S | 30 --
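In C the check is a one-liner at the top of system_call_exception() (sketch):

BUG_ON(!(regs->msr & MSR_PR));	/* syscalls must come from user mode */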
This is a port of the PPC64 C syscall entry/exit logic to PPC32.
Performance-wise on 8xx:
Before: 304 cycles on null_syscall
After: 348 cycles on null_syscall
Signed-off-by: Christophe Leroy
---
arch/powerpc/kernel/entry_32.S | 227 ---
arch/powerpc/kernel/head_32.h
In preparation for porting syscall entry/exit to C, unconditionally
save non-volatile general purpose registers.
Commit 965dd3ad3076 ("powerpc/64/syscall: Remove non-volatile GPR save
optimisation") provides detailed explanation.
This increases null_syscall by 24 cycles on 8xx.
In system_call_exception(), MSR_RI also needs to be checked on 8xx.
Only booke and 40x don't have MSR_RI.
Signed-off-by: Christophe Leroy
---
arch/powerpc/kernel/syscall.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/powerpc/kernel/syscall.c b/arch/powerpc/kernel/sy
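So the condition presumably keys off the platform rather than off PPC64 (a sketch of the shape):

/* booke and 40x have no MSR_RI; assert it everywhere else. */
if (!IS_ENABLED(CONFIG_BOOKE) && !IS_ENABLED(CONFIG_40x))
	BUG_ON(!(regs->msr & MSR_RI));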
Save r3 in regs->orig_r3 in system_call_exception()
Signed-off-by: Christophe Leroy
---
arch/powerpc/kernel/entry_64.S | 1 -
arch/powerpc/kernel/syscall.c | 2 ++
2 files changed, 2 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
index
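The C side is a single assignment in system_call_exception() (sketch; orig_gpr3 is the pt_regs field behind regs->orig_r3):

regs->orig_gpr3 = r3;	/* preserve the first argument for syscall restart */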
Instead of comparing task flags directly with _TIF_32BIT, use
is_compat_task(). The advantage is that it returns 0 on PPC32
although _TIF_32BIT is always set.
Signed-off-by: Christophe Leroy
---
arch/powerpc/kernel/syscall.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/arc
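A sketch of the resulting test; the surrounding compat dispatch (f, r0, r3) is assumed from context, not quoted:

if (unlikely(is_compat_task())) {
	/* On PPC32 this folds to 0 even though _TIF_32BIT is set. */
	f = (void *)compat_sys_call_table[r0];
	r3 &= 0x00000000ffffffffULL;
}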
ifdef out PPC64-specific code to allow building
syscall.c on PPC32.
Modify Makefile to always build syscall.o
Signed-off-by: Christophe Leroy
---
arch/powerpc/kernel/Makefile | 4 ++--
arch/powerpc/kernel/syscall.c | 9 +
2 files changed, 7 insertions(+), 6 deletions(-)
diff --git a/
syscall_64.c will be reused almost as is for PPC32.
Rename it to syscall.c.
Signed-off-by: Christophe Leroy
---
arch/powerpc/kernel/Makefile | 2 +-
arch/powerpc/kernel/{syscall_64.c => syscall.c} | 0
2 files changed, 1 insertion(+), 1 deletion(-)
rename arch/powerpc/kernel/{sy
To allow building syscall_64.c smoothly on PPC32, add a stub version
of irq_soft_mask_return().
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/hw_irq.h | 5 +
1 file changed, 5 insertions(+)
diff --git a/arch/powerpc/include/asm/hw_irq.h b/arch/powerpc/include/asm/hw_irq.h
inde
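The stub is presumably just this (sketch):

#ifndef CONFIG_PPC64
static inline unsigned long irq_soft_mask_return(void)
{
	return 0;	/* PPC32 has no soft interrupt mask */
}
#endif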
In preparation for porting PPC32 to C syscall entry/exit,
rewrite the following helpers as static inline functions and
add support for PPC32 in them:
__hard_irq_enable()
__hard_irq_disable()
__hard_EE_RI_disable()
__hard_RI_enable()
Then use them in PPC32 version of
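A condensed sketch of one such helper in static inline form (the real version also special-cases booke/40x and 8xx):

static inline void __hard_irq_disable(void)
{
	if (IS_ENABLED(CONFIG_PPC64))
		__mtmsrd(local_paca->kernel_msr, 1);
	else
		mtmsr(mfmsr() & ~MSR_EE);
}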
In preparation for porting powerpc32 to C syscall entry/exit,
rename kuap_check_amr() and kuap_get_and_check_amr() to kuap_check()
and kuap_get_and_check(), and move the stub for when CONFIG_PPC_KUAP
is not selected into the generic asm/kup.h.
Signed-off-by: Christophe Leroy
---
arch/powerpc/includ
In preparation for porting PPC32 to C syscall entry/exit,
create C versions of kuap_user_restore(), kuap_kernel_restore(),
kuap_check() and kuap_get_and_check() on book3s/32.
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/book3s/32/kup.h | 33
1 file ch
regs->softe doesn't exist on PPC32.
Add an irq_soft_mask_regs_set_state() helper to set regs->softe.
This helper is a no-op on PPC32.
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/hw_irq.h | 11 +--
1 file changed, 9 insertions(+), 2 deletions(-)
diff --git a/arch/powerpc/in
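Presumably along these lines (sketch):

static inline void irq_soft_mask_regs_set_state(struct pt_regs *regs,
						unsigned long val)
{
#ifdef CONFIG_PPC64
	regs->softe = val;	/* regs->softe only exists on PPC64 */
#endif
}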
In preparation for porting PPC32 to C syscall entry/exit,
create C versions of kuap_user_restore(), kuap_kernel_restore(),
kuap_check() and kuap_get_and_check() on 8xx.
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/nohash/32/kup-8xx.h | 27
1 file changed,
If the code can use a stack in the vmalloc area, it can also use a
stack in linear space.
Simplify the code by removing the old non-VMAP stack handling from the
PPC32 syscall path.
That means the data translation is now re-enabled early in
syscall entry in all cases, not only when using VMAP stacks.
Signed-off-by: Christophe Leroy
This series implements C syscall entry/exit for PPC32. It reuses
the work already done for PPC64.
This series is based on Nick's v6 series "powerpc: interrupt wrappers".
Patch 1 is a bug fix submitted separately but this series depends on it.
Patches 2-4 are an extract from the series "powerpc/32
On 40x and 8xx, kernel text is pinned.
On book3s/32, kernel text is mapped by BATs.
Enable instruction translation at the same time as data translation; it
makes things simpler.
MSR_RI can also be set at the same time because srr0/srr1 are already
saved and r1 is set properly.
On booke, translat
Userspace Execution protection and fast syscall entry were implemented
independently from each other and were both merged in kernel 5.2,
leading to syscall entry missing userspace execution protection.
On syscall entry, execution of user space memory must be
locked in the same way as on exception
Now that we are using rfi instead of mtmsr to reactivate the MMU, it is
possible to reorder instructions and avoid the need to use CTR for
stashing SRR0.
null_syscall on 8xx is reduced by 3 cycles (283 => 280 cycles).
Signed-off-by: Christophe Leroy
---
arch/powerpc/kernel/head_32.h | 22 ++
We currently just percolate the return value from analyze_instr()
to the caller of emulate_step(), especially if it is a -1.
For one particular case (opcode = 4), for instructions that aren't
currently emulated, we return 'should not be single-stepped'
when we should have returned 0, which s
We currently unconditionally try to emulate newer instructions on older
Power versions, which could cause issues. Gate it.
Fixes: 350779a29f11 ("powerpc: Handle most loads and stores in instruction emulation code")
Signed-off-by: Ananth N Mavinakayanahalli
---
[v4] Based on feedback from Paul Mac
From: Christophe Leroy
> Sent: 25 January 2021 09:15
>
> On 24/01/2021 at 09:22, Nicholas Piggin wrote:
> > Support huge page vmalloc mappings. Config option HAVE_ARCH_HUGE_VMALLOC
> > enables support on architectures that define HAVE_ARCH_HUGE_VMAP and
> > support PMD-sized vmap mappings.
> >
On 25/01/2021 at 12:37, Nicholas Piggin wrote:
Excerpts from Christophe Leroy's message of January 25, 2021 7:14 pm:
On 24/01/2021 at 09:22, Nicholas Piggin wrote:
Support huge page vmalloc mappings. Config option HAVE_ARCH_HUGE_VMALLOC
enables support on architectures that define HAVE
Excerpts from Christophe Leroy's message of January 25, 2021 7:14 pm:
>
>
> On 24/01/2021 at 09:22, Nicholas Piggin wrote:
>> Support huge page vmalloc mappings. Config option HAVE_ARCH_HUGE_VMALLOC
>> enables support on architectures that define HAVE_ARCH_HUGE_VMAP and
>> support PMD-sized vm
Excerpts from Christophe Leroy's message of January 25, 2021 6:42 pm:
>
>
> On 24/01/2021 at 09:22, Nicholas Piggin wrote:
>> This allows unsupported levels to be constant folded away, and so
>> p4d_free_pud_page can be removed because it's no longer linked to.
>
> Ah, ok, you did it here. Why
Remove superfluous semicolons after function definitions.
Signed-off-by: Chengyang Fan
---
arch/powerpc/include/asm/book3s/32/mmu-hash.h | 2 +-
arch/powerpc/include/asm/book3s/64/mmu.h | 2 +-
arch/powerpc/include/asm/book3s/64/tlbflush-radix.h | 2 +-
arch/powerpc/include/a
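For clarity, the pattern being cleaned up (a made-up example):

/* before */
static inline void example(void) { };
/* after: a function definition needs no trailing semicolon */
static inline void example(void) { }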
On 22/01/2021 at 13:32, Ganesh Goudar wrote:
Access to per-cpu variables requires translation to be enabled on
pseries machines running in hash MMU mode. Since part of the MCE handler
runs in real mode and part of the MCE handling code is shared between the
ppc platforms pseries and powernv, it become
On 24/01/2021 at 09:22, Nicholas Piggin wrote:
Support huge page vmalloc mappings. Config option HAVE_ARCH_HUGE_VMALLOC
enables support on architectures that define HAVE_ARCH_HUGE_VMAP and
support PMD-sized vmap mappings.
vmalloc will attempt to allocate PMD-sized pages if allocating PMD size
or larger, and fall back to small pages if that was unsuccessful.
On 24/01/2021 at 09:22, Nicholas Piggin wrote:
This allows unsupported levels to be constant folded away, and so
p4d_free_pud_page can be removed because it's no longer linked to.
Ah, ok, you did it here. Why not squash this patch into patch 5 directly?
Cc: linuxppc-dev@lists.ozlabs
On 24/01/2021 at 09:22, Nicholas Piggin wrote:
This changes the awkward approach where architectures provide init
functions to determine which levels they can provide large mappings for,
to one where the arch is queried for each call.
This removes code and indirection, and allows constant-f
On 24/01/2021 at 12:40, Christoph Hellwig wrote:
diff --git a/arch/arm64/include/asm/vmalloc.h b/arch/arm64/include/asm/vmalloc.h
index 2ca708ab9b20..597b40405319 100644
--- a/arch/arm64/include/asm/vmalloc.h
+++ b/arch/arm64/include/asm/vmalloc.h
@@ -1,4 +1,12 @@
#ifndef _ASM_ARM64_VMALLO
Fix the following coccicheck warnings:
./arch/powerpc/kvm/book3s_hv_rm_xics.c:381:3-15: WARNING: Assignment of
0/1 to bool variable.
Reported-by: Abaci Robot
Signed-off-by: Jiapeng Zhong
---
arch/powerpc/kvm/book3s_hv_rm_xics.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git
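That is, the usual coccinelle-suggested style change (illustrative, not the exact line from the file):

bool resend;
resend = true;	/* rather than: resend = 1; */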