Move TRANSACTIONAL_MEM functions out of ptrace.c, into
ptrace-tm.c
Signed-off-by: Christophe Leroy
---
v4: leave asm-prototypes.h
---
arch/powerpc/kernel/ptrace/Makefile | 1 +
arch/powerpc/kernel/ptrace/ptrace-decl.h | 89 +++
arch/powerpc/kernel/ptrace/ptrace-tm.c | 851
Move CONFIG_SPE functions out of ptrace.c, into
ptrace-spe.c
Signed-off-by: Christophe Leroy
---
v5: Added ptrace-decl.h
---
arch/powerpc/kernel/ptrace/Makefile | 1 +
arch/powerpc/kernel/ptrace/ptrace-decl.h | 9
arch/powerpc/kernel/ptrace/ptrace-spe.c | 68
Move ADV_DEBUG_REGS functions out of ptrace.c, into
ptrace-adv.c and ptrace-noadv.c
Signed-off-by: Christophe Leroy
---
v4: Leave hw_breakpoint.h for ptrace.c
---
arch/powerpc/kernel/ptrace/Makefile | 4 +
arch/powerpc/kernel/ptrace/ptrace-adv.c | 468
arch/powerpc
Move CONFIG_ALTIVEC functions out of ptrace.c, into
ptrace-altivec.c
Signed-off-by: Christophe Leroy
---
v4: add missing ptrace_decl.h
v5: that's ptrace-decl.h in fact
---
arch/powerpc/kernel/ptrace/Makefile | 1 +
arch/powerpc/kernel/ptrace/ptrace-altivec.c
Create a dedicated ptrace-view.c file.
Signed-off-by: Christophe Leroy
---
arch/powerpc/kernel/ptrace/Makefile | 4 +-
arch/powerpc/kernel/ptrace/ptrace-decl.h | 43 +
arch/powerpc/kernel/ptrace/ptrace-view.c | 904 +
arch/powerpc/kernel/ptrace/ptrace.c | 966
Create ppc_gethwdinfo() to handle PPC_PTRACE_GETHWDBGINFO and
reduce ifdef mess
Signed-off-by: Christophe Leroy
---
arch/powerpc/kernel/ptrace/ptrace-adv.c | 15 +++
arch/powerpc/kernel/ptrace/ptrace-decl.h | 1 +
arch/powerpc/kernel/ptrace/ptrace-noadv.c | 20 ++
arch
Create ptrace_get_debugreg() to handle PTRACE_GET_DEBUGREG and
reduce ifdef mess
Signed-off-by: Christophe Leroy
---
arch/powerpc/kernel/ptrace/ptrace-adv.c | 9 +
arch/powerpc/kernel/ptrace/ptrace-decl.h | 2 ++
arch/powerpc/kernel/ptrace/ptrace-noadv.c | 13 +
arch
On 28/02/2020 at 06:53, Pingfan Liu wrote:
Since new_property() is used at several call sites, split it out for
reuse.
To ease the review, the split-out part's coding style issue is kept
untouched here and fixed in the next patch.
The moved function fits in one screen. I do
On 28/02/2020 at 06:53, Pingfan Liu wrote:
At present, plpar_hcall(H_SCM_BIND_MEM, ...) takes a very long time,
so dumping to fsdax also takes a very long time.
Take a closer look, during the papr_scm initialization, the only
configuration is through drc_pmem_bind()-> plpar_hcall(H_SC
WANG Wenhu
Reviewed-by: Christophe Leroy
---
arch/powerpc/sysdev/fsl_85xx_cache_sram.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/powerpc/sysdev/fsl_85xx_cache_sram.c
b/arch/powerpc/sysdev/fsl_85xx_cache_sram.c
index f6c665dac725..be3aef4229d7 100644
--- a/arch/powerpc/sysdev/fs
Anshuman Khandual wrote:
On 02/27/2020 04:59 PM, Christophe Leroy wrote:
On 27/02/2020 at 11:33, Anshuman Khandual wrote:
This adds new tests validating arch page table helpers for these following
core memory features. These tests create and test specific mapping types at
various page
On 02/03/2020 at 20:40, Qian Cai wrote:
On Wed, 2020-02-26 at 10:51 -0500, Qian Cai wrote:
On Wed, 2020-02-26 at 15:45 +0100, Christophe Leroy wrote:
On 26/02/2020 at 15:09, Qian Cai wrote:
On Mon, 2020-02-17 at 08:47 +0530, Anshuman Khandual wrote:
This adds tests which will
On 03/03/2020 at 09:56, YueHaibing wrote:
core99_l2_cache/core99_l3_cache need not be marked volatile;
just remove the qualifier.
Signed-off-by: YueHaibing
Reviewed-by: Christophe Leroy
---
v2: remove 'volatile' qualifier
---
arch/powerpc/platforms/powermac/smp.c | 4 ++--
1 file
On 03/03/2020 at 13:59, Michael Ellerman wrote:
We received a report of strange kernel faults which turned out to be
due to a missing KUAP disable in flush_coherent_icache() called
from flush_icache_range().
The fault looks like:
Kernel attempted to access user page (7fffc30d9c00) - exp
On 04/03/2020 at 02:39, Qian Cai wrote:
Below is slightly modified version of your change above and should still
prevent the bug on powerpc. Will it be possible for you to re-test this
? Once confirmed, will send a patch enabling this test on powerpc64
keeping your authorship. Thank you.
On 05/03/2020 at 01:54, Anshuman Khandual wrote:
On 03/04/2020 04:59 PM, Qian Cai wrote:
On Mar 4, 2020, at 1:49 AM, Christophe Leroy wrote:
AFAIU, you are not taking an interrupt here. You are stuck in the pte_update(),
most likely due to nested locks. Try with LOCKDEP?
Not
On 05/03/2020 at 05:47, Qian Cai wrote:
Booting a power9 server with hash MMU could trigger undefined
behaviour because pud_offset(p4d, 0) will do:
0 >> (PAGE_SHIFT:16 + PTE_INDEX_SIZE:8 + H_PMD_INDEX_SIZE:10)
UBSAN: shift-out-of-bounds in arch/powerpc/mm/ptdump/ptdump.c:282:15
shif
of_bounds+0x160/0x21c
walk_pagetables+0x2cc/0x700
walk_pud at arch/powerpc/mm/ptdump/ptdump.c:282
(inlined by) walk_pagetables at arch/powerpc/mm/ptdump/ptdump.c:311
ptdump_check_wx+0x8c/0xf0
mark_rodata_ro+0x48/0x80
kernel_init+0x74/0x194
ret_from_kernel_thread+0x5c/0x74
Suggested
ft_out_of_bounds+0x160/0x21c
walk_pagetables+0x2cc/0x700
walk_pud at arch/powerpc/mm/ptdump/ptdump.c:282
(inlined by) walk_pagetables at arch/powerpc/mm/ptdump/ptdump.c:311
ptdump_check_wx+0x8c/0xf0
mark_rodata_ro+0x48/0x80
kernel_init+0x74/0x194
ret_from_kernel_thread+0x5c/0x74
Suggested-by: Christop
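As a generic illustration of the class of bug UBSAN reports above (this is not the exact kernel expression): the shift-out-of-bounds check fires whenever the shift count is at least the width of the promoted left operand, so a shift count of 34 is only valid on a 64-bit operand.

```c
#include <assert.h>
#include <stdint.h>

/* Shifting a 64-bit operand by 34 is well defined. */
static uint64_t shift_right_wide(uint64_t value, unsigned int count)
{
    /* value is 64-bit, so any count below 64 is valid */
    return value >> count;
}

/* A 32-bit variant like this would be flagged by -fsanitize=shift
 * when count >= 32:
 *
 *     static uint32_t shift_right_narrow(uint32_t v, unsigned int c)
 *     {
 *         return v >> c;      // undefined behaviour for c >= 32
 *     }
 */
```

Widening the operand before shifting is the usual fix when the shift count can legitimately reach or exceed 32.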
At the moment kasan_remap_early_shadow_ro() does nothing, because
k_end is 0 and k_cur < 0 is always false.
Change the test to k_cur != k_end, as done in
kasan_init_shadow_page_tables().
Signed-off-by: Christophe Leroy
Fixes: cbd18991e24f ("powerpc/mm: Fix an Oops in kasan_mmu_init()")
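A minimal sketch (hypothetical types and values, not the kernel loop) of why the comparison matters when the end address wraps to 0 at the top of the address space:

```c
#include <assert.h>
#include <stdint.h>

/* With end == 0, "cur < end" is false on entry and the body never
 * runs; "cur != end" keeps iterating until cur itself wraps to 0. */
static unsigned int count_pages_lt(uint32_t start, uint32_t end, uint32_t step)
{
    unsigned int n = 0;
    for (uint32_t cur = start; cur < end; cur += step)
        n++;
    return n;
}

static unsigned int count_pages_ne(uint32_t start, uint32_t end, uint32_t step)
{
    unsigned int n = 0;
    for (uint32_t cur = start; cur != end; cur += step)
        n++;
    return n;
}
```

For a region from 0xffff0000 up to the wrap point with 4k pages, the `<` form visits nothing while the `!=` form visits all 16 pages.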
managed by KASAN. To make it simple, just create KASAN page tables
for the entire kernel space at kasan_init(). That doesn't use much
more space, and that's anyway already done for hash platforms.
Fixes: 3d4247fcc938 ("powerpc/32: Add support of KASAN_VMALLOC")
Signed-off-
On 07/03/2020 at 01:56, Anshuman Khandual wrote:
On 03/07/2020 06:04 AM, Qian Cai wrote:
On Mar 6, 2020, at 7:03 PM, Anshuman Khandual wrote:
Hmm, the set_pte_at() function is not preferred here for these tests. The idea
is to avoid or at least minimize TLB/cache flushes triggered from th
On 06/03/2020 at 20:05, Nick Desaulniers wrote:
As a heads up, our CI went red last night, seems like a panic from
free_initmem? Is this a known issue?
Thanks for the heads up.
No such issue with either 8xx or book3s/32.
I've now been able to reproduce it with bamboo QEMU.
Reverting 2e
On 07/03/2020 at 09:42, Christophe Leroy wrote:
On 06/03/2020 at 20:05, Nick Desaulniers wrote:
As a heads up, our CI went red last night, seems like a panic from
free_initmem? Is this a known issue?
Thanks for the heads up.
No such issue with either 8xx or book3s/32.
I'v
drop get_pteptr()")
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/pgtable.h | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/include/asm/pgtable.h
b/arch/powerpc/include/asm/pgtable.h
index b80bfd41828d..b1f1d5339735 100644
--- a/arch/powerpc/include
.
Switch _PAGE_USER and _PAGE_PRESENT
Switch _PAGE_RW and _PAGE_HASHPTE
This allows removing a few insns.
Signed-off-by: Christophe Leroy
---
v3: rebased on today's powerpc/merge
v2: rebased on today's powerpc/merge
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/book3s
On 11/03/2020 at 07:14, Balamuruhan S wrote:
The ld instruction has a 14-bit immediate field (DS) concatenated with
0b00 on the right; encode it accordingly.
Fixes: 4ceae137bdab ("powerpc: emulate_step() tests for load/store
instructions")
Reviewed-by: Sandipan Das
Signed-off-by: Balamu
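A sketch of the encoding rule the fix describes (the helper name and field layout are my assumptions based on the DS instruction form): the 16-bit displacement must keep its two low bits as the 0b00 XO, i.e. be masked with 0xfffc rather than 0xffff.

```c
#include <assert.h>
#include <stdint.h>

/* DS-form ld: primary opcode 58, then RT and RA, then the 14-bit DS
 * field concatenated with the 2-bit XO (0b00 for ld) in the low bits.
 * The displacement must therefore be 4-byte aligned. */
static uint32_t encode_ld(uint32_t rt, uint32_t ra, int32_t disp)
{
    /* masking with 0xfffc keeps DS||0b00; 0xffff would corrupt the XO */
    return (58u << 26) | (rt << 21) | (ra << 16) | ((uint32_t)disp & 0xfffc);
}
```

For example, `ld r3, 8(r1)` assembles to 0xe8610008 with this layout.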
On 13/03/2020 at 18:19, WANG Wenhu wrote:
Include "linux/of_address.h" to fix the compile error for
mpc85xx_l2ctlr_of_probe() when compiling fsl_85xx_cache_sram.c.
CC arch/powerpc/sysdev/fsl_85xx_l2ctlr.o
arch/powerpc/sysdev/fsl_85xx_l2ctlr.c: In function ‘mpc85xx_l2ctlr_of_probe’:
On 13/03/2020 at 19:17, 王文虎 wrote:
From: Christophe Leroy
Date: 2020-03-14 01:45:11
To: WANG Wenhu, Benjamin Herrenschmidt, Paul Mackerras,
Michael Ellerman, Richard Fontana, Kate Stewart,
Allison Randal, Thomas Gleixner,
linuxppc-dev@lists.ozlabs.org, linux-ker...@vger.kernel.org
Cc: ker
make[2]: *** [arch/powerpc/sysdev/fsl_85xx_l2ctlr.o] Error 1
Fixes: commit 6db92cc9d07d ("powerpc/85xx: add cache-sram support")
Cc: stable
Signed-off-by: WANG Wenhu
Reviewed-by: Christophe Leroy
In case (k_start & PAGE_MASK) doesn't equal k_start, 'va' will never be
NULL although 'block' is NULL.
Check the return of memblock_alloc() directly instead of
the resulting address in the loop.
Fixes: 509cd3f2b473 ("powerpc/32: Simplify KASAN init")
Sig
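A sketch of the bug class described above (names are hypothetical, and integer arithmetic stands in for pointer arithmetic): testing the derived address misses the allocation failure whenever the offset is non-zero.

```c
#include <assert.h>
#include <stdint.h>

/* Buggy pattern: block may be 0 (allocation failed), but the address
 * derived from it is non-zero whenever k_cur != k_start, so the
 * NULL check on 'va' never fires. */
static int mapping_ok_buggy(uintptr_t block, uintptr_t k_cur, uintptr_t k_start)
{
    uintptr_t va = block + k_cur - k_start;
    return va != 0;                 /* wrong thing to check */
}

/* Fixed pattern: check the allocator's return value itself. */
static int mapping_ok_fixed(uintptr_t block, uintptr_t k_cur, uintptr_t k_start)
{
    (void)k_cur;
    (void)k_start;
    return block != 0;
}
```

The buggy check only happens to work when k_cur equals k_start, which is exactly the alignment condition the commit message points out.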
and associated
initialisation. The overhead of reading page tables is negligible
compared to the reduction of the miss handlers.
While we were at touching pte_update(), some cleanup was done
there too.
Tested widely on 8xx and 832x. Boot tested on QEMU MAC99.
Christophe Leroy (46):
powerpc/kasan: Fi
.org
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/kasan.h | 2 --
arch/powerpc/mm/init_32.c | 2 --
arch/powerpc/mm/kasan/kasan_init_32.c | 4 +++-
3 files changed, 3 insertions(+), 5 deletions(-)
diff --git a/arch/powerpc/include/asm/kasan.h b/arch/powerpc/incl
en going
from 0xf800 to 0xff00.
Signed-off-by: Christophe Leroy
Fixes: cbd18991e24f ("powerpc/mm: Fix an Oops in kasan_mmu_init()")
Cc: sta...@vger.kernel.org
---
arch/powerpc/include/asm/kasan.h | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/arch/powerpc/i
For platforms using shared.c (4xx, Book3e, Book3s/32),
also handle the _PAGE_COHERENT flag which corresponds to the
M bit of the WIMG flags.
Signed-off-by: Christophe Leroy
---
arch/powerpc/mm/ptdump/shared.c | 5 +
1 file changed, 5 insertions(+)
diff --git a/arch/powerpc/mm/ptdump
kasan_remap_early_shadow_ro() and kasan_unmap_early_shadow_vmalloc()
are both updating the early shadow mapping: the first one sets
the mapping read-only while the other clears the mapping.
Refactor and create kasan_update_early_region().
Signed-off-by: Christophe Leroy
---
arch/powerpc/mm
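A toy sketch of the refactor described above (the array, sizes, and PTE values are all hypothetical): one helper parameterised by the new PTE value replaces the two near-identical loops.

```c
#include <assert.h>
#include <stdint.h>

#define NR_EARLY_PTES 8                 /* hypothetical region size */
static uint32_t early_shadow[NR_EARLY_PTES];

/* Single update helper: the remap-read-only path passes a read-only
 * PTE value, the vmalloc-unmap path passes 0 to clear the entries. */
static void update_early_region(unsigned int start, unsigned int end, uint32_t pte)
{
    for (unsigned int i = start; i != end; i++)
        early_shadow[i] = pte;
}
```

Both callers then differ only in the value they pass, which is the point of the refactor.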
needed.
And populate remaining KASAN address space only once performed
the region mapping, to allow 8xx to allocate hugepd instead of
standard page tables for mapping via 8M hugepages.
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/kasan.h | 3 +++
arch/powerpc/mm/kasan
Display the size of areas mapped with BATs.
For that, the size display for pages is refactored.
Signed-off-by: Christophe Leroy
---
arch/powerpc/mm/ptdump/bats.c | 4
arch/powerpc/mm/ptdump/ptdump.c | 23 ++-
arch/powerpc/mm/ptdump/ptdump.h | 2 ++
3 files
Reorder flags in a more logical way:
- Page size (huge) first
- User
- RWX
- Present
- WIMG
- Special
- Dirty and Accessed
Signed-off-by: Christophe Leroy
---
arch/powerpc/mm/ptdump/8xx.c| 30 +++---
arch/powerpc/mm/ptdump/shared.c | 30
hat's not entirely right)
Signed-off-by: Christophe Leroy
---
arch/powerpc/mm/ptdump/8xx.c| 33 ---
arch/powerpc/mm/ptdump/shared.c | 35 +
2 files changed, 35 insertions(+), 33 deletions(-)
diff --git a/arch/powerpc/mm/ptdu
Display BAT flags the same way as page flags: rwx and wimg
Signed-off-by: Christophe Leroy
---
arch/powerpc/mm/ptdump/bats.c | 37 ++-
1 file changed, 15 insertions(+), 22 deletions(-)
diff --git a/arch/powerpc/mm/ptdump/bats.c b/arch/powerpc/mm/ptdump/bats.c
become unneeded and can be removed to
simplify kasan_init_shadow_page_tables()
Also remove inclusion of linux/moduleloader.h and linux/vmalloc.h
which are not needed anymore since the removal of modules management.
Signed-off-by: Christophe Leroy
---
arch/powerpc/mm/kasan/kasan_init_32.c | 19 ++
In order to properly display information regardless of the page size,
it is necessary to take the real page size into account.
Signed-off-by: Christophe Leroy
Fixes: cabe8138b23c ("powerpc: dump as a single line areas mapping a single
physical page.")
Cc: sta...@vger.kernel.org
---
arch/
.
Depending on the number of available IBATs, the last IBATs
might overflow the end of text. Only warn if it crosses
the end of RO data.
Signed-off-by: Christophe Leroy
---
arch/powerpc/mm/book3s32/mmu.c | 6 --
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/arch/powerpc/mm/book3s32
Allocate static page tables for the fixmap area. This allows
setting mappings through page tables before memblock is ready.
That's needed to use early_ioremap() early and to use standard
page mappings with fixmap.
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/fixmap.h
")
Cc: sta...@vger.kernel.org
Signed-off-by: Christophe Leroy
---
arch/powerpc/mm/pgtable_32.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/powerpc/mm/pgtable_32.c b/arch/powerpc/mm/pgtable_32.c
index 9934659cb871..bd0cb6e3573e 100644
--- a/arch/powerpc/mm/pg
Only 40x still uses PTE_ATOMIC_UPDATES.
40x cannot select CONFIG_PTE_64BIT.
Drop handling of PTE_ATOMIC_UPDATES:
- In nohash/64
- In nohash/32 for CONFIG_PTE_64BIT
Keep PTE_ATOMIC_UPDATES only for nohash/32 for !CONFIG_PTE_64BIT
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm
when needed in the future.
Signed-off-by: Christophe Leroy
---
arch/powerpc/mm/ptdump/ptdump.c | 29 ++---
1 file changed, 26 insertions(+), 3 deletions(-)
diff --git a/arch/powerpc/mm/ptdump/ptdump.c b/arch/powerpc/mm/ptdump/ptdump.c
index 64434b66f240..1adaa7e794f3 100644
ng' otherwise.
Refactor pte_update() using pte_basic_t.
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/nohash/32/pgtable.h | 26 +++-
1 file changed, 4 insertions(+), 22 deletions(-)
diff --git a/arch/powerpc/include/asm/nohash/32/pgtable.h
b/arch/power
ng' otherwise.
Refactor pte_update() using pte_basic_t.
While we are at it, drop the comment on 44x which is not applicable
to book3s version of pte_update().
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/book3s/32/pgtable.h | 58 +++-
1 file changed, 20 inse
On PPC32, __ptep_test_and_clear_young() takes the mm->context.id
In preparation for standardising pte_update() params between PPC32 and
PPC64, __ptep_test_and_clear_young() needs mm instead of mm->context.id.
Replace the context param by mm.
Signed-off-by: Christophe Leroy
---
arch/powerpc/i
g a struct of 4 entries.
Those functions are also used for 512k pages which only require one
entry as well, although replicating it four times was harmless as 512k
page entries are spread every 128 bytes in the table.
Signed-off-by: Christophe Leroy
---
.../include/asm/nohash/32/hugetlb-8
value while reading code.
Signed-off-by: Christophe Leroy
---
arch/powerpc/configs/adder875_defconfig | 1 -
arch/powerpc/configs/ep88xc_defconfig| 1 -
arch/powerpc/configs/mpc866_ads_defconfig| 1 -
arch/powerpc/configs/mpc885_ads_defconfig| 1 -
arch/powerpc/configs
pte_update() is a bit special for the 8xx. For the time
being, that's an #ifdef inside the nohash/32 pte_update().
As we are going to make it even more special in the coming
patches, create a dedicated version for pte_update() for 8xx.
Signed-off-by: Christophe Leroy
---
arch/powerpc/in
huge_ptep_get_and_clear().
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/book3s/32/pgtable.h | 15 ---
arch/powerpc/include/asm/hugetlb.h | 4
arch/powerpc/include/asm/nohash/32/pgtable.h | 13 +++--
3 files changed, 15 insertions(+), 17 deletions(-)
diff
: Christophe Leroy
---
arch/powerpc/include/asm/nohash/32/pgtable.h | 13 ++---
arch/powerpc/kernel/head_8xx.S | 15 +--
2 files changed, 15 insertions(+), 13 deletions(-)
diff --git a/arch/powerpc/include/asm/nohash/32/pgtable.h
b/arch/powerpc/include/asm
512k pages are now standard pages, so only 8M pages
are hugepte.
No more handling of normal page tables through hugepd allocation
and freeing, and hugepte helpers can also be simplified.
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/nohash/32/hugetlb-8xx.h | 7 +++
arch
s not defined yet.
In ITLB miss, we keep the possibility to opt it out as when kernel
text is pinned and no user hugepages are used, we can save several
instructions by not using r11.
In DTLB miss, that's just one instruction so it's not worth bothering
with it.
Signed-off-by: Christophe L
As the 8xx now manages 512k pages in standard page tables,
it doesn't need CONFIG_PPC_MM_SLICES anymore.
Don't select it anymore and remove all related code.
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/nohash/32/mmu-8xx.h | 64
arch/powerpc/i
PPC_PIN_TLB options are dedicated to the 8xx, move them into
the 8xx Kconfig.
While we are at it, add some text to explain what it does.
Signed-off-by: Christophe Leroy
---
arch/powerpc/Kconfig | 20 ---
arch/powerpc/platforms/8xx/Kconfig | 41
Only early debug requires IMMR to be mapped early.
No need to set it up and pin it in assembly. Map it
through page tables at udbg init when necessary.
If CONFIG_PIN_TLB_IMMR is selected, pin it once we
don't need the 32 Mb pinned RAM anymore.
Signed-off-by: Christophe Leroy
---
arch/po
Pinned TLBs are not easy to modify when the MMU is enabled.
Create a small function to update a pinned TLB entry with MMU off.
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/nohash/32/mmu-8xx.h | 3 ++
arch/powerpc/kernel/head_8xx.S | 44
2
The code to setup linear and IMMR mapping via huge TLB entries is
not called anymore. Remove it.
Also remove the handling of removed code exits in the perf driver.
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/nohash/32/mmu-8xx.h | 8 +-
arch/powerpc/kernel/head_8xx.S
via standard 4k pages. In the
next patches, linear memory mapping and IMMR mapping will be managed
through huge pages.
Signed-off-by: Christophe Leroy
---
arch/powerpc/kernel/head_8xx.S | 29 +-
arch/powerpc/mm/nohash/8xx.c | 103 +
2 files changed, 3
Now that space has been freed next to the DTLB miss handler,
its associated DTLB perf handling can be brought back to
the same place.
Signed-off-by: Christophe Leroy
---
arch/powerpc/kernel/head_8xx.S | 23 +++
1 file changed, 11 insertions(+), 12 deletions(-)
diff
.
Signed-off-by: Christophe Leroy
---
arch/powerpc/kernel/head_8xx.S | 31 +--
arch/powerpc/mm/nohash/8xx.c | 19 +--
2 files changed, 18 insertions(+), 32 deletions(-)
diff --git a/arch/powerpc/kernel/head_8xx.S b/arch/powerpc/kernel/head_8xx.S
index
Similar to PPC64, accept to map RO data as ROX as a trade-off
between security and memory usage.
Having RO data executable is not a high risk as RO data can't be
modified to forge an exploit.
Signed-off-by: Christophe Leroy
---
arch/powerpc/Kconfig
a 'blt'
Otherwise, do a regular comparison using two instructions.
Signed-off-by: Christophe Leroy
---
arch/powerpc/kernel/head_8xx.S | 22 --
1 file changed, 8 insertions(+), 14 deletions(-)
diff --git a/arch/powerpc/kernel/head_8xx.S b/arch/powerpc/k
visible.
_PAGE_HUGE flag is now displayed by ptdump.
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/hugetlb.h| 2 +
.../include/asm/nohash/32/hugetlb-8xx.h | 5 ++
arch/powerpc/include/asm/pgtable.h| 2 +
arch/powerpc/mm/hugetlbpage.c | 2
Map the IMMR area with a single 512k huge page.
Signed-off-by: Christophe Leroy
---
arch/powerpc/mm/nohash/8xx.c | 8 ++--
1 file changed, 2 insertions(+), 6 deletions(-)
diff --git a/arch/powerpc/mm/nohash/8xx.c b/arch/powerpc/mm/nohash/8xx.c
index 81ddcd9554e1..57e0c7496a6a 100644
--- a
also handle huge TLBs
unless kernel text is pinned.
Signed-off-by: Christophe Leroy
---
arch/powerpc/kernel/head_8xx.S | 4 +--
arch/powerpc/mm/nohash/8xx.c | 50 +-
2 files changed, 51 insertions(+), 3 deletions(-)
diff --git a/arch/powerpc/kernel/head_8xx.S b
: Christophe Leroy
---
arch/powerpc/Kconfig | 8 +---
arch/powerpc/mm/nohash/8xx.c | 32 ++
arch/powerpc/platforms/8xx/Kconfig | 2 +-
3 files changed, 38 insertions(+), 4 deletions(-)
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index
y the alignment
is still tunable.
We also allow tuning of alignment for book3s to limit the complexity
of the test in Kconfig that will anyway disappear in the following
patches once DEBUG_PAGEALLOC is handled together with BATs.
Signed-off-by: Christophe Leroy
---
arch/powerpc/Kc
DEBUG_PAGEALLOC only manages RW data.
Text and RO data can still be mapped with BATs.
In order to map with BATs, also enforce data alignment. Set
by default to 256M which is a good compromise for keeping
enough BATs for also KASAN and IMMR.
Signed-off-by: Christophe Leroy
---
arch/powerpc
Implement a kasan_init_region() dedicated to book3s/32 that
allocates KASAN regions using BATs.
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/kasan.h | 1 +
arch/powerpc/mm/kasan/Makefile| 1 +
arch/powerpc/mm/kasan/book3s_32.c | 57
Implement a kasan_init_region() dedicated to 8xx that
allocates KASAN regions using huge pages.
Signed-off-by: Christophe Leroy
---
arch/powerpc/mm/kasan/8xx.c| 74 ++
arch/powerpc/mm/kasan/Makefile | 1 +
2 files changed, 75 insertions(+)
create mode
On 09/03/2020 at 09:57, Ravi Bangoria wrote:
So far, powerpc Book3S code has been written with an assumption of only
one watchpoint. But future Power architecture is introducing a second
watchpoint register (DAWR). Even though this patchset does not enable
the 2nd DAWR, it makes the infrastructure
On 16/03/2020 at 19:43, Segher Boessenkool wrote:
On Mon, Mar 16, 2020 at 04:05:01PM +0100, Christophe Leroy wrote:
Some book3s (e300 family for instance, I think G2 as well) already have
a DABR2 in addition to DABR.
The original "G2" (meaning 603 and 604) do not have DABR2.
^
Gate hstate_inode() with CONFIG_HUGETLBFS instead of CONFIG_HUGETLB_PAGE.
Reported-by: kbuild test robot
Link: https://patchwork.ozlabs.org/patch/1255548/#2386036
Fixes: a137e1cc6d6e ("hugetlbfs: per mount huge page sizes")
Cc: sta...@vger.kernel.org
Signed-off-by: Christophe Ler
On 17/03/2020 at 09:25, Baoquan He wrote:
On 03/17/20 at 08:04am, Christophe Leroy wrote:
When CONFIG_HUGETLB_PAGE is set but not CONFIG_HUGETLBFS, the
following build failure is encountered:
From the definition of HUGETLB_PAGE, isn't it relying on HUGETLBFS?
I could misunderstan
On 09/03/2020 at 09:57, Ravi Bangoria wrote:
Future Power architecture is introducing second DAWR. Rename current
DAWR macros as:
s/SPRN_DAWR/SPRN_DAWR0/
s/SPRN_DAWRX/SPRN_DAWRX0/
I think you should mention that DAWR0 and DAWRX0 are the real names of the
registers as documented in (at least
On 09/03/2020 at 09:57, Ravi Bangoria wrote:
Future Power architecture is introducing second DAWR. Add SPRN_ macros
for the same.
I'm not sure this is called 'macros'. For me a macro is something more
complex.
For me those are 'constants'.
Christophe
Signed-off-by: Ravi Bangoria
--
On 09/03/2020 at 09:57, Ravi Bangoria wrote:
So far we had only one watchpoint, so we have hardcoded HBP_NUM to 1.
But future Power architecture is introducing a 2nd DAWR and thus the kernel
should be able to dynamically find the actual number of watchpoints
supported by the hw it's running on. Introduce
On 09/03/2020 at 09:57, Ravi Bangoria wrote:
Introduce a new parameter 'nr' to set_dawr() which indicates which DAWR
should be programmed.
While we are at it (In another patch I think), we should do the same to
set_dabr() so that we can use both DABR and DABR2
Christophe
On 09/03/2020 at 09:57, Ravi Bangoria wrote:
Instead of disabling only one watchpoint, get the number of available
watchpoints dynamically and disable all of them.
Signed-off-by: Ravi Bangoria
---
arch/powerpc/include/asm/hw_breakpoint.h | 15 +++
1 file changed, 7 insertions(+),
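A minimal sketch of the change's shape (the slot count, state array, and function name are hypothetical stand-ins for the DAWR machinery): iterate over the reported number of watchpoints instead of touching only slot 0.

```c
#include <assert.h>

#define HBP_NUM_MAX 2                   /* hypothetical upper bound */
static int wp_enabled[HBP_NUM_MAX] = { 1, 1 };

/* Disable every available watchpoint, not just the first one. */
static int hw_breakpoint_disable_all(int nr_watchpoints)
{
    int disabled = 0;
    for (int i = 0; i < nr_watchpoints && i < HBP_NUM_MAX; i++) {
        wp_enabled[i] = 0;              /* stands in for clearing DAWRx */
        disabled++;
    }
    return disabled;
}
```

With the count obtained at runtime, the same loop works unchanged when hardware reports one or two watchpoints.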
On 09/03/2020 at 09:57, Ravi Bangoria wrote:
Instead of disabling only first watchpoint, disable all available
watchpoints while clearing dawr_force_enable.
Signed-off-by: Ravi Bangoria
---
arch/powerpc/kernel/dawr.c | 10 +++---
1 file changed, 7 insertions(+), 3 deletions(-)
diff
On 09/03/2020 at 09:58, Ravi Bangoria wrote:
So far powerpc hw supported only one watchpoint. But future Power
architecture is introducing a 2nd DAWR. Convert thread_struct->hw_brk
into an array.
Looks like you are doing a lot more than that in this patch.
Should this patch be split in t
On 09/03/2020 at 09:58, Ravi Bangoria wrote:
ptrace_bps is already an array of size HBP_NUM_MAX. But we use
hardcoded index 0 while fetching/updating it. Convert such code
to loop over array.
Signed-off-by: Ravi Bangoria
---
arch/powerpc/kernel/hw_breakpoint.c | 7 +--
arch/powerpc
On 09/03/2020 at 09:58, Ravi Bangoria wrote:
Introduce is_ptrace_bp() function and move the check inside the
function. We will utilize it more in later set of patches.
Signed-off-by: Ravi Bangoria
---
arch/powerpc/kernel/hw_breakpoint.c | 7 ++-
1 file changed, 6 insertions(+), 1 de
On 09/03/2020 at 09:58, Ravi Bangoria wrote:
Currently we assume that we have only one watchpoint supported by hw.
Get rid of that assumption and use dynamic loop instead. This should
make supporting more watchpoints very easy.
I think using 'we' is to be avoided in commit messages.
Could
On 09/03/2020 at 09:58, Ravi Bangoria wrote:
ptrace and perf watchpoints on powerpc behave differently. Ptrace
On the 8xx, ptrace generates signal after executing the instruction.
watchpoint works in one-shot mode and generates signal before executing
instruction. It's ptrace user's job
On 09/03/2020 at 09:58, Ravi Bangoria wrote:
Xmon allows overwriting breakpoints because it supports only
one DAWR. But with multiple DAWRs, overwriting becomes ambiguous
or unnecessarily complicated. So let's not allow it.
Could we drop this completely (I mean the functionality, not
On 09/03/2020 at 09:58, Ravi Bangoria wrote:
Add support for 2nd DAWR in xmon. With this, we can have two
simultaneous breakpoints from xmon.
Signed-off-by: Ravi Bangoria
---
arch/powerpc/xmon/xmon.c | 101 ++-
1 file changed, 69 insertions(+), 32 del
On 16/03/2020 at 13:36, Christophe Leroy wrote:
Implement a kasan_init_region() dedicated to book3s/32 that
allocates KASAN regions using BATs.
Signed-off-by: Christophe Leroy
Note that the sparse warning on pmac32_defconfig is definitely a false
positive. See details in patch 16/46
On 16/03/2020 at 13:36, Christophe Leroy wrote:
Allocate static page tables for the fixmap area. This allows
setting mappings through page tables before memblock is ready.
That's needed to use early_ioremap() early and to use standard
page mappings with fixmap.
Signed-off-by: Chris
note to help
improve the system. BTW, we also suggest to use '--base' option to specify the
base tree in git format-patch, please see https://stackoverflow.com/a/37406982]
url:
https://github.com/0day-ci/linux/commits/Christophe-Leroy/Use-hugepages-to-map-kernel-mem-on-8xx/20200317-0
On 17/03/2020 at 17:40, Mike Kravetz wrote:
On 3/17/20 1:43 AM, Christophe Leroy wrote:
On 17/03/2020 at 09:25, Baoquan He wrote:
On 03/17/20 at 08:04am, Christophe Leroy wrote:
When CONFIG_HUGETLB_PAGE is set but not CONFIG_HUGETLBFS, the
following build failure is encountered
On 18/03/2020 at 09:36, Ravi Bangoria wrote:
On 3/17/20 4:07 PM, Christophe Leroy wrote:
On 09/03/2020 at 09:58, Ravi Bangoria wrote:
So far powerpc hw supported only one watchpoint. But future Power
architecture is introducing a 2nd DAWR. Convert thread_struct->hw_brk
into an ar