This series wires up the getrandom() vDSO implementation on powerpc.
Tested on PPC32.
Performance on powerpc 885 (using kernel selftest):
~# ./vdso_test_getrandom bench-single
vdso: 250 times in 7.897495392 seconds
libc: 250 times in 56.091632232 seconds
The getrandom() vDSO implementation requires __get_unaligned_t() and
__put_unaligned_t(), but including asm-generic/unaligned.h pulls in
too many other headers.
Follow the same approach as for most things in include/vdso/,
see for instance commit 8165b57bca21 ("linux/const.h: Extract
common header for vDSO").
Building a VDSO32 on a 64-bit kernel is problematic when some
system headers are included. See commit 8c59ab839f52 ("lib/vdso:
Enable common headers") for more details.
Minimise the number of headers by moving the needed items into
dedicated common headers.
For PAGE_SIZE and PAGE_MASK, redefine them
_vdso_data is specific to x86 and __arch_get_k_vdso_data() is provided
so that all architectures can provide the requested pointer.
Do the same with _vdso_rng_data, provide __arch_get_k_vdso_rng_data()
and don't use x86 _vdso_rng_data directly.
Until now vdso/vsyscall.h was only included by time/
Same as for the gettimeofday CVDSO implementation, add c-getrandom-y
to ease the inclusion of lib/vdso/getrandom.c in architectures'
VDSO builds.
Signed-off-by: Christophe Leroy
---
lib/vdso/Makefile | 1 +
1 file changed, 1 insertion(+)
diff --git a/lib/vdso/Makefile b/lib/vdso/Makefile
index 9f031
Changes in v2:
- rename pte_offset_map_{readonly|maywrite}_nolock() to
pte_offset_map_{ro|rw}_nolock() (LEROY Christophe)
- make pte_offset_map_rw_nolock() not accept NULL parameters
(David Hildenbrand)
- rebase onto the next-20240822
Hi all,
As proposed by David Hildenbrand [1], this
Currently, the usage of pte_offset_map_nolock() can be divided into the
following two cases:
1) After acquiring the PTL, only read-only operations are performed on the
   PTE page. In this case, the RCU lock in pte_offset_map_nolock() will
   ensure that the PTE page will not be freed, and there is no
In do_adjust_pte(), we may modify the pte entry. At this time, the write
lock of mmap_lock is not held, and the pte_same() check is not performed
after the PTL is held. The corresponding pmd entry may have been modified
concurrently. Therefore, in order to ensure the stability of the pmd entry,
use pte_of
Performing SMP atomic operations on u64 fails on powerpc32.
The random driver handles the generation counter as an unsigned long,
not a u64; see for instance base_crng or struct crng.
Use the same type for the vDSO's getrandom, as it gets copied
from the above. This is also in line with the local
current_generation whic
In assert_pte_locked(), we just get the ptl and assert if it was already
held, so convert it to using pte_offset_map_ro_nolock().
Signed-off-by: Qi Zheng
---
arch/powerpc/mm/pgtable.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/powerpc/mm/pgtable.c b/arch/powerpc/mm/
With the current implementation, __cvdso_getrandom_data() calls
memset(), which is unexpected in the VDSO.
Rewrite opaque data initialisation to avoid memset().
Signed-off-by: Christophe Leroy
---
lib/vdso/getrandom.c | 15 ++-
1 file changed, 10 insertions(+), 5 deletions(-)
diff
In filemap_fault_recheck_pte_none(), we just do pte_none() check, so
convert it to using pte_offset_map_ro_nolock().
Signed-off-by: Qi Zheng
---
mm/filemap.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/filemap.c b/mm/filemap.c
index 0f13126b43b08..c98da9af6b9bd 100
In __collapse_huge_page_swapin(), we just use the ptl for the pte_same()
check in do_swap_page(). In other places, we directly use
pte_offset_map_lock(), so convert it to using pte_offset_map_ro_nolock().
Signed-off-by: Qi Zheng
---
mm/khugepaged.c | 6 +-
1 file changed, 5 insertions(+), 1 dele
To support getrandom in the VDSO, which is based on little-endian
storage, add macros equivalent to LWZX_BE and STWX_BE for little-endian
accesses.
Put them outside of the __powerpc64__ #ifdef so that they can also be
used for PPC32.
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/asm-compa
In handle_pte_fault(), we may modify the vmf->pte after acquiring the
vmf->ptl, so convert it to using pte_offset_map_rw_nolock(). But since we
will do the pte_same() check, there is no need to get pmdval to do the
pmd_same() check; just pass a dummy variable to it.
Signed-off-by: Qi Zheng
---
mm
Commit 9651fcedf7b9 ("mm: add MAP_DROPPABLE for designating always
lazily freeable mappings") only adds VM_DROPPABLE for 64-bit
architectures.
In order to also use the getrandom vDSO implementation on powerpc/32,
use VM_ARCH_1 for VM_DROPPABLE on powerpc/32. This is possible because
VM_ARCH_1 is
In collapse_pte_mapped_thp(), we may modify the pte and pmd entry after
acquiring the ptl, so convert it to using pte_offset_map_rw_nolock(). At
this time, the write lock of mmap_lock is not held, and the pte_same()
check is not performed after the PTL is held. So we should get pgt_pmd and
do pmd_same(
Commit 08c18b63d965 ("powerpc/vdso32: Add missing _restgpr_31_x to fix
build failure") added _restgpr_31_x to the vdso for gettimeofday, but
the work on getrandom shows that we will need more of those functions.
Remove _restgpr_31_x and link in crtsavres.o so that we get all
save/restore functions
In copy_pte_range(), we may modify the src_pte entry after holding the
src_ptl, so convert it to using pte_offset_map_rw_nolock(). But since we
already hold the write lock of mmap_lock, there is no need to get pmdval
to do pmd_same() check, just pass a dummy variable to it.
Signed-off-by: Qi Zheng
Architectures use different location for vDSO sources:
arch/mips/vdso
arch/sparc/vdso
arch/arm64/kernel/vdso
arch/riscv/kernel/vdso
arch/csky/kernel/vdso
arch/x86/um/vdso
arch/x86/entry/vdso
arch/powerpc/kernel/vdso
arch/arm/vd
__powerpc__ is also defined on powerpc64 so __powerpc64__ needs to be
checked first.
Fixes: 693f5ca08ca0 ("kselftest: Extend vDSO selftest")
Signed-off-by: Christophe Leroy
---
tools/testing/selftests/vDSO/vdso_config.h | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/to
In move_ptes(), we may modify the new_pte after acquiring the new_ptl, so
convert it to using pte_offset_map_rw_nolock(). But since we already hold
the exclusive mmap_lock, there is no need to get pmdval to do pmd_same()
check, just pass a dummy variable to it.
Signed-off-by: Qi Zheng
---
mm/mre
In the caller of map_pte(), we may modify the pvmw->pte after acquiring
the pvmw->ptl, so convert it to using pte_offset_map_rw_nolock(). At
this time, the write lock of mmap_lock is not held, and the pte_same()
check is not performed after the pvmw->ptl is held, so we should get
pmdval and do pmd_sam
On Thu, Aug 22, 2024 at 06:39:33AM +, LEROY Christophe wrote:
> powerpc has a magic instruction 'dcbz' which clears a full cacheline in
> one go. It is far more efficient than a loop to store zeros, and since
> 2015 memset(0) has been implemented with that instruction (commit
> 5b2a32e80634
To be consistent with other VDSO functions, the function is called
__kernel_getrandom().
The __arch_chacha20_blocks_nostack() function is implemented essentially
with 32-bit operations. It performs 4 QUARTERROUND operations in
parallel. There are enough registers to avoid using the stack:
On input:
In move_pages_pte(), we may modify the dst_pte and src_pte after acquiring
the ptl, so convert it to using pte_offset_map_rw_nolock(). But since we
already do the pte_same() check, there is no need to get pmdval to do
pmd_same() check, just pass a dummy variable to it.
Signed-off-by: Qi Zheng
---
In walk_pte_range(), we may modify the pte entry after holding the ptl, so
convert it to using pte_offset_map_rw_nolock(). At this time, the write
lock of mmap_lock is not held, and the pte_same() check is not performed
after the ptl is held, so we should get pmdval and do pmd_same() check to
ensure t
Don't hard-code x86-specific names, use the vdso_config definitions
to find the correct function matching the architecture.
Add random VDSO function names in names[][]. Remove the #ifdef
CONFIG_VDSO32, having the name there all the time is harmless
and guarantees a steady index for the following strings.
Now that there are no users of pte_offset_map_nolock(), remove it.
Signed-off-by: Qi Zheng
---
Documentation/mm/split_page_table_lock.rst | 3 ---
include/linux/mm.h | 2 --
mm/pgtable-generic.c | 21 -
3 files changed, 26 deletions
In order to avoid duplication when we add new VDSO functionalities
in C, like getrandom, refactor the common CFLAGS.
Signed-off-by: Christophe Leroy
---
arch/powerpc/kernel/vdso/Makefile | 15 +--
1 file changed, 5 insertions(+), 10 deletions(-)
diff --git a/arch/powerpc/kernel/vdso/Mak
In retract_page_tables(), we may modify the pmd entry after acquiring the
pml and ptl, so we should also check whether the pmd entry is stable.
Use pte_offset_map_rw_nolock() + pmd_same() to do this.
Signed-off-by: Qi Zheng
---
mm/khugepaged.c | 17 -
1 file changed, 16 insertion
On powerpc, a call to a VDSO function is not a standard C function
call. Unlike x86, which returns a negated error code in case of an
error, powerpc sets CR[SO] and returns the error code as a
positive value.
So use a macro called VDSO_CALL() which takes a pointer to the
function to call, the number
Building test_vdso_chacha on powerpc leads to following issue:
In file included from /home/chleroy/linux-powerpc/include/linux/limits.h:7,
from
/opt/powerpc64-e5500--glibc--stable-2024.02-1/powerpc64-buildroot-linux-gnu/sysroot/usr/include/bits/local_lim.h:38,
Add the necessary symbolic link and tell the Makefile to build
vdso_test_random for powerpc.
In the Makefile, don't use $(uname_M), which is wrong when cross-building
for powerpc on an x86_64.
Implement the required VDSO_CALL macro to correctly handle errors.
Signed-off-by: Christophe Leroy
---
tools/a
On 19/08/2024 09:03, Yicong Yang wrote:
> On 2024/8/16 23:55, Dietmar Eggemann wrote:
>> On 06/08/2024 10:53, Yicong Yang wrote:
>>> From: Yicong Yang
[...]
>> So the xarray contains one element for each core_id with the information
>> how often the core_id occurs? I assume you have to iterate o
Hi Baoquan,
On Wed, 2024-01-24 at 13:12 +0800, Baoquan He wrote:
> By splitting CRASH_RESERVE and VMCORE_INFO out from CRASH_CORE, cleaning
> up the dependency of FA_DMUMP on CRASH_DUMP, and moving crash codes from
> kexec_core.c to crash_core.c, now we can rearrange CRASH_DUMP to
> depend on KEXE
Commit 6b0e82791bd0 ("powerpc/e500: switch to 64 bits PGD on 85xx
(32 bits)") switched PGD entries to 64 bits, but pgd_val() returns
an unsigned long which is 32 bits on PPC32. This is not a problem
for regular PMD entries because the upper part is always NULL, but
when PMD entries are leaf they co
During the merge of commit 4e991e3c16a3 ("powerpc: add CFUNC assembly
label annotation") a fallback version of the CFUNC macro was added at
the last minute, so it can be used unconditionally.
Fixes: 4e991e3c16a3 ("powerpc: add CFUNC assembly label annotation")
Signed-off-by: Christophe Leroy
---
arch/po
Hi,
Specifically, with local cmpxchg enabled on the p8 powernv platform, on
which the patch enabled the vm_state update path, the ftrace data below
indicates latency at the level of about 4us or 5us for such big
cache-cold operations.
<...>-277787
[008] . 88366.233643: refresh_cpu_vm_s
On 08/22/24 at 09:33am, John Paul Adrian Glaubitz wrote:
> Hi Baoquan,
>
> On Wed, 2024-01-24 at 13:12 +0800, Baoquan He wrote:
> > By splitting CRASH_RESERVE and VMCORE_INFO out from CRASH_CORE, cleaning
> > up the dependency of FA_DMUMP on CRASH_DUMP, and moving crash codes from
> > kexec_core.c
Hi Qi,
kernel test robot noticed the following build warnings:
[auto build test WARNING on akpm-mm/mm-everything]
[also build test WARNING on powerpc/next powerpc/fixes linus/master v6.11-rc4
next-20240822]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when
On 21.08.24 12:03, Qi Zheng wrote:
On 2024/8/21 17:53, David Hildenbrand wrote:
On 21.08.24 11:51, Qi Zheng wrote:
On 2024/8/21 17:41, David Hildenbrand wrote:
On 21.08.24 11:24, Qi Zheng wrote:
On 2024/8/21 17:17, LEROY Christophe wrote:
Le 21/08/2024 à 10:18, Qi Zheng a écrit :
In
Hi Baoquan,
On Thu, 2024-08-22 at 17:17 +0800, Baoquan He wrote:
> > The change to enable CONFIG_CRASH_DUMP by default apparently broke the boot
> > on 32-bit Power Macintosh systems which fail after GRUB with:
> >
> > "Error: You can't boot a kdump kernel from OF!"
> >
> > We may have to tu
On Thu, 2024-08-22 at 09:14 +0200, Christoph Hellwig wrote:
>
> I'd suggest two things:
>
> 1) remove the warning. The use case is perfectly valid and everything
> using uncached memory is already slow, so people will just have to
> deal with it. Maybe offer a trace point instead if pe
The VFIO_EEH_PE_INJECT_ERR ioctl is currently failing on pseries
due to the missing implementation of the err_inject eeh_ops for pseries.
This patch implements pseries_eeh_err_inject() in the pseries eeh_ops,
adding support for injecting MMIO load/store errors for testing from
user space.
The check on PC
Use for_each_child_of_node() to iterate through the device_node; this
makes the code simpler.
Zhang Zekun (2):
powerpc/powermac/pfunc_base: Use helper function
for_each_child_of_node()
powerpc/pseries/dlpar: Use helper function for_each_child_of_node()
arch/powerpc/platforms/powermac/
for_each_child_of_node() can help to iterate through the device_node,
so we don't need an open-coded while loop. No functional change with
this conversion.
Signed-off-by: Zhang Zekun
---
arch/powerpc/platforms/pseries/dlpar.c | 5 +
1 file changed, 1 insertion(+), 4 deletions(-)
diff --git a/arch/p
for_each_child_of_node() can help to iterate through the device_node,
and we don't need to do it manually. No functional change with this
conversion.
Signed-off-by: Zhang Zekun
---
arch/powerpc/platforms/powermac/pfunc_base.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/ar
Hi David,
On 2024/8/22 17:29, David Hildenbrand wrote:
On 21.08.24 12:03, Qi Zheng wrote:
[...]
- vmf->pte = pte_offset_map_nolock(vmf->vma->vm_mm,
vmf->pmd,
- vmf->address, &vmf->ptl);
+ vmf->pte = pte_offset_map_maywrite_nolock(vmf->vma->vm_mm,
+
On 22.08.24 14:17, Qi Zheng wrote:
Hi David,
On 2024/8/22 17:29, David Hildenbrand wrote:
On 21.08.24 12:03, Qi Zheng wrote:
[...]
- vmf->pte = pte_offset_map_nolock(vmf->vma->vm_mm,
vmf->pmd,
- vmf->address, &vmf->ptl);
+ vmf->pte = pte_offset_map_m
On 2024/8/22 20:19, David Hildenbrand wrote:
On 22.08.24 14:17, Qi Zheng wrote:
Hi David,
On 2024/8/22 17:29, David Hildenbrand wrote:
On 21.08.24 12:03, Qi Zheng wrote:
[...]
- vmf->pte = pte_offset_map_nolock(vmf->vma->vm_mm,
vmf->pmd,
- vmf->address,
maple_calibrate_decr() has been removed since
commit 10f7e7c15e6c ("[PATCH] ppc64: consolidate calibrate_decr
implementations"), so its declaration is now useless; remove it.
Signed-off-by: Gaosheng Cui
---
arch/powerpc/platforms/maple/maple.h | 1 -
1 file changed, 1 deletion(-)
diff --git a/arch/
use_cop() and drop_cop() have been removed since
commit 6ff4d3e96652 ("powerpc: Remove old unused icswx based
coprocessor support"), so their declarations are now useless; remove them.
Signed-off-by: Gaosheng Cui
---
arch/powerpc/include/asm/mmu_context.h | 3 ---
1 file changed, 3 deletions(-)
diff --gi
_get_SP() has been removed since
commit 917f0af9e5a9 ("powerpc: Remove arch/ppc and include/asm-ppc"),
so its declaration is now useless; remove it.
Signed-off-by: Gaosheng Cui
---
arch/powerpc/kernel/process.c | 2 --
1 file changed, 2 deletions(-)
diff --git a/arch/powerpc/kernel/process.c b/arch
Remove obsolete declarations for powerpc, thanks!
Gaosheng Cui (4):
powerpc: Remove obsoleted declaration for _get_SP
powerpc: Remove obsoleted declaration for maple_calibrate_decr
powerpc: Remove obsoleted declaration for pas_pci_irq_fixup
powerpc: Remove obsoleted declarations for use_co
pas_pci_irq_fixup() has been removed since
commit 771f7404a9de ("pasemi_mac: Move the IRQ mapping from the
PCI layer to the driver"), so its declaration is now useless; remove it.
Signed-off-by: Gaosheng Cui
---
arch/powerpc/platforms/pasemi/pasemi.h | 1 -
1 file changed, 1 deletion(-)
diff --git
pnv_pci_init_ioda_hub() has been removed since
commit 5ac129cdb50b ("powerpc/powernv/pci: Remove ioda1 support"),
so its declaration is now useless; remove it.
Signed-off-by: Gaosheng Cui
---
arch/powerpc/platforms/powernv/pci.h | 1 -
1 file changed, 1 deletion(-)
diff --git a/arch/powerpc/platfor
Le 22/08/2024 à 15:06, Gaosheng Cui a écrit :
>
> The _get_SP() have been removed since
> commit 917f0af9e5a9 ("powerpc: Remove ar
On Fri, 26 Jul 2024 16:51:10 -0700, Sean Christopherson wrote:
> Put the page reference acquired by gfn_to_pfn_prot() if
> kvm_vm_ioctl_mte_copy_tags() runs into ZONE_DEVICE memory. KVM's less-
> than-stellar heuristics for dealing with pfn-mapped memory means that KVM
> can get a page reference t
On Fri, 26 Jul 2024 16:51:11 -0700, Sean Christopherson wrote:
> Disallow copying MTE tags to guest memory while KVM is dirty logging, as
> writing guest memory without marking the gfn as dirty in the memslot could
> result in userspace failing to migrate the updated page. Ideally (maybe?),
> KVM
Le 22/08/2024 à 10:27, Narayana Murty N a écrit :
>
> VFIO_EEH_PE_INJECT_ERR ioctl is currently failing on pseries
> due to missing
Hi all,
This series implements the Permission Overlay Extension introduced in 2022
VMSA enhancements [1]. It is based on v6.11-rc4.
Changes since v4[2]:
- Added Acks and R-bs, thanks!
- KVM:
- Move POR_EL{0,1} handling inside TCR_EL2 blocks
- Add vi
The new config option specifies how many bits are in each PKEY.
Signed-off-by: Joey Gouly
Cc: Michael Ellerman
Cc: Nicholas Piggin
Cc: Christophe Leroy
Cc: "Aneesh Kumar K.V"
Cc: "Naveen N. Rao"
Cc: linuxppc-dev@lists.ozlabs.org
Acked-by: Michael Ellerman
---
arch/powerpc/Kconfig | 4
The new config option specifies how many bits are in each PKEY.
Signed-off-by: Joey Gouly
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Borislav Petkov
Cc: Dave Hansen
Cc: H. Peter Anvin
Cc: x...@kernel.org
Acked-by: Dave Hansen
---
arch/x86/Kconfig | 4
1 file changed, 4 insertions(+)
dif
Use the new CONFIG_ARCH_PKEY_BITS to simplify setting these bits
for different architectures.
Signed-off-by: Joey Gouly
Cc: Andrew Morton
Cc: linux-fsde...@vger.kernel.org
Cc: linux...@kvack.org
Acked-by: Dave Hansen
Reviewed-by: Anshuman Khandual
---
fs/proc/task_mmu.c | 2 ++
include/linu
Allow EL0 or EL1 to access POR_EL0 without being trapped to EL2.
Signed-off-by: Joey Gouly
Cc: Catalin Marinas
Cc: Will Deacon
Acked-by: Catalin Marinas
Reviewed-by: Anshuman Khandual
---
arch/arm64/include/asm/el2_setup.h | 10 +-
1 file changed, 9 insertions(+), 1 deletion(-)
diff
POR_EL0 is a register that can be modified by userspace directly,
so it must be context switched.
Signed-off-by: Joey Gouly
Cc: Catalin Marinas
Cc: Will Deacon
Reviewed-by: Catalin Marinas
---
arch/arm64/include/asm/cpufeature.h | 6 ++
arch/arm64/include/asm/processor.h | 1 +
arch/ar
This indicates if the system supports POE. This is a CPUCAP_BOOT_CPU_FEATURE
as the boot CPU will enable POE if it has it, so secondary CPUs must also
have this feature.
Signed-off-by: Joey Gouly
Cc: Catalin Marinas
Cc: Will Deacon
Acked-by: Catalin Marinas
Reviewed-by: Anshuman Khandual
---
Define the new system registers that POE introduces and context switch them.
Signed-off-by: Joey Gouly
Cc: Marc Zyngier
Cc: Oliver Upton
Cc: Catalin Marinas
Cc: Will Deacon
Reviewed-by: Marc Zyngier
---
arch/arm64/include/asm/kvm_host.h | 4
arch/arm64/include/asm/vncr_mappin
To allow using newer instructions that current assemblers don't know about,
replace the `at` instruction with the underlying SYS instruction.
Signed-off-by: Joey Gouly
Cc: Marc Zyngier
Cc: Oliver Upton
Cc: Catalin Marinas
Cc: Will Deacon
Reviewed-by: Marc Zyngier
---
arch/arm64/include/asm/
When a PTE is modified, the POIndex must be masked off so that it can be
modified.
Signed-off-by: Joey Gouly
Cc: Catalin Marinas
Cc: Will Deacon
Reviewed-by: Catalin Marinas
Reviewed-by: Anshuman Khandual
---
arch/arm64/include/asm/pgtable.h | 3 ++-
1 file changed, 2 insertions(+), 1 delet
FEAT_ATS1E1A introduces a new instruction: `at s1e1a`.
This is an address translation, without permission checks.
POE allows read permissions to be removed from S1 by the guest. This means
that an `at` instruction could fail, and not get the IPA.
Switch to using `at s1e1a` so that KVM can get th
Add the missing sanitisation of ID_AA64MMFR3_EL1, making sure we
solely expose S1POE and TCRX (we currently don't support anything
else).
[joey: Took Marc's patch for S1PIE, and changed it for S1POE]
Signed-off-by: Marc Zyngier
Signed-off-by: Joey Gouly
---
arch/arm64/kvm/sys_regs.c | 6 +-
Expose a HWCAP and ID_AA64MMFR3_EL1_S1POE to userspace, so they can be used to
check if the CPU supports the feature.
Signed-off-by: Joey Gouly
Cc: Catalin Marinas
Cc: Will Deacon
Reviewed-by: Catalin Marinas
Reviewed-by: Anshuman Khandual
---
Documentation/arch/arm64/elf_hwcaps.rst | 2 ++
VM_PKEY_BIT[012] will use VM_HIGH_ARCH_[012], move the MTE VM flags to
accommodate this.
Signed-off-by: Joey Gouly
Cc: Catalin Marinas
Cc: Will Deacon
Acked-by: Catalin Marinas
---
include/linux/mm.h | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git include/linux/mm.h incl
The 3-bit POIndex is stored in the PTE at bits 60..62.
Signed-off-by: Joey Gouly
Cc: Catalin Marinas
Cc: Will Deacon
Acked-by: Catalin Marinas
---
arch/arm64/include/asm/pgtable-hwdef.h | 10 ++
1 file changed, 10 insertions(+)
diff --git arch/arm64/include/asm/pgtable-hwdef.h
arch/
Modify arch_calc_vm_prot_bits() and vm_get_page_prot() such that the pkey
value is set in the vm_flags and then into the pgprot value.
Signed-off-by: Joey Gouly
Cc: Catalin Marinas
Cc: Will Deacon
---
arch/arm64/include/asm/mman.h | 10 +-
arch/arm64/mm/mmap.c | 11 +++
If a memory fault occurs that is due to an overlay/pkey fault, report that to
userspace with a SEGV_PKUERR.
Signed-off-by: Joey Gouly
Cc: Catalin Marinas
Cc: Will Deacon
Reviewed-by: Catalin Marinas
---
arch/arm64/include/asm/traps.h | 1 +
arch/arm64/kernel/traps.c | 6
arch/arm6
We do not want to take POE into account when clearing the MTE tags.
Signed-off-by: Joey Gouly
Cc: Catalin Marinas
Cc: Will Deacon
Reviewed-by: Catalin Marinas
Reviewed-by: Anshuman Khandual
---
arch/arm64/include/asm/pgtable.h | 11 +++
1 file changed, 7 insertions(+), 4 deletions(-)
d
Implement the PKEYS interface, using the Permission Overlay Extension.
Signed-off-by: Joey Gouly
Cc: Catalin Marinas
Cc: Will Deacon
Reviewed-by: Catalin Marinas
---
arch/arm64/include/asm/mmu.h | 1 +
arch/arm64/include/asm/mmu_context.h | 46 +++-
arch/arm64/include/asm/p
Add PKEY support to signals, by saving and restoring POR_EL0 from the
stackframe.
Signed-off-by: Joey Gouly
Cc: Catalin Marinas
Cc: Will Deacon
Reviewed-by: Mark Brown
Acked-by: Szabolcs Nagy
Reviewed-by: Catalin Marinas
Reviewed-by: Anshuman Khandual
---
arch/arm64/include/uapi/asm/sigco
Add a regset for POE containing POR_EL0.
Signed-off-by: Joey Gouly
Cc: Catalin Marinas
Cc: Will Deacon
Reviewed-by: Mark Brown
Reviewed-by: Catalin Marinas
Reviewed-by: Anshuman Khandual
---
arch/arm64/kernel/ptrace.c | 46 ++
include/uapi/linux/elf.h |
Permission Indirection Extension and Permission Overlay Extension can be
enabled independently.
When PIE is disabled and POE is enabled, the permissions set by POR_EL0 will be
applied on top of the permissions set in the PTE.
When both PIE and POE are enabled, the permissions set by POR_EL0 will
Now that PKEYs support has been implemented, enable it for CPUs that
support S1POE.
Signed-off-by: Joey Gouly
Cc: Catalin Marinas
Cc: Will Deacon
Acked-by: Catalin Marinas
Reviewed-by: Anshuman Khandual
---
arch/arm64/include/asm/pkeys.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
Now that support for POE and Protection Keys has been implemented, add a
config to allow users to actually enable it.
Signed-off-by: Joey Gouly
Cc: Catalin Marinas
Cc: Will Deacon
Reviewed-by: Anshuman Khandual
Acked-by: Catalin Marinas
---
arch/arm64/Kconfig | 23 +++
1
Put this function in the header so that it can be used by other tests, without
needing to link to testcases.c.
This will be used by selftest/mm/protection_keys.c
Signed-off-by: Joey Gouly
Cc: Catalin Marinas
Cc: Will Deacon
Cc: Andrew Morton
Cc: Shuah Khan
Cc: Dave Hansen
Cc: Aneesh Kumar K
arm64's fpregs are not at a constant offset from sigcontext. Since this is
not an important part of the test, don't print the fpregs pointer on arm64.
Signed-off-by: Joey Gouly
Cc: Catalin Marinas
Cc: Will Deacon
Cc: Andrew Morton
Cc: Shuah Khan
Cc: Dave Hansen
Cc: Aneesh Kumar K.V
Acked-by
The encoding of the pkey register differs on arm64 from that on x86/ppc.
On those platforms, a set bit in the register disables a permission; on
arm64, a set bit indicates that the permission is allowed.
This drops two asserts of the form:
assert(read_pkey_reg() <= o
Check that when POE is enabled, the POR_EL0 register is accessible.
Signed-off-by: Joey Gouly
Cc: Catalin Marinas
Cc: Will Deacon
Cc: Mark Brown
Cc: Shuah Khan
Reviewed-by: Mark Brown
---
tools/testing/selftests/arm64/abi/hwcap.c | 14 ++
1 file changed, 14 insertions(+)
diff -
Teach the signal frame parsing about the new POE frame, which avoids a
warning when it is generated.
Signed-off-by: Joey Gouly
Cc: Catalin Marinas
Cc: Will Deacon
Cc: Mark Brown
Cc: Shuah Khan
Reviewed-by: Mark Brown
---
tools/testing/selftests/arm64/signal/testcases/testcases.c | 4
1 file ch
Ensure that we get signal context for POR_EL0 if and only if POE is present
on the system.
Copied from the TPIDR2 test.
Signed-off-by: Joey Gouly
Cc: Catalin Marinas
Cc: Will Deacon
Cc: Mark Brown
Cc: Shuah Khan
Reviewed-by: Mark Brown
Acked-by: Shuah Khan
---
.../testing/selftests/arm64/
Add new system registers:
- POR_EL1
- POR_EL0
Signed-off-by: Joey Gouly
Cc: Catalin Marinas
Cc: Will Deacon
Cc: Marc Zyngier
Cc: Oliver Upton
Cc: Shuah Khan
Reviewed-by: Mark Brown
---
tools/testing/selftests/kvm/aarch64/get-reg-list.c | 14 ++
1 file changed, 14 insertions
On 2024/8/22 22:16, LEROY Christophe wrote:
Le 22/08/2024 à 15:06, Gaosheng Cui a écrit :
The _get_SP() have been removed since
commi
Hi,
Le 02/08/2024 à 04:16, Xiaolei Wang a écrit :
RESERVEDMEM_OF_DECLARE usage has been removed. For
non-powerpc platforms, such as
Le 23/07/2024 à 23:04, Peter Xu a écrit :
>>
>>>
>>> Normally I don't see this as much of a "code churn" category, because it
>>> doesn't change the code itself but only moves things. I personally also
>>> prefer without code churns, but only in the case where there'll be tiny
>>> little functio
Le 21/07/2024 à 01:09, Erhard Furtner a écrit :
>
> On Sat, 29 Jun 2024 15:31:28 +0200
> Erhard Furtner wrote:
>
>> I get a build fail
Le 18/07/2024 à 00:02, Peter Xu a écrit :
> Introduce two more sub-options for PGTABLE_HAS_HUGE_LEAVES:
>
>- PGTABLE_HAS_PMD_LEAVES: set when there can be PMD mappings
>- PGTABLE_HAS_PUD_LEAVES: set when there can be PUD mappings
>
> It will help to identify whether the current build ma
On Thu, Aug 22, 2024 at 05:22:03PM +, LEROY Christophe wrote:
>
>
> Le 18/07/2024 à 00:02, Peter Xu a écrit :
> > Introduce two more sub-options for PGTABLE_HAS_HUGE_LEAVES:
> >
> >- PGTABLE_HAS_PMD_LEAVES: set when there can be PMD mappings
> >- PGTABLE_HAS_PUD_LEAVES: set when ther
On 8/22/24 5:59 AM, Christoph Hellwig wrote:
[...]
>>> The overflow/underflow conditions in pata_macio_qc_prep() should never
>>> happen. But if they do there's no need to kill the system entirely, a
>>> WARN and failing the IO request should be sufficient and might allow the
>>> system to keep ru