Hello,
On Tue, Feb 02, 2021 at 08:15:41PM +1100, Alexey Kardashevskiy wrote:
> Since de78a9c "powerpc: Add a framework for Kernel Userspace Access
> Protection", user access helpers call user_{read|write}_access_{begin|end}
> when user space access is allowed.
>
> 890274c "powerpc/64s: Implement
While sampling for marked events, currently we record the sample only
if the SIAR valid bit of Sampled Instruction Event Register (SIER) is
set. SIAR_VALID bit is used for fetching the instruction address from
Sampled Instruction Address Register(SIAR). But there are some usecases,
where the user i
FYI, this is the updated version:
---
>From 664ca3378deac7530fe8fc15fe73d583ddf2 Mon Sep 17 00:00:00 2001
From: Christoph Hellwig
Date: Wed, 20 Jan 2021 14:58:27 +0100
Subject: module: pass struct find_symbol_args to find_symbol
Simplify the calling convention by passing the find_symbol_args
On Wed, 3 Feb 2021, Christoph Hellwig wrote:
> FYI, this is the updated version:
>
> ---
> >From 664ca3378deac7530fe8fc15fe73d583ddf2 Mon Sep 17 00:00:00 2001
> From: Christoph Hellwig
> Date: Wed, 20 Jan 2021 14:58:27 +0100
> Subject: module: pass struct find_symbol_args to find_symbol
>
>
On 2021/02/03 12:08PM, Sandipan Das wrote:
> The Power ISA says that the fixed-point load and update
> instructions must neither use R0 for the base address (RA)
> nor have the destination (RT) and the base address (RA) as
> the same register. In these cases, the instruction is
> invalid. This appl
Commit 83d116c53058 ("mm: fix double page fault on arm64 if PTE_AF
is cleared") introduced arch_faults_on_old_pte() helper to identify
platforms that don't set page access bit in HW and require a page
fault to set it.
Commit 44bf431b47b4 ("mm/memory.c: Add memory read privilege on page
fault handl
On Thu, Jan 28, 2021 at 07:14:10PM +0100, Christoph Hellwig wrote:
> drm_fb_helper_modinit has a lot of boilerplate for what is not very
> simple functionality. Just open code it in the only caller using
> IS_ENABLED and IS_MODULE, and skip the find_module check as a
> request_module is harmless i
On 03/02/21 3:19 pm, Naveen N. Rao wrote:
> [...]
>
> Wouldn't it be easier to just do the below at the end? Or, am I missing
> something?
>
> diff --git a/arch/powerpc/lib/sstep.c b/arch/powerpc/lib/sstep.c
> index ede093e9623472..a2d726d2a5e9d1 100644
> --- a/arch/powerpc/lib/sstep.c
> +++ b/a
On Wed, Feb 03, 2021 at 11:34:50AM +0100, Daniel Vetter wrote:
> On Thu, Jan 28, 2021 at 07:14:10PM +0100, Christoph Hellwig wrote:
> > drm_fb_helper_modinit has a lot of boilerplate for what is not very
> > simple functionality. Just open code it in the only caller using
> > IS_ENABLED and IS_MOD
On Wed, 13 Jan 2021 21:20:14 +1100, Alexey Kardashevskiy wrote:
> This adds a folder per LIOBN under /sys/kernel/debug/iommu with IOMMU
> table parameters.
>
> This is enabled by CONFIG_IOMMU_DEBUGFS.
Applied to powerpc/next.
[1/1] powerpc/iommu/debug: Add debugfs entries for IOMMU tables
On Sun, 10 May 2020 01:15:59 -0400, Qian Cai wrote:
> It is safe to traverse mm->context.iommu_group_mem_list with either
> mem_list_mutex or the RCU read lock held. Silence a few RCU-list false
> positive warnings and fix a few missing RCU read locks.
>
> arch/powerpc/mm/book3s64/iommu_api.c:330
On Sun, 10 May 2020 01:13:47 -0400, Qian Cai wrote:
> It is unsafe to traverse tbl->it_group_list without the RCU read lock.
>
> WARNING: suspicious RCU usage
> 5.7.0-rc4-next-20200508 #1 Not tainted
> -
> arch/powerpc/platforms/powernv/pci-ioda-tce.c:355 RCU-list t
On Wed, 20 Jan 2021 07:49:13 +0000 (UTC), Christophe Leroy wrote:
> PPC47x_TLBE_SIZE isn't defined for 256k pages, so
> this size of page shall not be selected for 47x.
Applied to powerpc/next.
[1/2] powerpc/47x: Disable 256k page size
https://git.kernel.org/powerpc/c/910a0cb6d259736a0c86e7
On Tue, 19 Jan 2021 07:00:00 +0000 (UTC), Christophe Leroy wrote:
> PPC47x_TLBE_SIZE isn't defined for 256k pages, so
> this size of page shall not be selected for 47x.
Applied to powerpc/next.
[1/1] powerpc/47x: Disable 256k page size
https://git.kernel.org/powerpc/c/910a0cb6d259736a0c86e7
On Tue, 19 Jan 2021 06:36:52 +0000 (UTC), Christophe Leroy wrote:
> book3s/32 kvm is designed with the assumption that
> an FPU is always present.
>
> Force selection of FPU support in the kernel when
> building KVM.
Applied to powerpc/next.
[1/1] powerpc/kvm: Force selection of CONFIG_PPC_FPU
On Wed, 23 Dec 2020 09:38:48 +0000 (UTC), Christophe Leroy wrote:
> Since commit 4ad8622dc548 ("powerpc/8xx: Implement hw_breakpoint"),
> 8xx has breakpoints so there is no reason to opt breakpoint logic
> out of xmon for the 8xx.
Applied to powerpc/next.
[1/1] powerpc/xmon: Enable breakpoints on
On Fri, 18 Dec 2020 06:56:05 +0000 (UTC), Christophe Leroy wrote:
> It is now possible to only build book3s/32 kernel for
> CPUs without hash table.
>
> Opt out hash related code when CONFIG_PPC_BOOK3S_604 is not selected.
Applied to powerpc/next.
[1/1] powerpc/32s: Only build hash code when CON
On Fri, 22 Jan 2021 08:50:29 +0100, Cédric Le Goater wrote:
> The "ibm,arch-vec-5-platform-support" property is a list of pairs of
> bytes representing the options and values supported by the platform
> firmware. At boot time, Linux scans this list and activates the
> available features it recogni
On Sat, 12 Dec 2020 15:27:07 +0100, Cédric Le Goater wrote:
> The VAS device allocates a generic interrupt to handle page faults but
> the IRQ name doesn't show under /proc. This is because it's on
> stack. Allocate the name.
Applied to powerpc/next.
[1/1] powerpc/vas: Fix IRQ name allocation
On Mon, 4 Jan 2021 15:31:43 +0100, Cédric Le Goater wrote:
> Here is an assorted collection of fixes for W=1.
>
> After this series, only a few errors are left, some missing declarations
> in arch/powerpc/kernel/sys_ppc32.c, panic_smp_self_stop() declaration
> and a few of these which I don't kno
On Thu, 28 Jan 2021 16:11:42 +0530, Ganesh Goudar wrote:
> The maximum recursive depth of MCE is 4. Considering the maximum depth
> allowed, reduce the size of the event array to 10 from 100. This saves us
> ~19kB of memory and has no fatal consequences.
Applied to powerpc/next.
[1/2] powerpc/mce: Reduce the si
On Thu, 22 Oct 2020 14:51:19 +0800, Pingfan Liu wrote:
> When CONFIG_IRQ_TIME_ACCOUNTING and CONFIG_VIRT_CPU_ACCOUNTING_GEN are
> enabled, powerpc does not enable "sched_clock_irqtime" and cannot utilize
> irq time accounting.
>
> Like x86, powerpc does not use the sched_clock_register() interface. So it
> n
On Mon, 28 Dec 2020 14:22:04 +0530, Kajol Jain wrote:
> hv_24x7 performance monitoring unit creates list of supported events
> from the event catalog obtained via HCALL. hv_24x7 catalog could also
> contain invalid or dummy events with names like RESERVED*.
> These events do not have any hardware
On Tue, 27 Aug 2019 10:23:29 +0200, Markus Elfring wrote:
> Two update suggestions were taken into account
> from static source code analysis.
>
> Markus Elfring (2):
> Delete an unnecessary of_node_put() call
> Use common error handling code
>
> [...]
Applied to powerpc/next.
[1/2] powerpc
On Tue, 27 Aug 2019 14:40:42 +0200, Markus Elfring wrote:
> Two update suggestions were taken into account
> from static source code analysis.
>
> Markus Elfring (2):
> Delete an unnecessary kfree() call
> Delete an error message for a failed string duplication
>
> [...]
Applied to powerpc/n
On Thu, 10 Dec 2020 15:35:38 +0100, Markus Elfring wrote:
> A local variable was used only within an if branch.
> Thus move the definition for the variable "mm" into the corresponding
> code block.
>
> This issue was detected by using the Coccinelle software.
Applied to powerpc/next.
[1/1] c
On Tue, 2 Jul 2019 14:56:46 +0200, Markus Elfring wrote:
> A bit of information should be put into a sequence.
> Thus improve the execution speed for this data output by better usage
> of corresponding functions.
>
> This issue was detected by using the Coccinelle software.
Applied to powerpc/nex
On Wed, 20 Jan 2021 14:28:38 +0100, Michal Suchanek wrote:
> ./arch/powerpc/include/asm/paravirt.h:83:44: error: implicit declaration
> of function 'smp_processor_id'; did you mean 'raw_smp_processor_id'?
>
> smp_processor_id is defined in linux/smp.h but it is not included.
>
> The build error h
On Thu, 17 Dec 2020 11:53:06 +1100, Michael Ellerman wrote:
> In commit 8150a153c013 ("powerpc/64s: Use early_mmu_has_feature() in
> set_kuap()") we switched the KUAP code to use early_mmu_has_feature(),
> to avoid a bug where we called set_kuap() before feature patching had
> been done, leading to
On Mon, 18 Jan 2021 22:34:51 +1000, Nicholas Piggin wrote:
> Queued spinlocks have shown to have good performance and fairness
> properties even on smaller (2 socket) POWER systems. This selects
> them automatically for 64s. For other platforms they are de-selected,
> the standard spinlock is far s
On Tue, 3 Nov 2020 16:15:11 +1100, Oliver O'Halloran wrote:
> Pull the string -> pci_dev lookup stuff into a helper function. No functional
> change.
Applied to powerpc/next.
[1/2] powerpc/eeh: Rework pci_dev lookup in debugfs attributes
https://git.kernel.org/powerpc/c/b5e904b83067bbbd7dc
On Tue 2021-02-02 13:13:26, Christoph Hellwig wrote:
> Require an explicit call to module_kallsyms_on_each_symbol to look
> for symbols in modules instead of the call from kallsyms_on_each_symbol,
> and acquire module_mutex inside of module_kallsyms_on_each_symbol instead
> of leaving that up to th
On Wed, 2 Sep 2020 13:51:21 +1000, Oliver O'Halloran wrote:
> Nothing uses it.
Applied to powerpc/next.
[1/1] powerpc/pci: Delete traverse_pci_dn()
https://git.kernel.org/powerpc/c/7bd2b120f3fdf8e5c6d9a343517a33c2a5108794
cheers
On Tue, 3 Nov 2020 15:45:01 +1100, Oliver O'Halloran wrote:
> Hoist some of the useful test environment checking and prep code into
> eeh-functions.sh so they can be reused in other tests.
Applied to powerpc/next.
[1/3] selftests/powerpc: Hoist helper code out of eeh-basic
https://git.kerne
On Mon, 28 Dec 2020 12:34:59 +0800, Po-Hsu Lin wrote:
> The == operator is a bash extension, thus this will fail on Ubuntu with
>
> As the /bin/sh on Ubuntu points to DASH.
>
> Use -eq to fix this posix compatibility issue.
Applied to powerpc/next.
[1/1] selftests/powerpc: Make the test chec
On Thu, 24 Dec 2020 21:24:46 +0800, Zheng Yongjun wrote:
> mutex lock can be initialized automatically with DEFINE_MUTEX()
> rather than explicitly calling mutex_init().
Applied to powerpc/next.
[1/1] ocxl: use DEFINE_MUTEX() for mutex lock
https://git.kernel.org/powerpc/c/52f6b0a90bcf573ba
On Thu, 24 Dec 2020 02:11:41 +0900, Masahiro Yamada wrote:
> vgettimeofday.o is unnecessarily rebuilt. Adding it to 'targets' is not
> enough to fix the issue. Kbuild is correctly rebuilding it because the
> command line is changed.
>
> PowerPC builds each vdso directory twice; first in vdso_prepa
On Sat, 23 Jan 2021 16:12:44 +1000, Nicholas Piggin wrote:
> When an asynchronous interrupt calls irq_exit, it checks for softirqs
> that may have been created, and runs them. Running softirqs enables
> local irqs, which can replay pending interrupts causing recursion in
> replay_soft_interrupts. T
On Fri, 29 Jan 2021 12:47:45 +0530, Ravi Bangoria wrote:
> Compiling kernel with -Warray-bounds throws below warning:
>
> In function 'emulate_vsx_store':
> warning: array subscript is above array bounds [-Warray-bounds]
> buf.d[2] = byterev_8(reg->d[1]);
> ~^~~
> buf.d[3] = byterev_
On Mon, 1 Feb 2021 17:05:05 -0300, Raoni Fassina Firmino wrote:
> Tested on powerpc64 and powerpc64le, with a glibc build and running the
> affected glibc's testcase[2], inspected that glibc's backtrace() now gives
> the correct result and gdb backtrace also keeps working as before.
>
> I believe
Hi Christophe,
>> select HAVE_ARCH_HUGE_VMAP if PPC_BOOK3S_64 &&
>> PPC_RADIX_MMU
>> select HAVE_ARCH_JUMP_LABEL
>> select HAVE_ARCH_KASAN if PPC32 && PPC_PAGE_SHIFT <= 14
>> -select HAVE_ARCH_KASAN_VMALLOC if PPC32 && PPC_PAGE_SHIFT <= 14
Building on the work of Christophe, Aneesh and Balbir, I've ported
KASAN to 64-bit Book3S kernels running on the Radix MMU.
v10 rebases on top of next-20210125, fixing things up to work on top
of the latest changes, and fixing some review comments from
Christophe. I have tested host and guest with
For annoying architectural reasons, it's very difficult to support inline
instrumentation on powerpc64.
Add a Kconfig flag to allow an arch to disable inline. (It's a bit
annoying to be 'backwards', but I'm not aware of any way to have
an arch force a symbol to be 'n', rather than 'y'.)
We also d
powerpc has a variable number of PTRS_PER_*, set at runtime based
on the MMU that the kernel is booted under.
This means the PTRS_PER_* are no longer constants, and therefore
breaks the build.
Define default MAX_PTRS_PER_*s in the same style as MAX_PTRS_PER_P4D.
As KASAN is the only user at the m
Allow architectures to define a kasan_arch_is_ready() hook that bails
out of any function that's about to touch the shadow unless the arch
says that it is ready for the memory to be accessed. This is fairly
non-invasive and should have a negligible performance penalty.
This will only work in outline
KASAN is supported on 32-bit powerpc and the docs should reflect this.
Document s390 support while we're at it.
Suggested-by: Christophe Leroy
Reviewed-by: Christophe Leroy
Signed-off-by: Daniel Axtens
---
Documentation/dev-tools/kasan.rst | 7 +--
Documentation/powerpc/kasan.txt | 12
kasan is already implied by the directory name, so we don't need to
repeat it.
Suggested-by: Christophe Leroy
Signed-off-by: Daniel Axtens
---
arch/powerpc/mm/kasan/Makefile | 2 +-
arch/powerpc/mm/kasan/{kasan_init_32.c => init_32.c} | 0
2 files changed, 1 insertion(+), 1 d
Implement a limited form of KASAN for Book3S 64-bit machines running under
the Radix MMU, supporting only outline mode.
- Enable the compiler instrumentation to check addresses and maintain the
shadow region. (This is the guts of KASAN which we can easily reuse.)
- Require kasan-vmalloc supp
On 03/02/2021 at 12:59, Daniel Axtens wrote:
Implement a limited form of KASAN for Book3S 64-bit machines running under
the Radix MMU, supporting only outline mode.
diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
index a66f435dabbf..9a6fd603f0e7 100644
--- a/ar
Christophe Leroy writes:
> On 03/02/2021 at 12:59, Daniel Axtens wrote:
>> Implement a limited form of KASAN for Book3S 64-bit machines running under
>> the Radix MMU, supporting only outline mode.
>>
>
>> diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
>> index a66f
On Tue, Feb 02, 2021 at 04:02:36PM +0530, Athira Rajeev wrote:
>
>
> On 18-Jan-2021, at 3:51 PM, kajoljain wrote:
>
>
>
> On 1/12/21 3:08 PM, Jiri Olsa wrote:
>
> On Mon, Dec 28, 2020 at 09:14:14PM -0500, Athira Rajeev wrote:
>
> SNIP
>
>
> c
On Wed, Feb 03, 2021 at 01:55:37AM -0500, Athira Rajeev wrote:
> To enable presenting of Performance Monitor Counter Registers
> (PMC1 to PMC6) as part of extended registers, patch adds these
> to sample_reg_mask in the tool side (to use with -I? option).
>
> Simplified the PERF_REG_PMU_MASK_30
On 25/02/2020 at 18:35, Nicholas Piggin wrote:
Implement the bulk of interrupt return logic in C. The asm return code
must handle a few cases: restoring full GPRs, and emulating stack store.
+notrace unsigned long interrupt_exit_kernel_prepare(struct pt_regs *regs,
unsigned long msr)
+
On 1/22/21 2:31 PM, Thiago Jung Bauermann wrote:
Lakshmi Ramasubramanian writes:
IMA allocates kernel virtual memory to carry forward the measurement
list, from the current kernel to the next kernel on kexec system call,
in ima_add_kexec_buffer() function. This buffer is not freed before
com
On 1/22/21 2:30 PM, Thiago Jung Bauermann wrote:
Hi Lakshmi,
Lakshmi Ramasubramanian writes:
IMA allocates kernel virtual memory to carry forward the measurement
list, from the current kernel to the next kernel on kexec system call,
in ima_add_kexec_buffer() function. In error code paths th
Reuse the "safe" implementation from signal.c except for calling
unsafe_copy_from_user() to copy into a local buffer.
Signed-off-by: Christopher M. Riedl
---
arch/powerpc/kernel/signal.h | 30 ++
1 file changed, 30 insertions(+)
diff --git a/arch/powerpc/kernel/signa
Rework the messy ifdef breaking up the if-else for TM similar to
commit f1cf4f93de2f ("powerpc/signal32: Remove ifdefery in middle of if/else").
Unlike that commit for ppc32, the ifdef can't be removed entirely since
uc_transact in sigframe depends on CONFIG_PPC_TRANSACTIONAL_MEM.
Signed-off-by:
Just wrap __copy_tofrom_user() for the usual 'unsafe' pattern which
takes in a label to goto on error.
Signed-off-by: Christopher M. Riedl
---
arch/powerpc/include/asm/uaccess.h | 3 +++
1 file changed, 3 insertions(+)
diff --git a/arch/powerpc/include/asm/uaccess.h
b/arch/powerpc/include/asm/
From: Daniel Axtens
Add uaccess blocks and use the 'unsafe' versions of functions doing user
access where possible to reduce the number of times uaccess has to be
opened/closed.
There is no 'unsafe' version of copy_siginfo_to_user, so move it
slightly to allow for a "longer" uaccess block.
Sign
Previously setup_sigcontext() performed a costly KUAP switch on every
uaccess operation. These repeated uaccess switches cause a significant
drop in signal handling performance.
Rewrite setup_sigcontext() to assume that a userspace write access window
is open. Replace all uaccess functions with th
From: Daniel Axtens
Add uaccess blocks and use the 'unsafe' versions of functions doing user
access where possible to reduce the number of times uaccess has to be
opened/closed.
Signed-off-by: Daniel Axtens
Co-developed-by: Christopher M. Riedl
Signed-off-by: Christopher M. Riedl
---
arch/po
There are non-inline functions which get called in setup_sigcontext() to
save register state to the thread struct. Move these functions into a
separate prepare_setup_sigcontext() function so that
setup_sigcontext() can be refactored later into an "unsafe" version
which assumes an open uaccess windo
Usually sigset_t is exactly 8B which is a "trivial" size and does not
warrant using __copy_from_user(). Use __get_user() directly in
anticipation of future work to remove the trivial size optimizations
from __copy_from_user(). Calling __get_user() also results in a small
boost to signal handling th
Unlike the other MSR_TM_* macros, MSR_TM_ACTIVE does not reference or
use its parameter unless CONFIG_PPC_TRANSACTIONAL_MEM is defined. This
causes an 'unused variable' compile warning unless the variable is also
guarded with CONFIG_PPC_TRANSACTIONAL_MEM.
Reference but do nothing with the argument
Previously restore_sigcontext() performed a costly KUAP switch on every
uaccess operation. These repeated uaccess switches cause a significant
drop in signal handling performance.
Rewrite restore_sigcontext() to assume that a userspace read access
window is open. Replace all uaccess functions with
As reported by Anton, there is a large penalty to signal handling
performance on radix systems using KUAP. The signal handling code
performs many user access operations, each of which needs to switch the
KUAP permissions bit to open and then close user access. This involves a
costly 'mtspr' operati
On Wed, Feb 03, 2021 at 03:19:09PM +0530, Naveen N. Rao wrote:
> On 2021/02/03 12:08PM, Sandipan Das wrote:
> > The Power ISA says that the fixed-point load and update
> > instructions must neither use R0 for the base address (RA)
> > nor have the destination (RT) and the base address (RA) as
> > t
On Tue, Feb 02, 2021 at 08:30:50PM +0530, Sandipan Das wrote:
> This removes arch_supports_pkeys(), arch_usable_pkeys() and
> thread_pkey_regs_*() which are remnants from the following:
>
> commit 06bb53b33804 ("powerpc: store and restore the pkey state across
> context switches")
> commit 2cd4bd
This RFC is to introduce the 2nd swiotlb buffer for 64-bit DMA access. The
prototype is based on v5.11-rc6.
The state of the art swiotlb pre-allocates <=32-bit memory in order to meet
the DMA mask requirement for some 32-bit legacy device. Considering most
devices nowadays support 64-bit DMA and
This patch introduces swiotlb_get_type() in order to calculate which
swiotlb buffer the given DMA address belongs to.
This is to prepare to enable 64-bit swiotlb.
Cc: Joe Jin
Signed-off-by: Dongli Zhang
---
include/linux/swiotlb.h | 14 ++
kernel/dma/swiotlb.c | 2 ++
2 files
This is just to define new enumerated type without functional change.
The 'SWIOTLB_LO' is to index legacy 32-bit swiotlb buffer, while the
'SWIOTLB_HI' is to index the 64-bit buffer.
This is to prepare to enable 64-bit swiotlb.
Cc: Joe Jin
Signed-off-by: Dongli Zhang
---
include/linux/swiotlb
This patch is to enable the 64-bit swiotlb buffer.
The state of the art swiotlb pre-allocates <=32-bit memory in order to meet
the DMA mask requirement for some 32-bit legacy device. Considering most
devices nowadays support 64-bit DMA and IOMMU is available, the swiotlb is
not used for most of th
This patch converts several swiotlb related variables to arrays, in
order to maintain stat/status for different swiotlb buffers. Here are
variables involved:
- io_tlb_start and io_tlb_end
- io_tlb_nslabs and io_tlb_used
- io_tlb_list
- io_tlb_index
- max_segment
- io_tlb_orig_addr
- no_iotlb_memor
This patch converts several xen-swiotlb related variables to arrays, in
order to maintain stat/status for different swiotlb buffers. Here are
variables involved:
- xen_io_tlb_start and xen_io_tlb_end
- xen_io_tlb_nslabs
- MAX_DMA_BITS
There is no functional change and this is to prepare to enable
This patch is to enable the 64-bit xen-swiotlb buffer.
For Xen PVM DMA address, the 64-bit device will be able to allocate from
64-bit swiotlb buffer.
Cc: Joe Jin
Signed-off-by: Dongli Zhang
---
drivers/xen/swiotlb-xen.c | 117 --
1 file changed, 74 insertio
When adding a pte a ptesync is needed to order the update of the pte
with subsequent accesses otherwise a spurious fault may be raised.
radix__set_pte_at() does not do this for performance gains. For
non-kernel memory this is not an issue as any faults of this kind are
corrected by the page fault
On Wed, 3 Feb 2021 10:19:44 +0000 (UTC) Christophe Leroy
wrote:
> Commit 83d116c53058 ("mm: fix double page fault on arm64 if PTE_AF
> is cleared") introduced arch_faults_on_old_pte() helper to identify
> platforms that don't set page access bit in HW and require a page
> fault to set it.
>
>
Sandipan Das writes:
> On 03/02/21 3:19 pm, Naveen N. Rao wrote:
>> [...]
>>
>> Wouldn't it be easier to just do the below at the end? Or, am I missing
>> something?
>>
>> diff --git a/arch/powerpc/lib/sstep.c b/arch/powerpc/lib/sstep.c
>> index ede093e9623472..a2d726d2a5e9d1 100644
>> --- a/ar
Athira Rajeev writes:
> While sampling for marked events, currently we record the sample only
> if the SIAR valid bit of Sampled Instruction Event Register (SIER) is
> set. SIAR_VALID bit is used for fetching the instruction address from
> Sampled Instruction Address Register(SIAR). But there are
Excerpts from Andrew Morton's message of February 4, 2021 10:46 am:
> On Wed, 3 Feb 2021 10:19:44 +0000 (UTC) Christophe Leroy
> wrote:
>
>> Commit 83d116c53058 ("mm: fix double page fault on arm64 if PTE_AF
>> is cleared") introduced arch_faults_on_old_pte() helper to identify
>> platforms tha
Excerpts from Christophe Leroy's message of February 4, 2021 2:25 am:
>
>
> On 25/02/2020 at 18:35, Nicholas Piggin wrote:
>> Implement the bulk of interrupt return logic in C. The asm return code
>> must handle a few cases: restoring full GPRs, and emulating stack store.
>>
>
>
>> +notrace
Excerpts from Jordan Niethe's message of February 4, 2021 9:59 am:
> When adding a pte a ptesync is needed to order the update of the pte
> with subsequent accesses otherwise a spurious fault may be raised.
>
> radix__set_pte_at() does not do this for performance gains. For
> non-kernel memory thi
On Thu, Feb 4, 2021 at 2:31 PM Nicholas Piggin wrote:
>
> Excerpts from Jordan Niethe's message of February 4, 2021 9:59 am:
> > When adding a pte a ptesync is needed to order the update of the pte
> > with subsequent accesses otherwise a spurious fault may be raised.
> >
> > radix__set_pte_at() d
"Christopher M. Riedl" writes:
> On Mon Feb 1, 2021 at 10:54 AM CST, Gabriel Paubert wrote:
>> On Mon, Feb 01, 2021 at 09:55:44AM -0600, Christopher M. Riedl wrote:
>> > On Thu Jan 28, 2021 at 4:38 AM CST, David Laight wrote:
>> > > From: Christopher M. Riedl
>> > > > Sent: 28 January 2021 04:04
>
Hi Jordan,
On 2021/02/04 10:59AM, Jordan Niethe wrote:
> When adding a pte a ptesync is needed to order the update of the pte
> with subsequent accesses otherwise a spurious fault may be raised.
>
> radix__set_pte_at() does not do this for performance gains. For
> non-kernel memory this is not an
On Sat Jan 30, 2021 at 7:44 AM CST, Nicholas Piggin wrote:
> Excerpts from Michael Ellerman's message of January 30, 2021 9:32 pm:
> > "Christopher M. Riedl" writes:
> >> The idle entry/exit code saves/restores GPRs in the stack "red zone"
> >> (Protected Zone according to PowerPC64 ELF ABI v2). H
The Power ISA says that the fixed-point load and update
instructions must neither use R0 for the base address (RA)
nor have the destination (RT) and the base address (RA) as
the same register. Similarly, for fixed-point stores and
floating-point loads and stores, the instruction is invalid
when R0
Commit 8813ff49607e ("powerpc/sstep: Check instruction
validity against ISA version before emulation") introduced
a proper way to skip unknown instructions. This makes sure
that the same is used for the darn instruction when the
range selection bits have a reserved value.
Fixes: a23987ef267a ("pow
On 04/02/21 12:44 pm, Sandipan Das wrote:
> The Power ISA says that the fixed-point load and update
> instructions must neither use R0 for the base address (RA)
> nor have the destination (RT) and the base address (RA) as
> the same register. Similarly, for fixed-point stores and
> floating-point
On Wed, Feb 03, 2021 at 03:37:05PM -0800, Dongli Zhang wrote:
> This patch converts several swiotlb related variables to arrays, in
> order to maintain stat/status for different swiotlb buffers. Here are
> variables involved:
>
> - io_tlb_start and io_tlb_end
> - io_tlb_nslabs and io_tlb_used
> -
On 2021/02/04 12:44PM, Sandipan Das wrote:
> The Power ISA says that the fixed-point load and update
> instructions must neither use R0 for the base address (RA)
> nor have the destination (RT) and the base address (RA) as
> the same register. Similarly, for fixed-point stores and
> floating-point
On 04/02/21 1:09 pm, Naveen N. Rao wrote:
> [...]
>
> I'm afraid there is one more thing. scripts/checkpatch.pl reports:
>
> WARNING: 'an userspace' may be misspelled - perhaps 'a userspace'?
> #52:
> While an userspace program having an instruction word like
>
>
> ERROR: swit