Future Power architecture is introducing a second DAWR. Add SPRN_ macros
for it.
Signed-off-by: Ravi Bangoria
---
arch/powerpc/include/asm/reg.h | 2 ++
1 file changed, 2 insertions(+)
diff --git a/arch/powerpc/include/asm/reg.h b/arch/powerpc/include/asm/reg.h
index 156ee89fa9be..062e74cf4
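As a rough illustration of what the addition described above might look like, here is a hypothetical reg.h fragment. The numeric SPR values are illustrative placeholders only, not taken from the architecture specification:

```c
/* Hypothetical sketch of the new SPRN_ macros for the second
 * DAWR/DAWRX pair; the numeric values are placeholders, not the
 * real SPR numbers from the Power ISA. */
#define SPRN_DAWR0   0xB4   /* first Data Address Watchpoint Register */
#define SPRN_DAWRX0  0xBC   /* its extension register */
#define SPRN_DAWR1   0xB5   /* new: second DAWR */
#define SPRN_DAWRX1  0xBD   /* new: second DAWRX */
```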
So far, powerpc Book3S code has been written with the assumption of only
one watchpoint. But the future Power architecture is introducing a second
watchpoint register (DAWR). Even though this patchset does not enable the
2nd DAWR, it makes the infrastructure ready so that enabling the 2nd DAWR
should just be a mat
Future Power architecture is introducing a second DAWR. Rename the
current DAWR macros as:
s/SPRN_DAWR/SPRN_DAWR0/
s/SPRN_DAWRX/SPRN_DAWRX0/
Signed-off-by: Ravi Bangoria
---
arch/powerpc/include/asm/reg.h | 4 ++--
arch/powerpc/kernel/dawr.c | 4 ++--
arch/powerpc/kvm/book3s_
So far we had only one watchpoint, so HBP_NUM was hardcoded to 1.
But the future Power architecture is introducing a 2nd DAWR, and thus the
kernel should be able to dynamically find the actual number of watchpoints
supported by the hardware it's running on. Introduce a function for that.
Also convert HBP_NUM macro to
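A minimal userspace sketch of the idea described above. The real kernel function would key off a CPU feature bit; the `cpu_has_second_dawr` flag here is a hypothetical stand-in:

```c
#include <stdbool.h>

#define HBP_NUM_MAX 2  /* compile-time upper bound on watchpoint slots */

/* Hypothetical stand-in for the kernel's CPU-feature query. */
static bool cpu_has_second_dawr = false;

/* Report how many watchpoints the running hardware actually
 * supports, instead of assuming a hardcoded 1. */
static int nr_wp_slots(void)
{
	return cpu_has_second_dawr ? 2 : 1;
}
```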
A user can ask for the number of available watchpoints
(dbginfo.num_data_bps) using ptrace(PPC_PTRACE_GETHWDBGINFO). Return the
actual number of available watchpoints on the machine rather than the
hardcoded 1.
Signed-off-by: Ravi Bangoria
---
arch/powerpc/kernel/ptrace.c | 2 +-
1 file changed, 1 insertion(+), 1
Introduce a new parameter 'nr' to set_dawr() which indicates which DAWR
should be programmed.
Signed-off-by: Ravi Bangoria
---
arch/powerpc/include/asm/hw_breakpoint.h | 4 ++--
arch/powerpc/kernel/dawr.c | 15 ++-
arch/powerpc/kernel/process.c | 2 +-
3 files
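A userspace sketch of the interface change described above, with the DAWR register file simulated as an array. In the real kernel, programming a slot would mtspr() the matching SPRN_DAWRn/SPRN_DAWRXn pair; the struct fields shown are simplified assumptions:

```c
#include <assert.h>

#define HBP_NUM_MAX 2

/* Simplified stand-in for the kernel's arch_hw_breakpoint. */
struct arch_hw_breakpoint {
	unsigned long address;
	unsigned short type;
	unsigned short len;
};

/* Simulated DAWR register file, one slot per watchpoint. */
static unsigned long dawr_regs[HBP_NUM_MAX];

/* Sketch: 'nr' selects which DAWR to program. */
static int set_dawr(int nr, struct arch_hw_breakpoint *brk)
{
	assert(nr >= 0 && nr < HBP_NUM_MAX);
	dawr_regs[nr] = brk->address;
	return 0;
}
```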
Introduce a new parameter 'nr' to __set_breakpoint() which indicates
which DAWR should be programmed. Also convert the current_brk variable
to an array.
Signed-off-by: Ravi Bangoria
---
arch/powerpc/include/asm/debug.h | 2 +-
arch/powerpc/include/asm/hw_breakpoint.h | 2 +-
arch/powerpc/kern
Instead of disabling only one watchpoint, get the number of available
watchpoints dynamically and disable all of them.
Signed-off-by: Ravi Bangoria
---
arch/powerpc/include/asm/hw_breakpoint.h | 15 +++
1 file changed, 7 insertions(+), 8 deletions(-)
diff --git a/arch/powerpc/include/asm/
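The disable-all loop described above can be sketched in userspace like this. The helper names mirror the kernel's, but the bodies are simulations under the assumption that two slots exist:

```c
#define HBP_NUM_MAX 2

struct arch_hw_breakpoint { unsigned long address; };

/* Pretend both slots currently hold live watchpoints. */
static struct arch_hw_breakpoint current_brk[HBP_NUM_MAX] = {
	{ 0x1000 }, { 0x2000 },
};

static int nr_wp_slots(void) { return HBP_NUM_MAX; }  /* assumed 2 here */

static void __set_breakpoint(int nr, struct arch_hw_breakpoint *brk)
{
	current_brk[nr] = *brk;
}

/* Loop over every available slot instead of touching only slot 0. */
static void hw_breakpoint_disable(void)
{
	struct arch_hw_breakpoint null_brk = { 0 };
	int i;

	for (i = 0; i < nr_wp_slots(); i++)
		__set_breakpoint(i, &null_brk);
}
```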
Instead of disabling only the first watchpoint, disable all available
watchpoints while clearing dawr_force_enable.
Signed-off-by: Ravi Bangoria
---
arch/powerpc/kernel/dawr.c | 10 +++---
1 file changed, 7 insertions(+), 3 deletions(-)
diff --git a/arch/powerpc/kernel/dawr.c b/arch/powerpc/ker
So far powerpc hardware supported only one watchpoint. But the future
Power architecture is introducing a 2nd DAWR. Convert
thread_struct->hw_brk into an array.
Signed-off-by: Ravi Bangoria
---
arch/powerpc/include/asm/processor.h | 2 +-
arch/powerpc/kernel/process.c | 43 --
ptrace_bps is already an array of size HBP_NUM_MAX, but we use a
hardcoded index 0 while fetching/updating it. Convert such code
to loop over the array.
Signed-off-by: Ravi Bangoria
---
arch/powerpc/kernel/hw_breakpoint.c | 7 +--
arch/powerpc/kernel/process.c | 6 +-
arch/powerpc/kern
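One common shape of the conversion described above is replacing a hardcoded `ptrace_bps[0]` access with a walk over the whole array. The sketch below is a userspace illustration; `find_free_bp_slot` is a hypothetical helper name, not the kernel's:

```c
#include <stddef.h>

#define HBP_NUM_MAX 2

struct perf_event;  /* opaque, as in much of the kernel's usage */

/* thread_struct fragment: ptrace_bps was already sized HBP_NUM_MAX,
 * but only index 0 was ever used. */
static struct perf_event *ptrace_bps[HBP_NUM_MAX];

/* After the change: walk the whole array rather than slot 0 only. */
static struct perf_event **find_free_bp_slot(void)
{
	int i;

	for (i = 0; i < HBP_NUM_MAX; i++)
		if (!ptrace_bps[i])
			return &ptrace_bps[i];
	return NULL;
}
```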
Introduce an is_ptrace_bp() function and move the check inside it.
We will utilize it more in a later set of patches.
Signed-off-by: Ravi Bangoria
---
arch/powerpc/kernel/hw_breakpoint.c | 7 ++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/kernel/hw_breakpo
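A minimal sketch of the helper, assuming the check in question is "does this breakpoint's overflow handler point at the ptrace handler" (the kernel types are reduced to stand-ins here):

```c
#include <stdbool.h>

/* Minimal stand-ins for the kernel types involved. */
typedef void (*overflow_handler_t)(void);

struct perf_event {
	overflow_handler_t overflow_handler;
};

static void ptrace_triggered(void) { }  /* ptrace's handler */
static void perf_handler(void) { }      /* any other handler */

/* A breakpoint came from ptrace iff its overflow handler is the
 * ptrace one -- the check this patch moves behind a named helper. */
static bool is_ptrace_bp(struct perf_event *bp)
{
	return bp->overflow_handler == ptrace_triggered;
}
```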
Currently we assume that we have only one watchpoint supported by the
hardware. Get rid of that assumption and use a dynamic loop instead.
This should make supporting more watchpoints very easy.
So far, with only one watchpoint, the handler was simple. But with
multiple watchpoints, we need a mechanism to det
ptrace and perf watchpoints on powerpc behave differently. A ptrace
watchpoint works in one-shot mode and generates a signal before the
instruction executes. It's the ptrace user's job to single-step the
instruction and re-enable the watchpoint. OTOH, in the case of a perf
watchpoint, the kernel
emulates/single-steps
Xmon allows overwriting breakpoints because only one DAWR is
supported. But with multiple DAWRs, overwriting becomes ambiguous
or unnecessarily complicated. So let's not allow it.
Signed-off-by: Ravi Bangoria
---
arch/powerpc/xmon/xmon.c | 4
1 file changed, 4 insertions(+)
diff --git a
Add support for the 2nd DAWR in xmon. With this, we can have two
simultaneous breakpoints from xmon.
Signed-off-by: Ravi Bangoria
---
arch/powerpc/xmon/xmon.c | 101 ++-
1 file changed, 69 insertions(+), 32 deletions(-)
diff --git a/arch/powerpc/xmon/xmon.c b/arc
On Mon, Mar 09, 2020 at 11:55:44AM +0530, Kajol Jain wrote:
> First patch of the patchset fix inconsistent results we are getting when
> we run multiple 24x7 events.
>
> Patchset adds json file metric support for the hv_24x7 socket/chip level
> events. "hv_24x7" pmu interface events needs system d
Vaibhav Jain writes:
> Add a new powerpc specific asm header named 'papr-scm.h' that descibes
> the interface between PHYP and guest kernel running as an LPAR.
>
> The HCALLs specific to managing SCM are descibed in Ref[1]. The asm
> header introduced by this patch however describes the data stru
Vaibhav Jain writes:
> Implement support for fetching dimm health information via
> H_SCM_HEALTH hcall as documented in Ref[1]. The hcall returns a pair of
> 64-bit big-endian integers which are then stored in 'struct
> papr_scm_priv' and subsequently exposed to userspace via dimm
> attribute 'pa
Vaibhav Jain writes:
> Previous commit [1] introduced 'struct nd_papr_scm_dimm_health_stat' for
> communicating health status of an nvdimm to libndctl. This struct
> however can also be used to cache the nvdimm health information in
> 'struct papr_scm_priv' instead of two '__be64' values. Benefit
Vaibhav Jain writes:
> Implement support for fetching dimm performance metrics via
> H_SCM_PERFORMANCE_HEALTH hcall as documented in Ref[1]. The hcall
> returns a structure as described in Ref[1] and defined as newly
> introduced 'struct papr_scm_perf_stats'. The struct has a header
> followed by
Vaibhav Jain writes:
> The DSM 'DSM_PAPR_SCM_HEALTH' should return a 'struct
> nd_papr_scm_dimm_health_stat' containing information in dimm health back
> to user space in response to ND_CMD_CALL. We implement this DSM by
> implementing a new function papr_scm_get_health() that queries the
> DIMM
Sure, will do it today
On March 9, 2020 6:35:06 AM GMT-03:00, Jiri Olsa wrote:
>On Mon, Mar 09, 2020 at 11:55:44AM +0530, Kajol Jain wrote:
>> First patch of the patchset fix inconsistent results we are getting
>when
>> we run multiple 24x7 events.
>>
>> Patchset adds json file metric support fo
On Mon, Mar 09, 2020 at 11:58:29AM +0800, Shengjiu Wang wrote:
> In order to align with new ESARC, we add new property fsl,asrc-format.
> The fsl,asrc-format can replace the fsl,asrc-width, driver
> can accept format from devicetree, don't need to convert it to
> format through width.
>
> Signed-o
On Mon, Mar 09, 2020 at 11:58:28AM +0800, Shengjiu Wang wrote:
> In order to support new EASRC and simplify the code structure,
> We decide to share the common structure between them. This bring
> a problem that EASRC accept format directly from devicetree, but
> ASRC accept width from devicetree.
On Mon, Mar 09, 2020 at 11:58:31AM +0800, Shengjiu Wang wrote:
> In order to move common structure to fsl_asrc_common.h
> we change the name of asrc_priv to asrc, the asrc_priv
> will be used by new struct fsl_asrc_priv.
This actually could be a cleanup patch which comes as the
first one in this s
On Sat, Mar 07, 2020 at 10:58:54AM +1000, Nicholas Piggin wrote:
> Segher Boessenkool's on March 5, 2020 8:55 pm:
> > That name looks perfect to me. You'll have to update REs expecting the
> > arch at the end (like /le$/), but you had to already I think?
>
> le$ is still okay for testing ppc64le,
On Fri, Feb 21, 2020 at 2:38 AM YueHaibing wrote:
>
> commit 3b2abda7d28c ("soc: fsl: dpio: Replace QMAN array
> mode with ring mode enqueue") introduced this, but not
> used, so remove it.
>
> Reported-by: Hulk Robot
> Signed-off-by: YueHaibing
> ---
> drivers/soc/fsl/dpio/qbman-portal.c | 4 -
A few small comments -- trying to improve readability.
On Mon, Mar 09, 2020 at 11:58:34AM +0800, Shengjiu Wang wrote:
> EASRC (Enhanced Asynchronous Sample Rate Converter) is a new IP module
> found on i.MX8MN. It is different with old ASRC module.
>
> The primary features for the EASRC are as fo
The ibmvnic driver does not check the device state when the device
is removed. If the device is removed while a device reset is being
processed, the remove may free structures needed by the reset,
causing an oops.
Fix this by checking the device state before processing device remove.
Signed-off
Back again, just minor changes.
v5: https://patchwork.ozlabs.org/project/linuxppc-dev/list/?series=160869
Changes since v5:
[1/8]: Patch 8/8 squashed as suggested by Andrew Donnellan
Added a note to the comment of change_page_attr()
Rename size to sz to meet
Very rudimentary, just
echo 1 > [debugfs]/check_wx_pages
and check the kernel log. Useful for testing strict module RWX.
Updated the Kconfig entry to reflect this.
Also fixed a typo.
Reviewed-by: Kees Cook
Signed-off-by: Russell Currey
---
arch/powerpc/Kconfig.debug | 6 -
With CONFIG_STRICT_KERNEL_RWX=y and CONFIG_KPROBES=y, there will be one
W+X page at boot by default. This can be tested with
CONFIG_PPC_PTDUMP=y and CONFIG_PPC_DEBUG_WX=y set, and checking the
kernel log during boot.
powerpc doesn't implement its own alloc() for kprobes like other
architectures d
The set_memory_{ro/rw/nx/x}() functions are required for STRICT_MODULE_RWX,
and are generally useful primitives to have. This implementation is
designed to be completely generic across powerpc's many MMUs.
It's possible that this could be optimised to be faster for specific
MMUs, but the focus is
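The MMU-generic approach described above can be sketched in userspace as one walker that applies an attribute change page by page, with thin wrappers per permission. Names mirror the kernel API, but this is a simulation under assumed types, not kernel code:

```c
#define PAGE_SIZE 4096UL

/* Simulated per-page attribute store standing in for page tables. */
enum attr { ATTR_RW, ATTR_RO, ATTR_NX, ATTR_X };
static enum attr page_attr[16];

/* Generic walker: apply one attribute change to every page in the
 * range, independent of any particular MMU's page-table layout. */
static int change_memory_attr(unsigned long addr, int numpages, enum attr a)
{
	int i;

	for (i = 0; i < numpages; i++)
		page_attr[addr / PAGE_SIZE + i] = a;
	return 0;
}

/* Thin wrappers, as the set_memory_{ro/rw/nx/x}() family would be. */
static int set_memory_ro(unsigned long addr, int numpages)
{
	return change_memory_attr(addr, numpages, ATTR_RO);
}

static int set_memory_nx(unsigned long addr, int numpages)
{
	return change_memory_attr(addr, numpages, ATTR_NX);
}
```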
To enable strict module RWX on powerpc, set:
CONFIG_STRICT_MODULE_RWX=y
You should also have CONFIG_STRICT_KERNEL_RWX=y set to have any real
security benefit.
ARCH_HAS_STRICT_MODULE_RWX is set to require ARCH_HAS_STRICT_KERNEL_RWX.
This is due to a quirk in arch/Kconfig and arch/powerpc/Kcon
skiroot_defconfig is the only powerpc defconfig with STRICT_KERNEL_RWX
enabled, and if you want memory protection for kernel text you'd want it
for modules too, so enable STRICT_MODULE_RWX there.
Acked-by: Joel Stanley
Signed-off-by: Russell Currey
---
arch/powerpc/configs/skiroot_defconfig | 1
From: Christophe Leroy
In addition to the set_memory_xx() functions, which allow changing
the memory attributes of not (yet) used memory regions, implement a
set_memory_attr() function to:
- set the final memory protection after init on currently used
kernel regions.
- enable/disable kernel memo
From: Christophe Leroy
Use set_memory_attr() instead of the PPC32-specific change_page_attr().
change_page_attr() was checking that the address was not mapped by
blocks and was handling highmem, but that's unneeded because the
affected pages can't be in highmem and block mapping verification
is a
From: Juliet Kim
Date: Mon, 9 Mar 2020 19:02:04 -0500
> diff --git a/drivers/net/ethernet/ibm/ibmvnic.c
> b/drivers/net/ethernet/ibm/ibmvnic.c
> index c75239d8820f..7ef1ae0d49bc 100644
> --- a/drivers/net/ethernet/ibm/ibmvnic.c
> +++ b/drivers/net/ethernet/ibm/ibmvnic.c
> @@ -2144,6 +2144,8 @@
On 03/07/2020 12:35 PM, Christophe Leroy wrote:
>
>
> Le 07/03/2020 à 01:56, Anshuman Khandual a écrit :
>>
>>
>> On 03/07/2020 06:04 AM, Qian Cai wrote:
>>>
>>>
On Mar 6, 2020, at 7:03 PM, Anshuman Khandual
wrote:
Hmm, set_pte_at() function is not preferred here for thes
Christophe Leroy writes:
> Le 07/03/2020 à 09:42, Christophe Leroy a écrit :
>> Le 06/03/2020 à 20:05, Nick Desaulniers a écrit :
>>> As a heads up, our CI went red last night, seems like a panic from
>>> free_initmem? Is this a known issue?
>>
>> Thanks for the heads up.
>>
>> No such issue wi
On Fri, Mar 06, 2020 at 04:01:40PM +0100, Cédric Le Goater wrote:
> When a CPU is brought up, an IPI number is allocated and recorded
> under the XIVE CPU structure. Invalid IPI numbers are tracked with
> interrupt number 0x0.
>
> On the PowerNV platform, the interrupt number space starts at 0x10