Potentially, hvc_open() can be called in parallel when two tasks call
open() on /dev/hvcX. In such a scenario, if the hp->ops->notifier_add()
callback in the function fails, it sets tty->driver_data to NULL, and the
parallel hvc_open() can see this NULL and cause a memory abort.
Hence, do a
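Whatever the eventual fix, the window itself can be sketched as follows; a
hypothetical, minimal kernel-style illustration (the lock and function names
are ours, not the actual hvc_console.c change, and it assumes the private
struct hvc_struct declaration is visible):

#include <linux/errno.h>
#include <linux/mutex.h>
#include <linux/tty.h>

static DEFINE_MUTEX(hvc_open_lock);	/* hypothetical serialization */

static int hvc_open_sketch(struct tty_struct *tty, struct file *filp)
{
	struct hvc_struct *hp;
	int rc = 0;

	mutex_lock(&hvc_open_lock);
	hp = tty->driver_data;
	if (!hp) {			/* a parallel open already failed */
		rc = -ENODEV;
		goto out;
	}
	rc = hp->ops->notifier_add(hp, hp->data);
	if (rc)
		tty->driver_data = NULL;	/* the store described above */
out:
	mutex_unlock(&hvc_open_lock);
	return rc;
}

With both opens serialized on the mutex, the second caller observes the NULL
and returns -ENODEV instead of dereferencing a stale pointer.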
[ + Dmitry & linux-input ]
Nathan Chancellor writes:
> This causes a build error with CONFIG_WALNUT because kb_cs and kb_data
> were removed in commit 917f0af9e5a9 ("powerpc: Remove arch/ppc and
> include/asm-ppc").
>
> ld.lld: error: undefined symbol: kb_cs
>> referenced by i8042-ppcio.h:28 (dri
On Tue, May 19, 2020 at 12:42:15PM -0700, Guenter Roeck wrote:
> On Tue, May 19, 2020 at 11:40:32AM -0700, Ira Weiny wrote:
> > On Tue, May 19, 2020 at 09:54:22AM -0700, Guenter Roeck wrote:
> > > On Mon, May 18, 2020 at 11:48:43AM -0700, ira.we...@intel.com wrote:
> > > > From: Ira Weiny
> > > >
On 5/19/20 11:45 AM, Athira Rajeev wrote:
From: Anju T Sudhakar
Add support for perf extended register capability in powerpc.
The capability flag PERF_PMU_CAP_EXTENDED_REGS is used to indicate a
PMU that supports extended registers. The generic code defines the mask
of extended registers a
Aneesh,
Do these memory unplug fixes on radix look fine? Do you want these
to be rebased on a recent kernel? Would you like me to test any specific
scenario with these fixes?
Regards,
Bharata.
On Mon, Apr 06, 2020 at 09:19:20AM +0530, Bharata B Rao wrote:
> Memory unplug has a few bugs which I ha
Christophe Leroy writes:
> On 19/05/2020 at 06:05, Michael Ellerman wrote:
>> Jordan Niethe writes:
>>> On Sun, May 17, 2020 at 4:39 AM Christophe Leroy
>>> wrote:
On 06/05/2020 at 05:40, Jordan Niethe wrote:
> Prefixed instructions will mean there are instructions of different
Stephen Rothwell writes:
> Hi all,
>
> Today's linux-next merge of the rcu tree got a conflict in:
>
> arch/powerpc/kernel/traps.c
>
> between commit:
>
> 116ac378bb3f ("powerpc/64s: machine check interrupt update NMI accounting")
>
> from the powerpc tree and commit:
>
> 187416eeb388 ("hard
Just a heads up. Repeatedly compiling kernels for a while would trigger
endless soft-lockups since next-20200519 on both x86_64 and powerpc.
The .config files are in:
https://github.com/cailca/linux-mm
I first tried to revert the linux-next commit 68cd9f4e7238
("tick/nohz: Narrow down noise while se
On Wednesday, 20 May 2020 12:42:09 PM AEST Michael Ellerman wrote:
> Alistair Popple writes:
> > POWER10 introduces two new architectural features - ISAv3.1 and matrix
> > multiply accumulate (MMA) instructions. Userspace detects the presence
> > of these features via two HWCAP bits introduced in
Alistair Popple writes:
> POWER10 introduces two new architectural features - ISAv3.1 and matrix
> multiply accumulate (MMA) instructions. Userspace detects the presence
> of these features via two HWCAP bits introduced in this patch. These
> bits have been agreed to by the compiler and binutils t
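For userspace, the detection described above boils down to reading
AT_HWCAP2. A small standalone sketch; the fallback bit values below mirror
the uapi header this series touches, but are guarded as assumptions in case
older libc headers lack them:

#include <stdio.h>
#include <sys/auxv.h>

#ifndef PPC_FEATURE2_ARCH_3_1
#define PPC_FEATURE2_ARCH_3_1	0x00040000	/* assumed value */
#endif
#ifndef PPC_FEATURE2_MMA
#define PPC_FEATURE2_MMA	0x00020000	/* assumed value */
#endif

int main(void)
{
	unsigned long hwcap2 = getauxval(AT_HWCAP2);

	printf("ISA v3.1: %s\n", (hwcap2 & PPC_FEATURE2_ARCH_3_1) ? "yes" : "no");
	printf("MMA     : %s\n", (hwcap2 & PPC_FEATURE2_MMA) ? "yes" : "no");
	return 0;
}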
Extend the alignment handler selftest to exercise prefixed load store
instructions. Add tests for prefixed VSX, floating point and integer
instructions.
Skip the prefix tests if the ISA version does not support prefixed instructions.
Signed-off-by: Jordan Niethe
---
.../powerpc/alignment/alignment_hand
The alignment handler selftest needs cache-inhibited memory, and
currently /dev/fb0 is relied on to provide this. This prevents running
the test on systems without /dev/fb0 (e.g., mambo). Read the command-line
arguments for an optional path to be used instead, as well as an
optional offset to be for
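The optional-path handling could look roughly like this; a sketch assuming
the conventional open()/mmap() route to the cache-inhibited region (the
argument layout and the /dev/fb0 fallback are illustrative):

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
	const char *path = (argc > 1) ? argv[1] : "/dev/fb0";
	off_t offset = (argc > 2) ? strtoll(argv[2], NULL, 0) : 0;
	char *ci;
	int fd = open(path, O_RDWR);

	if (fd < 0) {
		perror(path);
		return 1;	/* the real selftest would SKIP here */
	}
	ci = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, offset);
	if (ci == MAP_FAILED) {
		perror("mmap");
		close(fd);
		return 1;
	}
	/* ... run the alignment tests against 'ci' ... */
	munmap(ci, 4096);
	close(fd);
	return 0;
}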
On Tue, May 19, 2020 at 02:15:37AM -0400, Athira Rajeev wrote:
> From: Anju T Sudhakar
>
> Add support for perf extended register capability in powerpc.
> The capability flag PERF_PMU_CAP_EXTENDED_REGS is used to indicate a
> PMU that supports extended registers. The generic code defines the
On Tue, May 19, 2020 at 05:56:32PM -0700, 'Nick Desaulniers' via Clang Built
Linux wrote:
> Looks like our CI is still red from this:
>
> https://travis-ci.com/github/ClangBuiltLinux/continuous-integration/builds/166854584
>
> Filing a bug to follow up on:
> https://github.com/ClangBuiltLinux/li
Thanks, not sure where I got that name from but it's probably wrong in a few
places. Will wait a bit in case there are any more comments and then respin
the series to update the name.
- Alistair
On Wednesday, 20 May 2020 3:51:53 AM AEST Paul A. Clarke wrote:
> On Tue, May 19, 2020 at 10:31:56AM
On Tue, May 19, 2020 at 11:40:32AM -0700, Ira Weiny wrote:
> On Tue, May 19, 2020 at 09:54:22AM -0700, Guenter Roeck wrote:
> > On Mon, May 18, 2020 at 11:48:43AM -0700, ira.we...@intel.com wrote:
> > > From: Ira Weiny
> > >
> > > The kunmap_atomic clean up failed to remove one set of pagefault/p
This patch implements support for the PDSM request 'PAPR_SCM_PDSM_HEALTH',
which returns a newly introduced 'struct nd_papr_pdsm_health' instance
containing DIMM health information back to user space in response to
ND_CMD_CALL. This functionality is implemented in the newly introduced
papr_scm_get_health() t
'seq_buf' provides a very useful abstraction for writing to a string
buffer without needing to worry about it overflowing. However, even
though the API has been stable for a couple of years now, it is still
not exported to external modules, limiting its usage.
Hence this patch proposes an update to 'seq_b
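For reference, typical in-kernel usage of the API being exported looks like
this (a minimal sketch; the buffer name and values are illustrative):

#include <linux/seq_buf.h>

static char report[256];

static void fill_report(void)
{
	struct seq_buf s;

	seq_buf_init(&s, report, sizeof(report));
	/* writes are clamped to the buffer size, so no overflow checks needed */
	seq_buf_printf(&s, "status: %d\n", 0);
	seq_buf_printf(&s, "flags : 0x%x\n", 0xff);
}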
Introduce support for PAPR NVDIMM Specific Methods (PDSM) in the papr_scm
module and add the command family to the white list of NVDIMM command
sets. Also advertise support for ND_CMD_CALL for the dimm
command mask and implement the necessary scaffolding in the module to
handle the ND_CMD_CALL ioctl and PDSM
Implement support for fetching nvdimm health information via the
H_SCM_HEALTH hcall as documented in Ref[1]. The hcall returns a pair
of 64-bit big-endian integers, the bitwise-and of which is then stored in
'struct papr_scm_priv' and subsequently partially exposed to
user-space via a newly introduced dimm s
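The bitwise-and mentioned above can be read as masking the health bitmap
with its validity bitmap; a sketch (the names are ours, not the driver's):

#include <linux/types.h>
#include <asm/byteorder.h>

static u64 papr_health_sketch(__be64 bitmap, __be64 valid)
{
	/* keep only the health bits the platform marked as valid */
	return be64_to_cpu(bitmap) & be64_to_cpu(valid);
}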
Add documentation to 'papr_hcalls.rst' describing the bitmap flags
that are returned from H_SCM_HEALTH hcall as per the PAPR-SCM
specification.
Cc: Dan Williams
Cc: Michael Ellerman
Cc: "Aneesh Kumar K . V"
Signed-off-by: Vaibhav Jain
---
Changelog:
Resend:
* None
v6..v7:
* None
v5..v6
* Ne
The PAPR standard[1][3] provides mechanisms to query the health and
performance stats of an NVDIMM via various hcalls as described in
Ref[2]. Until now these stats were never available nor exposed to
user-space tools like 'ndctl'. This is partly due to the PAPR platform
not having support for ACPI
On Tue, May 19, 2020 at 6:53 AM Aneesh Kumar K.V
wrote:
>
> Dan Williams writes:
>
> > On Mon, May 18, 2020 at 10:30 PM Aneesh Kumar K.V
> > wrote:
>
> ...
>
> >> Applications using new instructions will behave as expected when running
> >> on P8 and P9. Only future hardware will differentiate b
On Tue, May 19, 2020 at 09:54:22AM -0700, Guenter Roeck wrote:
> On Mon, May 18, 2020 at 11:48:43AM -0700, ira.we...@intel.com wrote:
> > From: Ira Weiny
> >
> > The kunmap_atomic clean up failed to remove one set of pagefault/preempt
> > enables when vaddr is not in the fixmap.
> >
> > Fixes: b
On Tue, May 19, 2020 at 10:31:56AM +1000, Alistair Popple wrote:
> Matrix multiple accumulate (MMA) is a new feature added to ISAv3.1 and
Conclusion is that this should be "Matrix-Multiply Assist", but then there
are a couple more below...
> POWER10. Support on powernv can be selected via a firmw
On Tue, May 19, 2020 at 10:31:51AM +1000, Alistair Popple wrote:
> POWER10 introduces two new architectural features - ISAv3.1 and matrix
> multiply accumulate (MMA) instructions. Userspace detects the presence
> of these features via two HWCAP bits introduced in this patch. These
> bits have been
On Mon, May 18, 2020 at 11:48:43AM -0700, ira.we...@intel.com wrote:
> From: Ira Weiny
>
> The kunmap_atomic clean up failed to remove one set of pagefault/preempt
> enables when vaddr is not in the fixmap.
>
> Fixes: bee2128a09e6 ("arch/kunmap_atomic: consolidate duplicate code")
> Signed-off-b
On Mon, May 18, 2020 at 07:50:36PM -0700, Guenter Roeck wrote:
> Hi Ira,
>
> On 5/18/20 5:03 PM, Ira Weiny wrote:
> > On Sun, May 17, 2020 at 09:29:32PM -0700, Guenter Roeck wrote:
> >> On Sun, May 17, 2020 at 08:49:39PM -0700, Ira Weiny wrote:
> >>> On Sat, May 16, 2020 at 03:33:06PM -0700, Guent
On Tue, May 19, 2020 at 10:28:55AM -0500, Segher Boessenkool wrote:
> On Tue, May 19, 2020 at 10:22:40AM -0500, Paul A. Clarke wrote:
> > On Tue, May 19, 2020 at 10:05:56AM -0500, Segher Boessenkool wrote:
> > > On Tue, May 19, 2020 at 09:49:22AM -0500, Paul A. Clarke wrote:
> > > > On Tue, May 19,
On Tue, May 19, 2020 at 10:22:40AM -0500, Paul A. Clarke wrote:
> On Tue, May 19, 2020 at 10:05:56AM -0500, Segher Boessenkool wrote:
> > On Tue, May 19, 2020 at 09:49:22AM -0500, Paul A. Clarke wrote:
> > > On Tue, May 19, 2020 at 10:31:56AM +1000, Alistair Popple wrote:
> > > > Matrix multiple ac
On Tue, May 19, 2020 at 10:05:56AM -0500, Segher Boessenkool wrote:
> On Tue, May 19, 2020 at 09:49:22AM -0500, Paul A. Clarke wrote:
> > On Tue, May 19, 2020 at 10:31:56AM +1000, Alistair Popple wrote:
> > > Matrix multiple accumulate (MMA) is a new feature added to ISAv3.1 and
> >
> > nit: "Matr
On Tue, May 19, 2020 at 09:49:22AM -0500, Paul A. Clarke wrote:
> On Tue, May 19, 2020 at 10:31:56AM +1000, Alistair Popple wrote:
> > Matrix multiple accumulate (MMA) is a new feature added to ISAv3.1 and
>
> nit: "Matrix-Multiply Accelerator".
"Matrix-Multiply Assist" in fact :-)
Segher
On Tue, May 19, 2020 at 10:31:56AM +1000, Alistair Popple wrote:
> Matrix multiple accumulate (MMA) is a new feature added to ISAv3.1 and
nit: "Matrix-Multiply Accelerator".
PC
tree: https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git next
head: 30df74d67d48949da87e3a5b57c381763e8fd526
commit: 29da4f91c0c1fbda12b8a31be0d564930208c92e [108/110] powerpc/watchpoint:
Don't allow concurrent perf and ptrace events
config: x86_64-allyesconfig (attached as .con
Dan Williams writes:
> On Mon, May 18, 2020 at 10:30 PM Aneesh Kumar K.V
> wrote:
...
>> Applications using new instructions will behave as expected when running
>> on P8 and P9. Only future hardware will differentiate between 'dcbf' and
>> 'dcbfps'
>
> Right, this is the problem. Applications
On Tue, 12 May 2020 18:22:59 +0800, Shengjiu Wang wrote:
> The processing clock is different for platforms, so it is better
> to set ASR76K and ASR56K based on processing clock, rather than
> hard coding the value for them.
Applied to
https://git.kernel.org/pub/scm/linux/kernel/git/broonie/sou
On Tuesday, 19.05.2020 at 17:41 +0800, Shengjiu Wang wrote:
> There are two reasons why we need to move the request
> of the DMA channel from probe to open.
How do you handle the -EPROBE_DEFER return code from the channel request if
you don't do it in probe?
> - When the DMA device binds with power
There are two reasons why we need to move the request
of the DMA channel from probe to open.
- When the DMA device binds with power-domains, the power will
be enabled when we request the DMA channel. If the request of the DMA
channel happens in probe, then the power-domains will always be
enabled after kerne
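The -EPROBE_DEFER point raised in the reply is why requesting in probe is
attractive: the driver core retries a deferred probe, while open() has no
such retry path. A sketch under that assumption (names are illustrative):

#include <linux/dmaengine.h>
#include <linux/err.h>
#include <linux/platform_device.h>

static int sketch_probe(struct platform_device *pdev)
{
	struct dma_chan *chan = dma_request_chan(&pdev->dev, "tx");

	if (IS_ERR(chan))
		return PTR_ERR(chan);	/* may be -EPROBE_DEFER: core retries later */
	platform_set_drvdata(pdev, chan);
	return 0;
}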
tree: https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git next
head: 30df74d67d48949da87e3a5b57c381763e8fd526
commit: 140777a3d8dfdb3d3f20ea7707c0f1c0ce1b0aa5 [17/110] powerpc/fadump:
consider reserved ranges while reserving memory
config: powerpc-allyesconfig (attached as .confi
Hi all,
Today's linux-next merge of the rcu tree got a conflict in:
arch/powerpc/kernel/traps.c
between commit:
116ac378bb3f ("powerpc/64s: machine check interrupt update NMI accounting")
from the powerpc tree and commit:
187416eeb388 ("hardirq/nmi: Allow nested nmi_enter()")
from the
On Mon, May 18, 2020 at 10:30 PM Aneesh Kumar K.V
wrote:
>
>
> Hi Dan,
>
> Apologies for the delay in response. I was waiting for feedback from
> the hardware team before responding to this email.
>
>
> Dan Williams writes:
>
> > On Tue, May 12, 2020 at 8:47 PM Aneesh Kumar K.V
> > wrote:
> >>
> >>
From: Anju T Sudhakar
Add support for perf extended register capability in powerpc.
The capability flag PERF_PMU_CAP_EXTENDED_REGS is used to indicate a
PMU that supports extended registers. The generic code defines the mask
of extended registers as 0 for unsupported architectures.
Patch add
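On the advertisement side, the flag is set in the PMU description; roughly
as below (a sketch only; the real series also defines the extended register
mask and the per-platform register copy-out):

#include <linux/perf_event.h>

static void power_pmu_caps_sketch(struct pmu *pmu)
{
	/* tell generic perf this PMU exposes registers beyond the
	 * standard powerpc perf_regs set */
	pmu->capabilities |= PERF_PMU_CAP_EXTENDED_REGS;
}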
We only support persistent memory on P8 and above. This is enforced by the
firmware and further checked on virtualized platforms during platform init.
Add WARN_ONCE to the pmem flush routines to catch wrong usage of them.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/cacheflush.h
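The guard itself can be as small as this sketch (assuming the P8 check is
done via the usual CPU feature test; the function name is ours):

#include <linux/bug.h>
#include <asm/cputable.h>
#include <asm/cpu_has_feature.h>

static inline void pmem_flush_guard_sketch(void)
{
	/* persistent memory is only supported on P8 (ARCH_207S) and above */
	WARN_ONCE(!cpu_has_feature(CPU_FTR_ARCH_207S),
		  "pmem flush on unsupported CPU");
}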
nvdimm expects the flush routines to just mark the cache clean. The barrier
that makes the stores globally visible is issued in nvdimm_flush().
Update the papr_scm driver to a simplified nvdimm_flush callback that does
only the required barrier.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/lib/pmem
of_pmem on POWER10 can now use phwsync instead of hwsync to ensure
all previous writes are architecturally visible for the platform
buffer flush.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/include/asm/cacheflush.h | 7 +++
1 file changed, 7 insertions(+)
diff --git a/arch/powerpc/incl
Architectures like ppc64 provide persistent memory specific barriers
that will ensure that all stores for which the modifications are
written to persistent storage by preceding dcbfps and dcbstps
instructions have updated persistent storage before any data
access or data transfer caused by subseque
Start using the dcbstps; phwsync; sequence for flushing persistent memory ranges.
The new instructions are implemented as variants of dcbf and hwsync, and on
P8 and P9 they will be executed as those instructions. We avoid using them on
older hardware. This helps to avoid difficult-to-debug bugs.
Signed
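The sequence amounts to a per-cacheline store-to-persistence followed by one
ordering barrier. An illustrative sketch, assuming an assembler that accepts
the ISA v3.1 mnemonics (the in-tree code encodes the instructions via
ppc-opcode.h macros instead of raw mnemonics):

#include <linux/cache.h>
#include <linux/types.h>

static inline void pmem_flush_range_sketch(void *start, size_t size)
{
	unsigned long addr = (unsigned long)start & ~(unsigned long)(L1_CACHE_BYTES - 1);
	unsigned long end = (unsigned long)start + size;

	for (; addr < end; addr += L1_CACHE_BYTES)
		asm volatile("dcbstps 0, %0" : : "r"(addr) : "memory");
	asm volatile("phwsync" : : : "memory");	/* order the stores above */
}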
POWER10 introduces two new variants of the dcbf instruction (dcbstps and dcbfps)
that can be used to write modified locations back to persistent storage.
Additionally, POWER10 also introduces phwsync and plwsync, which can be used
to establish the order of these writes to persistent storage.
This patch ex
The PAPR based virtualized persistent memory devices are only supported on
POWER9 and above. In the follow-up patch, the kernel will switch the persistent
memory cache flush functions to use a new `dcbf` variant instruction. The new
instructions, even though added in ISA 3.1, work even on P8 and P9 b
Implement a kasan_init_region() dedicated to book3s/32 that
allocates KASAN regions using BATs.
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/kasan.h | 1 +
arch/powerpc/mm/kasan/Makefile    | 1 +
arch/powerpc/mm/kasan/book3s_32.c | 57 +++
DEBUG_PAGEALLOC only manages RW data.
Text and RO data can still be mapped with BATs.
In order to map with BATs, also enforce data alignment. Set
by default to 256M, which is a good compromise for also keeping
enough BATs for KASAN and IMMR.
Signed-off-by: Christophe Leroy
---
arch/powerpc/Kcon
Implement a kasan_init_region() dedicated to 8xx that
allocates KASAN regions using huge pages.
Signed-off-by: Christophe Leroy
---
arch/powerpc/mm/kasan/8xx.c    | 74 ++
arch/powerpc/mm/kasan/Makefile | 1 +
2 files changed, 75 insertions(+)
create mode 100644
DEBUG_PAGEALLOC only manages RW data.
Text and RO data can still be mapped with hugepages and pinned TLB.
In order to map with hugepages, also enforce a 512kB minimum data
alignment. That's a trade-off between size and speed, taking into
account that DEBUG_PAGEALLOC is a debug option. Anyway the a
Pinned TLBs are 8M. Now that there is no strict boundary anymore
between text and RO data, it is possible to use an 8M pinned executable
TLB that covers both text and RO data.
When PIN_TLB_DATA or PIN_TLB_TEXT is selected, enforce 8M RW data
alignment and allow STRICT_KERNEL_RWX.
Signed-off-by: Chris
Map linear memory space with 512k and 8M pages whenever
possible.
Three mappings are performed:
- One for kernel text
- One for RO data
- One for the rest
Separating the mappings is done to be able to update the
protection later when using STRICT_KERNEL_RWX.
The ITLB miss handler now needs to als
Map the IMMR area with a single 512k huge page.
Signed-off-by: Christophe Leroy
---
arch/powerpc/mm/nohash/8xx.c | 8 ++--
1 file changed, 2 insertions(+), 6 deletions(-)
diff --git a/arch/powerpc/mm/nohash/8xx.c b/arch/powerpc/mm/nohash/8xx.c
index 72fb75f2a5f1..f8fff1fa72e3 100644
--- a/a
Add a function to early map kernel memory using huge pages.
For 512k pages, just use standard page table and map in using 512k
pages.
For 8M pages, create a hugepd table and populate the two PGD
entries with it.
This function can only be used to create page tables at startup. Once
the regular SL
Now that linear and IMMR dedicated TLB handling is gone, kernel
boundary address comparison is similar in the ITLB miss handler and
in the DTLB miss handler.
Create a macro named compare_to_kernel_boundary.
When TASK_SIZE is strictly below 0x80000000 and PAGE_OFFSET is
above 0x80000000, it is enough to c