Ira Weiny writes:
> On Wed, May 20, 2020 at 12:30:57AM +0530, Vaibhav Jain wrote:
>> Introduce support for Papr nvDimm Specific Methods (PDSM) in papr_scm
>> modules and add the command family to the white list of NVDIMM command
>> sets. Also advertise support for ND_CMD_CALL for the dimm
>> comma
On Wed, May 20, 2020 at 07:43:08PM +0200, Laurent Dufour wrote:
> The commit 8c47b6ff29e3 ("KVM: PPC: Book3S HV: Check caller of H_SVM_*
> Hcalls") added checks of secure bit of SRR1 to filter out the Hcall
> reserved to the Ultravisor.
>
> However, the Hcall H_SVM_INIT_ABORT is made by the Ultrav
Hi all,
On Tue, 19 May 2020 17:23:16 +1000 Stephen Rothwell
wrote:
>
> Today's linux-next merge of the rcu tree got a conflict in:
>
> arch/powerpc/kernel/traps.c
>
> between commit:
>
> 116ac378bb3f ("powerpc/64s: machine check interrupt update NMI accounting")
>
> from the powerpc tree
On Wed, May 20, 2020 at 06:52:21PM -0500, Li Yang wrote:
> On Mon, May 18, 2020 at 5:57 PM Kees Cook wrote:
> > Hm, looking at this code, I see a few other things that need to be
> > fixed:
> >
> > 1) drivers/tty/serial/ucc_uart.c does not do a be32_to_cpu() conversion
> >on the length test (u
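For illustration, a hedged sketch of the endianness issue being pointed out above (the property name and helper are hypothetical, not the actual ucc_uart.c code): device-tree cell values are stored big-endian, so they need be32_to_cpu() before any length comparison.

#include <linux/errno.h>
#include <linux/of.h>

/* Hypothetical helper: read a 32-bit cell from the device tree and
 * convert it to CPU endianness before comparing it against a length. */
static int check_dt_length(struct device_node *np, u32 expected)
{
	int len;
	const __be32 *prop = of_get_property(np, "length", &len);

	if (!prop || len < (int)sizeof(*prop))
		return -EINVAL;

	/* Comparing *prop directly only works on big-endian hosts. */
	return be32_to_cpu(*prop) == expected ? 0 : -EINVAL;
}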
Excerpts from Segher Boessenkool's message of May 18, 2020 10:19 pm:
> Hi!
>
> On Mon, May 18, 2020 at 04:35:22PM +1000, Michael Ellerman wrote:
>> Nicholas Piggin writes:
>> > Provide an option to build big-endian kernels using the ELF V2 ABI. This
>> > works
>> > on GCC and clang (since about
A PVR value of 0x0F000006 means we are arch v3.1 compliant (i.e. POWER10).
This is used by phyp and kvm when booting as a pseries guest to detect
the presence of new P10 features and to enable the appropriate hwcap and
facility bits.
Signed-off-by: Alistair Popple
Signed-off-by: Cédric Le Goater
---
Matrix multiply assist (MMA) is a new feature added to ISAv3.1 and
POWER10. Support on powernv can be selected via a firmware CPU device
tree feature which enables it via a PCR bit.
Signed-off-by: Alistair Popple
---
arch/powerpc/include/asm/reg.h    | 3 ++-
arch/powerpc/kernel/dt_cpu_ftrs.c |
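For illustration, a hedged sketch of what such a dt_cpu_ftrs enable hook tends to look like (the PCR_MMA_DIS name and the hook body are assumptions based on the commit text, not the actual patch):

/* Hypothetical feature hook: when firmware advertises MMA, clear the
 * (assumed) MMA disable bit in the PCR so the facility becomes usable. */
static int __init feat_enable_mma(struct dt_cpu_feature *f)
{
	u64 pcr = mfspr(SPRN_PCR);

	mtspr(SPRN_PCR, pcr & ~PCR_MMA_DIS);

	return 1;	/* dt_cpu_ftrs hooks return 1 on success */
}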
Prefix instructions have their own FSCR bit which needs to be enabled
via a CPU feature. The kernel will save the FSCR for problem state but
it needs to be enabled initially.
Signed-off-by: Alistair Popple
---
arch/powerpc/kernel/dt_cpu_ftrs.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/
Setting the FSCR bit directly in the SPR only sets it during the initial
boot and early init of the kernel but not for the init process or any
subsequent kthreads. This is because the thread_struct for those is
copied from the current thread_struct setup at boot which doesn't
reflect any changes ma
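A minimal sketch of the idea (assumptions, not the literal patch): besides writing the SPR, the boot thread's saved FSCR has to be updated so init and kthreads copied from it inherit the bit.

/* Hypothetical helper: enable an FSCR facility in the register and in
 * the current (boot) thread's saved state, which later threads copy. */
static void __init enable_fscr_facility(u64 fscr_bit)
{
	mtspr(SPRN_FSCR, mfspr(SPRN_FSCR) | fscr_bit);
	current->thread.fscr |= fscr_bit;
}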
On powernv hardware support for ISAv3.1 is advertised via a cpu feature
bit in the device tree. This patch enables the associated HWCAP bit if
the device tree indicates ISAv3.1 is available.
Signed-off-by: Alistair Popple
---
arch/powerpc/kernel/dt_cpu_ftrs.c | 6 ++
1 file changed, 6 insert
Newer ISA versions are enabled by clearing all bits in the PCR
associated with previous versions of the ISA. Enable ISA v3.1 support
by updating the PCR mask to include ISA v3.0. This ensures all PCR
bits corresponding to earlier architecture versions get cleared
thereby enabling ISA v3.1 if suppor
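Conceptually, with placeholder bit positions (the real PCR_ARCH_* definitions live in asm/reg.h), the change amounts to widening the mask of compat-disable bits that get cleared:

/* Illustrative only: per-version "disable" bits in the PCR. Running in
 * ISA v3.1 mode means every earlier version's bit must be in the mask
 * that gets cleared. Bit numbers below are placeholders. */
#define PCR_COMPAT_2_07		(1UL << 0)
#define PCR_COMPAT_3_00		(1UL << 1)
#define PCR_COMPAT_MASK		(PCR_COMPAT_2_07 | PCR_COMPAT_3_00)

static inline unsigned long pcr_enable_latest_isa(unsigned long pcr)
{
	return pcr & ~PCR_COMPAT_MASK;
}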
POWER10 introduces two new architectural features - ISAv3.1 and matrix
multiply assist (MMA) instructions. Userspace detects the presence
of these features via two HWCAP bits introduced in this patch. These
bits have been agreed to by the compiler and binutils team.
According to ISAv3.1 MMA is an
This series brings together several previously posted patches required for
POWER10 support and introduces a new patch enabling POWER10 architected
mode to enable booting as a POWER10 pseries guest.
It includes support for enabling facilities related to MMA and prefix
instructions.
Changes from v2
On Wed, May 20, 2020 at 11:27 AM Daniel Jordan
wrote:
>
> Deferred struct page init is a significant bottleneck in kernel boot.
> Optimizing it maximizes availability for large-memory systems and allows
> spinning up short-lived VMs as needed without having to leave them
> running. It also benefi
On Wed, May 20, 2020 at 02:50:56PM +0200, Peter Zijlstra wrote:
> On Tue, May 19, 2020 at 11:58:17PM -0400, Qian Cai wrote:
> > Just a heads up. Repeatedly compiling kernels for a while would trigger
> > endless soft-lockups since next-20200519 on both x86_64 and powerpc.
> > .config are in,
>
> Co
On Wed, May 20, 2020 at 06:52:21PM -0500, Li Yang wrote:
> On Mon, May 18, 2020 at 5:57 PM Kees Cook wrote:
> >
> > On Mon, May 18, 2020 at 05:19:04PM -0500, Gustavo A. R. Silva wrote:
> > > The current codebase makes use of one-element arrays in the following
> > > form:
> > >
> > > struct someth
On Mon, May 18, 2020 at 5:57 PM Kees Cook wrote:
>
> On Mon, May 18, 2020 at 05:19:04PM -0500, Gustavo A. R. Silva wrote:
> > The current codebase makes use of one-element arrays in the following
> > form:
> >
> > struct something {
> > int length;
> > u8 data[1];
> > };
> >
> > struct som
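For reference, a standalone sketch of the conversion being discussed (a generic example, not code from the tree): the one-element array becomes a C99 flexible array member, which occupies no space in sizeof() and makes the allocation arithmetic explicit.

#include <stdlib.h>
#include <string.h>

typedef unsigned char u8;

/* Old pattern: sizeof(struct something_old) already counts one byte of
 * data, so allocation sizes are easy to get subtly wrong. */
struct something_old {
	int length;
	u8 data[1];
};

/* Preferred pattern: flexible array member, no placeholder element. */
struct something {
	int length;
	u8 data[];
};

struct something *something_alloc(int length)
{
	struct something *s = malloc(sizeof(*s) + length);

	if (s) {
		s->length = length;
		memset(s->data, 0, length);
	}
	return s;
}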
On Sat, 14 Mar 2020 11:30:36 +0800, Xiaowei Bao wrote:
> Add PCIe EP mode support for ls1088a and ls2088a, there are some
> difference between LS1 and LS2 platform, so refactor the code of
> the EP driver.
>
> Signed-off-by: Xiaowei Bao
> ---
> v2:
> - This is a new patch for supporting the ls10
On Sat, 14 Mar 2020 11:30:34 +0800, Xiaowei Bao wrote:
> The different PCIe controllers in one board may have different MSI or
> MSI-X capabilities, so change the way of getting the MSI capability to
> make it more flexible.
>
> Signed-off-by: Xiaowei Bao
> ---
> v2:
> - Remove the repeated assi
On Sat, Mar 14, 2020 at 11:30:31AM +0800, Xiaowei Bao wrote:
> Each PF of EP device should have it's own MSI or MSIX capability
s/it's/its/
> struct, so create a dw_pcie_ep_func struct, move the msi_cap
> and msix_cap from dw_pcie_ep into this struct, and manage the PFs
> with a list.
>
> Sig
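A rough sketch of the per-PF bookkeeping the commit message describes (field layout inferred from the text, not copied from the driver):

#include <linux/list.h>
#include <linux/types.h>

/* One entry per physical function, kept on a list in the endpoint
 * controller instead of the single msi_cap/msix_cap fields that used
 * to live in struct dw_pcie_ep. */
struct dw_pcie_ep_func {
	struct list_head	list;
	u8			func_no;
	u8			msi_cap;	/* MSI capability offset */
	u8			msix_cap;	/* MSI-X capability offset */
};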
On Sat, 14 Mar 2020 11:30:28 +0800, Xiaowei Bao wrote:
> Add multiple-PF support for DWC. Since different PFs have different
> config spaces, we use the func_conf_select callback function to access
> each PF's config space; different chip vendors need to
> implement this callback function
On Wed, 20 May 2020 18:51:10 +0200
Laurent Dufour wrote:
> The commit 8c47b6ff29e3 ("KVM: PPC: Book3S HV: Check caller of H_SVM_*
> Hcalls") added checks of secure bit of SRR1 to filter out the Hcall
> reserved to the Ultravisor.
>
> However, the Hcall H_SVM_INIT_ABORT is made by the Ultravisor
Thanks for reviewing this patch Ira. My responses below:
Ira Weiny writes:
> On Wed, May 20, 2020 at 12:30:57AM +0530, Vaibhav Jain wrote:
>> Introduce support for Papr nvDimm Specific Methods (PDSM) in papr_scm
>> modules and add the command family to the white list of NVDIMM command
>> sets.
Dan Williams writes:
> On Tue, May 19, 2020 at 6:53 AM Aneesh Kumar K.V
> wrote:
>>
>> Dan Williams writes:
>>
>> > On Mon, May 18, 2020 at 10:30 PM Aneesh Kumar K.V
>> > wrote:
>>
>> ...
>>
>> >> Applications using new instructions will behave as expected when running
>> >> on P8 and P9. Only
On Wed, 20 May 2020 19:43:08 +0200
Laurent Dufour wrote:
> The commit 8c47b6ff29e3 ("KVM: PPC: Book3S HV: Check caller of H_SVM_*
> Hcalls") added checks of secure bit of SRR1 to filter out the Hcall
> reserved to the Ultravisor.
>
> However, the Hcall H_SVM_INIT_ABORT is made by the Ultravisor
Deferred struct page init is a bottleneck in kernel boot--the biggest
for us and probably others. Optimizing it maximizes availability for
large-memory systems and allows spinning up short-lived VMs as needed
without having to leave them running. It also benefits bare metal
machines hosting VMs t
padata_driver_exit() is unnecessary because padata isn't built as a
module and doesn't exit.
padata's init routine will soon allocate memory, so getting rid of the
exit function now avoids pointless code to free it.
Signed-off-by: Daniel Jordan
---
kernel/padata.c | 6 --
1 file changed, 6
Add Documentation for multithreaded jobs.
Signed-off-by: Daniel Jordan
---
Documentation/core-api/padata.rst | 41 +++
1 file changed, 31 insertions(+), 10 deletions(-)
diff --git a/Documentation/core-api/padata.rst
b/Documentation/core-api/padata.rst
index 9a24c111
padata allocates per-CPU, per-instance work structs for parallel jobs.
A do_parallel call assigns a job to a sequence number and hashes the
number to a CPU, where the job will eventually run using the
corresponding work.
This approach fit with how padata used to bind a job to each CPU
round-robin,
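As a toy illustration of the scheme described above (names invented, not padata's API): each job gets a sequence number, and that number is hashed onto a CPU whose per-CPU work struct will run the job.

#include <stdint.h>

/* Hypothetical: map a job's sequence number onto one of nr_cpus CPUs,
 * mirroring the round-robin style binding described in the text. */
static inline unsigned int seq_to_cpu(uint64_t seqnr, unsigned int nr_cpus)
{
	return (unsigned int)(seqnr % nr_cpus);
}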
padata will soon initialize the system's struct pages in parallel, so it
needs to be ready by page_alloc_init_late().
The error return from padata_driver_init() triggers an initcall warning,
so add a warning to padata_init() to avoid silent failure.
Signed-off-by: Daniel Jordan
---
include/linu
Using padata during deferred init has only been tested on x86, so for
now limit it to this architecture.
If another arch wants this, it can find the max thread limit that's best
for it and override deferred_page_init_max_threads().
Signed-off-by: Daniel Jordan
---
arch/x86/mm/init_64.c | 12
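For illustration, a hedged sketch of what an architecture override of this hook could look like (the body is an assumption based on the description, not the actual x86 change):

/* Hypothetical override: allow one init thread per CPU on the node
 * rather than the conservative generic default. */
unsigned int __init deferred_page_init_max_threads(const struct cpumask *node_cpumask)
{
	return max_t(unsigned int, cpumask_weight(node_cpumask), 1);
}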
Deferred struct page init is a significant bottleneck in kernel boot.
Optimizing it maximizes availability for large-memory systems and allows
spinning up short-lived VMs as needed without having to leave them
running. It also benefits bare metal machines hosting VMs that are
sensitive to downtime
Sometimes the kernel doesn't take full advantage of system memory
bandwidth, leading to a single CPU spending excessive time in
initialization paths where the data scales with memory size.
Multithreading naturally addresses this problem.
Extend padata, a framework that handles many parallel yet s
The commit 8c47b6ff29e3 ("KVM: PPC: Book3S HV: Check caller of H_SVM_*
Hcalls") added checks of secure bit of SRR1 to filter out the Hcall
reserved to the Ultravisor.
However, the Hcall H_SVM_INIT_ABORT is made by the Ultravisor passing the
context of the VM calling UV_ESM. This allows the Hypervi
On 20/05/2020 at 19:32, Greg Kurz wrote:
On Wed, 20 May 2020 18:51:10 +0200
Laurent Dufour wrote:
The commit 8c47b6ff29e3 ("KVM: PPC: Book3S HV: Check caller of H_SVM_*
Hcalls") added checks of secure bit of SRR1 to filter out the Hcall
reserved to the Ultravisor.
However, the Hcall H_SVM_I
Thanks for reviewing this patch Ira. My responses below:
Ira Weiny writes:
> On Wed, May 20, 2020 at 12:30:56AM +0530, Vaibhav Jain wrote:
>> Implement support for fetching nvdimm health information via
>> H_SCM_HEALTH hcall as documented in Ref[1]. The hcall returns a pair
>> of 64-bit b
Hi Michael,
On Wed, May 20, 2020 at 04:07:00PM +1000, Michael Ellerman wrote:
> [ + Dmitry & linux-input ]
>
> Nathan Chancellor writes:
> > This causes a build error with CONFIG_WALNUT because kb_cs and kb_data
> > were removed in commit 917f0af9e5a9 ("powerpc: Remove arch/ppc and
> > include/a
s/seq_buf: Export seq_buf_printf() to external modules/
seq_buf: export seq_buf_printf/
The commit 8c47b6ff29e3 ("KVM: PPC: Book3S HV: Check caller of H_SVM_*
Hcalls") added checks of secure bit of SRR1 to filter out the Hcall
reserved to the Ultravisor.
However, the Hcall H_SVM_INIT_ABORT is made by the Ultravisor passing the
context of the VM calling UV_ESM. This allows the Hypervi
ugeth_quiesce/activate are used to halt the controller when there is a
link change that requires the MAC to be reconfigured.
The previous implementation called netif_device_detach(). This however
causes the initial activation of the netdevice to fail precisely because
it's detached. For details, see [
On Wed, May 20, 2020 at 12:30:57AM +0530, Vaibhav Jain wrote:
> Introduce support for Papr nvDimm Specific Methods (PDSM) in papr_scm
> modules and add the command family to the white list of NVDIMM command
> sets. Also advertise support for ND_CMD_CALL for the dimm
> command mask and implement nec
On Wed, May 20, 2020 at 12:30:56AM +0530, Vaibhav Jain wrote:
> Implement support for fetching nvdimm health information via
> H_SCM_HEALTH hcall as documented in Ref[1]. The hcall returns a pair
> of 64-bit big-endian integers, bitwise-and of which is then stored in
> 'struct papr_scm_priv' and su
On 5/20/20 7:23 PM, Christophe Leroy wrote:
On 20/05/2020 at 15:43, Aneesh Kumar K.V wrote:
Christophe Leroy writes:
On 18/05/2020 at 17:19, Rui Salvaterra wrote:
Hi again, Christophe,
On Mon, 18 May 2020 at 15:03, Christophe Leroy
wrote:
Can you try reverting 697ece78f8f749aeea40f
On 20/05/2020 at 15:43, Aneesh Kumar K.V wrote:
Christophe Leroy writes:
On 18/05/2020 at 17:19, Rui Salvaterra wrote:
Hi again, Christophe,
On Mon, 18 May 2020 at 15:03, Christophe Leroy
wrote:
Can you try reverting 697ece78f8f749aeea40f2711389901f0974017a ? It may
have broken swa
On 2020-05-20 02:38, Jiri Slaby wrote:
On 15. 05. 20, 1:22, rana...@codeaurora.org wrote:
On 2020-05-13 00:04, Greg KH wrote:
On Tue, May 12, 2020 at 02:39:50PM -0700, rana...@codeaurora.org
wrote:
On 2020-05-12 01:25, Greg KH wrote:
> On Tue, May 12, 2020 at 09:22:15AM +0200, Jiri Slaby wrote
Christophe Leroy writes:
> On 18/05/2020 at 17:19, Rui Salvaterra wrote:
>> Hi again, Christophe,
>>
>> On Mon, 18 May 2020 at 15:03, Christophe Leroy
>> wrote:
>>>
>>> Can you try reverting 697ece78f8f749aeea40f2711389901f0974017a ? It may
>>> have broken swap.
>>
>> Yeah, that was a good c
Several strange crashes have been eventually traced back to
STRICT_KERNEL_RWX and its interaction with code patching.
Various paths in our ftrace, kprobes and other patching code need to
be hardened against patching failures, otherwise we can end up running
with partially/incorrectly patched ftrac
On Tue, May 19, 2020 at 11:58:17PM -0400, Qian Cai wrote:
> Just a heads up. Repeatedly compiling kernels for a while would trigger
> endless soft-lockups since next-20200519 on both x86_64 and powerpc.
> .config are in,
Could be 90b5363acd47 ("sched: Clean up scheduler_ipi()"), although I've
not s
On 2020-05-20 01:59, Jiri Slaby wrote:
On 20. 05. 20, 8:47, Raghavendra Rao Ananta wrote:
Potentially, hvc_open() can be called in parallel when two tasks call
open() on /dev/hvcX. In such a scenario, if the
hp->ops->notifier_add()
callback in the function fails, where it sets the tty->driver
On Wed, May 20, 2020 at 07:22:19PM +0800, Shengjiu Wang wrote:
> I see some drivers also request the DMA channel in open() or hw_params().
> How can they avoid the deferred probe issue?
> for example:
> sound/arm/pxa2xx-pcm-lib.c
> sound/soc/sprd/sprd-pcm-dma.c
Other drivers having problems means those d
On 20/05/2020 at 14:21, Jordan Niethe wrote:
On Wed, May 20, 2020 at 9:44 PM Michael Ellerman wrote:
In a few places we want to calculate the address of the next
instruction. Previously that was simple, we just added 4 bytes, or if
using a u32 * we incremented that pointer by 1.
But pref
On Wed, May 20, 2020 at 9:44 PM Michael Ellerman wrote:
>
> In a few places we want to calculate the address of the next
> instruction. Previously that was simple, we just added 4 bytes, or if
> using a u32 * we incremented that pointer by 1.
>
> But prefixed instructions make it more complicated,
This adds the CPU or thread number to printk messages. This helps a
lot when deciphering concurrent oopses that have been interleaved.
Example output, of PID1 (T1) triggering a warning:
[1.581678][T1] WARNING: CPU: 0 PID: 1 at crypto/rsa-pkcs1pad.c:539
pkcs1pad_verify+0x38/0x140
[
In a few places we want to calculate the address of the next
instruction. Previously that was simple, we just added 4 bytes, or if
using a u32 * we incremented that pointer by 1.
But prefixed instructions make it more complicated, we need to advance
by either 4 or 8 bytes depending on the actual i
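A standalone sketch of the calculation being described (illustrative; the real kernel wraps this in its own instruction type rather than raw pointers). A prefixed instruction is recognised by its first word, so the step is either 4 or 8 bytes:

#include <stdbool.h>
#include <stdint.h>

#define OP_PREFIX 1	/* prefixes carry major opcode 1, per the series */

static inline unsigned int major_opcode(uint32_t word)
{
	return word >> 26;	/* top 6 bits of the instruction word */
}

static inline bool is_prefix_word(uint32_t word)
{
	return major_opcode(word) == OP_PREFIX;
}

/* Advance past one instruction: 8 bytes if the current word is a
 * prefix, 4 bytes for an ordinary word instruction. */
static inline uintptr_t next_instruction(uintptr_t addr)
{
	uint32_t word = *(const uint32_t *)addr;

	return addr + (is_prefix_word(word) ? 8 : 4);
}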
Hi
On Wed, May 20, 2020 at 5:42 PM Lucas Stach wrote:
>
> On Wednesday, 20.05.2020, 16:20 +0800, Shengjiu Wang wrote:
> > Hi
> >
> > On Tue, May 19, 2020 at 6:04 PM Lucas Stach wrote:
> > > On Tuesday, 19.05.2020, 17:41 +0800, Shengjiu Wang wrote:
> > > > There are two requirements tha
Show the address of the task's regs in the process listing in xmon. The
regs should always be on the stack page that we also print the address
of, but it's still helpful not to have to find them by hand.
Signed-off-by: Michael Ellerman
---
arch/powerpc/xmon/xmon.c | 6 +++---
1 file changed, 3 in
On Sat, 2 May 2020 16:26:42 +0200, Wolfram Sang wrote:
> My 'pengutronix' address has been defunct for years. Merge the entries and use
> the proper contact address.
Applied to powerpc/next.
[1/1] powerpc/5200: update contact email
https://git.kernel.org/powerpc/c/ad0f522df1b2f4fe5d4ae6418e1ea216
On Tue, 28 Apr 2020 13:45:04 +1000, Sam Bobroff wrote:
> Here are some fixes and cleanups that have come from other work but that I
> think stand on their own.
>
> Only one patch ("Release EEH device state synchronously", suggested by Oliver
> O'Halloran) is a significant change: it moves the clea
On Thu, 14 May 2020 16:47:25 +0530, Ravi Bangoria wrote:
> So far, powerpc Book3S code has been written with an assumption of
> only one watchpoint. But Power10[1] is introducing a second watchpoint
> register (DAWR). Even though this patchset does not enable 2nd DAWR,
> it makes the infrastructure r
On Fri, 8 May 2020 14:33:52 +1000, Nicholas Piggin wrote:
> Since v3, I fixed a compile error and returned the generic machine check
> exception handler to be NMI on 32 and 64e, as caught by Christophe's
> review.
>
> Also added the last patch, just found it by looking at the code, a
> review for
On Thu, 2 Apr 2020 23:49:29 +1100, Michael Ellerman wrote:
> Currently we don't report anything useful in /proc/<pid>/status:
>
> $ grep Speculation_Store_Bypass /proc/self/status
> Speculation_Store_Bypass: unknown
>
> Our mitigation is currently always a barrier instruction, which
> doesn'
On Thu, 7 May 2020 22:13:29 +1000, Michael Ellerman wrote:
>
Applied to powerpc/next.
[1/4] powerpc/64s: Always has full regs, so remove remnant checks
https://git.kernel.org/powerpc/c/feb9df3462e688d073848d85c8bb132fe8fd9ae5
[2/4] powerpc: Use set_trap() and avoid open-coding trap maskin
On Tue, 28 Apr 2020 22:31:30 +1000, Michael Ellerman wrote:
> Aneesh increased the size of struct pt_regs by 16 bytes and started
> seeing this WARN_ON:
>
> smp: Bringing up secondary CPUs ...
> [ cut here ]
> WARNING: CPU: 0 PID: 0 at arch/powerpc/kernel/process.c:45
On Sun, 26 Apr 2020 21:44:10 +1000, Michael Ellerman wrote:
> This is based on the count_instructions test.
>
> However this one also counts the number of failed stcx's, and in
> conjunction with knowing the size of the stcx loop, can calculate the
> total number of instructions executed even in t
On Thu, 23 Apr 2020 16:00:38 +1000, Michael Ellerman wrote:
> create_cpu_loop() calls smu_sat_get_sdb_partition() which does
> kmalloc() and returns the allocated buffer. In fact it's called twice,
> and neither buffer is freed.
>
> This results in a memory leak as reported by Erhard:
> unrefere
On Tue, 28 Apr 2020 22:31:52 +1000, Michael Ellerman wrote:
> There's no need to cast in task_pt_regs() as tsk->thread.regs should
> already be a struct pt_regs. If someone's using task_pt_regs() on
> something that's not a task but happens to have a thread.regs then
> we'll deal with them later.
On Wed, 6 May 2020 13:40:20 +1000, Jordan Niethe wrote:
> A future revision of the ISA will introduce prefixed instructions. A
> prefixed instruction is composed of a 4-byte prefix followed by a
> 4-byte suffix.
>
> All prefixes have the major opcode 1. A prefix will never be a valid
> word instru
On Thu, 7 May 2020 13:57:49 -0500, Gustavo A. R. Silva wrote:
> The current codebase makes use of the zero-length array language
> extension to the C90 standard, but the preferred mechanism to declare
> variable-length types such as these ones is a flexible array member[1][2],
> introduced in C99:
On Thu, 7 May 2020 13:57:55 -0500, Gustavo A. R. Silva wrote:
> The current codebase makes use of the zero-length array language
> extension to the C90 standard, but the preferred mechanism to declare
> variable-length types such as these ones is a flexible array member[1][2],
> introduced in C99:
On Sat, 09 May 2020 18:58:31 +, Geoff Levand wrote:
> This is a combined V2 of the two patch sets I sent out on March 27th,
> 'PS3 patches for v5.7' and 'powerpc: Minor updates to improve build
> debugging'.
>
> I've dropped these two patches that were in my 'PS3 patches for v5.7' set:
>
>
On Wed, 6 May 2020 06:51:59 + (UTC), Christophe Leroy wrote:
> Since 01 May 2020, our email addresses have changed to @csgroup.eu
>
> Update MAINTAINERS accordingly.
Applied to powerpc/next.
[1/1] powerpc/8xx: Update email address in MAINTAINERS
https://git.kernel.org/powerpc/c/679d74ab
On Wed, 8 Apr 2020 15:58:49 + (UTC), Christophe Leroy wrote:
> When CONFIG_KASAN is selected, the stack usage is increased.
>
> In the same way as x86 and arm64 architectures, increase
> THREAD_SHIFT when CONFIG_KASAN is selected.
Applied to powerpc/next.
[1/1] powerpc/kasan: Fix stack overf
On Mon, 20 Apr 2020 18:36:34 + (UTC), Christophe Leroy wrote:
> _ALIGN_UP() is specific to powerpc
> ALIGN() is generic and does the same
>
> Replace _ALIGN_UP() by ALIGN()
Applied to powerpc/next.
[1/5] drivers/powerpc: Replace _ALIGN_UP() by ALIGN()
https://git.kernel.org/powerpc/c/7
This reverts commit 697ece78f8f749aeea40f2711389901f0974017a.
The implementation of SWAP on powerpc requires page protection
bits to not be one of the least significant PTE bits.
Until the SWAP implementation is changed and this requirement no longer holds,
we have to keep at least _PAGE_RW outside of the
Thanks for reviewing this patch Aneesh.
"Aneesh Kumar K.V" writes:
> Vaibhav Jain writes:
>
>
>
> +
>> +/* Papr-scm-header + payload expected with ND_CMD_CALL ioctl from libnvdimm
>> */
>> +struct nd_pdsm_cmd_pkg {
>> +struct nd_cmd_pkg hdr; /* Package header containing sub-cmd */
>
From: Anju T Sudhakar
Add extended regs to sample_reg_mask on the tools side for use
with the `-I?` option. The perf tools side uses the extended mask to display
the platform-supported register names (with the -I? option) to the user
and also sends this mask to the kernel to capture the extended registers
in each s
From: Anju T Sudhakar
Add support for the perf extended register capability on powerpc.
The capability flag PERF_PMU_CAP_EXTENDED_REGS is used to indicate a
PMU which supports extended registers. The generic code defines the mask
of extended registers as 0 for unsupported architectures.
Patch add
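As a minimal sketch of the two sides of this (generic perf pattern; only the capability flag name comes from the series, and the PERF_REG_EXTENDED_MASK fallback name is an assumption):

#include <linux/perf_event.h>

#ifndef PERF_REG_EXTENDED_MASK
/* Assumed generic fallback: no extended registers on architectures
 * that do not define their own mask. */
#define PERF_REG_EXTENDED_MASK	0
#endif

/* Hypothetical PMU opting in to extended registers via the new flag. */
static struct pmu example_pmu = {
	.capabilities	= PERF_PMU_CAP_EXTENDED_REGS,
};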
Patch set to add support for the perf extended register capability on
powerpc. The capability flag PERF_PMU_CAP_EXTENDED_REGS is used to
indicate a PMU which supports extended registers. The generic code
defines the mask of extended registers as 0 for unsupported architectures.
Patch 1/2 defines th
On Wednesday, 20.05.2020, 16:20 +0800, Shengjiu Wang wrote:
> Hi
>
> On Tue, May 19, 2020 at 6:04 PM Lucas Stach wrote:
> > On Tuesday, 19.05.2020, 17:41 +0800, Shengjiu Wang wrote:
> > > There are two requirements that we need to move the request
> > > of dma channel from probe to open
On 15. 05. 20, 1:22, rana...@codeaurora.org wrote:
> On 2020-05-13 00:04, Greg KH wrote:
>> On Tue, May 12, 2020 at 02:39:50PM -0700, rana...@codeaurora.org wrote:
>>> On 2020-05-12 01:25, Greg KH wrote:
>>> > On Tue, May 12, 2020 at 09:22:15AM +0200, Jiri Slaby wrote:
>>> > > commit bdb498c2004061
From: Biwen Li
This removes the interrupts property to drop the following warning:
- $ hwclock.util-linux
hwclock.util-linux: select() to /dev/rtc0
to wait for clock tick timed out
My case:
- The RTC ds1374's INT pin is connected to VCC on T4240RDB,
so the RTC cannot inform the CPU
From: Biwen Li
This removes the interrupts property to drop the following warning:
- $ hwclock.util-linux
hwclock.util-linux: select() to /dev/rtc0
to wait for clock tick timed out
My case:
- The RTC ds1339's INT pin isn't connected to the CPU's INT pin on T1024RDB,
so the RTC cannot
Fix the following warning:
drivers/scsi/ibmvscsi/ibmvscsi.c:2387:12: warning: symbol
'ibmvscsi_module_init' was not declared. Should it be static?
drivers/scsi/ibmvscsi/ibmvscsi.c:2409:13: warning: symbol
'ibmvscsi_module_exit' was not declared. Should it be static?
Signed-off-by: Chen Tao
---
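The usual shape of this kind of sparse fix, sketched generically (stub bodies; the point is the static linkage on functions that are only reached via module_init()/module_exit()):

#include <linux/init.h>
#include <linux/module.h>

/* Marking these static silences the "was not declared. Should it be
 * static?" warning, since nothing outside this file calls them. */
static int __init example_module_init(void)
{
	return 0;	/* real driver registration elided */
}

static void __exit example_module_exit(void)
{
	/* real driver unregistration elided */
}

module_init(example_module_init);
module_exit(example_module_exit);
MODULE_LICENSE("GPL");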
On 20. 05. 20, 8:47, Raghavendra Rao Ananta wrote:
> Potentially, hvc_open() can be called in parallel when two tasks call
> open() on /dev/hvcX. In such a scenario, if the hp->ops->notifier_add()
> callback in the function fails, where it sets the tty->driver_data to
> NULL, the parallel hvc_open
From: Peter Zijlstra
commit 0758cd8304942292e95a0f750c374533db378b32 upstream
Aneesh reported that:
  tlb_flush_mmu()
    tlb_flush_mmu_tlbonly()
      tlb_flush()               <-- #1
    tlb_flush_mmu_free()
      tlb_table_flush()
        tlb_table_inval
From: Peter Zijlstra
commit 0ed1325967ab5f7a4549a2641c6ebe115f76e228 upstream
Architectures for which we have hardware walkers of Linux page table
should flush TLB on mmu gather batch allocation failures and batch flush.
Some architectures like POWER support multiple translation modes (hash
and
From: "Aneesh Kumar K.V"
commit 12e4d53f3f04e81f9e83d6fc10edc7314ab9f6b9 upstream
Patch series "Fixup page directory freeing", v4.
This is a repost of patch series from Peter with the arch specific changes
except ppc64 dropped. ppc64 changes are added here because we are redoing
the patch seri
From: Peter Zijlstra
commit 96bc9567cbe112e9320250f01b9c060c882e8619 upstream
Make issuing a TLB invalidate for page-table pages the normal case.
The reason is twofold:
- too many invalidates is safer than too few,
- most architectures use the linux page-tables natively
and would thus req
From: Will Deacon
commit a6d60245d6d9b1caf66b0d94419988c4836980af upstream
It is common for architectures with hugepage support to require only a
single TLB invalidation operation per hugepage during unmap(), rather than
iterating through the mapping at a PAGE_SIZE increment. Currently,
however,
From: Peter Zijlstra
commit 22a61c3c4f1379ef8b0ce0d5cb78baf3178950e2 upstream
Some architectures require different TLB invalidation instructions
depending on whether it is only the last-level of page table being
changed, or whether there are also changes to the intermediate
(directory) entries h
The TLB flush optimisation (a46cc7a90f: powerpc/mm/radix: Improve TLB/PWC
flushes) may result in random memory corruption. Any concurrent page-table walk
could end up with a Use-after-Free. Even on UP this might give issues, since
mmu_gather is preemptible these days. An interrupt or preempted task
Hi
On Tue, May 19, 2020 at 6:04 PM Lucas Stach wrote:
>
> On Tuesday, 19.05.2020, 17:41 +0800, Shengjiu Wang wrote:
> > There are two requirements that we need to move the request
> > of dma channel from probe to open.
>
> How do you handle -EPROBE_DEFER return code from the channel request
Vaibhav Jain writes:
+
> +/* Papr-scm-header + payload expected with ND_CMD_CALL ioctl from libnvdimm
> */
> +struct nd_pdsm_cmd_pkg {
> + struct nd_cmd_pkg hdr; /* Package header containing sub-cmd */
> + __s32 cmd_status; /* Out: Sub-cmd status returned back */
> + __