Commit 40df759e2b9e ("kbuild: Fix build with binutils <= 2.19")
introduced ar-option and KBUILD_ARFLAGS to cope with old binutils.
According to Documentation/process/changes.rst, the current minimum
supported version of binutils is 2.21, so you can assume the 'D' option
is always supported. Not onl
Hello Shengjiu,
One issue with the error-out path, plus some nit-picks inline. Thanks.
On Thu, Sep 19, 2019 at 08:11:42PM +0800, Shengjiu Wang wrote:
> There is error "aplay: pcm_write:2023: write error: Input/output error"
> on i.MX8QM/i.MX8QXP platform for S24_3LE format.
>
> In i.MX8QM/i.MX8QXP, the D
On Thu, Sep 19, 2019 at 08:11:41PM +0800, Shengjiu Wang wrote:
> When setting the runtime hardware parameters, we may need to query
> the DMA capabilities to complete the parameters.
>
> This patch extracts this operation from the
> dmaengine_pcm_set_runtime_hwparams function into a separate function
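As a rough illustration of the kind of helper being discussed (the function name and the exact refinement are assumptions, not the actual patch; dma_get_slave_caps() is the existing dmaengine query API):

#include <linux/dmaengine.h>
#include <sound/pcm.h>

/*
 * Hypothetical helper split out of dmaengine_pcm_set_runtime_hwparams():
 * query the DMA channel capabilities and refine the PCM hardware
 * description accordingly, so that drivers can call it directly.
 */
static int pcm_refine_hw_from_dma_caps(struct dma_chan *chan,
				       struct snd_pcm_hardware *hw)
{
	struct dma_slave_caps caps;
	int ret;

	ret = dma_get_slave_caps(chan, &caps);
	if (ret)
		return ret;

	/* Coarse residue reporting means position updates come in batches. */
	if (caps.residue_granularity <= DMA_RESIDUE_GRANULARITY_SEGMENT)
		hw->info |= SNDRV_PCM_INFO_BATCH;

	return 0;
}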
On 9/20/19 1:12 PM, Leonardo Bras wrote:
> If a process (qemu) with a lot of CPUs (128) tries to munmap() a large
> chunk of memory (496GB) mapped with THP, it takes an average of 275
> seconds, which can cause a lot of problems for the workload (in the qemu
> case, the guest will lock up for that long).
>
> Tr
On 9/20/19 1:28 PM, Leonardo Bras wrote:
> On Fri, 2019-09-20 at 13:11 -0700, John Hubbard wrote:
>> On 9/20/19 12:50 PM, Leonardo Bras wrote:
>>> Skips slow part of serialize_against_pte_lookup if there is no running
>>> lockless pagetable walk.
>>>
>>> Signed-off-by: Leonardo Bras
>>> ---
>>> a
On Fri, 2019-09-20 at 13:11 -0700, John Hubbard wrote:
> On 9/20/19 12:50 PM, Leonardo Bras wrote:
> > Skips slow part of serialize_against_pte_lookup if there is no running
> > lockless pagetable walk.
> >
> > Signed-off-by: Leonardo Bras
> > ---
> > arch/powerpc/mm/book3s64/pgtable.c | 3 ++-
>
On 9/20/19 12:50 PM, Leonardo Bras wrote:
> Skips slow part of serialize_against_pte_lookup if there is no running
> lockless pagetable walk.
>
> Signed-off-by: Leonardo Bras
> ---
> arch/powerpc/mm/book3s64/pgtable.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/ar
If a process (qemu) with a lot of CPUs (128) tries to munmap() a large
chunk of memory (496GB) mapped with THP, it takes an average of 275
seconds, which can cause a lot of problems for the workload (in the qemu
case, the guest will lock up for that long).
While trying to find the source of this problem, I found out that most
The current code allows more than one thread to run in reset. This can
corrupt struct adapter data. Check adapter->resetting before performing
a reset; if another reset is already running, delay (100 msec) before
trying again.
Signed-off-by: Juliet Kim
---
drivers/net/ethernet/ibm/ibmvnic.c | 40
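A minimal sketch of the single-reset-thread guard described above (the state word and function names are illustrative; the real driver keeps this state in struct ibmvnic_adapter):

#include <linux/bitops.h>
#include <linux/delay.h>
#include <linux/errno.h>

/* Illustrative reset state; the real driver tracks this per adapter. */
static unsigned long reset_state;
#define RESETTING_BIT	0

static int try_adapter_reset(void)
{
	/* Allow only one thread into the reset path at a time. */
	if (test_and_set_bit(RESETTING_BIT, &reset_state)) {
		/* Another reset is in flight: back off 100 msec and let
		 * the caller retry. */
		msleep(100);
		return -EBUSY;
	}

	/* ... perform the actual adapter reset here ... */

	clear_bit(RESETTING_BIT, &reset_state);
	return 0;
}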
Commit a5681e20b541 ("net/ibmnvic: Fix deadlock problem in reset")
changed the reset path to hold the RTNL lock during a reset to avoid a
deadlock, but linkwatch_event is fired during the reset and needs the RTNL
lock. That keeps the linkwatch_event process from proceeding until the
reset is complete. The re
This series includes two fixes. The first improves the reset code to allow
linkwatch_event to proceed during a reset. The second ensures that no more
than one thread runs in reset at a time.
v2:
- Separate change param reset from do_reset()
- Return IBMVNIC_OPEN_FAILED if __ibmvnic_open fails
- Remove
On Fri, 2019-09-20 at 16:50 -0300, Leonardo Bras wrote:
> *** BLURB HERE ***
Sorry, something went terribly wrong with my cover letter.
I will try to find it and send here, or rewrite it.
Best regards,
Applies the counting-based method for monitoring all book3s_64-related
functions that do lockless pagetable walks.
Signed-off-by: Leonardo Bras
---
It may be necessary to merge an older patch first:
powerpc: kvm: Reduce calls to get current->mm by storing the value locally
Link:
https://lore.ker
Skips the slow part of serialize_against_pte_lookup if there is no
lockless pagetable walk running.
Signed-off-by: Leonardo Bras
---
arch/powerpc/mm/book3s64/pgtable.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/mm/book3s64/pgtable.c b/arch/powerpc/mm/book3s64
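The fast path this adds could look roughly like the following sketch, assuming a per-mm counter of running lockless walkers (names are illustrative, not the series API):

#include <linux/atomic.h>
#include <linux/smp.h>

static void serialize_against_pte_lookup_sketch(atomic_t *walkers)
{
	smp_mb();	/* pair with the walker-side counter increment */

	/*
	 * Fast path: nobody is inside a lockless page table walk, so the
	 * expensive cross-CPU synchronisation can be skipped entirely.
	 */
	if (!atomic_read(walkers))
		return;

	/*
	 * Slow path: force a synchronisation point on every CPU so any
	 * in-flight lockless walker finishes first (the cost the original
	 * code always paid).
	 */
	kick_all_cpus_sync();
}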
Enables the count-based monitoring method for lockless pagetable walks on
PowerPC book3s_64.
Other architectures/platforms fall back to generic dummy functions.
Signed-off-by: Leonardo Bras
---
arch/powerpc/include/asm/book3s/64/pgtable.h | 5 +
1 file changed, 5 insertions(+)
diff --git
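The fallback for other architectures presumably reduces to no-op stubs along these lines (the guard macro and helper names follow the cover letter but are assumptions):

#ifndef __HAVE_ARCH_LOCKLESS_PGTBL_WALK_COUNTING

struct mm_struct;

/* Dummy helpers: they compile away, so architectures that do not
 * implement the counting method pay no overhead at all. */
static inline void start_lockless_pgtbl_walk(struct mm_struct *mm) { }
static inline void end_lockless_pgtbl_walk(struct mm_struct *mm) { }

#endif /* __HAVE_ARCH_LOCKLESS_PGTBL_WALK_COUNTING */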
Applies the counting-based method for monitoring all book3s_hv related
functions that do lockless pagetable walks.
Signed-off-by: Leonardo Bras
---
arch/powerpc/kvm/book3s_hv_nested.c | 8
arch/powerpc/kvm/book3s_hv_rm_mmu.c | 9 -
2 files changed, 16 insertions(+), 1 deletion(-
Applies the counting-based method for monitoring lockless pgtable walks on
kvmppc_e500_shadow_map().
Signed-off-by: Leonardo Bras
---
arch/powerpc/kvm/e500_mmu_host.c | 4
1 file changed, 4 insertions(+)
diff --git a/arch/powerpc/kvm/e500_mmu_host.c b/arch/powerpc/kvm/e500_mmu_host.c
index
Applies the counting-based method for monitoring all hash-related functions
that do lockless pagetable walks.
Signed-off-by: Leonardo Bras
---
arch/powerpc/mm/book3s64/hash_tlb.c | 2 ++
arch/powerpc/mm/book3s64/hash_utils.c | 7 +++
2 files changed, 9 insertions(+)
diff --git a/arch/powe
Applies the counting-based method for monitoring lockless pgtable walks on
read_user_stack_slow.
Signed-off-by: Leonardo Bras
---
arch/powerpc/perf/callchain.c | 5 -
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/perf/callchain.c b/arch/powerpc/perf/callchain.c
in
Applies the counting-based method for monitoring lockless pgtable walks on
addr_to_pfn().
Signed-off-by: Leonardo Bras
---
arch/powerpc/kernel/mce_power.c | 13 ++---
1 file changed, 10 insertions(+), 3 deletions(-)
diff --git a/arch/powerpc/kernel/mce_power.c b/arch/powerpc/kernel/mce_
There is a need to monitor lockless pagetable walks, in order to avoid
doing THP splitting/collapsing during them.
Some methods rely on local_irq_{save,restore}, but that can be slow in
cases where a lot of CPUs are used by the process.
In order to speed up these cases, I propose a refcount-based
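A minimal sketch of such a refcount-based approach (illustrative only; the actual series keeps the counter per mm and chooses its own barrier placement):

#include <linux/atomic.h>

/* Hypothetical counter of threads currently inside a lockless walk. */
static atomic_t lockless_pgtbl_walkers = ATOMIC_INIT(0);

static inline void start_lockless_pgtbl_walk_sketch(void)
{
	atomic_inc(&lockless_pgtbl_walkers);
	smp_mb__after_atomic();	/* announce the walker before reading PTEs */
}

static inline void end_lockless_pgtbl_walk_sketch(void)
{
	smp_mb__before_atomic();	/* finish PTE reads before dropping out */
	atomic_dec(&lockless_pgtbl_walkers);
}

The THP split/collapse side can then wait (or take a fast path, as in the serialize_against_pte_lookup sketch above) until the counter drops to zero.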
As described, gup_pgd_range is a lockless pagetable walk. So, in order to
monitor it against THP split/collapse with the counting method, it is
necessary to bracket it with {start,end}_lockless_pgtbl_walk, as in the
sketch below.
There are dummy functions, so it is not going to add any overhead on archs
that don't use this metho
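In the fast-GUP path that bracketing would amount to something like the following (a simplified sketch; gup_pgd_range() is the internal walker in mm/gup.c):

#include <linux/irqflags.h>
#include <linux/mm.h>
#include <linux/sched.h>

static int gup_fast_bracketed(unsigned long start, unsigned long end,
			      unsigned int gup_flags, struct page **pages)
{
	unsigned long flags;
	int nr = 0;

	/* No-op on architectures without the counting method. */
	start_lockless_pgtbl_walk(current->mm);

	local_irq_save(flags);
	gup_pgd_range(start, end, gup_flags, pages, &nr);
	local_irq_restore(flags);

	end_lockless_pgtbl_walk(current->mm);

	return nr;
}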
*** BLURB HERE ***
Leonardo Bras (11):
powerpc/mm: Adds counting method to monitor lockless pgtable walks
asm-generic/pgtable: Adds dummy functions to monitor lockless pgtable
walks
mm/gup: Applies counting method to monitor gup_pgd_range
powerpc/mce_power: Applies counting method to m
It's necessary to monitor lockless pagetable walks, in order to avoid doing
THP splitting/collapsing during them.
Some methods rely on local_irq_{save,restore}, but that can be slow in
cases where a lot of CPUs are used by the process.
In order to speed up some cases, I propose a refcount-based ap
The pull request you sent on Fri, 20 Sep 2019 23:22:50 +1000:
> https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git
> tags/powerpc-5.4-1
has been merged into torvalds/linux.git:
https://git.kernel.org/torvalds/c/45824fc0da6e46cc5d563105e1eaaf3098a686f9
Thank you!
--
Deet-doot-do
On Fri, Sep 20, 2019 at 11:18 AM Qian Cai wrote:
>
> On Fri, 2019-09-20 at 19:55 +0530, Aneesh Kumar K.V wrote:
> > Qian Cai writes:
> >
> > > The linux-next commit "libnvdimm/dax: Pick the right alignment default
> > > when
> > > creating dax devices" causes powerpc failed to build with this co
On Fri, 2019-09-20 at 19:55 +0530, Aneesh Kumar K.V wrote:
> Qian Cai writes:
>
> > The linux-next commit "libnvdimm/dax: Pick the right alignment default when
> > creating dax devices" causes powerpc failed to build with this config.
> > Reverted
> > it fixed the issue.
> >
> > ERROR: "hash__h
On Fri, Sep 20, 2019 at 06:39:51PM +0300, Ilie Halip wrote:
> When building with ppc64_defconfig, the compiler reports
> that these 2 variables are not used:
> warning: unused variable 'core99_l2_cache' [-Wunused-variable]
> warning: unused variable 'core99_l3_cache' [-Wunused-variable]
>
The following commit has been merged into the perf/urgent branch of tip:
Commit-ID: 8067b3da970baa12e6045400fdf009673b8dd3c2
Gitweb:
https://git.kernel.org/tip/8067b3da970baa12e6045400fdf009673b8dd3c2
Author: Anju T Sudhakar
AuthorDate: Thu, 18 Jul 2019 23:47:47 +05:30
Commi
The following commit has been merged into the perf/urgent branch of tip:
Commit-ID: 124eb5f82bf9395419b20205c4dcc1b8fcda7f29
Gitweb:
https://git.kernel.org/tip/124eb5f82bf9395419b20205c4dcc1b8fcda7f29
Author: Anju T Sudhakar
AuthorDate: Thu, 18 Jul 2019 23:47:48 +05:30
Commi
The following commit has been merged into the perf/urgent branch of tip:
Commit-ID: 2bff2b828502b5e5d5ea5a52643d3542053df03f
Gitweb:
https://git.kernel.org/tip/2bff2b828502b5e5d5ea5a52643d3542053df03f
Author: Anju T Sudhakar
AuthorDate: Thu, 18 Jul 2019 23:47:49 +05:30
Commi
This patch corrects the SPDX License Identifier style
in header files for Open Coherent Accelerator (OCXL) compatible device
drivers. For C header files, Documentation/process/license-rules.rst
mandates C-like comments (as opposed to C source files, where
C++-style comments should be used).
Changes made by using
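For reference, the two styles mandated by license-rules.rst look like this (GPL-2.0+ is just an example identifier):

/* SPDX-License-Identifier: GPL-2.0+ */
/* example.h: C header files must use C-style SPDX comments */

// SPDX-License-Identifier: GPL-2.0+
// example.c: C source files use C++-style SPDX comments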
From: Anju T Sudhakar
Use 'trace_imc/trace_cycles' as the default event for 'perf kvm record'
on powerpc.
Signed-off-by: Anju T Sudhakar
Reviewed-by: Ravi Bangoria
Cc: Alexander Shishkin
Cc: Jiri Olsa
Cc: Madhavan Srinivasan
Cc: Michael Ellerman
Cc: Namhyung Kim
Cc: Peter Zijlstra
Cc: li
From: Anju T Sudhakar
'perf kvm record' uses 'cycles' (if the user did not specify any event)
as the default event to profile the guest.
This will not provide any proper samples from the guest in the case of the
powerpc architecture, since on powerpc the PMUs are controlled by the
guest rather than the ho
From: Anju T Sudhakar
Move the kvm-stat header file to the common include section, and place the
definitions in the header file under the conditional inclusion `#ifdef
HAVE_KVM_STAT_SUPPORT`.
This helps to define other 'perf kvm'-related function prototypes in the
kvm-stat header file, which may not need
Qian Cai writes:
> The linux-next commit "libnvdimm/dax: Pick the right alignment default when
> creating dax devices" causes powerpc failed to build with this config.
> Reverted
> it fixed the issue.
>
> ERROR: "hash__has_transparent_hugepage" [drivers/nvdimm/libnvdimm.ko]
> undefined!
> ERROR
The linux-next commit "libnvdimm/dax: Pick the right alignment default when
creating dax devices" causes powerpc to fail to build with this config.
Reverting it fixed the issue.
ERROR: "hash__has_transparent_hugepage" [drivers/nvdimm/libnvdimm.ko] undefined!
ERROR: "radix__has_transparent_hugepage"
Hi Linus,
Please pull powerpc updates for 5.4.
This is a bit late, partly due to me travelling, and partly due to a power
outage knocking out some of my test systems *while* I was travelling.
A few conflicts this time unfortunately. The key one is
The PAPR document specifies the TLB Block Invalidate Characteristics, which
tell, for each pair of segment base page size and actual page size, the
size of the block that the hcall H_BLOCK_REMOVE supports.
These characteristics are loaded at boot time into a new table, hblkr_size.
The table is separate from t
Since commit ba2dd8a26baa ("powerpc/pseries/mm: call H_BLOCK_REMOVE"),
the call to H_BLOCK_REMOVE is always made if the feature is exposed.
However, the hypervisor may not support all block sizes for the hcall
H_BLOCK_REMOVE, depending on the segment base page size and actual page
size.
W
Depending on the hardware and the hypervisor, the hcall H_BLOCK_REMOVE may
not be able to process all the page sizes for a segment base page size, as
reported by the TLB Invalidate Characteristics.
For each pair of base segment page size and actual page size, this
characteristic tells us the size
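A sketch of how such a table could gate the hcall (array and function names are illustrative; the series itself calls its table hblkr_size):

#include <linux/types.h>

/*
 * hblkr_size_sketch[base][actual]: block size (number of entries) that
 * H_BLOCK_REMOVE supports for this pair of segment base page size and
 * actual page size, 0 meaning "not supported".
 */
#define PSIZE_COUNT	16	/* assumed number of page size indexes */

static unsigned char hblkr_size_sketch[PSIZE_COUNT][PSIZE_COUNT];

static bool can_use_h_block_remove(int base_psize, int actual_psize, int block)
{
	/* Fall back to H_BULK_REMOVE when the block size is not supported. */
	return hblkr_size_sketch[base_psize][actual_psize] >= block;
}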
The original kernel still exists in memory; clear it now.
Signed-off-by: Jason Yan
Cc: Diana Craciun
Cc: Michael Ellerman
Cc: Christophe Leroy
Cc: Benjamin Herrenschmidt
Cc: Paul Mackerras
Cc: Nicholas Piggin
Cc: Kees Cook
Reviewed-by: Christophe Leroy
Reviewed-by: Diana Craciun
Test
Add a document to explain how we implement KASLR for fsl_booke32.
Signed-off-by: Jason Yan
Cc: Diana Craciun
Cc: Michael Ellerman
Cc: Christophe Leroy
Cc: Benjamin Herrenschmidt
Cc: Paul Mackerras
Cc: Nicholas Piggin
Cc: Kees Cook
---
Documentation/powerpc/kaslr-booke32.rst | 42 +++
When KASLR is enabled, the kernel offset is different for every boot.
This makes it difficult to debug the kernel. Dump out the kernel offset
on panic so that we can easily debug the kernel.
This code is derived from x86/arm64, which have similar functionality.
Signed-off-by: Jason Yan
Cc: Di
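On x86 this is done with a panic notifier; a comparable sketch for powerpc, assuming a kaslr_offset() helper that returns the offset chosen at boot:

#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/notifier.h>

extern unsigned long kaslr_offset(void);	/* assumed helper */

static int dump_kernel_offset(struct notifier_block *self,
			      unsigned long v, void *p)
{
	/* Print the offset so the panic log can be matched to System.map. */
	pr_emerg("Kernel Offset: 0x%lx\n", kaslr_offset());
	return NOTIFY_DONE;
}

static struct notifier_block kernel_offset_notifier = {
	.notifier_call = dump_kernel_offset,
};

static int __init register_kernel_offset_dumper(void)
{
	atomic_notifier_chain_register(&panic_notifier_list,
				       &kernel_offset_notifier);
	return 0;
}
device_initcall(register_kernel_offset_dumper);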
Like other architectures such as x86 and arm64, include the KASLR offset
in the VMCOREINFO ELF notes to assist debugging. After this, we can use
the crash --kaslr option to parse a vmcore generated from a KASLR kernel.
Note: The crash tool needs to support --kaslr too.
Signed-off-by: Jason Yan
Cc: Diana
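A sketch of the export, mirroring what x86 does in its arch_crash_save_vmcoreinfo() (kaslr_offset() assumed as before):

#include <linux/crash_core.h>

extern unsigned long kaslr_offset(void);	/* assumed helper */

void arch_crash_save_vmcoreinfo(void)
{
	/* Expose the offset so 'crash --kaslr' can relocate kernel symbols. */
	vmcoreinfo_append_str("KERNELOFFSET=%lx\n", kaslr_offset());
}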
One may want to disable KASLR at boot time, so provide a command-line
parameter 'nokaslr' to support this.
Signed-off-by: Jason Yan
Cc: Diana Craciun
Cc: Michael Ellerman
Cc: Christophe Leroy
Cc: Benjamin Herrenschmidt
Cc: Paul Mackerras
Cc: Nicholas Piggin
Cc: Kees Cook
Reviewed-by: Diana Craciun
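An illustrative sketch of the check (this early in boot the real code has to fetch the command line from the flattened device tree; the names here are assumptions):

#include <linux/string.h>
#include <linux/types.h>

/* Return true if "nokaslr" appears on the boot command line.  A real
 * implementation should match the whole word, not a substring of some
 * other parameter. */
static bool kaslr_disabled(const char *cmdline)
{
	return cmdline && strstr(cmdline, "nokaslr");
}

static unsigned long kaslr_choose_offset(const char *cmdline,
					 unsigned long random_offset)
{
	if (kaslr_disabled(cmdline))
		return 0;	/* keep the kernel at KERNELBASE */
	return random_offset;
}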
This patch adds support for booting the kernel from places other than
KERNELBASE. Since CONFIG_RELOCATABLE is already supported, all we need to
do is map or copy the kernel to a proper place and relocate it. Freescale
Book-E parts expect lowmem to be mapped by fixed TLB entries (TLB1). The
TLB1 entries are not su
After we have basic support for relocating the kernel to an appropriate
place, we can start to randomize the offset.
Entropy is derived from the banner and the timer, which change with every
build and boot. This is not very safe, so additionally the bootloader may
pass entropy via the /chosen/ka
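A minimal sketch of mixing those sources (the hash is purely illustrative; the real code may also fold in the device-tree seed and other registers):

#include <linux/kernel.h>
#include <linux/string.h>
#include <asm/time.h>

extern const char linux_banner[];	/* differs with every build */

/* Simple rotate-xor mixer over whole words; the tail of the buffer that
 * does not fill a word is ignored, which is fine for a sketch. */
static unsigned long rotate_xor(unsigned long hash, const void *area,
				size_t size)
{
	const unsigned long *ptr = area;
	size_t i;

	for (i = 0; i < size / sizeof(hash); i++) {
		hash = (hash << 7) | (hash >> (BITS_PER_LONG - 7));
		hash ^= ptr[i];
	}
	return hash;
}

static unsigned long kaslr_early_seed(void)
{
	unsigned long seed;

	/* Build-time entropy: the banner string changes per build. */
	seed = rotate_xor(0, linux_banner, strlen(linux_banner));

	/* Boot-time entropy: the timebase changes per boot. */
	seed ^= get_tb();

	return seed;
}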
Add a new helper reloc_kernel_entry() to jump back to the start of the
new kernel. After we put the new kernel in a randomized place, we can use
this new helper to enter the kernel and begin relocating again.
Signed-off-by: Jason Yan
Cc: Diana Craciun
Cc: Michael Ellerman
Cc: Christophe Leroy
Add a new helper create_kaslr_tlb_entry() to create a TLB entry from a
virtual and a physical address. This is a preparation for booting the
kernel at a randomized address.
Signed-off-by: Jason Yan
Cc: Diana Craciun
Cc: Michael Ellerman
Cc: Christophe Leroy
Cc: Benjamin Herrenschmidt
Cc: Paul M
Currently the kernel base is a fixed value, KERNELBASE. To support KASLR,
we need a variable to store the kernel base.
Signed-off-by: Jason Yan
Cc: Diana Craciun
Cc: Michael Ellerman
Cc: Christophe Leroy
Cc: Benjamin Herrenschmidt
Cc: Paul Mackerras
Cc: Nicholas Piggin
Cc: Kees Cook
Reviewed-by
M_IF_NEEDED is defined in too many places. Move it to a common place and
rename it to MAS2_M_IF_NEEDED, which is more readable.
Signed-off-by: Jason Yan
Cc: Diana Craciun
Cc: Michael Ellerman
Cc: Christophe Leroy
Cc: Benjamin Herrenschmidt
Cc: Paul Mackerras
Cc: Nicholas Piggin
Cc: Kees Cook
Re
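The consolidated definition presumably ends up looking roughly like this (the exact config condition is an assumption):

/* Set the MAS2 'M' (memory coherence required) bit only when it is
 * actually needed, e.g. on SMP configurations. */
#if defined(CONFIG_SMP)
#define MAS2_M_IF_NEEDED	MAS2_M
#else
#define MAS2_M_IF_NEEDED	0
#endif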
These two variables are both defined in init_32.c and init_64.c. Move
them to init-common.c and make them __ro_after_init.
Signed-off-by: Jason Yan
Cc: Diana Craciun
Cc: Michael Ellerman
Cc: Christophe Leroy
Cc: Benjamin Herrenschmidt
Cc: Paul Mackerras
Cc: Nicholas Piggin
Cc: Kees Cook
Re
This series implements KASLR for powerpc/fsl_booke/32, as a security
feature that deters exploit attempts relying on knowledge of the location
of kernel internals.
Since CONFIG_RELOCATABLE is already supported, all we need to do is map or
copy the kernel to a proper place and relocate it. Freescale Boo
On Wed, 2019-09-18 at 14:53:28 UTC, "Aneesh Kumar K.V" wrote:
> __find_linux_mm_pte returns a page table entry pointer by walking the
> page table without holding locks. To make it safe against a THP
> split or collapse, we disable interrupts around the lockless
> page table walk. We need to keep the