Commit ("03fd42d458fb powerpc/fixmap: Fix FIX_EARLY_DEBUG_BASE when
page size is 256k") reworked the setup of the early debug area and
mistakenly replaced 128 * 1024 by SZ_128.
Change to SZ_128K to restore the original 128 kbytes size of the area.
Fixes: 03fd42d458fb ("powerpc/fixmap: Fix FIX_EAR
As per draft LoPAPR (Revision 2.9_pre7), section B.5.3 "Run Time Abstraction
Services (RTAS) Node", available at
https://openpowerfoundation.org/wp-content/uploads/2020/07/LoPAR-20200611.pdf,
there are two device tree properties, ibm,max-associativity-domains (which
defines the maximum number of domains that the firmware supports)
cpu_has_feature(CPU_FTR_SPE) returns false when CONFIG_SPE is
not set.
There is no need to enclose the test in an #ifdef CONFIG_SPE.
Remove it.
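For illustration, the shape of the change (a sketch, not the exact hunk;
do_spe_save() is a hypothetical callee):

	Before:
		#ifdef CONFIG_SPE
		if (cpu_has_feature(CPU_FTR_SPE))
			do_spe_save(tsk);
		#endif
	After:
		if (cpu_has_feature(CPU_FTR_SPE))
			do_spe_save(tsk);

The same pattern applies to the CONFIG_ALTIVEC and CONFIG_VSX patches
further down.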
CPU_FTR_SPE only exists on 32 bits. Define it as 0 on 64 bits.
We have a couple of places like:
#ifdef CONFIG_SPE
if (cpu_has_feature(CPU_FTR_SPE))
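Making the flag 0 on 64 bits lets the compiler fold the test away. A
minimal sketch (the 32-bit bit value is illustrative, not the real one
from asm/cputable.h):

	#ifdef CONFIG_PPC32
	#define CPU_FTR_SPE	ASM_CONST(0x00002000)	/* illustrative value */
	#else
	#define CPU_FTR_SPE	0	/* cpu_has_feature() folds to false */
	#endif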
Add a stub for __giveup_fpu() when CONFIG_PPC_FPU is
not selected, as done for CONFIG_SPE and CONFIG_ALTIVEC.
This allows the removal of some #ifdef CONFIG_PPC_FPU blocks.
Also change one of them to IS_ENABLED().
Signed-off-by: Christophe Leroy
---
arch/powerpc/kernel/process.c | 9 ++++-----
1 file changed, 4 insertions(+), 5 deletions(-)
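A minimal sketch of such a stub, mirroring the existing CONFIG_ALTIVEC
and CONFIG_SPE ones (header placement assumed):

	#ifdef CONFIG_PPC_FPU
	extern void __giveup_fpu(struct task_struct *tsk);
	#else
	static inline void __giveup_fpu(struct task_struct *tsk) { }
	#endif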
cpu_has_feature(CPU_FTR_ALTIVEC) returns false when CONFIG_ALTIVEC is
not set.
There is no need to enclose the test in an #ifdef CONFIG_ALTIVEC.
Remove it.
Signed-off-by: Christophe Leroy
---
arch/powerpc/kernel/process.c | 4 ----
1 file changed, 4 deletions(-)
diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
cpu_has_feature(CPU_FTR_VSX) returns false when CONFIG_VSX is
not set.
There is no need to enclose the test in an #ifdef CONFIG_VSX.
Remove it.
Signed-off-by: Christophe Leroy
---
arch/powerpc/kernel/process.c | 13 +------------
1 file changed, 1 insertion(+), 12 deletions(-)
diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
That #endif is more than 100 lines after the matching #ifdef,
and there are several #ifdef/#else/#endif blocks in between.
Tag it as /* CONFIG_PPC_BOOK3S_64 */ to help locate the
matching #ifdef.
Signed-off-by: Christophe Leroy
---
arch/powerpc/kernel/process.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
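The change itself is just a comment on the closing directive:

	-#endif
	+#endif /* CONFIG_PPC_BOOK3S_64 */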
The #if defined(CONFIG_4xx) || defined(CONFIG_BOOKE) encloses some
printk calls which can be compiled in all cases.
Replace it with IS_ENABLED().
Signed-off-by: Christophe Leroy
---
arch/powerpc/kernel/process.c | 13 +++++++------
1 file changed, 7 insertions(+), 6 deletions(-)
diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
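The conversion looks roughly like this (the printk line shown is
illustrative of the show_regs() style output, not the exact hunk):

	-#if defined(CONFIG_4xx) || defined(CONFIG_BOOKE)
	-	printk("DEAR: "REG" ESR: "REG"\n", regs->dar, regs->dsisr);
	-#endif
	+	if (IS_ENABLED(CONFIG_4xx) || IS_ENABLED(CONFIG_BOOKE))
	+		printk("DEAR: "REG" ESR: "REG"\n", regs->dar, regs->dsisr);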
This #ifdef CONFIG_PPC_BOOK3S_64 block calls preload_new_slb_context()
when radix is not enabled.
radix_enabled() is always defined, and the prototype for
preload_new_slb_context() is always present, so the #ifdef
is unneeded.
Replace it with IS_ENABLED().
Signed-off-by: Christophe Leroy
---
arch/powerpc/kernel/process.c
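A plausible shape of the change (sketch, not the exact hunk):

	-#ifdef CONFIG_PPC_BOOK3S_64
	-	if (!radix_enabled())
	-		preload_new_slb_context(start, sp);
	-#endif
	+	if (IS_ENABLED(CONFIG_PPC_BOOK3S_64) && !radix_enabled())
	+		preload_new_slb_context(start, sp);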
ftrace_graph_ret_addr() is always defined and returns 'ip' when
CONFIG_FUNCTION_GRAPH_TRACER is not set.
So the #ifdef is not needed, remove it.
Signed-off-by: Christophe Leroy
---
arch/powerpc/kernel/process.c | 4 ----
1 file changed, 4 deletions(-)
diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
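The generic stub in include/linux/ftrace.h simply returns the address
unchanged when the tracer is disabled (quoted from memory, so treat the
exact signature as approximate):

	static inline unsigned long
	ftrace_graph_ret_addr(struct task_struct *task, int *idx,
			      unsigned long ret, unsigned long *retp)
	{
		return ret;
	}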
The #ifdef CONFIG_KALLSYMS encloses some printk calls which can
be compiled in all cases.
Replace it with IS_ENABLED().
Signed-off-by: Christophe Leroy
---
arch/powerpc/kernel/process.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
isync() is always defined, no need for an #ifdef.
Replace the #ifdef with IS_ENABLED(CONFIG_PPC_47x).
Signed-off-by: Christophe Leroy
---
arch/powerpc/kernel/process.c | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
On Thu Aug 6, 2020 at 6:27 AM CDT, Daniel Axtens wrote:
> Hi Chris,
>
> > void __set_breakpoint(int nr, struct arch_hw_breakpoint *brk);
> > +void __get_breakpoint(int nr, struct arch_hw_breakpoint *brk);
> > bool ppc_breakpoint_available(void);
> > #ifdef CONFIG_PPC_ADV_DEBUG_REGS
> > extern
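A plausible implementation sketch for the new accessor, assuming it
mirrors __set_breakpoint() by reading back the per-CPU saved state (not
necessarily the exact patch):

	void __get_breakpoint(int nr, struct arch_hw_breakpoint *brk)
	{
		memcpy(brk, this_cpu_ptr(&current_brk[nr]), sizeof(*brk));
	}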
Nathan Lynch writes:
> Michael Ellerman writes:
>> Tyrel Datwyler writes:
>>> On 8/11/20 6:20 PM, Nathan Lynch wrote:
+static inline struct drmem_lmb *drmem_lmb_next(struct drmem_lmb *lmb)
+{
+ const unsigned int resched_interval = 20;
+
+	BUG_ON(lmb < drmem_info->lmbs ||
+	       lmb >= drmem_info->lmbs + drmem_info->n_lmbs);
+	if ((lmb - drmem_info->lmbs) % resched_interval == 0)
+		cond_resched();
+	return ++lmb;
+}
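Callers would then iterate via the LMB walker macros, roughly (a sketch,
assuming the helper replaces a plain ++lmb step):

	#define for_each_drmem_lmb(lmb)					\
		for ((lmb) = &drmem_info->lmbs[0];			\
		     (lmb) < &drmem_info->lmbs[drmem_info->n_lmbs];	\
		     (lmb) = drmem_lmb_next(lmb))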
On Thu Aug 6, 2020 at 8:24 AM CDT, Daniel Axtens wrote:
> "Christopher M. Riedl" writes:
>
> > When code patching a STRICT_KERNEL_RWX kernel the page containing the
> > address to be patched is temporarily mapped with permissive memory
> > protections. Currently, a per-cpu vmalloc patch area is used for this purpose.
__machine_check_early_realmode_p*() are currently declared
as extern in cputable.c, and because of this, compiling
with "C=1" (which enables the sparse semantic checker) produces
the warnings below.
CHECK arch/powerpc/kernel/mce_power.c
arch/powerpc/kernel/mce_power.c:709:6: warning: symbol
'__machine_che
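One way to silence sparse is to give the symbols shared prototypes in a
header included by both files; a sketch (header choice and exact
signatures assumed):

	/* e.g. in arch/powerpc/include/asm/mce.h */
	long __machine_check_early_realmode_p7(struct pt_regs *regs);
	long __machine_check_early_realmode_p8(struct pt_regs *regs);
	long __machine_check_early_realmode_p9(struct pt_regs *regs);
	long __machine_check_early_realmode_p10(struct pt_regs *regs);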
Add a raw mode cputable entry for POWER10. Copies most of the fields
from commit a3ea40d5c736 ("powerpc: Add POWER10 architected mode")
except for the oprofile_cpu_type, machine_check_early, pvr_mask and pvr_value
fields. On bare metal systems we use DT CPU features, which doesn't need a
cputable entry.
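An illustrative shape of a raw mode entry (the PVR value and feature
fields are placeholders, not authoritative):

	{	/* 3.1-compliant processor, i.e. POWER10 "raw" mode */
		.pvr_mask		= 0xffff0000,
		.pvr_value		= 0x00800000,	/* assumed POWER10 PVR */
		.cpu_name		= "POWER10 (raw)",
		.cpu_features		= CPU_FTRS_POWER10,
		...
	},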
On 14/08/20 6:19 pm, Heiner Kallweit wrote:
> On 14.08.2020 04:48, Chris Packham wrote:
>> Hi,
>>
>> I'm seeing a problem with accessing spi-nor after upgrading a T2081
>> based system to linux v5.7.15
>>
>> For this board u-boot and the u-boot environment live on spi-nor.
>>
>> When I use fw_setenv
On platforms that define HAVE_ARCH_HUGE_VMAP and support PMD vmaps,
vmalloc will attempt to allocate PMD-sized pages first, before falling
back to small pages.
Allocations which use something other than PAGE_KERNEL protections are
not permitted to use huge pages yet, as not all callers expect this
(e.g., module allocations vs strict module rwx).
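The selection logic is roughly (a sketch with assumed helper names, not
the upstream code):

	if (size >= PMD_SIZE && pgprot_val(prot) == pgprot_val(PAGE_KERNEL) &&
	    arch_vmap_pmd_supported(prot))
		page_shift = PMD_SHIFT;		/* try a huge mapping first */
	else
		page_shift = PAGE_SHIFT;	/* fall back to small pages */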
As a side-effect, the order of flush_cache_vmap() and
arch_sync_kernel_mappings() calls is switched, but that now matches
the other callers in this file.
Signed-off-by: Nicholas Piggin
---
mm/vmalloc.c | 17 +++++++++++++----
1 file changed, 13 insertions(+), 4 deletions(-)
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
This is a generic kernel virtual memory mapper, not specific to ioremap.
Signed-off-by: Nicholas Piggin
---
include/linux/vmalloc.h |   2 +
mm/ioremap.c            | 192 ----------------
mm/vmalloc.c            | 191 +++++++++++++++
3 files changed
This changes the awkward approach where architectures provide init
functions to determine which levels they can provide large mappings for,
to one where the arch is queried for each call.
This removes code and indirection, and allows constant-folding of dead
code for unsupported levels.
This also adds a prot argument to the arch query. This is unused
currently but could help with some architectures (e.g., some powerpc
processors can't map uncacheable memory with large pages).
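The per-call query ends up as a small arch predicate; e.g. on powerpc
(function name from the series, body illustrative):

	static inline bool arch_vmap_pmd_supported(pgprot_t prot)
	{
		/* radix MMU can install PMD-sized leaves in kernel page tables */
		return radix_enabled();
	}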
This will be moved to mm/ and used as a generic kernel virtual mapping
function, so re-name it in preparation.
Signed-off-by: Nicholas Piggin
---
mm/ioremap.c | 55 +++++++++++++++++----------------------------
1 file changed, 23 insertions(+), 32 deletions(-)
diff --git a/mm/ioremap.c b/mm/ioremap.c
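After the rename, ioremap becomes just one caller of the generic
function, whose signature is roughly (per the series naming, details
approximate):

	int vmap_range(unsigned long addr, unsigned long end,
		       phys_addr_t phys_addr, pgprot_t prot);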
The vmalloc mapper operates on a struct page * array rather than a
linear physical address, re-name it to make this distinction clear.
Signed-off-by: Nicholas Piggin
---
mm/vmalloc.c | 28 ++++++++++------------
1 file changed, 12 insertions(+), 16 deletions(-)
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
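The renamed helper makes the struct page * interface explicit, roughly:

	static int vmap_pages_range(unsigned long start, unsigned long end,
				    pgprot_t prot, struct page **pages);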
Signed-off-by: Nicholas Piggin
---
mm/memory.c | 60 +++--
1 file changed, 44 insertions(+), 16 deletions(-)
diff --git a/mm/memory.c b/mm/memory.c
index c39a13b09602..1d5f3093c249 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2260,13 +2260,20 @@
vmalloc_to_page returns NULL for addresses mapped by larger pages[*].
Whether or not a vmap is huge depends on the architecture details,
alignments, boot options, etc., which the caller cannot be expected
to know. Therefore HUGE_VMAP is a regression for vmalloc_to_page.
This change teaches vmalloc_to_page about larger pages, and returns
the struct page that corresponds to the offset within the large page.
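The core of the fix, sketched for the PMD level only (the upstream code
also handles the higher levels with the corresponding leaf tests):

	pmd_t *pmd = pmd_offset(pud, addr);
	if (pmd_leaf(*pmd))
		return pmd_page(*pmd) + ((addr & ~PMD_MASK) >> PAGE_SHIFT);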
Let's try again.
Thanks,
Nick
Since v3:
- Fixed an off-by-one bug in a loop
- Fix !CONFIG_HAVE_ARCH_HUGE_VMAP build fail
- Hopefully this time fix the arm64 vmap stack bug, thanks Jonathan
Cameron for debugging the cause of this (hopefully).
Since v2:
- Rebased on vmalloc cleanups, split series