also adds a prot argument to the arch query. This is currently unused
but could help with some architectures (e.g., some powerpc
processors can't map uncacheable memory with large pages).
Signed-off-by: Nicholas Piggin
---
arch/arm64/mm/mmu.c | 10 +--
arch/powerpc/mm/boo
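As a rough illustration of the new query shape (a sketch, not the patch itself:
the arch_vmap_pmd_supported() name follows the series' naming pattern, and the
cache-inhibited check is invented for the example):

/*
 * Sketch: an arch support query consulting the new prot argument. An
 * architecture that can't map uncacheable memory with large pages could
 * refuse huge mappings for such protections.
 */
bool arch_vmap_pmd_supported(pgprot_t prot)
{
        if (pgprot_val(prot) & _PAGE_NO_CACHE)  /* illustrative check */
                return false;

        return true;
}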
As a side-effect, the order of the flush_cache_vmap() and
arch_sync_kernel_mappings() calls is switched, but that now matches
the other callers in this file.
Signed-off-by: Nicholas Piggin
---
mm/vmalloc.c | 17 +
1 file changed, 13 insertions(+), 4 deletions(-)
diff --git a/mm
This is a generic kernel virtual memory mapper, not specific to ioremap.
Signed-off-by: Nicholas Piggin
---
include/linux/vmalloc.h | 2 +
mm/ioremap.c| 192
mm/vmalloc.c| 191 +++
3 files
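A minimal sketch of the resulting split (signatures indicative only, not copied
from the patch): ioremap keeps its entry point and delegates the page-table walk
to the mapper now living in mm/vmalloc.c.

int ioremap_page_range(unsigned long addr, unsigned long end,
                       phys_addr_t phys_addr, pgprot_t prot)
{
        /* using PAGE_SHIFT as the maximum page size is an assumption here */
        return vmap_range(addr, end, phys_addr, prot, PAGE_SHIFT);
}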
pings")
Signed-off-by: Nicholas Piggin
---
mm/vmalloc.c | 40 ++--
1 file changed, 26 insertions(+), 14 deletions(-)
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index b482d240f9a2..49f225b0f855 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -38,6 +38,7 @@
f77c7798a7 ("powerpc/64s/radix: flush remote CPUs out of
single-threaded mm_cpumask")
not-yet-Signed-off-by: Nicholas Piggin
---
arch/powerpc/include/asm/tlb.h | 13 -
arch/powerpc/mm/book3s64/radix_tlb.c | 23 ---
2 files changed, 16 insertion
mm_cpumask clearing code. The
optimisation could be effectively restored by sending IPIs to mm_cpumask
members and having them remove themselves from mm_cpumask. This is
trickier, so I leave it as an exercise for someone with a sparc64 SMP
system. powerpc has a (currently similarly broken) example.
not-yet-Signed-off-by: Nicholas Piggin
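A purely illustrative sketch of the IPI approach described above, assuming each
CPU can safely drop itself from mm_cpumask once it no longer uses the mm; all
names other than on_each_cpu_mask(), cpumask_clear_cpu() and mm_cpumask() are
hypothetical.

static void trim_mm_cpumask_ipi(void *info)
{
        struct mm_struct *mm = info;

        /* each CPU removes itself, avoiding remote stores to the cpumask */
        if (current->active_mm != mm)
                cpumask_clear_cpu(smp_processor_id(), mm_cpumask(mm));
}

static void trim_mm_cpumask(struct mm_struct *mm)
{
        on_each_cpu_mask(mm_cpumask(mm), trim_mm_cpumask_ipi, mm, 1);
}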
Excerpts from pet...@infradead.org's message of August 12, 2020 8:35 pm:
> On Wed, Aug 12, 2020 at 06:18:28PM +1000, Nicholas Piggin wrote:
>> Excerpts from pet...@infradead.org's message of August 7, 2020 9:11 pm:
>> >
>> > What's wrong with somethi
Excerpts from pet...@infradead.org's message of August 19, 2020 1:41 am:
> On Tue, Aug 18, 2020 at 05:22:33PM +1000, Nicholas Piggin wrote:
>> Excerpts from pet...@infradead.org's message of August 12, 2020 8:35 pm:
>> > On Wed, Aug 12, 2020 at 06:18:28PM
, which should help reduce remote accesses
on well localised workloads, but that adds some complexity with hotplug,
so until we get a less artificial workload to test with, let's keep it
simple.
Signed-off-by: Nicholas Piggin
---
mm/filemap.c | 24 +---
1 file change
Excerpts from Boqun Feng's message of November 14, 2020 1:30 am:
> Hi Nicholas,
>
> On Wed, Nov 11, 2020 at 09:07:23PM +1000, Nicholas Piggin wrote:
>> All the cool kids are doing it.
>>
>> Signed-off-by: Nicholas Piggin
>> ---
>&g
his more carefully in the first place.
>
> Cc: Michael Ellerman
> Cc: Benjamin Herrenschmidt
> Cc: Paul Mackerras
> Cc: linuxppc-...@lists.ozlabs.org
> Cc: Nicholas Piggin
> Cc: Catalin Marinas
> Cc: Will Deacon
> Cc: linux-arm-ker...@lists.infradead.org
> Cc: M
Excerpts from Andy Lutomirski's message of December 29, 2020 7:06 am:
> On Mon, Dec 28, 2020 at 12:32 PM Mathieu Desnoyers
> wrote:
>>
>> - On Dec 28, 2020, at 2:44 PM, Andy Lutomirski l...@kernel.org wrote:
>>
>> > On Mon, Dec 28, 2020 at 11:09 AM Russell King - ARM Linux admin
>> > wrote:
>
Excerpts from Andy Lutomirski's message of December 29, 2020 10:56 am:
> On Mon, Dec 28, 2020 at 4:36 PM Nicholas Piggin wrote:
>>
>> Excerpts from Andy Lutomirski's message of December 29, 2020 7:06 am:
>> > On Mon, Dec 28, 2020 at 12:32 PM Mathieu Desnoyers
>
Excerpts from Andy Lutomirski's message of December 29, 2020 10:36 am:
> On Mon, Dec 28, 2020 at 4:11 PM Nicholas Piggin wrote:
>>
>> Excerpts from Andy Lutomirski's message of December 28, 2020 4:28 am:
>> > The old sync_core_before_usermode() comments said tha
Excerpts from Russell King - ARM Linux admin's message of December 29, 2020
8:44 pm:
> On Tue, Dec 29, 2020 at 01:09:12PM +1000, Nicholas Piggin wrote:
>> I think it should certainly be documented in terms of what guarantees
>> it provides to application, _not_ the kinds of in
Excerpts from Russell King - ARM Linux admin's message of December 30, 2020
8:58 pm:
> On Wed, Dec 30, 2020 at 10:00:28AM +, Russell King - ARM Linux admin
> wrote:
>> On Wed, Dec 30, 2020 at 12:33:02PM +1000, Nicholas Piggin wrote:
>> > Excerpts from Russell King -
Excerpts from Christophe Leroy's message of December 22, 2020 11:28 pm:
> Let do_break() retrieve address and errorcode from regs.
>
> This simplifies the code and shouldn't impede performance as
> address and errorcode are likely still hot in the cache.
>
> S
Excerpts from Christophe Leroy's message of December 22, 2020 11:28 pm:
> The address argument is not used by bad_page_fault().
>
> Remove it.
>
> Suggested-by: Nicholas Piggin
> Signed-off-by: Christophe Leroy
> ---
> arch/powerpc/include/asm/bug.h |
Excerpts from Stephen Rothwell's message of March 24, 2021 6:58 am:
> Hi all,
>
> On Thu, 18 Mar 2021 20:56:07 +1100 Stephen Rothwell
> wrote:
>>
>> After merging the akpm-current tree, today's linux-next build (sparc
>> defconfig) failed like this:
>>
>> In file included from arch/sparc/inclu
}
>
> Fix this by setting area to NULL to avoid the uninitialized read
> of area.
>
> Addresses-Coverity: ("Uninitialized pointer read")
> Fixes: 92db9fec381b ("mm/vmalloc: hugepage vmalloc mappings")
> Signed-off-by: Colin Ian King
Looks good to me.
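For readers skimming the thread, a minimal sketch of the bug class and the
one-line fix (hypothetical helper, not the real __vmalloc_node_range()):

static void *alloc_mapped_sketch(unsigned long size)
{
        struct vm_struct *area = NULL;  /* the fix: was uninitialised */

        if (!size)
                goto fail;

        area = get_vm_area(size, VM_ALLOC);
        if (!area)
                goto fail;

        return area->addr;

fail:
        /* the error path reads 'area'; initialising it keeps this read defined */
        pr_warn("allocation failed, area=%p\n", area);
        return NULL;
}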
Excerpts from Matthew Wilcox's message of March 19, 2021 11:25 am:
> On Fri, Mar 19, 2021 at 10:56:45AM +1100, Balbir Singh wrote:
>> On Fri, Mar 05, 2021 at 04:18:37AM +, Matthew Wilcox (Oracle) wrote:
>> > A struct folio refers to an entire (possibly compound) page. A function
>> > which tak
one looks at it? Something like this
/*
* smp_cond_load_relaxed was found to have performance problems if
* implemented with spin_begin()/spin_end().
*/
I wonder if it should have a Fixes: tag to the original commit as
well.
Otherwise,
Acked-by: Nicholas Piggin
Thanks,
Nick
>
> Data is from three ben
Excerpts from Andrew Morton's message of April 16, 2021 4:55 am:
> On Thu, 15 Apr 2021 12:23:55 +0200 Christophe Leroy
> wrote:
>> > + * is done. STRICT_MODULE_RWX may require extra work to support this
>> > + * too.
>> > + */
>> >
>> > - return __vmalloc_node_range(size, 1, MODULES_VAD
Excerpts from Segher Boessenkool's message of April 14, 2021 7:58 am:
> On Tue, Apr 13, 2021 at 06:33:19PM +0200, Christophe Leroy wrote:
>> >On 12/04/2021 at 23:54, Segher Boessenkool wrote:
>> >On Thu, Apr 08, 2021 at 03:33:44PM +, Christophe Leroy wrote:
>> >>For clear bits, on 32 bits 'rlw
Excerpts from Michael Ellerman's message of April 1, 2021 12:39 pm:
> Segher Boessenkool writes:
>> On Wed, Mar 31, 2021 at 08:58:17PM +1100, Michael Ellerman wrote:
>>> So perhaps:
>>>
>>> EXC_SYSTEM_RESET
>>> EXC_MACHINE_CHECK
>>> EXC_DATA_STORAGE
>>> EXC_DATA_SEGMENT
>>> EXC_INST_STO
Excerpts from Segher Boessenkool's message of April 2, 2021 2:11 am:
> On Thu, Apr 01, 2021 at 10:55:58AM +0800, Xiongwei Song wrote:
>> Segher Boessenkool wrote on Thursday, April 1, 2021, at 6:15 AM:
>>
>> > On Wed, Mar 31, 2021 at 08:58:17PM +1100, Michael Ellerman wrote:
>> > > So perhaps:
>> > >
>> > > EXC_SYSTE
306 25.473
0-1276.223 27.814 28.029
Signed-off-by: Nicholas Piggin
---
kernel/irq/spurious.c | 4
1 file changed, 4 insertions(+)
diff --git a/kernel/irq/spurious.c b/kernel/irq/spurious.c
index f865e5f4d382..c481d8458325 100644
--- a/kernel/irq/spurious.c
+++ b/kernel/
hs
*** BLURB HERE ***
Nicholas Piggin (14):
ARM: mm: add missing pud_page define to 2-level page tables
mm/vmalloc: fix HUGE_VMAP regression by enabling huge pages in
vmalloc_to_page
mm: apply_to_pte_range warn and fail if a large pte is encountered
mm/vmalloc: rename vmap_*_range vmap_
pings")
Reviewed-by: Miaohe Lin
Reviewed-by: Christoph Hellwig
Signed-off-by: Nicholas Piggin
---
mm/vmalloc.c | 41 ++---
1 file changed, 26 insertions(+), 15 deletions(-)
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 4f5f8c907897..98e697ac764c 10064
ARM uses its own PMD folding scheme which is missing pud_page, which
should just pass through to pmd_page. Move this from the 3-level
page table header to the common header.
Cc: Russell King
Cc: Ding Tianhong
Cc: linux-arm-ker...@lists.infradead.org
Signed-off-by: Nicholas Piggin
---
arch/arm/include/asm
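A sketch of the passthrough for the 2-level case (the macro body mirrors the
existing 3-level definition from memory and should be checked against it):

#define pud_page(pud)           pmd_page(__pmd(pud_val(pud)))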
This will be used as a generic kernel virtual mapping function, so
re-name it in preparation.
Reviewed-by: Miaohe Lin
Reviewed-by: Christoph Hellwig
Signed-off-by: Nicholas Piggin
---
mm/ioremap.c | 64 +++-
1 file changed, 33 insertions(+), 31
apply_to_pte_range might mistake a large pte for bad, or treat it as a
page table, resulting in a crash or corruption. Add a test to warn and
return error if large entries are found.
Reviewed-by: Miaohe Lin
Reviewed-by: Christoph Hellwig
Signed-off-by: Nicholas Piggin
---
mm/memory.c | 66
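A sketch of the added guard at one level (shown for the pmd walk; pmd_leaf() is
used here to mean "huge mapping, not a page table", and the function name and
abbreviated loop body are illustrative):

static int apply_check_pmd_range(pmd_t *pmd, unsigned long addr,
                                 unsigned long end)
{
        unsigned long next;

        do {
                next = pmd_addr_end(addr, end);
                if (pmd_none(*pmd))
                        continue;
                /* a large mapping is not a page table we can descend into */
                if (WARN_ON_ONCE(pmd_leaf(*pmd)))
                        return -EINVAL;
                /* ... descend to the pte level and apply the callback ... */
        } while (pmd++, addr = next, addr != end);

        return 0;
}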
d.org
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Borislav Petkov
Cc: x...@kernel.org
Cc: "H. Peter Anvin"
Reviewed-by: Ding Tianhong
Acked-by: Catalin Marinas [arm64]
Signed-off-by: Nicholas Piggin
---
arch/arm64/include/asm/vmalloc.h | 8 ++
arch/arm64/mm/mmu.c
This allows unsupported levels to be constant folded away, and so
p4d_free_pud_page can be removed because it's no longer linked to.
Cc: linuxppc-...@lists.ozlabs.org
Acked-by: Michael Ellerman
Signed-off-by: Nicholas Piggin
---
arch/powerpc/include/asm/vmalloc.h
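For illustration, the kind of definitions this enables in an arch's
asm/vmalloc.h (a sketch following the arch_vmap_*_supported() naming; the
radix_enabled() condition is indicative of powerpc, not quoted from the patch):

static inline bool arch_vmap_p4d_supported(pgprot_t prot)
{
        return false;   /* never supported: callers constant-fold away */
}

static inline bool arch_vmap_pud_supported(pgprot_t prot)
{
        /* e.g. only the radix MMU can create huge pud mappings */
        return radix_enabled();
}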
This allows unsupported levels to be constant folded away, and so
p4d_free_pud_page can be removed because it's no longer linked to.
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Borislav Petkov
Cc: x...@kernel.org
Cc: "H. Peter Anvin"
Signed-off-by: Nicholas Piggin
---
arch/
The vmalloc mapper operates on a struct page * array rather than a
linear physical address, re-name it to make this distinction clear.
Reviewed-by: Miaohe Lin
Reviewed-by: Christoph Hellwig
Signed-off-by: Nicholas Piggin
---
mm/vmalloc.c | 16
1 file changed, 8 insertions
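For context, the distinction the rename encodes (declarations indicative, not
copied from the patch):

/* walks a linear physical range (ioremap-style) */
int vmap_range(unsigned long addr, unsigned long end,
               phys_addr_t phys_addr, pgprot_t prot,
               unsigned int max_page_shift);

/* consumes an array of struct page pointers (vmalloc-style) */
int vmap_pages_range(unsigned long addr, unsigned long end,
                     pgprot_t prot, struct page **pages,
                     unsigned int page_shift);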
This allows unsupported levels to be constant folded away, and so
p4d_free_pud_page can be removed because it's no longer linked to.
Cc: Catalin Marinas
Cc: Will Deacon
Cc: linux-arm-ker...@lists.infradead.org
Acked-by: Catalin Marinas
Signed-off-by: Nicholas Piggin
---
arch/arm64/in
If an architecture doesn't support a particular page table level as
a huge vmap page size then allow it to skip defining the support
query function.
Suggested-by: Christoph Hellwig
Signed-off-by: Nicholas Piggin
---
arch/arm64/include/asm/vmalloc.h | 7 +++
arch/powerpc/includ
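One way the generic header could supply the fallback; the #ifndef/#define
pairing is an assumption about the mechanism, shown only to illustrate the idea
of skipping the definition:

/*
 * An arch header that does support the level would do
 *   #define arch_vmap_pud_supported arch_vmap_pud_supported
 * before providing its own static inline definition.
 */
#ifndef arch_vmap_pud_supported
static inline bool arch_vmap_pud_supported(pgprot_t prot)
{
        return false;   /* level not supported unless the arch says so */
}
#endif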
As a side-effect, the order of the flush_cache_vmap() and
arch_sync_kernel_mappings() calls is switched, but that now matches
the other callers in this file.
Reviewed-by: Christoph Hellwig
Signed-off-by: Nicholas Piggin
---
mm/vmalloc.c | 16 +---
1 file changed, 13 insertions(+), 3
allocation; an option, nohugevmalloc, is added to disable it at boot.
Signed-off-by: Nicholas Piggin
---
arch/Kconfig| 11 ++
include/linux/vmalloc.h | 21
mm/page_alloc.c | 5 +-
mm/vmalloc.c| 216 +++-
4 files changed, 206
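A sketch of the boot-time switch, assuming a file-scope flag (here called
vmap_allow_huge) that the allocation path consults; the parameter name comes
from the changelog, the flag and function names are assumptions:

static bool __ro_after_init vmap_allow_huge = true;

static int __init set_nohugevmalloc(char *str)
{
        vmap_allow_huge = false;
        return 0;
}
early_param("nohugevmalloc", set_nohugevmalloc);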
This reduces TLB misses by nearly 30x on a `git diff` workload on a
2-node POWER9 (59,800 -> 2,100) and reduces CPU cycles by 0.54%, due
to vfs hashes being allocated with 2MB pages.
Cc: linuxppc-...@lists.ozlabs.org
Acked-by: Michael Ellerman
Signed-off-by: Nicholas Piggin
---
.../admin-gu
This is a generic kernel virtual memory mapper, not specific to ioremap.
Code is unchanged other than making vmap_range non-static.
Reviewed-by: Christoph Hellwig
Signed-off-by: Nicholas Piggin
---
include/linux/vmalloc.h | 3 +
mm/ioremap.c| 203
help reduce remote accesses
on well localised workloads, but that adds some complexity with indexing
and hotplug, so until we get a less artificial workload to test with,
keep it simple.
Signed-off-by: Nicholas Piggin
---
kernel/sched/wait_bit.c | 30 +++---
mm/filemap.c
Excerpts from Ingo Molnar's message of March 17, 2021 6:38 pm:
>
> * Nicholas Piggin wrote:
>
>> The page waitqueue hash is a bit small (256 entries) on very big systems. A
>> 16 socket 1536 thread POWER9 system was found to encounter hash collisions
>> and exc
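For reference, the hash being discussed has roughly this shape in mm/filemap.c
(reproduced from memory, so treat the details as indicative): 1 << 8 = 256
shared waitqueue heads, indexed by a pointer hash of the struct page.

#define PAGE_WAIT_TABLE_BITS 8
#define PAGE_WAIT_TABLE_SIZE (1 << PAGE_WAIT_TABLE_BITS)
static wait_queue_head_t page_wait_table[PAGE_WAIT_TABLE_SIZE] __cacheline_aligned;

static wait_queue_head_t *page_waitqueue(struct page *page)
{
        return &page_wait_table[hash_ptr(page, PAGE_WAIT_TABLE_BITS)];
}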
Excerpts from Rasmus Villemoes's message of March 17, 2021 8:12 pm:
> On 17/03/2021 08.54, Nicholas Piggin wrote:
>
>> +#if CONFIG_BASE_SMALL
>> +static const unsigned int page_wait_table_bits = 4;
>> static wait_queue_head_t page_wait_table[PAGE_WAIT_TABLE_
Excerpts from Linus Torvalds's message of March 18, 2021 5:26 am:
> On Wed, Mar 17, 2021 at 3:44 AM Nicholas Piggin wrote:
>>
>> Argh, because I didn't test small. Sorry I had the BASE_SMALL setting in
>> another patch and thought it would be a good idea to mash
Excerpts from Andrew Morton's message of March 18, 2021 8:58 am:
> On Wed, 17 Mar 2021 16:23:48 +1000 Nicholas Piggin wrote:
>
>>
>> *** BLURB HERE ***
>>
>
> That's really not what it means ;)
Sigh, wasn't having a good yesterday.
> Cou
Thanks for working on this; I think it's a nice cleanup and helps
non-powerpc people understand the code a bit better.
Excerpts from Xiongwei Song's message of April 10, 2021 12:28 am:
> From: Xiongwei Song
>
> Create a new header named traps.h, define macros to list ppc interrupt
> types in tra
also prevents the cpumask
from being trimmed back to local mode, which means continual broadcast
IPIs or TLBIEs are needed for TLB flushing. This patch prevents that
situation too.
Signed-off-by: Nicholas Piggin
---
arch/powerpc/include/asm/book3s/64/mmu.h | 12
arch/power
This fixes a race in the powerpc mm_cpumask code. I hope the core kernel
patch looks okay; we could take it through the powerpc tree with
an ack from someone (Peter or Thomas, perhaps?)
Thanks,
Nick
Nicholas Piggin (2):
kernel/cpu: add arch override for clear_tasks_mm_cpumask() mm handling
Signed-off-by: Nicholas Piggin
---
kernel/cpu.c | 6 +-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/kernel/cpu.c b/kernel/cpu.c
index 6ff2578ecf17..2b8d7a5db383 100644
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -815,6 +815,10 @@ void __init cpuhp_threads_init(void)
}
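A sketch of what the arch override could look like; the hook name
arch_clear_mm_cpumask_cpu() is inferred from the patch title and its exact
form here is an assumption:

#ifndef arch_clear_mm_cpumask_cpu
#define arch_clear_mm_cpumask_cpu(cpu, mm) \
        cpumask_clear_cpu(cpu, mm_cpumask(mm))
#endif

/* in clear_tasks_mm_cpumask(), for the dying CPU */
arch_clear_mm_cpumask_cpu(cpu, mm);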
Excerpts from Andy Lutomirski's message of December 4, 2020 3:26 pm:
> The core scheduler isn't a great place for
> membarrier_mm_sync_core_before_usermode() -- the core scheduler doesn't
> actually know whether we are lazy. With the old code, if a CPU is
> running a membarrier-registered task, go
Excerpts from Andy Lutomirski's message of December 4, 2020 3:26 pm:
> This is a mockup. It's designed to illustrate the algorithm and how the
> code might be structured. There are several things blatantly wrong with
> it:
>
> The coding style is not up to kernel standards. I have prototypes in
Excerpts from Edgecombe, Rick P's message of December 1, 2020 6:21 am:
> On Sun, 2020-11-29 at 01:25 +1000, Nicholas Piggin wrote:
>> Support huge page vmalloc mappings. Config option
>> HAVE_ARCH_HUGE_VMALLOC
>> enables support on architectures that define HAVE_ARCH_HUG
Excerpts from Edgecombe, Rick P's message of December 5, 2020 4:33 am:
> On Fri, 2020-12-04 at 18:12 +1000, Nicholas Piggin wrote:
>> Excerpts from Edgecombe, Rick P's message of December 1, 2020 6:21
>> am:
>> > On Sun, 2020-11-29 at 01:25 +1000, Nicholas Pigg
Excerpts from Andy Lutomirski's message of December 5, 2020 12:37 am:
>
>
>> On Dec 3, 2020, at 11:54 PM, Nicholas Piggin wrote:
>>
>> Excerpts from Andy Lutomirski's message of December 4, 2020 3:26 pm:
>>> This is a mockup. It's designed
apply_to_pte_range might mistake a large pte for bad, or treat it as a
page table, resulting in a crash or corruption. Add a test to warn and
return error if large entries are found.
Signed-off-by: Nicholas Piggin
---
mm/memory.c | 66 +++--
1
The vmalloc mapper operates on a struct page * array rather than a
linear physical address, re-name it to make this distinction clear.
Signed-off-by: Nicholas Piggin
---
mm/vmalloc.c | 16
1 file changed, 8 insertions(+), 8 deletions(-)
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
This allows unsupported levels to be constant folded away, and so
p4d_free_pud_page can be removed because it's no longer linked to.
Cc: linuxppc-...@lists.ozlabs.org
Acked-by: Michael Ellerman
Signed-off-by: Nicholas Piggin
---
arch/powerpc/include/asm/vmalloc.h
This will be used as a generic kernel virtual mapping function, so
re-name it in preparation.
Signed-off-by: Nicholas Piggin
---
mm/ioremap.c | 64 +++-
1 file changed, 33 insertions(+), 31 deletions(-)
diff --git a/mm/ioremap.c b/mm/ioremap.c
d.org
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Borislav Petkov
Cc: x...@kernel.org
Cc: "H. Peter Anvin"
Acked-by: Catalin Marinas [arm64]
Signed-off-by: Nicholas Piggin
---
arch/arm64/include/asm/vmalloc.h | 8 +++
arch/arm64/mm/mmu.c | 10 +--
arch/powe
This allows unsupported levels to be constant folded away, and so
p4d_free_pud_page can be removed because it's no longer linked to.
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Borislav Petkov
Cc: x...@kernel.org
Cc: "H. Peter Anvin"
Signed-off-by: Nicholas Piggin
---
arch/
As a side-effect, the order of the flush_cache_vmap() and
arch_sync_kernel_mappings() calls is switched, but that now matches
the other callers in this file.
Signed-off-by: Nicholas Piggin
---
mm/vmalloc.c | 16 +---
1 file changed, 13 insertions(+), 3 deletions(-)
diff --git a/mm
misses by nearly 30x on a `git diff` workload on a 2-node
POWER9 (59,800 -> 2,100) and reduces CPU cycles by 0.54%.
This can result in more internal fragmentation and memory overhead for a
given allocation; an option, nohugevmalloc, is added to disable it at boot.
Signed-off-by: Nicholas Pig
pings")
Signed-off-by: Nicholas Piggin
---
mm/vmalloc.c | 41 ++---
1 file changed, 26 insertions(+), 15 deletions(-)
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 6ae491a8b210..f85124e88bdb 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -34,7 +34,7 @@
This allows unsupported levels to be constant folded away, and so
p4d_free_pud_page can be removed because it's no longer linked to.
Cc: Catalin Marinas
Cc: Will Deacon
Cc: linux-arm-ker...@lists.infradead.org
Acked-by: Catalin Marinas
Signed-off-by: Nicholas Piggin
---
arch/arm64/in
Cc: linuxppc-...@lists.ozlabs.org
Signed-off-by: Nicholas Piggin
---
Documentation/admin-guide/kernel-parameters.txt | 2 ++
arch/powerpc/Kconfig| 1 +
arch/powerpc/kernel/module.c| 13 +++--
3 files changed, 14 insertions(+), 2 deletions
Since v2:
- Rebased on vmalloc cleanups, split series into simpler pieces.
- Fixed several compile errors and warnings
- Keep the page array and accounting in small page units because
struct vm_struct is an interface (this should fix x86 vmap stack debug
assert). [Thanks Zefan]
Nicholas Piggi
This is a generic kernel virtual memory mapper, not specific to ioremap.
Signed-off-by: Nicholas Piggin
---
include/linux/vmalloc.h | 3 +
mm/ioremap.c| 197
mm/vmalloc.c| 196 +++
3 files
Excerpts from Andy Lutomirski's message of December 3, 2020 3:09 pm:
> On Tue, Dec 1, 2020 at 6:50 PM Nicholas Piggin wrote:
>>
>> Excerpts from Andy Lutomirski's message of November 29, 2020 3:55 am:
>> > On Sat, Nov 28, 2020 at 8:02 AM Nicholas Piggin wro
Excerpts from Andy Lutomirski's message of December 6, 2020 2:11 am:
>
>> On Dec 5, 2020, at 12:00 AM, Nicholas Piggin wrote:
>>
>>
>> I disagree. Until now nobody following it noticed that the mm gets
>> un-lazied in other cases, because that wa
Excerpts from Andy Lutomirski's message of December 6, 2020 10:36 am:
> On Sat, Dec 5, 2020 at 3:15 PM Nicholas Piggin wrote:
>>
>> Excerpts from Andy Lutomirski's message of December 6, 2020 2:11 am:
>> >
>
>> If an mm was lazy tlb for a kernel t
Excerpts from Andy Lutomirski's message of November 29, 2020 10:36 am:
> On Sat, Nov 28, 2020 at 8:02 AM Nicholas Piggin wrote:
>>
>> NOMMU systems could easily go without this and save a bit of code
>> and the refcount atomics, because their mm switch is a no-op. I
&g
Excerpts from Andy Lutomirski's message of November 29, 2020 3:55 am:
> On Sat, Nov 28, 2020 at 8:02 AM Nicholas Piggin wrote:
>>
>> And get rid of the generic sync_core_before_usermode facility. This is
>> functionally a no-op in the core scheduler code, but it also ca
Excerpts from Andy Lutomirski's message of November 29, 2020 10:38 am:
> On Sat, Nov 28, 2020 at 8:01 AM Nicholas Piggin wrote:
>>
>> This is called at points where a lazy mm is switched away or made not
>> lazy (by its owner switching back).
>>
>> Signed-of
Excerpts from Andy Lutomirski's message of November 29, 2020 1:54 pm:
> On Sat, Nov 28, 2020 at 8:02 AM Nicholas Piggin wrote:
>>
>> On big systems, the mm refcount can become highly contended when doing
>> a lot of context switching with threaded applications (particu
ote:
>>
>> On Sat, Nov 28, 2020 at 7:54 PM Andy Lutomirski wrote:
>> >
>> > On Sat, Nov 28, 2020 at 8:02 AM Nicholas Piggin wrote:
>> > >
>> > > On big systems, the mm refcount can become highly contended when doing
>> > > a lot
Excerpts from Peter Zijlstra's message of December 3, 2020 6:44 pm:
> On Wed, Dec 02, 2020 at 09:25:51PM -0800, Andy Lutomirski wrote:
>
>> power: same as ARM, except that the loop may be rather larger since
>> the systems are bigger. But I imagine it's still faster than Nick's
>> approach -- a c
se up by using
> non-magic start/stop symbols for all sections, and relying on KEEP()
> instead where needed.
>
>> There are a lot of KEEP usage. Perhaps some can be dropped to facilitate
>> ld --gc-sections.
>
> I see a lot of these were added by Nick Piggin (added
Excerpts from Christophe Leroy's message of March 11, 2021 10:38 pm:
>
>
> On 11/03/2021 at 11:38, Christophe Leroy wrote:
>>
>>
>> On 10/03/2021 at 02:33, Nicholas Piggin wrote:
>>> Excerpts from Christophe Leroy's message of March 9, 2021 10:
Excerpts from Christophe Leroy's message of March 5, 2021 6:54 pm:
>
>
> On 09/02/2021 at 08:49, Nicholas Piggin wrote:
>> Excerpts from Christophe Leroy's message of February 9, 2021 4:18 pm:
>>>
>>>
>>> On 09/02/2021 at 02:11, Nicholas Pi
user time accounting).
>
> Fixes: 3a96570ffceb ("powerpc: convert interrupt handlers to use wrappers")
> Signed-off-by: Christophe Leroy
Reviewed-by: Nicholas Piggin
This should go in as a fix for this release, I think.
> ---
> arch/powerpc/include/asm/interrupt.h | 3 ++
ough here and fall through again and warn again, etc.
Putting in the infinite loop is good enough, I think (and better than
what was there previously).
Otherwise
Reviewed-by: Nicholas Piggin
Thanks,
Nick
Excerpts from Christophe Leroy's message of March 9, 2021 10:09 pm:
> book3e/64 is the last one calling __bad_page_fault()
> from assembly.
>
> Save non volatile registers before calling do_page_fault()
> and modify do_page_fault() to call __bad_page_fault()
> for all platforms.
>
> Then it can b
Excerpts from Christophe Leroy's message of March 9, 2021 10:10 pm:
> No need to do that in assembly, do it in C.
Hmm. No issues with the patch as such, but why does ppc32 need this but
not 64? AFAIKS 64 sets this when a thread is created.
Thanks,
Nick
>
> Signed-off-by: Christophe Leroy
> --
tub
> for when CONFIG_PPC_KUAP is not selected.
Looks pretty straightforward to me.
While you're renaming things, could kuap_check_amr() be changed to
kuap_assert_locked() or similar? Otherwise,
Reviewed-by: Nicholas Piggin
>
> Signed-off-by: Christophe Leroy
> ---
> arch/powerp
Excerpts from Leonardo Bras's message of February 5, 2021 5:01 pm:
> Hey Nick, thanks for reviewing :)
>
> On Fri, 2021-02-05 at 16:28 +1000, Nicholas Piggin wrote:
>> Excerpts from Leonardo Bras's message of February 5, 2021 4:06 pm:
>> > Before guest entry, TB
Excerpts from Christophe Leroy's message of February 5, 2021 6:56 pm:
> For unimplemented instructions or unimplemented SPRs, the 8xx triggers
> a "Software Emulation Exception" (0x1000). That interrupt doesn't set
> reason bits in SRR1 as the "Program Check Exception" does.
>
> Go through emulati
Excerpts from David Laight's message of January 25, 2021 10:24 pm:
> From: Christophe Leroy
>> Sent: 25 January 2021 09:15
>>
>> On 24/01/2021 at 09:22, Nicholas Piggin wrote:
>> > Support huge page vmalloc mappings. Config option HAVE_ARCH_HUGE_VMALLOC
&g
Excerpts from Christophe Leroy's message of January 26, 2021 12:48 am:
> Only PPC64 has scv. No need to check the 0x7ff0 trap on PPC32.
>
> And ignore the scv parameter in syscall_exit_prepare (Save 14 cycles
> 346 => 332 cycles)
>
> Signed-off-by: Christophe Leroy
> ---
> arch/powerpc/kernel/e
Excerpts from Christophe Leroy's message of January 26, 2021 12:48 am:
> syscall_64.c will be reused almost as is for PPC32.
>
> Rename it syscall.c
Could you rename it to interrupt.c instead? A system call is an
interrupt, and the file now also has code to return from other
interrupts as well,
Excerpts from Ding Tianhong's message of January 26, 2021 4:59 pm:
> On 2021/1/26 12:45, Nicholas Piggin wrote:
>> Support huge page vmalloc mappings. Config option HAVE_ARCH_HUGE_VMALLOC
>> enables support on architectures that define HAVE_ARCH_HUGE_VMAP and
>> suppor
Excerpts from Christophe Leroy's message of January 26, 2021 12:48 am:
> Save r3 in regs->orig_r3 in system_call_exception()
>
> Signed-off-by: Christophe Leroy
> ---
> arch/powerpc/kernel/entry_64.S | 1 -
> arch/powerpc/kernel/syscall.c | 2 ++
> 2 files changed, 2 insertions(+), 1 deletion(-
Excerpts from Christophe Leroy's message of January 26, 2021 12:48 am:
> When r3 is not modified, reload it from regs->orig_r3 to free
> volatile registers. This avoids a stack frame for the likely part
> of system_call_exception()
>
> Before the patch:
>
> c000b4d4 :
> c000b4d4: 7c 08 02 a6
Excerpts from Christophe Leroy's message of January 25, 2021 7:14 pm:
>
>
> On 24/01/2021 at 09:22, Nicholas Piggin wrote:
>> Support huge page vmalloc mappings. Config option HAVE_ARCH_HUGE_VMALLOC
>> enables support on architectures that define HAVE_ARCH_HUGE_VMAP
Excerpts from Christophe Leroy's message of January 25, 2021 6:42 pm:
>
>
> On 24/01/2021 at 09:22, Nicholas Piggin wrote:
>> This allows unsupported levels to be constant folded away, and so
>> p4d_free_pud_page can be removed because it's no longer linked t
apply_to_pte_range might mistake a large pte for bad, or treat it as a
page table, resulting in a crash or corruption. Add a test to warn and
return error if large entries are found.
Reviewed-by: Christoph Hellwig
Signed-off-by: Nicholas Piggin
---
mm/memory.c | 66
This will be used as a generic kernel virtual mapping function, so
re-name it in preparation.
Signed-off-by: Nicholas Piggin
---
mm/ioremap.c | 64 +++-
1 file changed, 33 insertions(+), 31 deletions(-)
diff --git a/mm/ioremap.c b/mm/ioremap.c
d.org
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Borislav Petkov
Cc: x...@kernel.org
Cc: "H. Peter Anvin"
Acked-by: Catalin Marinas [arm64]
Signed-off-by: Nicholas Piggin
---
arch/arm64/include/asm/vmalloc.h | 8 ++
arch/arm64/mm/mmu.c | 10 +--
arch/powe
The vmalloc mapper operates on a struct page * array rather than a
linear physical address, re-name it to make this distinction clear.
Reviewed-by: Christoph Hellwig
Signed-off-by: Nicholas Piggin
---
mm/vmalloc.c | 16
1 file changed, 8 insertions(+), 8 deletions(-)
diff