From: Mike Rapoport
For architectures that enable ARCH_HAS_SET_MEMORY, having the ability to
verify that a page is mapped in the kernel direct map can be useful
regardless of hibernation.
Add a RISC-V implementation of kernel_page_present() and update its forward
declarations and stubs to be a part of
From: Mike Rapoport
The design of DEBUG_PAGEALLOC presumes that __kernel_map_pages() must never
fail. With this assumption it wouldn't be safe to allow general usage of
this function.
Moreover, some architectures that implement __kernel_map_pages() have this
function guarded by #ifdef DEBUG_PAGE
From: Mike Rapoport
When DEBUG_PAGEALLOC or ARCH_HAS_SET_DIRECT_MAP is enabled, a page may not
be present in the direct map and has to be explicitly mapped before it
can be copied.
Introduce hibernate_map_page() and hibernate_unmap_page() that will
explicitly use set_direct_map_{default,inval
From: Mike Rapoport
Instead of calling slab_kernel_map() with a 'map' parameter to remap pages
when DEBUG_PAGEALLOC is enabled, split it into dedicated helpers
slab_kernel_map() and slab_kernel_unmap().
Signed-off-by: Mike Rapoport
---
mm/slab.c | 26 +++---
1 file changed, 15 insertions
From: Mike Rapoport
When CONFIG_DEBUG_PAGEALLOC is enabled, pages are unmapped from the kernel
direct mapping after free_pages(). The pages then need to be mapped back
before they can be used. These mapping operations use
__kernel_map_pages() guarded with debug_pagealloc_enabled().
The only
From: Mike Rapoport
Hi,
During recent discussion about KVM protected memory, David raised a concern
about usage of __kernel_map_pages() outside of DEBUG_PAGEALLOC scope [1].
Indeed, for architectures that define CONFIG_ARCH_HAS_SET_DIRECT_MAP it is
possible that __kernel_map_pages() would fail,
https://bugzilla.kernel.org/show_bug.cgi?id=209733
--- Comment #2 from Cameron (c...@neo-zeon.de) ---
Verified this happens with 5.9.6 and the Debian vendor kernel
linux-image-5.9.0-1-powerpc64le.
Might also be worth mentioning this is occurring with qemu-system-ppc package
version 1:3.1+dfsg-
From: Kaixu Xia
Fix the following coccicheck warning:
./arch/powerpc/kvm/booke.c:503:6-16: WARNING: Comparison to bool
./arch/powerpc/kvm/booke.c:505:6-17: WARNING: Comparison to bool
./arch/powerpc/kvm/booke.c:507:6-16: WARNING: Comparison to bool
Reported-by: Tosk Robot
Signed-off-by: Kaixu
On 06/11/2020 at 12:36, Christophe Leroy wrote:
Last use of RFI on PPC64 was removed by
commit b8e90cb7bc04 ("powerpc/64: Convert the syscall exit path to
use RFI_TO_USER/KERNEL").
Remove the macro.
Forget this crazy patch. I missed two RFI in head_64.S
Christophe
Signed-off-by:
On Nov 07 2020, Serge Belyshev wrote:
> Christophe Leroy writes:
>
>> When calling early_hash_table(), the kernel hasn't yet been
>> relocated to its linking address, so data must be addressed
>> with relocation offset.
>>
>> Add relocation offset to write into Hash in early_hash_table().
>>
>> R
On Fri, Nov 6, 2020 at 4:25 AM Michael Ellerman wrote:
> So something seems to have gone wrong linking this, I see eg:
>
10004a8c :
10004a8c:	2b 10 40 3c	lis     r2,4139
10004a90:	88 f7 42 38	addi    r2,r2,-2168
10004a94:	a6 02 08 7c	mflr    r0
>
From: Kaixu Xia
Fix the following coccinelle warnings:
./arch/powerpc/kvm/book3s_xics.c:476:3-15: WARNING: Assignment of 0/1 to bool
variable
./arch/powerpc/kvm/book3s_xics.c:504:3-15: WARNING: Assignment of 0/1 to bool
variable
Reported-by: Tosk Robot
Signed-off-by: Kaixu Xia
---
arch/pow
On Sat, Nov 07, 2020 at 08:12:13AM +0100, Gabriel Paubert wrote:
> On Sat, Nov 07, 2020 at 01:23:28PM +1000, Nicholas Piggin wrote:
> > ISA v2.06 (POWER7 and up) as well as e6500 support lbarx and lwarx.
>
> Hmm, lwarx exists since original Power AFAIR,
Almost: it was new on PowerPC.
Segher
Before commit 3f388f28639f ("panic: dump registers on panic_on_warn"),
__warn() was calling show_regs() when regs was not NULL, and
show_stack() otherwise.
After that commit, show_stack() is called regardless of whether
show_regs() has been called, leading to a duplicated Call Trace:
[7.
On 07/11/2020 at 03:33, Nicholas Piggin wrote:
It's often useful to know the register state for interrupts in
the stack frame. In the below example (with this patch applied),
the important information is the state of the page fault.
A blatant case like this probably rather should have the p
On 06/11/2020 at 16:59, Nicholas Piggin wrote:
This series attempts to improve the speed of interrupts and system calls
in two major ways.
Firstly, the SRR/HSRR registers do not need to be reloaded if they were
not used or clobbered for the duration of the interrupt.
Secondly, an alternate
Christophe Leroy writes:
> When calling early_hash_table(), the kernel hasn't yet been
> relocated to its linking address, so data must be addressed
> with relocation offset.
>
> Add relocation offset to write into Hash in early_hash_table().
>
> Reported-by: Erhard Furtner
> Reported-by: Andrea
On 05/11/2020 at 15:34, Nicholas Piggin wrote:
Christophe asked about doing this, most of the code is still in
asm but maybe it's slightly nicer? I don't know if it's worthwhile.
Er... I don't think I was asking for that, but why not, see later comments.
At first I was just asking to wri
On 29/10/2020 at 22:07, Andreas Schwab wrote:
On Oct 01 2020, Christophe Leroy wrote:
At the time being, an early hash table is set up when
CONFIG_KASAN is selected.
There is nothing wrong with setting such an early hash table
all the time, even if it is not used. This is a statically
all
https://bugzilla.kernel.org/show_bug.cgi?id=209869
--- Comment #11 from Christophe Leroy (christophe.le...@csgroup.eu) ---
(In reply to Erhard F. from comment #10)
> (In reply to Christophe Leroy from comment #9)
> > Ok, what about 5.10-rc1 + KASAN without reverting the patch ?
> Nope, does no
When calling early_hash_table(), the kernel hasn't yet been
relocated to its linking address, so data must be addressed
with relocation offset.
Add relocation offset to write into Hash in early_hash_table().
Reported-by: Erhard Furtner
Reported-by: Andreas Schwab
Fixes: 69a1593abdbc ("powerpc/3
On 07/11/2020 at 04:23, Nicholas Piggin wrote:
ISA v2.06 (POWER7 and up) as well as e6500 support lbarx and lwarx.
Add a compile option that allows code to use it, and add support in
cmpxchg and xchg 8 and 16 bit values.
Do you mean lharx? Because lwarx exists on all powerpcs, I think.
22 matches