On Thu, Feb 20, 2020 at 11:04:51AM +0530, Santosh Sivaraj wrote:
> The TLB flush optimisation (a46cc7a90f: powerpc/mm/radix: Improve TLB/PWC
> flushes) may result in random memory corruption. Any concurrent page-table
> walk could end up with a Use-after-Free. Even on UP this might give issues,
From: Peter Zijlstra
Aneesh reported that:
tlb_flush_mmu()
  tlb_flush_mmu_tlbonly()
    tlb_flush()              <-- #1
  tlb_flush_mmu_free()
    tlb_table_flush()
      tlb_table_invalidate()
        tlb_flush_mmu_tlbonly()
From: Peter Zijlstra
Architectures for which we have hardware walkers of Linux page table
should flush TLB on mmu gather batch allocation failures and batch flush.
Some architectures like POWER support multiple translation modes (hash
and radix) and in the case of POWER only radix translation mo
From: "Aneesh Kumar K.V"
Patch series "Fixup page directory freeing", v4.
This is a repost of patch series from Peter with the arch specific changes
except ppc64 dropped. ppc64 changes are added here because we are redoing
the patch series on top of ppc64 changes. This makes it easy to backpor
From: Peter Zijlstra
Make issuing a TLB invalidate for page-table pages the normal case.
The reason is twofold:
- too many invalidates is safer than too few,
- most architectures use the linux page-tables natively
and would thus require this.
Make it an opt-out, instead of an opt-in.
No
From: Will Deacon
It is common for architectures with hugepage support to require only a
single TLB invalidation operation per hugepage during unmap(), rather than
iterating through the mapping at a PAGE_SIZE increment. Currently,
however, the level in the page table where the unmap() operation o
The TLB flush optimisation (a46cc7a90f: powerpc/mm/radix: Improve TLB/PWC
flushes) may result in random memory corruption. Any concurrent page-table walk
could end up with a Use-after-Free. Even on UP this might give issues, since
mmu_gather is preemptible these days. An interrupt or preempted task
From: Peter Zijlstra
Some architectures require different TLB invalidation instructions
depending on whether it is only the last-level of page table being
changed, or whether there are also changes to the intermediate
(directory) entries higher up the tree.
Add a new bit to the flags bitfield in
ping...
on 2020/2/13 11:00, Jason Yan wrote:
Hi everyone, any comments or suggestions?
Thanks,
Jason
on 2020/2/6 10:58, Jason Yan wrote:
This is a try to implement KASLR for Freescale BookE64 which is based on
my earlier implementation for Freescale BookE32:
https://patchwork.ozlabs.org/proje
On Thu, Feb 6, 2020 at 2:41 PM Roy Pledge wrote:
>
> On 12/12/2019 12:01 PM, Youri Querry wrote:
> > This patch set consists of:
> > - We added an interface to enqueue several packets at a time and
> >improve performance.
> > - Make the algorithm decisions once at initialization and use
> >
On 02/19/2020 at 4:21 PM Christophe Leroy wrote:
> > Radu Rendec wrote:
> >> On 02/19/2020 at 10:11 AM Radu Rendec wrote:
> >>> On 02/18/2020 at 1:08 PM Christophe Leroy wrote:
> On 18/02/2020 at 18:07, Radu Rendec wrote:
> > The saved NIP seems to be broken inside machine_check_
Christophe Leroy wrote:
Radu Rendec wrote:
On 02/19/2020 at 10:11 AM Radu Rendec wrote:
On 02/18/2020 at 1:08 PM Christophe Leroy wrote:
On 18/02/2020 at 18:07, Radu Rendec wrote:
> The saved NIP seems to be broken inside machine_check_exception() on
> MPC8378, running Linux 4.9.
Radu Rendec wrote:
On 02/19/2020 at 10:11 AM Radu Rendec wrote:
On 02/18/2020 at 1:08 PM Christophe Leroy wrote:
> On 18/02/2020 at 18:07, Radu Rendec wrote:
> > The saved NIP seems to be broken inside machine_check_exception() on
> > MPC8378, running Linux 4.9.191. The value is 0x900 m
On Tue, Feb 18, 2020 at 02:39:36PM +0800, Shengjiu Wang wrote:
> EASRC (Enhanced Asynchronous Sample Rate Converter) is a new
> IP module found on i.MX8MN.
>
> Signed-off-by: Shengjiu Wang
> ---
> .../devicetree/bindings/sound/fsl,easrc.txt | 57 +++
> 1 file changed, 57 insert
On 02/19/2020 at 10:11 AM Radu Rendec wrote:
> On 02/18/2020 at 1:08 PM Christophe Leroy wrote:
> > On 18/02/2020 at 18:07, Radu Rendec wrote:
> > > The saved NIP seems to be broken inside machine_check_exception() on
> > > MPC8378, running Linux 4.9.191. The value is 0x900 most of the times,
>
Hello Michael,
On Tue, 2020-02-18 at 15:36 +1100, Michael Ellerman wrote:
> In kvmppc_unmap_free_pte() in book3s_64_mmu_radix.c, we use the
> non-constant value PTE_INDEX_SIZE to clear a PTE page.
>
> We can instead use the constant RADIX_PTE_INDEX_SIZE, because we know
> this code will only be r
The ioreadX() helpers have an inconsistent interface. On some architectures
void *__iomem address argument is a pointer to const, on some not.
Implementations of ioreadX() do not modify the memory under the address
so they can be converted to a "const" version for const-safety and
consistency among
The ioreadX() and ioreadX_rep() helpers have an inconsistent interface. On
some architectures void *__iomem address argument is a pointer to const,
on some not.
Implementations of ioreadX() do not modify the memory under the address
so they can be converted to a "const" version for const-safety and
Hi,
Changes since v1
https://lore.kernel.org/lkml/1578415992-24054-1-git-send-email-k...@kernel.org/
1. Constify also ioreadX_rep() and mmio_insX(),
2. Squash lib+alpha+powerpc+parisc+sh into one patch for bisectability,
3. Add acks and reviews,
4. Re-order patches so all optiona
In case (k_start & PAGE_MASK) doesn't equal (k_start), 'va' will never be
NULL although 'block' is NULL
Check the return of memblock_alloc() directly instead of
the resulting address in the loop.
Fixes: 509cd3f2b473 ("powerpc/32: Simplify KASAN init")
Signed-off-by: Christophe Leroy
---
arch/po
With CONFIG_KASAN_VMALLOC, new page tables are created at the time
shadow memory for the vmalloc area is unmapped. If some parts of the
page table still have entries to the zero page shadow memory, the
entries are wrongly marked RW.
Make sure new page tables are populated with RO entries once
kasan_rem
On 02/18/2020 at 1:08 PM Christophe Leroy wrote:
> On 18/02/2020 at 18:07, Radu Rendec wrote:
> > The saved NIP seems to be broken inside machine_check_exception() on
> > MPC8378, running Linux 4.9.191. The value is 0x900 most of the times,
> > but I have seen other weird values.
> >
> > I've be
The 'mem=' option is an easy way to put high pressure on memory during
testing. Hence, after applying the memory limit, the actual usable
memory instead of total memory should be considered when reserving
memory for crashkernel. Otherwise boot-up may experience OOM issues.
E.g. it would reserve 4G prio
On Wed, Feb 19, 2020 at 01:07:55PM +0100, Christophe Leroy wrote:
>
> On 16/02/2020 at 09:18, Mike Rapoport wrote:
> > diff --git a/arch/powerpc/mm/ptdump/ptdump.c
> > b/arch/powerpc/mm/ptdump/ptdump.c
> > index 206156255247..7bd4b81d5b5d 100644
> > --- a/arch/powerpc/mm/ptdump/ptdump.c
> > +++
On Wed, Feb 19, 2020 at 10:52:16AM +0100, Arnd Bergmann wrote:
> On Wed, Feb 19, 2020 at 9:45 AM Christophe Leroy
> wrote:
> > On 16/02/2020 at 19:10, Arnd Bergmann wrote:
> > > On Sat, Jan 11, 2020 at 12:33 PM Segher Boessenkool
> > > wrote:
> > >>
> > >> On Fri, Jan 10, 2020 at 07:45:44AM +01
On Tue, 2020-02-18 at 14:09:29 UTC, Christophe Leroy wrote:
> Fixes: 12c3f1fd87bf ("powerpc/32s: get rid of CPU_FTR_601 feature")
> Cc: sta...@vger.kernel.org
> Signed-off-by: Christophe Leroy
Applied to powerpc fixes, thanks.
https://git.kernel.org/powerpc/c/9eb425b2e04e0e3006adffea5bf5f227a896
On Mon, 2020-02-17 at 04:13:43 UTC, Oliver O'Halloran wrote:
> The ls (lookup symbol) and zr (reboot) commands use xmon's getstring()
> helper to read a string argument from the xmon prompt. This function skips
> over leading whitespace, but doesn't check if the first "non-whitespace"
> character i
On Sat, 2020-02-15 at 10:14:25 UTC, Christophe Leroy wrote:
> hash_page() needs to read page tables from kernel memory. When entire
> kernel memory is mapped by BATs, which is normally the case when
> CONFIG_STRICT_KERNEL_RWX is not set, it works even if the page hosting
> the page table is not ref
On Fri, 2020-02-14 at 08:39:50 UTC, Christophe Leroy wrote:
> With CONFIG_VMAP_STACK, data MMU has to be enabled
> to read data on the stack.
>
> Fixes: cd08f109e262 ("powerpc/32s: Enable CONFIG_VMAP_STACK")
> Signed-off-by: Christophe Leroy
Applied to powerpc fixes, thanks.
https://git.kernel.
On Fri, 2020-02-14 at 06:53:00 UTC, Christophe Leroy wrote:
> power_save_ppc32_restore() is called during exception entry, before
> re-enabling the MMU. It subtracts KERNELBASE from the address
> of nap_save_msscr0 to access it.
>
> With CONFIG_VMAP_STACK enabled, data MMU translation has already
On Tue, 2020-02-11 at 03:38:29 UTC, Gustavo Luiz Duarte wrote:
> After a treclaim, we expect to be in non-transactional state. If we
> don't clear the current thread's MSR[TS] before we get preempted, then
> tm_recheckpoint_new_task() will recheckpoint and we get rescheduled in
> suspended trans
On Sun, 2020-02-09 at 18:14:42 UTC, Christophe Leroy wrote:
> In the ITLB miss handler, the line supposed to clear bits 20-23 of the
> L2 ITLB entry is buggy and in fact does nothing, leading to an undefined
> value which could allow execution when it shouldn't.
>
> Properly do the clearing with the releva
On Sun, 2020-02-09 at 16:02:41 UTC, Christophe Leroy wrote:
> With HW assistance all page tables must be 4k aligned, the 8xx
> drops the last 12 bits during the walk.
>
> Redefine HUGEPD_SHIFT_MASK to mask last 12 bits out.
> HUGEPD_SHIFT_MASK is used for alignment of the page table cache.
>
> Fix
On Fri, 2020-02-07 at 04:57:31 UTC, Sam Bobroff wrote:
> Recovering a dead PHB can currently cause a deadlock as the PCI
> rescan/remove lock is taken twice.
>
> This is caused as part of an existing bug in
> eeh_handle_special_event(). The pe is processed while traversing the
> PHBs even though t
On Thu, 2020-02-06 at 13:50:28 UTC, Christophe Leroy wrote:
> Commit 55c8fc3f4930 ("powerpc/8xx: reintroduce 16K pages with HW
> assistance") redefined pte_t as a struct of 4 pte_basic_t, because
> in 16K pages mode there are four identical entries in the
> page table. But the size of hugepage tabl
On 16/02/2020 at 09:18, Mike Rapoport wrote:
From: Mike Rapoport
Implement primitives necessary for the 4th level folding, add walks of p4d
level where appropriate and replace 5level-fixup.h with pgtable-nop4d.h.
Signed-off-by: Mike Rapoport
Tested-by: Christophe Leroy # 8xx and 83xx
--
Tulio Magno Quites Machado Filho's on January 30, 2020 1:51 am:
> Nicholas Piggin writes:
>
>> Adhemerval Zanella's on January 29, 2020 3:26 am:
>>>
>>> We already had to push a similar hack where glibc used to abort transactions
>>> prior to syscalls to avoid some side-effects on the kernel (commit 56
In xmon we have two variables that are used by the dump commands.
There's ndump which is the number of bytes to dump using 'd', and
nidump which is the number of instructions to dump using 'di'.
ndump starts as 64 and nidump starts as 16, but both can be set by the
user.
It's fairly common to be
On Wed, Feb 19, 2020 at 9:45 AM Christophe Leroy
wrote:
> On 16/02/2020 at 19:10, Arnd Bergmann wrote:
> > On Sat, Jan 11, 2020 at 12:33 PM Segher Boessenkool
> > wrote:
> >>
> >> On Fri, Jan 10, 2020 at 07:45:44AM +0100, Christophe Leroy wrote:
> >>> Le 09/01/2020 à 21:07, Segher Boessenkool a
On 16/02/2020 at 19:10, Arnd Bergmann wrote:
On Sat, Jan 11, 2020 at 12:33 PM Segher Boessenkool
wrote:
On Fri, Jan 10, 2020 at 07:45:44AM +0100, Christophe Leroy wrote:
On 09/01/2020 at 21:07, Segher Boessenkool wrote:
It looks like the compiler did loop peeling. What GCC version is
Since kprobes does not handle events happening in real mode, all
functions running with the MMU disabled have to be blacklisted.
As already done for PPC64, do it for PPC32.
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/ppc_asm.h | 10 +++
arch/powerpc/kernel/cpu_setup_6xx.S
At the time being we have something like

	if (something) {
		p = get();
		if (p) {
			if (something_wrong)
				goto out;
			...
			return;
		} else if (a != b
49 matches