Hi All,
I debugged the bootwrapper code by printing messages in RAM using memcpy.
The bootwrapper code appears to be executing correctly. I verified that the
"serial_console_init" function executed successfully, but I am still not
able to get any of the "printf" output in the wrapper code on th
Hi Florian,
On 24/09/2018 21:17, Florian Fainelli wrote:
> On 09/24/2018 12:04 PM, Corentin Labbe wrote:
>> This patch converts the meson stmmac glue driver to use all of the
>> xxxsetbits32 functions.
>>
>> Signed-off-by: Corentin Labbe
>> ---
>> .../net/ethernet/stmicro/stmmac/dwmac-meson8b.c| 56
>>
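The setbits32-style accessors the patch converts to can be sketched in plain C. This is a userspace model for illustration only: the real kernel helpers operate on `__iomem` registers through the proper endianness-aware accessors, not plain pointers.

```c
#include <assert.h>
#include <stdint.h>

/* Userspace sketch of setbits32/clrbits32-style helpers.
 * Real kernel versions take __iomem pointers and use the
 * appropriate MMIO accessors; plain uint32_t pointers are
 * used here purely for illustration. */
static inline void setbits32(uint32_t *reg, uint32_t set)
{
	*reg |= set;          /* turn the given bits on */
}

static inline void clrbits32(uint32_t *reg, uint32_t clr)
{
	*reg &= ~clr;         /* turn the given bits off */
}

static inline void clrsetbits32(uint32_t *reg, uint32_t clr, uint32_t set)
{
	/* clear first, then set, in one read-modify-write */
	*reg = (*reg & ~clr) | set;
}
```

Converting a driver to these helpers replaces open-coded read/modify/write sequences with a single self-describing call.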
On Mon, Sep 10, 2018 at 07:19:14PM +0530, Nipun Gupta wrote:
> Nipun Gupta (7):
> Documentation: fsl-mc: add iommu-map device-tree binding for fsl-mc
> bus
> iommu/of: make of_pci_map_rid() available for other devices too
> iommu/of: support iommu configuration for fsl-mc devices
> iomm
Reading through the code and studying how mem_hotplug_lock is to be used,
I noticed that there are two places where we can end up calling
device_online()/device_offline() - online_pages()/offline_pages() without
the mem_hotplug_lock. And there are other places where we call
device_online()/device_o
remove_memory() is exported right now but requires the
device_hotplug_lock, which is not exported. So let's provide a variant
that takes the lock and only export that one.
The lock is already held in
arch/powerpc/platforms/pseries/hotplug-memory.c
drivers/acpi/acpi_memhotplug.c
add_memory() currently does not take the device_hotplug_lock, however
it is already called under the lock from
arch/powerpc/platforms/pseries/hotplug-memory.c
drivers/acpi/acpi_memhotplug.c
to synchronize against CPU hot-remove and similar.
In general, we should hold the device_hotplug
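The "export only the lock-taking variant" pattern described above can be sketched in userspace C. The names mirror the kernel's convention of a double-underscore internal variant, but the mutex is just a stand-in: the kernel's lock_device_hotplug()/unlock_device_hotplug() pair is not a plain pthread mutex.

```c
#include <assert.h>
#include <pthread.h>

/* Stand-in for the global device_hotplug_lock (a pthread mutex is
 * used purely for illustration). */
static pthread_mutex_t device_hotplug_lock = PTHREAD_MUTEX_INITIALIZER;
static int removed_count;

/* Internal variant: the caller must already hold the lock.
 * This one stays private to the core and is NOT exported. */
static int __remove_memory_locked(int nid)
{
	(void)nid;
	removed_count++;   /* pretend we tore down the memory block */
	return 0;
}

/* Public variant: takes the lock itself, so external callers
 * cannot forget it. Only this one would be exported. */
static int remove_memory(int nid)
{
	int ret;

	pthread_mutex_lock(&device_hotplug_lock);
	ret = __remove_memory_locked(nid);
	pthread_mutex_unlock(&device_hotplug_lock);
	return ret;
}
```

Sites that already hold the lock (like the pseries and ACPI paths listed above) would call the locked variant directly.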
There seem to be some problems as a result of 30467e0b3be ("mm, hotplug:
fix concurrent memory hot-add deadlock"), which tried to fix a possible
lock inversion reported and discussed in [1] due to the two locks
a) device_lock()
b) mem_hotplug_lock
While add_memory() first takes b), f
device_online() should be called with device_hotplug_lock held.
Cc: Benjamin Herrenschmidt
Cc: Paul Mackerras
Cc: Michael Ellerman
Cc: Rashmica Gupta
Cc: Balbir Singh
Cc: Michael Neuling
Reviewed-by: Pavel Tatashin
Reviewed-by: Rashmica Gupta
Signed-off-by: David Hildenbrand
---
arch/p
Let's perform all checking + offlining + removing under
device_hotplug_lock, so nobody can mess with these devices via
sysfs concurrently.
Cc: Benjamin Herrenschmidt
Cc: Paul Mackerras
Cc: Michael Ellerman
Cc: Rashmica Gupta
Cc: Balbir Singh
Cc: Michael Neuling
Reviewed-by: Pavel Tatashin
R
Let's document the magic a bit, especially why device_hotplug_lock is
required when adding/removing memory and how it all plays together with
requests to online/offline memory from user space.
Cc: Jonathan Corbet
Cc: Michal Hocko
Cc: Andrew Morton
Reviewed-by: Pavel Tatashin
Reviewed-by: Rashmi
Currently we store the userspace r1 to PACATMSCRATCH before finally
saving it to the thread struct.
In theory an exception could be taken here (like a machine check or
SLB miss) that could write PACATMSCRATCH and hence corrupt the
userspace r1. The SLB fault currently doesn't touch PACATMSCRATCH, bu
Michael Neuling writes:
> Current we store the userspace r1 to PACATMSCRATCH before finally
> saving it to the thread struct.
>
> In theory an exception could be taken here (like a machine check or
> SLB miss) that could write PACATMSCRATCH and hence corrupt the
> userspace r1. The SLB fault curre
The purpose of this series is to activate CONFIG_THREAD_INFO_IN_TASK which
moves the thread_info into task_struct.
Moving thread_info into task_struct has the following advantages:
- It protects thread_info from corruption in the case of stack
overflows.
- Its address is harder to determine if stac
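The effect of CONFIG_THREAD_INFO_IN_TASK can be modeled with a minimal sketch. The field layout below is illustrative, not the real kernel layout; the one structural invariant the option relies on is that thread_info is the first member of task_struct, so current_thread_info() degenerates to a cast of current.

```c
#include <assert.h>
#include <stddef.h>

/* Minimal model of CONFIG_THREAD_INFO_IN_TASK: thread_info lives
 * inside task_struct instead of at the base of the kernel stack.
 * Field layout here is a sketch only. */
struct thread_info {
	unsigned long flags;
};

struct task_struct {
	struct thread_info thread_info;  /* must remain the first member */
	void *stack;                     /* the stack no longer holds thread_info */
	int cpu;                         /* 'cpu' moves out of thread_info */
};

/* With thread_info first, finding it from 'current' is just a cast;
 * no stack-pointer masking is needed, and a stack overflow cannot
 * corrupt it. */
static struct thread_info *task_thread_info(struct task_struct *tsk)
{
	return (struct thread_info *)tsk;
}
```

This is why the advantages listed above hold: the structure is no longer reachable by scribbling past the end of the stack, and its address is no longer derivable from a leaked stack address.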
When activating CONFIG_THREAD_INFO_IN_TASK, linux/sched.h
includes asm/current.h. This generates a circular dependency.
To avoid that, asm/processor.h shall not be included in mmu-hash.h
In order to do that, this patch moves into a new header called
asm/task_size.h the information from asm/process
At the time being, the thread_info struct is located in the beginning
of the stack. There is an asm const called THREAD_INFO which is the
offset of the stack pointer in the task_struct.
In preparation of moving thread_info into task_struct, this patch
renames the THREAD_INFO const to TASK_STACK.
In order to be able to include asm-offset.h in smp.h for PPC32,
all definitions that conflict with C need new names.
TASK_SIZE is not used anywhere in asm.
PPC_DBELL_SERVER_SERVER and PPC_DBELL_SERVER_MSGTYPE are only
needed on PPC64 in asm.
MAS0 ... MAS7 conflict with 'struct tlbcam' fields.
This patch cleans the powerpc kernel before activating
CONFIG_THREAD_INFO_IN_TASK:
- The purpose of the pointer given to call_do_softirq() and
call_do_irq() is to point to the new stack ==> change it to void*
- current_pt_regs() is in the stack, not in thread_info.
- Don't use CURRENT_THREAD_INFO() to
This patch activates CONFIG_THREAD_INFO_IN_TASK which
moves the thread_info into task_struct.
Moving thread_info into task_struct has the following advantages:
- It protects thread_info from corruption in the case of stack
overflows.
- Its address is harder to determine if stack addresses are
leak
thread_info is no longer in the stack, so the entire stack
can now be used.
In the meantime, pointers to the stacks are no longer
pointers to thread_info, so this patch changes them to void*
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/irq.h | 10 +-
arch/po
The table of pointers 'current_set' has been used for retrieving
the stack and current. They used to be thread_info pointers as
they were pointing to the stack and current was taken from the
'task' field of the thread_info.
Now, the pointers of the 'current_set' table are both pointers
to task_str
Now that thread_info is similar to task_struct, its address is in r2,
so the CURRENT_THREAD_INFO() macro is useless. This patch removes it.
At the same time, as the 'cpu' field is no longer in thread_info,
this patch renames it to TASK_CPU.
Signed-off-by: Christophe Leroy
---
arch/powerpc/include
CURRENT_THREAD_INFO() now uses the PACA to retrieve 'current' pointer,
it doesn't use 'sp' anymore.
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/exception-64s.h | 4 ++--
arch/powerpc/include/asm/thread_info.h | 2 +-
arch/powerpc/kernel/entry_64.S
On Tue, Sep 25, 2018 at 07:34:15AM +0200, Christophe LEROY wrote:
>
>
> Le 24/09/2018 à 17:52, Christophe Leroy a écrit :
> > When switching powerpc to CONFIG_THREAD_INFO_IN_TASK, include/sched.h
> > has to be included in asm/smp.h for the following change, in order
> > to avoid incomplete defini
On Tue, Sep 25, 2018 at 11:14:56AM +0200, David Hildenbrand wrote:
> Let's perform all checking + offlining + removing under
> device_hotplug_lock, so nobody can mess with these devices via
> sysfs concurrently.
>
> Cc: Benjamin Herrenschmidt
> Cc: Paul Mackerras
> Cc: Michael Ellerman
> Cc: Ra
Le 25/09/2018 à 14:11, Mark Rutland a écrit :
On Tue, Sep 25, 2018 at 07:34:15AM +0200, Christophe LEROY wrote:
Le 24/09/2018 à 17:52, Christophe Leroy a écrit :
When switching powerpc to CONFIG_THREAD_INFO_IN_TASK, include/sched.h
has to be included in asm/smp.h for the following change,
Arnd Bergmann writes:
> On Tue, Sep 25, 2018 at 2:48 AM Michael Ellerman wrote:
>> Arnd Bergmann writes:
>> > On Tue, Sep 18, 2018 at 2:15 PM Firoz Khan wrote:
>> >> On 14 September 2018 at 15:31, Arnd Bergmann wrote:
>> >> > On Fri, Sep 14, 2018 at 10:33 AM Firoz Khan
>> >> > wrote:
>> >
>>
Currently associativity is used to lookup node-id even if the preceding
VPHN hcall failed. However this can cause a CPU to be made part of the
wrong node, (most likely to be node 0). This is because VPHN is not
enabled on kvm guests.
With 2ea6263 ("powerpc/topology: Get topology for shared processor
Michael Ellerman writes:
> Christophe LEROY writes:
>> Le 24/09/2018 à 14:10, Michael Ellerman a écrit :
>>> Christophe Leroy writes:
I'm trying to implement TLS based stack protector in the Linux Kernel.
For that I need to give to GCC the offset at which it will find the
canary (
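The offset in question is essentially "where the canary sits relative to the thread pointer". A minimal sketch, assuming a hypothetical stack_canary field in task_struct (GCC later grew -mstack-protector-guard=tls together with -mstack-protector-guard-reg and -mstack-protector-guard-offset on powerpc to consume exactly such an offset):

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative layout only: with a TLS-based stack protector, the
 * canary lives at a fixed offset from the register holding 'current'
 * (or a structure reachable from it). The field names below are a
 * sketch, not the exact kernel layout. */
struct task_struct {
	unsigned long flags;
	unsigned long stack_canary;   /* what GCC must be able to locate */
};

/* The value that would be handed to GCC as the guard offset,
 * e.g. via -mstack-protector-guard-offset=<N>. */
static size_t canary_offset(void)
{
	return offsetof(struct task_struct, stack_canary);
}
```

The kernel build would compute this offset with the usual asm-offsets machinery and pass it on the compiler command line, giving each task its own canary instead of one global value.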
On Wed, Sep 19, 2018 at 11:12:57AM +0100, Robin Murphy wrote:
> The external interface to get/set window attributes is already
> abstracted behind iommu_domain_{get,set}_attr(), so there's no real
> reason for the internal interface to be different. Since we only have
> one window-based driver anyw
PPC32 uses nonrecoverable_exception() while PPC64 uses
unrecoverable_exception().
Both functions are doing almost the same thing.
This patch removes nonrecoverable_exception()
Signed-off-by: Christophe Leroy
---
v2: Removed call to debugger() as die() already calls debugger()
arch/powerpc/in
There is a mismatch between function pnv_platform_error_reboot() definition
and declaration regarding function modifiers. In the declaration part, it
contains the function attribute __noreturn, while function definition
itself lacks it.
This was reported by sparse tool as an error:
arch/powerpc
Hi Mikey,
On 09/25/2018 02:24 AM, Michael Neuling wrote:
> On Mon, 2018-09-24 at 11:32 -0300, Breno Leitao wrote:
>> Hi Mikey,
>>
>> On 09/24/2018 04:27 AM, Michael Neuling wrote:
>>> When we treclaim we store the userspace checkpointed r13 to a scratch
>>> SPR and then later save the scratch SPR
Snowpatch reports failure on pmac32_defconfig, as follows:
arch/powerpc/platforms/powermac/bootx_init.o: In function `bootx_printf':
/var/lib/jenkins-slave/workspace/snowpatch/snowpatch-linux-sparse/linux/arch/powerpc/platforms/powermac/bootx_init.c:88:
undefined reference to `__stack_chk_fail_l
Le 19/09/2018 à 05:03, Aneesh Kumar K.V a écrit :
On 9/18/18 10:27 PM, Christophe Leroy wrote:
In order to allow the 8xx to handle pte_fragments, this patch
extends the use of pte_fragments to nohash/32 platforms.
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/mmu-40x.h
Le 19/09/2018 à 04:56, Aneesh Kumar K.V a écrit :
On 9/18/18 10:27 PM, Christophe Leroy wrote:
There is no point in taking the page table lock as
pte_frag is always NULL when we have only one fragment.
Signed-off-by: Christophe Leroy
---
arch/powerpc/mm/pgtable-frag.c | 3 +++
1 file cha
The purpose of this series is to implement hardware assistance for TLB table walk
on the 8xx.
First part switches to patch_site instead of patch_instruction,
as it makes the code clearer and avoids pollution with global symbols.
Optimise access to perf counters (hence reduce number of registers us
This reverts commit 4f94b2c7462d9720b2afa7e8e8d4c19446bb31ce.
That commit was buggy, as it used rlwinm instead of rlwimi.
Instead of fixing that bug, we revert the previous commit in order to
reduce the dependency between L1 entries and L2 entries
Fixes: 4f94b2c7462d9 ("powerpc/8xx: Use L1 entry
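The distinction the revert hinges on can be modeled in userspace C (helper names are mine; the masks are given explicitly rather than as MB/ME bit ranges to keep the sketch short):

```c
#include <assert.h>
#include <stdint.h>

static inline uint32_t rotl32(uint32_t v, unsigned int sh)
{
	sh &= 31;
	return sh ? (v << sh) | (v >> (32 - sh)) : v;
}

/* rlwinm: Rotate Left Word Immediate then AND with Mask.
 * The destination is REPLACED: everything outside the mask
 * becomes zero. */
static inline uint32_t rlwinm(uint32_t rs, unsigned int sh, uint32_t mask)
{
	return rotl32(rs, sh) & mask;
}

/* rlwimi: Rotate Left Word Immediate then Mask Insert.
 * Only the masked bits are updated; the rest of the OLD
 * destination value is PRESERVED. */
static inline uint32_t rlwimi(uint32_t ra_old, uint32_t rs,
			      unsigned int sh, uint32_t mask)
{
	return (rotl32(rs, sh) & mask) | (ra_old & ~mask);
}
```

Using rlwinm where rlwimi was intended silently zeroes every destination bit outside the mask, which is exactly the kind of bug that is easier to revert than to patch around.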
This patch adds a helper to get the address of a patch_site
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/code-patching.h | 5 +
1 file changed, 5 insertions(+)
diff --git a/arch/powerpc/include/asm/code-patching.h
b/arch/powerpc/include/asm/code-patching.h
index 31733a95bbd
The 8xx TLB miss routines are patched at startup at several places.
This patch uses the new patch_site functionality in order
to get a better code readability and avoid a label mess when
dumping the code with 'objdump -d'
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/mmu-8xx.h |
The 8xx TLB miss routines are patched when (de)activating
perf counters.
This patch uses the new patch_site functionality in order
to get a better code readability and avoid a label mess when
dumping the code with 'objdump -d'
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/mmu-8xx
In order to simplify time critical exception handling of the 8xx
specific SW perf counters, this patch moves the counters into
the beginning of memory. This is possible because .text is readable
and the counters are never modified outside of the handlers.
By doing this, we avoid having to set a second r
In preparation of making use of hardware assistance in TLB handlers,
this patch temporarily disables 16K pages and 512K pages. The reason
is that when using HW assistance in 4K pages mode, the linux model
fits the HW model for 4K pages and 8M pages.
However for 16K pages and 512K mode some add
Today, on the 8xx the TLB handlers do SW tablewalk by doing all
the calculation in ASM, in order to match with the Linux page
table structure.
The 8xx offers hardware assistance which allows significant size
reduction of the TLB handlers, hence also reduces the time spent
in the handlers.
However
For using 512k pages with hardware assistance, the PTEs have to be spread
every 128 bytes in the L2 table.
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/hugetlb.h | 4 +++-
arch/powerpc/mm/hugetlbpage.c | 13 +
arch/powerpc/mm/tlb_nohash.c | 3 +++
3 files
This patch reworks the TLB Miss handler in order to not use r12
register, hence avoiding having to save it into SPRN_SPRG_SCRATCH2.
In the DAR Fixup code we can now use SPRN_M_TW, freeing
SPRN_SPRG_SCRATCH2.
Then SPRN_SPRG_SCRATCH2 may be used for something else in the future.
Signed-off-by: Chr
As this is running with MMU off, the CPU only does speculative
fetch for code in the same page.
Following the significant size reduction of TLB handler routines,
the side handlers can be brought back close to the main part,
ie in the same page.
Signed-off-by: Christophe Leroy
---
arch/powerpc/k
In the same way as PPC64, let's handle pte allocation directly
in kernel_map_page() when slab is not available.
The new function early_pte_alloc_kernel() is put inline in the
platforms' pgalloc.h; this will make it possible to have different ones later.
It is not an issue because early_pte_alloc_kernel() is
As in PPC64, inline pte_alloc_one() and pte_alloc_one_kernel()
in PPC32. This will allow switching nohash/32 to pte_fragment
without impacting hash/32.
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/book3s/32/pgalloc.h | 22 --
arch/powerpc/include/asm/nohash/32
BOOK3S/32 cannot be BOOKE, so remove useless code
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/book3s/32/pgalloc.h | 18 --
arch/powerpc/include/asm/book3s/32/pgtable.h | 14 --
2 files changed, 32 deletions(-)
diff --git a/arch/powerpc/include/asm/bo
In preparation of the next patch, which generalises the use of
pte_fragment_alloc() for all, this patch moves the related functions
in a place that is common to all subarches.
The 8xx will need that for supporting 16k pages, as in that mode
page tables still have a size of 4k.
Since pte_fragment with
There is no point in taking the page table lock as pte_frag or
pmd_frag are always NULL when we have only one fragment.
Signed-off-by: Christophe Leroy
---
arch/powerpc/mm/pgtable-book3s64.c | 3 +++
arch/powerpc/mm/pgtable-frag.c | 3 +++
2 files changed, 6 insertions(+)
diff --git a/arch/
The purpose of this patch is to move the platform specific
mmu-xxx.h files into platform directories, like the pte-xxx.h files.
In the meantime this patch creates common nohash and
nohash/32 + nohash/64 mmu.h files for future common parts.
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/mmu.h
This patch moves pgtable_t into platform headers.
It gets rid of the CONFIG_PPC_64K_PAGES case for PPC64
as nohash/64 doesn't support CONFIG_PPC_64K_PAGES.
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/book3s/32/mmu-hash.h | 2 ++
arch/powerpc/include/asm/book3s/64/mmu.h |
In order to allow the 8xx to handle pte_fragments, this patch
extends the use of pte_fragments to nohash/32 platforms.
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/mmu_context.h | 2 +-
arch/powerpc/include/asm/nohash/32/mmu-40x.h | 1 +
arch/powerpc/include/asm/nohash/32
commit 1bc54c03117b9 ("powerpc: rework 4xx PTE access and TLB miss")
introduced non atomic PTE updates and started the work of removing
PTE updates in TLB miss handlers, but kept PTE_ATOMIC_UPDATES for the
8xx with the following comment:
/* Until my rework is finished, 8xx still needs atomic PTE up
Using this HW assistance implies some constraints on the
page table structure:
- Regardless of the main page size used (4k or 16k), the
level 1 table (PGD) contains 1024 entries and each PGD entry covers
a 4Mbytes area which is managed by a level 2 table (PTE) containing
also 1024 entries each desc
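The geometry described above (1024 PGD entries of 4 Mbytes each, 1024 PTEs per level 2 table) can be checked arithmetically. The sketch below assumes the 4k main page size, where a 32-bit virtual address splits into 10 bits of PGD index, 10 bits of PTE index and a 12-bit page offset.

```c
#include <assert.h>
#include <stdint.h>

/* Page table geometry described for the 8xx HW tablewalk,
 * 4k main page size assumed. */
#define PGD_ENTRIES 1024u
#define PTE_ENTRIES 1024u
#define PAGE_SZ     4096u

/* Each PGD entry covers one full L2 table worth of pages: 4 Mbytes. */
#define PGD_ENTRY_COVERAGE ((uint64_t)PTE_ENTRIES * PAGE_SZ)

/* Index extraction for a 32-bit virtual address (10 / 10 / 12 split). */
static unsigned int pgd_index(uint32_t va) { return va >> 22; }
static unsigned int pte_index(uint32_t va) { return (va >> 12) & 0x3FFu; }
```

1024 PGD entries of 4 Mbytes each cover the full 4 Gbyte 32-bit address space, which is why the level 1 layout can stay the same regardless of the main page size.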
On the 8xx, the GUARDED attribute of the pages is managed in the
L1 entry; therefore, to avoid having to copy it into the L1 entry
at each TLB miss, we have to set it in the PMD.
In order to allow this, this patch splits the VM alloc space in two
parts, one for VM alloc and non Guarded IO, and one for G
On the 8xx, the GUARDED attribute of the pages is managed in the
L1 entry; therefore, to avoid having to copy it into the L1 entry
at each TLB miss, we set it in the PMD.
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/nohash/32/pte-8xx.h | 3 ++-
arch/powerpc/kernel/head_8xx.S
On 09/25/2018 12:15 PM, Christophe LEROY wrote:
Le 25/09/2018 à 14:11, Mark Rutland a écrit :
On Tue, Sep 25, 2018 at 07:34:15AM +0200, Christophe LEROY wrote:
Le 24/09/2018 à 17:52, Christophe Leroy a écrit :
When switching powerpc to CONFIG_THREAD_INFO_IN_TASK, include/sched.h
has to
Hi,
On Thu, Aug 23, 2018 at 11:36 PM Alexandre Belloni
wrote:
>
> If the qman driver (qman_ccsr) doesn't probe or fail to probe before
> qman_portal, qm_ccsr_start will be either NULL or a stale pointer to an
> unmapped page.
>
> This leads to a crash when probing qman_portal as the init_pcfg f
Looking at the code I think this commit is simply broken for
architectures not using arch_setup_dma_ops, but instead setting up
the dma ops through arch specific magic.
I'll revert the patch.
On Tue, Sep 25, 2018 at 2:47 PM Olof Johansson wrote:
>
> Hi,
>
>
> On Thu, Aug 23, 2018 at 11:36 PM Alexandre Belloni
> wrote:
> >
> > If the qman driver (qman_ccsr) doesn't probe or fail to probe before
> > qman_portal, qm_ccsr_start will be either NULL or a stale pointer to an
> > unmapped pag
On 09/22/2018 04:03 AM, Gautham R Shenoy wrote:
> Without this patchset, the SMT domain would be defined as the group of
> threads that share L2 cache.
Could you try to make a clearer, more concise statement about the current
state of the art vs. what you want it to be? Right now, the sched
domains
Christophe Leroy writes:
> In preparation of next patch which generalises the use of
> pte_fragment_alloc() for all, this patch moves the related functions
> in a place that is common to all subarches.
>
> The 8xx will need that for supporting 16k pages, as in that mode
> page tables still have a
Christophe Leroy writes:
> There is no point in taking the page table lock as pte_frag or
> pmd_frag are always NULL when we have only one fragment.
>
Reviewed-by: Aneesh Kumar K.V
> Signed-off-by: Christophe Leroy
> ---
> arch/powerpc/mm/pgtable-book3s64.c | 3 +++
> arch/powerpc/mm/pgtable
Christophe Leroy writes:
> The purpose of this patch is to move platform specific
> mmu-xxx.h files in platform directories like pte-xxx.h files.
>
> In the meantime this patch creates common nohash and
> nohash/32 + nohash/64 mmu.h files for future common parts.
>
Reviewed-by: Aneesh Kumar K.V
Christophe Leroy writes:
> This patch move pgtable_t into platform headers.
>
> It gets rid of the CONFIG_PPC_64K_PAGES case for PPC64
> as nohash/64 doesn't support CONFIG_PPC_64K_PAGES.
>
Reviewed-by: Aneesh Kumar K.V
> Signed-off-by: Christophe Leroy
> ---
> arch/powerpc/include/asm/book3
Christophe Leroy writes:
> In order to allow the 8xx to handle pte_fragments, this patch
> extends the use of pte_fragments to nohash/32 platforms.
>
> Signed-off-by: Christophe Leroy
> ---
> arch/powerpc/include/asm/mmu_context.h | 2 +-
> arch/powerpc/include/asm/nohash/32/mmu-40x.h |
On Tue, 2018-09-25 at 22:00 +1000, Michael Ellerman wrote:
> Michael Neuling writes:
> > Current we store the userspace r1 to PACATMSCRATCH before finally
> > saving it to the thread struct.
> >
> > In theory an exception could be taken here (like a machine check or
> > SLB miss) that could write
Christoph Hellwig writes:
> Looking at the code I think this commit is simply broken for
> architectures not using arch_setup_dma_ops, but instead setting up
> the dma ops through arch specific magic.
I see arch_setup_dma_ops() called from of_dma_configure(), but
pci_dma_configure() doesn't call tha
This patch exports the raw per-CPU VPA data via debugfs.
A per-CPU file is created which exports the VPA data of
that CPU to help debug some of the VPA related issues or
to analyze the per-CPU VPA related statistics.
Signed-off-by: Aravinda Prasad
---
arch/powerpc/platforms/pseries/lpar.c | 57
Hello Dave,
On Tue, Sep 25, 2018 at 03:16:30PM -0700, Dave Hansen wrote:
> On 09/22/2018 04:03 AM, Gautham R Shenoy wrote:
> > Without this patchset, the SMT domain would be defined as the group of
> > threads that share L2 cache.
>
> Could you try to make a more clear, concise statement about th