The purpose of this set of patches is to continue the TLB handling
optimisation on the 8xx, by handling the IMMR area as a
single 512k area instead of multiple 4k pages.
This set includes a rework of the linear RAM mapping in order to not use
page tables but direct linear mapping. The result is equi
Memory: 124428K/131072K available (3748K kernel code, 188K rwdata,
648K rodata, 508K init, 290K bss, 6644K reserved)
Kernel virtual memory layout:
* 0xfffdf000..0xfffff000 : fixmap
* 0xfde00000..0xfe000000 : consistent mem
* 0xfddf6000..0xfde00000 : early ioremap
* 0xc9000000..0xfddf6000 : vmalloc & ioremap
Once the linear memory space has been mapped with 8Mb pages, as
seen in the related commit, we get 11 million DTLB misses during
the reference 600s period. 77% of the misses are on user addresses
and 23% are on kernel addresses (one fourth for the linear address
space and three fourths for the virtual address space).
IMMR is now mapped by a fixed 512k page managed by the TLB miss
handler, so it is no longer necessary to pin TLBs for it.
Signed-off-by: Christophe Leroy
---
v2: No change
v3: No change
arch/powerpc/Kconfig.debug | 1 -
1 file changed, 1 deletion(-)
diff --git a/arch/powerpc/Kconfig.debug b/arch/powe
The bootloader may have pinned some TLB entries, so the kernel must
unpin them before flushing TLBs with tlbia, otherwise the pinned TLB
entries won't get flushed.
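A minimal C-level sketch of the sequence described above (the actual
patch is in head_8xx.S assembly; the exact register usage here is an
assumption, though SPRN_MI_CTR/SPRN_MD_CTR are the real 8xx TLB control
SPRs):

	static void __init unpin_and_flush_tlb(void)
	{
		/* Drop any ITLB/DTLB reservations left by the bootloader
		 * so that tlbia really flushes everything (assumed: a
		 * zero write clears the pinning/reservation bits). */
		mtspr(SPRN_MI_CTR, 0);
		mtspr(SPRN_MD_CTR, 0);
		/* Now flush all TLB entries. */
		asm volatile("tlbia; isync" : : : "memory");
	}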
Signed-off-by: Christophe Leroy
---
v2: No change
v3: No change
arch/powerpc/kernel/head_8xx.S | 18 ++
1 file changed, 10 ins
Instead of using the first level page table to define mappings for
the linear memory space, we can use direct mapping from the TLB
handling routines (sketched below). This has several advantages:
* No need to read the tables at each TLB miss
* No issue in 16k pages mode, where the first level table maps 64 Mbytes
The
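A sketch of the idea in C for readability (the real handlers are in
head_8xx.S assembly; the constants and the helper name here are
illustrative): for an address in the linear map, the physical address
is a fixed function of the effective address, so the TLB entry can be
computed without touching any table.

	#define LINEAR_BASE	0xc0000000UL		/* assumed linear-map base */
	#define MAP_8M_MASK	(~((1UL << 23) - 1))	/* 8 Mbyte granularity */

	/* Return the physical base to load into the TLB for a miss at
	 * 'ea' in the linear mapping: no page-table read involved. */
	static unsigned long linear_tlb_phys(unsigned long ea)
	{
		return (ea & MAP_8M_MASK) - LINEAR_BASE;
	}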
CONFIG_PIN_TLB maps the IMMR area and the first 24 Mbytes of memory.
In some circumstances it might be more interesting not to map
IMMR but to map 32 Mbytes of memory instead.
Therefore we add the config option CONFIG_PIN_TLB_IMMR to select whether
IMMR shall be pinned or not, hence whether we pin 24 or 32 Mbytes
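The shape of the resulting choice, as an illustrative fragment (only
CONFIG_PIN_TLB_IMMR comes from the text; pin_tlb_entry() and the
variable below are hypothetical):

	#ifdef CONFIG_PIN_TLB_IMMR
		pin_tlb_entry(VIRT_IMMR_BASE);	/* hypothetical helper: pin IMMR */
		nr_pinned_8m_ram = 3;		/* 3 x 8M = 24 Mbytes of RAM */
	#else
		nr_pinned_8m_ram = 4;		/* 4 x 8M = 32 Mbytes of RAM,
						 * IMMR left to the TLB miss handler */
	#endif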
On recent kernels, with some debug options like for instance
CONFIG_LOCKDEP, the BSS requires more than 8M of memory, although
the kernel code fits in the first 8M.
Today, it is necessary to activate CONFIG_PIN_TLB to get more than 8M
at startup, although pinning the TLB is not necessary for that.
We c
This is all derived from Scott Bauer's x86 patches
(https://lkml.org/lkml/2016/3/29/788). I have tested rc5 with these patches on:
- BE VM
- BE bare metal
- LE VM
- LE bare metal
using the Linux Test Project runltp test script with all default configs. From
rc5 to r
This is based off Scotty's patch: https://lkml.org/lkml/2016/3/29/792.
The only difference is that the sig_cookie is part of the struct
sighand_struct instead of task_struct, so that the sig_cookie is shared
between threads.
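Illustrative placement (only the sig_cookie field and its location in
sighand_struct come from the text; the surrounding fields are from the
upstream definition of the struct at the time): because all threads of
a process share the same sighand_struct, a cookie stored there is
automatically process-wide.

	struct sighand_struct {
		atomic_t		count;
		struct k_sigaction	action[_NSIG];
		spinlock_t		siglock;
		wait_queue_head_t	signalfd_wqh;
		unsigned long		sig_cookie;	/* SROP cookie, shared by threads */
	};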
Signed-off-by: Rashmica Gupta
---
fs/exec.c | 4
i
This patch adds SROP mitigation logic to the powerpc signal delivery
and sigreturn code. The cookie is placed in the sigframe just after
the ABI gap (i.e. at a lower address than the gap).
This is derived from the x86 SROP mitigation patch:
https://lkml.org/lkml/2016/3/29/791.
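A minimal user-space sketch of the cookie idea (not the kernel code;
the XOR-with-address scheme follows the x86 series this is derived
from, and all names here are illustrative): a per-process secret is
combined with the frame address at delivery and verified at sigreturn,
so a forged frame replayed at another address fails the check.

	#include <assert.h>
	#include <stdint.h>

	static uint64_t sig_cookie_secret = 0x5eedf00dcafe1234ULL; /* per-process secret */

	static uint64_t cookie_for(const void *frame)
	{
		return (uint64_t)(uintptr_t)frame ^ sig_cookie_secret;
	}

	int main(void)
	{
		uint64_t frame[64];

		frame[63] = cookie_for(frame);		/* "delivery": store the cookie */
		assert(frame[63] == cookie_for(frame));	/* "sigreturn": verify it */
		return 0;
	}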
Signed-off-by: Rashmica Gupta
---
a
Changes from v1:
separated into 6 patches from one patch
some minor code changes.
Benchmark test results are below.
Ran 3 tests on a pseries IBM,8408-E8E with 32 CPUs and 64GB memory:
perf bench futex hash
perf bench futex lock-pi
perf record -advRT || perf bench sched messaging -g 1000 || p
Base code to enable qspinlock on powerpc. This patch adds some #ifdefs
here and there. Although there is no paravirt related code, we can
successfully build a qspinlock kernel after applying this patch.
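For context, the usual shape of such arch glue (an illustrative sketch,
not the actual 22-line header; endianness handling of the locked byte
is elided): the arch header mostly pulls in the generic qspinlock
implementation.

	#ifndef _ASM_POWERPC_QSPINLOCK_H
	#define _ASM_POWERPC_QSPINLOCK_H

	#include <asm-generic/qspinlock_types.h>

	#define queued_spin_unlock queued_spin_unlock
	/* Release the lock with a store-release to the locked byte
	 * (assumes the little-endian byte layout for brevity). */
	static inline void queued_spin_unlock(struct qspinlock *lock)
	{
		smp_store_release((u8 *)&lock->val, 0);
	}

	#include <asm-generic/qspinlock.h>

	#endif /* _ASM_POWERPC_QSPINLOCK_H */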
Signed-off-by: Pan Xinhui
---
arch/powerpc/include/asm/qspinlock.h | 22 +
pseries will use qspinlock by default.
Signed-off-by: Pan Xinhui
---
arch/powerpc/platforms/pseries/Kconfig | 1 +
1 file changed, 1 insertion(+)
diff --git a/arch/powerpc/platforms/pseries/Kconfig
b/arch/powerpc/platforms/pseries/Kconfig
index bec90fb..f669323 100644
--- a/arch/powerpc/platfo
The pv-qspinlock core has pv_wait/pv_kick, which give better
performance by yielding and kicking the cpu in some cases.
Let's support them by adding two corresponding helper functions.
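A sketch of what the two helpers could look like on pseries (H_PROD and
H_CONFER are standard PAPR hcalls already used by the pseries lock code,
but the exact implementation below is an assumption of this sketch):

	static void pv_kick(int cpu)
	{
		/* Wake the target vcpu so it can re-check the lock word. */
		plpar_hcall_norets(H_PROD, get_hard_smp_processor_id(cpu));
	}

	static void pv_wait(u8 *ptr, u8 val)
	{
		if (READ_ONCE(*ptr) != val)
			return;	/* value already changed: don't sleep */
		/* Confer our time slice to the hypervisor until kicked
		 * (-1 = confer to all; hcall usage assumed here). */
		plpar_hcall_norets(H_CONFER, -1, 0);
	}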
Signed-off-by: Pan Xinhui
---
arch/powerpc/include/asm/spinlock.h | 4
arch/powerpc/lib/locks.c |
As we need to let a pv-qspinlock kernel run on all environments,
including those without PowerVM, we should choose at runtime which
qspinlock version to use. The default pv-qspinlock uses the native
version. pv_lock initialization should be done at boot stage with
irqs disabled. And if possible, restore pv_lock_ops cal
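Illustrative boot-time selection (the ops structure's field names and
the init function are assumptions; firmware_has_feature()/
FW_FEATURE_SPLPAR are the existing pseries facility for detecting a
shared-processor LPAR):

	void __init pv_qspinlock_init(void)
	{
		/* On bare metal keep the native qspinlock; only a
		 * shared-processor LPAR benefits from the pv ops. */
		if (!firmware_has_feature(FW_FEATURE_SPLPAR))
			return;

		pv_lock_ops.wait = pv_wait;	/* field names assumed */
		pv_lock_ops.kick = pv_kick;
	}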
cmpxchg_release is lighter, so we can gain better performance.
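The change is plausibly of this one-line shape in the unlock slow path
(an illustrative reconstruction, not the quoted hunk; cmpxchg_release()
only orders prior accesses, which is sufficient for an unlock):

	-	locked = cmpxchg(&lock->locked, _Q_LOCKED_VAL, 0);
	+	locked = cmpxchg_release(&lock->locked, _Q_LOCKED_VAL, 0);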
Suggested-by: Boqun Feng
Signed-off-by: Pan Xinhui
---
kernel/locking/qspinlock_paravirt.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/locking/qspinlock_paravirt.h
b/kernel/locking/qspinlock_paravi
pseries can use pv-qspinlock.
Signed-off-by: Pan Xinhui
---
arch/powerpc/kernel/Makefile | 1 +
arch/powerpc/platforms/pseries/Kconfig | 8
2 files changed, 9 insertions(+)
diff --git a/arch/powerpc/kernel/Makefile b/arch/powerpc/kernel/Makefile
index 2da380f..ae7c2f1 100644
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/ppc-opcode.h | 1 +
arch/powerpc/kernel/traps.c | 7 +++----
2 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/arch/powerpc/include/asm/ppc-opcode.h
b/arch/powerpc/include/asm/ppc-opcode.h
index 1d035c1..37e7b9d
On processors like the 8xx, the machine check exception can also
happen directly on the load/store instruction itself, so that case
needs to be handled as well.
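Illustrative shape of the extra case (both helper names below are
hypothetical; only the need to look at the instruction at regs->nip
itself comes from the text):

	/* The machine check may have been raised by the load/store at
	 * NIP itself, not only by a subsequent instruction. */
	inst = *(unsigned int *)regs->nip;
	if (instr_is_load_store(inst))			/* hypothetical check */
		return fixup_machine_check_ls(regs, inst);	/* hypothetical */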
Signed-off-by: Christophe Leroy
---
arch/powerpc/include/asm/ppc-opcode.h | 1 +
arch/powerpc/kernel/traps.c | 13 +++
I am currently working on a project involving an MPC8360 board, and was
handed a patch to the system .dts file that contained the following
snippet for the muram node:
muram@1 {
- #address-cells = <1>;
+ #address-cells = <2>;
Here is the RFC patchset of the kprobes jump optimization
(a.k.a. OPTPROBES) for powerpc. Kprobes being an inevitable tool
for kernel developers, enhancing the performance of kprobes has
got much importance.
Currently kprobes inserts a trap instruction to probe a running kernel.
Jump optimization allo
The detour buffer contains instructions to create an in-memory pt_regs.
After the execution of the pre-handler, a call is made for instruction
emulation. The NIP is decided after the probed instruction is executed.
Hence a branch instruction is created to the NIP returned by emulate_step().
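The tail fixup could look like this sketch (create_branch() and
patch_instruction() are the kernel's existing powerpc code-patching
helpers; the function below and the buffer layout are assumptions):

	/* Patch the end of the detour buffer with a branch to the NIP
	 * that emulate_step() computed for the probed instruction. */
	static void fixup_detour_exit(unsigned int *buff_tail, unsigned long nip)
	{
		unsigned int branch = create_branch(buff_tail, nip, 0);

		patch_instruction(buff_tail, branch);
	}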
Signed-off-by: A
The instruction slot for the detour buffer is allocated from
a reserved area. For the time being, 64KB is reserved
in memory for this purpose. ppc_get_optinsn_slot() and
ppc_free_optinsn_slot() are geared towards the allocation and freeing
of memory from this area.
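A minimal sketch of such a slot allocator (the two function names and
the 64KB size come from the text; slot size, internals, and the static
area standing in for the reserved memory are assumptions; locking is
omitted for brevity):

	#define OPTINSN_AREA_SIZE	(64 * 1024)	/* reserved area, per the text */
	#define OPTINSN_SLOT_SIZE	1024		/* assumed detour buffer size */
	#define NR_OPTINSN_SLOTS	(OPTINSN_AREA_SIZE / OPTINSN_SLOT_SIZE)

	static char optinsn_area[OPTINSN_AREA_SIZE];
	static DECLARE_BITMAP(optinsn_used, NR_OPTINSN_SLOTS);

	kprobe_opcode_t *ppc_get_optinsn_slot(void)
	{
		int i = find_first_zero_bit(optinsn_used, NR_OPTINSN_SLOTS);

		if (i >= NR_OPTINSN_SLOTS)
			return NULL;
		set_bit(i, optinsn_used);
		return (kprobe_opcode_t *)(optinsn_area + i * OPTINSN_SLOT_SIZE);
	}

	void ppc_free_optinsn_slot(kprobe_opcode_t *slot)
	{
		clear_bit(((char *)slot - optinsn_area) / OPTINSN_SLOT_SIZE,
			  optinsn_used);
	}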
Signed-off-by: Anju T
---
arch/powerp
Signed-off-by: Anju T
---
Documentation/features/debug/optprobes/arch-support.txt | 2 +-
arch/powerpc/Kconfig | 1 +
arch/powerpc/kernel/Makefile | 1 +
3 files changed, 3 insertions(+), 1 deletion(-)
diff --git a/Documentation/featu
+++ Josh Poimboeuf [28/04/16 15:44 -0500]:
[snip]
diff --git a/Documentation/livepatch/livepatch.txt
b/Documentation/livepatch/livepatch.txt
index 6c43f6e..bee86d0 100644
--- a/Documentation/livepatch/livepatch.txt
+++ b/Documentation/livepatch/livepatch.txt
@@ -72,7 +72,8 @@ example, they add
This is just a smattering of things picked up by sparse that should
be made static.
Signed-off-by: Daniel Axtens
---
arch/powerpc/kernel/crash.c | 2 +-
arch/powerpc/kernel/sysfs.c | 2 +-
arch/powerpc/platforms/powernv/idle.c | 2 +-
arch/powerp
Sparse picked up a number of functions that are implemented in C and
then only referred to in asm code.
This introduces asm-prototypes.h, which provides a place for
prototypes of these functions.
This silences some sparse warnings.
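The header's role, sketched (the entry shown is one illustrative
example of a C function referenced only from assembly; the actual
header collects many such prototypes):

	#ifndef _ASM_POWERPC_ASM_PROTOTYPES_H
	#define _ASM_POWERPC_ASM_PROTOTYPES_H
	/*
	 * Prototypes for C functions that are only referenced from
	 * assembly, so that sparse (and the compiler) see a declaration.
	 */
	struct pt_regs;

	long do_syscall_trace_enter(struct pt_regs *regs);	/* illustrative entry */

	#endif /* _ASM_POWERPC_ASM_PROTOTYPES_H */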
Signed-off-by: Daniel Axtens
---
arch/powerpc/include/asm/asm-
Sometimes headers that provide prototypes for functions are
accidentally omitted from the files that define the functions.
Fix a couple of places where that occurs.
Signed-off-by: Daniel Axtens
---
arch/powerpc/kernel/smp.c | 1 +
arch/powerpc/platforms/pseries/power.c | 2 ++
2 files cha
Sparse complains that it doesn't know what REG_BYTE is:
arch/powerpc/kernel/align.c:313:29: error: undefined identifier 'REG_BYTE'
arch/powerpc/kernel/align.c:320:37: error: undefined identifier 'REG_BYTE'
arch/powerpc/kernel/align.c:328:29: error: cast from unknown type
arch/powerpc/kernel/align.
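One illustrative way to address this class of warning is to make the
macro visible on the path sparse takes, e.g. by giving the checker a
simple fallback definition (a sketch of the approach, not necessarily
the actual fix in the patch):

	#ifdef __CHECKER__
	/* sparse doesn't follow the endian-specific branch that defines
	 * REG_BYTE; give it a trivial definition so checking proceeds. */
	#define REG_BYTE(rp, i)	(*(((u8 *)(rp)) + (i)))
	#endif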
The domain/PHB field of PCI addresses has its value obtained from a
global variable, incremented each time a new domain (represented by
struct pci_controller) is added to the system. The domain addition
process happens during boot or due to PHB hotplug add.
As recent kernels are using predictable
On Tue, May 03, 2016 at 01:54:30PM +0530, Shreyas B. Prabhu wrote:
> CHECK_HMI_INTERRUPT is used to check for HMI's in reset vector. Move
> the macro to a common location (exception-64s.h)
> This patch does not change any functionality.
>
I suppose this code movement is to facilitate the invocati
Hi Shreyas,
On Tue, May 03, 2016 at 01:54:31PM +0530, Shreyas B. Prabhu wrote:
> In the current code, when the thread wakes up in reset vector, some
> of the state restore code and check for whether a thread needs to
> branch to kvm is duplicated. Reorder the code such that this
> duplication is a
Hi Shreyas,
On Tue, May 03, 2016 at 01:54:32PM +0530, Shreyas B. Prabhu wrote:
> CPU-idle related code like the context save/restore functions in
> idle_power7.S can be reused for adding stop instruction support. Move this
> code to a new commonly accessible location.
[..snip..]
> diff --git a/arch/powerp
Hi Shreyas,
On Tue, May 03, 2016 at 01:54:33PM +0530, Shreyas B. Prabhu wrote:
> power7_powersave_common does common steps needed before entering idle
> state and eventually changes MSR to MSR_IDLE and does rfid to
> power7_enter_nap_mode.
>
> Make it more generic by passing the rfid address as a
On Tue, May 03, 2016 at 01:54:35PM +0530, Shreyas B. Prabhu wrote:
> pnv_init_idle_states discovers supported idle states from the
> device tree and does the required initialization. Set power_save
> function pointer only after this initialization is done
>
> Signed-off-by: Shreyas B. Prabhu
Rev
On 05/18/2016 12:07 PM, Gautham R Shenoy wrote:
> Hi Shreyas,
>
> On Tue, May 03, 2016 at 01:54:33PM +0530, Shreyas B. Prabhu wrote:
>> power7_powersave_common does common steps needed before entering idle
>> state and eventually changes MSR to MSR_IDLE and does rfid to
>> power7_enter_nap_mode.