[PATCH v2] include: mman: Use bool instead of int for the return value of arch_validate_prot

2016-07-23 Thread chengang
From: Chen Gang

For a function that returns a pure boolean value, bool is a slightly better return type than int.

Signed-off-by: Chen Gang
---
 arch/powerpc/include/asm/mman.h | 8 
 include/linux/mman.h            | 2 +-
 2 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/arch/powerpc/i

[v3] UCC_GETH/UCC_FAST: Use IS_ERR_VALUE_U32 API to avoid IS_ERR_VALUE abuses.

2016-07-23 Thread Arvind Yadav
IS_ERR_VALUE() assumes that its parameter is an unsigned long, so it cannot be used to check whether an 'unsigned int' reflects an error: such callers pass an 'unsigned int' into a function that takes an 'unsigned long' argument. This happens to work because the type is sign-extended on 64-bit architectures be

Re: [PATCH] powerpc/64: implement a slice mask cache

2016-07-23 Thread Balbir Singh
On Sat, Jul 23, 2016 at 05:10:36PM +1000, Nicholas Piggin wrote:
> On Sat, 23 Jul 2016 12:19:37 +1000
> Balbir Singh wrote:
> 
> > On Fri, Jul 22, 2016 at 10:57:28PM +1000, Nicholas Piggin wrote:
> > > Calculating the slice mask can become a significant overhead for
> > > get_unmapped_area. The mas

[PATCH for-4.8 V2 10/10] powerpc/mm: Catch the usage of cpu/mmu_has_feature before jump label init

2016-07-23 Thread Aneesh Kumar K.V
This enables us to catch wrong usage of cpu_has_feature and mmu_has_feature in the code. We need to use the feature-bit based check in show_regs because that is used in the reporting code.

Signed-off-by: Aneesh Kumar K.V
---
 arch/powerpc/Kconfig.debug | 11 +++
 arch/powerp

[PATCH for-4.8 V2 09/10] powerpc: use jump label for mmu_has_feature

2016-07-23 Thread Aneesh Kumar K.V
From: Kevin Hao

The mmu features are fixed once the probe of mmu features is done, and the function mmu_has_feature() is used in some hot paths. Checking the mmu feature bits on every invocation of mmu_has_feature() seems suboptimal. This tries to reduce the overhead of this check

[PATCH for-4.8 V2 08/10] powerpc: use the jump label for cpu_has_feature

2016-07-23 Thread Aneesh Kumar K.V
From: Kevin Hao

The cpu features are fixed once the probe of cpu features is done, and the function cpu_has_feature() is used in some hot paths. Checking the cpu feature bits on every invocation of cpu_has_feature() seems suboptimal. This tries to reduce the overhead of this check

[PATCH for-4.8 V2 07/10] powerpc: move the cpu_has_feature to a separate file

2016-07-23 Thread Aneesh Kumar K.V
From: Kevin Hao

We plan to use a jump label for cpu_has_feature. In order to implement this we need to include linux/jump_label.h in asm/cputable.h. But asm/cputable.h is such a basic header file for ppc that it is included by almost all the other header files. The inclusion of the li

[PATCH for-4.8 V2 06/10] powerpc: kill mfvtb()

2016-07-23 Thread Aneesh Kumar K.V
From: Kevin Hao

This function is only used by get_vtb(). The two are almost the same except for the read of the real register. Move the mfspr() into get_vtb() and kill the function mfvtb(). With this, we can eliminate the use of cpu_has_feature() in a very core header file like reg.h. This is a prepara

[PATCH for-4.8 V2 05/10] powerpc: Call jump_label_init early

2016-07-23 Thread Aneesh Kumar K.V
Call jump_label_init early so that we can use static keys for the cpu and mmu feature checks. We should have finalized all the cpu/mmu features by the time we call setup_system, and we have also done the feature fixup for ASM-based code by then.

Signed-off-by: Aneesh Kumar K.V
---
 arch/powerpc/lib/feature-fixups.c | 6 ++
 1 f

[PATCH for-4.8 V2 04/10] jump_label: make it possible for the archs to invoke jump_label_init() much earlier

2016-07-23 Thread Aneesh Kumar K.V
From: Kevin Hao

Some archs (such as powerpc) want to invoke jump_label_init() at a much earlier stage, so check static_key_initialized to make sure this function runs only once.

Signed-off-by: Kevin Hao
Signed-off-by: Aneesh Kumar K.V
---
 kernel/jump_label.c | 3 +++
 1 file

[PATCH for-4.8 V2 03/10] powerpc/mm/radix: Add radix_set_pte to use in early init

2016-07-23 Thread Aneesh Kumar K.V
We want to use the static-key based feature check in set_pte_at. Since we call radix__map_kernel_page early in boot, before the jump label is initialized, we can't call set_pte_at there. Add radix__set_pte for this purpose.

Signed-off-by: Aneesh Kumar K.V
---
 arch/powerpc/mm/pgtable-radix.c | 23 +

[PATCH for-4.8 V2 02/10] powerpc/mm: Convert early cpu/mmu feature check to use the new helpers

2016-07-23 Thread Aneesh Kumar K.V
This switches the early feature checks to use the non-static-key variants of the functions. In later patches we will be switching cpu_has_feature and mmu_has_feature to use static keys, and we can use those only after the static key/jump label infrastructure is initialized. Any feature check before jump label init shoul

[PATCH for-4.8 V2 01/10] powerpc/mm: Add __cpu/__mmu_has_feature

2016-07-23 Thread Aneesh Kumar K.V
In later patches, we will be switching the cpu and mmu feature checks to use static keys. This requires a variant of the feature check that can be used in early boot, before the jump label is initialized. This patch adds the same. We also add a variant for the radix_enabled() check. We also update th

[PATCH for-4.8 V2 00/10] Use jump label for cpu/mmu_has_feature

2016-07-23 Thread Aneesh Kumar K.V
Changes from V1:
* Update "powerpc/mm: Convert early cpu/mmu feature check to use the new helpers" based on resend code changes in this area. We now do the feature fixup early and hence can reduce the usage of __cpu/__mmu_has_feature.

Aneesh Kumar K.V (5):
  powerpc/mm: Add __cpu/__mmu_has_fea

Re: [PATCH] powerpc/64: implement a slice mask cache

2016-07-23 Thread Benjamin Herrenschmidt
On Sat, 2016-07-23 at 17:10 +1000, Nicholas Piggin wrote:
> I wanted to avoid doing more work under slice_convert_lock, but
> we should just make that a per-mm lock anyway shouldn't we?

Aren't the readers under the mm sem taken for writing, or has this changed?

Cheers,
Ben.

[PATCH] Optimise syscall entry for virtual, relocatable case

2016-07-23 Thread Nicholas Piggin
The mflr r10 instruction was left over from the saving of lr when the code used lr to branch to system_call_entry from the exception handler. That was changed by commit 6a404806d to use the count register. The value is never used now, so the mflr can be removed, and r10 can be used for storage rather than spilling to

Re: [PATCH] powerpc/64: implement a slice mask cache

2016-07-23 Thread Nicholas Piggin
On Sat, 23 Jul 2016 12:19:37 +1000
Balbir Singh wrote:
> On Fri, Jul 22, 2016 at 10:57:28PM +1000, Nicholas Piggin wrote:
> > Calculating the slice mask can become a significant overhead for
> > get_unmapped_area. The mask is relatively small and does not change
> > frequently, so we can cache it