From: Chen Gang
For a pure bool function's return value, bool is a little better than
int.
Signed-off-by: Chen Gang
---
arch/powerpc/include/asm/mman.h | 8 ++++----
include/linux/mman.h            | 2 +-
2 files changed, 5 insertions(+), 5 deletions(-)
diff --git a/arch/powerpc/i
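
The diffstat suggests arch_validate_prot() in asm/mman.h is the predicate
being converted; a sketch of the resulting shape (illustrative, not the
exact hunks from the truncated diff above):

/* Sketch: the predicate returns bool instead of 0/1 ints. */
static inline bool arch_validate_prot(unsigned long prot)
{
        if (prot & ~(PROT_READ | PROT_WRITE | PROT_EXEC | PROT_SEM | PROT_SAO))
                return false;
        if ((prot & PROT_SAO) && !cpu_has_feature(CPU_FTR_SAO))
                return false;
        return true;
}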
IS_ERR_VALUE() assumes that its parameter is an unsigned long. It
cannot be used to check whether an 'unsigned int' reflects an error,
yet callers pass an 'unsigned int' into functions that take an
'unsigned long' argument. This happens to work because the type is
sign-extended on 64-bit architectures before it gets converted into an
unsigned long.
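
A small illustration of the failure mode (the caller below is
hypothetical; IS_ERR_VALUE() itself is the real macro from
include/linux/err.h):

#include <linux/err.h>

/* Hypothetical caller. On 64-bit, (unsigned int)-EINVAL == 0xffffffea is
 * zero-extended to 0x00000000ffffffea when converted to unsigned long,
 * which is far below (unsigned long)-MAX_ERRNO, so the test is never
 * true for an error value held in an unsigned int. */
static bool addr_is_error(unsigned int addr)
{
        return IS_ERR_VALUE(addr);      /* always false on 64-bit */
}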
On Sat, Jul 23, 2016 at 05:10:36PM +1000, Nicholas Piggin wrote:
> On Sat, 23 Jul 2016 12:19:37 +1000
> Balbir Singh wrote:
>
> > On Fri, Jul 22, 2016 at 10:57:28PM +1000, Nicholas Piggin wrote:
> > > Calculating the slice mask can become a significant overhead for
> > > get_unmapped_area. The mas
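
For context, a heavily simplified sketch of the caching idea under
discussion; every name below is illustrative rather than taken from the
actual patch:

/* Sketch: memoize the slice mask per page size, so get_unmapped_area()
 * only recomputes it when the slice map actually changes. */
struct slice_mask {
        u64 low_slices;
        u64 high_slices;
};

struct slice_mask_cache {
        struct slice_mask mask[MMU_PAGE_COUNT];
        bool valid[MMU_PAGE_COUNT];
};

static struct slice_mask get_slice_mask(struct mm_struct *mm,
                                        struct slice_mask_cache *c,
                                        int psize)
{
        if (!c->valid[psize]) {
                /* compute_slice_mask() stands in for the existing slow path */
                c->mask[psize] = compute_slice_mask(mm, psize);
                c->valid[psize] = true;
        }
        return c->mask[psize];
}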
This enables us to catch wrong usage of cpu_has_feature and
mmu_has_feature in the code. We need to use the feature-bit based
check in show_regs because that is used in the reporting code.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/Kconfig.debug | 11 +++
arch/powerp
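
A plausible shape for such a debug check, sketched from the description
(the Kconfig symbol and the early_cpu_has_feature() fallback are
assumptions here):

static __always_inline bool cpu_has_feature(unsigned long feature)
{
        int i;

#ifdef CONFIG_JUMP_LABEL_FEATURE_CHECK_DEBUG    /* assumed symbol */
        if (!static_key_initialized) {
                printk("Warning! cpu_has_feature() used prior to jump label init!\n");
                dump_stack();
                return early_cpu_has_feature(feature);  /* feature-bit check */
        }
#endif
        /* Static-key fast path; see the sketches further down. */
        i = __builtin_ctzl(feature);
        return static_branch_likely(&cpu_feature_keys[i]);
}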
From: Kevin Hao
The mmu features are fixed once their probe is done, and
mmu_has_feature() is used in some hot paths. Checking the mmu features
on every invocation of mmu_has_feature() therefore seems suboptimal.
This tries to reduce the overhead of this check
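
The truncated description points at a jump-label scheme; a minimal
sketch, assuming one static key per feature bit (array name and bound
are assumptions):

#include <linux/jump_label.h>

extern struct static_key_true mmu_feature_keys[MAX_MMU_FEATURES];

static __always_inline bool mmu_has_feature(unsigned long feature)
{
        /* 'feature' is a single bit, so its index is the trailing-zero
         * count. The static branch compiles down to a patched nop/jump
         * instead of a load-and-test of cur_cpu_spec->mmu_features on
         * every call. */
        int i = __builtin_ctzl(feature);

        return static_branch_likely(&mmu_feature_keys[i]);
}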
From: Kevin Hao
The cpu features are fixed once their probe is done, and
cpu_has_feature() is used in some hot paths. Checking the cpu features
on every invocation of cpu_has_feature() therefore seems suboptimal.
This tries to reduce the overhead of this check
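
The lookup side for the cpu case is sketched above; the other half is
disabling the keys for absent features once the probe is final, roughly
as follows (names are again assumptions):

#include <linux/jump_label.h>
#include <asm/cputable.h>

/* Keys default to true, one per feature bit (assumed bound). */
struct static_key_true cpu_feature_keys[MAX_CPU_FEATURES] = {
        [0 ... MAX_CPU_FEATURES - 1] = STATIC_KEY_TRUE_INIT
};

/* Run once after the cpu feature probe: switch off the key for every
 * feature this cpu lacks, so later cpu_has_feature() calls are patched
 * branches rather than per-call bit tests. */
void __init cpu_feature_keys_init(void)
{
        int i;

        for (i = 0; i < MAX_CPU_FEATURES; i++) {
                unsigned long f = 1ul << i;

                if (!(cur_cpu_spec->cpu_features & f))
                        static_branch_disable(&cpu_feature_keys[i]);
        }
}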
From: Kevin Hao
We plan to use jump labels for cpu_has_feature. In order to implement
this we need to include linux/jump_label.h in asm/cputable.h. But
asm/cputable.h is such a basic header file for ppc that it is included
by almost all the other header files. The inclusion of the li
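
The snippet is cut off, but one way out is to move cpu_has_feature()
into its own small header so that asm/cputable.h never needs
linux/jump_label.h; a sketch, with an assumed file name and the
pre-jump-label body:

/* arch/powerpc/include/asm/cpu_has_feature.h (sketch, name assumed) */
#ifndef __ASM_POWERPC_CPU_HAS_FEATURE_H
#define __ASM_POWERPC_CPU_HAS_FEATURE_H

#ifndef __ASSEMBLY__

#include <asm/cputable.h>

static inline bool cpu_has_feature(unsigned long feature)
{
        return !!(cur_cpu_spec->cpu_features & feature);
}

#endif /* __ASSEMBLY__ */
#endif /* __ASM_POWERPC_CPU_HAS_FEATURE_H */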
From: Kevin Hao
This function is only used by get_vtb(), and the two are almost the
same apart from the read of the real register. Move the mfspr() into
get_vtb() and kill mfvtb(). With this, we can eliminate the use of
cpu_has_feature() in a very core header file like reg.h.
This is a prepara
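
The description implies get_vtb() ends up reading the SPR directly; a
sketch of the result:

static inline u64 get_vtb(void)
{
#ifdef CONFIG_PPC_BOOK3S_64
        /* The mfspr() formerly hidden in mfvtb() now lives here, so
         * reg.h itself no longer needs cpu_has_feature(). */
        if (cpu_has_feature(CPU_FTR_ARCH_207S))
                return mfspr(SPRN_VTB);
#endif
        return 0;
}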
Call jump_label_init early so that we can use static keys for cpu and
mmu feature checks. All cpu/mmu features are finalized by the time we
call setup_system, and the feature fixups for ASM-based code are done
as well.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/lib/feature-fixups.c | 6 ++
1 f
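
The diffstat points at feature-fixups.c, so the call presumably lands
at the end of the fixup pass; a sketch (placement assumed):

/* arch/powerpc/lib/feature-fixups.c (sketch) */
void __init apply_feature_fixups(void)
{
        /* ... existing cpu/mmu/firmware feature fixups ... */

        /*
         * All cpu/mmu features are finalized and the ASM fixups are
         * done, so static keys for feature checks are safe from here on.
         */
        jump_label_init();
}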
From: Kevin Hao
Some archs (such as powerpc) want to invoke jump_label_init() at a
much earlier stage. So check static_key_initialized in order to make
sure this function runs only once.
Signed-off-by: Kevin Hao
Signed-off-by: Aneesh Kumar K.V
---
kernel/jump_label.c | 3 +++
1 file
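
The guard itself is tiny; a sketch of the shape such a check takes:

/* kernel/jump_label.c (sketch) */
void __init jump_label_init(void)
{
        /* Archs like powerpc may have called this already, much earlier
         * than the generic init sequence; make a second call a no-op. */
        if (static_key_initialized)
                return;

        /* ... existing initialization ... */
        static_key_initialized = true;
}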
We want to use the static-key based feature check in set_pte_at. Since
we call radix__map_kernel_page early in boot, before jump labels are
initialized, we can't call set_pte_at there. Add radix__set_pte for
that purpose.
Signed-off-by: Aneesh Kumar K.V
---
arch/powerpc/mm/pgtable-radix.c | 23 +
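
A sketch of what such a helper can look like; the body is inferred from
the description, not copied from the (truncated) patch:

/* Early-boot variant of set_pte_at(): stores the PTE directly instead
 * of going through helpers that use the not-yet-initialized static-key
 * feature checks. Only safe while setting up fresh kernel mappings. */
static void radix__set_pte(struct mm_struct *mm, unsigned long addr,
                           pte_t *ptep, pte_t pte)
{
        *ptep = pte;
}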
This switches the early feature checks to the non-static-key variant
of the function. In later patches we will switch cpu_has_feature and
mmu_has_feature to static keys, which can be used only after static
keys/jump labels are initialized. Any check for a feature before jump
label init shoul
In later patches, we will switch the cpu and mmu feature checks to
static keys. This requires a variant of the feature check that can be
used in early boot, before jump labels are initialized. This patch
adds that, along with a variant for the radix_enabled()
check.
We also update th
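
From the description, the early variants test the feature masks
directly; a sketch using the __cpu/__mmu_has_feature names from the
cover letter below (the radix macro name is an assumption):

static inline bool __cpu_has_feature(unsigned long feature)
{
        if (CPU_FTRS_ALWAYS & feature)
                return true;
        return !!(CPU_FTRS_POSSIBLE & cur_cpu_spec->cpu_features & feature);
}

static inline bool __mmu_has_feature(unsigned long feature)
{
        return !!(cur_cpu_spec->mmu_features & feature);
}

/* Early radix check built on the same idea. */
#define early_radix_enabled() __mmu_has_feature(MMU_FTR_TYPE_RADIX)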
Changes from V1:
* Update "powerpc/mm: Convert early cpu/mmu feature check to use the new
  helpers" based on the resent code changes in this area.
We now do feature fixup early and hence we can reduce the usage of
__cpu/__mmu_has_feature.
Aneesh Kumar K.V (5):
powerpc/mm: Add __cpu/__mmu_has_fea
On Sat, 2016-07-23 at 17:10 +1000, Nicholas Piggin wrote:
> I wanted to avoid doing more work under slice_convert_lock, but
> we should just make that a per-mm lock anyway shouldn't we?
Aren't the readers under the mm sem taken for writing, or has this
changed?
Cheers,
Ben.
The mflr r10 instruction was left over from saving lr when the code
used lr to branch to system_call_entry from the exception handler.
That was changed by commit 6a404806d to use the count register.
The value is never used now, so the mflr can be removed, and r10 can
be used for storage rather than spilling to
On Sat, 23 Jul 2016 12:19:37 +1000
Balbir Singh wrote:
> On Fri, Jul 22, 2016 at 10:57:28PM +1000, Nicholas Piggin wrote:
> > Calculating the slice mask can become a significant overhead for
> > get_unmapped_area. The mask is relatively small and does not change
> > frequently, so we can cache it