By using the compiler intrinsics instead of hand-crafted opaque inline
assembler for byte-swapping, we let the compiler see what's actually
happening, and it can use lwbrx/stwbrx instructions instead of a
normal load/store coupled with a sequence of rlwimi instructions to
move the bits around.
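
(Illustration only, not part of the patch: a minimal user-space sketch,
where swab32_open() is a generic stand-in for a byte swap the compiler
has to treat as ordinary arithmetic, and swab32_builtin() uses GCC's
__builtin_bswap32(), which is roughly what selecting
ARCH_USE_BUILTIN_BSWAP lets the kernel's swab helpers map to. When the
intrinsic's operand comes straight from a load, or its result feeds a
store, GCC on PowerPC can fold the whole thing into a single
lwbrx/stwbrx.)

#include <stdio.h>
#include <inttypes.h>

/* Open-coded swap: a stand-in for the old hand-rolled helper. */
static inline uint32_t swab32_open(uint32_t x)
{
	return (x >> 24) | ((x >> 8) & 0x0000ff00) |
	       ((x << 8) & 0x00ff0000) | (x << 24);
}

/* Intrinsic swap: visible to the optimizer, so a swap of a value
 * loaded from memory can become a single lwbrx on PowerPC. */
static inline uint32_t swab32_builtin(uint32_t x)
{
	return __builtin_bswap32(x);
}

/* e.g. reading a big-endian on-disk field on a little-endian host */
static uint32_t read_be32(const uint32_t *p)
{
	return swab32_builtin(*p);
}

int main(void)
{
	uint32_t v = 0x12345678;

	printf("%08" PRIx32 " / %08" PRIx32 " / %08" PRIx32 "\n",
	       swab32_open(v), swab32_builtin(v), read_be32(&v));
	return 0;
}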

Compile-tested only. It gave a code size reduction of almost 4% for
ext2, and more like 2.5% for ext3/ext4.

Signed-off-by: David Woodhouse <david.woodho...@intel.com>
Acked-by: H. Peter Anvin <h...@linux.intel.com>
---
 arch/powerpc/Kconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 951a517..064e418 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -146,6 +146,7 @@ config PPC
        select MODULES_USE_ELF_RELA
        select GENERIC_KERNEL_EXECVE
        select CLONE_BACKWARDS
+       select ARCH_USE_BUILTIN_BSWAP
 
 config EARLY_PRINTK
        bool
-- 
1.8.0.1


-- 
dwmw2
