On Thu, 2015-10-29 at 11:43 +1100, Anton Blanchard wrote:

> mtmsrd_isync() will do an mtmsrd followed by an isync on older
> processors. On newer processors we avoid the isync via a feature fixup.
> 
> diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
> index ef64219..5bf8ec2 100644
> --- a/arch/powerpc/kernel/process.c
> +++ b/arch/powerpc/kernel/process.c
> @@ -130,7 +130,10 @@ void enable_kernel_fp(void)
>               check_if_tm_restore_required(current);
>               giveup_fpu(current);
>       } else {
> -             giveup_fpu(NULL);       /* just enables FP for kernel */
> +             u64 oldmsr = mfmsr();
> +
> +             if (!(oldmsr & MSR_FP))
> +                     mtmsr_isync(oldmsr | MSR_FP);

You seem to repeat this pattern at all the call sites.

So should we instead have a helper that makes sure a given set of MSR bits
is set?

Maybe:

static inline void msr_enable(unsigned long enable_bits)
{
        unsigned long val, msr = mfmsr();

        /* Nothing to do if all the requested bits are already set */
        if ((msr & enable_bits) == enable_bits)
                return;

        val = msr | enable_bits;
        /* mtmsr(d), with the isync patched to a nop on >= ARCH 2.06 */
        asm volatile(__MTMSR " %0; " ASM_FTR_IFCLR("isync", "nop", %1) : :
                        "r" (val), "i" (CPU_FTR_ARCH_206) : "memory");
}
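
Each call site would then collapse to something like (untested), e.g. for
the enable_kernel_fp() hunk above:

        } else {
                msr_enable(MSR_FP);
        }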


Thoughts?

cheers
