On Mon, 19 Mar 2018 14:43:00 +0530
"Naveen N. Rao" <naveen.n....@linux.vnet.ibm.com> wrote:

> diff --git a/arch/powerpc/kernel/trace/ftrace_64_mprofile.S b/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
> index 3f3e81852422..fdf702b4df25 100644
> --- a/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
> +++ b/arch/powerpc/kernel/trace/ftrace_64_mprofile.S
> @@ -60,6 +60,19 @@ _GLOBAL(ftrace_caller)
>       mfxer   r10
>       mfcr    r11
>  
> +#ifdef CONFIG_KVM
> +     lbz     r3, PACA_FTRACE_DISABLED(r13)
> +     cmpdi   r3, 0
> +     beq     1f
> +     mflr    r3
> +     mtctr   r3
> +     REST_GPR(3, r1)
> +     addi    r1, r1, SWITCH_FRAME_SIZE
> +     mtlr    r0
> +     bctr
> +1:
> +#endif

I wonder if we should try to move the return out of the fast path (for
cache reasons): most of the time the flag will be zero, so the above
compare only makes the common case branch over the early-return block
before carrying on with tracing. That is:

#ifdef CONFIG_KVM
        lbz     r3, PACA_FTRACE_DISABLED(r13)
        cmpdi   r3, 0
        bne     no_trace
#endif

/* rest of ftrace_caller code */

/* after ftrace_caller... */
        bctr                    /* jump after _mcount site */

#ifdef  CONFIG_KVM
no_trace:
        /*
         * Tracing disabled on this CPU: restore state, pop the frame and
         * return to the traced function just after the _mcount site.
         */
        mflr    r3
        mtctr   r3
        REST_GPR(3, r1)
        addi    r1, r1, SWITCH_FRAME_SIZE
        mtlr    r0
        bctr
#endif
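
For comparison, the same layout trick is what the kernel's unlikely()
annotation asks the compiler to do in C: keep the common case as the
straight-line fall-through and push the rare early-return path out of
line. A minimal userspace sketch of the idea (the names below are made
up for illustration, this is not the actual ftrace code):

        /* Sketch only: hot path falls through, rare "tracing disabled"
         * exit is expected to be laid out out of line by the compiler. */
        #define unlikely(x)     __builtin_expect(!!(x), 0)

        /* stand-in for the per-CPU PACA_FTRACE_DISABLED byte */
        static int ftrace_disabled_on_this_cpu;

        static void trace_this_call(void (*callee)(void))
        {
                if (unlikely(ftrace_disabled_on_this_cpu)) {
                        /* cold path: skip the tracing work entirely */
                        callee();
                        return;
                }

                /* hot path: tracing work falls through here */
                /* ... record the call ... */
                callee();
        }

        static void traced(void) { }

        int main(void)
        {
                trace_this_call(traced);
                return 0;
        }

With __builtin_expect the compiler will typically place the if-body
after the hot code, which is the same effect as branching to no_trace
above.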


-- Steve

> +
>       /* Get the _mcount() call site out of LR */
>       mflr    r7
>       /* Save it as pt_regs->nip */
