> -----Original Message-----
> From: Wood Scott-B07421
> Sent: Tuesday, August 20, 2013 8:39 AM
> To: Wang Dongsheng-B40534
> Cc: Wood Scott-B07421; Kumar Gala; linuxppc-dev@lists.ozlabs.org
> Subject: Re: [PATCH 1/2] powerpc/85xx: add hardware automatically enter
> altivec idle state
>
> On Sun, 2013-08-18 at 21:53 -0500, Wang Dongsheng-B40534 wrote:
> > Thanks for your feedback.
> >
> > > -----Original Message-----
> > > From: Wood Scott-B07421
> > > Sent: Saturday, August 17, 2013 12:51 AM
> > > To: Kumar Gala
> > > Cc: Wang Dongsheng-B40534; linuxppc-dev@lists.ozlabs.org
> > > Subject: Re: [PATCH 1/2] powerpc/85xx: add hardware automatically
> > > enter altivec idle state
> > >
> > > On Fri, 2013-08-16 at 06:02 -0500, Kumar Gala wrote:
> > > > On Aug 16, 2013, at 2:23 AM, Dongsheng Wang wrote:
> > > >
> > > > > From: Wang Dongsheng <dongsheng.w...@freescale.com>
> > > > >
> > > > > Each core's AltiVec unit may be placed into a power savings mode
> > > > > by turning off power to the unit. Core hardware will
> > > > > automatically power down the AltiVec unit after no AltiVec
> > > > > instructions have executed in N cycles. The AltiVec
> > > > > power-control is triggered by hardware.
> > > > >
> > > > > Signed-off-by: Wang Dongsheng <dongsheng.w...@freescale.com>
> > > >
> > > > Why treat this as an idle HW governor vs. just some one-time setup
> > > > of the time delay at boot?
> > >
> > > It is being done as one-time setup, despite the function name.
> > >
> > > Maybe it should be moved into __setup/restore_cpu_e6500 (BTW, we
> > > really should refactor those to reduce duplication) with the
> > > timebase bit number hardcoded rather than a time in us.
> > >
> > The frequency of the timebase is not the same on different platforms.
>
> It's close enough. Remember, this number is a vague initial guess, not
> something that's been carefully calibrated. Though, it would make it
> harder to substitute the number with one that's been more carefully
> calibrated...
> but wouldn't different chips possibly have different optimal delays
> anyway?
>
> > If we use it, we need to set a different timebase bit on each
> > platform. That is why I did not put the code in
> > __setup/restore_cpu_e6500: I need to use tb_ticks_per_usec to get the
> > timebase bit. That way we only need to set a time for each platform,
> > not a different timebase bit.
>
> It just seems wrong to have an ad-hoc mechanism for running
> core-specific code when we have cputable... If we really need this,
> maybe we should add a "cpu_setup_late" function pointer.
>
> With your patch, when does the power management register get set when
> hot plugging a cpu?
>
Um, I didn't handle that situation. I will fix it.

__setup/restore_cpu_e6500 looks good. But only the boot CPU calls
__setup_cpu_e6500, not each secondary CPU. I think this is a bug.
Fix code:

diff --git a/arch/powerpc/kernel/head_64.S b/arch/powerpc/kernel/head_64.S
index 2cfed9e..2a9a718 100644
--- a/arch/powerpc/kernel/head_64.S
+++ b/arch/powerpc/kernel/head_64.S
@@ -603,6 +603,9 @@ __secondary_start:
 	/* Set thread priority to MEDIUM */
 	HMT_MEDIUM
 
+#ifdef CONFIG_PPC_BOOK3E
+	bl	call_setup_cpu		/* Call setup_cpu for this CPU */
+#endif
 	/* Initialize the kernel stack */
 	LOAD_REG_ADDR(r3, current_set)
 	sldi	r28,r24,3		/* get current_set[cpu#] */
diff --git a/arch/powerpc/kernel/misc_64.S b/arch/powerpc/kernel/misc_64.S
index b863b87..7d84bf4 100644
--- a/arch/powerpc/kernel/misc_64.S
+++ b/arch/powerpc/kernel/misc_64.S
@@ -55,6 +55,17 @@ _GLOBAL(call_handle_irq)
 	mtlr	r0
 	blr
 
+_GLOBAL(call_setup_cpu)
+	LOAD_REG_ADDR(r4, cur_cpu_spec)
+	ld	r4, 0(r4)
+	ld	r5, CPU_SPEC_SETUP(r4)
+
+	cmpwi	0, r5, 0
+	beqlr
+	ld	r5, 0(r5)
+	mtctr	r5
+	bctr
+
 	.section	".toc","aw"
 PPC64_CACHES:
 	.tc		ppc64_caches[TC],ppc64_caches

> > > As for the PVR check, the upstream kernel doesn't need to care
> > > about rev1, so knowing it's an e6500 is good enough.
> > >
> > But AltiVec idle & PW20 cannot work on the rev1 platform.
> > Don't we have to deal with that?
>
> Upstream does not run on rev1.
>
:) But we already have customers using rev1. Why don't we need to care
about that?

_______________________________________________
Linuxppc-dev mailing list
Linuxppc-dev@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/linuxppc-dev