Re: Config fragment for Versatile Express

2012-03-31 Thread Jon Medhurst (Tixy)
On Fri, 2012-03-30 at 10:15 -0700, John Stultz wrote:
> Right, right, right. I forgot that with the new topic branch method,
> everything is based on mainline and not on a tree Andrey maintains, so you
> don't have a reference to the config tree.

Yes, Andrey's tree is a merge of all the LT and working group topics.

> 
> In that case, just go ahead and push the full config to the config tree. 
> If we need to have fully-enabled vs upstream builds we can deal with 
> the warnings in the latter case (or maybe further split the board 
> configs into -upstream and -lt?).

So this means Landing Teams should host the configs for their boards and
you will host the linaro-base, ubuntu and android fragments?

We almost certainly need board-specific android and ubuntu fragments as
well, so I'll add vexpress-android.conf and vexpress-ubuntu.conf too.
(Unless there is some magic to have conditional config options in a
fragment?)
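
For what it's worth, assuming the fragments end up being merged with
something like scripts/kconfig/merge_config.sh, a fragment is just a flat
list of options with no conditional syntax, so separate per-flavour files
look unavoidable. A board fragment might contain something along these lines
(file name, paths and option selection are purely illustrative, not the real
Linaro fragments):

  # linaro/configs/vexpress-android.conf (hypothetical example)
  CONFIG_ARCH_VEXPRESS=y
  CONFIG_ARCH_VEXPRESS_CA9X4=y
  CONFIG_ANDROID=y
  CONFIG_ANDROID_BINDER_IPC=y
  CONFIG_ASHMEM=y
  CONFIG_ANDROID_LOW_MEMORY_KILLER=y

and the fragments would then be combined into a complete .config with
something like:

  scripts/kconfig/merge_config.sh linaro/configs/linaro-base.conf \
          linaro/configs/android.conf linaro/configs/vexpress-android.conf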

-- 
Tixy




Re: [linux-pm] [PATCH] cpuidle : use percpu cpuidle in the core code

2012-03-31 Thread Srivatsa S. Bhat
On 03/30/2012 09:48 PM, Daniel Lezcano wrote:

> On 03/30/2012 01:59 PM, Srivatsa S. Bhat wrote:
>> On 03/30/2012 05:15 PM, Daniel Lezcano wrote:
>>
>>> On 03/30/2012 01:25 PM, Srivatsa S. Bhat wrote:
 On 03/30/2012 04:18 PM, Daniel Lezcano wrote:

> The usual cpuidle initialization routines are to register the
> driver, then register a cpuidle device per cpu.
>
> With the device's state count default initialization with the
> driver's state count, the code initialization remains mostly the
> same in the different drivers.
>
> We can then add a new function 'cpuidle_register' where we register
> the driver and the devices. These devices can be defined in a global
> static variable in cpuidle.c. We will be able to factor out and
> remove a lot of duplicate lines of code.
>
> As we still have some drivers, with different initialization routines,
> we keep 'cpuidle_register_driver' and 'cpuidle_register_device' as low
> level initialization routines to do some specific operations on the
> cpuidle devices.
>
> Signed-off-by: Daniel Lezcano
> ---
>drivers/cpuidle/cpuidle.c |   34 ++
>include/linux/cpuidle.h   |3 +++
>2 files changed, 37 insertions(+), 0 deletions(-)
>
> diff --git a/drivers/cpuidle/cpuidle.c b/drivers/cpuidle/cpuidle.c
> index b8a1faf..2a174e8 100644
> --- a/drivers/cpuidle/cpuidle.c
> +++ b/drivers/cpuidle/cpuidle.c
> @@ -23,6 +23,7 @@
>#include "cpuidle.h"
>
>DEFINE_PER_CPU(struct cpuidle_device *, cpuidle_devices);
> +DEFINE_PER_CPU(struct cpuidle_device, cpuidle_device);
>
>DEFINE_MUTEX(cpuidle_lock);
>LIST_HEAD(cpuidle_detected_devices);
> @@ -391,6 +392,39 @@ int cpuidle_register_device(struct cpuidle_device *dev)
>
>EXPORT_SYMBOL_GPL(cpuidle_register_device);
>
> +int cpuidle_register(struct cpuidle_driver *drv)
> +{
> +        int ret, cpu;
> +        struct cpuidle_device *dev;
> +
> +        ret = cpuidle_register_driver(drv);
> +        if (ret)
> +                return ret;
> +
> +        for_each_online_cpu(cpu) {
> +                dev = &per_cpu(cpuidle_device, cpu);
> +                dev->cpu = cpu;
> +
> +                ret = cpuidle_register_device(dev);
> +                if (ret)
> +                        goto out_unregister;
> +        }
> +


 Isn't this racy with respect to CPU hotplug?
>>>
>>> No, I don't think so. Do you see a race?
>>
>>
>> Well, that depends on when/where this function gets called.
>> This patch introduces the function. Where is the caller?
> 
> There is no caller for the moment because the callers are in arch-specific
> code in different trees.
> 
> But the callers will be in the init calls during boot-up.
> 
>> As of now, if you are calling this in boot-up code, it's not racy.
> 
> Most of the callers are in the boot-up code, in device_init or
> module_init. The other ones do some specific initialization on
> the cpuidle_device (cpuinit, like acpi) and can't use the
> cpuidle_register function.
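
To make that concrete, a boot-time caller of the proposed helper could look
roughly like the sketch below. The driver name, the single WFI state and the
enter function are all invented for illustration; this is not code from the
series:

  #include <linux/cpuidle.h>
  #include <linux/init.h>
  #include <linux/module.h>
  #include <asm/proc-fns.h>

  /* Hypothetical single-state (WFI) idle driver. */
  static int foo_enter_idle(struct cpuidle_device *dev,
                            struct cpuidle_driver *drv, int index)
  {
          cpu_do_idle();          /* plain WFI on ARM */
          return index;
  }

  static struct cpuidle_driver foo_idle_driver = {
          .name           = "foo_idle",
          .owner          = THIS_MODULE,
          .states[0]      = {
                  .enter            = foo_enter_idle,
                  .exit_latency     = 1,
                  .target_residency = 1,
                  .flags            = CPUIDLE_FLAG_TIME_VALID,
                  .name             = "WFI",
                  .desc             = "ARM wait for interrupt",
          },
          .state_count    = 1,
  };

  static int __init foo_cpuidle_init(void)
  {
          /* One call registers the driver and a cpuidle device per online CPU. */
          return cpuidle_register(&foo_idle_driver);
  }
  device_initcall(foo_cpuidle_init);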
> 
>> However, there have been attempts to speed up boot times by trying
>> to online cpus in parallel with the rest of the kernel initialization[1].
>> In that case, unless your call is an early init call, it can race
>> with CPU hotplug.
>>
>> [1]. https://lkml.org/lkml/2012/1/30/647
> 
> Aha! Now I understand the race you were talking about. Thanks for the
> pointer. It is very interesting.
> 
> I realize that if the cpus boot up in parallel, that will break a lot of
> things and, for my part, it will break most of the cpuidle drivers.
> 


Exactly!

> So far the cpu bootup parallelization is not there, so from my POV my
> patch is correct, as we will factor out in a single place some code which
> will potentially be broken by this parallelization in the future. It
> will be easier to fix that in a single place rather than in multiple drivers.
> 
> Thanks for spotting this potential problem. This is something I will
> keep in mind for the future.
>


Sure, that would be great! 
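
For reference, if parallel CPU bring-up does land one day, one plausible way
to close that window inside the new helper would be to bracket the loop with
the CPU hotplug lock (get_online_cpus()/put_online_cpus() from linux/cpu.h),
roughly as in this sketch. It shows the registration loop only, error
unwinding omitted, and it is not part of the posted patch:

          get_online_cpus();      /* block CPU hotplug while we walk the online mask */
          for_each_online_cpu(cpu) {
                  dev = &per_cpu(cpuidle_device, cpu);
                  dev->cpu = cpu;

                  ret = cpuidle_register_device(dev);
                  if (ret)
                          break;  /* unwind, still under the lock, before returning */
          }
          put_online_cpus();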

> +out:
> +        return ret;
> +
> +out_unregister:
> +        for_each_online_cpu(cpu) {
> +                dev = &per_cpu(cpuidle_device, cpu);
> +                cpuidle_unregister_device(dev);
> +        }
> +


 This could be improved, I guess... What if the registration fails
 for the first cpu itself? Then looping over the entire online cpumask
 would be a waste of time.
>>>
>>> Certainly in a critical section that would make sense, but for 4, 8 or 16
>>> cpus in an initialization path at boot time... Anyway, I can add what is
>>> proposed in https://lkml.org/lkml/2012/3/22/143.
>>>
>>
>>
>> What about servers with a lot more CPUs, like say 128 or even more? :-)
>>
>> Moreover, I don't see any downsides to the optimization. So it should be good
>> to add it in any case...
> 
> Yes, no problem. I will add it.
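
For completeness, the kind of optimization being discussed might look like
the fragment below. It is only a sketch (it assumes a second loop variable
'i' declared next to 'cpu', and it may well differ from what the linked
thread actually proposes); the unwind simply stops at the CPU whose
registration failed:

  out_unregister:
          /* CPUs from 'cpu' onwards were never registered, so stop there. */
          for_each_online_cpu(i) {
                  if (i == cpu)
                          break;
                  dev = &per_cpu(cpuidle_device, i);
                  cpuidle_unregister_device(dev);
          }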

Re: [PATCH 0/2] change lpj in arm smp common code

2012-03-31 Thread Richard Zhao
On Wed, Feb 29, 2012 at 10:21:19AM -0800, Kevin Hilman wrote:
> Richard Zhao  writes:
> 
> > The two patches were originally in [PATCH V6 0/7] add a generic cpufreq 
> > driver.
> > I separated them and hope they can go upstream earlier.
> >
> > Richard Zhao (2):
> >   ARM: add cpufreq transition notifier to adjust loops_per_jiffy for smp
> >   cpufreq: OMAP: remove loops_per_jiffy recalculate for smp
> 
> The first one should go into Russell's patch system.  Once he queues
> that, I can queue the OMAP one for the CPUfreq maintainer.
Hi Russell & Kevin,

Could you double-check whether you've picked up the patch series?
I cannot find them in 3.4-rc.

Thanks
Richard
> 
> Kevin
> 
> ___
> linux-arm-kernel mailing list
> linux-arm-ker...@lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
> 
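
As background for anyone following along, the general shape of the
loops_per_jiffy adjustment described in patch 1 is a cpufreq transition
notifier along the lines of the sketch below. It is simplified to the global
value only (the real series also has to handle per-CPU values on SMP) and
the names and details may differ from the actual patch:

  #include <linux/cpufreq.h>
  #include <linux/delay.h>        /* loops_per_jiffy */
  #include <linux/init.h>

  static unsigned long l_p_j_ref;         /* loops_per_jiffy at the reference frequency */
  static unsigned long l_p_j_ref_freq;    /* that reference frequency, in kHz */

  static int lpj_cpufreq_notify(struct notifier_block *nb,
                                unsigned long val, void *data)
  {
          struct cpufreq_freqs *freq = data;

          if (!l_p_j_ref) {
                  l_p_j_ref = loops_per_jiffy;
                  l_p_j_ref_freq = freq->old;
          }

          /* Scale up before the frequency rises and down after it drops, so
           * udelay() never runs with a loops_per_jiffy that is too small. */
          if ((val == CPUFREQ_PRECHANGE && freq->old < freq->new) ||
              (val == CPUFREQ_POSTCHANGE && freq->old > freq->new))
                  loops_per_jiffy = cpufreq_scale(l_p_j_ref, l_p_j_ref_freq,
                                                  freq->new);

          return NOTIFY_OK;
  }

  static struct notifier_block lpj_cpufreq_nb = {
          .notifier_call = lpj_cpufreq_notify,
  };

  static int __init lpj_cpufreq_init(void)
  {
          return cpufreq_register_notifier(&lpj_cpufreq_nb,
                                           CPUFREQ_TRANSITION_NOTIFIER);
  }
  core_initcall(lpj_cpufreq_init);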

