Hi Oleg,

This patch looks OK to me, but while reading it I got some doubts about
nearby places, so, BTW, two small questions:

1) ... workqueue_cpu_callback(...)
{
        ...
        list_for_each_entry(wq, &workqueues, list) {
                cwq = per_cpu_ptr(wq->cpu_wq, cpu);

                switch (action) {
                case CPU_UP_PREPARE:
                ...

It looks like not all CPU_ actions have a case here: shouldn't the
list_for_each_entry() walk be skipped for the unhandled ones?
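
I mean something like an early check before the loop, e.g. (only an
untested sketch to illustrate the question; the listed actions are just
the ones I can see handled in the current switch):

        switch (action) {
        case CPU_UP_PREPARE:
        case CPU_UP_CANCELED:
        case CPU_ONLINE:
        case CPU_DEAD:
                break;
        default:
                /* e.g. CPU_DOWN_PREPARE: no case in the switch below,
                 * so don't walk the workqueues list for nothing */
                return NOTIFY_OK;
        }

        list_for_each_entry(wq, &workqueues, list) {
                ...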

2) ... __create_workqueue_key(...)
{
        ...
        if (singlethread) {
                ...
        } else {
                get_online_cpus();
                spin_lock(&workqueue_lock);
                list_add(&wq->list, &workqueues);

Shouldn't this list_add() be done only after all the inits below?
(A sketch of what I mean follows the excerpt.)

                spin_unlock(&workqueue_lock);

                for_each_possible_cpu(cpu) {
                        cwq = init_cpu_workqueue(wq, cpu);
                        ...
                }
                ...
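
I mean moving it down, roughly like this (again only an untested sketch,
with the init/create calls kept as they are now):

        } else {
                get_online_cpus();

                for_each_possible_cpu(cpu) {
                        cwq = init_cpu_workqueue(wq, cpu);
                        ...
                }

                /* make the wq visible to workqueue_cpu_callback()
                 * only after all cwq's are set up */
                spin_lock(&workqueue_lock);
                list_add(&wq->list, &workqueues);
                spin_unlock(&workqueue_lock);

                put_online_cpus();
        }
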
Thanks,
Jarek P.
 

On Sat, Feb 16, 2008 at 08:22:59PM +0300, Oleg Nesterov wrote:
> When cpu_populated_map was introduced, it was supposed that cwq->thread can
> survive after CPU_DEAD, that is why we never shrink cpu_populated_map.
> 
> This is not very nice, we can safely remove the already dead CPU from the map.
> The only required change is that destroy_workqueue() must hold the hotplug
> lock until it destroys all cwq->thread's, to protect the cpu_populated_map.
> We could make the local copy of cpu mask and drop the lock, but
> sizeof(cpumask_t) may be very large.
> 
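Just to check I read this right: destroy_workqueue() would then keep the
hotplug lock across the whole thread cleanup, roughly like below (my sketch
only, not the patch itself; the helper names and signatures are the ones I
assume from the current workqueue.c):

        void destroy_workqueue(struct workqueue_struct *wq)
        {
                const cpumask_t *cpu_map = wq_cpu_map(wq);
                int cpu;

                get_online_cpus();
                spin_lock(&workqueue_lock);
                list_del(&wq->list);
                spin_unlock(&workqueue_lock);

                for_each_cpu_mask(cpu, *cpu_map)
                        cleanup_workqueue_thread(per_cpu_ptr(wq->cpu_wq, cpu), cpu);
                /* drop the hotplug lock only after all cwq->thread's are
                 * gone, so cpu_populated_map can't change under the loop */
                put_online_cpus();

                free_percpu(wq->cpu_wq);
                kfree(wq);
        }
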
> Also, fix the comment near queue_work(). Unless _cpu_down() happens we do
> guarantee the cpu-affinity of the work_struct, and we have users which rely on
> this.
> 
> Signed-off-by: Oleg Nesterov <[EMAIL PROTECTED]>