Hi Ingo,

> So we cannot call set_task_cpu() because in the normal life time
> of a task the ->cpu value gets set on wakeup. So if a task is
> blocked right now, and its affinity changes, it ought to get a
> correct ->cpu selected on wakeup. The affinity mask and the
> current value of ->cpu getting out of sync is thus 'normal'.
>
> (Check for example how set_cpus_allowed_ptr() works: we first set
> the new allowed mask, then do we migrate the task away if
> necessary.)
>
> In the kthread_bind() case this is explicitly assumed: it only
> calls do_set_cpus_allowed().
>
> But obviously the bug triggers in kernel/smpboot.c, and that
> assert shows a real bug - and your patch makes the assert go
> away, so the question is, how did the kthread get woken up and
> put on a runqueue without its ->cpu getting set?
I started going down this line earlier today, and found things like:

select_task_rq_fair:

	if (p->nr_cpus_allowed == 1)
		return prev_cpu;

I tried returning cpumask_first(tsk_cpus_allowed(p)) instead, and while
I couldn't hit the BUG I did manage to get a scheduler lockup during
testing. At that point I concluded that the previous task_cpu() was
fairly ingrained in the scheduler, and came up with the patch. If not,
we could go on a hunt to see what else needs fixing.

Anton

_______________________________________________
Linuxppc-dev mailing list
Linuxppc-dev@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/linuxppc-dev
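[Editorial aside: the fast path quoted above can be illustrated with a small user-space sketch. This is an assumption-laden simulation, not the real kernel code: the struct, NR_CPUS, and the mask helpers are simplified stand-ins. It shows why a pinned task whose affinity mask is changed while blocked can be placed on its stale ->cpu rather than on the CPU the mask now allows.]

	/* User-space sketch of the select_task_rq_fair() fast path
	 * discussed in the mail: when a task is pinned to one CPU,
	 * the scheduler trusts prev_cpu instead of consulting the
	 * (possibly just-updated) affinity mask. */
	#include <stdio.h>

	#define NR_CPUS 8

	struct task {
		unsigned long cpus_allowed;  /* affinity bitmask, one bit per CPU */
		int nr_cpus_allowed;         /* number of bits set in cpus_allowed */
		int cpu;                     /* last CPU the task ran on (->cpu) */
	};

	/* Simplified cpumask_first(): index of the lowest set bit. */
	static int cpumask_first(unsigned long mask)
	{
		for (int i = 0; i < NR_CPUS; i++)
			if (mask & (1UL << i))
				return i;
		return NR_CPUS;
	}

	/* The fast path as quoted: trust prev_cpu when pinned. */
	static int select_task_rq_fair(struct task *p, int prev_cpu)
	{
		if (p->nr_cpus_allowed == 1)
			return prev_cpu;
		/* ... full wake-up balancing elided in this sketch ... */
		return cpumask_first(p->cpus_allowed);
	}

	int main(void)
	{
		/* Task last ran on CPU 2; while it is blocked, a
		 * kthread_bind()-style affinity change rebinds it to
		 * CPU 5 -- only the mask is updated, not ->cpu. */
		struct task p = { .cpus_allowed = 1UL << 5,
				  .nr_cpus_allowed = 1,
				  .cpu = 2 };

		printf("selected CPU %d, first allowed CPU %d\n",
		       select_task_rq_fair(&p, p.cpu),
		       cpumask_first(p.cpus_allowed));
		return 0;
	}

In this sketch the task is woken onto CPU 2 even though its mask only permits CPU 5, which is the ->cpu/mask disagreement the thread is about.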