On Fri, 27 Nov 2015, Jiang Liu wrote:

Please trim your replies.

> On 2015/11/26 5:12, Thomas Gleixner wrote:
> > Looks a bit overkill with the extra cpumask. I tried a simple counter
> > but that does not work versus cpu unplug as we do not know whether the
> > outgoing cpu is involved in the cleanup or not. And if the cpu is
> > involved we starve assign_irq_vector() ....
> >
> > The upside of this is that we get rid of that atomic allocation in
> > __send_cleanup_vector().
>
> Maybe more headache for you now:)
> It seems there are still rooms for improvements. First it
> seems we could just reuse old_domain instead of adding cleanup_mask.
I'd really like to get rid of that atomic allocation in
__send_cleanup_vector().

> Second I found another race window among x86_vector_free_irqs(),
> __send_cleanup_vector() and smp_irq_move_cleanup_interrupt().

What's the race there?

> I'm trying to refine your patch based following rules:
> 1) move_in_progress controls whether we need to send IPIs
> 2) old_domain controls which CPUs we should do clean up
> 3) assign_irq_vector checks both move_in_progress and old_domain.
> Will send out the patch soon for comments:)

Sure.

Thanks,

	tglx