On 12/30/2013 07:56 AM, rui wang wrote:
> An irq can be mapped to only one vector number, but can have multiple
> destination CPUs. i.e. the same irq/vector can appear on multiple
> CPUs' vector_irq[]. So checking data->affinity is necessary I think.

That's true Rui -- but here's what I think the scenario actually is.

Suppose we have a 4-cpu system, and we have an IRQ that is mapped into
multiple cpus' vector_irq[].  For example, IRQ 200 is mapped to CPU 2's
vector_irq[50] and to CPU 3's vector_irq[60].

Now I 'echo 0 > /sys/devices/system/cpu/cpu3/online'.  cpu_disable() is
called and the kernel migrates IRQs off to other cpus.

Regardless of whether IRQ 200 is also mapped in CPU 2's vector_irq[50],
the mapping in CPU 3's vector_irq[60] *must be migrated* to another CPU.
It has a valid irq handler and the IRQ is active.  It doesn't just
disappear because the CPU went down.

i.e. AFAICT we should not differentiate between a multiply mapped IRQ
and a singly mapped IRQ when traversing CPU 3's vector_irq[].

I'm probably being dense on this, but I'm not seeing a problem with
migrating the IRQ.

> But notice that data->affinity is updated in chip->irq_set_affinity()
> inside fixup_irqs(), while cpu_online_mask is updated in
> remove_cpu_from_maps() inside cpu_disable_common().

It shouldn't matter that the maps are updated in different areas during
the execution, as we're in stop_machine().

> They are updated
> in different places. So the algorithm to check them against each other
> should be different, depending on where you put the check_vectors().
> That's my understanding.

P.

> Thanks
> Rui
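P.S. Here's roughly the traversal I have in mind for the dying CPU's
vector_irq[].  This is just a sketch with a hypothetical helper name,
not the actual patch:

	/*
	 * Hypothetical helper, only to illustrate the traversal.  Walk the
	 * dying CPU's vector_irq[] and count every live entry, without
	 * caring whether the same irq is also mapped in some other CPU's
	 * vector_irq[].
	 */
	static int count_irqs_to_migrate(unsigned int dying_cpu)
	{
		int vector, count = 0;

		for (vector = FIRST_EXTERNAL_VECTOR; vector < NR_VECTORS; vector++) {
			int irq = per_cpu(vector_irq, dying_cpu)[vector];

			if (irq < 0)
				continue;	/* unused vector */

			if (!irq_to_desc(irq))
				continue;	/* no descriptor behind it */

			/*
			 * No data->affinity test, no "is this irq multiply
			 * mapped" test: the mapping on the dying CPU has a
			 * valid handler and has to move somewhere.
			 */
			count++;
		}

		return count;
	}

If that count exceeds the free vectors left on the remaining online
CPUs, that's when check_vectors() should refuse the offline -- but
whether an irq is singly or multiply mapped shouldn't enter into it.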