+        for (vector = FIRST_EXTERNAL_VECTOR; vector < NR_VECTORS; vector++) {
+                irq = __this_cpu_read(vector_irq[vector]);
+                if (irq >= 0) {
+                        desc = irq_to_desc(irq);
+                        data = irq_desc_get_irq_data(desc);
+                        cpumask_copy(&affinity_new, data->affinity);
+                        cpu_clear(this_cpu, affinity_new);
+                        /*
+                         * The check below determines if this irq requires
+                         * an empty vector_irq[irq] entry on an online
+                         * cpu.
+                         *
+                         * The code only counts active non-percpu irqs, and
+                         * those irqs that are not linked to on an online
+                         * cpu. The first test is trivial, the second is
+                         * not.
+                         *
+                         * The second test takes into account that a
+                         * single irq may be mapped to multiple cpus'
+                         * vector_irq[] (for example IOAPIC cluster
+                         * mode). In this case we have two
+                         * possibilities:
+                         *
+                         * 1) the resulting affinity mask is empty; that
+                         * is, the down'd cpu is the last cpu in the irq's
+                         * affinity mask, and

Code says "||" - so I think comment should say "or".

+                         *
+                         * 2) the resulting affinity mask is no longer
+                         * a subset of the online cpus but the affinity
+                         * mask is not zero; that is, the down'd cpu is the
+                         * last online cpu in a user set affinity mask.
+                         *
+                         * In both possibilities, we need to remap the irq
+                         * to a new vector_irq[].
+                         *
+                         */
+                        if (irq_has_action(irq) && !irqd_is_per_cpu(data) &&
+                            (cpumask_empty(&affinity_new) ||
+                             !cpumask_subset(&affinity_new, &online_new)))
+                                this_count++;
+                }
That's an impressive 6:1 ratio of lines-of-comment to lines-of-code!
Perhaps it would be less scary if the test were broken up into the
easy/obvious part and the one that has taken all these revisions to
work out?  E.g.

                        /* no need to count inactive or per-cpu irqs */
                        if (!irq_has_action(irq) || irqd_is_per_cpu(data))
                                continue;

                        /*
                         * We need to look for a new home for this irq if:
                         *
                         * ... paste in 1), 2) from above here ...
                         * (but s/and/or/ to match code)
                         */
                        if (cpumask_empty(&affinity_new) ||
                            !cpumask_subset(&affinity_new, &online_new))
                                this_count++;
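Putting the pieces together, the whole loop might then read as sketched
below. This is only a sketch, not a drop-in replacement: it reuses the
declarations from the patch above (vector, irq, desc, data, affinity_new,
online_new, this_cpu, this_count), and it uses cpumask_clear_cpu() where
the patch used the older cpu_clear() macro.

        for (vector = FIRST_EXTERNAL_VECTOR; vector < NR_VECTORS; vector++) {
                irq = __this_cpu_read(vector_irq[vector]);
                if (irq < 0)
                        continue;

                desc = irq_to_desc(irq);
                data = irq_desc_get_irq_data(desc);

                /* the irq's affinity with the down'd cpu removed */
                cpumask_copy(&affinity_new, data->affinity);
                cpumask_clear_cpu(this_cpu, &affinity_new);

                /* no need to count inactive or per-cpu irqs */
                if (!irq_has_action(irq) || irqd_is_per_cpu(data))
                        continue;

                /*
                 * We need to look for a new home for this irq if:
                 *
                 * 1) the resulting affinity mask is empty; that is, the
                 * down'd cpu is the last cpu in the irq's affinity mask,
                 * or
                 *
                 * 2) the resulting affinity mask is no longer a subset of
                 * the online cpus but is not zero; that is, the down'd cpu
                 * is the last online cpu in a user set affinity mask.
                 */
                if (cpumask_empty(&affinity_new) ||
                    !cpumask_subset(&affinity_new, &online_new))
                        this_count++;
        }

-Tony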