On 12/30/2013 02:44 AM, Chen, Gong wrote:
> On Sat, Dec 28, 2013 at 12:10:38PM -0500, Prarit Bhargava wrote:
>> Gong and Rui,
>>
>> After looking at this in detail I realized I made a mistake in my patch by
>> including the check for the smp_affinity.  Simply put, it shouldn't be there
>> given Rui's explanation above.
>>
>> So I think the patch simply needs to do:
>>
>>     this_count = 0;
>>     for (vector = FIRST_EXTERNAL_VECTOR; vector < NR_VECTORS; vector++) {
>>         irq = __this_cpu_read(vector_irq[vector]);
>>         if (irq >= 0) {
>>             desc = irq_to_desc(irq);
>>             data = irq_desc_get_irq_data(desc);
>>             affinity = data->affinity;
>>             if (irq_has_action(irq) && !irqd_is_per_cpu(data))
>>                 this_count++;
>>         }
>>     }
>>
>> Can the two of you confirm the above is correct?  It would be greatly
>> appreciated.
>>
>
> No, I don't think it is correct.  We still need to consider smp_affinity.
>
> fixup_irqs
>     irq_set_affinity (native_ioapic_set_affinity)
>         __ioapic_set_affinity
>             assign_irq_vector
>                 __assign_irq_vector
>             cpu_mask_to_apicid_and
>             /* now begin to set the ioapic RTE */
>
> __assign_irq_vector(int irq, struct irq_cfg *cfg, const struct cpumask *mask)
> {
>     ...
>     apic->vector_allocation_domain(cpu, tmp_mask, mask);
>     ...
>     for_each_cpu_and(new_cpu, tmp_mask, cpu_online_mask)
>         per_cpu(vector_irq, new_cpu)[vector] = irq;
>     cfg->vector = vector;
>     cpumask_copy(cfg->domain, tmp_mask);
>     ...
> }
>
> The same vector is set for this irq in vector_irq on every CPU of the
> allocation domain, so such an irq shows up in multiple CPUs' vector_irq
> arrays.  In cpu_mask_to_apicid_and (e.g. x2apic_cpu_mask_to_apicid_and
> for cluster mode), the destination APIC ID is recomputed from the new
> mask.  That's why I think this kind of interrupt should be bypassed.
Hmm ... okay.  I'll take a closer look at this.  Thanks for the additional
information.

P.
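For reference, a minimal sketch of what the counting loop might look like if the smp_affinity consideration Gong describes is kept: an irq whose affinity already fits within the CPUs that stay online is also programmed into their vector_irq (same vector across the allocation domain) and is skipped.  The helper name count_irqs_needing_vectors and the cpumask_subset() predicate are assumptions for illustration only, not the final form of the patch:

    #include <linux/cpumask.h>
    #include <linux/interrupt.h>
    #include <linux/irq.h>
    #include <asm/hw_irq.h>	/* FIRST_EXTERNAL_VECTOR, NR_VECTORS, vector_irq */

    /* Sketch only: hypothetical helper, runs on the CPU being offlined. */
    static unsigned int count_irqs_needing_vectors(void)
    {
    	unsigned int this_count = 0;
    	int irq, vector;
    	struct irq_desc *desc;
    	struct irq_data *data;
    	const struct cpumask *affinity;

    	for (vector = FIRST_EXTERNAL_VECTOR; vector < NR_VECTORS; vector++) {
    		irq = __this_cpu_read(vector_irq[vector]);
    		if (irq < 0)
    			continue;

    		desc = irq_to_desc(irq);
    		data = irq_desc_get_irq_data(desc);
    		affinity = data->affinity;

    		/* Unused and per-cpu irqs never need to be migrated. */
    		if (!irq_has_action(irq) || irqd_is_per_cpu(data))
    			continue;

    		/*
    		 * If the affinity mask already lies within the remaining
    		 * online CPUs, the irq is present in their vector_irq as
    		 * well and can be bypassed, per the explanation above.
    		 * (This predicate is an assumption, not the merged code.)
    		 */
    		if (cpumask_subset(affinity, cpu_online_mask))
    			continue;

    		this_count++;
    	}

    	return this_count;
    }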