On Tue, 6 Dec 2016, Julien Grall wrote:
> > > This window may be bigger with LPIs, because a single vCPU may have
> > > thousands of interrupts routed. This would take a long time to move
> > > all of them when the vCPU is migrating. So we may want to take a lazy
> > > approach and move them when they are received on the "wrong" pCPU.
> >
> > That's possible. The only downside is that modifying the irq migration
> > workflow is difficult and we might want to avoid it if possible.
>
> I don't think this would modify the irq migration workflow. If you look
> at the implementation of arch_move_irqs, it will just go over the vIRQs
> and call irq_set_affinity.
>
> irq_set_affinity will directly modify the hardware and that's all.
>
> > Another approach is to let the scheduler know that migration is slower.
> > In fact this is not a new problem: it can be slow to migrate interrupts,
> > even a few non-LPI interrupts, even on x86. I wonder if the Xen
> > scheduler has any knowledge of that (CC'ing George and Dario). I guess
> > that's the reason why most people run with dom0_vcpus_pin.
>
> I gave a quick look at x86: arch_move_irqs is not implemented. Only PIRQs
> are migrated when a vCPU moves to another pCPU.
>
> The function pirq_set_affinity will change the affinity of a PIRQ, but
> only in software (see irq_set_affinity). The configuration is not yet
> replicated into the hardware.
>
> In the case of ARM, we directly modify the configuration of the hardware.
> This adds much more overhead because you have to do a hardware access for
> every single IRQ.
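To make the lazy approach concrete, here is a rough, untested sketch of the
check that could run when an LPI arrives on a pCPU other than the one its
target vCPU is on. irq_set_affinity and arch_move_irqs are the existing
interfaces discussed above; vgic_get_target_vcpu is how I read the current
vgic code, and the hook point on the LPI receive path plus the function name
lpi_lazy_move are only illustrative:

/* Sketch only, not a tested implementation: lazily fix up the affinity of
 * a single LPI when it fires on the "wrong" pCPU, instead of walking all
 * routed LPIs in arch_move_irqs() at vCPU migration time.
 * Assumes the usual Xen internals (xen/sched.h, xen/irq.h, asm/vgic.h). */
static void lpi_lazy_move(struct irq_desc *desc, unsigned int virq,
                          struct vcpu *v)
{
    struct vcpu *v_target = vgic_get_target_vcpu(v, virq);

    /* Only touch the hardware when the LPI fired on a pCPU other than the
     * one the target vCPU currently runs on. */
    if ( v_target->processor != smp_processor_id() )
        irq_set_affinity(desc, cpumask_of(v_target->processor));
}

That would keep the vCPU migration itself cheap even with thousands of
routed LPIs, at the cost of one compare per LPI delivery; the hardware
access only happens for interrupts that actually fire after the move.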
George, Dario, any comments on whether this would make sense and how to do it?