Hi Stefano,
On 01/03/17 22:15, Stefano Stabellini wrote:
Move the atomic write of rank->vcpu, which sets the new vcpu target, to
vgic_migrate_irq, at the beginning of the lock protected area (protected
by the vgic lock).
This code movement reduces the race window between vgic_migrate_irq
setting rank->vcpu on one pcpu and gic_update_one_lr running on another
pcpu.
When gic_update_one_lr and vgic_migrate_irq take the same vgic lock,
there are no more race conditions with this patch. When vgic_migrate_irq
is called multiple times while GIC_IRQ_GUEST_MIGRATING is already set, a
race condition still exists because in that case gic_update_one_lr and
vgic_migrate_irq take different vgic locks.
Signed-off-by: Stefano Stabellini <sstabell...@kernel.org>
---
 xen/arch/arm/vgic-v2.c     |  5 ++---
 xen/arch/arm/vgic-v3.c     |  4 +---
 xen/arch/arm/vgic.c        | 15 ++++++++++-----
 xen/include/asm-arm/vgic.h |  3 ++-
 4 files changed, 15 insertions(+), 12 deletions(-)
diff --git a/xen/arch/arm/vgic-v2.c b/xen/arch/arm/vgic-v2.c
index 0674f7b..43b4ac3 100644
--- a/xen/arch/arm/vgic-v2.c
+++ b/xen/arch/arm/vgic-v2.c
@@ -158,10 +158,9 @@ static void vgic_store_itargetsr(struct domain *d, struct vgic_irq_rank *rank,
     {
         vgic_migrate_irq(d->vcpu[old_target],
                          d->vcpu[new_target],
-                         virq);
+                         virq,
+                         &rank->vcpu[offset]);
     }
-
-    write_atomic(&rank->vcpu[offset], new_target);
With this change, rank->vcpu[offset] will not be updated for virtual
SPIs (i.e. p->desc == NULL), and therefore affinity for them will not
work.
However, from my understanding, the problem you are trying to solve with
this patch is having rank->vcpu[offset] set as soon as possible. It does
not really matter whether the write itself is protected by the lock;
what you care about is that rank->vcpu[offset] is seen before the lock
is released. So if GIC_IRQ_GUEST_MIGRATING is set and gic_update_one_lr
runs straight after the lock is released, rank->vcpu[offset] will
contain the correct vCPU.
A better approach would be to move the write_atomic(...) before the call
to vgic_migrate_irq(...). What do you think?
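The suggested ordering can be sketched as a minimal, self-contained
model (the struct layout, the C11 atomics standing in for Xen's
write_atomic/read_atomic, and the simplified vgic_migrate_irq are
illustrative assumptions, not the actual Xen definitions):

```c
#include <assert.h>
#include <stdatomic.h>

/* Hypothetical, simplified stand-in for the Xen rank structure. */
struct vgic_irq_rank {
    _Atomic unsigned char vcpu[32];   /* target vCPU per IRQ offset */
};

/*
 * Model of vgic_migrate_irq: by the time it runs, the new target must
 * already be published in rank->vcpu[offset], so that gic_update_one_lr
 * reading it after the vgic lock is released sees the correct vCPU.
 */
static void vgic_migrate_irq(struct vgic_irq_rank *rank,
                             unsigned int offset,
                             unsigned char new_target)
{
    assert(atomic_load(&rank->vcpu[offset]) == new_target);
}

static void vgic_store_itargetsr(struct vgic_irq_rank *rank,
                                 unsigned int offset,
                                 unsigned char new_target)
{
    /* Publish the new target first (the write_atomic in the patch)... */
    atomic_store(&rank->vcpu[offset], new_target);
    /* ...then perform the migration. */
    vgic_migrate_irq(rank, offset, new_target);
}
```

With this ordering the update also happens for purely virtual
interrupts, since it is no longer skipped by an early return inside
vgic_migrate_irq.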
Cheers,
--
Julien Grall
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel